

imply

\begin{align*}
\lim_{t\to\infty} \mu\,(S - S^c)\,e^{-\rho t}
  &= \lim_{t\to\infty} \lambda_R \lambda_H\,(S - S^c)\,e^{-\rho t} \\
  &= \lambda_{H,0} \lim_{t\to\infty} \lambda_R\, e^{\int_0^t (\rho+\delta-\bar r)\,\mathrm{d}s}\,(S - S^c)\,e^{-\rho t} \\
  &= \lambda_{H,0} \lim_{t\to\infty} \lambda_R\, e^{\int_0^t (\delta-\bar r)\,\mathrm{d}s}\,(S - S^c) \\
  &= 0. \tag{106}
\end{align*}
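The final simplification of the exponent is immediate: since the discount rate ρ is constant, \(\int_0^t \rho\,\mathrm{d}s = \rho t\), so that
\[
e^{\int_0^t (\rho+\delta-\bar r)\,\mathrm{d}s}\, e^{-\rho t}
  = e^{\rho t}\, e^{\int_0^t (\delta-\bar r)\,\mathrm{d}s}\, e^{-\rho t}
  = e^{\int_0^t (\delta-\bar r)\,\mathrm{d}s}.
\]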

Thus, the market solution of λ_H and λ_R fulfills the optimality and transversality conditions of the social planner solution with the given transformation λ_S = λ_H λ_R and the anticipated mitigation goal S^c = S. That is, the BAU scenario with S^c = S = 0 is always socially optimal.

The solution of a RED scenario is equivalent to the social planner solution if the mitigation goal is anticipated by resource extractors.

Market Failures

The above considerations showed the social optimality of the base model if mitigation is anticipated by the resource sector and all taxes are set to zero. In particular, introducing non-zero taxes on factor prices would distort social optimality, and the first-order conditions of the decentralized economy would differ from those of the social planner. If the mitigation target is not anticipated by the resource sector, for instance under an accumulated quantity restriction policy, overextraction would occur, violating the government's mitigation target. This calls for taxes that reduce resource demand. Further market failures would be introduced if knowledge spillovers or market power were considered. Public R&D expenditures are paid by the government and require public funding.
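To see the distortion in a stylized way (the notation here is purely illustrative and not taken from the model), suppose the final-good sector faces an ad valorem tax τ on the resource price p_R. Its first-order condition then reads
\[
\frac{\partial Y}{\partial R} = (1+\tau)\,p_R ,
\]
so for τ ≠ 0 a wedge opens between the marginal product of the resource and the price received by the resource sector, and the decentralized first-order conditions no longer coincide with those of the social planner.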

All the market failures mentioned here will be treated in this work by studying several model refinements within the presented model framework. However, further failures are left for future research, for example differing discount rates (e.g., of capitalist and worker households) or damages due to climate change.

3.4.2 Time Consistency<br />

Modeling the government as an actor participating as Stackelberg leader raises the question of whether the government has an incentive to deviate from its formerly declared policy path at some time instant within the planning horizon. Such behavior is called time inconsistent and motivates the question under which conditions the Stackelberg leader complies with her commitment.

In general there is no reason why open-loop Stackelberg games should be time consistent (cf. Dockner et al., 2000, ch. 5). An analytical proof of time (in)consistency of the model used in this work would be a challenging task, if possible at all. A more pragmatic approach is to test consistency numerically: solve the optimization program for the whole time horizon, then restart the optimization with the state variables of the finished first run at a specific time instant t∗ as initial values, and compare the re-calculated optimal policy paths for t∗ ≤ t ≤ T_end with the original ones. While differing optimal policy paths would "prove" the time inconsistency of the game, identical policy paths only show the time consistency of the specific subgame Γ(t∗), at the specific time instant t∗ and for the specific parameter set considered.⁴⁷

⁴⁷ For a formal definition of the subgame Γ(t), see Dockner et al. (2000, ch. 4.3).
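As an illustration, a minimal sketch of such a numerical consistency check is given below in Python. The solver interface solve_stackelberg, its arguments, and the methods of the returned solution object (states_at, time_grid, policies) are hypothetical placeholders standing in for whatever optimization program is actually used; the sketch only fixes the logic of the test: solve the full horizon, restart at t∗ from the states reached there, and compare the policy paths on [t∗, T_end].

```python
# Minimal sketch of the numerical time-consistency check described above.
# NOTE: solve_stackelberg and the attributes of its result object are
# hypothetical placeholders for the actual optimization program.

import numpy as np

def check_time_consistency(solve_stackelberg, params, t_star, t_end, tol=1e-6):
    """Solve the full-horizon game, re-solve the subgame Gamma(t_star) from the
    states reached in the first run, and compare policy paths on [t_star, t_end]."""
    # First run: optimize over the whole planning horizon [0, t_end].
    full_run = solve_stackelberg(params, t_start=0.0, t_end=t_end)

    # Second run: restart the optimization at t_star, taking the state
    # variables of the finished first run as initial conditions.
    sub_run = solve_stackelberg(params, t_start=t_star, t_end=t_end,
                                initial_states=full_run.states_at(t_star))

    # Maximum deviation between announced and re-computed policy paths.
    grid = full_run.time_grid(t_star, t_end)
    deviation = np.max(np.abs(full_run.policies(grid) - sub_run.policies(grid)))

    # Differing paths demonstrate time inconsistency of the game; coinciding
    # paths only establish consistency of this particular subgame under this
    # particular parameter set.
    return deviation <= tol, deviation
```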
