Measuring the Effects of a Shock to Monetary Policy:
A Factor-Augmented Vector Autoregression (FAVAR)
Approach with Agnostic Identification

Diploma thesis (Diplomarbeit)
submitted for the degree of Diplom-Volkswirt
at the School of Business and Economics
of Humboldt-Universität zu Berlin

Submitted by
Pooyan Amir Ahmadi
(Matriculation no. 174701)

Examiner: Prof. Harald Uhlig, Ph.D.
Berlin, 26 August 2005
Abstract

In this thesis I measure the dynamic effects of a shock to monetary policy in a Bayesian FAVAR framework. The innovation is to combine the Bayesian FAVAR with the agnostic identification introduced by Uhlig [2005], which has not been done before. This identification scheme provides reasonable results and, furthermore, the possibility to impose a broader set of the sign restrictions proposed by Uhlig, restrictions that are consistent with the conventional wisdom. Thanks to the greater information set, it is possible to place sign restrictions on several prices, monetary aggregates and short-term interest rates in the dataset. In this vein one can narrow down the space of reasonable impulse responses in order to disentangle precisely the quantitative effects induced by contractionary monetary policy. Although agnostic identification is a "weaker" scheme with respect to the structure and restrictions imposed, combined with Markov chain Monte Carlo simulation methods it delivers results that appear reasonable for a broad set of variables, and with higher accuracy than the alternative results provided by Bernanke, Boivin and Eliasz [2005]. Combining the two methodologies holds the enticing promise of measuring the effects of a shock to monetary policy very precisely when applied to large panels of data. From the results one can conclude that the identification scheme is crucial for successful identification, especially when the dataset considered is large. However, as the restrictions tighten, accepted draws arrive increasingly infrequently. Additionally, I provide Matlab code for the estimation procedure.
Acknowledgements

I would like to thank Harald Uhlig for excellent guidance, and Albrecht Ritschl and Bartosz Maćkowiak for supportive discussions. The material provided by Piotr Eliasz is thankfully acknowledged. I cordially thank Samad Sarferaz for proofreading this thesis and for helpful discussions. Most of all I am indebted to Alborz Radmanesch, to whom I am sincerely grateful for invaluable support.
Bayesian FAVARs with Agnostic Identification
Contents

1 Introduction
2 Literature
3 Dynamic Factor Models
4 The Econometric Framework
  4.1 FAVARs
  4.2 FAVAR Identification
  4.3 Estimation Procedure
    4.3.1 Generalized Dynamic Factor Model
    4.3.2 Two-Step Estimation
    4.3.3 Likelihood-Based Estimation
    4.3.4 Markov Chain Monte Carlo
    4.3.5 The Gibbs Sampler
5 The Econometric Model
  5.1 The Bayesian Approach versus the Frequentist Approach
  5.2 State-Space Representation
  5.3 Inference
6 Structural FAVARs
  6.1 Identification of Shocks
  6.2 Identification Schemes in SVARs
  6.3 Identification in DFMs and FAVARs
7 Empirical Results
8 Discussion
9 Summary and Concluding Remarks
10 Matlab Implementation
References
Appendix A: Data
Appendix B: Figures
Appendix C: Matlab Code
1 Introduction

What are the dynamic effects of a shock to monetary policy on business cycle fluctuations? Are there any real effects, or are the real aggregates independent of the monetary sector? Whether there is a link between monetary and real aggregates over the business cycle has been a question economists have dealt with since the early work by Friedman and Schwartz (1963). In their influential book they arrive at an affirmative conclusion and hence postulate that there is a link between monetary policy and real economic activity. Many approaches have since dealt with this question in a more advanced manner. Most of the research concerned with it faced the limitation of considering only a limited information set, one that cannot capture the "data-rich environment" [1] in which central bankers' decisions are assumed to take place. This limitation is entailed by the econometric frameworks mostly applied, and most of the methodologies and models used lead to the conclusion that the effects of monetary policy are not of great importance for the real sector.
The recent literature advances the application of dynamic factor models to large panels of data in applied macroeconomics. These models extract from large datasets a few factors that capture the main driving forces of the economy. Strictly speaking, the dynamics of the data are then explained by a common component and an idiosyncratic, series-specific component. Bernanke, Boivin and Eliasz [2005] combined the recent advances in dynamic factor models with standard VAR analysis in a unifying framework, namely the factor-augmented vector autoregression (henceforth FAVAR). In their paper, Bernanke, Boivin and Eliasz [2005] also aim to disentangle the dynamic propagation of a shock to monetary policy throughout the economy. The identification scheme they apply is a standard recursive one implemented in the FAVAR framework, and the results they provide are based on a nonparametric two-step estimation using principal component analysis. The results they achieve from a joint likelihood approach via Gibbs sampling seem to be unfavorable.

[1] See Bernanke and Boivin [2003].
The aim of this paper is threefold. First, I critically examine the results by BBE, offer an answer as to why they arrive at inferior results, and show how one should apply this procedure to obtain more reasonable results. I combine the Bayesian FAVAR with the agnostic identification by Uhlig (2005); to the best of my knowledge this has not been done yet in the current literature on empirical macroeconomics. Hence I tackle the question raised above in a completely consistent Bayesian framework. Not only are the factors and the parameters of interest estimated with Bayesian methods; more importantly, I apply the agnostic identification in order to identify the specific effects induced by contractionary monetary policy. Here I try to be as precise as possible and as close as possible to the conventional wisdom. This is accomplished by setting different "block criteria" that successively become stricter as more restrictions are imposed. This approach holds an enticing promise in that one can narrow down the space of economically reasonable dynamic reactions as accurately as the data allow. We do not have to set restrictions on only one price variable, one of the monetary aggregates and one short-term interest rate, as has been common practice so far when applying sign restrictions in the VAR framework. Having such a large dataset available makes it possible to be stricter: for example, not only the CPI but also the other price variables incorporated in the estimation procedure can be required to react non-positively after a monetary policy shock. This makes it possible to identify the structural reaction of the economy more exactly than other identification approaches common in the literature on empirical macroeconomics.

The second contribution, and also the most time-consuming one, was to provide Matlab code that carries out the estimation and identification. The major challenge here was to make the code as efficient as possible, especially with respect to computing time and memory requirements, so that students too can run the program on PCs of "common" capacity without waiting a week for the results. The Gibbs sampling procedure
itself is a very computer- and memory-intensive estimation method that, applied to such large datasets, might take very long to produce results. The challenge is to combine the Gibbs sampling with the agnostic identification procedure, which is itself time consuming. How the code accomplishes this is explained in the section on the Matlab implementation. Third, we present results that seem reasonable and consistent with the conventional wisdom, unlike the ones by BBE. This indicates that the key to the question above is the identification scheme. More importantly, the results confirm the lead of Sims to avoid unreasonable identifying assumptions. Hence, opposed to the conclusion of BBE, our results confirm that not only is information very important, but one obtains reasonable estimation results in combination with economically sound identification schemes such as the agnostic identification using a broader set of sign restrictions.

The main result is that the Bayesian estimation of FAVARs combined with the agnostic identification of Uhlig [2005] delivers results in line with the conventional wisdom, in which no price puzzle arises. The identification scheme seems to be the crucial reason why Bernanke, Boivin and Eliasz [2005] arrive at inferior results compared to the two-step principal component estimation: they apply a standard recursive identification scheme. Furthermore, the greater information set allows the researcher to be stricter with respect to the sign restrictions imposed, in order to disentangle the dynamic effects of a shock to monetary policy more exactly.

However, one should be cautious with this conclusion: results should be collected over longer Gibbs runs in order to confirm them. One should also try out several numbers of factors and report their contribution via variance decomposition.
The next section gives an overview of the relevant literature, followed by an introduction to dynamic factor models in section 3. Section 4 explains the FAVAR framework with its different identification and estimation approaches; furthermore, a short introduction to Markov chain Monte Carlo simulation methods and the Gibbs sampler is provided. Section 5 elaborates on the Bayesian estimation procedure for the FAVAR methodology and on inference on the factors and parameters to be estimated. The identification schemes for identifying the effects of a policy shock are presented in section 6. The empirical results and the discussion are provided in sections 7 and 8, respectively. Section 9 concludes, and the last section explains the attached Matlab code with the help of sequence diagrams.
2 Literature

The question of what the effects of a monetary policy contraction are is still a very important one to economists. Regarding the qualitative effects, a broad consensus has emerged among economists, but the quantitative measure is still subject to controversial discussion. Since Friedman and Schwartz [1963] examined the question raised above, there have been several empirical and theoretical approaches trying to tackle this matter in a more advanced manner, focusing on an accurate quantitative measure of the effects of a shock to monetary policy. Among empirical studies dedicated to this question, Sims [1992], Sims and Zha [1996], Leeper, Sims and Zha [1996], Sims [1998], Temin [1998], Christiano, Eichenbaum and Evans [1999], Canova and De Nicolo [2002] and Uhlig [2005] provide advanced approaches. These studies all deal, in a broad sense, with the examination of the link between monetary policy and the real sector. They mostly differ in the methodology applied to unriddle the mere effect of a shock induced by monetary policy. Theoretical approaches are provided, amongst others, by Ireland [2001], Goodfriend and King [1997], Clarida, Gali and Gertler [2000] and Gali [2002].

Over the years, a broad consensus about the monetary transmission mechanism has emerged. Most of the literature agrees that monetary policy does not play an important role for business cycle fluctuations or, more precisely, does not affect output unambiguously.[2][3] In the current literature there are also approaches that consider an index, extracted with dynamic factor models, representing the relevant dynamics of many macroeconomic variables associated with "economic activity", such as Korenok and Radchenko [2004]. The consensus seems consistent with "monetary neutrality": monetary policy shocks only affect the nominal but not the real sector in the long run. Contrariwise, Canova and De Nicolo [2002] show that monetary policy had a significant effect in reducing output in the G7 industrial countries; hence they implicitly disagree with the ambiguous effects of monetary policy shocks reported in the papers above. The contentious issue raising the overall dissent is rather the quantitative magnitude of the reaction[4] than its mere direction. Methods are required to address such issues quantitatively, which is important for central bankers who have to decide how to react to the state of the economy. In order to reach a sound decision, the monetary authorities have to be certain about the precise reaction of the economy. In a recession, for example, knowledge of the accurate effects may be required in order to avoid a policy decision that might rather be a drag on the recovery. The broader the relevant information set, the more accurate the analysis available to central bankers.
The "workhorse" of applied empirical macroeconomics for the question at hand is the structural vector autoregression (henceforth SVAR). The problem is that the VAR framework, with limitations such as the "curse of dimensionality",[5] cannot mirror the reality of central bankers' decision making, particularly with respect to the amount of information exploited, which apparently is relevant for the economy. This matters for researchers because there is little hope that economists can evaluate alternative theories of the monetary policy transmission mechanism, or obtain quantitative

[2] See Uhlig [2005].
[3] Some researchers measure the effects on output, some rather on industrial output as a representative "proxy" or "index" for economic activity (see Sims).
[4] In my thesis I only consider the effects of contractionary monetary policy shocks.
[5] This point was raised by Sims (1980), who states that with an increasing number of variables included in the VAR, the number of parameters to be estimated grows quadratically and soon becomes intractable.
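As a back-of-the-envelope illustration of the quadratic growth Sims points to: an unrestricted VAR(p) with n variables has n equations, each with np lag coefficients plus an intercept, so n(np + 1) parameters in total. The lag length of 13 months below is an illustrative choice, not a specification from this thesis:

```python
def var_param_count(n_vars: int, n_lags: int) -> int:
    """Coefficients in an unrestricted VAR(p) with n variables,
    counting an intercept in each of the n equations: n * (n*p + 1)."""
    return n_vars * (n_vars * n_lags + 1)

# With 13 monthly lags, moving from a small VAR to a large panel:
for n in (6, 20, 120):
    print(n, var_param_count(n, 13))
# 6 variables need 474 parameters; 120 variables already need 187,320.
```

Doubling the cross-section roughly quadruples the parameter count, which is why a handful of estimated factors, rather than the raw series, must enter the autoregression.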
estimates of the impact of monetary policy shocks and changes in policy on various sectors of the economy, if there exists no reasonable, objective means of determining the direction and size of changes in policy stances.[6]

Monetary policy decisions are implicitly assumed to take place in a "data-rich environment",[7] reflecting that central bankers analyze a vast number of economic time series[8] before taking their policy decisions. The fact that central bankers bear the huge cost of analyzing such a plethora of data, and the assumption that they have no great irrational interest in wasting their time on dispensable data analysis, indicate that a broad set of "information" is of great relevance for the decisions to be taken. Hence many more variables would have to be considered if, for example, a researcher wanted to model the monetary transmission mechanism or the Fed's reaction function. The limitation does not come from modeling researchers ignoring the importance of large datasets, but from the methodologies mainly applied. The advantage of the SVAR framework is that it is small and therefore more tractable and easier to compute. Bernanke and Boivin (2003) were the first to tackle the problem of "information dimensions" in the context of monetary policy, in a framework that combines the advances of dynamic factor models with standard VAR analysis. They "explore the feasibility to incorporate a richer information set into the analysis of positive and normative Fed policy making".[9] They achieve this through a factor model approach based on the work of Stock and Watson (1999, 2002) and Watson (2000). Dynamic factor models, introduced to economics by the seminal papers of Geweke (1977) and Sargent and Sims (1977), put the classical factor model, which only regards cross-sectional data, into a dynamic setting. These models have become increasingly popular since the work by Stock and Watson (1989, 1999, 2003), which outperformed the forecasting accuracy of the

[6] See Bernanke and Mihov (1998a).
[7] See Bernanke and Boivin (2003).
[8] Bernanke and Boivin (2003) state that central bankers literally monitor several hundred or even up to a thousand time series.
[9] See Bernanke and Boivin [2003].
standard autoregression approach, though only for the real variables except employment. A large body of research has advanced these models, in particular with respect to estimation procedures that allow for large datasets, in both length and dimension, so that they can be applied to the economic question at hand. Hence one can overcome the dimensionality problem posed by the large datasets to be included. The advantage of factor models is that the main driving forces in a large set of cross-sectional data, which our case requires us to consider, can be represented by a much smaller number of "factors" extracted from the dataset. For this analysis I apply the so-called FAVAR[10] methodology, which was introduced in Bernanke and Boivin (2003) and advanced in Bernanke, Boivin and Eliasz (2005); Stock and Watson (2005) added some variations to it and provide a broad survey of its different approaches. These models exploit the advances of dynamic factor models and combine them with the VAR methodology. The crucial innovation here is to combine the Bayesian FAVAR with agnostic identification.
The question of identification has been covered by a huge body of literature and applied to SVARs. The most prevalent schemes are the recursive Cholesky identification, the long-run identification, the combination of the two (zero restrictions; Leeper, Sims and Zha [1996]) and the agnostic identification introduced by Uhlig [2005]. In his paper Uhlig also seeks to measure the effects of a shock to monetary policy, strictly speaking of a "contractionary" monetary policy shock. In particular he focuses on the effects on output and finds that there is no clear effect, and that the neutrality of monetary policy shocks is not inconsistent with the data. What is so crucial about the paper by Uhlig is the new, more sophisticated identification scheme he introduces, namely the "agnostic identification",[11] which imposes, for a certain period of time, sign restrictions on the impulse responses that are consistent with the conventional wisdom.

[10] This terminology comes from Bernanke, Boivin and Eliasz [2005].
[11] The term agnostic refers to the missing restriction on the variable to be analysed.
One of the main contributions of this thesis is that the task of disentangling the dynamic propagation of a monetary policy shock through the macroeconomic variables is approached from a fully Bayesian perspective. First, the model is estimated with a likelihood-based approach via Gibbs sampling; second, I measure the dynamic effects through an agnostic identification scheme, using the sign-restriction approach advanced by Uhlig (2005), which imposes, for a certain period of time, sign restrictions on the impulse responses of prices, nonborrowed reserves and the federal funds rate to a contractionary monetary policy shock.[12][13] Furthermore, I provide an easily extendable Matlab code that carries out the model estimation and identification.
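The sign-restriction step can be sketched as a simple accept/reject rule on candidate impulse responses: a posterior draw is kept only if prices and nonborrowed reserves do not rise and the federal funds rate does not fall for the first K periods after the shock. The variable ordering, horizon K and numbers below are illustrative assumptions in Python, not the thesis's exact Matlab specification:

```python
import numpy as np

def satisfies_signs(irf: np.ndarray, neg_idx, pos_idx, K: int) -> bool:
    """irf: (horizons, variables) impulse responses for one candidate draw.
    Accept the draw only if the 'neg' variables are <= 0 and the 'pos'
    variables are >= 0 for the first K horizons after the shock."""
    return bool(np.all(irf[:K, neg_idx] <= 0) and np.all(irf[:K, pos_idx] >= 0))

# Illustrative draw: columns = [price index, nonborrowed reserves, fed funds rate]
irf_ok = np.array([[-0.1, -0.3, 0.5],
                   [-0.2, -0.2, 0.4],
                   [-0.1, -0.1, 0.2]])
irf_bad = irf_ok.copy()
irf_bad[1, 0] = 0.2        # price rises at horizon 1 -> reject (a price puzzle)

print(satisfies_signs(irf_ok, neg_idx=[0, 1], pos_idx=[2], K=3))   # True
print(satisfies_signs(irf_bad, neg_idx=[0, 1], pos_idx=[2], K=3))  # False
```

In the FAVAR setting the same rule can be applied to every price series in the panel at once, which is the "broader set of sign restrictions" discussed above; the stricter the rule, the fewer draws survive.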
3 Dynamic Factor Models

The key idea behind factor models is to represent the movements in a large set of cross-sectional data by only a limited number of common shocks, which are apparently sufficient to capture the crucial dynamics, plus an idiosyncratic component that reflects the variable-specific part. The common shocks constitute the common component, which consists of the factors and the respective factor loadings.
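As a minimal numerical sketch of this decomposition (the dimensions, noise scale and estimator below are illustrative assumptions, not the thesis's dataset or estimation procedure), one can simulate a panel driven by a single common factor and recover the factor by principal components:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 50, 1                     # time periods, series, factors

# Simulate X_t = Lambda f_t + e_t: common component plus idiosyncratic noise.
f = rng.standard_normal((T, K))          # common factor
Lam = rng.standard_normal((N, K))        # factor loadings
e = 0.5 * rng.standard_normal((T, N))    # idiosyncratic component
X = f @ Lam.T + e

# Principal-components estimate of the factor: eigenvector of X'X / T
# belonging to the largest eigenvalue (eigh sorts eigenvalues ascending).
X = X - X.mean(axis=0)
_, eigvec = np.linalg.eigh(X.T @ X / T)
f_hat = X @ eigvec[:, -1]

# The estimated factor is identified only up to sign and scale, so compare
# it to the true factor via the absolute correlation.
corr = abs(np.corrcoef(f_hat, f[:, 0])[0, 1])
print(round(corr, 2))
```

The 50 noisy series are summarized by a single estimated series that tracks the true driving force closely, which is exactly the dimension reduction the FAVAR relies on.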
In <strong>the</strong> recent few years <strong>the</strong>re has been a surge in <strong>the</strong> research on dynamic fac<strong>to</strong>r models<br />
(henceforth DFM) where advances and several extensions have been introduced. It has<br />
become an important <strong>to</strong>ol in empirical macroeconomics since it provides a possibility <strong>to</strong><br />
break down <strong>the</strong> dimensionality <strong>of</strong> <strong>the</strong> large amounts <strong>of</strong> economic time series, which have<br />
become available, in<strong>to</strong> a few series <strong>of</strong> fac<strong>to</strong>rs. Large datasets are important in various<br />
fields in economics, such as in <strong>the</strong> study <strong>of</strong> <strong>the</strong> effects <strong>of</strong> trade (see Justiniano [2004]) and<br />
cross-country business cycle synchronization (Kose, Otrok and Whiteman [2003a, 2003b]<br />
12 To the best of my knowledge this combination has not been done by anyone, neither in the DFM methodology nor in the FAVAR methodology.
13 As previously stated I follow the FAVAR methodology, but the sign-restriction approach can be applied to DFMs in an equivalent and straightforward manner.
among others). In the next subsections I first explain DFMs, on which the FAVAR methodology is grounded. Then I explain the FAVAR and the identification assumptions required in order to identify the factors and factor loadings separately. The last subsection elaborates on the estimation procedures and in particular on the likelihood-based joint estimation.
The classical (static) factor model, applied only to cross-sectional data, was first set in a dynamic setting and introduced to economics by the seminal work of Sargent and Sims [1977] and Geweke [1977]. Sargent and Sims [1977] applied the DFM to a low-dimensional set of macroeconomic variables in order to explain their mutual comovements. The dynamic factors parsimoniously summarize the dynamics and information of a large panel of data. Due to the small dimensionality of the data the estimation could be accomplished by maximum likelihood estimation methods (MLE). MLE soon reaches its limits as the number of time series increases. Quah and Sargent [1993] apply the EM algorithm to a set of 60 variables, which is the biggest application considered in an MLE framework. Due to the complicated shape of the likelihood in a high-dimensional case it soon becomes infeasible to apply ML methods. Stock and Watson [1998, 2001] show that factors14 extracted from eight15 macroeconomic variables improve the forecasting accuracy of inflation. Their results outperform the standard AR approach, though for the real variables, except for employment, the evidence is only supportive. Since then DFMs have gained increasing attention in the academic world of empirical macroeconomics and seem to have become an important alternative to the VAR, which is still the "workhorse" of empirical macroeconomics and most widely applied.
Important extensions and advances have been achieved especially w.r.t. the estimation
14 The authors sometimes refer to the factors as diffusion indexes that summarize the inherent risk of the variables considered.
15 In their first paper they consider four variables and extend their data and variables in Stock and Watson [2001].
procedures. Stock and Watson [1989, 1991, 2001, 2003, 2005] study a static version of the DFM estimated via principal component analysis. In the subsequent papers they consider a two-step estimation procedure. Forni, Hallin, Lippi and Reichlin (2001) provide a dynamic version of the PCA, which is known as the generalized dynamic factor model (henceforth GDFM). Some also refer to it as dynamic principal component analysis (DPCA), where the model is considered in the frequency domain. Kim and Nelson (1998) and Otrok and Whiteman (1998) tackle the model estimation from a Bayesian perspective via Markov chain Monte Carlo (MCMC) simulation methods, in particular applying the Gibbs sampler. This is the approach I apply in my thesis in order to extract the factors and do inference on the model's parameters. One of the most recent advances has been the so-called factor-augmented vector autoregression (FAVAR), which was introduced by Bernanke and Boivin [2003] and advanced in Bernanke, Boivin and Eliasz [2005], a framework in which the advantages of DFMs are combined with the analysis of SVARs. The various model specifications and estimation procedures for large data sets on which the FAVAR builds are briefly explained in the following subsection, where I provide an overview of the most important and influential ones and briefly explain the different approaches.
Dynamic factor models can be considered either in the frequency-domain representation or in the state-space representation, depending on the estimation approach desired. Models cast in the frequency-domain representation are introduced and explained in several papers by Forni, Hallin, Lippi and Reichlin. They use an approximate DFM, compute the eigenvector-eigenvalue decomposition of the spectral density matrix frequency by frequency, and inverse-Fourier transform the eigenvectors to create polynomials in the lag operator which, when applied to the observables, yield estimates of the dynamic principal components (DPCA).
The latter, the state-space representation, is a generalization that captures all time series models, such as the autoregressive integrated moving average model (ARIMA), and consists of one observation equation and one state equation16, which is part of the observation equation and itself driven by a stochastic process:
X_{it} = \lambda_i f_t + e_{it} \qquad (1)

f_t = \phi(L) f_{t-1} + v_t \qquad (2)
where the subscript i = 1, ..., N stands for the observable variables and t = 1, ..., T denotes time. Equation (1) characterizes the stationary data, i.e. the economic time series, which are driven by the unobservable factors f_t, which themselves follow a stochastic process. \lambda_i denotes the factor loading. Strictly speaking, X_{it} is driven by a distributed lag of a small number of factors (K \ll N).
\lim_{N \to \infty} N^{-1} \sum_{i=1}^{N} \sum_{j=1}^{N} |E(e_{it} e_{jt})| < \infty
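The exact one-factor version of equations (1)-(2) is easy to simulate, which is a useful sanity check for any estimation code. A minimal sketch in Python/NumPy (the thesis's own code is in Matlab; all parameter values here are arbitrary illustrations, not calibrated to any dataset):

```python
import numpy as np

# Illustrative simulation of the exact DFM in equations (1)-(2):
#   X_it = lambda_i * f_t + e_it,   f_t = phi * f_{t-1} + v_t
rng = np.random.default_rng(0)
N, T, phi = 20, 200, 0.8           # N series, T periods, AR(1) factor persistence

lam = rng.normal(size=N)           # factor loadings lambda_i
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal()      # state equation (2)

e = rng.normal(size=(T, N))        # idiosyncratic terms; R diagonal (exact DFM)
X = f[:, None] * lam[None, :] + e  # observation equation (1)

print(X.shape)                     # T observations of N series
```

Because the idiosyncratic draws are mutually independent, this simulated panel satisfies the exact-DFM assumption trivially, and the weak cross-correlation condition above as well.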
The decision which approach to choose will depend on the estimation procedure one desires to apply. The frequentist DPCA approach allows for an approximate DFM; the Bayesian approach requires an exact DFM specification. From my perspective, the question which approach to pursue has not yet been conclusively answered in the literature. BBE provide results for both specifications and estimation approaches. Based on their results, which favor the classical nonparametric two-step estimation, they implicitly conclude that the approximate specification might be the better one. Here one should be very cautious, because the results BBE compare might not be representative enough to draw a conclusion w.r.t. the model choice. This will become clear when comparing the likelihood-based results with the two alternative identification schemes. As the results differ qualitatively, the conclusion based on them becomes invalid. In the section on the empirical results I will show that with reasonable identifying assumptions, such as the "agnostic identification", one can get very reasonable results in a Bayesian framework. This does not give an obvious hint what the correct approach should be, but at least one can conclude that the Bayesian approach does not suffer from the structure it imposes on the idiosyncratic component. Therefore the structure imposed cannot be an unreasonable restriction on the model. It remains for further research to settle this specific issue conclusively. More precise assumptions of DFMs can be found in the section on identification and normalization.
4 The Econometric Framework
4.1 FAVARs
As already stated, the idea behind FAVARs is to combine standard structural VAR analysis with the recently developed and advanced features of dynamic factor models, estimating a joint VAR that contains factors extracted from a large panel of informational data and, in addition, perfectly observable time series that have pervasive effects on the economy, such as the short-term interest rate set by the central bank. Therefore BBE labeled the model, in a straightforward manner, "factor-augmented VAR". This approach is well suited for structural analysis such as impulse response analysis and variance decomposition (in particular for the problem at hand). For the estimation procedure the model has to be cast into a state-space representation. For the rest of the thesis I will mostly follow the approach and notation of BBE unless otherwise explicitly stated.
The model consists of the two equations (1) and (2) introduced in the previous section. The FAVAR equation (2) already has the form of the state equation, which is often also referred to as the transition equation. The joint dynamics of the factors and the observable pervasive variables (F_t, Y_t) are given by equation (3):
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} = \Phi(L) \begin{bmatrix} F_{t-1} \\ Y_{t-1} \end{bmatrix} + v_t \qquad (3)

v_t \sim N(0, Q) \qquad (4)
Here the variable Y_t denotes the [M × 1] vector of observable economic variables that have pervasive effects throughout the economy. The index t = 1, ..., T represents time; the term \Phi(L) represents a conformable lag polynomial of order d. In our specification, which follows BBE, Y_t is assumed to represent the policy instrument, e.g. the
federal funds rate in the US case. But it can also represent economic concepts such as "economic activity" and so forth. In fact, one can let Y_t represent a complete VAR, with the specification desired or standard in the literature, instead of one single variable. If the observation equation (1) consisted only of Y_t, we would be in the well-known standard VAR framework; the factor augmentation would be missing. Hence one could apply standard SVAR analysis or other multivariate time series estimation using only data for Y_t.
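Conditional on factor estimates, fitting the transition equation (3) is just a regression in the stacked vector (F_t, Y_t). A sketch with one lag, estimated equation by equation via OLS, using random draws as stand-ins for the estimated factors and the policy instrument (all names and dimensions are illustrative, not the thesis's Matlab implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, M = 200, 3, 1               # periods, factors, observables (e.g. the FFR)
Z = rng.normal(size=(T, K + M))   # stand-in for the stacked vector [F_t, Y_t]

# VAR(1): Z_t = Phi Z_{t-1} + v_t, estimated by OLS on the lagged regressors
lhs, rhs = Z[1:], Z[:-1]
Phi = np.linalg.lstsq(rhs, lhs, rcond=None)[0].T   # (K+M) x (K+M) coefficients

resid = lhs - rhs @ Phi.T
Q = resid.T @ resid / (T - 1)     # innovation variance-covariance, cf. v_t ~ N(0, Q)
print(Phi.shape, Q.shape)
```

With higher lag order d one simply stacks d lags of Z on the right-hand side; the Bayesian estimation later replaces this OLS step with draws from the posterior.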
The cross-sectional dynamics of the data are represented by the parsimoniously extracted factors. Through the factors one can deduce the dynamic effects of a shock to monetary policy; these are the relevant comovements not captured by Y_t. The dynamics of the whole economy and its reaction to an unexpected shock are captured by the parsimoniously extracted factors.
The number of factors K should be small17 compared to the number of time series considered. One can think of the unobserved factors as diffuse concepts such as "economic activity" or "credit conditions", which are represented by a range of economic variables rather than by only one or a few. This hint given by BBE might be interesting to pursue and explore further in future research when it comes to the question of structurally identifying the factors. One approach to give the factors a structural interpretation, as has been done by Belviso and Milani (2005), is to explicitly model the factor loadings to load only on a specified factor that is extracted from a subset of the data sorted by economic concept. Belviso and Milani [2005] follow Stock and Watson [2005] regarding the number of factors required and relevant to represent the driving forces of the economy, and try to structurally identify seven factors which are supposed to represent
17 To have a reference for the term "small": Giannone, Reichlin and Sala [2004] assume that the driving forces of the US economy can be represented by two factors, as opposed to Stock and Watson [2005], who argue the fundamental driving forces of the US economy should be represented by seven factors. These factors were extracted from large panels of 173 and 132 variables, respectively.
the main dynamics of the US economy.
Regarding the term \Phi(L), BBE note that one may set a priori restrictions, as is well known and done in the structural VAR literature, which would reduce the number of parameters to be estimated. The vector of error terms v_t has mean 0 and variance-covariance matrix Q. The above equation represents a VAR w.r.t. the joint dynamics of (F_t, Y_t), to which BBE refer as a factor-augmented vector autoregression, henceforth FAVAR. Note that if the block of the lag polynomial that relates Y_t to F_{t-1} is 0, the system reduces to the standard VAR, as one would then assume that Y_t and F_t are independent of each other and hence not relevant to explain each other's dynamics. In this way one can measure the direct contribution of incorporating more economic information into the observation equation via F_t.
The results for the contribution of adding further factors are presented in the section on empirical results. If the true data generating process is a FAVAR, the standard VAR system in Y_t will lead to biased estimates (especially w.r.t. the impulse response coefficients). This is straightforward, as it implies that relevant information is not included. In order to be able to estimate the FAVAR one needs the unobservable factors F_t, which will be extracted from the set of "background" or "informational" time series denoted by the [N × 1] vector X_t. N might be very large, even larger than the number of observations or time periods T. Furthermore it is assumed to be much greater than the number of factors and pervasive observables (K + M \ll N).
Yet the statement that more data necessarily means more information, and hence is better for the analysis, seems not to be the end of the story.
The dynamics of the "informational" variables are assumed to be as follows:

X_t = \Lambda^f F_t + \Lambda^y Y_t + e_t \qquad (5)

e_t \sim N(0, R) \qquad (6)
Here \Lambda^f denotes the matrix of factor loadings with dimension [N × K] and \Lambda^y is [N × M]. The error term e_t has mean 0 and covariance matrix R. Note that e_t and v_t are independent and that R is diagonal, which means that the error terms of the observable variables are mutually uncorrelated. At this point one has to take a clear stand on which assumption one follows when it comes to the issue of error correlation: one can think of the error terms as being weakly correlated or completely uncorrelated. We encountered this previously in the discussion of exact versus approximate dynamic factor models; the standard assumptions in the literature with respect to dynamic factor models were introduced in the previous section. As we follow the Bayesian likelihood-based approach, we adopt the assumption of uncorrelated error terms. Hence we model an exact dynamic factor model in the vein of Sargent and Sims [1977]. The distinction between the observation equations of the DFMs we have seen so far and (5) is that here the dynamics of the data are supposed to be driven by F_t and Y_t, which in fact can be correlated. Moreover, X_t depends only on the current and not on lagged values of F_t. BBE state that this implication is not restrictive in practice, as the factors can be interpreted as including arbitrary lags of the fundamental factors.
4.2 FAVAR Identification
Identifying restrictions have to be set in order to distinguish the idiosyncratic from the common component. Additionally, one can set further identifying assumptions in order to
identify the factors and the loadings separately, and furthermore to distinguish the single factors. In this thesis I follow the standard identification restrictions18, either on the coefficient matrix \Lambda or on the factors F_t, employed by BBE in order to identify the factors and the factor loadings uniquely; they look as follows:
\Lambda^{f\prime} \Lambda^f / N = I \quad \text{or} \quad F'F/T = I
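The second normalization can be satisfied by construction: taking \hat Z as the orthonormal eigenvectors of XX' belonging to the K largest eigenvalues and setting \hat F = \sqrt{T}\hat Z, as BBE do in the two-step case (see the digression below), yields F'F/T = I mechanically. A numerical sketch on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, K = 100, 25, 3
X = rng.normal(size=(T, N))       # illustrative stand-in for the data panel

# Eigen-decomposition of XX' (T x T); keep eigenvectors of the K largest eigenvalues
eigval, eigvec = np.linalg.eigh(X @ X.T)
Z = eigvec[:, np.argsort(eigval)[::-1][:K]]   # T x K, orthonormal columns

F_hat = np.sqrt(T) * Z            # F_hat = sqrt(T) * Z_hat
# Normalization F'F/T = I holds by construction, since Z'Z = I:
assert np.allclose(F_hat.T @ F_hat / T, np.eye(K))
```

The scaling by \sqrt{T} is what turns orthonormal eigenvectors (Z'Z = I) into factors satisfying F'F/T = I.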
The crucial assumption is that Y_t (in our baseline model the policy instrument, the FFR) does not react to the X's contemporaneously. The channels are restricted by setting the upper K × K block of \Lambda^f to an identity matrix and the upper K × M block of \Lambda^y to a zero matrix. This removes the contemporaneous impact of Y_t on the first K variables, and therefore variables that do not react contemporaneously to Y_t should be chosen for this block. Since the factors are estimated only up to a rotation, the choice of the K × K block that is set to an identity matrix should not affect the space spanned by the estimated factors19.
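The upper-block restriction can be illustrated ex post: given any unrestricted loading estimate whose top K × K block is invertible, rotating the loadings by the inverse of that block and the factors by the block itself leaves the common component unchanged while making the top block the identity. A sketch on simulated placeholder matrices (an illustration of the rotation logic, not the estimation procedure itself):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, T = 15, 2, 100
Lam = rng.normal(size=(N, K))     # placeholder unrestricted loading estimate
F = rng.normal(size=(T, K))       # placeholder unrestricted factor estimate

# Rotate so that the top K x K block of the loadings becomes the identity:
# Lam F' = (Lam A^{-1}) (A F')  with A the top block of Lam (assumed invertible)
A = Lam[:K, :]
Lam_r = Lam @ np.linalg.inv(A)    # restricted loadings, top block = I_K
F_r = F @ A.T                     # correspondingly rotated factors

assert np.allclose(Lam_r[:K, :], np.eye(K))          # restriction holds
assert np.allclose(Lam_r @ F_r.T, Lam @ F.T)         # common component unchanged
```

This makes concrete why the restriction is a normalization rather than a substantive assumption: the product of loadings and factors, and hence the space they span, is untouched.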
For some purposes it is useful to separately identify the common shocks and the factor loadings. But as only the product of the two is known, a rotation has to be chosen when one is interested in identifying the factors and the loadings separately. In my case I am interested in the separate identification, and in the following section I describe the standard approach chosen by BBE that I also decided to follow.
Digression on factor identification In order to identify the factors against rotation, BBE impose the factor restriction F'F/T = I, obtaining \hat{F} = \sqrt{T}\hat{Z}, where \hat{Z} are the eigenvectors corresponding to the K largest eigenvalues, sorted in descending order. In the joint estimation case the specified identification against rotation requires that the factors are identified in the following form:
18 The factor identification should not be confused with the identification of the structural shocks of e.g. monetary policy.
19 See BBE [2005].
F_t^* = A F_t - B Y_t

Here A is a nonsingular [K × K] matrix and B is of dimension [K × M]. Restrictions are only imposed on the observation equation. BBE substitute F_t^* into (1), due to the fact that restrictions should not be imposed on the VAR dynamics, and hence arrive at

X_t = \Lambda^f A^{-1} F_t^* + \left( \Lambda^y + \Lambda^f A^{-1} B \right) Y_t + e_t

Now for the factors and their loadings to be identified uniquely it is required that \Lambda^f A^{-1} = \Lambda^f and \Lambda^y + \Lambda^f A^{-1} B = \Lambda^y.
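Spelling out why these two conditions pin down the rotation: assuming \Lambda^f has full column rank (an assumption not stated explicitly at this point in the text), they force A = I and B = 0, so no nontrivial rotation survives:

```latex
\Lambda^f A^{-1} = \Lambda^f
  \;\Rightarrow\; \Lambda^f \left( A^{-1} - I \right) = 0
  \;\Rightarrow\; A = I ,
\qquad
\Lambda^y + \Lambda^f A^{-1} B = \Lambda^y
  \;\Rightarrow\; \Lambda^f B = 0
  \;\Rightarrow\; B = 0 ,
```

where the second chain uses A = I from the first and, in both cases, the full column rank of \Lambda^f.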
When it comes to the identification of the VAR part, the FAVAR equation (2), one is concerned with the identification of the innovation, strictly speaking the innovation to Y_t, which is the shock to monetary policy. As this is the main issue of this thesis we will only consider this case, but the agnostic identification by Uhlig [2005] using sign restrictions is extendable to any other shock desired. The identification schemes are elaborately explained in the following section.
4.3 Estimation Procedure
There are two estimation procedures considered by BBE: the nonparametric two-step principal component estimation and the parametric Bayesian approach, to which they refer as the one-step or likelihood-based estimation (Eliasz [2005]). As the aim of this thesis is to tackle the question raised by the title from a Bayesian perspective, we will mention the two-step estimation procedure only very briefly and elaborate on the likelihood-based estimation procedure. Here I mostly rely on BBE [2005] and Eliasz [2005].
4.3.1 Generalized Dynamic Factor Model
Factor models can be decomposed into a common component and an idiosyncratic component. Forni, Hallin, Lippi and Reichlin (2001) combine the approximate DFM of Chamberlain (1983) and Chamberlain and Rothschild (1984), which
is static20 and allows for some cross-correlation of the idiosyncratic components, with the dynamic version of Geweke (1977) and Sargent and Sims (1977), which assumes orthogonalized idiosyncratic components. Their concept is mostly known as the generalized dynamic factor model and in the current literature also as dynamic principal component analysis (DPCA). Here they perform a principal component analysis based on the first Q eigenvalues and eigenvectors, which are calculated from the variance-covariance matrix (VCV) of the data set. The GDFM, or DPCA, can be summarized as a concept that has a representation similar to the static factor model but with a dynamic setting with respect to the factor loadings.
4.3.2 Two-Step Estimation
This approach is analogous to the estimation procedure used in Stock and Watson [2002], where DFMs were used to forecast inflation. In order to uncover the space spanned by the common components C_t = (F_t', Y_t')', as a first step the first [K + M] principal components (henceforth PCs) of X_t are estimated. Here the attentive reader should note that this first step does not exploit the fact that Y_t is observed21. Furthermore, the number of informational variables N has to be large, and the number of PCs has to be at least as large as the true number of factors, for the PCs to recover the space spanned by F_t and Y_t consistently. As a further disadvantage one should state that the two-step approach implies the presence of "generated regressors" in the second step22. F_t is obtained as the part of the space covered by C_t that is not covered by Y_t. The advantage of this approach is that it is easy to implement and computationally simple, as opposed to the computationally burdensome Gibbs-sampling approach. Stock and Watson state that it also imposes few distributional assumptions and allows for some degree of correlation in
20 In this context, <strong>the</strong> static version <strong>of</strong> <strong>the</strong> fac<strong>to</strong>r model means that <strong>the</strong> common shocks only affect <strong>the</strong><br />
series contemporaneously.<br />
21 For our baseline model this means that <strong>the</strong> fed’s policy action is not taken in<strong>to</strong> account contempo-<br />
raneously.<br />
22 For more details please refer <strong>to</strong> BBE [2005].
Bayesian FAVARs with Agnostic Identification 23<br />
<strong>the</strong> idiosyncratic error term et.<br />
4.3.3 Likelihood-Based Estimation
The alternative to the approach discussed above is joint estimation by likelihood-based Gibbs-sampling techniques. The problem with such high-dimensional models is that it is very difficult to obtain the marginal distributions from the joint distribution by integration. As BBE [2005] state, the irregular nature of the likelihood function makes maximum likelihood estimation (henceforth MLE) infeasible in practice. In order to understand better why Gibbs sampling is considered a useful tool for estimating large DFMs and FAVARs, I will briefly discuss the technical background of the estimation procedure. The idea of this section is to give the interested reader an introduction to Markov Chain Monte Carlo methods (MCMC in the following), and afterwards to elaborate on the Gibbs sampler and on the multi-move version of the Gibbs sampler in order to convey its usefulness for DFMs.
4.3.4 Markov Chain Monte Carlo
The estimation of the parameter space of (L)DFMs is essentially an integration problem: especially in Bayesian approaches, obtaining the posterior distribution, which contains all relevant information on the unknown parameters given the observed data, requires the integration of high-dimensional functions. Statistical inference can then be deduced from the posterior distributions. This integration problem is a crucial one and can be computationally very cumbersome. One can evaluate the integrals approximately via numerical methods, 23 but when the parameter space is multidimensional even numerical methods may fail. Instead, one can make use of stochastic algorithms such as Monte Carlo integration techniques. The MCMC methods use computer simulations of Markov chains in the parameter space. For random variables of higher dimensions, 24 one has to solve multiple integrals. 25
23 Such as Simpson's rule or the trapezoidal rule.
24 As is the case in FAVARs and (L)DFMs.
25 In such cases integration via numerical methods is very hard, if solvable at all.
Bayesian analysis usually requires integration in order to obtain the marginal posterior distributions of the individual parameters, in which statistical practitioners are interested, from the joint posterior distribution of all unknown parameters of the model. As already stated, these integrals are very hard to solve in the high-dimensional case, and the joint posterior density itself, from which the marginals are to be derived, is very difficult to derive. The MCMC methods such as the Gibbs sampler offer a solution to such high-dimensional problems, in that they allow one to implement a posterior simulation that only requires the pointwise evaluation of the prior distribution and the likelihood function.
A Markov chain refers to a sequence of random variables (Z0, . . . , Zn) generated by a Markov process. The MCMC methods attempt to simulate direct draws from some complex distribution of interest. Here the Gibbs sampling methodology offers an easy way to solve the problem, given that the conditional posterior distributions are readily available. It may be employed to obtain the marginals of the parameters without integration and without having to know the joint density.
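A one-line illustration of the Monte Carlo integration idea (my own toy example, unrelated to the thesis's model): the integral $E[Z^2] = \int z^2 \phi(z)\,dz = 1$ for a standard normal Z is replaced by a sample average over simulated draws.

```python
import random
random.seed(1)

# E[Z^2] = 1 for Z ~ N(0, 1); approximate the integral by an average
draws = [random.gauss(0.0, 1.0) for _ in range(200_000)]
estimate = sum(z * z for z in draws) / len(draws)
print(abs(estimate - 1.0) < 0.05)   # True: the average approximates the integral
```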
4.3.5 The Gibbs Sampler
The Gibbs sampling methodology offers an easy way to solve the dimensionality problem, given that the conditional posterior distributions are readily available. In order to exemplify the Gibbs sampler, think of θ as the parameter space. Furthermore, assume that $p(\theta \mid Y_T)$ represents the joint probability density function, where $Y_T$ denotes the data. Following a cyclical iterative pattern, the Gibbs sampler generates the joint distribution $p(\theta \mid Y_T)$, which can also be referred to as the target distribution that one tries to approximate empirically via a Markov chain. The Gibbs sampler begins with a partitioning or blocking of the parameters into d subvectors $\theta' = (\theta_1, \ldots, \theta_d)$. In practice the blocking is chosen so that it is feasible to draw from each of the conditional pdfs $p(\theta_k \mid Y_T, \theta_{-k})$, where at iteration j the conditioning set is $\theta^j_{-k} = (\theta^j_1, \ldots, \theta^j_{k-1}, \theta^{j-1}_{k+1}, \ldots, \theta^{j-1}_d)$. The blocking can arise naturally if the priors on the $\theta_k$ are independent and each is conditionally conjugate. Given an arbitrary set of starting values $\theta'^{\,0} = (\theta^0_1, \ldots, \theta^0_d)$, initialize the iteration index and repeat the following cycle J times.
$j = 1$
draw $\theta_1^{(j)}$ from $p(\theta_1 \mid \theta_2^{(j-1)}, \ldots, \theta_d^{(j-1)}, Y_T)$
draw $\theta_2^{(j)}$ from $p(\theta_2 \mid \theta_1^{(j)}, \theta_3^{(j-1)}, \ldots, \theta_d^{(j-1)}, Y_T)$
$\vdots$
draw $\theta_d^{(j)}$ from $p(\theta_d \mid \theta_1^{(j)}, \theta_2^{(j)}, \ldots, \theta_{d-1}^{(j)}, Y_T)$
$j = j + 1$
After each cycle, j is increased by one. Thus each subvector is updated conditional on the most recent values of θ for all other components. The Gibbs sampler produces a series of j = 1, . . . , B, . . . , B + M conditional draws by cycling through the conditional posteriors. In order to avoid an effect of the starting values on the desired joint density and to ensure convergence, the first B draws should be discarded. Hence the last M cycles are considered the approximate empirical simulated sample from $p(\theta \mid Y_T)$.
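A minimal example of this cycle (my own toy illustration, not the thesis's FAVAR sampler): for a bivariate normal with correlation rho, both full conditionals are known univariate normals, so the Gibbs sampler simply alternates between them.

```python
import random
random.seed(0)

rho = 0.8             # target: bivariate normal, unit variances, correlation rho
B, M = 1_000, 20_000  # burn-in cycles and retained cycles
th1, th2 = 0.0, 0.0   # arbitrary starting values
kept = []
for j in range(B + M):
    # draw theta1 | theta2 ~ N(rho * theta2, 1 - rho^2)
    th1 = random.gauss(rho * th2, (1 - rho ** 2) ** 0.5)
    # draw theta2 | theta1 ~ N(rho * theta1, 1 - rho^2)
    th2 = random.gauss(rho * th1, (1 - rho ** 2) ** 0.5)
    if j >= B:        # discard the first B cycles
        kept.append((th1, th2))

# empirical correlation of the retained sample (means 0, variances 1)
corr = sum(a * b for a, b in kept) / len(kept)
print(round(corr, 1))   # close to rho
```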
5 The Econometric Model
In this part I will specify the model more precisely and show the steps required to prepare the model for the estimation procedure.
5.1 The Bayesian Approach versus the Frequentist Approach
Why do I favor the Bayesian approach rather than the classical approach to dynamic factor models? The Bayesian approach exploits the available information in a more efficient manner and does not ignore a priori information about the parameters of interest when such information is available. In the classical approach, inference about the unobserved state vector is based on the estimated values of the hyperparameters 26 of the model, which are obtained with the maximum likelihood method. One then has to treat them as if they were the true values of the model's nonrandom hyperparameters. This is a disadvantage relative to the Bayesian approach, where, in the vein of Bayesian data analysis, both the model's hyperparameters and the unobserved state vector are treated as random variables. The classical approach does not exploit the fact that $Y_t$ is observed and furthermore does not exploit the structure of the state equation in the estimation of the factors.
5.2 State-Space Representation
This part elaborates on the specific model and the transformations required in order to estimate the model via Gibbs sampling. For more details the reader is referred to Eliasz [2005] and to Kim and Nelson [1999] for a very good and elaborate survey. In order to prepare (1) and (2) for the estimation, the model has to be cast into the following state-space form:
$$\begin{bmatrix} X_t \\ Y_t \end{bmatrix} = \begin{bmatrix} \Lambda^f & \Lambda^y \\ 0 & I \end{bmatrix} \begin{bmatrix} F_t \\ Y_t \end{bmatrix} + \begin{bmatrix} e_t \\ 0 \end{bmatrix} \qquad (7)$$

$$\begin{bmatrix} F_t \\ Y_t \end{bmatrix} = \Phi(L) \begin{bmatrix} F_{t-1} \\ Y_{t-1} \end{bmatrix} + v_t \qquad (8)$$
The respective variables are the same as explained in the preceding sections. The loadings $\Lambda^f$ and $\Lambda^y$ are restricted and identified against rotational indeterminacies, as implemented by BBE and described in the previous section. According to BBE, the inclusion of the policy instrument $Y_t$ in both equations does not change the model; it merely serves notational and computational simplification. The Bayesian econometrician treats the parameters of the model on which inference is to be done as random variables. We are interested in doing inference on the parameter space $\theta = \{\Lambda^f, \Lambda^y, R, \mathrm{vec}(\Phi), Q\}$. Note that $\mathrm{vec}(\Phi)$ is the vectorized finite-order conformable lag polynomial, i.e. Φ is stacked columnwise to obtain a vectorized form. 27 To apply the multi-move version of the Gibbs sampler, one has to prepare the model further, which is done step by step in the following. The multi-move Gibbs sampler alternately samples the parameters θ and the factors $F_t$ given the data. We use the multi-move version of the Gibbs sampler because this approach allows us, as a first step, to estimate the unobserved common components, namely the factors, via the Kalman filtering technique conditional on the given hyperparameters, and as a second step to calculate the hyperparameters of the model given the factors via the Gibbs sampler in the respective blocking. 28
26 Hyperparameters are the elements of the parameter space to be estimated.
27 For more details about the vec operator please refer to Lütkepohl [1993].
For the state-space representation we define $X_t = (X_t', Y_t')'$, $e_t = (e_t', 0)'$ and $F_t = (F_t', Y_t')'$. For the case that Φ(L) is of order one, the model can be rewritten as:
$$X_t = \Lambda F_t + e_t \qquad (9)$$

$$F_t = \Phi(L) F_{t-1} + v_t \qquad (10)$$

with

$$\Lambda = \begin{bmatrix} \Lambda^f & \Lambda^y \\ 0 & I \end{bmatrix}, \qquad R = \begin{bmatrix} R & 0 \\ 0 & 0 \end{bmatrix}.$$

But in most applications one can expect the order to be d > 1, as is the case in the dataset I analyze. The dataset is at monthly frequency; therefore I chose a lag order of 12 for Φ(L). The FAVAR equation has to be transformed into a first-order Markov process in order to be able to draw the factors via Bayesian Kalman filtering. For that we define $\Phi(L) = \Phi_1 L + \Phi_2 L^2 + \ldots + \Phi_d L^d$, $\bar F_t = (F_t', F_{t-1}', \ldots, F_{t-d+1}')'$ and $\bar v_t = (v_t', 0, \ldots, 0)'$. The lag polynomial of the FAVAR equation in the first-order representation changes to:
28 Please note that we always also condition on the data; for notational convenience this is implicitly assumed and not written out explicitly.
$$\bar\Phi = \begin{bmatrix} \Phi_1 & \Phi_2 & \cdots & \Phi_{d-1} & \Phi_d \\ I_{K+M} & 0 & \cdots & 0 & 0 \\ 0 & I_{K+M} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_{K+M} & 0 \end{bmatrix} \qquad (11)$$
Now we have to pad the VCV of the FAVAR disturbances with zeros in a straightforward way, to adjust it to the dimensions of the state equation, which results in the following matrix:
$$\bar Q = \begin{bmatrix} Q & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \qquad (12)$$

where the zero blocks and Q have dimension [(K + M) × (K + M)] and $\bar Q$ has dimension [d(K + M) × d(K + M)].
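The construction of $\bar\Phi$ and $\bar Q$ can be sketched as a generic companion-form helper (the dimensions below are illustrative choices of mine, not those of the thesis's dataset):

```python
import numpy as np

def companion(Phis, Q):
    """Stack VAR(d) coefficient matrices Phi_1..Phi_d into the
    first-order (companion) transition matrix and pad the shock
    VCV Q with zeros to the matching dimension."""
    d = len(Phis)
    n = Phis[0].shape[0]                      # n = K + M
    Phibar = np.zeros((d * n, d * n))
    Phibar[:n, :] = np.hstack(Phis)           # top row: [Phi_1 ... Phi_d]
    Phibar[n:, :-n] = np.eye((d - 1) * n)     # identity blocks below
    Qbar = np.zeros((d * n, d * n))
    Qbar[:n, :n] = Q                          # only v_t is stochastic
    return Phibar, Qbar

n, d = 3, 2
Phis = [0.5 * np.eye(n), 0.2 * np.eye(n)]
Phibar, Qbar = companion(Phis, np.eye(n))
print(Phibar.shape, Qbar.shape)   # (6, 6) (6, 6)
```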
This results in the final observation equation of the following form:

$$X_t = \bar\Lambda \bar F_t + e_t \qquad (13)$$

And finally the last extension:

$$\bar\Lambda = [\Lambda \;\; 0 \;\; \ldots \;\; 0] \qquad (14)$$

The final state-space representation prepared to fit the estimation procedure is:

$$\bar F_t = \bar\Phi \bar F_{t-1} + \bar v_t \qquad (15)$$
$$X_t = \bar\Lambda \bar F_t + e_t \qquad (16)$$

According to the Bayesian approach, the parameter space with the respective hyperparameters 29 and the factors $\{F_t\}_{t=1}^{T}$ are treated as random variables. The respective histories of X and F from period 1 through T are defined by

$$\tilde X_T = (X_1, X_2, \ldots, X_T), \qquad \tilde F_T = (F_1, F_2, \ldots, F_T).$$

5.3 Inference
This part is very close to BBE [2005] and Eliasz [2005]. For completeness, the single steps are presented at this stage. The task, as described in the section about Gibbs sampling, is to derive the posterior densities. The aim is to empirically approximate the marginal posterior densities

$$p(\tilde F_T) = \int p(\tilde F_T, \theta)\, d\theta \qquad \text{and} \qquad p(\theta) = \int p(\tilde F_T, \theta)\, d\tilde F_T,$$

where $p(\tilde F_T, \theta)$ is the joint posterior density and the integrals are taken with respect to the supports of θ and $\tilde F_T$, respectively. The procedure applied to obtain the empirical approximation of the posterior distribution is the previously explained multi-move version of the Gibbs sampling technique by Carter and Kohn [1994]. BBE also apply this estimation procedure, which is surveyed by Kim and Nelson [1999].
Choosing the Starting Values $\theta^0$
In general one can start the iteration cycle with any arbitrary, randomly drawn set of parameters, as the joint and marginal empirical distributions of the generated parameters converge at an exponential rate to their joint and marginal target distributions as S → ∞. This has been shown by Geman and Geman [1984]. Following the advice of Eliasz [2005], one should judiciously select the starting values in the framework of large-dimensional models, due to the fact that in the case of large cross-sections, high-dimensional likelihoods make irregularities more likely. A good choice can reduce the number of draws required for convergence and hence saves time, which in a computer-intensive statistical framework is of great relevance. I follow the suggestions of Eliasz [2005] and apply the first-step estimates of PCA to select the starting values. A detailed description of how to obtain the starting values via the first-step PCA can be found in his paper. Since Gelman and Rubin [1992] have shown that a single chain of the Gibbs sampler might give a "false sense of security", it has become common practice to try out different starting values, at best from a randomly (over)dispersed set of parameters, and then to check convergence by verifying that they lead to similar empirical distributions. This part is stated here for completeness; a slightly more elaborate version can be found in BBE [2005] and Eliasz [2005].
29 The hyperparameters refer to the elements of the parameter space.
Conditional density of the factors
In this subsection we want to draw from

$$p_F(\tilde F_T \mid \tilde X_T, \theta),$$

assuming that the hyperparameters of the parameter space θ are given; hence I describe Bayesian inference on the dynamic evolution of the factors $F_t$ conditional on $X_t$ for t = 1, . . . , T and conditional on θ. The transformations required to draw the factors have been done in the previous section. The conditional distribution from which the state vector is generated can be expressed as the product of conditional distributions by exploiting the Markov property of state-space models in the following way:

$$p_F(\tilde F_T \mid \tilde X_T, \theta) = p_F(F_T \mid \tilde X_T, \theta)\, p_F(F_{T-1} \mid F_T, \tilde X_T, \theta) \cdots p_F(F_1 \mid F_2, \tilde X_T, \theta) = p_F(F_T \mid \tilde X_T, \theta) \prod_{t=1}^{T-1} p_F(F_t \mid F_{t+1}, \tilde X_T, \theta).$$
At this point it is important to note that the conditioning is on the first [K + M] rows of $\bar F_T$ only, since otherwise, for the case of d > 1, the VCV of the density would be singular. This is an important hint, which is very relevant for the implementation in Matlab; it was found in Eliasz [2005] but was not explicitly stated in BBE [2005]. The state-space model is linear and Gaussian, hence we have:

$$F_T \mid \tilde X_T, \theta \sim N(F_{T|T}, P_{T|T})$$
$$F_t \mid F_{t+1}, \tilde X_T, \theta \sim N(F_{t|t,F_{t+1}}, P_{t|t,F_{t+1}})$$

where the first holds for the Kalman filter for t = 1, . . . , T and the second holds for the Kalman smoother for t = T − 1, T − 2, . . . , 1. The derivation of the Kalman filter and smoother can be found in an elaborate manner in Eliasz [2005]; therefore I do not repeat it here.
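The forward-filtering, backward-sampling logic can be illustrated for a scalar state (my own stylized sketch of the Carter–Kohn idea; the thesis's implementation is multivariate and conditions only on the first K + M rows, as noted above):

```python
import random
random.seed(2)

# state:       f_t = phi * f_{t-1} + v_t,  v_t ~ N(0, q)
# observation: x_t = lam * f_t + e_t,      e_t ~ N(0, r)
phi, q, lam, r = 0.9, 0.5, 1.0, 0.3
T = 100
f = [0.0]
for t in range(1, T):
    f.append(phi * f[-1] + random.gauss(0, q ** 0.5))
x = [lam * ft + random.gauss(0, r ** 0.5) for ft in f]

# forward pass: Kalman filter, storing f_{t|t} and P_{t|t}
m, P = 0.0, 1.0
ms, Ps = [], []
for xt in x:
    mp, Pp = phi * m, phi * phi * P + q          # predict
    K = Pp * lam / (lam * lam * Pp + r)          # Kalman gain
    m, P = mp + K * (xt - lam * mp), (1 - K * lam) * Pp
    ms.append(m)
    Ps.append(P)

# backward pass: draw f_T, then f_t | f_{t+1} for t = T-1, ..., 1
draw = [0.0] * T
draw[-1] = random.gauss(ms[-1], Ps[-1] ** 0.5)
for t in range(T - 2, -1, -1):
    # condition the filtered density on the draw of f_{t+1}
    J = Ps[t] * phi / (phi * phi * Ps[t] + q)
    mean = ms[t] + J * (draw[t + 1] - phi * ms[t])
    var = Ps[t] - J * phi * Ps[t]
    draw[t] = random.gauss(mean, var ** 0.5)
print(len(draw))   # 100
```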
Inference on the parameters θ
Drawing from the conditional 30 distribution $p(\theta \mid \tilde X_T, \tilde F_T)$.
This part refers to the observation equation of the state-space model, which, conditional on the estimated factors and the given data, specifies the distribution of Λ and R. Here we can apply equation-by-equation OLS in order to obtain $\hat\Lambda$ and $\hat e$. This is feasible due to the fact that the errors are uncorrelated. According to the specification by BBE, we also assume a proper (conjugate) but diffuse Inverse-Gamma(3, 0.001) prior for $R_{ii}$. Note that R is assumed to be diagonal. The posterior then has the following form:

$$R_{ii} \mid \tilde X_T, \tilde F_T \sim iG(\bar R_{ii},\, T + 0.001),$$

where $\bar R_{ii} = 3 + \hat e_i'\hat e_i + \hat\Lambda_i'\big[M_0^{-1} + (\tilde F_T^{(i)\prime}\tilde F_T^{(i)})^{-1}\big]^{-1}\hat\Lambda_i$, with $M_0^{-1}$ denoting the variance parameter in the prior on the coefficients of the i-th equation, $\Lambda_i$. The normalization discussed in section (4), which identifies the factors and the loadings separately, requires setting $M_0 = I$. Conditional on the drawn value of $R_{ii}$, the prior on the factor loadings of the i-th equation is $\Lambda_i \sim N(0, R_{ii} M_0^{-1})$. The regressors of the i-th equation are represented by $\tilde F_T^{(i)}$. The values of $\Lambda_i$ are drawn from the posterior $N(\bar\Lambda_i, R_{ii}\bar M_i^{-1})$, where $\bar\Lambda_i = \bar M_i^{-1}(\tilde F_T^{(i)\prime}\tilde F_T^{(i)})\hat\Lambda_i$ and $\bar M_i = M_0 + \tilde F_T^{(i)\prime}\tilde F_T^{(i)}$.
30 The following part is very close to BBE [2005].
The next Gibbs block requires drawing $\mathrm{vec}(\Phi)$ and Q conditional on the most current draws of the factors, the $R_{ii}$'s and $\Lambda_i$'s, and the data. As the FAVAR equation has a standard VAR form, one can likewise estimate $\mathrm{vec}(\hat\Phi)$ and $\hat Q$ via equation-by-equation OLS. BBE impose a diffuse conjugate Normal-Wishart prior:

$$\mathrm{vec}(\Phi) \mid Q \sim N(0, Q \otimes \Omega_0), \qquad Q \sim iW(Q_0, K + M + 2).$$

In order to obtain a prior in the vein of the Minnesota prior, which expresses that more distant lags have less impact and hence are more likely to be zero, they follow Kadiyala and Karlsson [1997]. 31 First we draw Q from the inverse-Wishart posterior

$$Q \mid \tilde X_T, \tilde F_T \sim iW(\bar Q,\, T + K + M + 2),$$

where $\bar Q = Q_0 + \hat V'\hat V + \hat\Phi'\big[\Omega_0 + (\tilde F_{T-1}'\tilde F_{T-1})^{-1}\big]^{-1}\hat\Phi$ and $\hat V$ is the matrix of OLS residuals. Then, conditional on the drawn Q, we draw $\mathrm{vec}(\Phi)$ from the conditional normal

$$\mathrm{vec}(\Phi) \sim N(\mathrm{vec}(\bar\Phi), Q \otimes \bar\Omega),$$

where $\bar\Phi = \bar\Omega(\tilde F_{T-1}'\tilde F_{T-1})\hat\Phi$ and $\bar\Omega = (\Omega_0^{-1} + \tilde F_{T-1}'\tilde F_{T-1})^{-1}$. We truncate the draws to acceptable values of Φ, less than one in absolute value, in order to ensure stationarity. The block consisting of the Kalman filter and smoother and the block drawing the parameter space are iterated until convergence is achieved.
31 For a detailed description please refer to BBE [2005].
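This Gibbs block can be sketched for a toy VAR(1) (my own illustrative dimensions, priors, and simulated data; the $\bar Q$, $\bar\Omega$, $\bar\Phi$ formulas mirror the expressions above, and the inverse-Wishart draw is implemented by inverting a Wishart draw):

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_iw(S, df, rng):
    """Inverse-Wishart iW(S, df) draw via inverting a Wishart draw
    (valid for integer df >= dimension)."""
    L = np.linalg.cholesky(np.linalg.inv(S))
    Z = rng.standard_normal((df, S.shape[0])) @ L.T
    return np.linalg.inv(Z.T @ Z)

# simulate a small VAR(1) as a stand-in for the FAVAR equation
T, n = 300, 2
Phi_true = np.array([[0.5, 0.1], [0.0, 0.4]])
F = np.zeros((T, n))
for t in range(1, T):
    F[t] = F[t - 1] @ Phi_true.T + rng.standard_normal(n)
Flag, Fcur = F[:-1], F[1:]

# equation-by-equation OLS (rows of Phi_hat index the regressors)
Phi_hat = np.linalg.lstsq(Flag, Fcur, rcond=None)[0]
V = Fcur - Flag @ Phi_hat
FtF = Flag.T @ Flag

# diffuse conjugate Normal-inverse-Wishart prior (toy choices)
Omega0, Q0 = np.eye(n), np.eye(n)

# draw Q from iW(Qbar, T + n + 2)
Qbar = Q0 + V.T @ V + Phi_hat.T @ np.linalg.inv(Omega0 + np.linalg.inv(FtF)) @ Phi_hat
Q = draw_iw(Qbar, T + n + 2, rng)

# draw vec(Phi) from N(vec(Phibar), Q kron Omegabar)
Omegabar = np.linalg.inv(np.linalg.inv(Omega0) + FtF)
Phibar = Omegabar @ FtF @ Phi_hat
cov = np.kron(Q, Omegabar)
vecPhi = Phibar.flatten(order="F") + np.linalg.cholesky(cov) @ rng.standard_normal(n * n)
Phi_draw = vecPhi.reshape((n, n), order="F")
print(Phi_draw.shape)   # (2, 2)
```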
6 Structural FAVARs
6.1 Identification of Shocks
The issue <strong>of</strong> identifying structural shocks from <strong>the</strong> reduced form VAR innovations, and<br />
in particular identifying a shock <strong>to</strong> monetary policy has been dealt with in a huge body<br />
<strong>of</strong> literature. There have been introduced a lot <strong>of</strong> variations on how <strong>to</strong> achieve identifica-<br />
tion. The most prominent ones are explained below. As <strong>the</strong>re are various approaches <strong>to</strong><br />
deal with <strong>the</strong> same question it seems clear that <strong>the</strong>re is also a controversial debate about<br />
which scheme <strong>to</strong> choose in order <strong>to</strong> reveal <strong>the</strong> true propagation mechanism attributable<br />
<strong>to</strong> a monetary policy shock. After considering <strong>the</strong> different approaches available, it seems<br />
<strong>to</strong> for me <strong>to</strong> be advisable <strong>to</strong> head <strong>the</strong> challenge <strong>of</strong> identification through applying <strong>the</strong><br />
agnostic identification using sign restrictions 32 . Especially from <strong>the</strong> perspective <strong>of</strong> an<br />
economist it seems <strong>to</strong> me plausible <strong>to</strong> have an identification scheme that incorporates<br />
economic <strong>the</strong>ory and through imposing <strong>the</strong> impulse responses <strong>to</strong> satisfy <strong>the</strong> conventional<br />
wisdom. Although this is a weaker identification scheme 33 . In this section <strong>the</strong> well known<br />
identification schemes are presented, afterwards I show briefly how <strong>the</strong>y were extended <strong>to</strong><br />
be applicable <strong>to</strong> large scale DFMs and <strong>to</strong> <strong>the</strong> FAVAR framework. And finally in <strong>the</strong> last<br />
part <strong>of</strong> this section I elaborate on <strong>the</strong> extension <strong>of</strong> <strong>the</strong> sign restriction <strong>of</strong> Uhlig (2005) <strong>to</strong><br />
<strong>the</strong> FAVAR framework that incorporates <strong>the</strong> Gibbs sampling.<br />
In the common VAR framework one is required to deduce the structural shocks from the VAR innovations. In the DFM and FAVAR framework the task is essentially the same, with the main distinction that the structural shocks are deduced not from the reduced-form VAR innovations but from the FAVAR innovations, including the factors that drive the dynamics of the informational variables, i.e. the observed data.
32 Agnostic because no restriction is imposed on output.
33 Weaker in the sense that restrictions are set only on the mentioned variables according to the conventional wisdom. The aim is to restrict as little as necessary a priori.
34 Bayesian FAVARs with Agnostic Identification
6.2 Identification Schemes in SVARs

The SVAR framework is the best-known and most widely applied framework for the identification of monetary policy shocks. The most widely used schemes are the recursive Cholesky identification advanced by CEE [1999], the long-run identification that goes back to Blanchard and Quah [1989], and the combination of the two, introduced by Leeper, Sims and Zha [1996], that sets zero restrictions on the coefficient matrix. Very good surveys of identification in SVARs can be found in CEE [1999] and Leeper, Sims and Zha [1996]. They document the progress made over time, the versions in circulation, and the state of the art.
6.3 Identification in DFMs and FAVARs

As in the SVAR case, the structural shocks in DFMs and FAVARs have to be derived from the reduced-form innovations, with the distinction that here one refers not to the VAR innovations but to those of the FAVAR, strictly speaking to those of Yt or, in case Yt consists of more than one variable, of the one the researcher is interested in. Several identification schemes have already been applied in the DFM and FAVAR framework and shall be discussed here very briefly. There is a very good survey by Stock and Watson [2005], which broadly introduces the different approaches and how to set restrictions to identify factors and factor loadings. Furthermore, they elaborate on the different identification schemes for recovering the structural shocks in DFMs and FAVARs. As they mostly deal with the nonparametric and the frequentist approaches, the reader is referred to this survey for further details. My thesis concentrates on the Bayesian approach combined with a Bayesian identification, namely the agnostic identification. Therefore the other approaches are mentioned only briefly with the relevant references; for the rest I focus on the model and methodology described above.
The identification schemes in circulation are surveyed by Stock and Watson [2005].
There is the BBE FAVAR identification scheme, also applied in this thesis, and a slightly modified version applied by Stock and Watson [2005]. Furthermore, the approach of Favero and Marcellino [2005] and Favero, Marcellino and Neglia [2004] is introduced in that survey.
Here, as in the VAR case, the factors' structural shocks are assumed to be linearly related to the reduced-form factor innovations:

vt = Qut

where Q is an (orthonormal) invertible [q × q] matrix. There are two ways of identifying the transformation matrix Q. One is the full-system identification of Blanchard and Watson [1986], who strive to identify all elements of Q. The other approach is the single-equation identification, where only one column of Q is required in order to identify the one respective shock. The latter is the relevant one for us, as we are interested only in the identification of the shock attributable to monetary policy. Therefore we are interested in a single column qs of the orthonormal matrix Q.
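Under vt = Qut, the responses to a single shock s are governed by one column of Q alone. As a rough illustration (my own sketch in Python, not code from the thesis; the system size and seed are arbitrary assumptions), a candidate orthonormal Q can be drawn and the relevant column extracted:

```python
import numpy as np

# Illustrative sketch (not the thesis code): draw a random orthonormal
# q x q matrix Q via the QR decomposition of a Gaussian matrix and
# extract the single column q_s needed for single-equation identification.
rng = np.random.default_rng(0)
q = 4                                    # illustrative system size
G = rng.standard_normal((q, q))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))              # sign fix: makes the draw uniform (Haar)
q_s = Q[:, 0]                            # candidate column for the shock of interest
```

Since Q is orthonormal, the extracted column automatically has unit length.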
Uhlig's Sign Restriction

As already introduced, the sign restriction approach in the version advanced by Uhlig [2005] is the most reasonable approach in my view. Conventional wisdom says that after a monetary policy contraction the federal funds rate should increase, prices should fall, and finally real output should fall. When other identification schemes fail to reproduce this wisdom, researchers tend to call the empirical observation a "puzzle". Some researchers even try to build a model that produces such puzzles, as has been done by CEE [2005]. This seems unreasonable to me, because here one has to be very certain about the chosen identification scheme and neglect any possible estimation mistakes not accounted for by it. Sims advises avoiding unreasonable identification schemes. The approach by Uhlig seems reasonable,
especially regarding its application to FAVARs and DFMs, because these frameworks incorporate far more relevant information than the VAR methodology, so one can set more restrictions, in the sense that, for instance, not only the CPI but all the prices considered should fall after a monetary policy contraction. Of course this becomes computationally far more demanding than the other identification schemes, and also than the VAR case, because we now have far more information and correspondingly more "reasonable" restrictions to set. This method naturally yields very few relevant candidate impulse responses, since the set of acceptable impulse responses shrinks as the number of restrictions grows. It can be, and in practice is, a difficult task to find an impulse vector that satisfies the "stricter" restrictions. This should not be considered a disadvantage; it simply reflects that the economy is multi-causal, with many things happening simultaneously and interacting dynamically. From this one can deduce that disentangling the effects of one single shock is very difficult; in particular, a response has to satisfy a lot of "economic conventional wisdom" in order to be identified as the mere effect of a single cause out of many. I would not go so far as to state that with this method the shock is perfectly identified, but this approach seems to me one of the more reasonable ones available, especially with respect to the large set of information used, and it furthermore provides at least the possibility of recovering very precise responses to a shock induced by the monetary authority.
The first task is to identify the structural shocks wt underlying the FAVAR innovations vt. The concept is the same as in the VAR case, except that the innovations at hand are factor innovations rather than VAR variable innovations. The relation between the reduced-form dynamic factor innovation vt and the structural factor innovation wt is given by

vt = Awt

The matrix A is an invertible matrix of order [(K + M) × (K + M)]. We are only interested in identifying one single shock; therefore it is sufficient to identify the single column as, where s refers to the respective shock. This single-equation identification is the more common approach, pursued by most of the recent literature. The alternative
would be to identify not just one column but the whole matrix A, which means identifying the full system. This approach goes back to Blanchard and Watson [1986].
The structural FAVAR is obtained by premultiplying the reduced form with the rotation matrix A⁻¹, which results in:

A⁻¹Ft = (A⁻¹ΦA)(A⁻¹Ft−1) + A⁻¹vt

F*t = Φ*F*t−1 + wt

where F*t = A⁻¹Ft, Φ* = A⁻¹ΦA and wt = A⁻¹vt.
The crucial step is to represent the one-step-ahead prediction error vt as a linear combination of orthogonalized structural shocks 34. The fundamental innovations are mutually independent and normalized to have variance 1, hence E[wtw′t] = I. The restriction on A emerges from the covariance structure of the reduced-form factor innovations, which results in:

Σv = E[vtv′t] = AE[wtw′t]A′ = AA′
The reader can find an in-depth derivation and explanation of the sign restriction in Uhlig [2005]. We therefore state the technical derivation only briefly and rather describe its implementation for FAVARs. The steps are the following: First, perform a Cholesky decomposition of the variance-covariance matrix of the factor innovations, ÃÃ′ = Σv, where Ã is the lower triangular Cholesky factor. Then a is an impulse vector if there exists a [K + M]-dimensional vector α of unit length such that

a = Ãα
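These first two steps can be sketched as follows (a minimal illustration with synthetic numbers; the covariance matrix, the dimension K + M = 3 and the seed are my assumptions, not thesis estimates):

```python
import numpy as np

# Minimal sketch: lower-triangular Cholesky factor A_tilde of the factor
# innovation covariance, then a candidate impulse vector a = A_tilde @ alpha
# with alpha drawn uniformly on the unit sphere.
rng = np.random.default_rng(1)
k = 3                                       # stands in for K + M
V = rng.standard_normal((200, k))           # synthetic reduced-form innovations
Sigma_v = np.cov(V, rowvar=False)           # stand-in for the estimated covariance
A_tilde = np.linalg.cholesky(Sigma_v)       # A_tilde @ A_tilde.T = Sigma_v

alpha = rng.standard_normal(k)
alpha /= np.linalg.norm(alpha)              # normalize to unit length
a = A_tilde @ alpha                         # candidate impulse vector
```

Normalizing a Gaussian draw gives a vector uniformly distributed on the unit sphere, which is the standard way to explore the space of candidate impulse vectors.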
Given the impulse vector a we can calculate the impulse responses of the factors to an innovation in, for example, the federal funds rate. We collect the responses of the factors in order to estimate the impulse responses of the variables of interest. For exemplification,
34 See Uhlig [2005]
let us consider the case of two factors, Ft = (F1t, F2t)′, and derive the impulse responses of a price variable. The factor responses evolve over the horizon s according to:

F0 = a; F1 = ΦF0; . . . ; Fs = ΦFs−1
Here Φ is the lag polynomial of the factor equation. As a final step we calculate the impulse responses of the informational variables given the factor responses. For a price variable this looks the following way:

P0 = ΛP1F1,0 + ΛP2F2,0
P1 = ΛP1F1,1 + ΛP2F2,1
. . .
Ps = ΛP1F1,s + ΛP2F2,s

where ΛPi is the loading of the price variable on factor i and Fi,s is the response of factor i at horizon s.
Our final task is to find an α of unit length such that the resulting impulse vector a = Ãα satisfies the sign restrictions explained above for the previously specified horizon. Only those impulse responses that satisfy the restrictions for the given horizon are stored; the others are discarded. For a detailed description of the methodology please refer to Uhlig [2005].
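The accept/reject scheme just described can be sketched end-to-end as follows. All matrices here are synthetic stand-ins chosen purely for illustration: Phi, Lambda, the Cholesky factor and the set of restricted rows are my assumptions, not the estimated model from the thesis.

```python
import numpy as np

# Sketch of the sign-restriction loop: draw alpha on the unit sphere,
# form the impulse vector a = A_tilde @ alpha, iterate the factor
# responses F_0 = a, F_s = Phi F_{s-1}, map them to variables with the
# loadings Lambda, and keep the draw only if the restricted rows (here:
# hypothetical "price" variables, required non-positive) satisfy the
# sign restrictions over the whole horizon.
rng = np.random.default_rng(2)
k, n_vars, horizon, n_draws = 3, 5, 6, 500
Phi = 0.5 * np.eye(k)                        # stand-in factor VAR coefficients
Lambda = rng.standard_normal((n_vars, k))    # stand-in factor loadings
Lambda[0] = -np.eye(k)[0]                    # hypothetical restricted "CPI" row
Lambda[1] = -np.eye(k)[1]                    # hypothetical restricted "PCOM" row
A_tilde = np.eye(k)                          # stand-in Cholesky factor
restricted = [0, 1]                          # rows carrying sign restrictions

accepted = []
for _ in range(n_draws):
    alpha = rng.standard_normal(k)
    alpha /= np.linalg.norm(alpha)
    F = A_tilde @ alpha                      # F_0 = a
    irf = np.empty((horizon, n_vars))
    for s in range(horizon):
        irf[s] = Lambda @ F                  # P_s = Lambda F_s
        F = Phi @ F                          # F_{s+1} = Phi F_s
    if np.all(irf[:, restricted] <= 0.0):    # sign restrictions satisfied?
        accepted.append(irf)                 # store; otherwise discard
```

As the text notes, adding restricted rows shrinks the accepted set rapidly, which is exactly the computational burden discussed above.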
7 Empirical Results

The dataset is an updated version of the one in Stock and Watson [1998, 1999]; it consists of a balanced panel of 120 variables that are tabulated in Appendix A. The federal funds rate is interpreted as the monetary policy instrument and considered the only variable that has pervasive effects on the economy. Alternative specifications for Yt are provided by BBE. They further state that the federal funds rate should not suffer from measurement error, which is straightforward; it can therefore be considered as having pervasive effects and no idiosyncratic component. The monetary policy shock is standardized to correspond to a 25-basis-point innovation in the federal funds rate, and the responses presented are reported in standard deviation units.
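The standardization amounts to a simple rescaling, sketched below with made-up impact responses (the position of the federal funds rate in the vector is an assumption for illustration):

```python
import numpy as np

# Sketch: rescale an accepted impact response vector so that the
# impact response of the federal funds rate equals 25 basis points.
irf0 = np.array([0.40, -0.10, 0.05])   # stand-in impact responses; entry 0 = FFR
scale = 0.25 / irf0[0]                 # target: 25 bp impact on the FFR
irf0_std = irf0 * scale                # every variable is rescaled by the same factor
```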
Before presenting the main results, i.e. the plots of the impulse responses, I provide plots that show whether the chains of the single factors in the Gibbs iteration converge. There are several convergence criteria that can be applied to check the convergence of the algorithm for different starting values. To help ensure convergence of the Gibbs algorithm I also imposed the proper priors used by BBE; they are reported in section (5.3) on inference. Convergence diagnostics is an important task, but a formal implementation would have gone beyond the scope of this thesis. I therefore chose a less formal method: after discarding sufficiently many initial draws of the Gibbs sampler to avoid the influence of the initial conditions, the median of the first half of the draws of a single factor is plotted against the median of the second half. If the second half does not deviate too much from the first half, one might conclude, as a first check, that this single chain has converged. It is straightforward that the convergence of the Gibbs chains should be checked for different starting values in order to assure the convergence of the respective Gibbs iteration under the respective specifications, such as the number of factors, the number of draws, the number of initial draws to be discarded and so forth. Convergence has been tested for different starting values. Figures 1-3 provide the results for the single chains and the single factors.
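The informal split-half check can be sketched as follows (synthetic draws replace the actual Gibbs output; the sample sizes are assumptions):

```python
import numpy as np

# Sketch of the split-half convergence check: discard burn-in draws,
# then compare the median path of one factor over the first half of the
# retained draws with the median over the second half.
rng = np.random.default_rng(3)
n_draws, burn_in, T = 2000, 500, 120
draws = rng.standard_normal((n_draws, T))    # stand-in Gibbs draws of one factor path

kept = draws[burn_in:]
half = len(kept) // 2
median_first = np.median(kept[:half], axis=0)
median_second = np.median(kept[half:], axis=0)
max_gap = np.max(np.abs(median_first - median_second))
# A small max_gap is a first, informal indication that the chain converged.
```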
Figure 1: Convergence check for the two-factor specification. Each of the factors appears to have converged, as the second half of the median of the generated factors does not deviate from the first half. This can be taken as an indication that the empirical distribution approximates the marginal distribution of the factors sufficiently accurately.
Figure 2: Convergence check for the five-factor specification. Each of the factors appears to have converged, as the second half of the median of the generated factors does not deviate from the first half. This can be taken as an indication that the empirical distribution approximates the marginal distribution of the factors sufficiently accurately.
Figure 3: Convergence check for the seven-factor specification. Each of the factors appears to have converged, as the second half of the median of the generated factors does not deviate from the first half. This can be taken as an indication that the empirical distribution approximates the marginal distribution of the factors sufficiently accurately.
On the whole I present results for specifications with two, five and seven factors, in order to check, on the one hand, the impact of the chosen number of factors on the reaction of the economy, visualized with the impulse response analysis. On the other hand I comment on the discussion of how many factors are relevant and sufficient to capture the dynamics of the US economy. Giannone, Reichlin, and Sala [2004] find evidence that the comovements and the dynamics of the US economy can be described by two factors, whereas Stock and Watson [2005] provide evidence that more, that is to
say seven, factors are required. My results are not supposed to give a conclusive answer to this discussion; rather they are supposed to give empirical evidence, with respect to the impulse response analysis, in favor of the specification that provides the more reasonable results. The lag specification used is 12, because the frequency of the data is monthly; however, BBE report that even seven lags lead to similar results.
Increasing the number of factors had an impact insofar as the responses tended to be smoother and of lower amplitude. The qualitative results are fairly similar. Altogether the specification with seven factors produced the most reasonable results, especially with respect to the prices. With this specification the reaction of the prices, in particular the commodity price index, was the most reasonable. No price puzzle is prevalent. The restriction on the commodity price index appeared to be the one with the smallest number of draws accepted under the sign restrictions.

The most interesting result, however, seems to me the importance of the identification scheme, as this approach combined with Gibbs sampling provides more accurate results than the standard one applied by BBE. The critique by BBE that the parametric approach with a joint likelihood-based estimation might impose too much structure, and therefore generates inferior results compared to the nonparametric approach, cannot be confirmed. When the sign restriction approach is applied in order to uncover the dynamic effects of a shock to monetary policy, the results seem more reasonable and consistent with the conventional wisdom. In particular it is interesting that it delivers fairly tight error bands/confidence bands. The responses with the highest uncertainty are those of the output variables, which is consistent with Uhlig [2005] and others who state that the effect on output is ambiguous. It is fairly small.
Figure 4: Impulse responses of 20 selected variables for seven factors under block criterion 2. Block criterion 1 sets the sign restrictions on the consumer price index, nonborrowed reserves, M3 and the federal funds rate. Block criterion 2 sets the restrictions on the consumer price index, nonborrowed reserves, M3, the monetary base, and the federal funds rate.
In order to convey a better impression of the uncertainty of the reported impulse responses, I provide mesh plots in figures 5-7. They visualize how certain one can be about the median impulse responses. For that I collected all the accepted draws and sorted them in ascending order. One can then see the uncertainty directly, not only for one specified percentile of the error bands. The wider and flatter the area in the center of the mesh, the more certainty can be associated with the median response, as most of the impulse responses react alike with roughly the same amplitude.
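Building such a mesh can be sketched as follows (synthetic accepted draws; the counts and the focus on a single variable are my assumptions):

```python
import numpy as np

# Sketch of the mesh construction: stack the accepted impulse responses
# of one variable, sort them in ascending order at every horizon, and
# read off percentile bands; a wide flat central region signals that
# most draws react alike.
rng = np.random.default_rng(4)
n_accepted, horizon = 300, 48
irfs = rng.standard_normal((n_accepted, horizon))  # stand-in accepted draws

mesh = np.sort(irfs, axis=0)                 # ascending order at each horizon
lower = mesh[int(0.16 * n_accepted)]         # 16% lower band
upper = mesh[int(0.84 * n_accepted)]         # 84% upper band
```

The 16%/84% rows of the sorted mesh correspond to the error bands reported in footnote 35.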
As the sign restriction approach delivers results with comparably tighter error bands 35 than the BBE identification, one can conclude once again that the sign restriction is crucial for the structural analysis. My results support the approach by Stock and Watson [2005] insofar as the results with seven factors are more reasonable, with a relatively higher degree of certainty, than the Gibbs sampling results by BBE [2005]. The impulse responses for selected variables are reported in figure (4). Industrial production declines after the shock for 1.5 years and then converges to the zero line but stays below it in the version with seven factors. However, the impulse responses of output remain ambiguous with regard to the other output variables considered: in the plots in the appendix, where impulse responses for further output variables are provided, some of the variables react positively whereas others react negatively.
Unfortunately an efficient implementation of the Matlab code proved more challenging than expected and was therefore accomplished fairly late, so that the most tightly restricted block criterion, in which I impose the sign restrictions on all the relevant prices, all the monetary aggregates and the short term interest rates, could not be provided. This can be done in future work by any user of the attached Matlab code. The enticing promise is that with some patience one can disentangle quite exactly the dynamic effects of a shock to monetary policy that are merely due to it. This seems to me an important task and an advantage of the combination of Bayesian FAVARs with an agnostic identification using the sign restriction. The fact that one can narrow down the space of the reactions might also provide a precise answer with respect to the quantitative measure.
35 The error bands report the 16% lower and the 84% upper bounds of the responses.
Figure 5: Mesh of impulse responses of selected variables for seven factors, sorted in ascending order. Please note that the number of accepted draws is ten times higher than shown on the mesh. The picture looks the same with all draws included but takes much more time and memory for Matlab to load.
Figure 6: Mesh of impulse responses of selected variables for five factors, sorted in ascending order. From the meshes one can infer how uncertain the responses are, as they are shown for each of the accepted draws. The wider the area in the center part, the less volatile the impulse responses are over the draws. Such pictures offer the possibility to get an impression of how certain the responses are with respect to the error bands.
Figure 7: Mesh of impulse responses of selected variables for two factors, sorted in ascending order. It is also worth having a look at the monetary aggregates, which show a negative reaction over the whole 48 periods. Altogether the results improved when more factors were considered, insofar as the responses have more supportive error bands. Furthermore, the results improved when I increased the number of variables on which sign restrictions were imposed, but one should note that it is computationally cumbersome to obtain results with more restrictions: the more variables carry sign restrictions, the fewer draws satisfying them can be accepted. I conclude that this mirrors the difficulty of identifying the shock in a quantitatively precise manner. Therefore the critique by BBE [2005] of the Gibbs sampling approach does not seem to be valid. The structure imposed appears not to be the limitation when combined with the alternative identification scheme.
Regarding the impulse responses, one can see that most of them deliver tighter error bands than the results by BBE, which favors the alternative identification. In particular, regarding the commodity price index and the capacity utilization rate, the results from Gibbs sampling combined with the agnostic identification seem more reasonable not only compared to the Gibbs sampling approach by BBE but also compared to their results estimated with the two-step PCA. There, e.g., the capacity utilization rate and the commodity price index increase directly after a shock. One of the striking results is the certainty with which output (IP), the CPI and in particular the monetary aggregates react. Altogether one can conclude that output still has an ambiguous effect, in the sense that not all output variables considered in the dataset react identically and in the same direction. There are some that increase, but only very slightly. Therefore one can conclude that the results are not necessarily inconsistent with monetary neutrality.
All the responses have been calculated for different horizons of the imposed sign restrictions. The results confirm those of Uhlig [2005] in so far as the longer the restriction horizon, the stronger the reaction. The results reported have a restriction horizon of six months. As the results confirm Uhlig and offer no new insights, I decided, due to space limitations, not to provide them in this thesis.
More impulse responses are provided in Appendix B for different block criteria and numbers of factors specified. These results report impulse responses for 42 variables out of the panel of 120. They are labeled with the respective mnemonics reported in the data table in Appendix B.
8 Discussion
This section provides a brief discussion of the results presented and a critical assessment. Furthermore, some suggestions for future research are given. The impulse responses
presented indicate that, for the researcher interested in measuring the effects of a shock to monetary policy, it is crucial to apply identifying restrictions that are consistent with the conventional wisdom, such as the agnostic identification using sign restrictions. When comparing the results from the Gibbs sampling approach with those provided by BBE, it is quite evident that the former seem more reasonable, especially with respect to the quantitative measure and the certainty with which the results are reported. They do not show the great uncertainty of the results generated with standard identification. For some variables the results are even more accurate than with the principal component approach; such variables are, e.g., the commodity price index and the capacity utilization rate. However, one should be cautious and still try runs of at least 10000 draws in order to be conclusively certain with respect to the accuracy of the results. Although I have tried out many versions and several runs producing very similar results, I think the results will still have to be confirmed with, say, 10000 draws to be completely sure. This was not feasible due to severe time constraints and the lack of an appropriate (fast) computer without memory problems. It is advisable to use a Unix-based system.
The most interesting suggestion to me seems to be to be even more strict with the restrictions, in the sense that one should set restrictions on many prices, monetary aggregates and some short-term interest rates. This could be accomplished only partly, as the acceptance rate of "reasonable" impulse responses decreases sharply; hence one should be patient while waiting for the results. Further extensions of the model could be to model time-varying factor loadings and stochastic volatility, e.g. in order to analyze the change of monetary policy in a "data-rich environment" over time.36 As a next step one could also start to identify further shocks and measure the respective effects, like the one to fiscal policy, in a FAVAR framework, as has been done by Mountford and Uhlig [2004] in the VAR framework.
36 See Cogley and Sargent [2003].
9 Summary and Concluding Remarks
In this thesis I combined the likelihood-based estimation of the FAVAR framework with the agnostic identification scheme to estimate the effects of a shock to monetary policy, imposing sign restrictions on the impulse responses of prices, nonborrowed reserves and the federal funds rate. Furthermore, some combinations of restrictions on more than one monetary aggregate and price were tried out. I stay agnostic with respect to the output variables. The results appear to be more reasonable and more accurate than those obtained with standard identification schemes as provided by BBE. The accuracy increases with additional restrictions; however, the number of responses accepted according to the agnostic identification decreases sharply.
I suggest being more strict with the restrictions, in so far as one imposes not only single variables but several variables, such as prices, to react according to the conventional wisdom. This should make it possible to narrow down the space of reasonable reactions to those that are solely due to a shock to monetary policy. Prices and monetary aggregates show reasonable responses after a monetary policy shock.
10 Matlab Implementation
This part explains the attached Matlab code. The code uses some routines written by Christopher Sims. I furthermore used some code written by Piotr Eliasz and Jean Boivin; the part on the Gibbs sampling and Kalman filtering in parts draws on the code written by Piotr Eliasz. Some code written by Bartosz Maćkowiak, provided for the course "Empirical Macroeconomics", has also been a great help. All the code written by others is in a separate folder on the attached CD-ROM.
BAYESIAN_FAVAR.m
The main script. After setting the global GLOG_MODE (see the description of the function GLOG), the functions DO_INPUT, DO_CALCULATION and DO_RESULTS are called. The output of each function is passed to the following function as its input parameter. DO_INPUT generates a structure called input (see the description of the data structures used on the attached CD-ROM), which contains information about user entries to choose a set of presets and specification data. This structure is passed to the DO_CALCULATION function as its input. In DO_CALCULATION the structure results is created, which contains the results of the calculation process. DO_RESULTS uses these two data structures to present the results to the user.
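The three-stage pipeline can be sketched as follows. This is a minimal Python stand-in (the thesis code is Matlab); the field values are made-up placeholders, only the input → calculation → results flow mirrors the description above:

```python
# Minimal sketch of the BAYESIAN_FAVAR pipeline: each stage's output is
# passed to the next stage as its input, as described in the text.

def do_input():
    # Collect presets and specification data into one structure.
    return {"version": {"burn_in": 50}, "specification": {"draws": 500}}

def do_calculation(inp):
    # Run the Gibbs sampler and impulse response analysis (stubbed here).
    return {"accepted_responses": [], "draws_used": inp["specification"]["draws"]}

def do_results(inp, results):
    # Present the results to the user.
    return f"{results['draws_used']} draws, burn-in {inp['version']['burn_in']}"

inp = do_input()
results = do_calculation(inp)
print(do_results(inp, results))  # 500 draws, burn-in 50
```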
DO_INPUT.m
This function returns the input data structure to the main function. To separate different sources and groups of input information, the DO_INPUT function is split into subfunctions, each of which returns one part of the input structure.
DO_INPUT_VERSION.m
This function allows the user to load a set of parameters to replicate the results of the thesis or to enter his own settings. All data is stored in the input.version structure.
DO_INPUT_DATA.m
Writes the data source into input.data.
DO_INPUT_SPECIFICATIONS.m
The aim of this function is to set all specification parameters for the calculation process, including the Gibbs sampler and the impulse response analysis. All parameters are stored in the input.specification structure.
DO_INPUT_GENERATEXDATA.m
This function returns the matrix input.xdata, which is input.data excluding the data column of the perfectly observable variable that has pervasive effects on the economy.
DO_INPUT_STARTINGVALUES.m
Returns starting values for all variables included in the input.startingvalues structure. These are F, lam_f, lam_y, R, Phi_lags and Q.
DO_CALCULATION.m
In this function all calculation processes are started and their outputs are stored. These processes are initialized by calling the functions DO_CALCULATION_SETMODEL, DO_CALCULATION_CREATESTRUCTURE, DO_CALCULATION_GIBBS_SAMPLING and DO_CALCULATION_IRA, where the last two are the main calculation processes. To save memory, the data structure calculation is declared as global in all calculation functions. In this way all functions can refer to it as an input and output parameter without copying this big structure.
DO_CALCULATION_SETMODEL.m
Initializes calculation.stateSpaceStructure.
DO_CALCULATION_CREATESTRUCTURE.m
Initializes
calculation.Phi_bar_collect
calculation.QQ_bar_collect
calculation.F_bar_collect
calculation.Lam_collect.
DO_CALCULATION_GIBBS_SAMPLING.m
This function performs the Gibbs sampling by calling the functions
DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER,
DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER,
DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS and
DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC
for each Gibbs iteration. After each iteration the results are stored in the global calculation data structure, discarding the first input.version.burn_in draws.
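The loop structure described above can be sketched in Python as follows. The four `*_step` functions are made-up stand-ins for the filter, smoother and parameter-sampling routines and just return dummy draws; only the iteration-and-burn-in logic reflects the text:

```python
# Sketch of a Gibbs loop with burn-in: each iteration runs the four
# sampling steps in turn; only post-burn-in draws are stored.
import random

def filter_step(state):    return state            # stand-in Kalman filter
def smoother_step(state):  return random.random()  # stand-in factor draw
def obs_step(factors):     return random.random()  # stand-in obs.-equation params
def fac_step(factors):     return random.random()  # stand-in state-equation params

def gibbs(n_draws, burn_in):
    kept = []
    state = 0.0
    for it in range(n_draws):
        state = filter_step(state)
        factors = smoother_step(state)
        obs_params = obs_step(factors)
        fac_params = fac_step(factors)
        if it >= burn_in:                 # discard the first burn_in draws
            kept.append((factors, obs_params, fac_params))
    return kept

draws = gibbs(n_draws=100, burn_in=20)
print(len(draws))  # 80
```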
DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER.m
Bayesian Kalman filter.
DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER.m
Bayesian Kalman smoother.
DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS.m
Inference on the observation equation.
DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC.m
Inference on the state equation.
DO_CALCULATION_IRA.m
This function starts either the DO_CALCULATION_IRA_UHLIG or the DO_CALCULATION_IRA_BBE function, depending on the value of input.version.ira_mode, which contains information about the selected impulse response mode to run.
DO_CALCULATION_IRA_UHLIG.m
Impulse response analysis with the Uhlig (2005) sign restrictions. Returns finalresponse, which is a vector with the length of the total number of block criteria. Responses are checked to satisfy each block criterion, which are set in input.specification.IRA.BC. Accepted responses are added to finalresponses(bc).response, where bc indexes the block criterion satisfied by the responses.
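The accept/reject step can be sketched as follows. This is a Python stand-in (the thesis code is Matlab); the variable indices and the restriction set are made-up placeholders for the restricted prices, reserves and the interest rate:

```python
import numpy as np

# Sketch of a sign-restriction check: a candidate draw's impulse responses
# (rows = variables, columns = periods) are accepted only if every restricted
# variable keeps the required sign over the restriction horizon.
def satisfies_signs(response, restrictions, horizon):
    return all(bool(np.all(sign * response[var, :horizon] >= 0))
               for var, sign in restrictions.items())

restrictions = {0: -1, 1: -1, 2: +1}   # e.g. prices down, reserves down, rate up
candidate = np.vstack([-np.ones(12), -np.ones(12), np.ones(12)])
print(satisfies_signs(candidate, restrictions, horizon=6))  # True
```

A candidate that violates any one restriction at any period within the horizon is discarded, which is why the acceptance rate drops sharply as more variables are restricted.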
To keep the memory usage of finalResponses(bc).response within efficient limits, I first initialize it with an initial size of 3% of [draws × α].
If the candidate satisfies the block of sign restrictions in the current block criterion, it is added to finalResponses(bc).response. If the position the candidate is added to is the last element of finalResponses(bc).response, the size of the .response matrix is increased by fr_add_length, which has a default value of 1% of [draws × α].
Finally, the size of finalResponses(bc).response is reduced to free the unused but occupied memory. The final size of finalResponses(bc).response then represents the number of accepted candidates.
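The preallocate/grow/trim strategy described above can be sketched in Python (the 3% and 1% figures are the thesis defaults; the candidate stream and the capacity bound are made up):

```python
# Sketch of the memory strategy: preallocate 3% of the upper bound,
# grow by 1% whenever the last slot is reached, trim unused space at the end.
import numpy as np

def collect(candidates, capacity_total, init_frac=0.03, add_frac=0.01):
    cap = max(1, int(init_frac * capacity_total))
    store = np.empty(cap)
    n = 0
    for c in candidates:
        if n == len(store):                       # last slot reached: grow
            extra = max(1, int(add_frac * capacity_total))
            store = np.concatenate([store, np.empty(extra)])
        store[n] = c
        n += 1
    return store[:n]                              # trim: final size = #accepted

out = collect(range(10), capacity_total=100)
print(out.size)  # 10
```

The trimmed length doubles as the count of accepted candidates, exactly as in the description above.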
DO_CALCULATION_IRA_UHLIG_CHECK_SIGNRESTRICTION.m
This function checks whether a given response satisfies the block criterion. It returns 1 if the response is accepted and 0 otherwise.
GLOG.m
This function is used to log an output string depending on its log level glog_type. The global variable GLOG_MODE specifies the global minimum level for outputs and is set directly in BAYESIAN_FAVAR. If the glog_type of an output string is less than GLOG_MODE, the output is ignored. Otherwise GLOG displays the output string.
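The level filter can be sketched in a few lines of Python (names mirror the Matlab globals; the numeric levels are made up):

```python
# Sketch of the GLOG level filter: messages below the global threshold
# are silently dropped, everything else is passed through.
GLOG_MODE = 2  # minimum level that is actually shown

def glog(msg, glog_type):
    if glog_type < GLOG_MODE:
        return None          # suppressed
    return msg               # would be displayed

glog("debug detail", 1)            # suppressed
print(glog("iteration done", 3))   # iteration done
```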
References
[1] Bai, J. and S. Ng (2002), "Determining the number of factors in approximate factor models"; Econometrica 70, pp. 191-221
[2] Bauwens, Luc, Michel Lubrano and Jean-Francois Richard (1999), "Bayesian Inference in Dynamic Econometric Models"; Oxford University Press
[3] Bernanke, Ben and Jean Boivin (2003), "Monetary Policy in a Data-Rich Environment"; Journal of Monetary Economics 50:3, pp. 525-546
[4] Bernanke, Ben, Jean Boivin and P. Eliasz (2005), "Measuring the effects of monetary policy: a factor-augmented vector autoregressive (FAVAR) approach"; Quarterly Journal of Economics 120, pp. 387-422
[5] Brillinger, D. R. (1964), "A frequency approach to the techniques of principal components, factor analysis and canonical variates in the case of stationary time series"; invited paper, Royal Statistical Society Conference, Cardiff, Wales
[6] Bernanke, Ben and Ilian Mihov (1998a), "Measuring Monetary Policy"; Quarterly Journal of Economics 113, pp. 869-902
[7] Bernanke, Ben and Ilian Mihov (1998b), "The Liquidity Effect and Long-run Neutrality: Identification by Inequality Constraints"; Carnegie-Rochester Conference Series on Public Policy 49, pp. 149-94
[8] Blanchard, Olivier J. and Mark Watson (1986), "Are All Business Cycles Alike?"; in R. J. Gordon, ed., The American Business Cycle, Chicago: University of Chicago Press
[9] Blanchard, Olivier J. and Danny Quah (1989), "The Dynamic Effects of Aggregate Demand and Supply Disturbances"; American Economic Review 79, pp. 655-73
[10] Canova, Fabio and Gianni De Nicolo (2002), "Monetary Disturbances Matter for Business Cycle Fluctuations in the G-7"; Journal of Monetary Economics 49, pp. 1131-59
[11] Carter, C.K. and R. Kohn (1994), "On Gibbs Sampling for State Space Models"; Biometrika 81, pp. 541-53
[12] Chamberlain, G. and M. Rothschild (1983), "Arbitrage, factor structure, and mean-variance analysis of large asset markets"; Econometrica 51, pp. 1281-1304
[13] Christiano, Lawrence (1991), "Modeling the Liquidity Effect of a Monetary Shock"; Federal Reserve Bank of Minneapolis Quarterly Review 15, pp. 3-34
[14] Christiano, Lawrence and Martin Eichenbaum (1992b), "Identification and the Liquidity Effect of a Monetary Policy Shock"; in A. Cukierman, Z. Hercowitz and L. Leiderman, eds., Political Economy, Growth and Business Cycles, Cambridge, MA: MIT Press
[15] Christiano, Lawrence, Martin Eichenbaum and Charles Evans (1999), "Monetary Policy Shocks: What Have We Learned and to What End?"; ch. 2 in J. B. Taylor and M. Woodford, eds., The Handbook of Macroeconomics, vol. 1a, pp. 65-148
[16] Christiano, Lawrence, Martin Eichenbaum and Charles Evans (2005), "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy"; Journal of Political Economy, vol. 113
[17] Cogley, Timothy and Thomas J. Sargent (2003), "Drifts and volatilities: Monetary policies and outcomes in the post WWII U.S."; Working Paper 2003-25, Federal Reserve Bank of Atlanta
[18] Cochrane, J. (1994), "Shocks"; Carnegie-Rochester Conference Series on Public Policy 41, pp. 295-364
[19] Eliasz, Piotr (2005), "Likelihood-Based Inference in Large Dynamic Factor Models Using Gibbs Sampling"; Princeton University, unpublished working paper
[20] Del Negro, Marco and Christopher Otrok (2003), "Time Varying European Business Cycle"; discussion paper, Federal Reserve Bank of Atlanta and University of Virginia
[21] Del Negro, Marco and Christopher Otrok (2004), "Dynamic Factor Model with Time Varying Parameters"; discussion paper, Federal Reserve Bank of Atlanta and University of Virginia
[22] Favero, C.A. and M. Marcellino (2005), "Large datasets, small models and monetary policy in Europe"; CLM Economia, pp. 249-269
[23] Favero, C.A., M. Marcellino and F. Neglia (2002), "Principal Components at Work: the empirical analysis of monetary policy with large datasets"; IGIER Working Paper No. 223 (Bocconi University), forthcoming, Journal of Applied Econometrics
[24] Forni, M., M. Hallin, M. Lippi and L. Reichlin (2000), "The Generalized Dynamic Factor Model: Identification and Estimation"; Review of Economics and Statistics 82, pp. 540-54
[25] Forni, M. and L. Reichlin (1998), "Dynamic Common Factors in Large Cross-Sections"; Empirical Economics 21, pp. 27-42
[26] Forni, M. and L. Reichlin (1998), "Let's Get Real: A Dynamic Factor Analytical Approach to Disaggregated Business Cycles"; Review of Economic Studies 65, pp. 453-74
[27] Gamerman, Dani (1997), "Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference"; Chapman and Hall, New York
[28] Gelman, A. and D.B. Rubin (1992), "A Single Sequence from the Gibbs Sampler Gives a False Sense of Security"; in J.M. Bernardo, J.O. Berger, A.P. Dawid and A.F.M. Smith, eds., Bayesian Statistics, Oxford: Oxford University Press
[29] Geman, S. and D. Geman (1984), "Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images"; IEEE Transactions on Pattern Analysis and Machine Intelligence 6, pp. 721-41
[30] Geweke, John and Kenneth J. Singleton (1981), "Maximum Likelihood 'Confirmatory' Factor Analysis of Economic Time Series"; International Economic Review 22, pp. 37-54
[31] Geweke, John and Guofu Zhou (1996), "Measuring the Pricing Error of the Arbitrage Pricing Theory"; The Review of Financial Studies 9, pp. 557-587
[32] Geweke, John (1997), "Using Simulation Methods for Bayesian Econometric Models: Inference, Development and Communication"; University of Minnesota
[33] Giannone, D., L. Reichlin and L. Sala (2002), "Tracking Greenspan: Systematic and Unsystematic Monetary Policy Revisited"
[34] Gordon, David and Eric Leeper (1994), "The Dynamic Impacts of Monetary Policy: An Exercise in Tentative Identification"; Journal of Political Economy 102, pp. 1228-47
[35] Hamilton, James D. (1994), "Time Series Analysis"; Princeton: Princeton University Press
[36] Judge, G.G., R.C. Hill, W.E. Griffiths, H. Lütkepohl and T.C. Lee (1988), "Introduction to the Theory and Practice of Econometrics"; New York: John Wiley & Sons
[37] Kadiyala, K. Rao and Sune Karlsson (1997), "Numerical Methods for Estimation and Inference in Bayesian VAR-Models"; Journal of Applied Econometrics 12, pp. 99-132
[38] Kose, Ayhan, Christopher Otrok and Charles H. Whiteman (2003a), "International Business Cycles: World, Region and Country-Specific Factors"; American Economic Review, forthcoming
[39] Kose, Ayhan, Christopher Otrok and Charles H. Whiteman (2003b), "Understanding the Evolution of World Business Cycles"; unpublished paper
[41] Krause, Andreas (1994), "Computerintensive Statistische Methoden: Gibbs Sampling in Regressionsmodellen"; Gustav Fischer Verlag
[42] Leeper, Eric, Christopher Sims and Tao Zha (1996), "What does Monetary Policy Do?"; Brookings Papers on Economic Activity 2, pp. 1-63
[43] Lütkepohl, Helmut (1993), "Introduction to Multiple Time Series Analysis"; Springer Verlag
[44] Mountford, Andrew and Harald Uhlig (2005), "What are the effects of fiscal policy shocks?"; draft, Humboldt University
[45] Maćkowiak, Bartosz (2004), "Notes on Gibbs sampling and dynamic factor models"; Humboldt University Berlin
[46] Quah, D. and T. J. Sargent (1993), "A Dynamic Index Model for Large Cross Sections"; in J. H. Stock and M. W. Watson, eds., Business Cycles, Indicators, and Forecasting, ch. 7 (University of Chicago Press for the NBER, Chicago)
[47] Sims, Christopher (1980), "Macroeconomics and Reality"; Econometrica 48, pp. 1-48
[48] Sims, Christopher (1986), "Are Forecasting Models Usable for Policy Analysis?"; Federal Reserve Bank of Minneapolis Quarterly Review, Winter 1986, pp. 2-16
[49] Sims, Christopher (1992), "Interpreting the Macroeconomic Time Series Facts: The Effects of Monetary Policy"; European Economic Review 36, pp. 975-1011
[50] Sims, Christopher and Harald Uhlig (1991), "Understanding unit rooters: a helicopter tour"; Econometrica 59, pp. 1591-1600
[51] Sims, Christopher and Tao Zha (1998), "Bayesian Methods for Dynamic Multivariate Models"; International Economic Review 39(4), pp. 649-68
[52] Sims, Christopher and Tao Zha (1999), "Error Bands for Impulse Responses"; Econometrica 67(5), pp. 1113-55
[53] Stock, J. H. and M. W. Watson (1989), "New Indexes of Coincident and Leading Economic Indicators"; NBER Macroeconomics Annual, pp. 351-393
[54] Stock, J. H. and M. W. Watson (1991), "A probability model of the coincident economic indicators"; in G. Moore and K. Lahiri, eds., The Leading Economic Indicators: New Approaches and Forecasting Records (Cambridge University Press), pp. 63-90
[55] Uhlig, Harald (1994), "What macroeconomists should know about unit roots: a Bayesian perspective"; Econometric Theory 10, pp. 645-71
[56] Uhlig, Harald (1998), "The robustness of identified VAR conclusions about money. A comment"; Carnegie-Rochester Conference Series on Public Policy 49, pp. 245-63
[57] Uhlig, Harald (2005), "What are the effects of a shock to monetary policy? Results from an agnostic identification procedure"; Journal of Monetary Economics 52, pp. 381-419
[58] Gilks, W.R., S. Richardson and D.J. Spiegelhalter (1996), "Markov Chain Monte Carlo in Practice"; Chapman and Hall, London
Appendix A: Data Description

The data set is the one used by Bernanke, Boivin and Eliasz [2005]. The format follows Stock and Watson's papers: series number; series mnemonic; data span; transformation code; and the series description as it appears in the database. The transformation codes are: 1 - no transformation; 2 - first difference; 4 - logarithm; 5 - first difference of logarithm. An asterisk, '*', next to the mnemonic denotes a variable assumed to be "slow-moving" in the estimation.
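As a concrete illustration of these codes, the following helper (my own sketch in Python, not part of the thesis code, which is written in Matlab) applies a transformation code to a series:

```python
import numpy as np

# Hypothetical helper illustrating the transformation codes above;
# the thesis itself performs these transformations in Matlab.
def transform(series, code):
    """Apply a Stock-Watson transformation code to a 1-D series."""
    x = np.asarray(series, dtype=float)
    if code == 1:                      # no transformation
        return x
    if code == 2:                      # first difference
        return np.diff(x)
    if code == 4:                      # logarithm
        return np.log(x)
    if code == 5:                      # first difference of logarithm
        return np.diff(np.log(x))
    raise ValueError(f"unknown transformation code: {code}")
```

Series with code 5 (e.g. the industrial production indexes) thus enter the panel as month-on-month log growth rates.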
Real output and income
1. IPP* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: PRODUCTS, TOTAL (1992=100,SA)
2. IPF* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: FINAL PRODUCTS (1992=100,SA)
3. IPC* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: CONSUMER GOODS (1992=100,SA)
4. IPCD* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: DURABLE CONS. GOODS (1992=100,SA)
5. IPCN* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: NONDURABLE CONS. GOODS (1992=100,SA)
6. IPE* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: BUSINESS EQUIPMENT (1992=100,SA)
7. IPI* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: INTERMEDIATE PRODUCTS (1992=100,SA)
8. IPM* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: MATERIALS (1992=100,SA)
9. IPMD* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: DURABLE GOODS MATERIALS (1992=100,SA)
10. IPMND* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: NONDUR. GOODS MATERIALS (1992=100,SA)
11. IPMFG* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: MANUFACTURING (1992=100,SA)
12. IPD* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: DURABLE MANUFACTURING (1992=100,SA)
13. IPN* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: NONDUR. MANUFACTURING (1992=100,SA)
14. IPMIN* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: MINING (1992=100,SA)
15. IPUT* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: UTILITIES (1992=100,SA)
16. IP* 1959:01-2001:08 5 INDUSTRIAL PRODUCTION: TOTAL INDEX (1992=100,SA)
17. IPXMCA* 1959:01-2001:08 1 CAPACITY UTIL RATE: MANUFAC.,TOTAL(
18. PMI* 1959:01-2001:08 1 PURCHASING MANAGERS' INDEX (SA)
19. PMP* 1959:01-2001:08 1 NAPM PRODUCTION INDEX (PERCENT)
20. GMPYQ* 1959:01-2001:08 5 PERSONAL INCOME (CHAINED) (SERIES #52) (BIL 92$,SAAR)
21. GMYXPQ* 1959:01-2001:08 5 PERSONAL INC. LESS TRANS. PAYMENTS (CHAINED) (#51) (BIL 92$,SAAR)
Employment and hours
22. LHEL* 1959:01-2001:08 5 INDEX OF HELP-WANTED ADVERTISING IN NEWSPAPERS (1967=100;SA)
23. LHELX* 1959:01-2001:08 4 EMPLOYMENT: RATIO; HELP-WANTED ADS:NO. UNEMPLOYED CLF
24. LHEM* 1959:01-2001:08 5 CIVILIAN LABOR FORCE: EMPLOYED, TOTAL (THOUS.,SA)
25. LHNAG* 1959:01-2001:08 5 CIVILIAN LABOR FORCE: EMPLOYED, NONAG.INDUSTRIES (THOUS.,SA)
26. LHUR* 1959:01-2001:08 1 UNEMPLOYMENT RATE: ALL WORKERS, 16 YEARS & OVER (
27. LHU680* 1959:01-2001:08 1 UNEMPLOY.BY DURATION: AVERAGE(MEAN)DURATION IN WEEKS (SA)
28. LHU5* 1959:01-2001:08 1 UNEMPLOY.BY DURATION: PERS UNEMPL.LESS THAN 5 WKS (THOUS.,SA)
29. LHU14* 1959:01-2001:08 1 UNEMPLOY.BY DURATION: PERS UNEMPL.5 TO 14 WKS (THOUS.,SA)
30. LHU15* 1959:01-2001:08 1 UNEMPLOY.BY DURATION: PERS UNEMPL.15 WKS + (THOUS.,SA)
31. LHU26* 1959:01-2001:08 1 UNEMPLOY.BY DURATION: PERS UNEMPL.15 TO 26 WKS (THOUS.,SA)
32. LPNAG* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: TOTAL (THOUS.,SA)
33. LP* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: TOTAL, PRIVATE (THOUS,SA)
34. LPGD* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: GOODS-PRODUCING (THOUS.,SA)
35. LPMI* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: MINING (THOUS.,SA)
36. LPCC* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: CONTRACT CONSTRUC. (THOUS.,SA)
37. LPEM* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: MANUFACTURING (THOUS.,SA)
38. LPED* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: DURABLE GOODS (THOUS.,SA)
39. LPEN* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: NONDURABLE GOODS (THOUS.,SA)
40. LPSP* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: SERVICE-PRODUCING (THOUS.,SA)
41. LPTU* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: TRANS. & PUBLIC UTIL. (THOUS.,SA)
42. LPT* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: WHOLESALE & RETAIL (THOUS.,SA)
43. LPFR* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: FINANCE,INS.&REAL EST (THOUS.,SA)
44. LPS* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: SERVICES (THOUS.,SA)
45. LPGOV* 1959:01-2001:08 5 EMPLOYEES ON NONAG. PAYROLLS: GOVERNMENT (THOUS.,SA)
46. LPHRM* 1959:01-2001:08 1 AVG. WEEKLY HRS. OF PRODUCTION WKRS.: MANUFACTURING (SA)
47. LPMOSA* 1959:01-2001:08 1 AVG. WEEKLY HRS. OF PROD. WKRS.: MFG.,OVERTIME HRS. (SA)
48. PMEMP* 1959:01-2001:08 1 NAPM EMPLOYMENT INDEX (PERCENT)
Consumption
49. GMCQ* 1959:01-2001:08 5 PERSONAL CONSUMPTION EXPEND (CHAINED) - TOTAL (BIL 92$,SAAR)
50. GMCDQ* 1959:01-2001:08 5 PERSONAL CONSUMPTION EXPEND (CHAINED) - TOT. DUR. (BIL 96$,SAAR)
51. GMCNQ* 1959:01-2001:08 5 PERSONAL CONSUMPTION EXPEND (CHAINED) - NONDUR. (BIL 92$,SAAR)
52. GMCSQ* 1959:01-2001:08 5 PERSONAL CONSUMPTION EXPEND (CHAINED) - SERVICES (BIL 92$,SAAR)
53. GMCANQ* 1959:01-2001:08 5 PERSONAL CONS EXPEND (CHAINED) - NEW CARS (BIL 96$,SAAR)
Housing starts and sales
54. HSFR 1959:01-2001:08 4 HOUSING STARTS:NONFARM(1947-58);TOT.(1959-)(THOUS.,SA)
55. HSNE 1959:01-2001:08 4 HOUSING STARTS:NORTHEAST (THOUS.U.)S.A.
56. HSMW 1959:01-2001:08 4 HOUSING STARTS:MIDWEST (THOUS.U.)S.A.
57. HSSOU 1959:01-2001:08 4 HOUSING STARTS:SOUTH (THOUS.U.)S.A.
58. HSWST 1959:01-2001:08 4 HOUSING STARTS:WEST (THOUS.U.)S.A.
59. HSBR 1959:01-2001:08 4 HOUSING AUTHORIZED: TOTAL NEW PRIV HOUSING (THOUS.,SAAR)
60. HMOB 1959:01-2001:08 4 MOBILE HOMES: MANUFACTURERS' SHIPMENTS (THOUS.OF UNITS,SAAR)
Real inventories, orders and unfilled orders
61. PMNV 1959:01-2001:08 1 NAPM INVENTORIES INDEX (PERCENT)
62. PMNO 1959:01-2001:08 1 NAPM NEW ORDERS INDEX (PERCENT)
63. PMDEL 1959:01-2001:08 1 NAPM VENDOR DELIVERIES INDEX (PERCENT)
64. MOCMQ 1959:01-2001:08 5 NEW ORDERS (NET) - CONSUMER GOODS & MATERIALS, 1992 $ (BCI)
65. MSONDQ 1959:01-2001:08 5 NEW ORDERS, NONDEFENSE CAPITAL GOODS, IN 1992 DOLLARS (BCI)
Stock prices
66. FSNCOM 1959:01-2001:08 5 NYSE COMMON STOCK PRICE INDEX: COMPOSITE (12/31/65=50)
67. FSPCOM 1959:01-2001:08 5 S&P'S COMMON STOCK PRICE INDEX: COMPOSITE (1941-43=10)
68. FSPIN 1959:01-2001:08 5 S&P'S COMMON STOCK PRICE INDEX: INDUSTRIALS (1941-43=10)
69. FSPCAP 1959:01-2001:08 5 S&P'S COMMON STOCK PRICE INDEX: CAPITAL GOODS (1941-43=10)
70. FSPUT 1959:01-2001:08 5 S&P'S COMMON STOCK PRICE INDEX: UTILITIES (1941-43=10)
71. FSDXP 1959:01-2001:08 1 S&P'S COMPOSITE COMMON STOCK: DIVIDEND YIELD (
72. FSPXE 1959:01-2001:08 1 S&P'S COMPOSITE COMMON STOCK: PRICE-EARNINGS RATIO (
Exchange rates
73. EXRSW 1959:01-2001:08 5 FOREIGN EXCHANGE RATE: SWITZERLAND (SWISS FRANC PER U.S.$)
74. EXRJAN 1959:01-2001:08 5 FOREIGN EXCHANGE RATE: JAPAN (YEN PER U.S.$)
75. EXRUK 1959:01-2001:08 5 FOREIGN EXCHANGE RATE: UNITED KINGDOM (CENTS PER POUND)
76. EXRCAN 1959:01-2001:08 5 FOREIGN EXCHANGE RATE: CANADA (CANADIAN $ PER U.S.$)
Interest rates
77. FYFF 1959:01-2001:08 1 INTEREST RATE: FEDERAL FUNDS (EFFECTIVE) (
78. FYGM3 1959:01-2001:08 1 INTEREST RATE: U.S.TREASURY BILLS,SEC MKT,3-MO.(
79. FYGM6 1959:01-2001:08 1 INTEREST RATE: U.S.TREASURY BILLS,SEC MKT,6-MO.(
80. FYGT1 1959:01-2001:08 1 INTEREST RATE: U.S.TREASURY CONST MATUR.,1-YR.(
81. FYGT5 1959:01-2001:08 1 INTEREST RATE: U.S.TREASURY CONST MATUR.,5-YR.(
82. FYGT10 1959:01-2001:08 1 INTEREST RATE: U.S.TREASURY CONST MATUR.,10-YR.(
83. FYAAAC 1959:01-2001:08 1 BOND YIELD: MOODY'S AAA CORPORATE (
84. FYBAAC 1959:01-2001:08 1 BOND YIELD: MOODY'S BAA CORPORATE (
85. SFYGM3 1959:01-2001:08 1 Spread FYGM3 - FYFF
86. SFYGM6 1959:01-2001:08 1 Spread FYGM6 - FYFF
87. SFYGT1 1959:01-2001:08 1 Spread FYGT1 - FYFF
88. SFYGT5 1959:01-2001:08 1 Spread FYGT5 - FYFF
89. SFYGT10 1959:01-2001:08 1 Spread FYGT10 - FYFF
90. SFYAAAC 1959:01-2001:08 1 Spread FYAAAC - FYFF
91. SFYBAAC 1959:01-2001:08 1 Spread FYBAAC - FYFF
Money and credit quantity aggregates
92. FM1 1959:01-2001:08 5 MONEY STOCK: M1 (BIL$,SA)
93. FM2 1959:01-2001:08 5 MONEY STOCK: M2 (BIL$,SA)
94. FM3 1959:01-2001:08 5 MONEY STOCK: M3 (BIL$,SA)
95. FM2DQ 1959:01-2001:08 5 MONEY SUPPLY - M2 IN 1992 DOLLARS (BCI)
96. FMFBA 1959:01-2001:08 5 MONETARY BASE, ADJ FOR RESERVE REQUIREMENT CHANGES (MIL$,SA)
97. FMRRA 1959:01-2001:08 5 DEPOSITORY INST RESERVES: TOTAL, ADJ FOR RES. REQ CHGS (MIL$,SA)
98. FMRNBA 1959:01-2001:08 5 DEPOSITORY INST RESERVES: NONBOR., ADJ RES REQ CHGS (MIL$,SA)
99. FCLNQ 1959:01-2001:08 5 COMMERCIAL & INDUST. LOANS OUTSTANDING IN 1992 DOLLARS (BCI)
100. FCLBMC 1959:01-2001:08 1 WKLY RP LG COM. BANKS: NET CHANGE COM & IND. LOANS (BIL$,SAAR)
101. CCINRV 1959:01-2001:08 5 CONSUMER CREDIT OUTSTANDING NONREVOLVING G19
Price indexes
102. PMCP 1959:01-2001:08 1 NAPM COMMODITY PRICES INDEX (PERCENT)
103. PWFSA* 1959:01-2001:08 5 PRODUCER PRICE INDEX: FINISHED GOODS (82=100,SA)
104. PWFCSA* 1959:01-2001:08 5 PRODUCER PRICE INDEX: FINISHED CONSUMER GOODS (82=100,SA)
105. PWIMSA* 1959:01-2001:08 5 PRODUCER PRICE INDEX: INTERMED MAT. SUP & COMPONENTS (82=100,SA)
106. PWCMSA* 1959:01-2001:08 5 PRODUCER PRICE INDEX: CRUDE MATERIALS (82=100,SA)
107. PSM99Q* 1959:01-2001:08 5 INDEX OF SENSITIVE MATERIALS PRICES (1990=100)(BCI-99A)
108. PUNEW* 1959:01-2001:08 5 CPI-U: ALL ITEMS (82-84=100,SA)
109. PU83* 1959:01-2001:08 5 CPI-U: APPAREL & UPKEEP (82-84=100,SA)
110. PU84* 1959:01-2001:08 5 CPI-U: TRANSPORTATION (82-84=100,SA)
111. PU85* 1959:01-2001:08 5 CPI-U: MEDICAL CARE (82-84=100,SA)
112. PUC* 1959:01-2001:08 5 CPI-U: COMMODITIES (82-84=100,SA)
113. PUCD* 1959:01-2001:08 5 CPI-U: DURABLES (82-84=100,SA)
114. PUS* 1959:01-2001:08 5 CPI-U: SERVICES (82-84=100,SA)
115. PUXF* 1959:01-2001:08 5 CPI-U: ALL ITEMS LESS FOOD (82-84=100,SA)
116. PUXHS* 1959:01-2001:08 5 CPI-U: ALL ITEMS LESS SHELTER (82-84=100,SA)
117. PUXM* 1959:01-2001:08 5 CPI-U: ALL ITEMS LESS MEDICAL CARE (82-84=100,SA)
Average hourly earnings
118. LEHCC* 1959:01-2001:08 5 AVG HR EARNINGS OF CONSTR WKRS: CONSTRUCTION ($,SA)
119. LEHM* 1959:01-2001:08 5 AVG HR EARNINGS OF PROD WKRS: MANUFACTURING ($,SA)
Miscellaneous
120. HHSNTN 1959:01-2001:08 1 U. OF MICH. INDEX OF CONSUMER EXPECTATIONS (BCD-83)
Appendix B: Figures

The figures presented in the following are labeled either with the variable names used in the main text or with the mnemonics reported in the data table. Block criterion 1 imposes sign restrictions on the consumer price index, nonborrowed reserves, M3, and the federal funds rate. Block criterion 2 imposes sign restrictions on the consumer price index, nonborrowed reserves, M3, the monetary base, and the federal funds rate.
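The role of a block criterion in the agnostic identification can be sketched as an accept/reject rule on candidate impulse responses: a draw is kept only if, over the restriction horizon, none of the restricted price and money variables rises and the federal funds rate does not fall after a contractionary shock. The function below is my own minimal Python illustration of this idea, not the thesis code (which is in Matlab); the index arguments are hypothetical.

```python
import numpy as np

# Minimal sketch of the sign-restriction check behind a "block
# criterion" (illustration only; the index arguments are hypothetical).
def satisfies_block_criterion(irf, price_idx, money_idx, rate_idx, horizon=6):
    """irf: (nsteps x nvars) responses to a candidate contractionary shock.
    Keep the draw only if prices and monetary aggregates do not rise and
    the federal funds rate does not fall during the first `horizon` steps."""
    window = irf[:horizon, :]
    prices_ok = np.all(window[:, price_idx] <= 0)   # no price puzzle
    money_ok = np.all(window[:, money_idx] <= 0)    # no liquidity puzzle
    rate_ok = np.all(window[:, rate_idx] >= 0)      # rate stays nonnegative
    return bool(prices_ok and money_ok and rate_ok)
```

Broader block criteria simply pass longer index lists, which narrows the set of accepted draws, in line with the restriction horizon of six months used in the estimation.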
Impulse Responses for 20 Variables with 5 Factors and Block Criterion 1
Impulse Responses for 20 Variables with 5 Factors and Block Criterion 2
Impulse Responses for 20 Variables with 2 Factors and Block Criterion 1
Impulse Responses of the First 20 of the 43 Selected Variables with 7 Factors and Block Criterion 1
Impulse Responses of the 21st to 40th of the 43 Selected Variables with 7 Factors and Block Criterion 1
Impulse Responses of the Last 3 of the 43 Selected Variables with 7 Factors and Block Criterion 1
Impulse Responses of the 21st to 40th of the 43 Selected Variables with 5 Factors and Block Criterion 1
Impulse Responses of the Last 3 of the 43 Selected Variables with 5 Factors and Block Criterion 1
Impulse Responses for Variables 41-52 with 5 Factors and Block Criterion 1
Impulse Responses for Variables 1-20 with 2 Factors and Block Criterion 1
Impulse Responses for Variables 21-40 with 2 Factors and Block Criterion 1
Impulse Responses for 2 Factors and Block Criterion 1

Appendix C: Matlab Code
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% The main script. After setting the global GLOG_MODE
%%%%%% (see description of function GLOG), the functions
%%%%%% DO_INPUT, DO_CALCULATION and DO_RESULTS are called.
%%%%%% The output of each function is passed to the following
%%%%%% function as its input parameter. DO_INPUT generates
%%%%%% a structure called "input" (see description of the
%%%%%% data structure), which contains information about user
%%%%%% entries to choose a set of presets and specification data.
%%%%%% This structure is passed to the DO_CALCULATION function
%%%%%% as its input. In DO_CALCULATION the structure "results"
%%%%%% is created, which contains the results of the calculation
%%%%%% process. DO_RESULTS uses these two data structures to
%%%%%% present the results to the user.
%profile on -detail builtin
clear all;
clc;
% Declarations
% Declare global
% 1 : Internal Var Monitoring
% 2 : Information
% 3 : Warnings
% 4 : Errors
global GLOG_MODE;
GLOG_MODE = 2;
GLOG ('Begin of Bayesian FAVAR Estimation',2);
% Main
[input] = DO_INPUT; % see Sequence Diagram - Block A
[results] = DO_CALCULATION (input); % see Sequence Diagram - Block B
DO_RESULTS (input,results); % see Sequence Diagram - Block C
GLOG ('End of Bayesian FAVAR Estimation',2);
%profile viewer
%
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_INPUT_VERSION %%%%%
%%%%%% see Sequence Diagram Block A.1 %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% version.versionId %%%%%
%%%%%% version.nGibbsit %%%%%
%%%%%% version.burn_in %%%%%
%%%%%% version.ira_mode %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This function allows the user to load a set of
%%%%%% parameters to replicate the results of the thesis
%%%%%% or to enter his own settings. All data is stored into
%%%%%% the input.version structure.
function [version] = DO_INPUT_VERSION ()
%function [version] = DO_INPUT_VERSION ()
disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ')
disp('%% You are asked to type in the number of iterations or to choose %% ')
disp('%% one of the following specifications for replicating the results in %% ')
disp('%% my thesis, which are the following: %% ')
disp('%% %% ')
disp('%% Version 1: Type in 1 %% ')
disp('%% Version 2: Type in 2 %% ')
disp('%% Version 3: Type in 3 %% ')
disp('%% Version 4: Type in 4 %% ')
disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ')
disp(' ')
%disp('%% Hit any key when ready... %% ')
%disp(' ')
%pause;
%% --> EXTEND FOR MORE OPTIONS HERE !
version.versionId = input('%% Please choose one of the above specifications = ');
disp(' ')
switch version.versionId % switch to choose number of iterations
    case 1 % Version 1
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp('%% YOU HAVE CHOSEN VERSION 1 %%')
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp(' ')
        version.nGibbsit = 10000;
        version.burn_in = 4000;
        version.ira_mode = 1; % ira_mode: 1 -> Uhlig (2005), 2 -> BBE (2005)
    case 2 % Version 2
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp('%% YOU HAVE CHOSEN VERSION 2 %%')
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp(' ')
        version.nGibbsit = 8000;
        version.burn_in = 3000;
        version.ira_mode = 1; % ira_mode: 1 -> Uhlig (2005), 2 -> BBE (2005)
    case 3 % Version 3
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp('%% YOU HAVE CHOSEN VERSION 3 %%')
        disp('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
        disp(' ')
        version.nGibbsit = 5000;
        version.burn_in = 1000;
        version.ira_mode = 1; % ira_mode: 1 -> Uhlig (2005), 2 -> BBE (2005)
    case 4 % Interactively to be chosen by the user
        version.nGibbsit = input('Please type in the number of iterations = ');
        version.burn_in = input('Please type in the number of iterations to be discarded = ');
        version.ira_mode = 1; % ira_mode: 1 -> Uhlig (2005), 2 -> BBE (2005)
        %version.ira_mode = input('Please choose mode of IRA (1: Uhlig (2005) or 2: BBE (2005)) ')
    otherwise
        disp('Please check whether you have chosen ')
        disp('a correct version or a correct ')
        disp('(natural) number. Please try again ')
end % end of switch for iteration number
GLOG (version.nGibbsit,1);
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_INPUT %%%%%
%%%%%% see Sequence Diagram Block A %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This function returns the input data structure to the
%%%%%% main function. To separate different sources and groups
%%%%%% of input information, the DO_INPUT function is separated
%%%%%% into five subfunctions, each of which returns one part of
%%%%%% the input structure.
function [input] = DO_INPUT ()
%function [input] = DO_INPUT ()
[input.version] = DO_INPUT_VERSION;
% see Sequence Diagram - Block A.1
[input.data] = DO_INPUT_DATA;
% see Sequence Diagram - Block A.2
[input.specification] = DO_INPUT_SPECIFICATIONS (input);
% see Sequence Diagram - Block A.3
[input.xdata] = DO_INPUT_GENERATEXDATA (input);
% see Sequence Diagram - Block A.4
[input.startingvalues] = DO_INPUT_STARTINGVALUES (input);
% see Sequence Diagram - Block A.5
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_INPUT_DATA %%%%%
%%%%%% see Sequence Diagram Block A.2 %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Loads the data source into input.data.
function [data] = DO_INPUT_DATA ()
%function [data] = DO_INPUT_DATA ()
%*************************%
% Load data directly %
%*************************%
load Datasource.txt;
data = Datasource;
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_INPUT_GENERATEXDATA %%%%%
%%%%%% see Sequence Diagram Block A.4 %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This function returns the matrix input.xdata, which
%%%%%% is input.data excluding the data column of the perfectly
%%%%%% observable variable that has pervasive effects on the economy.
%%%%%%
%%%%%% xdata is data - col (varY)
%%%%%%
function [xdata] = DO_INPUT_GENERATEXDATA (input)
%function [xdata] = DO_INPUT_GENERATEXDATA ()
xdata = input.data;
xdata(:,input.specification.varY) = [];
xdata = xdata - repmat(mean(xdata),input.specification.dim.T,1);
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_INPUT_SPECIFICATIONS %%%%%
%%%%%% see Sequence Diagram Block A.3 %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% The aim of this function is to set all specification
%%%%%% parameters for the calculation process, including the
%%%%%% Gibbs sampler and the impulse response analysis.
%%%%%% All parameters are stored into the input.specification
%%%%%% structure.
%%%%%%
%%%%%% specification
%%%%%% |---------- time
%%%%%% |---------- varY
%%%%%% |---------- y
%%%%%% |---------- dim
%%%%%% | |--- T
%%%%%% | |--- M
%%%%%% | |--- N
%%%%%% |
%%%%%% |---------- model
%%%%%% | |--- draws
%%%%%% | |--- K
%%%%%% | |--- d
%%%%%% |
%%%%%% |---------- IRA
%%%%%% | |--- nsteps
%%%%%% | |--- tstep
%%%%%% | |--- zeroline
%%%%%% | |--- alpha_draws
%%%%%% | |--- var_index_sr
%%%%%% | |--- sr_horizon
%%%%%% | |
%%%%%% | |--- BC (indexed)
%%%%%% | |--- priceIndex
%%%%%% | |--- moneyIndex
%%%%%% | |--- interestIndex
%%%%%% |
%%%%%% |---------- VARNAMES_BBE
%%%%%% |---------- ALL_VARNAMES
function [specification] = DO_INPUT_SPECIFICATIONS (input)
%function [specification] = DO_INPUT_SPECIFICATIONS ()
specification.time = 1959.1667:1/12:2001.6667;
specification.varY = [77]; % variable to be chosen for Y(t) (in most cases the FFR)
y = input.data(:,specification.varY); % observable (VAR) variables
[T,M] = size(y);
dim.T = T;
dim.M = M;
N = size(input.data,2) - size(specification.varY,2);
dim.N = N;
y = y - repmat(mean(y),T,1);
specification.y = y;
% Dim short cuts
specification.dim = dim;
model.draws = input.version.nGibbsit - input.version.burn_in;
% final number of iterations that counts
model.K = 7; % number of factors; [GRS: 2; Stock & Watson: 7]
model.d = 12; % finite order of conformable lag polynomial
specification.model = model;
%**********************************%
% IMPULSE RESPONSE SPECIFICATION %
%**********************************%
IRA.nsteps = 48;
IRA.tstep = 1:IRA.nsteps;
IRA.zeroline = zeros(IRA.nsteps,1);
IRA.alpha_draws = 300;
IRA.var_index_sr = [ 77 1;16 5;108 5;78 1;81 1;96 5;93 5;74 5;102 1;17 1;49 5; ...
    50 5;51 5;26 1;48 1;118 5;54 4;62 1;71 1;120 1];
%IRA.var_index_sr = [ 16 5;17 1;26 1;48 1;49 5;50 5;51 5;54 4;...
%    62 1;71 1;74 5;77 1;78 1;79 1;81 1;92 5;93 5; ...
%    94 5;95 5;96 5;97 5;98 5;99 5;100 1;101 5;...
%    102 1;103 5;104 5;105 5;106 5;107 5;108 5;109 5;110 5; ...
%    111 5;112 5;113 5;114 5;115 5;116 5;117 5;118 5;120 1];
%%% Extended variables to be considered
%IRA.var_index_sr = [1 5;2 5;3 5;4 5;5 5;11 5;16 5;17 1;26 1;...
%    48 1;49 5;50 5;51 5;54 4;62 1;71 1;73 5;74 5;...
%    75 5;76 5; ...
%    77 1;78 1;79 1;81 1;92 5;93 5;94 5;95 5;96 5;...
%    97 5;98 5;99 5;100 1;101 5;102 1;103 5;104 5;...
%    105 5;106 5;107 5; ...
%    108 5;109 5;110 5;111 5;112 5;113 5;114 5;115 5;116 5;...
%    117 5;118 5;120 1];
IRA.sr_horizon = 6;
%%%%% set block criteria
%Block Criteria 1
%BC(1).priceIndex = [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41];
%BC(1).moneyIndex = [16,17,18,19,20,21,22,23,24,25];
%BC(1).interestIndex = [12,13,14];
%Block Criteria 2
%BC(1).priceIndex = [26];
%BC(1).moneyIndex = [16];
%BC(1).interestIndex = [12,13,14];
%Block Criteria TEST SR
%BC(1).priceIndex = [26];
%BC(1).moneyIndex = [16];
%BC(1).interestIndex = [12];
%Block Criteria SR - 1
%BC(1).priceIndex = [26];
%BC(1).moneyIndex = [16];
%BC(1).interestIndex = [12];
%Block Criteria SR - 2
%BC(2).priceIndex = [26,28];
%BC(2).moneyIndex = [16,17];
%BC(2).interestIndex = [12];
%Block Criteria SR - 3
%BC(3).priceIndex = [26,28,29,39];
%BC(3).moneyIndex = [16,17,18,19,20];
%BC(3).interestIndex = [12];
%Block Criteria SR - 4 - Relaxed Max
%BC(4).priceIndex = [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41];
%BC(4).moneyIndex = [16,17,18,19,20,21,22,23,24,25];
%BC(4).interestIndex = [12];
%Block Criteria SR - 5 - Max
%BC(5).priceIndex = [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41];
%BC(5).moneyIndex = [16,17,18,19,20,21,22,23,24,25];
%BC(5).interestIndex = [12,13,14];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Block Criteria BBE GOOD1
BC(1).priceIndex = [3];
BC(1).moneyIndex = [6];
BC(1).interestIndex = [1];
%Block Criteria BBE GOOD2
BC(2).priceIndex = [3];
BC(2).moneyIndex = [6,7];
BC(2).interestIndex = [1];
%Block Criteria BBE GOOD3
BC(3).priceIndex = [3,9];
BC(3).moneyIndex = [6,7];
BC(3).interestIndex = [1];
%%%%%%%%%%%%%%%%% Extended variables to be considered %%%%%%%%%%%%%%%%%%%
%BC(1).priceIndex = [41];
%BC(1).moneyIndex = [31]; % M3 / NBR
%BC(1).interestIndex = [21];
%BC(2).priceIndex = [35,41];
%BC(2).moneyIndex = [27];
%BC(2).interestIndex = [21];
%*************************************
IRA.BC = BC;
specification.IRA = IRA;
specification.VARNAMES_BBE = {'FFR','IP','CPI','3m TREASURY BILLS','5y TREASURY BONDS','MONETARY BASE','M2',...
    'EXCHANGE RATE YEN','COMMODITY PRICE INDEX','CAPACITY UTIL RATE',...
    'PERSONAL CONSUMPTION','DURABLE CONS','NONDURABLE CONS','UNEMPLOYMENT','EMPLOYMENT','AVG HOURLY EARNINGS',...
    'HOUSING STARTS','NEW ORDERS','DIVIDENDS','CONSUMER EXPECTATIONS'};
specification.ALL_VARNAMES = {'IPP','IPF','IPC','IPCD','IPCN','IPE','IPI','IPM','IPMD','IPMND','IPMFG','IPD','IPN','IPMIN', ...
    'IPUT','IP','IPXMCA','PMI','PMP','GMPYQ','GMYXPQ','LHEL','LHELX','LHEM','LHNAG','LHUR','LHU680', ...
    'LHU5','LHU14','LHU15','LHU26','LPNAG','LP','LPGD','LPMI','LPCC','LPEM','LPED','LPEN','LPSP', ...
    'LPTU','LPT','LPFR','LPS','LPGOV','LPHRM','LPMOSA','PMEMP','GMCQ','GMCDQ','GMCNQ','GMCSQ','GMCANQ', ...
    'HSFR','HSNE','HSMW','HSSOU','HSWST','HSBR','HMOB','PMNV','PMNO','PMDEL','MOCMQ','MSONDQ', ...
    'FSNCOM','FSPCOM','FSPIN','FSPCAP','FSPUT','FSDXP','FSPXE','EXRSW','EXRJAN','EXRUK','EXRCAN', ...
    'FYFF','FYGM3','FYGM6','FYGT1','FYGT5','FYGT10','FYAAAC','FYBAAC','SFYGM3','SFYGM6','SFYGT1', ...
    'SFYGT5','SFYGT10','SFYAAAC','SFYBAAC','FM1','FM2','FM3','FM2DQ','FMFBA','FMRRA','FMRNBA','FCLNQ', ...
    'FCLBMC','CCINRV','PMCP','PWFSA','PWFCSA','PWIMSA','PWCMSA','PSM99Q','PUNEW','PU83','PU84','PU85', ...
    'PUC','PUCD','PUS','PUXF','PUXHS','PUXM','LEHCC','LEHM','HHSNTN'};
%%%%%%**********************************************************%%%%%<br />
%%%%%% Bayesian FAVAR Code August 26th %%%%%<br />
%%%%%%**********************************************************%%%%%<br />
%%%%%% DO _INPUT_STARTINGVALUES %%%%%<br />
%%%%%% see Sequence Diagram Block A.5 %%%%%<br />
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br />
%%%%%% Returns starting values for all variables included in<br />
%%%%%% <strong>the</strong> input.startingvalues structure. These are F, lam_f,<br />
%%%%%% lam_y, R, Phi_lags and Q.<br />
%%%%%%<br />
%%%%%% startingvalues<br />
%%%%%% |---------- F<br />
%%%%%% |---------- lam_f<br />
%%%%%% |---------- lam_y<br />
%%%%%% |---------- R<br />
%%%%%% |---------- Phi_lags<br />
%%%%%% |---------- Q<br />
function [startingvalues] = DO_INPUT_STARTINGVALUES (input)<br />
%function [startingvalues] = DO_INPUT_STARTINGVALUES (input)<br />
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%<br />
%''''''''''''''''''''''''''''''''''''''''''''''''''''''''%      %
%'''  function Get_Starting_Values = [Input_Structure] '''%     %
%''''''''''''''''''''''''''''''''''''''''''''''''''''''''%      %
%                                                               %
%switch mode ==1                                                %
%   case Get_Starting_Values(Generated)                         %
%        Statement 1                                            %
%   case Get_Starting_Values(Dispersed distribution)            %
%        Statement 2                                            %
%   case Get_Starting_Values(zero_values)                       %
%        Statement 3                                            %
%   otherwise                                                   %
%        Statement 4                                            %
%        break                                                  %
%end; %switch                                                   %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X_st = input.xdata ./ repmat(std(input.xdata,1),input.specification.dim.T,1);
Y_st = input.specification.y ./ repmat(std(input.specification.y,1),input.specification.dim.T,1);
% first step - extract PC from X
[F,lam_f] = extract(X_st,input.specification.model.K);
% regress X on F0 and Y, obtain loadings
Lfy = olssvd(X_st(:,input.specification.model.K+1:input.specification.dim.N),[F Y_st])';
% upper KxM block of Ly set to zero
lam_f=[lam_f(1:input.specification.model.K,:);Lfy(:,1:input.specification.model.K)];
lam_y=[zeros(input.specification.model.K,input.specification.dim.M);...
    Lfy(:,input.specification.model.K+1:input.specification.model.K+input.specification.dim.M)];
% transform factors and loadings for LE normalization
[ql,rl]=qr(lam_f');
lam_f=rl; % do not transpose yet, is upper triangular
F=F*ql;
% need identity in the first K columns of Lf, call them A for now
A=lam_f(:,1:input.specification.model.K);
lam_f=[eye(input.specification.model.K),inv(A)*lam_f(:,...
    (input.specification.model.K+1):input.specification.dim.N)]';
F=F*A;
% obtain R:
e=X_st-Y_st*lam_y'-F*lam_f';
R=e'*e ./ input.specification.dim.T;
R=diag(diag(R));
% run a VAR in [F,Y], obtain initial B and Q
[Phi_lags,Bc,v,Q,invFYFY]=estvar([F,input.specification.y],input.specification.model.d,[]);
%----------------------------------------------------------------------------------------%
startingvalues.F        = F;
startingvalues.lam_f    = lam_f;
startingvalues.lam_y    = lam_y;
startingvalues.R        = R;
startingvalues.Phi_lags = Phi_lags;
startingvalues.Q        = Q;
%----------------------------------------------------------------------------------------%
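For reference, the starting-value construction above (principal components from the standardized panel, then a rotation so that the upper KxK block of the factor loadings is the identity) can be sketched compactly. `extract` and `olssvd` are the thesis's own helpers, so the following NumPy stand-in uses an SVD for the principal-components step and only illustrates the normalization, not the exact MATLAB routine:

```python
import numpy as np

def pc_starting_values(X, K):
    """Principal-component factors with the upper KxK loading block
    normalized to the identity (a sketch of the LE normalization above)."""
    T, N = X.shape
    Xs = X / X.std(axis=0, ddof=0)             # standardize, as X_st above
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = np.sqrt(T) * U[:, :K]                  # T x K factors, F'F/T = I
    lam = (Vt[:K, :].T * s[:K]) / np.sqrt(T)   # N x K loadings
    A = lam[:K, :K]                            # upper KxK block
    lam = lam @ np.linalg.inv(A)               # loadings: identity on top
    F = F @ A.T                                # rotate factors to compensate
    return F, lam                              # F @ lam.T is unchanged
```

The rotation leaves the common component `F @ lam.T` untouched; only the (factor, loading) split changes, which is exactly why such a normalization is needed for identification.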
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_SETMODEL                         %%%%%
%%%%%%          see Sequence Diagram Block B.1                  %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Initializes calculation.stateSpaceStructure
%%%%%%
%%%%%% stateSpaceStructure.XX      = XX;
%%%%%% stateSpaceStructure.Lam     = Lam;
%%%%%% stateSpaceStructure.Xsi_in  = Xsi_in;
%%%%%% stateSpaceStructure.P_in    = P_in;
%%%%%% stateSpaceStructure.Phi_bar = Phi_bar;
%%%%%% stateSpaceStructure.QQ_bar  = QQ_bar;
%%%%%% stateSpaceStructure.RR      = RR;
%%%%%% stateSpaceStructure.F_bar   = F_bar;
function DO_CALCULATION_SETMODEL (input)
%function DO_CALCULATION_SETMODEL (input)
global calculation;
specM = input.specification.dim.M;
specK = input.specification.model.K;
specd = input.specification.model.d;
specT = input.specification.dim.T;
XX = [input.xdata, input.specification.y];
FF = [input.startingvalues.F, input.specification.y];
% initialize the factors and their covariance matrix for the Bayesian Kalman Filter & Smoother
F_bar = [FF zeros(specT,((specd-1)*(specK+specM)))];
Xsi_in = zeros((specK+specM)*specd,1);
P_in = eye((specK+specM)*specd); % for Kalman Filter & Smoother
Lam = [input.startingvalues.lam_f input.startingvalues.lam_y; zeros(specM,specK) eye(specM)];
Lam_bar = [Lam zeros((input.specification.dim.N+specM),((specd-1)*(specK+specM)))];
RR=diag([diag(input.startingvalues.R);zeros(specM,1)]); %(N+M)x(N+M)
Phi_lags = cat(2,input.startingvalues.Phi_lags(:,:));
Phi_bar = [Phi_lags ; eye((specd-1)*(specK+specM)) zeros((specd-1)*(specK+specM),(specK+specM))];
v = zeros(specT,(specK+specM));
v_bar = [v zeros(specT,((specd-1)*(specK+specM)))];
QQ_bar = [input.startingvalues.Q zeros((specK+specM),(specd-1)*(specK+specM)); zeros((specd-1)*(specK+specM),(specd*(specK+specM)))];
%----------------------------------------------------------------------------------------%
stateSpaceStructure.XX      = XX;
stateSpaceStructure.Lam     = Lam;
stateSpaceStructure.Xsi_in  = Xsi_in;
stateSpaceStructure.P_in    = P_in;
stateSpaceStructure.Phi_bar = Phi_bar;
stateSpaceStructure.QQ_bar  = QQ_bar;
stateSpaceStructure.RR      = RR;
stateSpaceStructure.F_bar   = F_bar;
stateSpaceStructure.Lam_bar = Lam_bar;
%----------------------------------------------------------------------------------------%
calculation.stateSpaceStructure = stateSpaceStructure;
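Phi_bar above is the VAR(d) transition written in companion form: the lag matrices [Phi_1 ... Phi_d] stacked on top, with identity blocks below that shift the state vector down by one lag. A minimal NumPy sketch of that stacking (the function name is illustrative):

```python
import numpy as np

def companion(phi_lags):
    """Stack VAR lag matrices [Phi_1 ... Phi_d] (an n x n*d array) into
    the companion matrix used as Phi_bar above: the lag blocks form the
    top n rows; identity blocks below shift the stacked state by one lag."""
    n, nd = phi_lags.shape
    d = nd // n
    bottom = np.hstack([np.eye(n * (d - 1)), np.zeros((n * (d - 1), n))])
    return np.vstack([phi_lags, bottom])
```

With the state stacked as (x_t, x_{t-1}, ..., x_{t-d+1}), one multiplication by this matrix advances the whole stack one period, which is what lets a VAR(d) be filtered as a first-order state-space model.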
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION                                  %%%%%
%%%%%%          see Sequence Diagram Block B                    %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% In this function all calculation processes are started
%%%%%% and their output stored. These processes are started
%%%%%% by calling the functions DO_CALCULATION_SETMODEL,
%%%%%% DO_CALCULATION_CREATESTRUCTURE,
%%%%%% DO_CALCULATION_GIBBS_SAMPLING and DO_CALCULATION_IRA,
%%%%%% where the last two are the main calculation processes.
%%%%%% To save memory, the data structure calculation is declared
%%%%%% as global in all functions. In this way all functions
%%%%%% can refer to it as an input and output parameter without
%%%%%% copying this large structure around.
function [results] = DO_CALCULATION (input)
%function [results] = DO_CALCULATION (input)
% declare calculation as global structure
global calculation;
% the following DO_CALCULATION_x write their results directly into the
% global calculation structure
DO_CALCULATION_SETMODEL (input);             % see Sequence Diagram - Block B.1
DO_CALCULATION_CREATESTRUCTURE (input);      % see Sequence Diagram - Block B.2
DO_CALCULATION_GIBBS_SAMPLING (input);       % see Sequence Diagram - Block B.3
[results.ira] = DO_CALCULATION_IRA (input);  % see Sequence Diagram - Block B.4
% clear calculation
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_CREATESTRUCTURE                  %%%%%
%%%%%%          see Sequence Diagram Block B.2                  %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Initializes calculation.Phi_bar_collect
%%%%%%             calculation.QQ_bar_collect
%%%%%%             calculation.F_bar_collect
%%%%%%             calculation.Lam_collect
%%%%%%
%%%%%% calculation.Phi_bar_collect = zeros(input.specification.nGibbsit,specK+specM,specK+specM,specd);
%%%%%% calculation.QQ_bar_collect  = zeros(input.specification.nGibbsit,specK+specM,specK+specM);
%%%%%% calculation.F_bar_collect   = zeros(input.specification.nGibbsit,specT,specK);
function DO_CALCULATION_CREATESTRUCTURE (input)
%function DO_CALCULATION_CREATESTRUCTURE (input)
global calculation;
specM = input.specification.dim.M;
specK = input.specification.model.K;
specd = input.specification.model.d;
specT = input.specification.dim.T;
specN = input.specification.dim.N;
specDraws = input.specification.model.draws;
calculation.Phi_bar_collect = zeros(specDraws,specK+specM,specK+specM,specd);
calculation.QQ_bar_collect  = zeros(specDraws,specK+specM,specK+specM);
calculation.F_bar_collect   = zeros(specDraws,specT,specK);
calculation.Lam_collect     = zeros(specDraws,specN+specM,specK+specM);
for i=1:specM
    calculation.Lam_collect(:,input.specification.varY(i),specK+i)=1;
end
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_GIBBS_SAMPLING                   %%%%%
%%%%%%          see Sequence Diagram Block B.3                  %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This function does the Gibbs Sampling by calling the
%%%%%% functions DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER,
%%%%%% DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER,
%%%%%% DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS
%%%%%% and DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC for
%%%%%% each Gibbs iteration. After each iteration the results
%%%%%% are stored into the global calculation data structure,
%%%%%% ignoring the first input.version.burn_in draws.
function DO_CALCULATION_GIBBS_SAMPLING (input)
%function DO_CALCULATION_GIBBS_SAMPLING (input)
global calculation;
%%%%% set parameters
K = input.specification.model.K;
M = input.specification.dim.M;
N = input.specification.dim.N;
for Gibbsiteration=1:input.version.nGibbsit %%% Gibbs Start
    GLOG (sprintf('Gibbsiteration: %d',Gibbsiteration),2);
    [bk_filter] = DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER (input);
    % see Sequence Diagram Block B.3.1
    [bk_smoother] = DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER (input, bk_filter);
    % see Sequence Diagram Block B.3.2
    [param_prec_obs] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS (input, bk_smoother);
    % see Sequence Diagram Block B.3.3
    [param_prec_fac] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC (input, bk_smoother);
    % see Sequence Diagram Block B.3.4
    % all Gibbs results to be stored here
    if Gibbsiteration > input.version.burn_in
        calculation.Lam_collect (Gibbsiteration-input.version.burn_in,[1:76, 78:120],:)=...
            calculation.stateSpaceStructure.Lam(1:N,1:K+M);
        calculation.Phi_bar_collect (Gibbsiteration-input.version.burn_in,:,:,:)=...
            param_prec_fac.Phi_draw;
        calculation.QQ_bar_collect (Gibbsiteration-input.version.burn_in,:,:)=...
            param_prec_fac.Q_draw;
        calculation.F_bar_collect (Gibbsiteration-input.version.burn_in,:,:)=...
            bk_smoother.Xsi_S(:,1:K);
    end
end
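Stripped of the FAVAR specifics, the loop above is a standard multi-block Gibbs sampler that discards the first burn_in sweeps. A toy two-block example with the same store-after-burn-in pattern (the bivariate-normal target is illustrative only, not part of the thesis model):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=2000, burn_in=500, seed=0):
    """Toy two-block Gibbs sampler: draw each block from its conditional
    given the other, and keep only iterations past burn_in, mirroring
    the storage pattern of the FAVAR loop above."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    sd = np.sqrt(1.0 - rho**2)               # conditional std. deviation
    draws = np.empty((n_iter - burn_in, 2))
    for it in range(n_iter):
        x = rng.normal(rho * y, sd)          # block 1 | block 2
        y = rng.normal(rho * x, sd)          # block 2 | block 1
        if it >= burn_in:                    # store only after burn-in
            draws[it - burn_in] = (x, y)
    return draws
```

In the FAVAR the two toy blocks are replaced by four conditionals (states via filter/smoother, observation-equation parameters, transition-equation parameters), but the control flow is identical.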
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER         %%%%%
%%%%%%          Kalman Filter                                   %%%%%
%%%%%%          see Sequence Diagram Block B.3.1                %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Bayesian Kalman Filter
function [bk_filter] = DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER (input)
%function [bk_filter] = DO_CALCULATION_GIBBS_SAMPLING_BK_FILTER (input)
global calculation;
%%%%% set parameters
Y_Data    = calculation.stateSpaceStructure.XX;
H_Prior   = calculation.stateSpaceStructure.Lam;
Xsi_Prior = calculation.stateSpaceStructure.Xsi_in;
P_Prior   = calculation.stateSpaceStructure.P_in;
G_Prior   = calculation.stateSpaceStructure.Phi_bar;
Q_Prior   = calculation.stateSpaceStructure.QQ_bar;
R_Prior   = calculation.stateSpaceStructure.RR;
Xsi_all   = calculation.stateSpaceStructure.F_bar;
K = input.specification.model.K;
M = input.specification.dim.M;
d = input.specification.model.d;
%GLOG (size(Y_Data),1);
%GLOG (size(H_Prior),1);
%%%%% start kalman filter
% Setting Dimensions
[T,var] = size(Y_Data);
[H_row,H_col]     = size(H_Prior); % has to equal size of Lam_bar
[Xsi_row,Xsi_col] = size(Xsi_Prior);
[G_row,G_col]     = size(G_Prior);
[Q_row,Q_col]     = size(Q_Prior);
[R_row,R_col]     = size(R_Prior);
km  = Xsi_col/13;
kmd = Xsi_col;
%Variables for State-Space
Y_t = Y_Data;
H_t = H_Prior;
G   = G_Prior;
R   = R_Prior;
Q   = Q_Prior;
vecQ = reshape(Q,Q_row^2,1);
% Sequence of draws to be stored in Xsi_all and P_all
Xsi_all = zeros(T,((K+M)*d));
P_all   = zeros(((K+M)*d)^2,T);
%invI = inv(eye(size(Xsi_Prior,1)^2) - kron(G,G));
%vecP_Prior = invI * vecQ;
%P_Prior = reshape(vecP_Prior,size(Xsi_row,1),size(Xsi_row,1));
% Initialization of the state vector's variance-covariance matrix
%Xsi_Prior = zeros(Xsi_row,1); % could be taken in case of no initial value
%P_Prior = eye(Xsi_row);       % could be taken in case of no initial value
Xsi_tlag = Xsi_Prior;
P_tlag   = P_Prior;
% Final Draws to be stored in Xsi_F and P_F
Xsi_F = zeros(Xsi_row,Xsi_col);
P_F   = zeros(Xsi_row^2,T);
for t=1:T
    %=======================================================%
    %          Updating equations (Kim&Nelson)              %
    %*******************************************************%
    Eta_tlag = Y_t(t,:)' - H_t * Xsi_tlag(1:(K+M));         %
    f_tlag   = H_t * P_tlag(1:(K+M),1:(K+M)) * H_t' + R;    %
    if_tlag  = inv(f_tlag);                                 %
    %if_tlag = pinv(f_tlag);                                %
    K_t      = P_tlag(:,1:(K+M)) * H_t' * if_tlag;          %
    %                                                       %
    Xsi_tt   = Xsi_tlag + K_t * Eta_tlag;                   %
    P_tt     = P_tlag - K_t * H_t * P_tlag(1:(K+M),:);      %
    %=======================================================%
    %-------------------------------------------------------%
    %=======================================================%
    %          Prediction equations (Kim&Nelson)            %
    %*******************************************************%
    Xsi_tlag = G * Xsi_tt;                                  %
    P_tlag   = G * P_tt * G' + Q;                           %
    %=======================================================%
    % store the filtered draws
    Xsi_all(t,:) = Xsi_tt';
    P_all(:,t)   = reshape(P_tt,Xsi_row^2,1);
end%for
%%%%% set bk_filter output structure
bk_filter.Xsi_Tlag = Xsi_all;
bk_filter.P_Tlag   = P_all;
bk_filter.Xsi_F    = Xsi_all(T,:);
bk_filter.P_F      = reshape(P_all(:,T),Xsi_row,Xsi_row);
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER       %%%%%
%%%%%%          Kalman Smoother                                 %%%%%
%%%%%%          see Sequence Diagram Block B.3.2                %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Bayesian Kalman Smoother
function [bk_smoother] = DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER (input, bk_filter)
%function [bk_smoother] = DO_CALCULATION_GIBBS_SAMPLING_BK_SMOOTHER (input, bk_filter)
global calculation;
%%%%% set parameters
K = input.specification.model.K;
M = input.specification.dim.M;
d = input.specification.model.d;
G_Prior  = calculation.stateSpaceStructure.Phi_bar;
Q_Prior  = calculation.stateSpaceStructure.QQ_bar;
Xsi_Tlag = bk_filter.Xsi_Tlag;
P_Tlag   = bk_filter.P_Tlag;
Xsi_F    = bk_filter.Xsi_F;
P_F      = bk_filter.P_F;
% Preparing dimension for Transformation
regDim   = [1:(K+M)];
[l_star] = length(regDim);
[l_reg]  = length(Q_Prior);
T = size(Xsi_Tlag,1);
% Transformation of Q in case of singularity
Q = Q_Prior;
Q_star = Q(1:l_star,1:l_star);
% Transformation of G in case of singular Q
G = G_Prior;
%G_star = zeros(l_star,l_reg);
G_star = G(1:l_star,1:l_reg);
% Transformation of Xsi in case of singular Q
Xsi_F    = Xsi_F(1,1:l_reg);
Xsi_Tlag = Xsi_Tlag(:,1:l_reg);
P_F      = P_F(1:l_reg,1:l_reg);
P_Tlag   = P_Tlag(1:l_reg^2,:);
for t=1:T-1
    %=======================================================%
    %            Final Updating procedure                   %
    %*******************************************************%
    Xsi_TP = Xsi_Tlag(T-t+1,1:l_star)';
    Xsi_TT = Xsi_Tlag(T-t,:)';
    P_TL   = P_Tlag(:,T-t);
    P_TT   = reshape(P_TL',(d*(K+M)),(d*(K+M)));
    %f_ts  = pinv(G_star * P_TT * G_star' + Q_star);
    f_ts   = inv(G_star * P_TT * G_star' + Q_star);
    K_ts   = P_TT * G_star' * f_ts;
    Xsi_TXsi = Xsi_TT + K_ts * (Xsi_TP - G_star * Xsi_TT);
    P_TXsi   = P_TT - K_ts * G_star * P_TT;
    %*******************************************************%
    % singles out latent factors
    indexnM=[ones(K,d);zeros(M,d)];
    indexnM=find(indexnM==1);
    Xsi_Tlag(T-t,:) = Xsi_TXsi';
    Xsi_Tlag(T-t,indexnM) = mvnrnd(Xsi_Tlag(T-t,indexnM)',P_TXsi(indexnM,indexnM),1);
end%for
Xsi_S = Xsi_Tlag(:,1:(K+M));
P_S   = P_TXsi;
%%%%% set bk_smoother output structure
bk_smoother.Xsi_S = Xsi_S;
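The Xsi_TXsi / P_TXsi update inside the loop is one backward step of the Kim & Nelson (Carter-Kohn) sampler: condition the filtered state at t on the state already drawn at t+1, then draw. A NumPy sketch of a single step, with illustrative variable names:

```python
import numpy as np

def carter_kohn_step(xsi_tt, P_tt, xsi_tp1_draw, G, Q, rng):
    """One backward-sampling update: condition the filtered state
    (xsi_tt, P_tt) on the already-drawn state at t+1, then draw.
    Mirrors the Xsi_TXsi / P_TXsi computation above."""
    f = G @ P_tt @ G.T + Q                      # predictive covariance
    K = P_tt @ G.T @ np.linalg.inv(f)           # smoother gain, K_ts above
    mean = xsi_tt + K @ (xsi_tp1_draw - G @ xsi_tt)
    cov = P_tt - K @ G @ P_tt
    cov = (cov + cov.T) / 2                     # enforce exact symmetry
    return rng.multivariate_normal(mean, cov), mean, cov
```

Conditioning can only reduce uncertainty, so the posterior covariance `cov` is dominated by the filtered covariance `P_tt` in the positive-semidefinite sense.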
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC    %%%%%
%%%%%%                                                          %%%%%
%%%%%%          see Sequence Diagram Block B.3.4                %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Inference on State Equation
function [param_prec_fac] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC (input, bk_smoother);
%function [param_prec_fac] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_FAC (input, bk_smoother);
global calculation;
%%%%% set parameters
K = input.specification.model.K;
M = input.specification.dim.M;
T = input.specification.dim.T;
d = input.specification.model.d;
Xsi_S = bk_smoother.Xsi_S;
%****************************%
%     univariate AR OLS      %
%****************************%
% At this point we are interested in generating the i'th variances:
% Only Sigma Required
for i=1:K+M
    [Phi_i,Phi_ci,vi,Qi(i),invFYFYi]=estvar(Xsi_S(:,i),d,[]);
end%for
% At this point we only need to save Qi(i), whose elements form the
% diagonal of the prior Q_0
Q_0 = diag(Qi); % maybe better diag(Qi(:,:))
Q_prior = Q_0;
Omega_0 = zeros(d*(K+M));%,size(Omega_0,2));
Omega_0 = diag(kron(1./Qi,1./[1:d])); % Omega_0 = Q_prior.*Omega_0;
%
end%for
F_reg = F_reg(:,:);
F_reg = F_reg(d+1:T,:);
% For doing inference on the transition equation one has to first draw Q and then draw vec(Phi)
% Note that for generalization it is good to write [kappa1_prior,kappa2_prior; kappa1_post and kappa2_post]
% shortcuts
VV     = V_hat'*V_hat;
FF_reg = inv(F_reg'*F_reg);
Phi_FF = inv(Omega_prior + FF_reg);
% Draw posterior Q
Q_bar = Q_prior + VV + Phi_hat'*Phi_FF*Phi_hat;
kappa1_prior = K+M+2;
kappa1_post  = T+kappa1_prior;
kappa2_prior = Q_prior;
kappa2_post  = Q_bar; % Scale Matrix
% Inverse Wishart draw
% df = degrees of freedom
%QW = wishrnd(inv(kappa2_post),kappa1_post);
%Q_draw = inv(QW); % =1 % 0.999
end
vec_Phi_draw = mvnrnd(vecPhi_posterior,sigma_Phi,1);
Phi_draw = reshape(vec_Phi_draw',d*(K+M),(K+M))';
calculation.stateSpaceStructure.Phi_bar(1:K+M,:) = Phi_draw;
Phi_draw = reshape(Phi_draw,K+M,K+M,d);
%%%%% set param_prec_fac output structure
param_prec_fac.Q_draw   = Q_draw;
param_prec_fac.Phi_draw = Phi_draw;
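The (commented-out) `wishrnd`/`inv` pair above corresponds to an inverse-Wishart draw for the innovation covariance Q. Plain NumPy has no `wishrnd`, so this sketch draws W ~ Wishart(inv(scale), df) via the Bartlett decomposition and inverts it; the function name and signature are hypothetical stand-ins:

```python
import numpy as np

def inv_wishart_draw(scale, df, rng):
    """Draw Q ~ IW(scale, df): draw W ~ Wishart(inv(scale), df) with the
    Bartlett decomposition (lower-triangular A with chi-square diagonal,
    standard-normal off-diagonal), then return inv(W)."""
    p = scale.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(scale))
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))   # chi2 with df, df-1, ...
        for j in range(i):
            A[i, j] = rng.normal()
    W = L @ A @ A.T @ L.T                          # Wishart(inv(scale), df)
    return np.linalg.inv(W)
```

For df > p + 1 the mean of IW(scale, df) is scale/(df - p - 1), which gives a quick sanity check on the sampler.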
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS    %%%%%
%%%%%%                                                          %%%%%
%%%%%%          see Sequence Diagram Block B.3.3                %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Inference on Observation Equation
function [param_prec_obs] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS (input, bk_smoother);
%function [param_prec_obs] = DO_CALCULATION_GIBBS_SAMPLING_PARAM_PREC_OBS (input, bk_smoother);
global calculation;
%%%%% set parameters
XX = calculation.stateSpaceStructure.XX;
K = input.specification.model.K;
M = input.specification.dim.M;
N = input.specification.dim.N;
T = input.specification.dim.T;
Xsi_S = bk_smoother.Xsi_S(:,1:K+M);
% prior distributions for VAR part, need Lam and R
s0 = 3;
alpha = 0.001;
M0 = eye(K+M); % Variance Parameter in prior on i-th coeff
Param1 = inv( M0 + inv(Xsi_S'*Xsi_S) ) ;
for i=1:N
    if i > K
        %**********************%
        %    b) draw Lam_ii    %
        %**********************%
        % Given: Factors, Data, and previously generated R_ii
        % Variables needed
        M_i_bar = inv ( inv(M0) + Xsi_S'*Xsi_S );
        %M0 + Xsi_S(:,1:K+M)'*Xsi_S(:,1:K+M);
        Lam_i_bar = M_i_bar *(Xsi_S'*Xsi_S)*Lam_i_hat;
        %inv(M_i_bar) *(Xsi_S(:,1:K+M)'*Xsi_S(:,1:K+M)) * Lam_i_hat;
        Lam_i_hat = Lam_i_bar' + randn (1, K+M) * chol (R_draw*M_i_bar);
        calculation.stateSpaceStructure.Lam(i,1:K+M) = Lam_i_hat;
        %%%% Alternative Approach
        %Lam_Sigma = calculation.stateSpaceStructure.RR(i,i) * inv(M_i_bar);
        % Draw Lam from Normal Distribution
        %Lam_draw = mvnrnd(Lam_i_bar', Lam_Sigma,1);
        %calculation.stateSpaceStructure.Lam(i,1:K+M) = Lam_draw;
    end%if
end%for
% Ldraw(nit,NN,:)=Lam_bar(1:N,1:K);
% Fdraw(nit,:,:)=Xsi_S(:,1:K);
%%%%% set param_prec_obs output structure
%param_prec_obs.Lam_draw = Lam_draw;
%param_prec_obs.R_draw = R_draw;
param_prec_obs = 1;
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_IRA                              %%%%%
%%%%%%          Impulse-Response-Analysis                       %%%%%
%%%%%%          see Sequence Diagram Block B.4                  %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This function starts the DO_CALCULATION_IRA_UHLIG or the
%%%%%% DO_CALCULATION_IRA_BBE function depending on the value of
%%%%%% input.version.ira_mode, which contains information about
%%%%%% the selected Impulse Response Mode to run.
function [ira] = DO_CALCULATION_IRA (input)
%function [ira] = DO_CALCULATION_IRA (input)
% declare calculation as global structure
global calculation;
switch input.version.ira_mode
    case 1
        [ira.finalresponses] = DO_CALCULATION_IRA_UHLIG (input);
        % See Sequence Diagram Block B.4.1
    case 2
        [ira.finalresponses] = DO_CALCULATION_IRA_BBE (input);
        % See Sequence Diagram Block B.4.2
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% clear calculation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
spec_nBC = length(input.specification.IRA.BC);
nsteps = input.specification.IRA.nsteps;
scale = calculation.IRA.scale;
specDraws = input.specification.model.draws;
specAlphaDraws = input.specification.IRA.alpha_draws;
for bc_i = 1:spec_nBC;
    ira.finalresponses(bc_i).no = size(ira.finalresponses(bc_i).response,3);
    %GLOG (sprintf('Accepted Responses for BC-%d: %d',bc_i, ira.finalresponses(bc_i).no),1);
    % transform back to levels
    for i=1:size(ira.finalresponses(bc_i).response,1);
        if input.specification.IRA.var_index_sr(i,2)==4
            ira.finalresponses(bc_i).response(i,:,:) =...
                exp(ira.finalresponses(bc_i).response(i,:,:))-ones(1, nsteps,ira.finalresponses(bc_i).no);
        elseif input.specification.IRA.var_index_sr(i,2)==5
            ira.finalresponses(bc_i).response(i,:,:) = ...
                exp(cumsum(ira.finalresponses(bc_i).response(i,:,:),2))-ones(1,...
                nsteps,ira.finalresponses(bc_i).no);
        end
    end;
    % FINAL RESPONSES
    ira.finalresponses(bc_i).response = sort(ira.finalresponses(bc_i).response,3);
    ira.finalresponses(bc_i).fimpResponse = median(ira.finalresponses(bc_i).response,3);
    if ira.finalresponses(bc_i).no > 0
        % ERROR BANDS
        ira.finalresponses(bc_i).lowerErrorBand = ira.finalresponses(bc_i).response(:,:,floor(0.16*ira.finalresponses(bc_i).no));
        ira.finalresponses(bc_i).upperErrorBand = ira.finalresponses(bc_i).response(:,:,floor(0.84*ira.finalresponses(bc_i).no));
        % concatenate the estimate and confidence bounds
        ira.finalresponses(bc_i).collRespMat =...
            cat(3,ira.finalresponses(bc_i).lowerErrorBand, ...
            ira.finalresponses(bc_i).fimpResponse, ...
            ira.finalresponses(bc_i).upperErrorBand);
        % transform scale to std
        ira.finalresponses(bc_i).collRespMat =...
            ira.finalresponses(bc_i).collRespMat ./ repmat(scale',[1 nsteps 3]) ;
    end;
end;
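The sort/median/floor indexing above computes pointwise 16%/84% error bands across the accepted impulse-response draws. A compact NumPy version of the same operation (the variables x horizons x draws array layout is assumed):

```python
import numpy as np

def error_bands(responses, lower=0.16, upper=0.84):
    """Pointwise error bands from impulse-response draws laid out as
    (variables x horizons x draws), mirroring the sort / median /
    floor-index construction above."""
    resp = np.sort(responses, axis=2)          # sort draws per (var, step)
    n = resp.shape[2]
    lo = resp[:, :, int(np.floor(lower * n))]  # 16% band
    med = np.median(resp, axis=2)              # pointwise median response
    hi = resp[:, :, int(np.floor(upper * n))]  # 84% band
    return lo, med, hi
```

Note these are pointwise quantiles per horizon, not joint bands over the whole response path, which is exactly the convention of the MATLAB code above.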
%%%%%%**********************************************************%%%%%
%%%%%%          Bayesian FAVAR Code August 26th                 %%%%%
%%%%%%**********************************************************%%%%%
%%%%%%          DO_CALCULATION_IRA_UHLIG                        %%%%%
%%%%%%          Uhlig (2005) - Sign Restriction                 %%%%%
%%%%%%          see Sequence Diagram Block B.4.1                %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% Impulse Response Analysis with Uhlig (2005) Sign
%%%%%% Restrictions. Returns finalresponses, which is a vector
%%%%%% with the length of nBlockCriteria. Responses are
%%%%%% checked to satisfy each block criterion; the criteria are
%%%%%% set in input.specification.IRA.BC.
%%%%%% Accepted responses are added to
%%%%%%     finalresponses(bc_i).response
%%%%%% where bc_i is the block criterion satisfied by
%%%%%% the responses.
function [finalresponses] = DO_CALCULATION_IRA_UHLIG (input)
%function [finalresponses] = DO_CALCULATION_IRA_UHLIG (input)
% declare calculation as global structure
global calculation;
GLOG ('Starting Impulse Responses Uhlig (2005)',2);
specDraws = input.specification.model.draws;
specK = input.specification.model.K;
specM = input.specification.dim.M;
specNSteps = input.specification.IRA.nsteps;
spec_nBC = length(input.specification.IRA.BC);
specAlphaDraws = input.specification.IRA.alpha_draws;
specZ = input.specification.IRA.sr_horizon;
%prepare finalresponses
fr_current_length = zeros (spec_nBC,1);
% fr_current_length is the current length of the finalresponses matrix and also its initial size
fr_add_length = zeros (spec_nBC,1);
% if the current length of finalresponses is not enough, add fr_add_length more slices
last_slice = zeros (spec_nBC,1);
for bc_i = 1:spec_nBC;
    fr_current_length (bc_i) = ceil(0.03*(specDraws*specAlphaDraws));
    % set fr_current_length according to your block criteria:
    % the more restrictive a block criterion is, the fewer
    % impulse responses satisfy the restriction, and the smaller
    % the initial value of fr_current_length should be
    fr_add_length (bc_i) = ceil(0.01*(specDraws*specAlphaDraws));
    last_slice (bc_i) = 0;
    % initialise finalresponses(bc_i).response with initial size
    finalresponses (bc_i).response (:,:,fr_current_length (bc_i)) = zeros(size(input.specification.IRA.var_index_sr,1), specNSteps);
end;
% /todo CHECK SCALE FOR SR!!!
calculation.IRA.scale =...
    std(input.data(:,input.specification.IRA.var_index_sr(:,1)));
calculation.IRA.choleskyRespMat = zeros(specDraws,specK+specM,specK+specM,specNSteps);
% vector of initial impulse SHOCK: 25 basis points of FFR
% "CONTRACTIONARY MONETARY POLICY"
% /TODO : initial impulse vector to be premultiplied
calculation.IRA.initialImpulse = diag([zeros(1,specK+specM-1) .25]);
% customize calculation.Lam_collect
calculation.Lam_collect =...
    calculation.Lam_collect(:,input.specification.IRA.var_index_sr(:,1),:);
% SELECT VARIABLES FOR IRA
calculation.Lam_collect = permute(calculation.Lam_collect,[2 3 1]);
%%%% this is [X * F * draws]
alpha_tilde = zeros(specK+specM,1);
alpha = zeros(specK+specM,1);
%each draw of alpha has size (nvar,1) and norm=1
for i=1:specDraws<br />
GLOG (sprintf(’Impulse-Responses for Draw: %d’,i),2);<br />
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Begin Calculation <strong>of</strong> Cholesky Responses Matrix<br />
Phi_v = squeeze(calculation.Phi_bar_collect(i,:,:,:));<br />
Q_v = squeeze(calculation.QQ_bar_collect(i,:,:));<br />
%<br />
chol_Q = chol(Q_v);<br />
norm_mat = diag(diag(chol_Q));<br />
chol_Q = inv(norm_mat)*chol_Q; % SMAT<br />
% chol_Q is upper triangular decomposition <strong>of</strong> omega;<br />
% gives matrix <strong>of</strong> initial shocks with 1’s on <strong>the</strong> diagonal<br />
chol_Q = calculation.IRA.initialImpulse*chol_Q;<br />
calculation.IRA.choleskyRespMat(i,:,:,:) = impulsdtrf(Phi_v,chol_Q,specNSteps);<br />
%%%% End Calculation <strong>of</strong> Cholesky Responses Matrix<br />
for nalpha = 1:input.specification.IRA.alpha_draws;<br />
%for each Cholesky, draw alpha vec<strong>to</strong>rs <strong>of</strong> norm unity<br />
alpha_tilde=randn(specK+specM,1); % standard Gaussian draws<br />
alpha=(1/norm(alpha_tilde)) * alpha_tilde;<br />
candidateA = zeros(specK+specM,1,specNSteps);<br />
for j=1:specNSteps; %creating structural impulse responses<br />
candidateA(:,:,j) =...<br />
squeeze(calculation.IRA.choleskyRespMat(i,:,:,j)) * alpha;<br />
end; %structural responses combine Cholesky with alpha draws<br />
candidateB = calculation.Lam_collect(:,:,i) * squeeze(candidateA);<br />
    for bc_i = 1:spec_nBC;
        check_sign_restriction_result = 0;
        check_sign_restriction_result = ...
            DO_CALCULATION_IRA_UHLIG_CHECK_SIGNRESTRICTION (input,candidateB,bc_i);
        if abs(check_sign_restriction_result) == 1
            last_slice (bc_i) = last_slice (bc_i) + 1;
            GLOG (sprintf('Response accepted. This is the %dth accepted Response for BC-%d', last_slice (bc_i), bc_i),1);
            finalresponses (bc_i).response (:,:,last_slice (bc_i)) = ...
                check_sign_restriction_result * candidateB;
            if fr_current_length (bc_i) == last_slice (bc_i)
                %increase length of the preallocated response buffer
                fr_current_length (bc_i) = ...
                    fr_current_length (bc_i) + fr_add_length (bc_i);
                finalresponses (bc_i).response (:,:,fr_current_length (bc_i)) = ...
                    zeros(size(input.specification.IRA.var_index_sr,1), input.specification.IRA.nsteps);
            end;
        else
            % CAN NOT ACCEPT RESPONSE
        end;
    end;
end; %alphadraws
for bc_i = 1:spec_nBC;
    GLOG (sprintf('Number of accepted Responses for BC-%d: %d',bc_i, last_slice (bc_i)),2);
    finalresponses (bc_i).response = finalresponses (bc_i).response (:,:,1:last_slice (bc_i));
end;
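The acceptance loop preallocates storage for accepted responses, grows it in chunks whenever it fills up, and trims it to the accepted count at the end. A small Python sketch of that amortized-growth pattern (chunk size, shapes, and the `accept` callback are illustrative stand-ins):

```python
import numpy as np

def collect_accepted(candidates, accept, chunk=4):
    """Store accepted candidates in a buffer that grows by `chunk`
    slots when full; trim to the accepted count at the end."""
    n_vars = candidates[0].shape[0]
    buf = np.zeros((chunk, n_vars))
    count = 0
    for cand in candidates:
        sign = accept(cand)          # +1 / -1 accept, 0 reject
        if sign == 0:
            continue
        if count == buf.shape[0]:    # buffer full: extend by one chunk
            buf = np.vstack([buf, np.zeros((chunk, n_vars))])
        buf[count] = sign * cand     # flip sign when the mirror restriction holds
        count += 1
    return buf[:count]

# toy accept rule: keep all-positive draws as-is, flip all-negative ones
accept = lambda c: 1 if (c > 0).all() else (-1 if (c < 0).all() else 0)
res = collect_accepted(
    [np.full(3, -1.0), np.full(3, 2.0), np.zeros(3), np.full(3, -3.0)],
    accept, chunk=2)
```

Growing in chunks rather than one slot at a time avoids repeated reallocation when many draws are accepted.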
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_CALCULATION_IRA_UHLIG_CHECK_SIGNRESTRICTION %%%%%
%%%%%% %%%%%
%%%%%% see Sequence Diagram Block B.4.1.1 %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%% This Function checks if a given Response satisfies the
%%%%%% block criteria bc_i. In that case the result
%%%%%% is 1/-1. Otherwise 0.
function [check_result] = DO_CALCULATION_IRA_UHLIG_CHECK_SIGNRESTRICTION (input,candidate,bc_i);
%function [check_result] = DO_CALCULATION_IRA_UHLIG_CHECK_SIGNRESTRICTION (input,candidate,bc_i);
%%%%% set parameters
specZ = input.specification.IRA.sr_horizon;
P_INDEX = input.specification.IRA.BC(bc_i).priceIndex;
M_INDEX = input.specification.IRA.BC(bc_i).moneyIndex;
I_INDEX = input.specification.IRA.BC(bc_i).interestIndex;
%%%%% check sign restrictions: Price - Money - Interestrate
if all(candidate(P_INDEX (1:(length(P_INDEX))),1:specZ) < 0) &...
        all(candidate(M_INDEX (1:(length(M_INDEX))),1:specZ) < 0) &...
        all(candidate(I_INDEX (1:(length(I_INDEX))),1:specZ) > 0)
    check_result = 1;
elseif all(candidate(P_INDEX (1:(length(P_INDEX))),1:specZ) > 0) &...
        all(candidate(M_INDEX (1:(length(M_INDEX))),1:specZ) > 0) &...
        all(candidate(I_INDEX (1:(length(I_INDEX))),1:specZ) < 0)
    check_result = -1; %According to B. Mackowiak
else
    check_result = 0;
end;
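The acceptance rule can be stated compactly: a candidate is accepted with sign +1 if prices and monetary aggregates respond negatively and the interest rate positively over the first specZ horizons, accepted with sign -1 if the exact mirror image holds (the response is then flipped before storage), and rejected otherwise. A hedged NumPy sketch of this logic (index sets, horizon, and the toy candidate are illustrative):

```python
import numpy as np

def check_sign_restriction(candidate, p_idx, m_idx, i_idx, horizon):
    """Return +1 if the candidate satisfies the sign restrictions
    (prices down, money down, interest rate up over `horizon` steps),
    -1 if the mirrored restrictions hold, and 0 otherwise."""
    P = candidate[p_idx, :horizon]
    M = candidate[m_idx, :horizon]
    I = candidate[i_idx, :horizon]
    if (P < 0).all() and (M < 0).all() and (I > 0).all():
        return 1
    if (P > 0).all() and (M > 0).all() and (I < 0).all():
        return -1  # caller flips the candidate's sign before storing it
    return 0

# toy candidate: rows 0-1 prices, row 2 money, row 3 interest rate
cand = np.array([[-.1, -.2], [-.3, -.1], [-.2, -.4], [.5, .3]])
```

Accepting the mirror image with a sign flip doubles the effective acceptance rate, since the uniform draw on the sphere is symmetric in alpha and -alpha.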
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% GLOG %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% This Function is used to log an output string depending
%%%%% on its log level glog_type. The global variable
%%%%% GLOG_MODE signifies the global minimum level for
%%%%% outputs and is set directly in BAYESIAN_FAVAR.
%%%%% If the glog_type of an output string is less than
%%%%% GLOG_MODE the output is ignored.
%%%%% Otherwise GLOG uses the disp() function to
%%%%% display the output string.
function GLOG (glog_text, glog_type)
%function GLOG (glog_text, glog_type)
global GLOG_MODE;
if glog_type >= GLOG_MODE
    disp (glog_text);
end
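GLOG is a minimal log-level filter: a message is emitted only if its level reaches a global threshold. The same idea in a few lines of Python (names are illustrative; Python's standard `logging` module provides this filtering natively):

```python
GLOG_MODE = 1  # global minimum level; messages below it are ignored

def glog(text, level, sink=print):
    """Emit `text` via `sink` only if `level` reaches the threshold."""
    if level >= GLOG_MODE:
        sink(text)

messages = []
glog("accepted draw", 1, sink=messages.append)  # level 1 >= 1: kept
glog("debug detail", 0, sink=messages.append)   # level 0 < 1: dropped
```

Routing output through a single gate like this lets the verbosity of a long MCMC run be changed in one place.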
%%%%%%**********************************************************%%%%%
%%%%%% Bayesian FAVAR Code August 26th %%%%%
%%%%%%**********************************************************%%%%%
%%%%%% DO_RESULTS %%%%%
%%%%%% see Sequence Diagram Block C %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Plots the results of all finalresponses containing more
%%%%% than zero elements.
function DO_RESULTS (input,results)
%function DO_RESULTS (input,results)
spec_nBC = length(input.specification.IRA.BC);
for bc_i = 1:spec_nBC;
    if results.ira.finalresponses(bc_i).no > 0
        %%% BBE
        figure(bc_i)
        for i=1:20
            subplot(5,4,i)
            plot(input.specification.IRA.tstep,...
                input.specification.IRA.zeroline,'-k', ...
                input.specification.IRA.tstep,...
                squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,1)),'b--', ...
                input.specification.IRA.tstep,...
                squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,2)),'r-',...
                input.specification.IRA.tstep,...
                squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,3)),'b--', ...
                'LineWidth',2);
            set(gca,'XLim',[0 input.specification.IRA.nsteps],'XTick',...
                [0 input.specification.IRA.nsteps],'FontSize',10);
            title(input.specification.VARNAMES_BBE( i));
            % input.specification.IRA.var_index_sr(i,1)));
        end;
    end;
end;
%%% SR
%figure( (bc_i-1)+1 )
%for i=1:20
% subplot(5,4,i)
% plot(input.specification.IRA.tstep,input.specification.IRA.zeroline,...
%'-k',input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,1)),'b--', ...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,2)),'r-',...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,3)),...
%'b--','LineWidth',2); axis tight;grid on;
%title(input.specification.ALL_VARNAMES...
%( input.specification.IRA.var_index_sr(i,1)));
%end;
%figure((bc_i -1) +2)
%for i=21:40
% subplot(5,4,i-20)
% plot(input.specification.IRA.tstep,input.specification.IRA.zeroline,...
%'-k',input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,1)),'b--', ...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,2)),'r-',...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,3)),...
%'b--','LineWidth',2); axis tight;grid on;
% title(input.specification.ALL_VARNAMES( input.specification.IRA.var_index_sr(i,1)));
%end;
%figure((bc_i -1) +3 )
%for i=41:52
% subplot(5,4,i-40)
% plot(input.specification.IRA.tstep,input.specification.IRA.zeroline,...
%'-k',input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,1)),'b--', ...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,2)),'r-',...
% input.specification.IRA.tstep,...
%squeeze(results.ira.finalresponses(bc_i).collRespMat(i,:,3)),...
%'b--','LineWidth',2); axis tight;grid on;
% title(input.specification.ALL_VARNAMES( input.specification.IRA.var_index_sr(i,1)));
%end;
function DO_RESULTS_PLF
clc;
k = [2,5,7];
for i = 1:length (k)
    clear calculation;
    if k(i) == 2
        load DATA_050824_D5000_B1500_K2_C;
    elseif k(i) == 5
        load DATA_050824_D3500_B500_K5_C;
        calculation.F_bar_collect = calculation.F_bar_collect (2001:3000,:,:);
    elseif k(i) == 7
        load DATA_050826_D3000_B2000_K7_C;
    end
    disp ('data loaded');
    t = 1959.1667:1/12:2001.6667;
    l = size(calculation.F_bar_collect,1);
    lh = l/2;
    figure (k(i))
    for j = 1:k(i)
        meanA = mean(calculation.F_bar_collect (1:lh,:,j));
        meanB = mean(calculation.F_bar_collect (lh+1:l,:,j));
        subplot(k(i),1,j); plot (t, meanA,'g-',t,meanB,'r--', 'LineWidth',2);
        grid on;
        axis tight;
        title ( sprintf('Convergence Plot for Factor %d',j) );
        if j == 1
            legend('First half','Second half',0);
        end
    end
    saveas(gcf, sprintf ( 'FIG_K%dD%d' ,k(i),l) , 'fig');
    saveas(gcf, sprintf ( 'FIG_K%dD%d' ,k(i),l) , 'jpg');
end;
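DO_RESULTS_PLF implements an informal convergence diagnostic: the posterior draws of each factor are split into two halves and the half-specific means are plotted on top of each other; if the sampler has converged, the two curves should nearly coincide. A NumPy sketch of the underlying computation (synthetic draws stand in for F_bar_collect):

```python
import numpy as np

def split_half_means(draws):
    """Split MCMC draws (n_draws x T) into halves and return the
    per-period mean of each half; similar curves suggest convergence."""
    half = draws.shape[0] // 2
    return draws[:half].mean(axis=0), draws[half:2 * half].mean(axis=0)

rng = np.random.default_rng(0)
# synthetic "factor draws": 1000 draws of a length-50 trajectory
draws = rng.standard_normal((1000, 50)) + np.linspace(0.0, 1.0, 50)
first, second = split_half_means(draws)
max_gap = np.abs(first - second).max()  # small if the chain is stable
```

This is a visual cousin of formal diagnostics such as Geweke's test, which compares early and late segments of a chain statistically rather than by eye.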
function DO_RESULTS_PLM
clc;
clear;
k = [2,5,7];
for i = 1:length (k)
    clear input;
    clear results;
    disp (sprintf('ready to load data for K%d',k(i)));
    %better use this code as a function
    %directly called from DO_RESULTS
    if k(i) == 2
        load FOR_MESH_K2_RESULTS; %name of the mat file where input and results are stored
        d = size(squeeze( (results.ira.finalresponses(1).response(1,:,:))),2);
        draws = 1:10:d; %show every xth accepted draw response
    elseif k(i) == 5
        load FOR_MESH_K5_RESULTS;
        d = size(squeeze( (results.ira.finalresponses(1).response(1,:,:))),2);
        draws = 1:10:d;
    elseif k(i) == 7
        load FOR_MESH_K7_RESULTS;
        d = size(squeeze( (results.ira.finalresponses(1).response(1,:,:))),2);
        draws = 1:d;
    end
    disp (sprintf('K%d data ready',k(i)));
    n = input.specification.IRA.nsteps;
    srs = 1:size(squeeze( (results.ira.finalresponses(1).response(1,:,draws))),1);
    smax = size(squeeze( (results.ira.finalresponses(1).response(1,:,draws))),1);
    figure(k(i))
    axis normal;
    grid on;
    axis tight;
    subplot(2,2,1);
    mesh (squeeze(results.ira.finalresponses(1).response(1,srs,draws)));
    ylabel('Horizon');
    xlabel('Accepted Response Draws');
    zlabel('Response Scale');
    title ('FFR');
    subplot(2,2,2);
    mesh (squeeze(results.ira.finalresponses(1).response(9,srs,draws)));
    zlabel('Response Scale');
    title ('COMMODITY PRICE INDEX');
    subplot(2,2,3);
    mesh (squeeze(results.ira.finalresponses(1).response(14,srs,draws)));
    zlabel('Response Scale');
    title ('UNEMPLOYMENT');
    subplot(2,2,4);
    mesh (squeeze(results.ira.finalresponses(1).response(10,srs,draws)));
    zlabel('Response Scale');
    title ('CAPACITY UTIL RATE');
    saveas(gcf, sprintf ( 'FIG_MESH_K%d' ,k(i)) , 'fig');
    saveas(gcf, sprintf ( 'FIG_MESH_K%d' ,k(i)) , 'jpg');
end;
Declaration of Authorship

I, Pooyan Amir Ahmadi, hereby declare that I have prepared this thesis
independently, using only the sources and aids listed.

Pooyan Amir Ahmadi
Berlin, 26 August 2005