
ECONOMETRIC METHODS II
TA session 1
MATLAB Intro: Simulation of VAR(p) processes

Fernando Pérez Forero*
April 19th, 2012

1 Introduction

In this first session we will cover the simulation of Vector Autoregressive (VAR) processes of order p. In addition, we will cover how to compute Impulse Response Functions (IRF) and the Forecast Error Variance Decomposition (FEVD). It is assumed that students are familiar with the general concepts of Time Series Econometrics and Matrix Algebra, i.e. we will not state many definitions explicitly, since we assume that they were learnt by heart in the past.

This session will be particularly useful for those students who are not familiar with MATLAB.¹ It is important to emphasize that when we write code, it needs to be as general as possible, so that it is easy to consider different parametrizations in the future. However, there is no free lunch: this generality comes at the cost of investing a large amount of time in debugging the code. On the other hand, MATLAB is easy to use relative to other languages such as C++ or Python. In particular, since we will not perform object-oriented programming, it will not be necessary to declare variables before using them.

Most of the material covered in this session can be found in Lütkepohl (2005) (L) and Hamilton (1994) (H). In order to provide a better understanding of each step, I will indicate the equation number of each reference as follows: (Book, equation).

* PhD student. Department of Economics, Universitat Pompeu Fabra, Ramon Trias Fargas 25–27, 08005 Barcelona, Spain (email: fernandojose.perez@upf.edu)
¹ A very nice MATLAB primer by Winistorfer and Canova (2008) can also be found at http://www.crei.cat/people/canova/Matlab-intro%281%29.pdf


2 Preliminaries

2.1 VAR(p) process

Consider a VAR(p) process (L,2.1.1),

$$ y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t, \qquad t = 0, \pm 1, \pm 2, \ldots \tag{2.1} $$

where $y_t$, $u_t$ and $v$ are $(K \times 1)$ vectors and the $A_i$ are $(K \times K)$ matrices for each $i = 1, \ldots, p$. In addition, the error term $u_t$ is a white noise random vector such that $E(u_t) = 0$, $E(u_t u_t') = \Sigma_u$ and $E(u_t u_s') = 0$ for $s \neq t$, where $\Sigma_u$ is a $(K \times K)$ positive definite matrix.

It is useful to re-write (2.1) in its companion form (L,2.1.8),

$$ Y_t = \nu + \mathbf{A} Y_{t-1} + U_t \tag{2.2} $$

where $Y_t = \left[ y_t', y_{t-1}', \ldots, y_{t-p+1}' \right]'$, $\nu = \left[ v', 0', \ldots, 0' \right]'$ and $U_t = \left[ u_t', 0', \ldots, 0' \right]'$ are $(Kp \times 1)$ vectors and

$$ \mathbf{A} = \begin{bmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I_K & 0 & \cdots & 0 & 0 \\ 0 & I_K & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_K & 0 \end{bmatrix} $$

is a $(Kp \times Kp)$ matrix. The model (2.2) is said to be stable iff $\max\left( \left| \lambda_{\mathbf{A}} \right| \right) < 1$, where $\lambda_{\mathbf{A}}$ denotes the vector of eigenvalues of $\mathbf{A}$. The stability condition implies that the model can be re-written in its Moving-Average (MA) representation (L,2.1.13):

$$ y_t = J Y_t = J\mu + J \sum_{i=0}^{\infty} \mathbf{A}^i U_{t-i} \tag{2.3} $$

where $\mu = E(Y_t) = \left( I_{Kp} - \mathbf{A} \right)^{-1} \nu$ is the unconditional expectation and $J = \begin{bmatrix} I_K & 0 & \cdots & 0 \end{bmatrix}$ is a $(K \times Kp)$ selection matrix. Notice also that $U_t = J' J U_t$ and $u_t = J U_t$, thus

$$ y_t = J Y_t = J\mu + \sum_{i=0}^{\infty} J \mathbf{A}^i J' J U_{t-i} $$

where $\Phi_i = J \mathbf{A}^i J'$ denotes the $i$-th matrix lag of the MA representation (L,2.1.17):

$$ y_t = J\mu + \sum_{i=0}^{\infty} J \mathbf{A}^i J' u_{t-i} \tag{2.4} $$
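For example, with $p = 2$ the companion form simply stacks $y_t$ and $y_{t-1}$:

$$ \begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix} = \begin{bmatrix} v \\ 0 \end{bmatrix} + \begin{bmatrix} A_1 & A_2 \\ I_K & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \end{bmatrix} + \begin{bmatrix} u_t \\ 0 \end{bmatrix} $$

so that the first block row reproduces (2.1) and the second block row is just the identity $y_{t-1} = y_{t-1}$.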


2.2 Impulse Response Function (IRF)

In multivariate systems one may be interested in exploring the dynamic propagation of innovations across variables. For that purpose there exists the impulse response function. Basically, one can study the effect of a one-time innovation in a particular variable n on another variable m over time. How long the effect of that type of innovation lasts depends exclusively on the structure of the system.

In this section we compute the Impulse Response Function for the model (2.1). We proceed as follows: if the matrix $\Sigma_u$ is positive definite, it can be decomposed such that $\Sigma_u = P P'$, where $P$ is a lower triangular nonsingular matrix with positive diagonal elements. Moreover, we can re-write (2.4) as (L,2.3.33):

$$ y_t = J\mu + \sum_{i=0}^{\infty} \Theta_i \omega_{t-i} \tag{2.5} $$

where $\Theta_i = J \mathbf{A}^i J' P$ is a $(K \times K)$ matrix and $\omega_t = P^{-1} u_t$ is a white noise process with covariance matrix equal to the identity matrix, i.e. $\Sigma_\omega = I_K$.

Thus, the impulse response function is

$$ \frac{\partial y_{t+i}}{\partial \omega_t'} = \Theta_i, \qquad i = 0, 1, 2, \ldots \tag{2.6} $$

where $\Theta_i = \left[ \theta_{jk,i} \right]$ is such that $\theta_{jk,i}$ denotes the response of variable $j$ to a shock in variable $k$ after $i$ periods.
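A minimal MATLAB sketch of this orthogonalization (the numbers in Sigma_u below are arbitrary illustrative values, not part of the exercise):

Sigma_u=[1.0 0.5; 0.5 2.0];          % an arbitrary positive definite covariance matrix
P=chol(Sigma_u)';                    % lower triangular factor, Sigma_u = P*P'
U=mvnrnd(zeros(1,2),Sigma_u,1e5)';   % simulated draws of u_t
W=P\U;                               % omega_t = inv(P)*u_t
disp(cov(W'))                        % approximately the identity matrix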

2.3 Forecast Error Variance Decomposition (FEVD)

We have shown how to compute the responses of variables to innovations in the system (2.5). It is also possible to perform a forecasting exercise using the latter system. Denote the forecast error $h$ periods ahead as (L,2.3.34):

$$ e_{t+h} = y_{t+h} - y_t(h) = \sum_{i=0}^{h-1} \Theta_i \omega_{t+h-i} \tag{2.7} $$

Denote the contribution of the innovation in variable $k$ to the forecast error of variable $j$, $h$ periods ahead, as (L,2.3.37):

$$ \omega_{jk,h} = \frac{\sum_{i=0}^{h-1} \left( e_j' \Theta_i e_k \right)^2}{\mathrm{MSE}\left[ y_{j,t}(h) \right]} \tag{2.8} $$

where $e_j$ denotes the $j$-th column of $I_K$ and

$$ \mathrm{MSE}\left[ y_{j,t}(h) \right] = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}^2 \tag{2.9} $$
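For instance, at horizon $h = 1$ only $\Theta_0 = P$ enters, so the share of shock $k$ in the one-step-ahead forecast error variance of variable $j$ reduces to $\omega_{jk,1} = p_{jk}^2 / \sum_{m=1}^{K} p_{jm}^2$, where $p_{jk}$ is the $(j,k)$ element of $P$. In particular, $\omega_{11,1} = 1$ because $P$ is lower triangular, so at the one-step horizon the first variable's forecast error variance is attributed entirely to its own shock.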


3 Introducing the model in MATLAB

In this section we simulate data using the system (2.1). We set the matrices $A_i$, $i = 1, \ldots, p$, such that the VAR(p) is stationary.

3.1 Data simulation

We set $K = 4$, $p = 4$ and the number of observations $T = 100$.

p=4; % Lag-order
T=100; % Observations
K=4; % Variables

Then, we set the matrix $\Sigma_u$ such that it is positive definite:

scale=2*rand(K,1)+1e-30;
Sigma_U=0.5*rand(K,K)+diag(scale);
Sigma_U=Sigma_U'*Sigma_U;
if all(eig((Sigma_U+Sigma_U')/2) > 0)
    disp('Sigma_U is positive definite')
else
    error('Sigma_U must be positive definite')
end
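An alternative test (a sketch; chol_flag is just an illustrative name) uses the second output of chol, which is zero only when the factorization succeeds, i.e. when the matrix is positive definite:

[~,chol_flag]=chol(Sigma_U);
if chol_flag==0
    disp('Sigma_U is positive definite (Cholesky factorization succeeded)')
end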

We then set the lag matrices $A_i$ $(K \times K)$, for each $i = 1, \ldots, p$, in a 3-dimensional array:

Alag=zeros(K,K,p);
lambda=2.5;
for k=1:p
    Alag(:,:,k)=(lambda^(-k))*rand(K,K)-((2*lambda)^(-k))*ones(K,K);
end

With all these ingredients, we proceed to simulate $T = 100$ observations of the process (2.1), assuming $u_t \sim N(0, \Sigma_u)$ and $p$ initial conditions $y_{-p+1}, \ldots, y_{-1}, y_0$ that come from the same distribution. In addition, the intercept $v$ is a vector of ones:

Ylag=zeros(K,p);                        % Ylag(:,k) stores y_{t-k}
for k=1:p
    Ylag(:,k)=mvnrnd(zeros(1,K),Sigma_U)';
end
Y=Ylag;
v=ones(K,1);
for i=1:T
    temp=v+mvnrnd(zeros(1,K),Sigma_U)';
    for k=1:p
        temp=temp+Alag(:,:,k)*Ylag(:,k);
    end
    Y=[Y,temp];
    for k=p:-1:2
        Ylag(:,k)=Ylag(:,k-1);          % shift the lags back one period
    end
    Ylag(:,1)=temp;                     % the newest observation becomes y_{t-1}
end
clear Ylag temp
A_coeff=v;
for k=1:p
    A_coeff=[A_coeff,Alag(:,:,k)];
end
Y=Y(:,p+1:end);
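As a quick sanity check (a sketch; mu_y is just an illustrative name), the sample mean of the simulated series should be roughly in line with the unconditional mean implied by (2.1), $\mu_y = (I_K - A_1 - \cdots - A_p)^{-1} v$; with only T = 100 observations the two columns will differ somewhat, but they should be of the same order of magnitude:

mu_y=(eye(K)-sum(Alag,3))\v;   % (I_K - A_1 - ... - A_p)^(-1) * v
disp([mu_y, mean(Y,2)])        % population vs. sample mean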

We then plot the simulated data in Figure 3.1:

FS=15;
LW=2;
gr_size2=ceil(K/2);
figure(1)
set(0,'DefaultAxesColorOrder',[0 0 1],...
    'DefaultAxesLineStyleOrder','-|--|:')
set(gcf,'Color',[1 1 1])
set(gcf,'defaultaxesfontsize',FS)
for k=1:K
    subplot(gr_size2,2,k)
    plot(Y(k,:),'b','Linewidth',LW)
    title(sprintf('Y_%d',k))
end


Figure 3.1: Simulated Data

The next step is to generate the companion form (2.2) and check for stationarity given the parameter values:

F1=A_coeff(:,2:end);
Q1=Sigma_U;
if p>1
    % Matrix F - [H,10.1.10]
    F2=cell(1,p-1);
    [F2{:}]=deal(sparse(eye(K)));
    F2=blkdiag(F2{:});
    F2=full(F2);
    F3=zeros(K*(p-1),K);
    F=[F1;F2,F3];
    % Var-cov Q - [H,10.1.11]
    Q2=cell(1,p-1);
    [Q2{:}]=deal(sparse(zeros(K,K)));
    Q2=blkdiag(Q2{:});


    Q2=full(Q2);
    Q=blkdiag(Q1,Q2);
    clear F1 F2 F3
else
    F=F1;
    Q=Q1;
    clear F1 Q1
end

if max(abs(eig(F)))<1
    disp('The VAR(p) process is stationary')
else
    error('The VAR(p) process is not stationary')
end

Given stationarity, the unconditional covariance matrix of $Y_t$ solves $\Sigma_Y = F \Sigma_Y F' + Q$, and we approximate it by iterating this equation until convergence:

tic
epsilon=1e-10;             % convergence criterion
Cap_Sigma1=Q;              % initial guess for Sigma_Y
Cap_Sigma0=zeros(K*p,K*p);
diff=1;
diff1=[];
iter=0;
while diff>epsilon

    iter=iter+1;
    Cap_Sigma0=Cap_Sigma1;
    Cap_Sigma1=F*Cap_Sigma0*F'+Q;


    diff=max(max(abs(Cap_Sigma1-Cap_Sigma0)));
    diff1=[diff1,diff];
    figure(2)
    plot(diff1)
    title('Evolution of convergence','FontSize',14)
end
sprintf('Convergence achieved after %d iterations',iter)
% Alternatively, solve the discrete Lyapunov equation
Cap_Sigma1l=dlyap(F,Q);
% Compare results
diff2=max(max(abs(Cap_Sigma1l-Cap_Sigma1)));
toc
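Since $\mathrm{vec}(F \Sigma F' + Q) = (F \otimes F)\,\mathrm{vec}(\Sigma) + \mathrm{vec}(Q)$, the same fixed point can also be obtained in closed form by vectorizing the equation. A small sketch (Cap_Sigma2 and diff3 are just illustrative names):

vec_Sigma=(eye((K*p)^2)-kron(F,F))\Q(:);   % vec(Sigma_Y) = (I - kron(F,F))^(-1) vec(Q)
Cap_Sigma2=reshape(vec_Sigma,K*p,K*p);
diff3=max(max(abs(Cap_Sigma2-Cap_Sigma1)))  % should again be negligible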

Figure 3.2 shows that convergence is roughly achieved after iterating this equation 7 times. However, since we impose a convergence criterion of $\varepsilon = 1 \times 10^{-10}$, the loop converges after 20 iterations. Moreover, in this application diff2 is less than $1 \times 10^{-10}$, which shows that the loop produces a good approximation. We now proceed to the next section for a more interesting analysis.

Figure 3.2: Evolution of convergence of Y


3.2 Impulse Response Function (IRF)

Once we have generated the companion form components in (2.2), computing Impulse Responses is extremely simple. First, we set a finite horizon $h$ and we generate the matrices $P$ and $J$.

h=24; % Horizon
P=chol(Sigma_U); % Cholesky decomposition
P=P'; % Following the notation of Lutkepohl - Section 3.7 : U=P*E
capJ=zeros(K,K*p);
capJ(1:K,1:K)=eye(K);

Then we construct the matrices $\Theta_i$ $(K \times K)$ according to equation (2.6) and store all of them in a 3-dimensional object:

Iresp=zeros(K,K,h);
Iresp(:,:,1)=P;
for j=1:h-1
    temp=(F^j);
    Iresp(:,:,j+1)=capJ*temp*capJ'*P;
end
clear temp
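As a cross-check (a sketch; Phi and check are just illustrative names), the same matrices can be obtained without the companion form, using the recursion $\Phi_0 = I_K$, $\Phi_i = \sum_{j=1}^{\min(i,p)} \Phi_{i-j} A_j$, so that $\Theta_i = \Phi_i P$:

Phi=zeros(K,K,h);
Phi(:,:,1)=eye(K);               % Phi_0
for i=1:h-1
    for j=1:min(i,p)
        Phi(:,:,i+1)=Phi(:,:,i+1)+Phi(:,:,i+1-j)*Alag(:,:,j);
    end
end
check=0;
for i=1:h
    check=max(check,max(max(abs(Phi(:,:,i)*P-Iresp(:,:,i)))));
end
disp(check)                      % should be numerically zero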

Finally, we plot the Impulse Responses in Figure 3.3:

FS=15;
LW=2;
figure(3)
set(0,'DefaultAxesColorOrder',[0 0 1],...
    'DefaultAxesLineStyleOrder','-|--|:')
set(gcf,'Color',[1 1 1])
set(gcf,'defaultaxesfontsize',FS-5)
for i=1:K
    for j=1:K
        subplot(K,K,(j-1)*K+i)
        plot(squeeze(Iresp(i,j,:)),'Linewidth',LW)
        hold on
        plot(zeros(1,h),'k')
        title(sprintf('Y_{%d} to U_{%d}',i,j))
        xlim([1 h])
        set(gca,'XTick',0:ceil(h/4):h)
    end
end

Figure 3.3: Impulse responses

3.3 Forecast Error Variance Decomposition (FEVD)

Once we have computed the Impulse Responses, we can now proceed to compute the contribution of each shock to the forecast error variance of each variable, $j = 1, \ldots, h$ periods ahead. We first compute the numerator and denominator of equation (2.8) separately, i.e. the absolute contribution and the Mean Squared Error (MSE).

MSE=zeros(K,h);
CONTR=zeros(K,K,h);
% Note: Iresp already contains the Cholesky factor P, so Theta_i*Theta_i' = Phi_i*Sigma_u*Phi_i'
for j=1:h
    % Compute MSE
    temp2=eye(K);
    for i=1:K
        if j==1
            MSE(i,j)=temp2(:,i)'*Iresp(:,:,j)*Iresp(:,:,j)'*temp2(:,i);
            for k=1:K
                CONTR(i,k,j)=(temp2(:,i)'*Iresp(:,:,j)*temp2(:,k))^2;
            end
        else
            MSE(i,j)=MSE(i,j-1)+temp2(:,i)'*Iresp(:,:,j)*Iresp(:,:,j)'*temp2(:,i);
            for k=1:K
                CONTR(i,k,j)=CONTR(i,k,j-1)+(temp2(:,i)'*Iresp(:,:,j)*temp2(:,k))^2; % (L,2.3.36)
            end
        end
    end
end

Then we compute the fraction (2.8) and plot the results in Figure 3.4:

VD=zeros(K,h,K);
for k=1:K
    VD(:,:,k)=squeeze(CONTR(:,k,:))./MSE; % (L,2.3.37)
end
clear temp2 CONTR MSE
figure(4)
set(0,'DefaultAxesColorOrder',[0 0 1],...
    'DefaultAxesLineStyleOrder','-|--|:')
set(gcf,'Color',[1 1 1])
set(gcf,'defaultaxesfontsize',FS-5)
for i=1:K
    for j=1:K
        subplot(K,K,(j-1)*K+i)
        plot(squeeze(VD(i,:,j)),'Linewidth',LW)
        hold on
        plot(zeros(1,h),'k')
        title(sprintf('U_{%d} to Var(e_{%d,t+h})',j,i))
        xlim([1 h])
        set(gca,'XTick',0:ceil(h/4):h)
        ylim([0 1])
        set(gca,'YTick',0:0.25:1)
    end
end
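Since the contributions in (2.8) add up to the MSE across shocks, the shares in VD must sum to one for every variable and horizon. A quick sanity check (a sketch; check_sum is just an illustrative name):

check_sum=sum(VD,3);               % K x h matrix of shares summed across shocks
disp(max(max(abs(check_sum-1))))   % should be numerically zero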


Figure 3.4: Forecast Error Variance decomposition


A List of new commands

Type help followed by the command name for more details.

clear: Clear variables and functions from memory.
clc: Clear the command window.
rand(m,n): Produces an m-by-n matrix of uniformly distributed pseudorandom numbers.
diag: Produces diagonal matrices and extracts the main diagonal of a matrix.
eig: Eigenvalues and eigenvectors.
all: True (=1) if all elements of a vector are nonzero.
disp: Displays an array without printing the array name.
error: Display an error message and abort the function.
zeros(m,n): Produces an m-by-n matrix of zeros.
ones(m,n): Produces an m-by-n matrix of ones.
eye(m): Produces an identity matrix of order m.
for: Repeat statements a specific number of times.
while: Repeat statements an indefinite number of times.
figure: Create a figure window.
set: Set object properties.
mvnrnd(mu,sigma): Random vectors from the multivariate normal distribution (see also randn(m,n)).
ceil(X): Rounds the elements of X to the nearest integers towards infinity.
plot: Linear plot. See also subplot and the plot options (e.g., title, xlim, etc.).
cell(m,n): Create an m-by-n cell array of empty matrices.
sparse(X): Converts a sparse or full matrix to sparse form by squeezing out any zero elements.
deal: Matches up the input and output lists.


blkdiag: Block diagonal concatenation of matrix input arguments.
full: Convert a sparse matrix to a full matrix.
abs(X): Produces the absolute value of X. The number X can be either real or complex.
max(X) / min(X): Returns the largest / smallest component.
chol(X): Cholesky factorization. Uses only the diagonal and upper triangle of X.
squeeze(X): Returns an array with the same elements as X but with all the singleton dimensions removed.
sprintf: Write formatted data to a string.

B MATLAB tips

B.1 Growing arrays

Sometimes we do not know the final size of a vector/matrix in advance. In such cases it is useful to create an empty vector and make it grow at each iteration. However, if you know the exact size of the vector in advance, you should preallocate it (e.g., x=zeros(1,k)), since growing an array inside a loop forces repeated copying and slows the code down, and can exhaust memory for large arrays.

x=[];
A0=rand;
A1=rand;
diff=abs(A1-A0);
while diff>1e-2
    x=[x,diff];
    A0=rand;
    A1=rand;
    diff=abs(A1-A0);
end
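When the final size is known, the preallocated version of the same idea looks as follows (a sketch; k=100 is just an illustrative size):

k=100;
x=zeros(1,k);    % preallocate
for i=1:k
    x(i)=rand;   % fill by index instead of growing x
end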

B.2 Cumulative sums

In MATLAB we can always assign a new value to an object using its previous value. This is useful for cumulative sums:

a=0;
for i=1:10
    a=a+1;
end
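The same mechanism accumulates a running sum of actual data; the built-in sum (or cumsum) gives the same total in one step. A small sketch (x is just an illustrative vector):

x=rand(1,10);
s=0;
for i=1:10
    s=s+x(i);          % new value assigned using the previous one
end
disp([s, sum(x)])      % both give the same total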

B.3 Matrix partitions

It is possible to extract a partition of an existing matrix. As a matter of fact, it is also possible to select and reorder rows and columns:

A=magic(4);
B=A(2:3,1:2);
C=A([2 4 1],[1 3]);

References

Hamilton, J. D. (1994). Time Series Analysis. Princeton University Press.
Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer.

