Under some regularity conditions, it can be shown that, for a sufficiently large $m$, $(\theta_{1,m}, \theta_{2,m}, \theta_{3,m})$ is approximately equivalent to a random draw from the joint distribution $f(\theta_1, \theta_2, \theta_3 \mid X, M)$ of the three parameters. The regularity conditions are weak; they essentially require that, for an arbitrary starting value $(\theta_{1,0}, \theta_{2,0}, \theta_{3,0})$, the prior Gibbs iterations have a chance to visit the full parameter space. The actual convergence theorem involves Markov chain theory; see Tierney (1994).

In practice, we use a sufficiently large $n$ and discard the first $m$ random draws of the Gibbs iterations to form a Gibbs sample, say

\[
(\theta_{1,m+1}, \theta_{2,m+1}, \theta_{3,m+1}), \ldots, (\theta_{1,n}, \theta_{2,n}, \theta_{3,n}). \tag{10.2}
\]

Since these realizations form a random sample from the joint distribution $f(\theta_1, \theta_2, \theta_3 \mid X, M)$, they can be used to make inference. For example, a point estimate of $\theta_i$ and its variance are

\[
\hat{\theta}_i = \frac{1}{n-m} \sum_{j=m+1}^{n} \theta_{i,j}, \qquad
\hat{\sigma}_i^2 = \frac{1}{n-m-1} \sum_{j=m+1}^{n} \bigl(\theta_{i,j} - \hat{\theta}_i\bigr)^2. \tag{10.3}
\]

The Gibbs sample in Eq. (10.2) can be used in many ways. For example, if one is interested in testing the null hypothesis $H_0\colon \theta_1 = \theta_2$ against the alternative hypothesis $H_a\colon \theta_1 \neq \theta_2$, then one can simply obtain a point estimate of $\theta = \theta_1 - \theta_2$ and its variance as

\[
\hat{\theta} = \frac{1}{n-m} \sum_{j=m+1}^{n} (\theta_{1,j} - \theta_{2,j}), \qquad
\hat{\sigma}^2 = \frac{1}{n-m-1} \sum_{j=m+1}^{n} \bigl(\theta_{1,j} - \theta_{2,j} - \hat{\theta}\bigr)^2.
\]

The null hypothesis can then be tested with the conventional $t$-ratio statistic $t = \hat{\theta}/\hat{\sigma}$.

Remark: The first $m$ random draws of a Gibbs sampling run, which are discarded, are commonly referred to as the burn-in sample. The burn-in is used to ensure that the Gibbs sample in Eq. (10.2) is indeed close enough to a random sample from the joint distribution $f(\theta_1, \theta_2, \theta_3 \mid X, M)$.

Remark: The method discussed above consists of running a single long chain and keeping all random draws after the burn-in to obtain a Gibbs sample. Alternatively, one can run many relatively short chains using different starting values and a relatively small $n$. The random draw from the last Gibbs iteration of each chain is then used to form the Gibbs sample.

As this introduction shows, Gibbs sampling has the advantage of decomposing a high-dimensional estimation problem into several lower-dimensional ones via the full conditional distributions of the parameters. At the extreme, a high-dimensional problem with $N$ parameters can be solved iteratively by using $N$ univariate conditional distributions.
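
The burn-in-then-average recipe of Eqs. (10.2) and (10.3) is straightforward to code. Below is a minimal sketch in Python/NumPy, assuming a toy two-parameter target (a standard bivariate normal with correlation rho, whose full conditionals are univariate normals) rather than any model from this chapter; rho, n, and m are illustrative choices.

```python
import numpy as np

# Toy Gibbs sampler: the target joint is a standard bivariate normal with
# correlation rho, so each full conditional is a univariate normal and one
# Gibbs iteration is two scalar draws. (Assumed example, not the book's model.)
rng = np.random.default_rng(42)
rho = 0.8                    # assumed correlation of the target
n, m = 5000, 500             # total iterations n, burn-in m

theta1, theta2 = 5.0, -5.0   # arbitrary starting values (theta_{1,0}, theta_{2,0})
draws = np.empty((n, 2))
for j in range(n):
    # draw theta1 from f(theta1 | theta2) = N(rho * theta2, 1 - rho^2)
    theta1 = rng.normal(rho * theta2, np.sqrt(1.0 - rho**2))
    # draw theta2 from f(theta2 | theta1) = N(rho * theta1, 1 - rho^2)
    theta2 = rng.normal(rho * theta1, np.sqrt(1.0 - rho**2))
    draws[j] = theta1, theta2

gibbs_sample = draws[m:]     # discard the burn-in; this is Eq. (10.2)
theta_hat = gibbs_sample.mean(axis=0)          # point estimates, Eq. (10.3)
sigma2_hat = gibbs_sample.var(axis=0, ddof=1)  # variances with divisor n - m - 1
print("theta_hat =", theta_hat, "sigma2_hat =", sigma2_hat)
```

Starting far from the target, at (5.0, -5.0), makes the role of the burn-in visible: the early draws drift toward the high-probability region and are discarded.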

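The $t$-ratio test of $H_0\colon \theta_1 = \theta_2$ likewise operates directly on the Gibbs sample. A sketch under the same assumptions, with synthetic placeholder arrays standing in for the post-burn-in draws so the snippet runs on its own:

```python
import numpy as np

# Placeholder draws standing in for (theta_{1,j}) and (theta_{2,j}),
# j = m+1, ..., n; in practice these come from the Gibbs sampler above.
rng = np.random.default_rng(0)
theta1_draws = rng.normal(1.0, 0.2, size=4500)
theta2_draws = rng.normal(0.9, 0.2, size=4500)

delta = theta1_draws - theta2_draws   # draws of theta = theta1 - theta2
theta_hat = delta.mean()              # point estimate of theta
sigma_hat = delta.std(ddof=1)         # divisor n - m - 1, as in the text
t_ratio = theta_hat / sigma_hat       # conventional t ratio t = theta_hat / sigma_hat
print(f"t = {t_ratio:.3f}")
```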