final report - probability.ca
2.5.1 Inefficiencies of full-dimensional updates using RWM applied to discontinuous targets . . . . 26
2.5.2 Algorithm and Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3 Nonproduct Target Laws and Infinite Dimensional Target Measures 27
3.1 Infinite-Dimensional Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.1 Problem specific context and Radon-Nikodym derivative for target measure . . . . . . . . . . 27
3.1.2 Generality of non-product target measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Likelihood Estimation for Discretely Observed Diffusions . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Optimal scaling for the RWM algorithm when applied to non-product target measures . . . . . . . . 29
3.3.1 MCMC methods for function space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3.2 Karhunen-Loève expansion and invariant SPDEs for the target measure . . . . . . . . . . . . 29
3.3.3 Sampling algorithm in finite dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.4 Assumptions and Main Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 Optimal scaling for MALA when applied to non-product target measures . . . . . . . . . . . . . . . 31
3.4.1 Sampling Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.2 Main Theorem and Outline of Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
A Infinite Dimensional Analysis 34
A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.1.1 Infinitely Divisible Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.2 Gaussian Measures in Hilbert Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.2.1 One-dimensional Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.2.2 Finite Dimensional Gaussian measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
A.2.3 Measures in Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
A.2.4 Mean and Covariance of Probability Measures . . . . . . . . . . . . . . . . . . . . . . . . . . 37
A.3 Gaussian Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
A.3.1 Existence and Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
A.3.2 Preliminaries on Countable Product of Measures . . . . . . . . . . . . . . . . . . . . . . . . . 39
A.3.3 Definition of Gaussian measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
A.4 Brownian Motion and Wiener Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.4.1 Infinite Dimensional Wiener Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.4.2 Wiener Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.4.3 Cameron-Martin Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.5 Lack of a Canonical Gaussian Distribution in Infinite Dimensions . . . . . . . . . . . . . . . . . . . . 43
B Markov Processes 43
B.1 Introduction to the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
B.2 The Skorokhod Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
B.3 Operator Semigroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
B.4 Markov Processes and Semigroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
C R Codes for Simulations 45
C.1 Code for Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
C.2 Code for Example 2.13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1 Introduction and First Results

1.1 The “Goldilocks” Principle
Achieving the optimal convergence speed of a Markov chain Monte Carlo (MCMC) algorithm is often crucial for computational efficiency. Thanks to the flexibility of the Metropolis-Hastings algorithm and other popular Monte Carlo algorithms, in many problems it is not difficult to set up a Markov chain that eventually converges to the stationary distribution of interest. However, once such a Markov chain has been constructed, a fundamental question remains: how many steps are sufficient for the chain to be close to the stationary distribution of interest? The number of steps required before we have “convergence” is called the “mixing time” of the chain/algorithm. One way of reaching some level of optimality in the performance of Metropolis-Hastings algorithms is through the selection of the scaling
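The trade-off behind this principle can be seen in a short simulation: a random-walk Metropolis (RWM) chain with a too-small proposal scale accepts almost every move but explores slowly, while a too-large scale is almost always rejected. The report's own simulation code is written in R (Appendix C); the following is only an illustrative Python sketch of a one-dimensional RWM chain targeting a standard normal, with the function names and the particular scale values chosen here for illustration.

```python
import math
import random

def rwm_sample(log_target, x0, sigma, n_steps, seed=0):
    """Random-walk Metropolis with Gaussian proposals x' = x + sigma * Z.

    For a symmetric proposal, a move is accepted with probability
    min(1, pi(x') / pi(x)). Returns the sample path and the
    empirical acceptance rate.
    """
    rng = random.Random(seed)
    x = x0
    samples, accepted = [], 0
    for _ in range(n_steps):
        proposal = x + sigma * rng.gauss(0.0, 1.0)
        # Compare on the log scale: accept iff log U < log pi(x') - log pi(x)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
            accepted += 1
        samples.append(x)
    return samples, accepted / n_steps

# Standard normal target, known up to a normalizing constant
def log_phi(x):
    return -0.5 * x * x

# A too-small, a moderate, and a too-large proposal scale
for sigma in (0.05, 2.4, 50.0):
    _, acc = rwm_sample(log_phi, 0.0, sigma, 20_000)
    print(f"sigma = {sigma:5.2f}   acceptance rate = {acc:.2f}")
```

Running this, the tiny scale accepts nearly everything, the large scale accepts almost nothing, and a moderate scale sits in between; the chapters that follow make this "not too small, not too large" heuristic precise through optimal scaling results.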