Research Statement - Carleton University
RESEARCH STATEMENT
Son Luu Nguyen
My primary research interests lie in the areas of regime-switching systems and large-population game theory.
Regime-switching diffusions, also known as hybrid diffusions, have two components: a continuous state component, as in the usual diffusion, and a switching component represented by a pure jump process. In recent years, it has been recognized that the traditional formulation of differential equation models is often inadequate for treating complex systems in many real-world applications. Due to their inherent complexity, the underlying systems often exhibit features that cannot be captured by the usual dynamic systems, but rather display interacting continuous and discrete dynamics. As a result, effort has been directed to finding alternative models. Along this line, regime-switching diffusions have drawn increasing attention. The models have been used in such applications as option pricing [29], jump linear systems in automatic control [30], hierarchical decision making in production planning [38], estimation in hybrid systems [41], stock liquidation [43], and competitive Lotka-Volterra models in random environments [47], among others.
With the aforementioned motivations, my research to date mainly focuses on the study of hybrid systems. One part of my research is on pathwise convergence rates for numerical solutions of regime-switching stochastic differential equations. Our results are of interest even for traditional stochastic differential equations, since they present a new angle for ascertaining convergence rates. Another part of my research is on asymptotic properties of Markov-modulated random sequences, which can be considered as the discrete-time counterpart of switching diffusion systems. We obtain weak convergence and a strong invariance principle for the scaled processes of a two-component process under suitable conditions.
Recently, I have also become interested in mean field game theory with mixed players. In the context of noncooperative game theory, large population models have been well studied in economics [22, 10], social science [40], biological science [2], and engineering [12]. In these areas, of particular interest is the class of games in which each player interacts with the average effect of many others and individually has a negligible effect on the overall population. Such an interaction pattern may be modeled by mean field coupling, and it has arisen naturally in economics [22, 10], engineering [19], and public health research [5]. The class of games we consider involves a major player and many minor players. The major player has a significant role in affecting others, but each minor player possesses only a weak influence. This kind of interaction modeling is motivated by many socio-economic problems.
For large population dynamic games, a central issue is the development of low-complexity solutions so that each player may implement a strategy based on local information. We consider a mean field linear-quadratic-Gaussian (LQG) game with a major player and a large number of minor players parametrized by a continuum set, and construct a set of decentralized strategies which have an ε-Nash equilibrium property when applied to the large but finite population model.
In the following sections, these lines of research are described in more detail, and future directions of research are outlined.
1 Convergence of Numerical Solutions of Stochastic Differential Equations
Because of nonlinearity, closed-form solutions of many important stochastic differential equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. Due to its simplicity, the Euler-Maruyama (E-M) method has been investigated intensively by many authors and for different modes of convergence. For example, pointwise strong and pointwise weak convergence can be found in [28]. Pathwise strong convergence is studied in [14]. In [6], pathwise weak convergence is considered for a general class of numerical methods including the E-M method, but the rate is not obtained.
In our works [36] and [37], we investigate a new approach that provides the rate of pathwise weak convergence (i.e., convergence of the distributions of the whole trajectories of the numerical solutions) in the sense of the strong invariance principle (see [8]) for E-M numerical solutions of SDEs and regime-switching SDEs. Different from well-known results on numerical solutions of stochastic differential equations, in lieu of the usually employed Brownian motion increments, the algorithm uses an easily implementable sequence of independent and identically distributed (i.i.d.) random variables. Being easier to implement than Brownian increments, such an i.i.d. sequence is preferable in actual computation. To establish the convergence of the algorithms, using either Brownian increments or i.i.d. random variables does not make much difference. Nevertheless, the analysis becomes much more difficult for our study of convergence rates, because one has to deal with the difference, in the almost sure sense, between the Brownian increments and the i.i.d. sequence.
Next, we further describe our recent work [36, 37] concerning convergence rates for E-M numerical solutions of SDEs and regime-switching SDEs.
1.1 Convergence Rate for Numerical Solutions of SDEs
We begin with the SDE
$$ dX(t) = f(X(t))\,dt + \sigma(X(t))\,dW(t), \qquad X(0) = x_0, \eqno(1.1) $$
where $W(\cdot)$ is a standard one-dimensional Brownian motion, and $f(\cdot)$ and $\sigma(\cdot)$ are real-valued functions satisfying suitable conditions. A simple numerical scheme for obtaining approximate solutions to (1.1) is the E-M method using Brownian increments: for $k \ge 0$ and step size $\varepsilon$,
$$ x^\varepsilon_{k+1} = x^\varepsilon_k + \varepsilon f(x^\varepsilon_k) + \sigma(x^\varepsilon_k)\big( W((k+1)\varepsilon) - W(k\varepsilon) \big), \qquad x^\varepsilon_0 = x_0. $$
In [36], we consider a similar scheme called the weak E-M method: for $k \ge 0$ and step size $\varepsilon$,
$$ x^\varepsilon_{k+1} = x^\varepsilon_k + \varepsilon f(x^\varepsilon_k) + \sqrt{\varepsilon}\,\sigma(x^\varepsilon_k)\,\xi_{k+1}, \qquad x^\varepsilon_0 = x_0, \eqno(1.2) $$
where $\{\xi_n\}$ is a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1.
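As an illustration, the weak E-M recursion (1.2) can be sketched in a few lines. The Ornstein-Uhlenbeck-type drift and diffusion and the Rademacher (±1) increments below are illustrative choices, not taken from the text; any bounded i.i.d. sequence with mean 0 and variance 1 fits the assumptions:

```python
import numpy as np

def weak_euler_maruyama(f, sigma, x0, eps, n_steps, rng):
    """Weak E-M scheme (1.2): Brownian increments are replaced by
    sqrt(eps) * xi_k, with {xi_k} i.i.d., mean 0, variance 1.
    Here xi_k are Rademacher variables, a simple bounded choice."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        xi = rng.choice([-1.0, 1.0])  # bounded i.i.d., E xi = 0, Var xi = 1
        x[k + 1] = x[k] + eps * f(x[k]) + np.sqrt(eps) * sigma(x[k]) * xi
    return x

# Illustrative (hypothetical) drift and diffusion:
rng = np.random.default_rng(0)
path = weak_euler_maruyama(f=lambda x: -x, sigma=lambda x: 0.5,
                           x0=1.0, eps=1e-3, n_steps=1000, rng=rng)
```

No Brownian path needs to be constructed: a coin flip per step suffices, which is the practical advantage discussed above.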
One of the main advantages of this weak method is that it allows us to use simple random variables $\{\xi_n\}$ instead of Brownian increments. However, the solution $X(t)$ to (1.1) may be defined on a probability space different from that of the sequence $\{x^\varepsilon_k, k \ge 0\}$ defined in (1.2).
Assumptions. Assume that $f(\cdot)$ and $\sigma(\cdot)$ are real-valued Lipschitz functions, and $\{\xi_n\}$ is a sequence of bounded i.i.d. random variables with zero mean and variance 1.
Main Result. We show that one can define, in the same probability space as the Brownian motion $W(t)$, a sequence of i.i.d. random variables $\{\tilde\xi_n\}$ with the same distribution as $\{\xi_n\}$ such that for any $0 < \lambda < 1/4$,
$$ \sup_{0 \le t \le T} \big| X(t) - \tilde x^\varepsilon(t) \big| = O(\varepsilon^\lambda) \quad \text{a.s.,} $$
where $\tilde x^\varepsilon(t) = \tilde x^\varepsilon_n$ for $n\varepsilon \le t < (n+1)\varepsilon$, and $\tilde x^\varepsilon_{n+1} = \tilde x^\varepsilon_n + \varepsilon f(\tilde x^\varepsilon_n) + \sqrt{\varepsilon}\,\sigma(\tilde x^\varepsilon_n)\,\tilde\xi_{n+1}$, $\tilde x^\varepsilon_0 = x_0$.
Our approach is based on the Skorohod embedding theorem (see [15]) and strong invariance principles. The rate we obtained appears to be sharp: if the Skorohod embedding is used, it appears that the rate cannot be further improved; see Kiefer [27].
1.2 Convergence Rate for Numerical Solutions of Regime-Switching SDEs
In [37], we extend the result of the previous section to the regime-switching diffusion system
$$ dX(t) = f(X(t), \alpha(t))\,dt + \sigma(X(t), \alpha(t))\,dW(t), \qquad X(0) = x_0, \quad \alpha(0) = \alpha_0, \eqno(1.3) $$
and
$$ P\big( \alpha(t + \Delta t) = j \mid \alpha(t) = i,\ \alpha(s), x(s), s \le t \big) = q_{ij}\,\Delta t + o(\Delta t), \qquad i, j \in \mathcal{M},\ i \ne j, \eqno(1.4) $$
where $W(\cdot)$ is a $d$-dimensional standard Brownian motion defined on $(\Omega, \mathcal{F}, P)$, $x \in \mathbb{R}^d$, $\mathcal{M} = \{1, 2, \ldots, m_0\}$, $f(\cdot,\cdot): \mathbb{R}^d \times \mathcal{M} \mapsto \mathbb{R}^d$ and $\sigma(\cdot,\cdot): \mathbb{R}^d \times \mathcal{M} \mapsto \mathbb{R}^{d \times d}$ are appropriate functions, and $Q = (q_{ij}) \in \mathbb{R}^{m_0 \times m_0}$ is the generator of the Markov chain, satisfying $q_{ij} \ge 0$ for $i \ne j$ and $\sum_{j=1}^{m_0} q_{ij} = 0$ for each $i \in \mathcal{M}$.
To approximate the $d$-dimensional standard Brownian motion $W(\cdot)$, we use a sequence of i.i.d. random vectors $\{\xi_n, n \ge 0\}$ taking values in $\mathbb{R}^d$. To approximate the solution to (1.3), we propose the algorithm
$$ x^\varepsilon_{n+1} = x^\varepsilon_n + \varepsilon f(x^\varepsilon_n, \alpha^\varepsilon_n) + \sqrt{\varepsilon}\,\sigma(x^\varepsilon_n, \alpha^\varepsilon_n)\,\xi_n, \qquad \varepsilon > 0, \quad n = 0, 1, \ldots, \eqno(1.5) $$
where $\{\xi_n, n \ge 0\}$ is a sequence of $\mathbb{R}^d$-valued i.i.d. random vectors and $\{\alpha^\varepsilon_n, n \ge 0\}$ is a Markov chain, independent of $\{\xi_n, n \ge 0\}$, with transition probability matrix $P = \exp(\varepsilon Q)$.
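A minimal sketch of algorithm (1.5), with scalar states and Gaussian increments for illustration; the two-regime generator Q and the coefficient choices below are hypothetical, not from the text:

```python
import numpy as np
from scipy.linalg import expm

def switching_em(f, sigma, x0, alpha0, Q, eps, n_steps, rng):
    """Scheme (1.5): E-M with i.i.d. increments plus a Markov chain
    with one-step transition matrix P = exp(eps * Q), simulated
    independently of the increments."""
    P = expm(eps * Q)                        # transition matrix of alpha^eps
    m = Q.shape[0]
    x = np.empty(n_steps + 1)
    a = np.empty(n_steps + 1, dtype=int)
    x[0], a[0] = x0, alpha0
    for n in range(n_steps):
        xi = rng.standard_normal()           # mean 0, variance 1 (has an MGF)
        x[n + 1] = x[n] + eps * f(x[n], a[n]) + np.sqrt(eps) * sigma(x[n], a[n]) * xi
        a[n + 1] = rng.choice(m, p=P[a[n]])  # independent switching component
    return x, a

# Two-regime OU-type example (illustrative parameters):
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
rng = np.random.default_rng(1)
x, a = switching_em(f=lambda x, i: -(i + 1) * x,
                    sigma=lambda x, i: 0.3 * (i + 1),
                    x0=1.0, alpha0=0, Q=Q, eps=1e-3, n_steps=2000, rng=rng)
```

The drift and diffusion coefficients jump whenever the chain switches, which is the hybrid continuous/discrete behavior described in the introduction.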
Assumptions. We assume the following conditions hold.
(A1) For some $K > 0$, the functions $f$ and $\sigma$ satisfy
$$ |f(x, \alpha) - f(y, \alpha)| \le K|x - y|, \quad |\sigma(x, \alpha) - \sigma(y, \alpha)| \le K|x - y| \quad \text{for } x, y \in \mathbb{R}^d,\ \alpha \in \mathcal{M}. $$
(A2) $\{\xi_n, n \ge 0\}$ is a sequence of $\mathbb{R}^d$-valued i.i.d. random vectors independent of $\{\alpha^\varepsilon_n, n \ge 0\}$ such that $E\xi_0 = 0$ and $E\xi_0\xi_0' = I$. Furthermore, the moment generating function of $\xi_0$ exists; that is,
$$ E[\exp\langle h, \xi_0\rangle] < \infty \quad \text{for } |h| \le t_0, $$
where $t_0 > 0$, $h \in \mathbb{R}^d$, and $\langle h, \xi_0\rangle$ denotes the inner product of $h$ and $\xi_0$.
Main Result. We show that one can construct, in the same probability space as the Brownian motion $W(t)$, a Markov chain $\{\tilde\alpha^\varepsilon_n, n \ge 1\}$ with transition probability matrix $P = \exp(\varepsilon Q)$ and a sequence of i.i.d. random vectors $\{\tilde\xi_n, n \ge 1\}$, independent of $\{\tilde\alpha^\varepsilon_n, n \ge 1\}$ and with the same distribution as $\{\xi_n, n \ge 1\}$, such that for any $0 < \lambda < 1/4$,
$$ \sup_{0 \le t \le T} \big| X(t) - \tilde x^\varepsilon(t) \big| = O(\varepsilon^\lambda) \quad \text{a.s.,} $$
where $\tilde x^\varepsilon(t) = \tilde x^\varepsilon_n$ for $n\varepsilon \le t < (n+1)\varepsilon$, and $\tilde x^\varepsilon_{n+1} = \tilde x^\varepsilon_n + \varepsilon f(\tilde x^\varepsilon_n, \tilde\alpha^\varepsilon_n) + \sqrt{\varepsilon}\,\sigma(\tilde x^\varepsilon_n, \tilde\alpha^\varepsilon_n)\,\tilde\xi_{n+1}$, $\tilde x^\varepsilon_0 = x_0$.
2 Asymptotic Properties for Markov-Modulated Sequences
My recent research [34, 35] concerns asymptotic properties of the process $\{X(k, \alpha_k)\}$, where $\alpha_k$ is a Markov chain taking values in a finite set $\mathcal{M}$, and for each $\alpha \in \mathcal{M}$, $\{X(k, \alpha)\}$ is the primary random sequence. Such processes belong to the class of regime-switching models and may be viewed as the discrete-time counterpart of switching diffusion systems. The motivation for our study stems from a wide variety of applications in actuarial science, communication networks, production planning, manufacturing, and financial engineering. Due to the increasing complexity of real-world scenarios, one is often forced to deal with large-scale systems. Because of the uncertainty of the random environment, there is growing interest in modeling, analysis, and optimization of large-scale systems using regime-switching models in addition to the usual dynamic systems.
For the random sequences under consideration, there are two main issues. First, not much structure of the sequence $\{X(k, \alpha_k)\}$ is known. Second, for a large $|\mathcal{M}|$ there are $|\mathcal{M}|$ sequences $\{X(k, \alpha)\}$, $\alpha \in \mathcal{M}$, to be dealt with, and complexity becomes an important issue.
Among the large number of states of a Markov chain, a typical situation is that transitions among some of the states change rapidly, whereas others vary slowly. The state space can often be split into smaller subspaces such that within each subspace the transitions occur at roughly the same rate, while transitions from one subspace to another happen relatively rarely. More precisely, suppose the state space $\mathcal{M}$ admits the representation $\mathcal{M} = \mathcal{M}_1 \cup \cdots \cup \mathcal{M}_{l_0}$, where the $\mathcal{M}_i$, $i = 1, \ldots, l_0$, can be considered as subspaces and are not isolated. Such a model is known in the literature as having nearly completely decomposable structure [7, 39]. A systematic study of the related Markovian models has been undertaken recently [45].
Using two time scales in the formulation, we introduce a small parameter $\varepsilon > 0$ into the transition probabilities so as to highlight the different rates of transitions. Then we can write the sequence as $X(k, \alpha^\varepsilon_k)$. We assume that for each $\alpha$, $\{X(k, \alpha)\}$ is mixing, and $\{\alpha^\varepsilon_k\}$ is a discrete-time Markov chain with nearly completely decomposable structure. Due to the lack of structure of the multiple random sequences, the problem is difficult to solve. Using the idea of aggregation, we lump the states of the Markov chain in each $\mathcal{M}_i$ into one state, $i = 1, \ldots, l_0$, to get an aggregated process $\bar\alpha^\varepsilon_k$. Correspondingly, we consider a new sequence $\bar X(k, \bar\alpha^\varepsilon_k)$. Effectively, we use a single sequence $\{\bar X(k, i)\}$ as a representative for the sequences $\{X(k, \alpha)\}$ for all $\alpha \in \mathcal{M}_i$.
We aim to demonstrate that the new sequence leads to a limit under suitable interpolations. The limit process is a switching diffusion with a well-defined operator. Denote the original state space and the state space of the limit by $\mathcal{M}$ and $\bar{\mathcal{M}}$, respectively. If $|\mathcal{M}| \gg |\bar{\mathcal{M}}|$, a substantial reduction of computational complexity will be achieved when one treats control and optimization problems.
In the literature, when treating discrete sequences under suitable scaling, one may obtain a diffusion limit; this is the classical weak-sense functional central limit theorem known as Donsker's invariance principle. One may also be interested in the rates of convergence of the scaled sequence, commonly referred to as strong invariance principle results. In what follows, our efforts to obtain asymptotic properties of the underlying processes under suitable scaling are described in more detail.
2.1 Weak Convergence
For $\varepsilon > 0$, let $\alpha^\varepsilon_k$ be a time-homogeneous Markov chain on a probability space $(\Omega, \mathcal{F}, P)$ with state space $\mathcal{M}$ and transition matrix $P_\varepsilon = P + \varepsilon Q$, where $P = (p_{ij})$ is a transition probability matrix and $Q = (q_{ij})$ is a generator of a continuous-time Markov chain (i.e., $p_{ij} \ge 0$ and $\sum_{j=1}^{m_0} p_{ij} = 1$; $q_{ij} \ge 0$ for $i \ne j$ and $\sum_{j=1}^{m_0} q_{ij} = 0$ for each $i$). Assume that the state space $\mathcal{M}$ can be written as
$$ \mathcal{M} = \{s_{11}, \ldots, s_{1 m_1}\} \cup \cdots \cup \{s_{l_0 1}, \ldots, s_{l_0 m_{l_0}}\} = \mathcal{M}_1 \cup \cdots \cup \mathcal{M}_{l_0}, $$
with $m_0 = m_1 + \cdots + m_{l_0}$ and $P = \mathrm{diag}[P^1, \ldots, P^{l_0}]$, where the subspace $\mathcal{M}_i$ consists of recurrent states belonging to the $i$th ergodic class and the transition probability matrix $P^i$ is irreducible and aperiodic, $i = 1, \ldots, l_0$.
Let $p^\varepsilon_k$ be the probability vector $p^\varepsilon_k = (P(\alpha^\varepsilon_k = s_{ij})) \in \mathbb{R}^{1 \times m_0}$, and let $\nu^i = (\nu_{i1}, \ldots, \nu_{i m_i})$ be the stationary distribution corresponding to the transition matrix $P^i$. Assume that $p^\varepsilon_0 = p_0$ and define an aggregated process $\bar\alpha^\varepsilon_k$ of $\alpha^\varepsilon_k$ by
$$ \bar\alpha^\varepsilon_k = i \ \text{ if } \alpha^\varepsilon_k \in \mathcal{M}_i, \quad i = 1, \ldots, l_0, \qquad \bar\alpha^\varepsilon(t) = \bar\alpha^\varepsilon_k \ \text{ for } t \in [\varepsilon k, \varepsilon(k+1)). $$
It is shown in [45] that the aggregated process $\bar\alpha^\varepsilon(\cdot)$ converges weakly to $\bar\alpha(\cdot)$, a continuous-time Markov chain generated by $\bar Q = (\bar q_{ij}) = \mathrm{diag}(\nu^1, \ldots, \nu^{l_0})\, Q\, \mathrm{diag}(\mathbb{1}_{m_1}, \ldots, \mathbb{1}_{m_{l_0}})$, where $\mathbb{1}_l = (1, \ldots, 1)' \in \mathbb{R}^{l \times 1}$.
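The nearly completely decomposable structure and the aggregation can be illustrated numerically. The 4-state chain below, with two 2-state subspaces and rare between-block transitions of order $\varepsilon$, is a hypothetical example:

```python
import numpy as np

# Two weakly coupled blocks: within-block transitions via P, rare
# between-block transitions via eps * Q (illustrative matrices).
P = np.array([[0.3, 0.7, 0.0, 0.0],
              [0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.2, 0.8]])
Q = np.array([[-1.0, 0.0, 1.0, 0.0],
              [0.0, -1.0, 0.0, 1.0],
              [1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
eps = 0.01
P_eps = P + eps * Q              # transition matrix of the two-time-scale chain

block = np.array([0, 0, 1, 1])   # state -> subspace index (M_1, M_2)

rng = np.random.default_rng(2)
state = 0
agg_path = []                    # sample path of the aggregated process
for _ in range(5000):
    state = rng.choice(4, p=P_eps[state])
    agg_path.append(block[state])
```

On a typical run the aggregated path changes value only rarely (roughly at rate $\varepsilon$ per step), while the underlying chain mixes rapidly inside each block, which is exactly the two-time-scale picture described above.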
For each $s_{ij} \in \mathcal{M}$, let $\{X(k, s_{ij})\}$ be a wide-sense stationary sequence of $\mathbb{R}^d$-valued random vectors on $(\Omega, \mathcal{F}, P)$. Denote $\bar X(n, i) = \sum_{j=1}^{m_i} \nu_{ij} X(n, s_{ij})$ for $i = 1, \ldots, l_0$.
Assumptions. Assume that the sequence $\{(X(k, s_{11}), \ldots, X(k, s_{l_0 m_{l_0}})) : k \in \mathbb{Z}\}$ is independent of the Markov chain $\{\alpha^\varepsilon_k\}$ and is $\phi$-mixing with mixing measure denoted by $\phi(\cdot)$. Moreover, assume that there exist $\delta > 0$ and a constant $C$ not depending on $k$ and $i$ such that
$$ \sum_{n=0}^{\infty} \phi(n)^{\delta/(1+\delta)} < \infty, \qquad EX(k, s_{ij}) = 0, \qquad E|X(k, s_{ij})|^{2(1+\delta)} \le C, \quad k \ge 0,\ s_{ij} \in \mathcal{M}. $$
Main Results. We obtain the following results:
(i) Denote
$$ z^\varepsilon(t) = \sqrt{\varepsilon} \sum_{n=0}^{\lfloor t/\varepsilon\rfloor - 1} \big[ X(n, \alpha^\varepsilon_n) - \bar X(n, \bar\alpha^\varepsilon_n) \big] = \sqrt{\varepsilon} \sum_{n=0}^{\lfloor t/\varepsilon\rfloor - 1} \sum_{i=1}^{l_0} \sum_{j=1}^{m_i} X(n, s_{ij}) \big[ 1_{\{\alpha^\varepsilon_n = s_{ij}\}} - \nu_{ij} 1_{\{\bar\alpha^\varepsilon_n = i\}} \big]. $$
Then $(z^\varepsilon(\cdot), \bar\alpha^\varepsilon(\cdot))$ converges weakly to $(z(\cdot), \bar\alpha(\cdot))$, which is the unique solution to the martingale problem with operator $L$ determined by the initial layer term in the asymptotic expansion of $(P_\varepsilon)^k$; see [45].
(ii) Denote $\bar z^\varepsilon(t) = \sqrt{\varepsilon} \sum_{n=0}^{\lfloor t/\varepsilon\rfloor - 1} \bar X(n, \bar\alpha^\varepsilon_n)$. Then $(\bar z^\varepsilon(\cdot), \bar\alpha^\varepsilon(\cdot))$ converges weakly to $(\bar z(\cdot), \bar\alpha(\cdot))$, which is the unique solution to the martingale problem with operator given by
$$ \bar L f(x, i) = \frac{1}{2} \sum_{j_1=1}^{d} \sum_{j_2=1}^{d} \bar a_{j_1 j_2}(i)\, \frac{\partial^2 f(x, i)}{\partial x_{j_1} \partial x_{j_2}} + \bar Q f(x, \cdot)(i), \qquad i = 1, \ldots, l_0, $$
where $\bar A(i) = (\bar a_{j_1 j_2}(i)) = E\bar X(0, i)\bar X'(0, i) + \sum_{k=1}^{\infty} \big[ E\bar X(k, i)\bar X'(0, i) + E\bar X(0, i)\bar X'(k, i) \big]$ and $\bar Q f(x, \cdot)(i) = \sum_{j=1}^{l_0} \bar q_{ij} f(x, j)$.
2.2 Strong Invariance Principle
Let $\varepsilon > 0$ and let $\alpha^\varepsilon_k$ be a time-homogeneous Markov chain on $(\Omega, \mathcal{F}, P)$ with state space $\mathcal{M} = \{1, 2, \ldots, m\}$ and transition matrix
$$ P_\varepsilon = P + \varepsilon Q, \eqno(2.1) $$
where $P = (p_{ij})$ is a transition probability matrix and $Q = (q_{ij})$ is a generator of a continuous-time Markov chain. Suppose that $P$ is irreducible and aperiodic (i.e., $l_0 = 1$ or $\mathcal{M} = \mathcal{M}_1$) with stationary distribution $\nu = (\nu_1, \ldots, \nu_m) \in \mathbb{R}^{1 \times m}$. Denote $p^\varepsilon_k = (P(\alpha^\varepsilon_k = 1), \ldots, P(\alpha^\varepsilon_k = m))$ and assume that the initial probability $p^\varepsilon_0$ is independent of $\varepsilon$, i.e., $p^\varepsilon_0 = p_0$.
Let $\{(X(k, 1), X(k, 2), \ldots, X(k, m)) : k \in \mathbb{Z}\}$ be an $\mathbb{R}^m$-valued wide-sense stationary $\phi$-mixing sequence on $(\Omega, \mathcal{F}, P)$ which is independent of the Markov chain $\{\alpha^\varepsilon_k\}$. Denote
$$ z^\varepsilon(t) = \sqrt{\varepsilon} \sum_{n=0}^{\lfloor t/\varepsilon\rfloor - 1} \sum_{i=1}^{m} X(n, i)\big[ 1_{\{\alpha^\varepsilon_n = i\}} - \nu_i \big]. $$
Assumptions. Assume that the sequence $\{X(k, i) : k \in \mathbb{Z}, i \in \mathcal{M}\}$ and its mixing measure $\phi(\cdot)$ satisfy
$$ EX(k, i) = 0, \qquad E|X(k, i)|^4 < \infty, \qquad \phi(n) < C n^{-\frac{4}{3}(1+\beta)} \quad \text{for some constants } C \text{ and } \beta > 0. $$
Main Result. We show that one can construct on a probability space a one-dimensional Brownian motion $W(t)$ with $EW(t) = 0$ and $E[W(t)]^2 = \sigma^2 t$, where $\sigma$ can be determined explicitly, and stochastic processes $\tilde z^\varepsilon(t)$ having the same distributions as those of $z^\varepsilon(t)$, such that for some $\theta > 0$,
$$ \sup_{0 \le t \le T} \big| \tilde z^\varepsilon(t) - W(t) \big| = o(\varepsilon^\theta) \quad \text{a.s.} $$
3 LQG Mixed Games with Continuum-Parametrized Minor Players
We consider a mean field LQG game model with a major player and a large number of minor players parametrized by a continuum set. Large population stochastic dynamic games with mean field coupling have been investigated intensively in the past decade; see, e.g., [19, 20, 23, 24, 25, 26]. To obtain low-complexity strategies, consistent mean field approximations provide a powerful approach; in the resulting solution, each agent only needs to know its own state information and the aggregate effect of the overall population, which may be pre-computed off-line. One may further establish an ε-Nash equilibrium property for the set of control strategies [20].
The LQG model in [18] shows that the presence of the major player causes an interesting phenomenon called the lack of sufficient statistics. More specifically, in order to obtain asymptotic equilibrium strategies, the major player cannot simply use a strategy that is a function of its current state and time; likewise, a minor player cannot simply use the current states of the major player and itself. To overcome this lack of sufficient statistics for decision making, the system dynamics are augmented by adding a new state, which approximates the mean field and is driven by the major player's state. This additional state enters the resulting decentralized strategy of each player, and it captures the past history of the major player.
A crucial modeling assumption in [18] is that the minor players come from a finite number of classes labelled by a set $\{1, \ldots, K\}$, where players in each class share the same set of parameters in their dynamics and costs. The size of the additional state introduced in [18] depends on the number of classes, so that it provides sub-mean field approximations for the different classes; this approach becomes invalid when the dynamic parameters are drawn from an infinite set.
In [32], we consider a population of minor players parametrized by an infinite set, such as a continuum, and seek a different approach to mean field approximations. Due to the linear quadratic structure of the game with a finite number of players, it is plausible to assume that the limiting mean field is a Gaussian process and may be represented using the driving noise of the major player. Eventually, we justify this argument by showing the consistency of the mean field approximation. Given the above representation of the limiting mean field, we may approximate the original problems of the major player and a typical minor player by stochastic control problems with random coefficients in the dynamics and costs [3]. This further enables the use of powerful tools from the theory of backward stochastic differential equations [3]. The Gaussian property of various processes involved also plays an important role, and we exploit it to develop a kernel representation that reduces the analysis to function spaces [17].
Mean Field Games with Major and Minor Players. Consider the LQG game with a major player $\mathcal{A}_0$ and a population of minor players $\{\mathcal{A}_i, 1 \le i \le N\}$. At time $t \ge 0$, the states of the players $\mathcal{A}_0$ and $\mathcal{A}_i$ are denoted by $x_0(t)$ and $x_i(t)$, $1 \le i \le N$, respectively. Let $(\Omega, \mathcal{F}, \mathcal{F}_t, t \ge 0, P)$ be the underlying filtered probability space. The dynamics of the $N + 1$ players are given by the system of linear stochastic differential equations (SDEs)
$$ dx_0(t) = \big[ A_0 x_0(t) + B_0 u_0(t) + F_0 x^{(N)}(t) \big]\, dt + D_0\, dW_0(t), \eqno(3.1) $$
$$ dx_i(t) = \big[ A(\theta_i) x_i(t) + B(\theta_i) u_i(t) + F(\theta_i) x^{(N)}(t) \big]\, dt + D(\theta_i)\, dW_i(t), \eqno(3.2) $$
where $x^{(N)} = (1/N) \sum_{i=1}^N x_i$ is the mean field term. The initial states $\{x_j(0), 0 \le j \le N\}$ are measurable with respect to $\mathcal{F}_0$ and have finite second moments. The states $x_0, x_i$ and controls $u_0, u_i$ are, respectively, $n$- and $n_1$-dimensional vectors. The noise processes $W_0$ and $W_i$ are $n_2$-dimensional independent standard Brownian motions adapted to $(\mathcal{F}_t)_{t \ge 0}$, which are also independent of $\{x_j(0), 0 \le j \le N\}$. The vector $\theta_i \in \mathbb{R}^d$ is a parameter in the dynamics of player $\mathcal{A}_i$.
For $0 \le j \le N$, denote $u_{-j} = (u_0, \ldots, u_{j-1}, u_{j+1}, \ldots, u_N)$. For a positive semi-definite matrix $M \ge 0$, we may write the quadratic form $z^T M z$ as $|z|^2_M$. The cost function for $\mathcal{A}_0$ is given by
$$ J_0(u_0, u_{-0}) = E \int_0^T \Big[ \big| x_0(t) - \chi_0(x^{(N)}(t)) \big|^2_{Q_0} + u_0^T(t) R_0 u_0(t) \Big]\, dt, \eqno(3.3) $$
where $\chi_0(x^{(N)}(t)) = H_0 x^{(N)}(t) + \eta_0$, $Q_0 \ge 0$, and $R_0 > 0$. The cost function for $\mathcal{A}_i$, $1 \le i \le N$, is
$$ J_i(u_i, u_{-i}) = E \int_0^T \Big[ \big| x_i(t) - \chi(x_0(t), x^{(N)}(t)) \big|^2_{Q} + u_i^T(t) R u_i(t) \Big]\, dt, \eqno(3.4) $$
where $\chi(x_0(t), x^{(N)}(t)) = H x_0(t) + \hat H x^{(N)}(t) + \eta$, $Q \ge 0$, and $R > 0$. The component $H x_0$ in the coupling term $\chi$ indicates the strong influence of the major player on each minor player. By contrast, the mutual impact of any two minor players is insignificant. We assume that all matrix or vector parameters $(A_0, B_0, \ldots, \eta, \ldots)$ in (3.1)-(3.4) are deterministic and have compatible dimensions.
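To give a concrete feel for the coupled dynamics (3.1)-(3.2), here is an Euler-discretized sketch with scalar states, zero controls, and illustrative coefficients; all parameter choices below are hypothetical, not from the text:

```python
import numpy as np

def simulate_population(N, T, dt, rng):
    """Euler discretization of (3.1)-(3.2) with scalar states.
    theta_i is drawn from the continuum [0, 1]; controls u_0, u_i are
    set to zero simply to exercise the mean-field-coupled dynamics."""
    theta = rng.uniform(0.0, 1.0, size=N)    # continuum-valued parameters
    x0 = 1.0                                  # major player state
    x = rng.standard_normal(N)                # minor player states
    for _ in range(int(T / dt)):
        xbar = x.mean()                       # mean field term x^(N)
        dW0 = np.sqrt(dt) * rng.standard_normal()
        dW = np.sqrt(dt) * rng.standard_normal(N)
        # major player: A0 = -1, F0 = 0.5, D0 = 0.2 (illustrative)
        x0 = x0 + (-x0 + 0.5 * xbar) * dt + 0.2 * dW0
        # minor players: A(theta) = -(1 + theta), F = 0.3, D = 0.2
        x = x + (-(1.0 + theta) * x + 0.3 * xbar) * dt + 0.2 * dW
    return x0, x

rng = np.random.default_rng(3)
x0_T, x_T = simulate_population(N=200, T=1.0, dt=1e-2, rng=rng)
```

Each minor player feels the population only through the average $x^{(N)}$, while its own contribution to that average is of order $1/N$, which is the weak-influence structure the cost functions encode.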
Assumptions. We introduce the following assumptions:
(A1) The initial states $\{x_j(0), 0 \le j \le N\}$ are independent, and there exists a constant $C$ independent of $N$ such that $\sup_{0 \le j \le N} E|x_j(0)|^2 \le C$.
(A2) There exists a distribution function $F(\theta, x)$ on $\mathbb{R}^{d+n}$ such that the sequence of empirical distribution functions $F_N(\theta, x) = \frac{1}{N} \sum_{i=1}^N 1_{\{\theta_i \le \theta,\, E x_i(0) \le x\}}$, $N \ge 1$, where each inequality holds componentwise, converges weakly to $F(\theta, x)$; i.e., for any bounded and continuous function $h(\theta, x)$ on $\mathbb{R}^{d+n}$,
$$ \lim_{N \to \infty} \int_{\mathbb{R}^{d+n}} h(\theta, x)\, dF_N(\theta, x) = \int_{\mathbb{R}^{d+n}} h(\theta, x)\, dF(\theta, x). $$
(A3) $A(\cdot)$, $B(\cdot)$, $F(\cdot)$, and $D(\cdot)$ are continuous matrix-valued functions of $\theta \in \Theta$, where $\Theta$ is a compact subset of $\mathbb{R}^d$.
Main Results. We (i) approximate the mean field generated by the minor players by a kernel representation using the Brownian motion of the major player; (ii) solve local optimal control problems for both the major player and a representative minor player via backward stochastic differential equations; and (iii) construct a set of decentralized control strategies based on consistent mean field approximations and show that these strategies have an ε-Nash equilibrium property.
4 Future Research Plan
My goal is to be a research mathematician and an educator, and to make a lifetime commitment to mathematical research and to training the next generation of mathematicians, scientists, and engineers. My ultimate plan is to make substantial contributions to the research of stochastic systems and game theory. My research plan for the next few years is outlined below.
4.1 Further Topics in Stochastic Hybrid Systems
The study of stochastic hybrid systems is still at an early stage, and a number of problems remain open. Obtaining convergence rates for other numerical methods, such as the Milstein scheme (see [28, 31]), for regime-switching diffusion processes will have many significant applications, especially in the more challenging case where the switching component is diffusion-dependent. Next, concerning degenerate switching diffusions, can we obtain a strong invariance principle? The desired result appears more difficult to obtain than in the case of a traditional diffusion process (see [16]), since one needs to deal with the Markov-dependent component when estimating the moments. It will also be useful and natural to consider the regime-switching model for systems of many interacting particles in a random environment. Although some stability results in this direction were obtained in [42, 44] for a finite number of particles, there are still many open questions on the asymptotic behavior of such systems when the number of particles becomes large. Attempts to answer these questions present new challenges, which will lead to many substantial discoveries. Hence, my immediate future work will be devoted to the above problems. In addition, I will also focus on applications of the theoretical results of hybrid systems to real-world problems, including risk theory, financial engineering, stochastic approximation, and stochastic optimization and control for stochastic hybrid systems.
4.2 Mean-Field Game Theory and Stochastic McKean-Vlasov Equations
In the past decade, mean field game theory has been investigated intensively, with many applications in wireless communication systems, biological science, and economics. However, this is still a relatively new area with many open questions, and I would like to focus on this topic in the future. In particular, I am interested in general problems in mean field game theory with a large population of minor players and a major player, which generalize the LQG cases (see [18, 19, 20]); numerical solutions for mean-field game and control problems (see [21]); and stochastic mean-field systems (see [9, 13]). In what follows, two specific problems are described in more detail.
Mean-Field Game Problem. In order to obtain low-complexity strategies, so that each player only needs to know its own state information, the consistent mean field approximation (see [18, 19, 20]) is a powerful approach. For general mean field game problems with a major player and a large population of minor players, this approach makes it important to investigate the limiting behavior of the empirical measure generated by stochastic interacting particles described by the equations
$$ dx_0(t) = f_0\big(t, x_0(t), \varepsilon^N_{x(t)}\big)\, dt + \sigma_0\big(t, x_0(t), \varepsilon^N_{x(t)}\big)\, dW_0(t), \eqno(4.1) $$
$$ dx_i(t) = f\big(t, x_0(t), x_i(t), \varepsilon^N_{x(t)}\big)\, dt + \sum_{j=1}^{N} \sigma\big(t, x_0(t), x_i(t), \varepsilon^N_{x(t)}, j\big)\, dW_j(t), \quad 1 \le i \le N, \eqno(4.2) $$
where $\{W_i, 0 \le i \le N\}$ are $n$-dimensional independent standard Brownian motions, $f_0, \sigma_0: \mathbb{R} \times \mathbb{R}^n \times \mathcal{P} \to \mathbb{R}^n$, $f: \mathbb{R} \times \mathbb{R}^{2n} \times \mathcal{P} \to \mathbb{R}^n$, $\sigma: \mathbb{R} \times \mathbb{R}^{2n} \times \mathcal{P} \times \{1, 2, \ldots, N\} \to \mathbb{R}^n$, $\mathcal{P}$ is the set of all probability measures on $\mathbb{R}^n$, and $\varepsilon^N_{x(t)} = (1/N) \sum_{i=1}^N \delta_{x_i(t)}$. This is an interesting extension, featuring a major particle, of the traditional stochastic interacting particle system considered in [9].
In [9] (see also [13]), several significant results for the traditional stochastic interacting particle system are presented. In particular, the limiting process of the sequence $\{\varepsilon^N_{x(t)}\}$ is characterized as a weak solution to a stochastic McKean-Vlasov equation. For the above system, we likewise hope to characterize the limit of the coupled sequence $\{(x_0(t), \varepsilon^N_{x(t)})\}$ as a weak solution to a coupled stochastic McKean-Vlasov equation by using the martingale problem approach. Moreover, conditions for the existence and uniqueness of the weak solution and the strong solution, and regularity properties of the paths of the solution to the desired equation, may be determined by considering the martingale problems in a suitable rigged Hilbert space. This study is expected to bring new insights to stochastic mean field control and game problems with mixed players in our longer-term plan.
Numerical Solution for Mean Field Stochastic Systems. Let us consider the celebrated mean field stochastic differential equation (MFSDE), in which the state process evolves as
$$ dX(t) = f\big(t, X(t), E\varphi(X(t))\big)\, dt + \sigma\big(t, X(t), E\varphi(X(t))\big)\, dW(t), \eqno(4.3) $$
where $W(t)$ is a multi-dimensional Brownian motion defined on a probability space $(\Omega, \mathcal{F}, P)$. The study of MFSDEs is of great theoretical value due to their interesting state-law dependence structure. On the other hand, MFSDEs also possess a broad range of real applications: such large-population interacting systems have been extensively investigated in many different fields, such as engineering, socioeconomic science, game theory, and finance. Since MFSDEs are well-defined and widely used systems, it is important to study numerical approximations of $X(t)$ and its probability law $\mu_t$. In [1] and [4], stochastic particle methods were proposed to construct approximations of the distribution function $V_t$ of $\mu_t$. These methods, however, require generating a large number of stochastic processes to achieve an accurate approximation. Therefore, we intend to investigate another method to find a numerical approximation of $V_t$ and $X(t)$ with reduced computational effort. It is expected that by combining the MFSDE of $X(t)$ with the Fokker-Planck-Kolmogorov equation of $V_t$, we will be able to obtain the desired method.
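For orientation, here is a minimal stochastic particle sketch in the spirit of the methods in [1, 4]: the law dependence $E\varphi(X(t))$ in (4.3) is replaced by an empirical average over $N$ interacting particles. The choices of $f$, $\sigma$, and $\varphi$ below are illustrative assumptions, not from the text:

```python
import numpy as np

def mfsde_particles(N, T, dt, rng):
    """Particle approximation of the MFSDE (4.3): E[phi(X(t))] is
    replaced by (1/N) * sum phi(X_i(t)) over N interacting particles,
    each advanced by an Euler step."""
    phi = np.tanh                             # illustrative phi
    X = rng.standard_normal(N)                # initial particle ensemble
    for _ in range(int(T / dt)):
        m = phi(X).mean()                     # empirical surrogate for E[phi(X(t))]
        drift = -X + m                        # f(t, x, E phi) = -x + E phi (assumed)
        vol = 0.5                             # sigma constant for simplicity
        X = X + drift * dt + vol * np.sqrt(dt) * rng.standard_normal(N)
    return X

rng = np.random.default_rng(4)
X_T = mfsde_particles(N=1000, T=1.0, dt=1e-2, rng=rng)
```

The cost of such a scheme grows with the number of particles needed for an accurate empirical law, which is precisely the computational burden the proposed Fokker-Planck-Kolmogorov-based method aims to reduce.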
I am keen on continuing these and other interdisciplinary projects in the directions described
above. One of my primary goals at this stage is to keep broadening my horizons to as many areas
of mathematics as possible.
References<br />
[1] F. Antonelli and A. Kohatsu-Higa, Rate of convergence of a particle method to the solution of the
McKean-Vlasov equation, Ann. Appl. Probab., 12 (2002), 423–476.
[2] C.T. Bauch and D.J.D. Earn, Vaccination and the theory of games, Proc. Natl. Acad. Sci. U.S.A., 101<br />
(2004), 13391–13394.<br />
[3] J. M. Bismut. Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control<br />
Optim., 14 (1976), 419–444.<br />
[4] M. Bossy and D. Talay, A stochastic particle method for the McKean-Vlasov and the Burgers equation,
Math. Comp., 66 (1997), 157–192.
[5] R. Breban, R. Vardavas, and S. Blower, Mean-field analysis of an inductive reasoning game: Application<br />
to influenza vaccination, Phys. Rev. E, 76 (2007), 031127.<br />
[6] B. Charbonneau, Y. Svyrydov and P.F. Tupper, Weak convergence in the Prokhorov metric of methods
for stochastic differential equations, IMA J. Numer. Anal., 30 (2010), 579–594.
[7] P.J. Courtois, Decomposability: Queueing and Computer System Applications, Academic Press, New<br />
York, NY, 1977.<br />
[8] M. Csörgő and P. Révész, Strong Approximations in Probability and Statistics, Academic Press, New
York-London, 1981.
[9] D. Dawson and J. Vaillancourt, Stochastic McKean-Vlasov equations, NoDEA Nonlinear Differential
Equations Appl., 2 (1995), 199–229.
[10] G. M. Erickson, Differential game models of advertising competition, European J. Oper. Res., 83 (1995),<br />
431–438.<br />
[11] S.N. Ethier and T.G. Kurtz, Markov Processes: Characterization and Convergence, J. Wiley, New York,<br />
NY, 1986.<br />
[12] D. Fudenberg and D.K. Levine, The Theory of Learning in Games, MIT Press, Cambridge, MA, 1998.<br />
[13] J. Gärtner, On the McKean-Vlasov limit for interacting diffusions, Math. Nachr., 137 (1988), 197–248.<br />
[14] I. Gyöngy, A note on Euler’s approximation, Potential Anal., 8 (1998), 205–216.<br />
[15] P. Hall and C.C. Heyde, Martingale Limit Theory and Its Application, Academic Press, New York,<br />
1980.<br />
[16] J.A. Heunis, Strong invariance principle for singular diffusions, Stochastic Process. Appl., 104 (2003),<br />
57–80.<br />
[17] T. Hida and M. Hitsuda. Gaussian Processes, AMS, Providence, RI, 1993.<br />
[18] M. Huang. Large-population LQG games involving a major player: the Nash certainty equivalence<br />
principle. SIAM J. Control Optim., 48 (2010), 3318–3353.<br />
[19] M. Huang, P.E. Caines, and R. P. Malhame, Individual and mass behaviour in large population stochastic<br />
wireless power control problems: Centralized and Nash equilibrium solutions, in Proc. 42nd IEEE<br />
Conf. Dec. Contr., Maui, HI, 2003, 98–103.<br />
[20] M. Huang, P. E. Caines and R. P. Malhamé. Large-population cost-coupled LQG problems with nonuniform<br />
agents: Individual-mass behavior and decentralized ϵ-Nash equilibria. IEEE Trans. Autom. Control,<br />
52 (2007), 1560–1571.<br />
[21] H. Kushner and P. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time,<br />
Springer-Verlag, 2001.<br />
[22] V.E. Lambson, Self-enforcing collusion in large dynamic markets, J. Econ. Theory, 34 (1984), 282–291.<br />
[23] J.-M. Lasry and P.-L. Lions. Jeux à champ moyen. I. Le cas stationnaire, C. R. Math. Acad. Sci. Paris,<br />
343 (2006), 619–625.<br />
[24] J.-M. Lasry and P.-L. Lions, Jeux à champ moyen. II. Horizon fini et contrôle optimal, C. R. Math.
Acad. Sci. Paris, 343 (2006), 679–684.
[25] J.-M. Lasry and P.-L. Lions. Mean field games, Japan. J. Math., 2 (2007), 229–260.<br />
[26] T. Li and J.-F. Zhang. Asymptotically optimal decentralized control for large population stochastic<br />
multiagent systems, IEEE Trans. Automat. Control, 53 (2008), 1643–1660.<br />
[27] J. Kiefer, On the deviations in the Skorokhod-Strassen approximation scheme, Z. Wahrsch. Verw.<br />
Gebiete, 13 (1969), 321–332.<br />
[28] P.E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag,<br />
1992.<br />
[29] X. Mao, A. Truman, and C. Yuan, Euler-Maruyama approximations in mean-reverting stochastic<br />
volatility model under regime-switching, J. Appl. Math. Stoch. Anal., (2006), Article ID 80967.<br />
[30] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.<br />
[31] G.N. Milstein and M.V. Tretyakov, Stochastic Numerics for Mathematical Physics, Springer-Verlag,<br />
Berlin, 2004.<br />
[32] S.L. Nguyen and M. Huang, LQG Mixed games with continuum-parametrized minor players, (2011),<br />
submitted.<br />
[33] S.L. Nguyen and G. Yin, Asymptotic properties of hybrid random processes modulated by Markov
chains, Nonlinear Analysis: Theory, Methods and Applications, 71 (2009), e1638–e1648.
[34] S.L. Nguyen and G. Yin, Asymptotic properties of Markov modulated random sequences with fast and<br />
slow time scales, Stochastics, 82 (2010), 445–474.<br />
[35] S.L. Nguyen and G. Yin, Weak convergence of Markov modulated random sequences, Stochastics, 82<br />
(2010), 521–552.<br />
[36] S.L. Nguyen and G. Yin, Pathwise convergence rate for numerical solutions of stochastic differential<br />
equations, IMA Journal of Numerical Analysis, (2011), in press, doi: 10.1093/imanum/drr025.<br />
[37] S.L. Nguyen and G. Yin, Pathwise convergence rate for numerical solutions of Markovian switching<br />
stochastic differential equations, Nonlinear Analysis: Real World Applications, (2011), to appear.<br />
[38] S.P. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems,
Birkhäuser, Boston, 1994.
[39] H.A. Simon and A. Ando, Aggregation of variables in dynamic systems, Econometrica, 29 (1961),<br />
111–138.<br />
[40] J.M. Smith, Evolution and the Theory of Games, Cambridge University Press, Cambridge, UK, 1982.
[41] D.D. Sworder and J.E. Boyd, Estimation Problems in Hybrid Systems, Cambridge University Press,
Cambridge, UK, 1999.
[42] F. Xi, and G. Yin, Asymptotic properties of a mean-field model with a continuous-state-dependent<br />
switching process, J. Appl. Probab., 46 (2009), 221–243.<br />
[43] G. Yin, R.H. Liu, and Q. Zhang, Recursive algorithms for stock liquidation: a stochastic optimization<br />
approach, SIAM J. Optim., 13 (2002), 240–263.<br />
[44] G. Yin, G. Zhao, and F. Xi, Mean-Field models involving continuous-state-dependent random switching:<br />
Nonnegativity constraints, moment bounds, and two-time-scale limits, Taiwanese J. Math., 15 (2011),<br />
1783–1805.<br />
[45] G. Yin and Q. Zhang, Discrete-time Markov Chains: Two-time-scale Methods and Applications,
Springer, New York, 2005.
[46] H. Yin, P. G. Mehta, S. P. Meyn and U. V. Shanbhag, Synchronization of coupled oscillators is a game,
Proc. American Control Conference, Baltimore, MD, 1783–1790, June, 2010.
[47] C. Zhu and G. Yin, On competitive Lotka-Volterra model in random environments, J. Math. Anal.<br />
Appl., 357 (2009), 154–170.<br />
School of Mathematics & Statistics, Carleton University, 1125 Colonel By
Drive, Ottawa, ON, Canada, K1S 5B6
Email: snguyen@math.carleton.ca