
Available at http://pvamu.edu/aam

Appl. Appl. Math.
ISSN: 1932-9466
Vol. 3, Issue 1 (June 2008), pp. 42 – 54
(Previously, Vol. 3, No. 1)

Applications and Applied Mathematics: An International Journal (AAM)

Some Applications of Dirac's Delta Function in Statistics for More Than One Random Variable

Santanu Chakraborty
Department of Mathematics
University of Texas Pan American
1201 West University Drive, Edinburg, Texas 78541, USA
schakraborty@utpa.edu

Received July 14, 2006; accepted December 7, 2007

Abstract

In this paper, we discuss some interesting applications of Dirac's delta function in Statistics. We have tried to extend some of the existing results to the case of more than one random variable. While doing that, we particularly concentrate on the bivariate case.

Keywords: Dirac's delta function, random variables, distributions, densities, Taylor's series expansions, moment generating functions

1. Introduction

Cauchy, in 1816 (and, independently, Poisson in 1815), was the first to give a derivation of the Fourier integral theorem by means of an argument involving what we would now recognize as a sampling operation of the type associated with the delta function. There are similar examples of the use of what are essentially delta functions by Kirchhoff, Helmholtz and Heaviside. But Dirac was the first to use the notation $\delta$. The Dirac delta function ($\delta$-function) was introduced by Paul Dirac at the end of the 1920s in an effort to create the mathematical tools for the development of quantum field theory. He referred to it as an "improper function" in Dirac (1930). Later, in 1947, Laurent Schwartz gave it a more rigorous mathematical definition as a linear functional on the space of test functions $D$ (the set of all real-valued infinitely differentiable functions with compact support) such that, for a given function $f(x)$ in $D$, the value of the functional is given by property (b) below. This is called the sifting or sampling property of the delta function. Since the delta function is not really a function in the classical sense, one should not consider the "value" of the delta function at $x$. Hence, the domain of the delta function is $D$ and its value, for $f \in D$ and a given $x_0$, is $f(x_0)$.

Khuri (2004) studied some interesting applications of the delta function in statistics. He mainly studied univariate cases even though he did give some interesting examples for the multivariate case. We shall study some more applications in the multivariate scenario in this work. These might help future researchers in statistics to develop more ideas. In sections 2 and 3, we discuss derivatives of the delta function in the univariate and multivariate cases. Then, in section 4, we discuss some applications of the delta function in probability and statistics. In section 5, we discuss calculations of densities in both the univariate and multivariate cases using transformations of variables. In section 6, we use vector notations for delta functions in the multidimensional case. In section 7, we discuss very briefly the transformation of variables in the discrete case. Then, in section 8, we discuss the moment generating function in the multivariate setup. We conclude with a few remarks in section 9.

2. Derivatives of the $\delta$-function in the Univariate Case

In the univariate case, some basic properties satisfied by Dirac's delta function are:

(a) $\displaystyle\int_{-\infty}^{\infty} \delta(x)\,dx = 1$,

(b) $\displaystyle\int_a^b f(x)\,\delta(x - x_0)\,dx = f(x_0)$ for all $a < x_0 < b$,

where $f(x)$ is any function continuous in a neighborhood of the point $x_0$. In particular, we have

$$\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0).$$

This is the sifting property that we mentioned in the previous section. If $f(x)$ is any function with continuous derivatives up to the $n$-th order in some neighborhood of $x_0$, then

$$\int_a^b f(x)\,\delta^{(n)}(x - x_0)\,dx = (-1)^n f^{(n)}(x_0), \quad n \ge 0, \ \text{ for all } a < x_0 < b.$$

In particular, we have

$$\int_{-\infty}^{\infty} f(x)\,\delta^{(n)}(x - x_0)\,dx = (-1)^n f^{(n)}(x_0), \quad n \ge 0,$$

for a given $x_0$. Here, $\delta^{(n)}$ is the generalized $n$-th order derivative of $\delta$. This derivative defines a linear functional which assigns the value $(-1)^n f^{(n)}(x_0)$ to $f(x)$.

Now let us consider the Heaviside function $H(x)$, the unit step function, defined by

$$H(x) = \begin{cases} 0, & x < 0, \\ 1, & x \ge 0. \end{cases}$$

The generalized derivative of $H(x)$ is $\delta(x)$, i.e., $\delta(x) = \dfrac{dH(x)}{dx}$. As a result, we get a special case of the formula for the $n$-th order derivative mentioned above:

$$\int_{-\infty}^{\infty} x^n\,\delta^{(i)}(x)\,dx = \begin{cases} 0, & \text{if } i \ne n, \\ (-1)^n\, n!, & \text{if } i = n. \end{cases}$$
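As a quick numerical illustration of the sifting and derivative properties above, one can replace $\delta$ by a narrow Gaussian and check that the integrals come out as stated. The sketch below is our own minimal illustration (the test function $f(x)=\cos x$, the width $\varepsilon$ and the point $x_0$ are arbitrary choices, not taken from the paper).

```python
import numpy as np

eps = 1e-3        # width of the Gaussian used as a stand-in for delta
x0 = 0.7          # arbitrary evaluation point

delta = lambda u: np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
ddelta = lambda u: -u / eps**2 * delta(u)   # derivative of the Gaussian approximation

f = np.cos
xs = np.linspace(x0 - 10 * eps, x0 + 10 * eps, 2001)   # fine grid over the narrow peak
dx = xs[1] - xs[0]

sift = np.sum(f(xs) * delta(xs - x0)) * dx     # should be close to f(x0) = cos(0.7)
deriv = np.sum(f(xs) * ddelta(xs - x0)) * dx   # should be close to -f'(x0) = sin(0.7)

print(sift, np.cos(x0))
print(deriv, np.sin(x0))
```

The errors are of order $\varepsilon^2$, so shrinking the width of the approximating Gaussian tightens the agreement.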

3. Derivatives of the Delta Function in the Bivariate Case

Following Saichev and Woyczynski (1997), Khuri (2004) provided the extended definition of the delta function in the $n$-dimensional Euclidean space. But we shall mainly concentrate on the bivariate case. As in the univariate case, we can write down similar properties for the bivariate case as well. In the bivariate case, $\delta(x, y) = \delta(x)\,\delta(y)$. So, if we assume $f(x, y)$ to be a continuous function in some neighborhood of $(x_0, y_0)$, then we can write

$$\iint_{\Re \times \Re} f(x, y)\,\delta(x - x_0, y - y_0)\,dx\,dy = f(x_0, y_0),$$

where $\Re$ is the real line.

Now, for this function $f$, if all its partial derivatives up to the $n$-th order are continuous in the abovementioned neighborhood of $(x_0, y_0)$, then

$$\iint_{\Re \times \Re} f(x, y)\,\delta^{(n)}(x - x_0, y - y_0)\,dx\,dy = (-1)^n \sum_{0 \le k \le n} C_k^n \left. \frac{\partial^n f(x, y)}{\partial x^k\,\partial y^{n-k}} \right|_{x = x_0,\, y = y_0}, \qquad (1)$$

where $C_k^n$ is the number of combinations of $k$ out of $n$ objects and $\delta^{(n)}(x, y)$ is the generalized $n$-th order derivative of $\delta(x, y)$. In the general $p$-dimensional case, by using induction on $n$, it can be shown that

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_p)\,\delta^{(n)}(x_1 - x_1^*, \ldots, x_p - x_p^*)\,dx_1 \cdots dx_p = (-1)^n \sum_{k_1 + \cdots + k_p = n} \frac{n!}{k_1! \cdots k_p!} \left. \frac{\partial^n f}{\partial x_1^{k_1} \cdots \partial x_p^{k_p}} \right|_{x = x^*},$$

where $x = (x_1, \ldots, x_p)'$, $x^* = (x_1^*, \ldots, x_p^*)'$ and $f$ is a function of the $p$ variables $x_1, x_2, \ldots, x_p$.
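The bivariate sifting property at the start of this section is easy to confirm symbolically. The following short sketch uses SymPy's DiracDelta; the test function and the point $(x_0, y_0)$ are arbitrary choices made only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
x0, y0 = sp.Rational(1, 2), 2                 # arbitrary point (x0, y0)
f = sp.exp(-x**2 - y**2) * sp.cos(x * y)      # arbitrary smooth test function

# delta(x - x0, y - y0) = delta(x - x0) * delta(y - y0)
integrand = f * sp.DiracDelta(x - x0) * sp.DiracDelta(y - y0)

inner = sp.integrate(integrand, (x, -sp.oo, sp.oo))
lhs = sp.integrate(inner, (y, -sp.oo, sp.oo))
rhs = f.subs({x: x0, y: y0})
print(sp.simplify(lhs - rhs))   # 0
```

The derivative formula (1) can be checked in the same spirit by pairing mixed partial derivatives of the product $\delta(x)\,\delta(y)$ against a smooth test function.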

4. Use of Delta Function to Obtain Discrete Probability Distributions

If $X$ is a discrete random variable that assumes the values $a_1, \ldots, a_n$ with probabilities $p_1, \ldots, p_n$ respectively, such that $\sum_{1 \le i \le n} p_i = 1$, then the probability mass function of $X$ can be represented as

$$p(x) = \sum_{1 \le i \le n} p_i\,\delta(x - a_i).$$

Now let us consider two discrete random variables $X$ and $Y$ which assume the values $a_1, \ldots, a_m$ and $b_1, \ldots, b_n$, respectively, and the joint probability $P(X = a_i, Y = b_j)$ is given by $p_{ij}$ for $i = 1, \ldots, m$ and $j = 1, \ldots, n$, so that the joint probability mass function $p(x, y)$ is given by

$$p(x, y) = \sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij}\,\delta(x - a_i)\,\delta(y - b_j).$$

Similarly, one can write down the joint probability distribution of any finite number of random variables in terms of delta functions as follows.

Suppose we have $k$ random variables $X_1, \ldots, X_k$ with $X_i$ taking values $a_{ij}$, $j = 1, 2, \ldots, n_i$, for $i = 1, 2, \ldots, k$, and with joint probability $P(X_1 = a_{1 i_1}, \ldots, X_k = a_{k i_k}) = p_{i_1 \ldots i_k}$. Then, the joint probability mass function is

$$P(X_1 = x_1, \ldots, X_k = x_k) = \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_k = 1}^{n_k} p_{i_1 \ldots i_k}\,\delta(x_1 - a_{1 i_1}) \cdots \delta(x_k - a_{k i_k}).$$

As an example, we may consider the situation of multinomial distributions. Let $X_1, X_2, \ldots, X_k$ follow a multinomial distribution with parameters $n, p_1, p_2, \ldots, p_k$. Then,

$$P(X_1 = i_1, \ldots, X_k = i_k) = \frac{n!}{i_1! \cdots i_k!}\, p_1^{i_1} \cdots p_k^{i_k},$$

where $i_1, \ldots, i_k$ add up to $n$ and $p_1, \ldots, p_k$ add up to 1. In terms of delta functions, the joint probability mass function is

$$P(X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k) = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_k} \frac{n!}{i_1!\, i_2! \cdots i_k!}\, p_1^{i_1} p_2^{i_2} \cdots p_k^{i_k}\,\delta(x_1 - i_1)\,\delta(x_2 - i_2) \cdots \delta(x_k - i_k).$$

We can also consider conditional probabilities and think of expressing them in terms of the $\delta$-function. Let us go back to the example of the two discrete random variables $X$ and $Y$, where $X$ takes the values $a_1, a_2, \ldots, a_m$ and $Y$ takes the values $b_1, b_2, \ldots, b_n$. Then, the conditional probability of $Y = y$ given $X = x$ is given by

$$p(y \mid x) = P(Y = y \mid X = x) = \frac{P(Y = y, X = x)}{P(X = x)} = \frac{p(x, y)}{p(x)} = \frac{\displaystyle\sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij}\,\delta(x - a_i)\,\delta(y - b_j)}{\displaystyle\sum_{1 \le i \le m} p_i\,\delta(x - a_i)},$$

where $p_i = \sum_{1 \le j \le n} p_{ij}$ is the marginal probability $P(X = a_i)$.
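As a small symbolic check of this representation, one can write a joint pmf as a double delta sum and recover the marginal used in the denominator above by integrating out $y$. The values and probabilities below are made-up toy numbers, chosen only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# (X, Y) takes the values (a_i, b_j) with probabilities p_ij (toy numbers)
a = [0, 1]
b = [0, 1, 2]
p = {(0, 0): sp.Rational(1, 6), (0, 1): sp.Rational(1, 6), (0, 2): sp.Rational(1, 6),
     (1, 0): sp.Rational(1, 4), (1, 1): sp.Rational(1, 8), (1, 2): sp.Rational(1, 8)}

pxy = sum(p[(i, j)] * sp.DiracDelta(x - a[i]) * sp.DiracDelta(y - b[j])
          for i in range(len(a)) for j in range(len(b)))

# total probability mass is 1
total = sp.integrate(sp.integrate(pxy, (x, -sp.oo, sp.oo)), (y, -sp.oo, sp.oo))
print(total)                                   # 1

# marginal of X: integrate out y, leaving sum_i (sum_j p_ij) delta(x - a_i)
px = sp.integrate(pxy, (y, -sp.oo, sp.oo))
print(sp.simplify(px))                         # DiracDelta(x)/2 + DiracDelta(x - 1)/2
```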

5. Densities of Transformations of Random Variables Using the $\delta$-function

If $X$ is a continuous random variable with a density function $f(x)$ and if $Y = g(X)$ is a function of $X$, then the density function of $Y$, namely $h(y)$, is given by

$$h(y) = \int_{-\infty}^{\infty} f(x)\,\delta(y - g(x))\,dx.$$

We can extend this to the two-dimensional case. If $X$ and $Y$ are two continuous random variables with joint density function $f(x, y)$ and if $Z = \phi_1(X, Y)$ and $W = \phi_2(X, Y)$ are two random variables obtained as transformations from $(X, Y)$, then the bivariate density function for $Z$ and $W$ is given by

$$h(z, w) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,\delta(z - \phi_1(x, y))\,\delta(w - \phi_2(x, y))\,dx\,dy,$$

where $z$ and $w$ are the variables corresponding to the transformations $\phi_1(X, Y)$ and $\phi_2(X, Y)$. This has an obvious extension to the general $p$-dimensional case.
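A numerical sketch of the univariate recipe, using a narrow Gaussian as a stand-in for $\delta$: take $X$ exponential with rate 1 and $Y = g(X) = X^2$, whose density is known in closed form to be $h(y) = e^{-\sqrt{y}}/(2\sqrt{y})$ for $y > 0$. The example, the approximating width and the evaluation points are our own illustrative choices.

```python
import numpy as np

eps = 1e-3                                   # width of the Gaussian standing in for delta
delta = lambda u: np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = lambda x: np.exp(-x)                     # density of X ~ Exp(1) on x > 0
g = lambda x: x**2                           # transformation Y = g(X) = X^2
h_exact = lambda y: np.exp(-np.sqrt(y)) / (2 * np.sqrt(y))

def h(y):
    # h(y) = int_0^inf f(x) delta(y - g(x)) dx; the integrand is concentrated
    # near x = sqrt(y), so a fine grid around that point suffices
    xstar = np.sqrt(y)
    xs = np.linspace(xstar - 50 * eps, xstar + 50 * eps, 4001)
    dx = xs[1] - xs[0]
    return np.sum(f(xs) * delta(y - g(xs))) * dx

for y in (0.25, 1.0, 4.0):
    print(y, h(y), h_exact(y))   # the two columns should agree closely
```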

Khuri (2004) gave an example of two independent gamma random variables $X$ and $Y$ with distributions $\Gamma(\lambda, \alpha_1)$ and $\Gamma(\lambda, \alpha_2)$, respectively. If we denote the densities by $f_1$ and $f_2$ respectively, then we have

$$f_1(x) = \begin{cases} \dfrac{\lambda^{\alpha_1}}{\Gamma(\alpha_1)}\, x^{\alpha_1 - 1} e^{-\lambda x}, & x > 0, \\ 0, & x \le 0, \end{cases}
\qquad
f_2(y) = \begin{cases} \dfrac{\lambda^{\alpha_2}}{\Gamma(\alpha_2)}\, y^{\alpha_2 - 1} e^{-\lambda y}, & y > 0, \\ 0, & y \le 0. \end{cases}$$

In that case, if we define $Z = \dfrac{X}{X + Y}$ and $W = X + Y$, then $Z$ is distributed as Beta with parameters $\alpha_1, \alpha_2$ and $W$ is distributed as Gamma with parameter values $\lambda$ and $\alpha_1 + \alpha_2$. From now on, we shall use $\beta(a, b)$ to indicate a Beta random variable with positive parameters $a, b$ and $\Gamma(c, k)$ to indicate a Gamma distribution with positive parameters $c$ and $k$.
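A quick way to sanity-check this example is by simulation: draw independent $\Gamma(\lambda, \alpha_1)$ and $\Gamma(\lambda, \alpha_2)$ samples and compare $Z = X/(X+Y)$ and $W = X+Y$ against the claimed Beta and Gamma laws. The Monte Carlo sketch below is our own (the parameter values, sample size and use of a Kolmogorov-Smirnov test are arbitrary choices, not part of the paper).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, a1, a2 = 2.0, 1.5, 3.0          # arbitrary rate and shape parameters
n = 200_000

# numpy's gamma sampler uses shape and scale = 1/rate
X = rng.gamma(shape=a1, scale=1 / lam, size=n)
Y = rng.gamma(shape=a2, scale=1 / lam, size=n)
Z, W = X / (X + Y), X + Y

# Z should be Beta(a1, a2); W should be Gamma with shape a1 + a2 and rate lam
print(stats.kstest(Z, 'beta', args=(a1, a2)))
print(stats.kstest(W, 'gamma', args=(a1 + a2, 0, 1 / lam)))   # (shape, loc, scale)
```

Small KS statistics (and non-small p-values) indicate that the simulated samples are consistent with the stated distributions.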

Now let us consider three random variables $X_1, X_2, X_3$ distributed independently so that $X_i$ is distributed as gamma, i.e., $\Gamma(\tfrac{1}{2}, \alpha_i)$ for $i = 1, 2, 3$, and define

$$Y_1 = \frac{X_1}{X_1 + X_2}, \qquad Y_2 = \frac{X_1 + X_2}{X_1 + X_2 + X_3}, \qquad Y_3 = X_1 + X_2 + X_3.$$

Then we have

$Y_1$ distributed as $\beta(\alpha_1, \alpha_2)$,
$Y_2$ distributed as $\beta(\alpha_1 + \alpha_2, \alpha_3)$,
$Y_3$ distributed as $\Gamma(\tfrac{1}{2}, \alpha_1 + \alpha_2 + \alpha_3)$.

This can be shown following exactly the same technique used in Khuri (2004); it is a one-step generalization of the result proved there. So, we define

$$\delta^*(x_1, x_2, x_3, y_1, y_2, y_3) = \delta\!\left(\frac{x_1}{x_1 + x_2} - y_1\right) \delta\!\left(\frac{x_1 + x_2}{x_1 + x_2 + x_3} - y_2\right) \delta(x_1 + x_2 + x_3 - y_3).$$

The joint density of $Y_1, Y_2$ and $Y_3$ is given by

$$g(y_1, y_2, y_3) = \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} f(x_1, x_2, x_3)\,\delta^*(x_1, x_2, x_3, y_1, y_2, y_3)\,dx_1\,dx_2\,dx_3$$
$$= \frac{(1/2)^{\alpha_1 + \alpha_2 + \alpha_3}}{\Gamma(\alpha_1)\Gamma(\alpha_2)\Gamma(\alpha_3)} \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} x_3^{\alpha_3 - 1}\, e^{-\frac{x_1 + x_2 + x_3}{2}}\,\delta^*(x_1, x_2, x_3, y_1, y_2, y_3)\,dx_1\,dx_2\,dx_3.$$

Using properties of the delta function, the innermost integral (the integral with respect to $x_1$) is

$$\int_0^{\infty} x_1^{\alpha_1 - 1}\, e^{-\frac{x_1 + x_2 + x_3}{2}}\,\delta\!\left(\frac{x_1}{x_1 + x_2} - y_1\right) \delta\!\left(\frac{x_1 + x_2}{x_1 + x_2 + x_3} - y_2\right) \delta\big(x_1 - (y_3 - x_2 - x_3)\big)\,dx_1$$
$$= (y_3 - x_2 - x_3)^{\alpha_1 - 1}\, e^{-\frac{y_3}{2}}\,\delta\!\left(\frac{y_3 - x_2 - x_3}{y_3 - x_3} - y_1\right) \delta\!\left(\frac{y_3 - x_3}{y_3} - y_2\right).$$

Next, we use delta function properties to deal with the second integral (the integral with respect to $x_2$):

$$\int_0^{\infty} x_2^{\alpha_2 - 1} (y_3 - x_2 - x_3)^{\alpha_1 - 1}\, e^{-\frac{y_3}{2}}\,\delta\!\left(\frac{y_3 - x_2 - x_3}{y_3 - x_3} - y_1\right) dx_2$$
$$= (y_3 - x_3) \int_0^{\infty} x_2^{\alpha_2 - 1} (y_3 - x_2 - x_3)^{\alpha_1 - 1}\, e^{-\frac{y_3}{2}}\,\delta\big(x_2 - (y_3 - x_3)(1 - y_1)\big)\,dx_2$$
$$= (y_3 - x_3)^{\alpha_1 + \alpha_2 - 1}\, y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 - 1}\, e^{-\frac{y_3}{2}}.$$

Then, we use delta function properties for the outermost integral (without the constant terms):

$$\int_0^{\infty} x_3^{\alpha_3 - 1} (y_3 - x_3)^{\alpha_1 + \alpha_2 - 1}\, y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 - 1}\, e^{-\frac{y_3}{2}}\,\delta\!\left(\frac{y_3 - x_3}{y_3} - y_2\right) dx_3$$
$$= y_3 \int_0^{\infty} x_3^{\alpha_3 - 1} (y_3 - x_3)^{\alpha_1 + \alpha_2 - 1}\, y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 - 1}\, e^{-\frac{y_3}{2}}\,\delta\big(x_3 - y_3(1 - y_2)\big)\,dx_3$$
$$= y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 - 1}\, y_2^{\alpha_1 + \alpha_2 - 1} (1 - y_2)^{\alpha_3 - 1}\, y_3^{\alpha_1 + \alpha_2 + \alpha_3 - 1}\, e^{-\frac{y_3}{2}}.$$

Finally, putting the constant terms together, we get

$$g(y_1, y_2, y_3) = \frac{1}{2^{\alpha_1 + \alpha_2 + \alpha_3}\,\Gamma(\alpha_1)\Gamma(\alpha_2)\Gamma(\alpha_3)}\; y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 - 1}\, y_2^{\alpha_1 + \alpha_2 - 1} (1 - y_2)^{\alpha_3 - 1}\, y_3^{\alpha_1 + \alpha_2 + \alpha_3 - 1}\, e^{-\frac{y_3}{2}}.$$

Since this expression factorizes into the product of a $\beta(\alpha_1, \alpha_2)$ density in $y_1$, a $\beta(\alpha_1 + \alpha_2, \alpha_3)$ density in $y_2$ and a $\Gamma(\tfrac{1}{2}, \alpha_1 + \alpha_2 + \alpha_3)$ density in $y_3$, the three stated distributions (and the independence of $Y_1, Y_2, Y_3$) follow. This completes the proof.
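A short SymPy sketch of the bookkeeping behind that factorization (our own check, not part of the paper; the symbols mirror the formula above):

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3', positive=True)
a1, a2, a3 = sp.symbols('alpha1 alpha2 alpha3', positive=True)

# g(y1, y2, y3) as obtained above
g = (sp.Rational(1, 2)**(a1 + a2 + a3) / (sp.gamma(a1) * sp.gamma(a2) * sp.gamma(a3))
     * y1**(a1 - 1) * (1 - y1)**(a2 - 1)
     * y2**(a1 + a2 - 1) * (1 - y2)**(a3 - 1)
     * y3**(a1 + a2 + a3 - 1) * sp.exp(-y3 / 2))

# claimed marginals: Beta(a1, a2), Beta(a1+a2, a3), and Gamma with rate 1/2, shape a1+a2+a3
f1 = y1**(a1 - 1) * (1 - y1)**(a2 - 1) / sp.beta(a1, a2)
f2 = y2**(a1 + a2 - 1) * (1 - y2)**(a3 - 1) / sp.beta(a1 + a2, a3)
f3 = (sp.Rational(1, 2)**(a1 + a2 + a3) * y3**(a1 + a2 + a3 - 1)
      * sp.exp(-y3 / 2) / sp.gamma(a1 + a2 + a3))

# the ratio simplifies to 1, confirming both the marginals and the independence
print(sp.simplify((g / (f1 * f2 * f3)).rewrite(sp.gamma)))   # 1
```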
6. Vector Notations for Delta Functions in the Multidimensional Case

In the multidimensional case, if the transformation is linear, i.e., $Y = AX$ where $Y$ and $X$ are $m \times 1$ and $n \times 1$ vectors respectively and $A$ is an $m \times n$ matrix, then we can express $g(y)$, the density of $Y$, in vector notation in terms of $f(x)$, the density of $X$, as follows:

$$g(y) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x)\,\delta(y - Ax)\,dx, \qquad (2)$$

where $\delta(y) = \delta(y_1) \cdots \delta(y_m)$. So,

$$\delta(y - Ax) = \delta(y_1 - a_1^T x) \cdots \delta(y_m - a_m^T x),$$

where $a_1^T, \ldots, a_m^T$ are the rows of the matrix $A$. Now, in the one-dimensional setup, if $a$ is a scalar, then $\delta(ax)$ is given by $\delta(ax) = \dfrac{\delta(x)}{|a|}$. Similarly, in the multidimensional setup, if $Y = AX$ as above and $A$ is a nonsingular matrix so that $m = n$, then we must have

$$\delta(y - Ax) = \frac{\delta(x - A^{-1} y)}{|A|}. \qquad (3)$$

This is because of the following: since the transformation is nonsingular, we have

$$g(y) = \frac{1}{|A|}\, f(A^{-1} y),$$

and therefore, from (2),

$$f(A^{-1} y) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x)\, |A|\,\delta(y - Ax)\,dx.$$

But we know that

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x)\,\delta(x - A^{-1} y)\,dx = f(A^{-1} y).$$

Therefore, (3) follows. Similarly, if $Y = AX + b$ where $A$ is a nonsingular matrix and $b$ is a vector, then we have

$$\delta(y - (Ax + b)) = \frac{\delta(x - A^{-1}(y - b))}{|A|}.$$

Using this, one can conclude that if $X$ is multivariate normal with $\mu$ as the mean vector and $\Sigma$ as the covariance matrix, then, for a nonsingular transformation $A$ and a constant vector $b$, the transformed vector $Y = AX + b$ follows a multivariate normal distribution with $A\mu + b$ as the mean vector and $A \Sigma A^T$ as the variance-covariance matrix.
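A small Monte Carlo sketch of this last statement (the dimensions, $A$, $b$, $\mu$ and $\Sigma$ below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.4],
                  [0.0, 0.4, 1.5]])
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0],
              [3.0, 0.0, 1.0]])        # nonsingular
b = np.array([0.5, 0.0, -1.0])

X = rng.multivariate_normal(mu, Sigma, size=500_000)
Y = X @ A.T + b                        # Y = AX + b, applied row by row

print(np.abs(Y.mean(axis=0) - (A @ mu + b)).max())              # ~0 up to Monte Carlo error
print(np.abs(np.cov(Y, rowvar=False) - A @ Sigma @ A.T).max())  # ~0 up to Monte Carlo error
```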

7. Transformation of Variables in the Discrete Case

Transformation of variables can be applied to the discrete case as well. If $X$ is a discrete random variable taking the values $a_1, \ldots, a_n$ with probabilities $p_1, \ldots, p_n$ and if $Y = g(X)$ is a transformed variable, then the corresponding probability mass function for $Y$ is given by

$$q(y) = \int_{-\infty}^{\infty} p(x)\,\delta(y - g(x))\,dx = \sum_{i=1}^{n} p_i\,\delta(y - g(a_i)).$$

In the two-dimensional case, if $p(x, y)$ is the joint probability mass function for the two variables $X$ and $Y$, then $q(z, w)$, the joint probability mass function for the transformed pair $Z = \phi_1(X, Y)$ and $W = \phi_2(X, Y)$, is given by

$$q(z, w) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x, y)\,\delta(z - \phi_1(x, y))\,\delta(w - \phi_2(x, y))\,dx\,dy = \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}\,\delta(z - \phi_1(a_i, b_j))\,\delta(w - \phi_2(a_i, b_j)),$$

where the pair $(X, Y)$ is discrete, taking the values $(a_i, b_j)$ with probabilities $p_{ij}$ for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.
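Because each term of the double sum is a point mass, the transformed pmf is obtained simply by routing each probability $p_{ij}$ to the point $(\phi_1(a_i, b_j), \phi_2(a_i, b_j))$ and adding up whatever lands on the same point. A tiny Python illustration with made-up values (the transformations $Z = X + Y$, $W = X - Y$ are our own choice):

```python
from collections import defaultdict
from fractions import Fraction

# (X, Y) takes the values (a_i, b_j) with probabilities p_ij (toy numbers)
pmf = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
       (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

phi1 = lambda x, y: x + y    # Z = X + Y
phi2 = lambda x, y: x - y    # W = X - Y

# each point mass moves to its image point; probabilities landing on the
# same image point add up, exactly as the double delta sum prescribes
q = defaultdict(Fraction)
for (x, y), prob in pmf.items():
    q[(phi1(x, y), phi2(x, y))] += prob

print(dict(q))   # four image points, each carrying probability 1/4
```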

8. Moments and Moment Generating Functions

In the univariate setup, the $k$-th non-central moment of $X$ is written as

$$\int_{-\infty}^{\infty} x^k p(x)\,dx = \int_{-\infty}^{\infty} x^k \sum_{1 \le i \le n} p_i\,\delta(x - a_i)\,dx = \sum_{1 \le i \le n} p_i \int_{-\infty}^{\infty} x^k\,\delta(x - a_i)\,dx = \sum_{1 \le i \le n} p_i\, a_i^k.$$
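In other words, the delta-sum representation reduces the moment integral to the familiar weighted sum over the support. A quick numerical illustration for a Binomial(5, 0.3) variable (our own example):

```python
import numpy as np
from scipy.stats import binom

n, prob = 5, 0.3
a = np.arange(n + 1)              # support points a_i of X ~ Binomial(5, 0.3)
p = binom.pmf(a, n, prob)         # probabilities p_i

m1 = np.sum(p * a)                # first non-central moment, should be n*prob = 1.5
m2 = np.sum(p * a**2)             # second non-central moment
print(m1, m2 - m1**2)             # mean 1.5 and variance n*prob*(1-prob) = 1.05
```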

In the bivariate setup, the non-central moment of order $(k, l)$ for $(X, Y)$ is given by

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^k y^l p(x, y)\,dx\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^k y^l \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}\,\delta(x - a_i)\,\delta(y - b_j)\,dx\,dy$$
$$= \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^k y^l\,\delta(x - a_i, y - b_j)\,dx\,dy = \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}\, a_i^k\, b_j^l.$$

For example, if $(X, Y)$ follow a trinomial distribution, then

$$p(x, y) = \sum_{k=0}^{n} \sum_{l=0}^{n-k} \frac{n!}{k!\, l!\, (n - k - l)!}\, p_1^k\, p_2^l\, (1 - p_1 - p_2)^{n - k - l}\,\delta(x - k)\,\delta(y - l).$$

As a result, the corresponding non-central moment of order $(r, s)$ is given by

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^r y^s p(x, y)\,dx\,dy = \sum_{k=0}^{n} \sum_{l=0}^{n-k} \frac{n!}{k!\, l!\, (n - k - l)!}\, p_1^k\, p_2^l\, (1 - p_1 - p_2)^{n - k - l}\, k^r\, l^s.$$
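As a concrete check of this formula, take $r = s = 1$: summing $k\,l$ against the trinomial weights should reproduce the product moment $E[XY] = n(n-1)p_1 p_2$ of the trinomial distribution. A short sketch with arbitrary parameter values:

```python
from math import factorial

n, p1, p2 = 6, 0.2, 0.5     # arbitrary trinomial parameters

def weight(k, l):
    # trinomial probability P(X = k, Y = l)
    return (factorial(n) / (factorial(k) * factorial(l) * factorial(n - k - l))
            * p1**k * p2**l * (1 - p1 - p2)**(n - k - l))

# non-central moment of order (1, 1) via the delta-sum formula
m11 = sum(weight(k, l) * k * l for k in range(n + 1) for l in range(n - k + 1))

print(m11, n * (n - 1) * p1 * p2)   # both equal 3.0
```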

The most interesting part of Khuri's article is the representation of the density function of a continuous random variable $X$ in terms of its non-central moments. Thus, if $f(x)$ is the density function for $X$, then $f(x)$ is represented as

$$f(x) = \sum_{0 \le m < \infty} \frac{(-1)^m}{m!}\,\mu_m\,\delta^{(m)}(x),$$

where $\delta^{(m)}(x)$ is the generalized $m$-th derivative of $\delta(x)$ and $\mu_m$ is the $m$-th order non-central moment of the random variable $X$. One can see a proof of this result in Kanwal (1998). Let us briefly mention the technique used by him to derive the above expression. He showed that, for any real function $\psi$ defined and differentiable of all orders in a neighborhood of zero,

$$\langle f, \psi \rangle = \Big\langle \sum_{0 \le n < \infty} \frac{(-1)^n}{n!}\,\mu_n\,\delta^{(n)}(x),\ \psi \Big\rangle,$$

where $\langle f, \psi \rangle$, the inner product between the functions $f$ and $\psi$, is defined as

$$\langle f, \psi \rangle = \int_{-\infty}^{\infty} f(x)\,\psi(x)\,dx.$$

This leads us to conclude that

$$f(x) = \sum_{0 \le n < \infty} \frac{(-1)^n}{n!}\,\mu_n\,\delta^{(n)}(x). \qquad (4)$$

Then,

$$\psi^{(n)}(0) = (-1)^n \int_{-\infty}^{\infty} \psi(x)\,\delta^{(n)}(x)\,dx = (-1)^n \langle \psi, \delta^{(n)} \rangle.$$

These two steps give us the relation (4).
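Relation (4) pairs $f$ with a test function through the moments alone: $\langle f, \psi \rangle = \sum_n \mu_n\,\psi^{(n)}(0)/n!$. As an illustration (our own example, not from the paper), take $X$ uniform on $(0, 1)$, so that $\mu_n = 1/(n+1)$, and $\psi(x) = \cos x$; the truncated sums converge to $E[\cos X] = \sin 1$.

```python
import math

# X ~ Uniform(0, 1): mu_n = 1/(n+1); test function psi(x) = cos(x)
mu = lambda k: 1.0 / (k + 1)

def psi_deriv_at_0(k):
    # k-th derivative of cos at 0 cycles through 1, 0, -1, 0, ...
    return [1, 0, -1, 0][k % 4]

for N in (2, 4, 8, 16):
    s = sum(mu(k) * psi_deriv_at_0(k) / math.factorial(k) for k in range(N + 1))
    print(N, s)            # partial sums approach sin(1) = 0.84147...

print(math.sin(1.0))       # exact value of E[cos X] = <f, psi>
```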

When we move to the two-dimensional scenario, we shall have to use the Taylor series expansion of a two-dimensional analytic function $\psi$ about the point $(0, 0)$, which is given by

$$\psi(x, y) = \sum_{0 \le n < \infty} \frac{1}{n!} \left[ x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y} \right]^n \psi(x, y) \Big|_{x=0,\, y=0} = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} C_k^n\, x^k y^{n-k}\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0}. \qquad (5)$$

Now, here also, we follow the same technique, and so we compute $\langle f, \psi \rangle$, which is defined as

$$\langle f, \psi \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,\psi(x, y)\,dx\,dy.$$

Now, using the Taylor series expansion of $\psi(x, y)$ about $(0, 0)$ from (5) and assuming that the interchange of integrals with summations is permissible, we get

$$\langle f, \psi \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y) \left[ \sum_{0 \le n < \infty} \frac{1}{n!} \sum_{0 \le k \le n} C_k^n\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0} x^k y^{n-k} \right] dx\,dy$$
$$= \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, x^k y^{n-k}\,dx\,dy$$
$$= \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0}\,\langle f, x^k y^{n-k} \rangle.$$

Now we define the $(r, s)$-th order non-central moment for the pair $(X, Y)$ as

$$\mu_{r, s} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, x^r y^s\,dx\,dy = \langle f, x^r y^s \rangle.$$

Then, from the above,

$$\langle f, \psi \rangle = \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0}\,\mu_{k, n-k} = \Big\langle \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\,(-1)^n\,\delta^{(k)}(x)\,\delta^{(n-k)}(y),\ \psi \Big\rangle.$$

The last equality follows because of the relation

$$\langle \psi,\ \delta^{(k)}(x)\,\delta^{(n-k)}(y) \rangle = (-1)^n\, \frac{\partial^n \psi(x, y)}{\partial x^k\,\partial y^{n-k}} \Big|_{x=0,\, y=0}.$$

Therefore, we have

$$f(x, y) = \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\,(-1)^n\,\delta^{(k)}(x)\,\delta^{(n-k)}(y).$$

Now, the non-central moment of order $(r, s)$ is given by

$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, x^r y^s\,dx\,dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^r y^s \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\,(-1)^n\,\delta^{(k)}(x)\,\delta^{(n-k)}(y)\,dx\,dy.$$

But we also have

$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^r y^s\,\delta^{(i)}(x)\,\delta^{(j)}(y)\,dx\,dy = \begin{cases} 0, & \text{if } i \ne r \text{ or } j \ne s, \\ (-1)^{r+s}\, r!\, s!, & \text{if } i = r \text{ and } j = s. \end{cases}$$

Therefore, the non-central moment of order $(r, s)$ reduces to $\mu_{r, s}$.

When we talk about the moment generating function in the one-variable case, we have

$$\phi(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} \sum_{0 \le n < \infty} \frac{(-1)^n}{n!}\,\mu_n\,\delta^{(n)}(x)\,dx = \sum_{0 \le n < \infty} \frac{(-1)^n}{n!}\,\mu_n \int_{-\infty}^{\infty} e^{tx}\,\delta^{(n)}(x)\,dx$$
$$= \sum_{0 \le n < \infty} \frac{(-1)^n}{n!}\,\mu_n\,(-1)^n\,\frac{d^n}{dx^n} e^{tx} \Big|_{x=0} = \sum_{0 \le n < \infty} \frac{\mu_n\, t^n}{n!}.$$

In the two-variable case, the moment generating function is given by

$$\phi(s, t) = E(e^{sX + tY}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{sx + ty} f(x, y)\,dx\,dy$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{sx + ty} \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\,(-1)^n\,\delta^{(k)}(x)\,\delta^{(n-k)}(y)\,dx\,dy$$
$$= \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\,(-1)^n \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{sx + ty}\,\delta^{(k)}(x)\,\delta^{(n-k)}(y)\,dx\,dy$$
$$= \sum_{0 \le n < \infty} \sum_{0 \le k \le n} \frac{1}{k!\,(n-k)!}\,\mu_{k, n-k}\, s^k\, t^{n-k}.$$

In the general $p$-dimensional case, the moment generating function can be obtained in exactly the same fashion, so that it is given by

$$\phi(t_1, t_2, \ldots, t_p) = \sum_{0 \le n < \infty}\ \sum_{k_1 + \cdots + k_p = n} \mu_{k_1, k_2, \ldots, k_p}\, \frac{t_1^{k_1}\, t_2^{k_2} \cdots t_p^{k_p}}{k_1!\, k_2! \cdots k_p!}.$$
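A quick check of the two-variable formula (our own example, not from the paper): for $X, Y$ independent standard exponentials, $\mu_{k,l} = k!\,l!$, so the double series becomes $\sum_{k,l} s^k t^l = 1/((1-s)(1-t))$, the known joint MGF. Numerically, the truncated sum approaches that value.

```python
import math

s, t = 0.3, 0.4       # arbitrary evaluation point of the MGF

# moments of two independent standard exponentials: mu_{k,l} = k! * l!
mu = lambda k, l: math.factorial(k) * math.factorial(l)

for N in (5, 10, 20, 40):
    total = sum(mu(k, n - k) * s**k * t**(n - k)
                / (math.factorial(k) * math.factorial(n - k))
                for n in range(N + 1) for k in range(n + 1))
    print(N, total)

print(1 / ((1 - s) * (1 - t)))   # known joint MGF E[exp(sX + tY)] at (0.3, 0.4)
```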

9. Concluding Remarks

The study of generalized functions is now widely used in applied mathematics and the engineering sciences. The $\delta$-function provides a unified way of treating discrete and continuous distributions, and it has the potential to facilitate new ways of examining some classical concepts in mathematical statistics. Some interesting applications along these lines can already be found in the paper by Pazman and Pronzato (1996), where the authors use a delta-function approach for densities of nonlinear statistics and for marginal densities in nonlinear regression. We look forward to obtaining further interesting applications of the delta function in statistics.

Acknowledgement:

I am sincerely thankful to Professor Lokenath Debnath at the University of Texas-Pan American for bringing this problem to my notice.

REFERENCES

Dirac, P.A.M. (1930). The Principles of Quantum Mechanics, Oxford University Press.

Hoskins, R.F. (1998). Generalized Functions, Ellis Horwood Limited, Chichester, Sussex, England.

Kanwal, R.P. (1998). Generalized Functions: Theory and Technique (2nd Edition), Boston, MA, Birkhauser.

Khuri, A.I. (2004). Applications of Dirac's delta function in statistics, International Journal of Mathematical Education in Science and Technology, 35, no. 2, 185-195.

Pazman, A. and Pronzato, L. (1996). A Dirac-function method for densities of nonlinear statistics and for marginal densities in nonlinear regression, Statistics & Probability Letters, 26, 159-167.

Saichev, A.I. and Woyczynski, W.A. (1997). Distributions in the Physical and Engineering Sciences, Boston, MA, Birkhauser.
