Scribe 9 - Classes

i.e., the density $f$ can be factored into a product such that one factor, $h$, does not depend on $\theta$, and the other factor, which does depend on $\theta$, depends on $X$ only through $T(X)$.

Example 2: If $X_1, \ldots, X_n$ are independent and uniformly distributed on the interval $[0, \theta]$, then $T(X) = \max(X_1, \ldots, X_n)$ is sufficient for $\theta$: the sample maximum is a sufficient statistic for the population maximum.

To see this, consider the joint probability density function of $X = (X_1, \ldots, X_n)$. Because the observations are independent, the pdf can be written as a product of individual densities:

$$f_X(x_1, \ldots, x_n) = \frac{1}{\theta}\,\mathbf{1}_{\{0 \le x_1 \le \theta\}} \cdots \frac{1}{\theta}\,\mathbf{1}_{\{0 \le x_n \le \theta\}} \tag{1}$$
$$= \frac{1}{\theta^n}\,\mathbf{1}_{\{0 \le \min\{x_i\}\}}\,\mathbf{1}_{\{\max\{x_i\} \le \theta\}} \tag{2}$$

where $\mathbf{1}_{\{\cdot\}}$ is the indicator function. Thus the density takes the form required by the Fisher-Neyman factorization theorem, where $h(x) = \mathbf{1}_{\{\min\{x_i\} \ge 0\}}$, and the rest of the expression is a function of only $\theta$ and $T(x) = \max\{x_i\}$.
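To make the factorization concrete, here is a minimal numerical sketch in Python (the sample values, sample size, and grid of $\theta$ values are arbitrary choices for illustration). It evaluates the joint Uniform$[0, \theta]$ density for two different samples that share the same maximum and shows the two likelihoods agree at every $\theta$, i.e. the data enter the likelihood only through $T(x) = \max\{x_i\}$.

```python
import numpy as np

def uniform_joint_density(x, theta):
    """Joint density of n i.i.d. Uniform[0, theta] observations:
    theta^(-n) if all points lie in [0, theta], else 0."""
    x = np.asarray(x, dtype=float)
    if x.min() < 0 or x.max() > theta:
        return 0.0
    return theta ** (-len(x))

# Two samples of the same size with the same maximum (0.9)
# but otherwise different values.
sample_a = [0.2, 0.5, 0.9]
sample_b = [0.7, 0.1, 0.9]

for theta in [0.8, 1.0, 1.5, 2.0]:
    fa = uniform_joint_density(sample_a, theta)
    fb = uniform_joint_density(sample_b, theta)
    print(f"theta={theta}: f_a={fa:.4f}, f_b={fb:.4f}")
# The two likelihoods coincide at every theta, as the factorization
# theorem predicts for a sufficient statistic.
```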

FANO’S INEQUALITY

This scenario is viewed as a communication link between a source and a receiver: $X$ is the data transmitted from the source, and $Y$ is the data received at the receiver. The channel is noisy. If we estimate $X$ from $Y$, what is $P(X \ne \hat{X})$?

$$P_e \equiv P(X \ne \hat{X}) \;\ge\; \frac{H(X|Y) - H(P_e)}{\log(N-1)} \;\ge\; \frac{H(X|Y) - 1}{\log(N-1)}$$

$N$ is the size of the outcome set of $X$. This inequality gives a lower bound on the error of estimating $X$ from $Y$.

Proof: Let $E = \mathbf{1}(X \ne \hat{X})$ be an indicator function, so $E$ is a binary random variable with $P(E = 1) = P_e$, $P(E = 0) = 1 - P_e$, and $H(E) = H(P_e)$.

By the chain rule we have
$$H(E, X \mid Y) = H(X \mid Y) + H(E \mid X, Y) = H(X \mid Y) \tag{1}$$
because knowing $X$ and $Y$, we know $E$, so $H(E \mid X, Y) = 0$.

Expanding in the other order, we also have:
$$H(E, X \mid Y) = H(E \mid Y) + H(X \mid E, Y)$$
$$\le H(E) + H(X \mid Y, E = 0) \cdot (1 - P_e) + H(X \mid Y, E = 1) \cdot P_e$$
$$= H(P_e) + H(X \mid Y, E = 1) \cdot P_e$$
(since $H(X \mid Y, E = 0) = 0$: when $E = 0$, $X = \hat{X}$ is determined by $Y$)
$$\le H(P_e) + \log(N - 1) \cdot P_e \tag{2}$$
since $H(X \mid Y, E = 1)$ is maximized when $X$ is equally likely among the other $N - 1$ choices.

From (1) and (2), we have $H(X|Y) \le H(P_e) + \log(N - 1) \cdot P_e$, and therefore
$$P_e \;\ge\; \frac{H(X|Y) - H(P_e)}{\log(N-1)} \;\ge\; \frac{H(X|Y) - 1}{\log(N-1)}$$
The last inequality holds since $H(P_e) \le 1$.
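The bound packages neatly into a small helper. The sketch below is a Python illustration (the function names are mine, and entropies are measured in bits, so $\log = \log_2$ and $H(P_e) \le 1$); it implements both forms of the inequality: the sharper one when $P_e$ is available to evaluate $H(P_e)$, and the weaker one otherwise.

```python
import math

def binary_entropy(p):
    """H(p) in bits, with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_lower_bound(h_x_given_y, n, p_e=None):
    """Lower bound on P(X != X_hat) for a source with N outcomes.

    Uses (H(X|Y) - H(p_e)) / log(N - 1) when p_e is given,
    and the weaker (H(X|Y) - 1) / log(N - 1) otherwise.
    """
    penalty = 1.0 if p_e is None else binary_entropy(p_e)
    return (h_x_given_y - penalty) / math.log2(n - 1)
```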

FANO’S INEQUALITY EXAMPLE

Consider a source $X \in \{1, \ldots, 5\}$ with $P(X) = [0.35, 0.35, 0.1, 0.1, 0.1]^T$, and let $Y \in \{1, 2\}$: if $X \le 2$ then $Y = X$ with probability $6/7$, while if $X > 2$ then $Y = 1$ or $2$ with equal probability. Our best strategy is to guess $\hat{X} = Y$. We now calculate the actual error probability and the bound given by Fano’s inequality.
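A sketch of that calculation in Python, with entropies in bits; it assumes that when $X \le 2$ the remaining probability $1/7$ sends $Y$ to the other symbol of $\{1, 2\}$, which is the natural reading of the setup above:

```python
import numpy as np

p_x = np.array([0.35, 0.35, 0.1, 0.1, 0.1])   # P(X = 1), ..., P(X = 5)

# Channel P(Y | X) with Y in {1, 2}; rows are X = 1..5.
p_y_given_x = np.array([
    [6/7, 1/7],   # X = 1: Y = X w.p. 6/7 (flip to the other symbol assumed w.p. 1/7)
    [1/7, 6/7],   # X = 2
    [1/2, 1/2],   # X = 3: Y equally likely 1 or 2
    [1/2, 1/2],   # X = 4
    [1/2, 1/2],   # X = 5
])

p_xy = p_x[:, None] * p_y_given_x     # joint P(X, Y), shape (5, 2)
p_y = p_xy.sum(axis=0)                # marginal P(Y) = [0.5, 0.5]

# Guess X_hat = Y: correct exactly when (X, Y) is (1, 1) or (2, 2)
p_e = 1 - (p_xy[0, 0] + p_xy[1, 1])   # 1 - (0.3 + 0.3) = 0.4

# Conditional entropy H(X|Y) = -sum_{x,y} p(x,y) log2 p(x|y)
p_x_given_y = p_xy / p_y
h_x_given_y = -(p_xy * np.log2(p_x_given_y)).sum()   # ~1.771 bits

bound = (h_x_given_y - 1) / np.log2(5 - 1)           # ~0.386
print(f"P_e = {p_e:.3f}, H(X|Y) = {h_x_given_y:.3f} bits, bound = {bound:.3f}")
```

Under that assumption the actual error probability is $P_e = 0.4$, just above Fano’s lower bound of about $0.386$, so the bound is nearly tight here.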

