
A note on estimating autocovariance from short-time observations

Yoel Shkolnisky, Fred J. Sigworth and Amit Singer

Abstract

We revisit the classical problem of estimating the autocovariance function and power spectrum of a stochastic process. In the typical setting for this problem, one observes a long sequence of samples of the process, from which the autocovariance needs to be estimated. It is well known how to construct consistent estimators of the autocovariance function in this case, and how to trade bias for variance. In physical settings such as cryo-electron microscopy (EM), we are required to estimate the response of the physical instrument through the observation of many short noise sequences, each with a different mean. In this setting, the known estimators are significantly biased, and unlike in the typical case, this bias does not disappear as the number of observations increases. The bias originates from replacing the unknown true mean by the sample average. We analyze and demonstrate this bias, derive an unbiased estimator, and examine its performance for various noise processes.

EDICS Category: SSP-SSAN (statistical signal analysis)

I. INTRODUCTION

The autocovariance function (ACF) of a discrete stationary ergodic time series {X_t}_{t=−∞}^{∞} with expected value E[X_t] = µ at lag h is defined as

$$C(h) = E[(X_t - \mu)(X_{t+h} - \mu)]. \qquad (1)$$

Yoel Shkolnisky is with the Department of Mathematics, Program in Applied Mathematics, Yale University, 10 Hillhouse Ave., PO Box 208283, New Haven, CT 06520-8283 USA.
Fred J. Sigworth is with the Department of Cellular and Molecular Physiology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520 USA.
Amit Singer is with the Department of Mathematics and PACM, Princeton University, Fine Hall, Washington Road, Princeton, NJ 08544-1000 USA.
Emails: yoel.shkolnisky@yale.edu, fred.sigworth@yale.edu, amits@math.princeton.edu



The ACF can be estimated from a length-N finite sample X_1, X_2, ..., X_N as

$$\hat{C}_0(h) = \frac{1}{N-h} \sum_{i=1}^{N-h} (X_i - \mu)(X_{i+h} - \mu), \qquad h = 0, \ldots, N-1. \qquad (2)$$

The estimator Ĉ_0(h) is an unbiased estimator of the ACF, i.e., E[Ĉ_0(h)] = C(h), but is known to have a larger variance than the biased estimator [1], [2]

$$\hat{C}_1(h) = \frac{1}{N} \sum_{i=1}^{N-h} (X_i - \mu)(X_{i+h} - \mu), \qquad h = 0, \ldots, N-1. \qquad (3)$$
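For concreteness, here is a minimal NumPy sketch of the two classical known-mean estimators (2) and (3); the function name and signature are ours, not the paper's.

```python
import numpy as np

def acf_known_mean(x, mu, unbiased=True):
    """ACF estimate from one realization when the true mean mu is known.

    unbiased=True  -> estimator (2), normalized by N - h (unbiased).
    unbiased=False -> estimator (3), normalized by N (biased, lower variance).
    """
    x = np.asarray(x, dtype=float) - mu
    N = len(x)
    C = np.empty(N)
    for h in range(N):
        s = np.dot(x[:N - h], x[h:])  # sum_i (X_i - mu)(X_{i+h} - mu)
        C[h] = s / (N - h) if unbiased else s / N
    return C
```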

When the true mean µ is unknown to the data analyst, as is often the case in practice, different estimators of the ACF have to be used. Many textbooks suggest replacing the unknown true mean by the sample mean

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i \qquad (4)$$

in (2)–(3), leading to the estimators

$$\tilde{C}_0(h) = \frac{1}{N-h} \sum_{i=1}^{N-h} (X_i - \bar{X})(X_{i+h} - \bar{X}) \qquad (5)$$

and

$$\tilde{C}_1(h) = \frac{1}{N} \sum_{i=1}^{N-h} (X_i - \bar{X})(X_{i+h} - \bar{X}). \qquad (6)$$

Unfortunately, both C̃_0(h) and C̃_1(h) are biased estimators of C(h), as we see below. The negative bias caused by replacing the true mean by the sample mean was noted long ago and was shown to be of order N^{−1} [3]–[6]. For long time series, the 1/N bias is not so harmful, but for short time series it may cause a non-negligible discrepancy. The bias can be further reduced to order N^{−2} by a procedure similar to Richardson extrapolation in numerical analysis, or by bootstrapping [7], [8]. In this method, the time series is chopped into two halves of length N/2, and the ACF is estimated separately for each half based on its own sample mean, yielding two estimates that are averaged and subtracted from twice the ACF estimator for the entire series. Though the resulting bias is asymptotically smaller, its range is only N/2, and the question of whether it is possible to construct an unbiased estimator for the ACF remains.
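A minimal sketch of this splitting procedure, under our reading of [7], [8], using the textbook estimator (5) as the building block; the function names are ours:

```python
import numpy as np

def acf_sample_mean(x):
    """Estimator (5): subtract the sample mean, normalize by N - h."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    N = len(x)
    return np.array([np.dot(x[:N - h], x[h:]) / (N - h) for h in range(N)])

def acf_split_halves(x):
    """Reduce the O(1/N) bias of (5) to O(1/N^2): estimate the ACF on
    each half (with its own sample mean), average the two half-series
    estimates, and subtract the average from twice the full-series
    estimate.  The result only covers lags 0, ..., N/2 - 1."""
    N = len(x)
    half = N // 2
    full = acf_sample_mean(x)[:half]
    halves = 0.5 * (acf_sample_mean(x[:half]) + acf_sample_mean(x[half:2 * half]))
    return 2.0 * full - halves
```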

An unbiased estimator of the ACF is useful in cases where the data analyst is faced with many independent short time series sharing the same ACF (up to a multiplicative constant) but different unknown true means. An unbiased estimator would be useful even at the expense of variance inflation with respect to other biased estimators, since averaging over many different series reduces the variance error term. This is exactly the case one encounters in analyzing the noise characteristics of single-particle cryo-EM images [9], which are small, often with fewer than 100 pixels on a side. Every image is a superposition of signal and noise filtered by the contrast transfer function of the microscope. The signal part differs from image to image, as it corresponds to projections of the molecule at different unknown viewing angles. After acquisition, the images are normalized to have zero mean and unit variance (ℓ2 energy). Note that further discrepancies are introduced by the acquisition process itself, as different images may have different acquisition characteristics (variations in acquisition times, etc.).

In this paper we explain the origin of the bias in (5) and (6) and derive a simple unbiased estimator for the ACF. We investigate the effect of the bias on the power spectrum, and demonstrate the bias and its elimination in numerical examples for different types of colored noise.

II. ORIGIN OF THE BIAS

To evaluate the estimation bias of C̃_0 of (5), we calculate its expected value

$$E[\tilde{C}_0(h)] = \frac{1}{N-h}\, E\!\left[\sum_{i=1}^{N-h} (X_i - \bar{X})(X_{i+h} - \bar{X})\right] = \frac{1}{N-h}\, E\!\left[\sum_{i=1}^{N-h} \big((X_i - \mu) - (\bar{X} - \mu)\big)\big((X_{i+h} - \mu) - (\bar{X} - \mu)\big)\right]. \qquad (7)$$

We note that

$$E[(X_i - \mu)(\bar{X} - \mu)] = E\!\left[(X_i - \mu)\, \frac{1}{N} \sum_{j=1}^{N} (X_j - \mu)\right] = \frac{1}{N} \sum_{j=1}^{N} E[(X_i - \mu)(X_j - \mu)] = \frac{1}{N} \sum_{j=1}^{N} C(i - j). \qquad (8)$$

From (8) it also follows that

$$E[(\bar{X} - \mu)(\bar{X} - \mu)] = \frac{1}{N} \sum_{k=1}^{N} E[(X_k - \mu)(\bar{X} - \mu)] = \frac{1}{N^2} \sum_{j,k=1}^{N} C(k - j). \qquad (9)$$

Expanding the product in (7) and plugging in (8) and (9) gives

$$E[\tilde{C}_0(h)] = C(h) - B(h), \qquad (10)$$

where

$$B(h) = \frac{1}{N-h} \frac{1}{N} \sum_{i=1}^{N-h} \sum_{j=1}^{N} \left[C(i - j) + C(i + h - j)\right] - \frac{1}{N^2} \sum_{j,k=1}^{N} C(j - k) \qquad (11)$$



is the bias term, which is expressed as a linear combination of the true ACF.
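Equation (11) can be evaluated numerically for any given true ACF; a sketch (the explicit loops are kept for clarity, and the names are ours):

```python
import numpy as np

def theoretical_bias(C, N):
    """Evaluate B(h) of (11) for h = 0, ..., N-1.

    C is a callable returning the true ACF at an integer lag,
    e.g. C = lambda h: a ** abs(h)."""
    idx = range(1, N + 1)
    # (1/N^2) * sum_{j,k=1}^{N} C(j - k)
    grand = sum(C(j - k) for j in idx for k in idx) / N**2
    B = np.empty(N)
    for h in range(N):
        cross = sum(C(i - j) + C(i + h - j)
                    for i in range(1, N - h + 1) for j in idx)
        B[h] = cross / (N * (N - h)) - grand
    return B
```

As a sanity check, for white noise C(h) = σ²δ(h) this returns B(h) = σ²/N at every lag, in agreement with (13) below.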

In applications, the autocovariance function is often used to estimate the power spectrum of the noise, a method known as the correlogram [10]. The power spectrum is estimated from the ACF as

$$S(\omega) = \sum_{h=-N+1}^{N-1} C(h)\, w(h)\, e^{-2\pi i \omega h/(2N-1)}, \qquad (12)$$

with some window function w(h), where we used the Hermitian symmetry of the ACF, C(h) = C*(−h). Therefore, the power spectrum estimator obtained by taking the Fourier transform of the estimated ACF (5) is biased, due to the bias in the ACF estimate. Further bias is introduced by the windowing and by the finite lag extent of the autocovariance. The window in (12) is also used to reduce the variance of the estimate [10]. In our setting, windowing for that purpose is unnecessary, as we can always reduce the variance by averaging over K different sequences.
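A sketch of the correlogram (12) via the FFT, assuming the ACF is available at lags 0, ..., N−1; the function name is ours:

```python
import numpy as np

def correlogram(C, w=None):
    """Correlogram (12): power spectrum from ACF samples C(0..N-1),
    extended to negative lags by Hermitian symmetry C(-h) = C(h)*,
    optionally tapered by a window w(h) of the same length."""
    C = np.asarray(C, dtype=complex)
    N = len(C)
    w = np.ones(N) if w is None else np.asarray(w, dtype=float)
    Cw = C * w
    full = np.concatenate([np.conj(Cw[:0:-1]), Cw])  # lags -(N-1), ..., N-1
    # ifftshift puts lag 0 first; the FFT then evaluates (12) on 2N-1 bins
    S = np.fft.fft(np.fft.ifftshift(full))
    return S.real  # imaginary part is rounding error for a Hermitian ACF
```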

For example, for white noise (WN) the true ACF is

$$C^{WN}(h) = \begin{cases} \sigma^2, & h = 0 \\ 0, & h \neq 0, \end{cases}$$

and equation (10) becomes

$$E[\tilde{C}_0^{WN}(h)] = C^{WN}(h) - \frac{\sigma^2}{N}, \qquad (13)$$

rendering the order-N^{−1} negative bias of C̃_0. In this case the bias is only a constant shift of σ²/N, independent of h. For general noise processes, however, the bias depends on h, as we see in Section IV.

III. ELIMINATING THE BIAS

We would like the power spectrum estimator to be unbiased at all frequencies, with the exception of the DC component. As the DC component usually plays no role in applications, we gain a degree of freedom that allows us to estimate the autocovariance up to an additive constant.

The first step in deriving an unbiased estimator for the power spectrum is noting that (1) can be rewritten as

$$C(h) = E[X_t X_{t+h}] - \mu^2, \qquad (14)$$

and in particular

$$C(0) = E[X_t^2] - \mu^2. \qquad (15)$$

Subtracting (15) from (14) gives

$$c(h) \equiv C(h) - C(0) = E[X_t X_{t+h}] - E[X_t^2]. \qquad (16)$$



The representation (16) does not involve the unknown parameter µ and thus can be estimated directly from the data as

$$\hat{c}(h) = \frac{1}{N-h} \sum_{i=1}^{N-h} X_i X_{i+h} - \frac{1}{N} \sum_{i=1}^{N} X_i^2. \qquad (17)$$

The estimator ĉ(h) given by (17) is an unbiased estimator of c(h) = C(h) − C(0). Note that the power spectrum estimator

$$\hat{S}(\omega) = \sum_{h=-N+1}^{N-1} \hat{c}(h)\, w(h)\, e^{-2\pi i \omega h/(2N-1)} \qquad (18)$$

is still a biased estimator of S(ω), due to the finite lag extent. However, we observe that

$$E[\hat{S}(\omega)] = E[S(\omega)] - C(0)\, W(\omega),$$

where W(ω) is the discrete-time Fourier transform of the window function, so if a fast-decaying window is used, only the very low frequencies are affected compared to the case of a known µ.
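A minimal sketch of the unbiased estimator (17); the function name is ours, and its output can be fed directly to a correlogram such as the sketch after (12) to obtain (18):

```python
import numpy as np

def acf_unbiased_upto_shift(x):
    """Estimator (17): unbiased estimate of c(h) = C(h) - C(0).

    No mean is subtracted anywhere, so the unknown mu never enters."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    mean_square = np.dot(x, x) / N  # (1/N) sum_i X_i^2
    raw = np.array([np.dot(x[:N - h], x[h:]) / (N - h) for h in range(N)])
    return raw - mean_square
```

Since ĉ(h) determines the ACF only up to the additive constant C(0), plots can be aligned by shifting the estimate so its lag-0 value matches a reference, as done in the figures below.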

IV. NUMERICAL EXAMPLES

We start by demonstrating the bias of the estimate (5), as well as the performance of the unbiased estimator (17), for the case of standard Gaussian noise in Figs. 1a and 1b. Figure 1a was generated as follows. We generate K = 10^4 sequences of length N = 5 samples of a standard Gaussian random variable. For each of the K sequences, we use (5) to estimate an N-term autocovariance function, and then average all K estimates. Figure 1a shows the true autocovariance function (a delta function in this case), its biased estimate (5), and its unbiased estimate (17). Since the unbiased estimate (17) is determined only up to an additive constant, we shift it such that the autocovariance at lag zero is one. Figure 1b is generated in exactly the same way but with N = 50. We see that the bias agrees with (13): the smaller N is, the bigger the bias.
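This experiment can be reproduced along the following lines, reusing the helper sketches above (again, the names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10_000, 5
biased = np.zeros(N)
unbiased = np.zeros(N)
for _ in range(K):
    x = rng.standard_normal(N)              # one short realization, true mu = 0
    biased += acf_sample_mean(x)            # textbook estimator (5)
    unbiased += acf_unbiased_upto_shift(x)  # estimator (17)
biased /= K
unbiased /= K
unbiased += 1.0 - unbiased[0]               # shift so the lag-0 value is one
# Per (13), biased[h] should sit near -1/N for h > 0; unbiased[h] near 0.
```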

We next take a stochastic process whose autocovariance is C(h) = a^{|h|}, where a is a constant chosen such that the autocovariance at lag h = 50 is 10^{−8}. As before, we generate K noise sequences of length N (using a simple autoregressive model), estimate the autocovariance of each sequence, and average all K estimates. This is shown in Figs. 2a and 2b for K = 10^4 and K = 10^6, respectively, with N = 50. Each figure shows the true autocovariance, the biased estimate, and the unbiased one. It is clear from Figs. 2a and 2b that increasing K reduces the variance of the estimate.
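The "simple autoregressive model" can be realized as an AR(1) recursion; a sketch, with the innovation variance scaled so the stationary ACF is exactly a^{|h|}:

```python
import numpy as np

def ar1_sequence(N, a, rng):
    """Stationary AR(1): X_t = a X_{t-1} + sqrt(1 - a^2) eps_t,
    eps_t iid standard normal, so that C(h) = a^{|h|} with C(0) = 1."""
    x = np.empty(N)
    x[0] = rng.standard_normal()  # start in the stationary distribution
    s = np.sqrt(1.0 - a * a)
    for t in range(1, N):
        x[t] = a * x[t - 1] + s * rng.standard_normal()
    return x

a = 1e-8 ** (1.0 / 50.0)  # chosen so that C(50) = a**50 = 1e-8, as in the text
```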

It is also apparent that, unlike the Gaussian case above, the bias is not constant but varies with h. To see how the bias behaves as a function of h, we plot in Fig. 3 the theoretical bias, computed using (11), together with the measured bias, computed by subtracting a^{|h|} from the autocovariance estimated using (5). Figures 3a and 3b show the behavior of the bias for K = 10^4 and K = 10^6, respectively.

[Fig. 1: Autocovariance estimates from K realizations of N samples of Gaussian noise. (a) N = 5, K = 10^4; (b) N = 50, K = 10^4. Each panel plots the true, biased, and unbiased (shifted) correlation against correlation lag.]

[Fig. 2: Autocovariance estimates from K realizations of N samples of noise with autocovariance a^{|h|}. (a) K = 10^4, N = 50; (b) K = 10^6, N = 50. Each panel plots the true, biased, and unbiased (shifted) correlation against correlation lag.]

We next show how the biased estimate of the autocovariance function affects the estimate of the power spectrum. As in the previous figures, we estimate the autocovariance from K noise sequences of length N. We then compute the power spectrum using (12) with w(h) = 1 for both the biased and the unbiased estimates.


[Fig. 3: Measured and theoretical bias for a process with autocovariance C(h) = a^{|h|}. (a) K = 10^4, N = 50; (b) K = 10^6, N = 50. Each panel plots the measured and theoretical bias against correlation lag.]

The result is shown in Figs. 4a and 4b for K = 10^4 and K = 10^6, respectively. We show only positive frequencies, as the power spectrum is symmetric around the origin and the value at DC is irrelevant. It is important to note that the wiggles in the biased estimate are not due to improper windowing; in fact, no window is needed in this case. The constant a was chosen such that the autocovariance is essentially periodic, so there is no spectral leakage due to discontinuities at the boundaries. Also note that the power spectrum is sampled at the FFT points, at which the interaction of a given frequency sample with adjacent samples is zero; these points are exactly the zeros of the Dirichlet kernel (the Fourier transform of the rectangular window). The relative error in the power spectrum estimate is shown in Figs. 5a and 5b.

We next show a similar experiment for a different autocovariance function. In Figs. 6a–6d we take the power spectrum

$$S(\omega) = \frac{1}{\sqrt{2\pi}}\, e^{-\omega^2/2} \left(1 + 0.1\cos(10\omega)\right), \qquad (19)$$

which is essentially bandlimited, and use the sampled power spectrum to generate noise samples with that spectrum. We again take K noise sequences of length N, from which we estimate the autocovariance function. In Figs. 6a and 6b we see the true, biased, and unbiased autocovariance estimates for N = 25 and N = 100, respectively. Since the autocovariance in this case has no closed form, we compute the "true" autocovariance simply by not removing the sample mean in (5), essentially using the prior knowledge that the process has zero mean. The unbiased estimate in both cases is shifted such that it agrees with the true autocovariance at h = 0. The correlation lags in Figs. 6a and 6b are given in time steps determined by the sampling rate of the power spectrum (19). In Figs. 6c and 6d we see the corresponding estimated power spectra.
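The paper does not spell out its noise generator; one standard approach, sketched here purely as an assumption, is to shape the FFT of white Gaussian noise by the square root of the sampled spectrum (19):

```python
import numpy as np

def noise_from_spectrum(S, rng):
    """One real sequence whose (circular) power spectrum follows the
    nonnegative sampled spectrum S, given in FFT bin order."""
    M = len(S)
    W = np.fft.fft(rng.standard_normal(M))        # FFT of white noise
    x = np.fft.ifft(W * np.sqrt(np.maximum(S, 0.0)))
    return x.real  # S is even in omega, so the result is real up to rounding

# Sample the spectrum (19) on M FFT frequencies in [-pi, pi):
M = 100
omega = 2.0 * np.pi * np.fft.fftfreq(M)
S19 = np.exp(-omega**2 / 2) / np.sqrt(2 * np.pi) * (1 + 0.1 * np.cos(10 * omega))
```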


[Fig. 4: Power spectrum estimate from the autocovariance functions of Fig. 2. (a) K = 10^4, N = 50; (b) K = 10^6, N = 50. Each panel plots the true, biased, and unbiased power spectra against frequency (FFT bin).]

[Fig. 5: Relative error in power spectrum estimation. (a) K = 10^4, N = 50; (b) K = 10^6, N = 50. Each panel plots the relative error against frequency (FFT bin).]



[Fig. 6: Estimated autocovariance and power spectrum for the power spectrum in (19). (a) K = 10^5, N = 25 and (b) K = 10^5, N = 100: true, biased, and unbiased (shifted) correlation against correlation lag. (c) K = 10^5, N = 25 and (d) K = 10^5, N = 100: true, biased, and unbiased power spectra against frequency (FFT bin).]

V. ACKNOWLEDGMENTS

We would like to thank Mark Tygert for interesting discussions.

REFERENCES

[1] A. Papoulis, Signal Analysis, McGraw-Hill, 431 pages, 1977.


[2] A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, 2nd ed., Prentice-Hall Signal Processing Series, 870 pages, 1999.
[3] J. A. Pope and F. H. C. Marriott, "Errors in the estimation of serial correlation", Nature, 172, p. 778, 1953.
[4] J. A. Pope and F. H. C. Marriott, "Bias in the estimation of autocorrelation", Biometrika, 41 (3-4), pp. 390–402, 1954.
[5] M. G. Kendall, "Note on bias in the estimation of autocorrelation", Biometrika, 41 (3-4), pp. 403–404, 1954.
[6] D. B. Percival, "Three curious properties of the sample variance and autocovariance for stationary processes with unknown mean", The American Statistician, 47 (4), pp. 274–276, 1993.
[7] M. H. Quenouille, "Notes on bias in estimation", Biometrika, 43 (3-4), pp. 353–360, 1956.
[8] M. H. Quenouille, "Approximate tests of correlation in time-series", Journal of the Royal Statistical Society, Series B (Methodological), 11 (1), pp. 68–84, 1949.
[9] J. Frank, Three-Dimensional Electron Microscopy of Macromolecular Assemblies: Visualization of Biological Molecules in Their Native State, 2nd ed., Oxford University Press, 432 pages, 2006.
[10] S. L. Marple, Digital Spectral Analysis: With Applications, Prentice-Hall Signal Processing Series, 492 pages, 1987.
