Passive, active, and digital filters (3ed., CRC, 2009)

Nonlinear Filtering Using Statistical Signal Models

…sample outliers than the Gaussian-distribution-optimal linear filter, consider the heavier-tailed Laplacian distribution (b = 1) special case of the GGD,

f(u) = \frac{1}{2\sigma}\exp\!\left(-\frac{|u|}{\sigma}\right). \quad (26.10)

The following theorem shows that the median filter is the optimal operator for Laplacian-distributed samples.

THEOREM 26.4

Consider a set of N independent samples {x(i) : i = 1, 2, ..., N}, each obeying the Laplacian distribution with common location θ and variance σ². The ML estimate of location is given by

\hat{\theta} = \arg\min_{\theta}\left[\sum_{i=1}^{N} |x(i) - \theta|\right] = \operatorname{median}\{x(i) : i = 1, 2, \ldots, N\}. \quad (26.11)

The arguments utilized previously, with the appropriate distribution substituted, prove the result.

The expression in Equation 26.11 shows that, in this case, the optimization criterion reduces to the more robust L1 norm. Moreover, the resulting expression is simply a median filter structure, y = median{x(i) : i = 1, 2, ..., N}, where y denotes the filter output. This operation is clearly nonlinear, as the output is formed by sorting the observation samples and taking the middle, or median, value as the output.*

As in the mean filtering case, the median filtering operation can be generalized to admit weights. The theoretical motivation for this generalization is, as in the previous case, the relaxation of the identically distributed constraint placed on the observation samples in the above theorem.

THEOREM 26.5

Consider a set of N independent samples {x(i) : i = 1, 2, ..., N}, each obeying the Laplacian distribution with common location θ and (possibly) different variances σ²(i). The ML estimate of location is given by

\hat{\theta} = \arg\min_{\theta}\left[\sum_{i=1}^{N} \frac{1}{\sigma^{2}(i)}\, |x(i) - \theta|\right] = \operatorname{median}\{h(i) \diamond x(i) : i = 1, 2, \ldots, N\}, \quad (26.12)

where h(i) = 1/σ²(i) > 0 and ⋄ is the replication operator defined as h(i) ⋄ x(i) = x(i), x(i), ..., x(i) (h(i) times).

The weighting operation in this case is achieved through repetition, rather than the scaling employed in the linear filter. But as in the linear case, sample weights are inversely proportional to the sample variances, indicating again that samples with large variability contribute less to the determination of the output than well-behaved (smaller variance) samples. This magnitude relationship between a sample's weight and its influence holds even for the relaxed case of positive and negative weights. This relaxation on the weights employs sign coupling and enables a broader range of filtering characteristics to be realized by weighted median (WM) filters [31]:

y = \operatorname{median}\{|h(i)| \diamond \operatorname{sgn}(h(i))\, x(i) : i = 1, 2, \ldots, N\}. \quad (26.13)

* For cases in which the number of observation samples is an even number, the median value is set as the average of the two central samples in the ordered set.
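To make Equations 26.12 and 26.13 concrete, the sketch below computes a sign-coupled weighted median in plain NumPy. It is an illustrative implementation, not code from the chapter: the names weighted_median and wm_filter are placeholders, real-valued weights are handled by accumulating |h(i)| over the sorted sign-coupled samples until half of the total weight is reached (which coincides with explicit replication when the weights are positive integers), and an even total weight returns the lower of the two central candidates rather than the average used in the footnote's convention.

```python
import numpy as np

def weighted_median(x, h):
    """Sign-coupled weighted median of one window (cf. Eqs. 26.12-26.13).

    Each weight's sign is coupled onto its sample and its magnitude acts
    as a replication count: the output is the sign-coupled sample at which
    the running sum of |h(i)|, taken in sorted order, first reaches half
    of the total weight.
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    coupled = np.sign(h) * x          # sgn(h(i)) x(i): sign coupling
    w = np.abs(h)                     # |h(i)|: replication magnitudes
    order = np.argsort(coupled)       # sort the sign-coupled samples
    cumw = np.cumsum(w[order])        # accumulated weight in sorted order
    k = np.searchsorted(cumw, 0.5 * w.sum())  # first index past half the weight
    return coupled[order][k]

def wm_filter(signal, h):
    """Slide a window of len(h) samples over the signal and apply the
    weighted median at each position (edges padded by replication)."""
    n = len(h)
    padded = np.pad(np.asarray(signal, dtype=float), n // 2, mode="edge")
    return np.array([weighted_median(padded[i:i + n], h)
                     for i in range(len(signal))])

# Example: with unit weights the WM filter reduces to the running median
# of Eq. 26.11; it removes the impulse while leaving the step edge intact.
x = np.array([0, 0, 0, 10, 0, 0, 5, 5, 5, 5], dtype=float)
print(wm_filter(x, h=[1, 1, 1]))
```

With h = (1, 1, ..., 1) the routine reduces to the ordinary median filter of Equation 26.11; negative entries in h act through sign coupling as in Equation 26.13, while larger magnitudes give their samples more influence on the output, mirroring the inverse-variance weighting of Theorem 26.5.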
