
The parameter α depends on the signal slope, i.e., on whether the last J bits of the K output bits of y(n) are identical:

$$\alpha = \begin{cases} 1 & \text{if the } J \text{ bits are the same} \\ 0 & \text{else} \end{cases} \qquad (27.56)$$

The quantization step ∆(n) is increased or decreased using the α parameter:

$$\Delta(n) = \begin{cases} \min\{\Delta(n-1) + \Delta_{\min},\ \Delta_{\max}\} & \text{for } \alpha = 1 \\ \max\{\beta\,\Delta(n-1),\ \Delta_{\min}\} & \text{for } \alpha = 0 \end{cases} \qquad (27.57)$$

where

β = step decreasing coefficient,

∆_min = minimum step size,

∆_max = maximum step size.

The estimated value x̂(n − 1) is given by

$$\hat{x}(n-1) = h\,d(n-1), \qquad d(n-1) = \begin{cases} \min\{\hat{d}(n-1),\ d_{\max}\} & \text{for } \hat{d}(n-1) \ge 0 \\ \max\{\hat{d}(n-1),\ d_{\min}\} & \text{for } \hat{d}(n-1) < 0 \end{cases} \qquad (27.58)$$

where h is the accumulator decay coefficient.
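A minimal sketch of this adaptation rule in Python; the constants (J, β, the step and accumulator bounds, and the decay h) are illustrative assumptions, not values given in the text:

```python
def alpha(last_bits, J=3):
    # Eq. (27.56): alpha = 1 when the last J output bits are identical
    # (a slope-overload indication), 0 otherwise.
    return 1 if len(set(last_bits[-J:])) == 1 else 0

def next_step(step, a, step_min=0.01, step_max=1.0, beta=0.9):
    # Eq. (27.57): grow the step additively by step_min (capped at step_max)
    # when alpha = 1; otherwise shrink it by beta (floored at step_min).
    if a == 1:
        return min(step + step_min, step_max)
    return max(beta * step, step_min)

def estimate(d_hat, h=0.98, d_min=-10.0, d_max=10.0):
    # Eq. (27.58): clip the accumulated difference to [d_min, d_max] and
    # apply the accumulator decay h to obtain the estimate x^(n - 1).
    d = min(d_hat, d_max) if d_hat >= 0 else max(d_hat, d_min)
    return h * d
```

The additive increase with multiplicative decrease keeps the step tracking steep signal slopes while letting it decay quickly on quiet passages.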

© 2002 by CRC Press LLC

Entropy Coding Using Huffman Method

Entropy coding is a lossless bit-stream reduction technique and can be used on its own or as a supplement to other methods, e.g., after DPCM. This coding approach exploits statistical redundancy, i.e., the fact that signal samples or sequences (blocks) occur with different probabilities. The entropy of a signal is defined as the following average:

$$H(p_1, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i \qquad (27.59)$$

where −log₂ p_i is the information of the ith codeword and p_i is the probability of its occurrence. The most popular method for entropy coding is the Huffman coding method [15], in which the optimal code can be found using an iterative procedure based on the so-called Huffman tree. Huffman coding is used, e.g., in the MPEG-1 audio standard to reduce the amount of output data in layer III. A set of 32 Huffman tables is specially tuned to the statistics of the MDCT coefficients, which are divided into several regions and subregions [38].
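As a concrete illustration of the Huffman tree procedure, the sketch below builds a code with a priority queue; the four-symbol alphabet and its probabilities are invented for the example, not MPEG-1 statistics:

```python
import heapq
from math import log2

def huffman_codes(probs):
    # Iteratively merge the two least probable nodes of the Huffman tree,
    # prefixing the codewords of their symbols with "0" and "1".
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # assumed distribution
codes = huffman_codes(probs)
entropy = -sum(p * log2(p) for p in probs.values())      # Eq. (27.59)
avg_len = sum(p * len(codes[s]) for s, p in probs.items())
```

For a dyadic distribution like this one the average codeword length reaches the entropy exactly (1.75 bits/symbol); in general the Huffman code comes within one bit of the entropy bound.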

27.9 Transparent Audio Coding


The need to reduce the bit rate required for the transmission of high-quality audio signals draws growing attention to lossy audio coding techniques. Lossy audio coding is fully acceptable if it is perceptually transparent, i.e., if the corruption of the audio signal waveform is inaudible. An efficient transparent audio coding algorithm (Fig. 27.17) should:

• remove the redundancy contained in the original audio signal,

• remove the perceptual irrelevancy.
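A toy sketch of the irrelevancy-removal step, with made-up numbers (real coders derive the thresholds from a psychoacoustic model; nothing here comes from a standard): each spectral coefficient is quantized with a step proportional to an assumed masking threshold, so strongly masked components carry fewer bits, and an entropy coder such as the Huffman method above would then remove the remaining statistical redundancy.

```python
def quantize(coeffs, thresholds):
    # Uniform quantization with a per-coefficient step equal to the assumed
    # masking threshold: error below the threshold is taken as inaudible.
    return [round(c / t) * t for c, t in zip(coeffs, thresholds)]

coeffs = [0.92, 0.13, -0.41, 0.05]       # hypothetical spectral values
thresholds = [0.1, 0.5, 0.1, 0.5]        # hypothetical masking thresholds
q = quantize(coeffs, thresholds)         # masked components collapse to 0.0
```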

