1.6 Information Theory

[Figure 1.29: Plots of the quantity $L_q = |y - t|^q$ against $y - t$, shown in four panels for $q = 0.3$, $q = 1$, $q = 2$, and $q = 10$.]

$$h(x) = -\log_2 p(x) \tag{1.92}$$

where the negative sign ensures that information is positive or zero. Note that low-probability events x correspond to high information content. The choice of basis for the logarithm is arbitrary, and for the moment we shall adopt the convention prevalent in information theory of using logarithms to the base of 2. In this case, as we shall see shortly, the units of h(x) are bits ('binary digits').

Now suppose that a sender wishes to transmit the value of a random variable to a receiver. The average amount of information that they transmit in the process is obtained by taking the expectation of (1.92) with respect to the distribution p(x) and is given by

$$H[x] = -\sum_x p(x) \log_2 p(x). \tag{1.93}$$

This important quantity is called the entropy of the random variable x. Note that $\lim_{p \to 0} p \ln p = 0$, and so we shall take $p(x) \ln p(x) = 0$ whenever we encounter a value for x such that $p(x) = 0$.

So far we have given a rather heuristic motivation for the definition of information ...
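As a concrete illustration (not from the text), both (1.92) and (1.93) can be computed directly for a discrete distribution. The following is a minimal Python sketch, assuming the distribution is supplied as a sequence of probabilities; the function names are hypothetical. The convention $p \ln p = 0$ at $p = 0$ is handled by skipping zero-probability terms.

```python
import math

def information_content(p: float) -> float:
    """Information h(x) = -log2 p(x) in bits, as in (1.92)."""
    return -math.log2(p)

def entropy(probs) -> float:
    """Entropy H[x] = -sum_x p(x) log2 p(x) in bits, as in (1.93).

    Terms with p(x) = 0 contribute nothing, following the convention
    that p ln p -> 0 as p -> 0.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A uniform distribution over 8 states requires log2(8) = 3 bits on
# average, while a sharply peaked distribution carries less information.
print(entropy([1/8] * 8))                       # 3.0
print(entropy([0.5, 0.25, 0.125, 0.125, 0.0]))  # 1.75
```

The second example shows the connection to coding mentioned in the text: a state with probability 1/2 contributes one bit, a state with probability 1/4 contributes two, and so on, so the entropy is the average number of bits needed to transmit the variable's value.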
