Nonextensive Statistical Mechanics

2.2 Kullback–Leibler Relative Entropy

$$D_\mu(p, p^{(0)}) \equiv \left[ \sum_{i=1}^{W} \left| p_i - p_i^{(0)} \right|^{\mu} \right]^{1/\mu} \quad (\mu > 0), \qquad (2.35)$$

which exactly recovers Eq. (2.17).

For some purposes, this definition of distance is quite convenient. For others, the Kullback–Leibler entropy [87] has been introduced (see, for instance, [88, 92] and references therein). It is occasionally called cross entropy, relative entropy, or mutual information, and it is defined as follows:

$$I_1(p, p^{(0)}) \equiv \int dx \, p(x) \ln \left[ \frac{p(x)}{p^{(0)}(x)} \right] = - \int dx \, p(x) \ln \left[ \frac{p^{(0)}(x)}{p(x)} \right]. \qquad (2.36)$$

It can be proved, by using $\ln r \ge 1 - 1/r$ (with $r \equiv p(x)/p^{(0)}(x) > 0$), that $I_1(p, p^{(0)}) \ge 0$: the inequality gives $p(x) \ln[p(x)/p^{(0)}(x)] \ge p(x) - p^{(0)}(x)$, and integrating over $x$ yields $I_1 \ge 1 - 1 = 0$, the equality being valid if and only if $p(x) = p^{(0)}(x)$ almost everywhere. It is clear that in general $I_1(p, p^{(0)}) \ne I_1(p^{(0)}, p)$. This inconvenience is sometimes overcome by using the symmetrized quantity $[I_1(p, p^{(0)}) + I_1(p^{(0)}, p)]/2$.

$I_1(p, p^{(0)})$ (like the distance (2.34)) has the property of being invariant under a transformation of variables. Indeed, if we make $x = f(y)$, measure preservation implies $p(x) \, dx = \tilde{p}(y) \, dy$. Since $p(x)/p^{(0)}(x) = \tilde{p}(y)/\tilde{p}^{(0)}(y)$, we have $I_1(p, p^{(0)}) = I_1(\tilde{p}, \tilde{p}^{(0)})$, which proves the above-mentioned invariance. The BG entropy in its continuous (not in its discrete) form $S_{BG} = -\int dx \, p(x) \ln p(x)$ lacks this important property. Because of this fact, the BG entropy is advantageously replaced, in some calculations, by the Kullback–Leibler one. Depending on the particular problem, the reference distribution $p^{(0)}(x)$ is frequently taken to be a standard distribution such as the uniform, Gaussian, Lorentzian, Poisson, or BG ones. When $p^{(0)}(x)$ is chosen to be the uniform distribution on a compact support of Lebesgue measure $W$, we have the relation

$$I_1(p, 1/W) = \ln W - S_{BG}(p). \qquad (2.37)$$

Because of relations of this kind, the minimization of the Kullback–Leibler entropy is sometimes used instead of the maximization of the Boltzmann–Gibbs–Shannon entropy.

Although convenient for a variety of purposes, $I_1(p, p^{(0)})$ has a disadvantage: $p(x)$ and $p^{(0)}(x)$ must vanish simultaneously wherever either of them vanishes (a property usually referred to as absolute continuity). Indeed, it is evident that the quantity $I_1(p, p^{(0)})$ otherwise becomes ill-defined. To overcome this difficulty, a different distance has been defined along the lines of the Kullback–Leibler entropy, namely the so-called Jensen–Shannon divergence. Although interesting in many respects, its study would take us too far from our present line; details can be seen in [93, 94] and references therein.
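As a quick numerical illustration of Eqs. (2.35) and (2.36) in their discrete form, the following Python sketch computes the distance $D_\mu$ and the discrete analogue of $I_1$; the function names and the sample distributions are illustrative choices, not from the text.

```python
import numpy as np

# Minimal sketch of Eqs. (2.35)-(2.36) for discrete distributions.

def d_mu(p, p0, mu):
    """Distance of Eq. (2.35): (sum_i |p_i - p0_i|^mu)^(1/mu), mu > 0."""
    return np.sum(np.abs(p - p0) ** mu) ** (1.0 / mu)

def i1(p, p0):
    """Discrete analogue of Eq. (2.36); requires p0_i > 0 wherever p_i > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / p0[mask]))

p  = np.array([0.5, 0.3, 0.2])
p0 = np.array([0.2, 0.2, 0.6])

print(d_mu(p, p0, mu=1.0))            # l_1 distance between p and p0
print(i1(p, p0), i1(p0, p))           # both >= 0, and generally unequal
print(0.5 * (i1(p, p0) + i1(p0, p)))  # symmetrized quantity from the text
print(i1(p, p))                       # 0: equality iff p = p0
```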
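The invariance argument can also be checked numerically. In the sketch below (illustrative densities and transformation, chosen only for the demonstration), the transformation $x = f(y) = ay$ with measure preservation $p(x)\,dx = \tilde{p}(y)\,dy$ gives $\tilde{p}(y) = a\,p(ay)$ for $a > 0$; $I_1$ comes out unchanged, while the continuous $S_{BG}$ shifts by $-\ln a$.

```python
import numpy as np

# Numerical sketch: I_1 is invariant under x = f(y) = a*y, S_BG is not.

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

a = 2.0
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

p,  p0  = gauss(x, 0.0, 1.0), gauss(x, 1.0, 2.0)                  # originals
pt, pt0 = a * gauss(a * x, 0.0, 1.0), a * gauss(a * x, 1.0, 2.0)  # transformed

def i1(f, g):
    mask = f > 1e-300                       # avoid log(0) in empty tails
    return np.sum(f[mask] * np.log(f[mask] / g[mask])) * dx

def s_bg(f):
    mask = f > 1e-300
    return -np.sum(f[mask] * np.log(f[mask])) * dx

print(i1(p, p0), i1(pt, pt0))   # equal: I_1 is invariant
print(s_bg(p), s_bg(pt))        # differ by ln a: continuous S_BG is not
```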
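Relation (2.37) is likewise easy to verify in its discrete form, where the uniform reference is $1/W$ on $W$ states; the names and the sample distribution below are again illustrative.

```python
import numpy as np

# Sketch checking Eq. (2.37): I_1(p, 1/W) = ln W - S_BG(p).

def s_bg(p):
    """Discrete Boltzmann-Gibbs-Shannon entropy S_BG = -sum_i p_i ln p_i."""
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask]))

def i1(p, p0):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / p0[mask]))

W = 4
p = np.array([0.4, 0.3, 0.2, 0.1])
uniform = np.full(W, 1.0 / W)

print(i1(p, uniform))        # left-hand side of Eq. (2.37)
print(np.log(W) - s_bg(p))   # right-hand side: identical up to rounding
```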
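Finally, a brief sketch of how the Jensen–Shannon divergence sidesteps the absolute-continuity requirement, assuming the standard construction $\mathrm{JSD}(p, p^{(0)}) = [I_1(p, m) + I_1(p^{(0)}, m)]/2$ with $m = (p + p^{(0)})/2$ (an assumption on our part; see [93, 94] for the definition the text actually cites). Since $m$ is positive wherever $p$ or $p^{(0)}$ is, the divergence stays finite even when the two distributions do not share support.

```python
import numpy as np

# Sketch of the Jensen-Shannon divergence under the standard definition.

def i1(p, p0):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / p0[mask]))

def jsd(p, p0):
    m = 0.5 * (p + p0)               # m > 0 wherever p or p0 is positive
    return 0.5 * (i1(p, m) + i1(p0, m))

p  = np.array([0.5, 0.5, 0.0])   # vanishes where p0 does not,
p0 = np.array([0.0, 0.5, 0.5])   # so I_1(p, p0) itself would diverge

print(jsd(p, p0))                # finite, bounded above by ln 2
```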
