τ → 0 we obtain finally the largest rate of increase of the BG entropy. This is basically the Kolmogorov–Sinai entropy rate.

It is not necessary to insist on how deeply inconvenient this definition can be for any computational implementation! Fortunately, a different type of entropy production can be defined [85], whose computational implementation is usually very simple. It is defined as follows. First partition the phase space into $W$ little cells (normally equal in size) and denote them by $i = 1, 2, \ldots, W$. Then randomly place $M$ initial conditions in one of those $W$ cells (if $d_+ \ge 1$, the result will normally not depend on this choice). Then follow, as time evolves, the number of points $M_i(t)$ in each cell ($\sum_{i=1}^{W} M_i(t) = M$). Define the probability set $p_i(t) \equiv M_i(t)/M$ ($\forall i$), and finally calculate $S_{BG}(t)$ through Eq. (1.1) (a numerical sketch of this procedure is given below). The entropy production per unit time is defined as

$$K_1 \equiv \lim_{t\to\infty} \lim_{W\to\infty} \lim_{M\to\infty} \frac{S_{BG}(t)}{t} \,. \tag{2.32}$$

The Pesin identity [86], or more precisely the Pesin-like identity that we shall use here, states, for large classes of dynamical systems [85],

$$K_1 = \sum_{r=1}^{d_+} \lambda_1^{(r)} \,. \tag{2.33}$$

As will become gradually clear throughout the book, this relationship (and its $q$-generalization) will play an important role in determining the particular entropic form that is adequate for a given nonlinear dynamical system.

2.2 Kullback–Leibler Relative Entropy

In many problems the question arises of how different two probability distributions $p$ and $p^{(0)}$ are; for reasons that will soon become clear, $p^{(0)}$ will be referred to as the reference. It is therefore interesting to define some sort of "distance" between them. One possibility is of course the distance introduced in Eq. (2.17). In other words, for, say, continuous distributions, we can use

$$D_\mu(p, p^{(0)}) \equiv \left[ \int dx \, \bigl|p(x) - p^{(0)}(x)\bigr|^\mu \right]^{1/\mu} \quad (\mu > 0) \,. \tag{2.34}$$

In general we have that $D_\mu(p, p^{(0)}) = D_\mu(p^{(0)}, p)$, and that $D_\mu(p, p^{(0)}) = 0$ if and only if $p(x) = p^{(0)}(x)$ almost everywhere. We recall, however, that the triangle inequality is satisfied only for $\mu \ge 1$; therefore, only then does the distance constitute a metric. If $p(x) = \sum_{i=1}^{W} p_i \, \delta(x - x_i)$ and $p^{(0)}(x) = \sum_{i=1}^{W} p_i^{(0)} \, \delta(x - x_i)$ (see Eq. (2.4)), Eq. (2.34) leads to
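For concreteness, here is a minimal numerical sketch, not taken from the book, of the cell-counting procedure behind Eq. (2.32). It uses the fully chaotic logistic map $x_{t+1} = 4x_t(1-x_t)$ as an assumed illustrative system; its single positive Lyapunov exponent is $\ln 2$, so the Pesin-like identity (2.33) predicts $K_1 \approx \ln 2$. The values of $W$, $M$, and the number of iterations are arbitrary choices for the sketch.

```python
import numpy as np

# Sketch of the cell-counting entropy-production procedure of Eq. (2.32),
# applied to the fully chaotic logistic map x_{t+1} = 4 x (1 - x) on [0, 1].
# Its positive Lyapunov exponent is ln 2, so Eq. (2.33) predicts K_1 ~ 0.693.
# W, M, and T are arbitrary illustrative choices (the exact limits of
# Eq. (2.32) require M -> inf, then W -> inf, then t -> inf).

W = 1000            # number of equal cells partitioning [0, 1]
M = 100_000         # number of initial conditions, all placed in one cell
T = 12              # number of map iterations to follow

rng = np.random.default_rng(0)
cell = rng.integers(W)                      # pick one cell at random
x = (cell + rng.random(M)) / W              # M points spread inside that cell

S_BG = []
for t in range(T + 1):
    # occupation numbers M_i(t) and probabilities p_i(t) = M_i(t) / M
    idx = np.clip((x * W).astype(int), 0, W - 1)
    p_i = np.bincount(idx, minlength=W) / M
    nz = p_i[p_i > 0]
    S_BG.append(-np.sum(nz * np.log(nz)))   # S_BG(t) with k = 1
    x = 4.0 * x * (1.0 - x)                 # one iteration of the map

# K_1 is the slope of the intermediate-time linear regime of S_BG(t),
# before saturation near ln W; a crude finite-difference estimate:
K1_estimate = (S_BG[6] - S_BG[2]) / 4
print(f"K_1 estimate: {K1_estimate:.3f}   (ln 2 = {np.log(2):.3f})")
```

In such a finite simulation the estimate fluctuates with the choice of initial cell and of $(W, M)$; averaging over several randomly chosen initial cells, as the text suggests is usually unnecessary for $d_+ \ge 1$, reduces this spread.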

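Similarly, a minimal sketch of the discrete counterpart of the distance of Eq. (2.34), $D_\mu(p, p^{(0)}) = [\sum_i |p_i - p_i^{(0)}|^\mu]^{1/\mu}$, illustrating the remark that the triangle inequality, and hence the metric property, holds only for $\mu \ge 1$. The function name and the test distributions below are ad hoc choices for the illustration.

```python
import numpy as np

# Discrete counterpart of the distance D_mu of Eq. (2.34):
# D_mu(p, p0) = [ sum_i |p_i - p0_i|^mu ]^(1/mu),  mu > 0.
def d_mu(p, p0, mu):
    """Distance between two discrete probability sets p_i and p0_i."""
    if mu <= 0:
        raise ValueError("mu must be positive")
    return np.sum(np.abs(np.asarray(p) - np.asarray(p0)) ** mu) ** (1.0 / mu)

# The triangle inequality holds only for mu >= 1, so only then is D_mu a metric.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.5, 0.5, 0.0])
r = np.array([0.0, 0.5, 0.5])
for mu in (0.5, 1.0, 2.0):
    lhs = d_mu(p, r, mu)
    rhs = d_mu(p, q, mu) + d_mu(q, r, mu)
    ok = "holds" if lhs <= rhs + 1e-12 else "FAILS"
    print(f"mu={mu}: D(p,r)={lhs:.3f}  D(p,q)+D(q,r)={rhs:.3f}  triangle inequality {ok}")
```

Running this shows the triangle inequality failing for $\mu = 0.5$ and holding for $\mu = 1$ and $\mu = 2$, consistent with the statement above.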