Nonextensive Statistical Mechanics

3.2 Nonadditive Entropy S_q

This form turns out to be in fact directly related to a generalized metric proposed in 1952 by Hardy, Littlewood and Polya [109], whose q = 2 particular case corresponds to the Pythagorean metric.

A different path for arriving at the entropy (3.18) is the following one. This was in fact the original path, inspired by multifractals, that led to the postulate adopted in [39]. The entropic index q introduces a bias in the probabilities. Indeed, given the fact that generically 0 < p_i < 1, we have that p_i^q > p_i if q < 1 and p_i^q < p_i if q > 1. Therefore, q < 1 (relatively) enhances the rare events, those whose probabilities are close to zero, whereas q > 1 (relatively) enhances the frequent events, those whose probability is close to unity. This property can be directly checked if we compare p_i with p_i^q / Σ_{j=1}^W p_j^q.

It therefore appears appealing to introduce an entropic form based on p_i^q. We also want the form to be invariant under permutations of the states. So the simplest assumption is to consider S_q = f(Σ_{i=1}^W p_i^q), where f is some continuous function to be found. The simplest choice is the linear one, i.e., S_q = a + b Σ_{i=1}^W p_i^q. Since any entropy should be a measure of disorder or ignorance, we want certainty to correspond to zero entropy. This immediately imposes a + b = 0, hence S_q = a(1 − Σ_{i=1}^W p_i^q). But, since we are seeking a generalization (and not an alternative), for q = 1 we want to recover S_BG. Therefore, in the q → 1 limit, a must be asymptotically proportional to 1/(q − 1) (we recall the equivalence indicated in the previous paragraph). The simplest way for this to occur is to take a = k/(q − 1), with k > 0, which immediately leads to Eq. (3.18).

We shall next address the properties of S_q. But before doing that, let us clarify a point of generic relevance. If q > 0, then expression (3.18) is well defined whether or not one or more states have zero probability. Not so if q < 0.
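The two claims above can be checked numerically: that S_q = k(1 − Σ_i p_i^q)/(q − 1) recovers S_BG = −k Σ_i p_i ln p_i as q → 1, and that the normalized weights p_i^q / Σ_j p_j^q relatively enhance rare events for q < 1 and frequent ones for q > 1. The following sketch (the helper names `tsallis_entropy` and `biased` are ours, not from the book) illustrates both:

```python
import math

def tsallis_entropy(p, q, k=1.0):
    """S_q = k (1 - sum_i p_i^q) / (q - 1); reduces to S_BG as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -k * sum(pi * math.log(pi) for pi in p if pi > 0)
    return k * (1.0 - sum(pi**q for pi in p if pi > 0)) / (q - 1.0)

def biased(p, q):
    """The q-biased weights p_i^q / sum_j p_j^q discussed in the text."""
    w = [pi**q for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.7, 0.2, 0.1]

# (i) the q -> 1 limit recovers the Boltzmann-Gibbs entropy
s_bg = -sum(pi * math.log(pi) for pi in p)
print(abs(tsallis_entropy(p, 1.000001) - s_bg) < 1e-4)  # True

# (ii) q < 1 relatively enhances the rare event (0.1),
#      q > 1 relatively enhances the frequent event (0.7)
print(biased(p, 0.5))  # weight of the rare event rises above 0.1
print(biased(p, 2.0))  # weight of the frequent event rises above 0.7
```

With q = 0.5 the rarest probability 0.1 is promoted to roughly 0.20 of the total weight, while with q = 2 the dominant probability 0.7 is promoted to roughly 0.91, matching the bias described in the text.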
In this case, it must be understood that the sum indicated in Eq. (3.18) runs only over states with positive probability. For simplicity, we shall not explicitly indicate this fact throughout the book, but it is always to be taken into account.

3.2.2 Properties

3.2.2.1 Non-negativity

If we have certainty about the state of the system, then one of the probabilities equals unity, and all the others vanish. Consequently, the entropy S_q vanishes for all q.

If we do not have certainty, at least two of the probabilities are smaller than unity. Therefore, for those, 1/p_i > 1, hence ln_q(1/p_i) > 0, ∀i (see also Fig. 3.5). Consequently, using Eq. (3.17), it immediately follows that S_q > 0 for all q.

3.2.2.2 Extremal at Equal Probabilities

For the same reason indicated in the BG case (invariance of the entropy with respect to any permutation of states), at equiprobability S_q must be extremal. It turns out to be a maximum for q > 0 and a minimum for q < 0. The proof will be completed
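Before the formal proof, the extremality claim can be probed by brute force: a minimal sketch (our own check, not the book's proof) that samples positive-probability distributions on the W = 3 simplex and confirms that equiprobability gives a maximum of S_q for q > 0 and a minimum for q < 0:

```python
def tsallis_entropy(p, q, k=1.0):
    # S_q = k (1 - sum_i p_i^q) / (q - 1); the sum is restricted to
    # p_i > 0, which is essential when q < 0 (see the text above)
    return k * (1.0 - sum(pi**q for pi in p if pi > 0)) / (q - 1.0)

# coarse grid over the 2-simplex, strictly positive probabilities only
grid = [i / 20 for i in range(1, 20)]
dists = [(a, b, 1 - a - b) for a in grid for b in grid if 1 - a - b > 1e-9]

# q > 0: equiprobability is a maximum
s_equal = tsallis_entropy([1/3, 1/3, 1/3], q=2.0)
assert all(tsallis_entropy(d, 2.0) <= s_equal + 1e-12 for d in dists)

# q < 0: equiprobability is a minimum
s_equal_neg = tsallis_entropy([1/3, 1/3, 1/3], q=-0.5)
assert all(tsallis_entropy(d, -0.5) >= s_equal_neg - 1e-12 for d in dists)

print("extremality checks pass for q = 2 and q = -0.5")
```

For q = 2 this is transparent analytically: S_2 = 1 − Σ_i p_i^2, and Σ_i p_i^2 is smallest at p_i = 1/W, so the entropy peaks there; for q = −0.5 the sum Σ_i p_i^q is instead smallest at equiprobability, flipping the extremum into a minimum.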
