Polymers in Confined Geometry
barely hit at all during the numerical evaluation procedure. Therefore, so-called
variance reduction techniques are needed.
The most prominent idea is that of importance sampling. By replacing the
initial probability distribution p(x) for the random numbers by an importance
density q(x)—which should be chosen such that the poorly sampled regions are
hit more frequently, otherwise the problem can be made worse—one can reduce
the required number of samples significantly. When using the importance
density, an additional step in the evaluation has to be performed: since the
probability distribution of the random samples has been changed to q(x), the
score function has to be adjusted to A_q(x) = A(x)f(x)/q(x). It can be shown
that this yields the same value for the desired average of the integral, while the
variance can be reduced by orders of magnitude.
The simple Monte-Carlo integration scheme already reduces the error in high-dimensional
integrals, but without the importance-sampling method a large number of problems
would still not be solvable in a reasonable amount of computing time.
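The reweighting step can be illustrated with a short numerical sketch. The integrand below (a narrow Gaussian spike, a hypothetical example not taken from the text) is integrated once with plain uniform sampling, which almost never hits the spike, and once with an importance density q centred on it; samples drawn from q are reweighted by the ratio of the integrand to q, as described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Integrand: a narrow Gaussian spike at x = 5 on the interval [0, 10].
# Exact value of the integral over the real line: sigma * sqrt(2*pi).
sigma = 0.01
f = lambda x: np.exp(-(x - 5.0) ** 2 / (2.0 * sigma ** 2))
exact = sigma * np.sqrt(2.0 * np.pi)

# Plain Monte Carlo: uniform samples on [0, 10] almost never land on
# the spike, so the estimate fluctuates wildly from run to run.
x = rng.uniform(0.0, 10.0, n)
plain = 10.0 * f(x).mean()

# Importance sampling: draw from a Gaussian q centred on the spike
# (chosen slightly wider than the integrand) and reweight each sample
# by 1/q, i.e. score f(y)/q(y) instead of f(y).
s_q = 5.0 * sigma
y = rng.normal(5.0, s_q, n)
q = np.exp(-(y - 5.0) ** 2 / (2.0 * s_q ** 2)) / (s_q * np.sqrt(2.0 * np.pi))
importance = (f(y) / q).mean()

print(f"exact:      {exact:.5f}")
print(f"plain MC:   {plain:.5f}")
print(f"importance: {importance:.5f}")
```

Both estimators are unbiased, but the importance-sampled one concentrates its samples where the integrand is large, so its variance is dramatically smaller.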
4.3 Sampling the canonical ensemble
In our problem of sampling averages for polymers we are working in a canonical
ensemble. Hence we have to deal with the probability distribution in the canonical
ensemble for a given configuration/micro state C,

    P(C) = (1/Z(β)) exp(−βH(C)),    (4.2)

where H(C) is the energy of the micro state and β⁻¹ = k_B T, with T the temperature.
The normalization constant is the canonical partition sum

    Z(β) = Σ_C exp(−βH(C)),    (4.3)

the sum over all micro states weighted by the Boltzmann weight. In the case of
continuous states the sum becomes an integral.
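A brute-force evaluation of Eq. (4.3) is only feasible for tiny systems, which a short sketch makes concrete. The model below is a hypothetical toy system chosen only for illustration: N independent particles, each with two energy levels 0 and ε, so the exact result factorizes and can be checked against the explicit sum over all 2^N micro states.

```python
import itertools
import math

def partition_sum(beta, N, eps):
    """Brute-force Z(beta): sum exp(-beta * H(C)) over all 2**N micro states."""
    Z = 0.0
    for config in itertools.product((0, 1), repeat=N):  # every micro state C
        H = eps * sum(config)        # energy of this configuration
        Z += math.exp(-beta * H)     # Boltzmann weight exp(-beta * H(C))
    return Z

beta, N, eps = 1.0, 10, 0.5
Z_brute = partition_sum(beta, N, eps)
# For independent two-level particles, Z factorizes into a product:
Z_exact = (1.0 + math.exp(-beta * eps)) ** N

print(Z_brute, Z_exact)
```

Already at N = 30 the loop would visit about 10⁹ configurations; at N = 100 the direct sum is hopeless, which is exactly the point made below.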
Typically one deals with a large number of particles in a system, but already
a system consisting of N = 100 particles with two possible states per particle
involves 2¹⁰⁰ configurations, which is not directly summable
within the lifespan of an average physicist on a typical computer system (now imagine
a system of N ≈ 10²³ particles. . . ). Even simulating this system by the
Monte-Carlo approach of choosing n micro states with equal probability (out of a
flat/uniform probability distribution) is not effective, since most micro states have
very low probability². Therefore we have the problem described above, where the
² E.g. the energy landscape of a high-dimensional problem is mainly flat, but has some
narrow, deep minima. In this case it is very unlikely that the minima are hit. But since they
govern the behavior of the system, they should be sampled effectively.