Quantum Mechanics
Lecture Notes for Chemistry 6312: Quantum Chemistry
Eric R. Bittner
University of Houston
Department of Chemistry
Lecture Notes on Quantum Chemistry
Lecture notes to accompany Chemistry 6321
Copyright ©1997-2003, University of Houston and Eric R. Bittner
All Rights Reserved.
August 12, 2003
Contents<br />
0 Introduction 8<br />
0.1 Essentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10<br />
0.2 Problem Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11<br />
0.3 2003 Course Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13<br />
I Lecture Notes 14<br />
1 Survey <strong>of</strong> Classical <strong>Mechanics</strong> 15<br />
1.1 Newton’s equations <strong>of</strong> motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16<br />
1.1.1 Elementary solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16<br />
1.1.2 Phase plane analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17<br />
1.2 Lagrangian <strong>Mechanics</strong> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18<br />
1.2.1 The Principle <strong>of</strong> Least Action . . . . . . . . . . . . . . . . . . . . . . . . . 18<br />
1.2.2 Example: 3 dimensional harmonic oscillator in spherical coordinates . . . . 20<br />
1.3 Conservation Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23<br />
1.4 Hamiltonian Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24<br />
1.4.1 Interaction between a charged particle and an electromagnetic field. . . . . 24<br />
1.4.2 Time dependence <strong>of</strong> a dynamical variable . . . . . . . . . . . . . . . . . . . 26<br />
1.4.3 Virial Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26<br />
2 Waves and Wavefunctions 29<br />
2.1 Position and Momentum Representation <strong>of</strong> |ψ〉 . . . . . . . . . . . . . . . . . . . . 29<br />
2.2 The Schrödinger Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31<br />
2.2.1 Gaussian Wavefunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32<br />
2.2.2 Evolution <strong>of</strong> ψ(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34<br />
2.3 Particle in a Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36<br />
2.3.1 Infinite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36<br />
2.3.2 Particle in a finite Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38<br />
2.3.3 Scattering states and resonances. . . . . . . . . . . . . . . . . . . . . . . . 40<br />
2.3.4 Application: <strong>Quantum</strong> Dots . . . . . . . . . . . . . . . . . . . . . . . . . . 43<br />
2.4 Tunneling and transmission in a 1D chain . . . . . . . . . . . . . . . . . . . . . . 49<br />
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49<br />
2.6 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50<br />
3 Semi-Classical <strong>Quantum</strong> <strong>Mechanics</strong> 55<br />
3.1 Bohr-Sommerfeld quantization . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2 The WKB Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58<br />
3.2.1 Asymptotic expansion for eigenvalue spectrum . . . . . . . . . . . . . . . . 58<br />
3.2.2 WKB Wavefunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60<br />
3.2.3 Semi-classical Tunneling and Barrier Penetration . . . . . . . . . . . . . . 62<br />
3.3 Connection Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65<br />
3.4 Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70<br />
3.4.1 Classical Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70<br />
3.4.2 Scattering at small deflection angles . . . . . . . . . . . . . . . . . . . . . . 73<br />
3.4.3 <strong>Quantum</strong> treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74<br />
3.4.4 Semiclassical evaluation <strong>of</strong> phase shifts . . . . . . . . . . . . . . . . . . . . 75<br />
3.4.5 Resonance Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78<br />
3.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78<br />
4 Postulates <strong>of</strong> <strong>Quantum</strong> <strong>Mechanics</strong> 80<br />
4.0.1 The description <strong>of</strong> a physical state: . . . . . . . . . . . . . . . . . . . . . . 85<br />
4.0.2 Description <strong>of</strong> Physical Quantities: . . . . . . . . . . . . . . . . . . . . . . 85<br />
4.0.3 <strong>Quantum</strong> Measurement: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86<br />
4.0.4 The Principle <strong>of</strong> Spectral Decomposition: . . . . . . . . . . . . . . . . . . . 86<br />
4.0.5 The Superposition Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 87<br />
4.0.6 Reduction <strong>of</strong> the wavepacket: . . . . . . . . . . . . . . . . . . . . . . . . . 89<br />
4.0.7 The temporal evolution <strong>of</strong> the system: . . . . . . . . . . . . . . . . . . . . 90<br />
4.0.8 Dirac <strong>Quantum</strong> Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . 90<br />
4.1 Dirac Notation and Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 94<br />
4.1.1 Transformations and Representations . . . . . . . . . . . . . . . . . . . . . 94<br />
4.1.2 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96<br />
4.1.3 Products <strong>of</strong> Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97<br />
4.1.4 Functions Involving Operators . . . . . . . . . . . . . . . . . . . . . . . . . 98<br />
4.2 Constants <strong>of</strong> the Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100<br />
4.3 Bohr Frequency and Selection Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 101<br />
4.4 Example using the particle in a box states . . . . . . . . . . . . . . . . . . . . . . 102<br />
4.5 Time Evolution <strong>of</strong> Wave and Observable . . . . . . . . . . . . . . . . . . . . . . . 103<br />
4.6 “Unstable States” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104<br />
4.7 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106<br />
5 Bound States <strong>of</strong> The Schrödinger Equation 110<br />
5.1 Introduction to Bound States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110<br />
5.2 The Variational Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112<br />
5.2.1 Variational Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112<br />
5.2.2 Constraints and Lagrange Multipliers . . . . . . . . . . . . . . . . . . . . . 114<br />
5.2.3 Variational method applied to Schrödinger equation . . . . . . . . . . . . . 117<br />
5.2.4 Variational theorems: Rayleigh-Ritz Technique . . . . . . . . . . . . . . . . 118<br />
5.2.5 Variational solution <strong>of</strong> harmonic oscillator ground State . . . . . . . . . . . 119<br />
5.3 The Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121<br />
5.3.1 Harmonic Oscillators and Nuclear Vibrations . . . . . . . . . . . . . . . . . 124<br />
5.3.2 Classical interpretation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131<br />
5.3.3 Molecular Vibrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134<br />
5.4 Numerical Solution <strong>of</strong> the Schrödinger Equation . . . . . . . . . . . . . . . . . . . 136<br />
5.4.1 Numerov Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136<br />
5.4.2 Numerical Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 139<br />
5.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144<br />
6 <strong>Quantum</strong> <strong>Mechanics</strong> in 3D 152<br />
6.1 <strong>Quantum</strong> Theory <strong>of</strong> Rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153<br />
6.2 Eigenvalues <strong>of</strong> the Angular Momentum Operator . . . . . . . . . . . . . . . . . . 157<br />
6.3 Eigenstates <strong>of</strong> L 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158<br />
6.4 Eigenfunctions <strong>of</strong> L 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159<br />
6.5 Addition theorem and matrix elements . . . . . . . . . . . . . . . . . . . . . . . . 162<br />
6.6 Legendre Polynomials and Associated Legendre Polynomials . . . . . . . . . . . . 164<br />
6.7 <strong>Quantum</strong> rotations in a semi-classical context . . . . . . . . . . . . . . . . . . . . 165<br />
6.8 Motion in a central potential: The Hydrogen Atom . . . . . . . . . . . . . . . . . 170<br />
6.8.1 Radial Hydrogenic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 173<br />
6.9 Spin 1/2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173<br />
6.9.1 Theoretical Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174<br />
6.9.2 Other Spin Observables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175<br />
6.9.3 Evolution <strong>of</strong> a state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175<br />
6.9.4 Larmor Precession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176<br />
6.10 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177<br />
7 Perturbation theory 180<br />
7.1 Perturbation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180<br />
7.2 Two level systems subject to a perturbation . . . . . . . . . . . . . . . . . . . . . 182<br />
7.2.1 Expansion <strong>of</strong> Energies in terms <strong>of</strong> the coupling . . . . . . . . . . . . . . . 183<br />
7.2.2 Dipole molecule in homogenous electric field . . . . . . . . . . . . . . . . . 184<br />
7.3 Dyson Expansion <strong>of</strong> the Schrödinger Equation . . . . . . . . . . . . . . . . . . . . 188<br />
7.4 Van der Waals forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190<br />
7.4.1 Origin <strong>of</strong> long-ranged attractions between atoms and molecules . . . . . . . 190<br />
7.4.2 Attraction between an atom and a conducting surface . . . . . . . . . . . . 192
7.5 Perturbations Acting over a Finite amount <strong>of</strong> Time . . . . . . . . . . . . . . . . . 193<br />
7.5.1 General form <strong>of</strong> time-dependent perturbation theory . . . . . . . . . . . . 193<br />
7.5.2 Fermi’s Golden Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194<br />
7.6 Interaction between an atom and light . . . . . . . . . . . . . . . . . . . . . . . . 197<br />
7.6.1 Fields and potentials <strong>of</strong> a light wave . . . . . . . . . . . . . . . . . . . . . 197<br />
7.6.2 Interactions at Low Light Intensity . . . . . . . . . . . . . . . . . . . . . . 198<br />
7.6.3 Photoionization <strong>of</strong> Hydrogen 1s . . . . . . . . . . . . . . . . . . . . . . . . 202<br />
7.6.4 Spontaneous Emission <strong>of</strong> Light . . . . . . . . . . . . . . . . . . . . . . . . 204<br />
7.7 Time-dependent golden rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209<br />
7.7.1 Non-radiative transitions between displaced Harmonic Wells . . . . . . . . 210<br />
7.7.2 Semi-Classical Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 216<br />
7.8 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219<br />
8 Many Body <strong>Quantum</strong> <strong>Mechanics</strong> 222<br />
8.1 Symmetry with respect to particle Exchange . . . . . . . . . . . . . . . . . . . . . 222<br />
8.2 Matrix Elements <strong>of</strong> Electronic Operators . . . . . . . . . . . . . . . . . . . . . . . 227<br />
8.3 The Hartree-Fock Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 229<br />
8.3.1 Two electron integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230<br />
8.3.2 Koopmans’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.4 <strong>Quantum</strong> Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231<br />
8.4.1 The Born-Oppenheimer Approximation . . . . . . . . . . . . . . . . . . . . 232<br />
8.5 Problems and Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241<br />
A Physical Constants and Conversion Factors 247<br />
B Mathematical Results and Techniques to Know and Love 249<br />
B.1 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249<br />
B.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249<br />
B.1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250<br />
B.1.3 Spectral representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250<br />
B.2 Coordinate systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253<br />
B.2.1 Cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253<br />
B.2.2 Spherical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254<br />
B.2.3 Cylindrical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255<br />
C Mathematica Notebook Pages 256<br />
List <strong>of</strong> Figures<br />
1.1 Tangent field for simple pendulum with ω = 1. The superimposed curve is a linear<br />
approximation to the pendulum motion. . . . . . . . . . . . . . . . . . . . . . . . 17<br />
1.2 Vector diagram for motion in a central force. The particle’s motion is along the
Z axis which lies in the plane <strong>of</strong> the page. . . . . . . . . . . . . . . . . . . . . . . 21<br />
1.3 Screen shot <strong>of</strong> using Mathematica to plot phase-plane for harmonic oscillator.<br />
Here k/m = 1 and our xo = 0.75. . . . . . . . . . . . . . . . . . . . . . . . . . . . 28<br />
2.1 A gaussian wavepacket, ψ(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33<br />
2.2 Momentum-space distribution <strong>of</strong> ψ(k). . . . . . . . . . . . . . . . . . . . . . . . . 33<br />
2.3 Go for fixed t as a function <strong>of</strong> x. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35<br />
2.4 Evolution <strong>of</strong> a free particle wavefunction. . . . . . . . . . . . . . . . . . . . . . . 36<br />
2.5 Particle in a box states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38<br />
2.6 Graphical solution to transcendental equations for an electron in a truncated hard
well of depth Vo = 10 and width a = 2. The short-dashed blue curve corresponds
to the symmetric case and the long-dashed blue curve corresponds to the asymmetric
case. The red line is $\sqrt{1 - V_o/E}$. Bound state solutions are such that the red and
blue curves cross. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.7 Transmission (blue) and Reflection (red) coefficients for an electron scattering over<br />
a square well (V = −40 and a = 1 ). . . . . . . . . . . . . . . . . . . . . . . . . . 42<br />
2.8 Transmission Coefficient for particle passing over a bump. . . . . . . . . . . . . . 43<br />
2.9 Scattering waves for particle passing over a well. . . . . . . . . . . . . . . . . . . . 44<br />
2.10 Argand plot <strong>of</strong> a scattering wavefunction passing over a well. . . . . . . . . . . . . 45<br />
2.11 Density <strong>of</strong> states for a 1-, 2- , and 3- dimensional space. . . . . . . . . . . . . . . . 46<br />
2.12 Density <strong>of</strong> states for a quantum well and quantum wire compared to a 3d space.<br />
Here L = 5 and s = 2 for comparison. . . . . . . . . . . . . . . . . . . . . . . . . . 47<br />
2.13 Spherical Bessel functions, j0, j1, and j2 (red, blue, green) . . . . . . . . . . . . . 48
2.14 Radial wavefunctions (left column) and corresponding PDFs (right column) for an
electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0)
(solid) and (n, l) = (1, 1) (dashed) while the lower correspond to (n, l) = (2, 0)<br />
(solid) and (n, l) = (2, 1) (dashed) . . . . . . . . . . . . . . . . . . . . . . . . . . . 49<br />
3.1 Eckart Barrier and parabolic approximation <strong>of</strong> the transition state . . . . . . . . . 63<br />
3.2 Airy functions, Ai(y) (red) and Bi(y) (blue) . . . . . . . . . . . . . . . . . . . . . 66<br />
3.3 Bound states in a graviational well . . . . . . . . . . . . . . . . . . . . . . . . . . 69<br />
3.4 Elastic scattering trajectory for classical collision . . . . . . . . . . . . . . . . . . 70<br />
3.5 Form <strong>of</strong> the radial wave for repulsive (short dashed) and attractive (long dashed)<br />
potentials. The form for V = 0 is the solid curve for comparison. . . . . . . . . . . 76<br />
4.1 Gaussian distribution function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81<br />
4.2 Combination <strong>of</strong> two distrubitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 82<br />
4.3 Constructive and destructive interference from electron/two-slit experiment. The<br />
superimposed red and blue curves are P1 and P2 from the classical probabilities . 83<br />
4.4 The diffraction function sin(x)/x . . . . . . . . . . . . . . . . . . . . . . . . . . . 102<br />
5.1 Variational paths between endpoints. . . . . . . . . . . . . . . . . . . . . . . . . . 116<br />
5.2 Hermite Polynomials, Hn up to n = 3. . . . . . . . . . . . . . . . . . . . . . . . . 128<br />
5.3 Harmonic oscillator functions for n = 0 to 3 . . . . . . . . . . . . . . . . . . . . . 132<br />
5.4 <strong>Quantum</strong> and Classical Probability Distribution Functions for Harmonic Oscillator.133<br />
5.5 London-Eyring-Polanyi-Sato (LEPS) empirical potential for the F + H2 → FH + H
chemical reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135<br />
5.6 Morse well and harmonic approximation for HF . . . . . . . . . . . . . . . . . . . 136<br />
5.7 Model potential for proton tunneling. . . . . . . . . . . . . . . . . . . . . . . . . . 137<br />
5.8 Double well tunneling states as determined by the Numerov approach. . . . . . . . 138<br />
5.9 Tchebyshev Polynomials for n = 1 − 5 . . . . . . . . . . . . . . . . . . . . . . . . 140<br />
5.10 Ammonia Inversion and Tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . 147<br />
6.1 Vector model for the quantum angular momentum state |jm〉, which is represented<br />
here by the vector j which precesses about the z axis (axis <strong>of</strong> quantzation) with<br />
projection m. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156<br />
6.2 Spherical Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160<br />
6.3 Classical and <strong>Quantum</strong> Probability Distribution Functions for Angular Momentum.168<br />
7.1 Variation <strong>of</strong> energy level splitting as a function <strong>of</strong> the applied field for an ammonia<br />
molecule in an electric field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185<br />
7.2 Photo-ionization spectrum for hydrogen atom. . . . . . . . . . . . . . . . . . . . . 205<br />
8.1 Various contributions to the H2+ Hamiltonian. . . . . . . . . . . . . . . . . . . . . 236
8.2 Potential energy surface for H2+ molecular ion. . . . . . . . . . . . . . . . . . . . . 238
8.3 Three dimensional representations of ψ+ and ψ− for the H2+ molecular ion. . . . . 238
8.4 Setup calculation dialog screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243<br />
8.5 HOMO-1, HOMO and LUMO for CH2 = O. . . . . . . . . . . . . . . . . . . . . . 245<br />
8.6 Transition state geometry for H2 + C = O → CH2 = O. The Arrow indicates the<br />
reaction path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246<br />
B.1 sin(xa)/πx representation <strong>of</strong> the Dirac δ-function . . . . . . . . . . . . . . . . . . 251<br />
B.2 Gaussian representation <strong>of</strong> δ-function . . . . . . . . . . . . . . . . . . . . . . . . . 251<br />
List <strong>of</strong> Tables<br />
3.1 Location <strong>of</strong> nodes for Airy, Ai(x) function. . . . . . . . . . . . . . . . . . . . . . . 68<br />
5.1 Tchebychev polynomials <strong>of</strong> the first type . . . . . . . . . . . . . . . . . . . . . . . 140<br />
5.2 Eigenvalues for double well potential computed via DVR and Numerov approaches 143<br />
6.1 Spherical Harmonics (Condon-Shortley phase convention). . . . . . . . . . . . . . . 160
6.2 Relation between various notations for Clebsch-Gordan Coefficients in the literature169<br />
8.1 Vibrational Frequencies <strong>of</strong> Formaldehyde . . . . . . . . . . . . . . . . . . . . . . . 244<br />
A.1 Physical Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248<br />
A.2 Atomic Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248<br />
A.3 Useful orders <strong>of</strong> magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248<br />
Chapter 0<br />
Introduction<br />
Nothing conveys the impression <strong>of</strong> humungous intellect so much as even the sketchiest<br />
knowledge <strong>of</strong> quantum physics, and since the sketchiest knowledge is all anyone will<br />
ever have, never be shy about holding forth with bags <strong>of</strong> authority about subatomic<br />
particles and the quantum realm without having done any science whatsoever.<br />
Jack Klaff –Bluff Your Way in the <strong>Quantum</strong> Universe<br />
The field of quantum chemistry seeks to provide a rigorous description of chemical processes at the most fundamental level. For ordinary chemical processes, the most fundamental and underlying theory of chemistry is given by the time-dependent and time-independent versions of the Schrödinger equation. However, simply stating an equation that provides the underlying theory in no shape or form yields any predictive or interpretive power. In fact, most of what we do in quantum mechanics is to develop a series of well-posed approximations and physical assumptions to solve the basic equations of quantum mechanics. In this course, we will delve deeply into the underlying physical and mathematical theory. We will learn how to solve some elementary problems and apply these to not-so-elementary examples.
As with any course of this nature, the content reflects the instructor's personal interests in
the field. In this case, the emphasis <strong>of</strong> the course is towards dynamical processes, transitions<br />
between states, and interaction between matter and radiation. More “traditional” quantum<br />
chemistry courses will focus upon electronic structure. In fact, the moniker “quantum chemistry”<br />
typically refers to electronic structure theory. While this is an extremely rich topic, it is my<br />
personal opinion that a deeper understanding <strong>of</strong> dynamical processes provides a broader basis<br />
for understanding chemical processes.<br />
It is assumed from the beginning, that students taking this course have had some exposure<br />
to the fundamental principles <strong>of</strong> quantum theory as applied to chemical systems. This is usually<br />
in the context <strong>of</strong> a physical chemistry course or a separate course in quantum chemistry. I<br />
also assume that students taking this course have had undergraduate level courses in calculus,<br />
differential equations, and have some concepts <strong>of</strong> linear algebra. Students lacking in any <strong>of</strong> these<br />
areas are strongly encouraged to sit through my undergraduate Physical Chemistry II course<br />
(<strong>of</strong>fered in the Spring Semester at the Univ. <strong>of</strong> <strong>Houston</strong>) before attempting this course. This<br />
course is by design and purpose theoretical in nature.<br />
The purpose <strong>of</strong> this course is to provide a solid and mathematically rigorous tour through<br />
modern quantum mechanics. We will begin with simple examples which can be worked out<br />
exactly on paper and move on to discuss various approximation schemes. For cases in which<br />
analytical solutions are either too obfuscating or impossible, computer methods will be introduced<br />
using Mathematica. Applications toward chemically relevant topics will be emphasized<br />
throughout.<br />
We will primarily focus upon single particle systems, or systems in which the particles are<br />
distinguishable. Special considerations for systems <strong>of</strong> indistinguishable particles, such as the<br />
electrons in a molecule, will be discussed towards the end <strong>of</strong> the course. The pace <strong>of</strong> the course<br />
is fairly rigorous, with emphasis on solving problems either analytically or using the computer.
I also tend to emphasize how to approach a problem from a theoretical viewpoint. As you<br />
will discover rapidly, very few <strong>of</strong> the problem sets in this course are <strong>of</strong> the “look-up the right<br />
formula” type. Rather, you will need to learn to use the various techniques (perturbation theory,<br />
commutation relations, etc...) to solve and work out problems for a variety <strong>of</strong> physical systems.<br />
The lecture notes in this package are really to be regarded as a work in progress and updates<br />
and additions will be posted as they evolve. Lacking is a complete chapter on the Hydrogen<br />
atom and atomic physics and a good overview <strong>of</strong> many body theory. Also, I have not included<br />
a chapter on scattering and other topics as these will be added over the course <strong>of</strong> time. Certain<br />
sections are clearly better than others and will be improved upon over time. Each chapter ends<br />
with a series of exercises and suggested problems, some of which have detailed solutions. Others you should work out on your own. At the end of this book are a series of Mathematica notebooks I have written which illustrate various points and perform a variety of calculations. These can be downloaded from my web-site (http://k2.chem.uh.edu/quantum/) and run on any recent version of Mathematica (v4 or later).
It goes entirely without saying (but I will anyway) that these notes come from a wide variety<br />
<strong>of</strong> sources which I have tried to cite where possible.<br />
0.1 Essentials<br />
• Instructor: Prof. Eric R. Bittner.
• Office: Fleming 221 J<br />
• Email: bittner@uh.edu<br />
• Phone: -3-2775<br />
• Office Hours: Monday and Thurs. afternoons or by appointment.<br />
• Course Web Page: http://k2.chem.uh.edu/quantum/ Solution sets, course news, class<br />
notes, sample computer routines, etc...will be posted from time to time on this web-page.<br />
• Other Required Text: <strong>Quantum</strong> <strong>Mechanics</strong>, Landau and Lifshitz. This is volume 3 <strong>of</strong><br />
L&L’s classical course in modern physics. Every self-respecting scientist has at least two or<br />
three <strong>of</strong> their books on their book-shelves. This text tends to be pretty terse and uses the<br />
classical phrase it is easy to show... quite a bit (it usually means just the opposite). The<br />
problems are usually worked out in detail and are usually classic applications <strong>of</strong> quantum<br />
theory. This is a land-mark book and contains everything you really need to know to do<br />
quantum mechanics.<br />
• Recommended Texts: I highly recommend that you use a variety <strong>of</strong> books since one<br />
author’s approach to a given topic may be clearer than another’s approach.<br />
– <strong>Quantum</strong> <strong>Mechanics</strong>, Cohen-Tannoudji, et al. This two volume book is very comprehensive<br />
and tends to be rather formal (and formidable) in its approach. The problems<br />
are excellent.<br />
– Lectures in <strong>Quantum</strong> <strong>Mechanics</strong>, Gordon Baym. Baym’s book covers a wide range <strong>of</strong><br />
topics in a lecture note style.<br />
– <strong>Quantum</strong> Chemistry, I. Levine. This is usually the first quantum book that chemists<br />
get. I find it to be too wordy and the notation and derivations a bit ponderous. Levine<br />
does not use Dirac notation. However, he does give a good overview <strong>of</strong> elementary<br />
electronic structure theory and some of its important developments. Good for starting off in electronic structure.
– Modern <strong>Quantum</strong> <strong>Mechanics</strong>, J. J. Sakurai. This is a real classic. Not good for a first<br />
exposure since it assumes a fairly sophisticated understanding <strong>of</strong> quantum mechanics<br />
and mathematics.<br />
– Intermediate <strong>Quantum</strong> <strong>Mechanics</strong>, Hans Bethe and Roman Jackiw. This book is a<br />
great exploration <strong>of</strong> advanced topics in quantum mechanics as illustrated by atomic<br />
systems.<br />
– What is <strong>Quantum</strong> <strong>Mechanics</strong>?, Transnational College <strong>of</strong> LEX. OK, this one I found<br />
at Barnes and Noble and it’s more or less a cartoon book. But, it is really good. It<br />
explores the historical development <strong>of</strong> quantum mechanics, has some really interesting<br />
insights into semi-classical and ”old” quantum theory, and presents the study <strong>of</strong><br />
quantum mechanics as an unfolding story. This book I highly recommend if this
is the first time you are taking a course on quantum mechanics.<br />
– <strong>Quantum</strong> <strong>Mechanics</strong> in Chemistry by George Schatz and Mark Ratner. Ratner and<br />
Schatz have more in terms <strong>of</strong> elementary quantum chemistry, emphasizing the use <strong>of</strong><br />
modern quantum chemical computer programs, than almost any text I have reviewed.
• Prerequisites: Graduate status in chemistry. This course is required for all Physical Chemistry
graduate students. The level <strong>of</strong> the course will be fairly rigorous and I assume that students<br />
have had some exposure to quantum mechanics at the undergraduate level–typically<br />
in Physical Chemistry, and are competent in linear algebra, calculus, and solving elementary<br />
differential equations.<br />
• Tests and Grades: There are no exams in this course, only problem sets and participation<br />
in discussion. This means coming to lecture prepared to ask and answer questions. My<br />
grading policy is pretty simple. If you make an honest effort, do the assigned problems<br />
(mostly correctly), and participate in class, you will be rewarded with at least a B. Of<br />
course this is the formula for success for any course.<br />
0.2 Problem Sets<br />
Your course grade will largely be determined by your performance on these problems as well as<br />
the assigned discussion <strong>of</strong> a particular problem. My philosophy towards problem sets is that this<br />
is the only way to really learn this material. These problems are intentionally challenging, but<br />
not overwhelming, and are paced to correspond to what will be going on in the lecture.<br />
Some ground rules:<br />
1. Due dates are posted on each problem–usually 1 week or 2 weeks after they are assigned.<br />
Late submissions may be turned in up to 1 week later. All problems must be turned in by<br />
December 3. I will not accept any submissions after that date.<br />
2. Handwritten Problems. If I can’t read it, I won’t grade it. Period. Consequently, I strongly<br />
encourage the use <strong>of</strong> word processing s<strong>of</strong>tware for your final submission. Problem solutions<br />
can be submitted electronically as Mathematica, Latex, or PDF files to bittner@uh.edu with<br />
the subject: QUANTUM PROBLEM SET. Do not send me a MSWord file as an email<br />
attachment. I expect some text (written in complete and correct sentences) to explain your
steps where needed and some discussion <strong>of</strong> the results. The computer lab in the basement<br />
<strong>of</strong> Fleming has 20 PCs with copies <strong>of</strong> Mathematica or you can obtain your own license<br />
from the <strong>University</strong> Media Center.<br />
3. Collaborations. You are strongly encouraged to work together and collaborate on problems.<br />
However, simply copying from your fellow student is not an acceptable collaboration.<br />
4. These are the only problems you need to turn in. We will have additional exercises–mostly<br />
coming from the lecture. Also, at the end <strong>of</strong> the lectures herein, are a set <strong>of</strong> suggested<br />
problems and exercises to work on. Many <strong>of</strong> these have solutions.<br />
0.3 2003 Course Calendar<br />
This is a rough schedule of topics we will cover. In essence, we will start from
a basic description <strong>of</strong> quantum wave mechanics and bound states. We will then move onto<br />
the more formal aspects <strong>of</strong> quantum theory: Dirac notation, perturbation theory, variational<br />
theory, and the like. Lastly, we move onto applications: Hydrogen atom, many-electron systems,<br />
semi-classical approximations, and a semi-classical treatment <strong>of</strong> light absorption and emission.<br />
We will also have a recitation session in 221 at 10am Friday morning. The purpose <strong>of</strong> this<br />
will be to specifically discuss the problem sets and other issues.<br />
• 27-August: Course overview: Classical Concepts<br />
• 3-Sept: Finishing Classical <strong>Mechanics</strong>/Elementary <strong>Quantum</strong> concepts<br />
• 8-Sept: Particle in a box and hard wall potentials (Perry?)<br />
• 10 Sept: Tunneling/Density <strong>of</strong> states (Perry)<br />
• 15/17 Bohr-Sommerfeld Quantization/Old quantum theory/connection to classical mechanics
(Perry)<br />
• 22/24 Sept: Semiclassical quantum mechanics: WKB Approx. Application to scattering<br />
• 29 Sept/1 Oct. Postulates <strong>of</strong> quantum mechanics: Dirac notation, superposition principle,<br />
simple calculations.<br />
• 6/8 Oct: Bound States: Variational principle, quantum harmonic oscillator<br />
• 13/15 Oct: <strong>Quantum</strong> mechanics in 3D: Angular momentum (Chapt 4.1-4.8)<br />
• 20/22 Oct: Hydrogen atom/Hydrogenic systems/Atomic structure<br />
• 27/29 Oct: Perturbation Theory:<br />
• 3/5 Nov: Time-dependent Perturbation Theory:<br />
• 10/12 Identical Particles/<strong>Quantum</strong> Statistics<br />
• 17/19 Nov: Helium atom, hydrogen ion<br />
• 24/26 Nov: <strong>Quantum</strong> Chemistry<br />
• 3 Dec–Last day to turn in problem sets<br />
• Final Exam: TBA<br />
Part I<br />
Lecture Notes<br />
Chapter 1<br />
Survey <strong>of</strong> Classical <strong>Mechanics</strong><br />
Quantum mechanics is in many ways the culmination of many hundreds of years of work and
thought about how mechanical things move and behave. Since ancient times, scientists have<br />
wondered about the structure <strong>of</strong> matter and have tried to develop a generalized and underlying<br />
theory which governs how matter moves at all length scales.<br />
For ordinary objects, the rules of motion are very simple. By ordinary, I mean objects that are more or less on the same length and mass scale as you and I, say (conservatively) $10^{-7}$ m to $10^{6}$ m and $10^{-25}$ g to $10^{8}$ g, moving at less than 20% of the speed of light. In other words, almost everything you can see and touch and hold obeys what are called “classical” laws of motion. The term “classical” means that the basic principles of this class of motion have their foundation in antiquity. Classical mechanics is an extremely well developed area of physics. While you may think that, given that classical mechanics has been studied extensively for hundreds of years, there really is little new development in this field, it remains a vital and extremely active area of research. Why? Because the majority of the universe “lives” in a dimensional realm where classical mechanics is extremely valid. Classical mechanics is the workhorse for atomistic simulations of fluids, proteins, and polymers. It provides the basis for understanding chaotic systems. It also provides a useful foundation for many of the concepts in quantum mechanics.
<strong>Quantum</strong> mechanics provides a description <strong>of</strong> how matter behaves at very small length and<br />
mass scales: i.e. the realm <strong>of</strong> atoms, molecules, and below. It was developed over the last century<br />
to explain a series <strong>of</strong> experiments on atomic systems that could not be explained using purely<br />
classical treatments. The advent <strong>of</strong> quantum mechanics forced us to look beyond the classical<br />
theories. However, it was not a drastic and complete departure. At some point, the two theories<br />
must correspond so that classical mechanics is the limiting behavior <strong>of</strong> quantum mechanics for<br />
macroscopic objects. Consequently, many <strong>of</strong> the concepts we will study in quantum mechanics<br />
have direct analogs to classical mechanics: momentum, angular momentum, time, potential<br />
energy, kinetic energy, and action.<br />
Much like classical music is in a particular style, classical mechanics is based upon the principle<br />
that the motion <strong>of</strong> a body can be reduced to the motion <strong>of</strong> a point particle with a given mass<br />
m, position x, and velocity v. In this chapter, we will review some <strong>of</strong> the concepts <strong>of</strong> classical<br />
mechanics which are necessary for studying quantum mechanics. We will cast these in a form
whereby we can move easily back and forth between classical and quantum mechanics. We will<br />
first discuss Newtonian motion and cast this into the Lagrangian form. We will then discuss the<br />
principle <strong>of</strong> least action and Hamiltonian dynamics and the concept <strong>of</strong> phase space.<br />
1.1 Newton’s equations <strong>of</strong> motion<br />
Newton’s Principia set the theoretical basis <strong>of</strong> mathematical mechanics and analysis <strong>of</strong> physical<br />
bodies. The equation that force equals mass times acceleration is the fundamental equation <strong>of</strong><br />
classical mechanics. Stated mathematically,
$$m\ddot{x} = f(x) \tag{1.1}$$
The dots refer to differentiation with respect to time; we will use this notation for time derivatives. We may also use $x'$ or $dx/dt$ as well. So,
$$\ddot{x} = \frac{d^2x}{dt^2}.$$
For now we are limiting ourselves to one particle moving in one dimension. For motion in
more dimensions, we need to introduce vector components. In cartesian coordinates, Newton’s<br />
equations are<br />
m¨x = fx(x, y, z) (1.2)<br />
m¨y = fy(x, y, z) (1.3)<br />
m¨z = fz(x, y, z) (1.4)<br />
where the force vector $\vec{f}(x, y, z)$ has components in all three dimensions and varies with location. We can also define a position vector, $\vec{x} = (x, y, z)$, and velocity vector $\vec{v} = (\dot{x}, \dot{y}, \dot{z})$. We can also
replace the second-order differential equation with two first order equations.<br />
˙x = vx (1.5)<br />
˙vx = fx/m (1.6)<br />
These, along with the initial conditions, x(0) and v(0) are all that are needed to solve for the<br />
motion <strong>of</strong> a particle with mass m given a force f. We could have chosen two end points as well<br />
and asked, what path must the particle take to get from one point to the next. Let us consider<br />
some elementary solutions.<br />
1.1.1 Elementary solutions<br />
First the case in which f = 0 and ¨x = 0. Thus, v = ˙x = const. So, unless there is an applied<br />
force, the velocity <strong>of</strong> a particle will remain unchanged.<br />
Second, we consider the case of a linear force, f = −kx. This is the restoring force for a spring; such force laws are termed Hooke's law, and k is termed the force constant. Our equations
are:<br />
˙x = vx (1.7)<br />
˙vx = −k/mx (1.8)<br />
or ¨x = −(k/m)x. So we want some function which is its own second derivative multiplied by<br />
some number. The cosine and sine functions have this property, so let’s try<br />
x(t) = A cos(at) + B sin(bt).<br />
Taking time derivatives,
$$\dot{x}(t) = -aA\sin(at) + bB\cos(bt);$$
$$\ddot{x}(t) = -a^2A\cos(at) - b^2B\sin(bt).$$
So we get the required result if $a = b = \sqrt{k/m}$, leaving $A$ and $B$ undetermined. Thus, we need two initial conditions to specify these coefficients. Let's pick $x(0) = x_o$ and $v(0) = 0$. Thus, $x(0) = A = x_o$ and $B = 0$. Notice that the term $\sqrt{k/m}$ has units of angular frequency.
So, with
$$\omega = \sqrt{\frac{k}{m}},$$
our equations of motion are
$$x(t) = x_o\cos(\omega t) \tag{1.9}$$
$$v(t) = -x_o\omega\sin(\omega t). \tag{1.10}$$

Figure 1.1: Tangent field for simple pendulum with ω = 1. The superimposed curve is a linear approximation to the pendulum motion.
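As a quick check of Eqs. (1.9) and (1.10), one can also integrate Newton's equation numerically and compare with the closed-form solution. The following minimal Mathematica sketch does this; it assumes a current version of Mathematica, and the values m = k = 1 and xo = 0.75 are chosen purely for illustration.

(* Integrate m x''[t] == -k x[t] with x(0) = xo, x'(0) = 0 and compare
   against the closed-form solution x(t) = xo Cos[w t] with w = Sqrt[k/m]. *)
m = 1; k = 1; xo = 0.75; w = Sqrt[k/m];
sol = NDSolve[{m x''[t] == -k x[t], x[0] == xo, x'[0] == 0}, x, {t, 0, 4 Pi}];
Plot[{Evaluate[x[t] /. sol], xo Cos[w t]}, {t, 0, 4 Pi},
  PlotStyle -> {Blue, {Red, Dashed}}]

The numerical and analytic curves should lie on top of one another.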
1.1.2 Phase plane analysis<br />
Often one can not determine the closed form solution to a given problem and we need to turn<br />
to more approximate methods or even graphical methods. Here, we will look at an extremely<br />
useful way to analyze a system <strong>of</strong> equations by plotting their time-derivatives.<br />
First, let’s look at the oscillator we just studied. We can define a vector s = ( ˙x, ˙v) =<br />
(v, −k/mx) and plot the vector field. Fig. 1.3 shows how to do this in Mathematica. The<br />
superimposed curve is one trajectory and the arrows give the “flow” <strong>of</strong> trajectories on the phase<br />
plane.<br />
We can examine more complex behavior using this procedure. For example, the simple<br />
pendulum obeys the equation ¨x = −ω 2 sin x. This can be reduced to two first order equations:<br />
˙x = v and ˙v = −ω 2 sin(x).<br />
We can approximate the motion <strong>of</strong> the pendulum for small displacements by expanding the<br />
pendulum’s force about x = 0,<br />
$$-\omega^2\sin(x) = -\omega^2\left(x - \frac{x^3}{6} + \cdots\right).$$
For small x the cubic term is very small, and we have
$$\dot{v} = -\omega^2 x = -\frac{k}{m}x,$$
which is the equation for harmonic motion. So, for small initial displacements, we see that the pendulum oscillates back and forth with an angular frequency ω. For large initial displacements, $x_o = \pi$, or if we impart some initial velocity on the system, $v_o > 1$, the pendulum does not oscillate back and forth but undergoes rotational motion (spinning!) in one direction or the other.
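The tangent field of Fig. 1.1 can be generated with a few lines of Mathematica. The sketch below assumes a recent version of Mathematica (VectorPlot is used here; older versions used an add-on package for the same purpose) and takes ω = 1 and xo = 0.75 as in the figure caption.

(* Tangent (direction) field for the pendulum x' = v, v' = -w^2 Sin[x],
   with the small-angle harmonic trajectory superimposed. *)
w = 1; xo = 0.75;
field = VectorPlot[{v, -w^2 Sin[x]}, {x, -2 Pi, 2 Pi}, {v, -3, 3}];
traj = ParametricPlot[{xo Cos[w t], -xo w Sin[w t]}, {t, 0, 2 Pi},
   PlotStyle -> Red];
Show[field, traj]

Trajectories near the origin follow the closed harmonic curve, while the field far from x = 0 shows where the small-angle approximation breaks down.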
1.2 Lagrangian <strong>Mechanics</strong><br />
1.2.1 The Principle <strong>of</strong> Least Action<br />
The most general form <strong>of</strong> the law governing the motion <strong>of</strong> a mass is the principle <strong>of</strong> least action<br />
or Hamilton’s principle. The basic idea is that every mechanical system is described by a single<br />
function of coordinate, velocity, and time, L(x, ẋ, t), and that the motion of the particle is such that a certain condition is satisfied. That condition is that the time integral of this function,
$$S = \int_{t_o}^{t_f} L(x, \dot{x}, t)\,dt,$$
takes the least possible value for a path that starts at $x_o$ at the initial time and ends at $x_f$ at the final time.
Let's take x(t) to be the function for which S is minimized. This means that S must increase for any variation about this path, x(t) + δx(t). Since the end points are specified, δx(to) = δx(tf) = 0, and the change in S upon replacement of x(t) with x(t) + δx(t) is
$$\delta S = \int_{t_o}^{t_f} L(x + \delta x, \dot{x} + \delta\dot{x}, t)\,dt - \int_{t_o}^{t_f} L(x, \dot{x}, t)\,dt = 0.$$
This is zero, because S is a minimum. Now, we can expand the integrand in the first term<br />
$$L(x + \delta x, \dot{x} + \delta\dot{x}, t) = L(x, \dot{x}, t) + \frac{\partial L}{\partial x}\,\delta x + \frac{\partial L}{\partial \dot{x}}\,\delta\dot{x}.$$
Thus, we have<br />
$$\int_{t_o}^{t_f}\left(\frac{\partial L}{\partial x}\,\delta x + \frac{\partial L}{\partial \dot{x}}\,\delta\dot{x}\right)dt = 0.$$
Since δ ˙x = dδx/dt and integrating the second term by parts<br />
$$\delta S = \left[\frac{\partial L}{\partial \dot{x}}\,\delta x\right]_{t_o}^{t_f} + \int_{t_o}^{t_f}\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}\right)\delta x\,dt = 0.$$
The surface term vanishes because of the condition imposed above. This leaves the integral. It too must vanish, and the only way for this to happen is if the integrand itself vanishes. Thus we have the Lagrange equation
$$\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = 0.$$
L is known as the Lagrangian. Before moving on, we consider the case <strong>of</strong> a free particle.<br />
The Lagrangian in this case must be independent <strong>of</strong> the position <strong>of</strong> the particle since a freely<br />
moving particle defines an inertial frame. Since space is isotropic, L must only depend upon the<br />
magnitude <strong>of</strong> v and not its direction. Hence,<br />
$$L = L(v^2).$$
Since L is independent of x, ∂L/∂x = 0, so the Lagrange equation is
$$\frac{d}{dt}\frac{\partial L}{\partial v} = 0.$$
So ∂L/∂v = const, which leads us to conclude that L is quadratic in v. In fact,
$$L = \frac{1}{2}mv^2,$$
which is the kinetic energy for a particle,
$$T = \frac{1}{2}mv^2 = \frac{1}{2}m\dot{x}^2.$$
For a particle moving in a potential field, V , the Lagrangian is given by<br />
L = T − V.<br />
L has units <strong>of</strong> energy and gives the difference between the energy <strong>of</strong> motion and the energy <strong>of</strong><br />
location.<br />
This leads to the equations of motion:
$$\frac{d}{dt}\frac{\partial L}{\partial v} = \frac{\partial L}{\partial x}.$$
Substituting L = T − V yields
$$m\dot{v} = -\frac{\partial V}{\partial x},$$
which is identical to Newton's equations given above once we identify the force as minus the derivative of the potential. For the free particle, v = const. Thus,
$$S = \int_{t_o}^{t_f}\frac{m}{2}v^2\,dt = \frac{m}{2}v^2(t_f - t_o).$$
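The Lagrangian machinery is easy to verify symbolically. The minimal Mathematica sketch below forms the Euler-Lagrange expression for L = ½mẋ² − V(x), with V left as a generic (user-supplied) potential function, and confirms that setting it to zero reproduces Newton's equation m ẍ = −∂V/∂x.

(* Euler-Lagrange expression d/dt (dL/dx') - dL/dx for L = (1/2) m x'[t]^2 - V[x[t]]. *)
lag = 1/2 m x'[t]^2 - V[x[t]];
euler = D[D[lag, x'[t]], t] - D[lag, x[t]];
Simplify[euler]   (* gives m x''[t] + V'[x[t]]; setting it to zero gives m x'' = -dV/dx *)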
You may be wondering at this point why we needed a new function and derived all this from<br />
some minimization principle. The reason is that for some systems we have constraints on the type<br />
<strong>of</strong> motion they can undertake. For example, there may be bonds, hinges, and other mechanical<br />
hindrances which limit the range of motion a given particle can take. The Lagrangian formalism provides a mechanism for incorporating these extra effects in a consistent and correct way. In fact, we will use this principle later in deriving a variational solution to the Schrödinger equation
by constraining the wavefunction solutions to be orthonormal.<br />
Lastly, it is interesting to note that $v^2 = (dl/dt)^2 = (dl)^2/(dt)^2$ is the square of the element of an arc in a given coordinate system. Thus, within the Lagrangian formalism it is easy to convert from one coordinate system to another. For example, in cartesian coordinates $dl^2 = dx^2 + dy^2 + dz^2$; thus, $v^2 = \dot{x}^2 + \dot{y}^2 + \dot{z}^2$. In cylindrical coordinates, $dl^2 = dr^2 + r^2d\phi^2 + dz^2$, and we have the Lagrangian
$$L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\phi}^2 + \dot{z}^2),$$
and for spherical coordinates $dl^2 = dr^2 + r^2d\theta^2 + r^2\sin^2\theta\,d\phi^2$; hence
$$L = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2\theta\,\dot{\phi}^2).$$
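These coordinate conversions can be checked mechanically. The short Mathematica sketch below (with th and ph standing in for θ and φ) substitutes the spherical parametrization into the cartesian kinetic energy and should recover the kinetic-energy part of the spherical-coordinate Lagrangian quoted above.

(* Substitute x = r Sin[th] Cos[ph], y = r Sin[th] Sin[ph], z = r Cos[th]
   into T = (m/2)(x'^2 + y'^2 + z'^2) and simplify. *)
xyz = {r[t] Sin[th[t]] Cos[ph[t]], r[t] Sin[th[t]] Sin[ph[t]], r[t] Cos[th[t]]};
kinetic = m/2 Total[D[xyz, t]^2] // FullSimplify
(* expected: m/2 (r'[t]^2 + r[t]^2 th'[t]^2 + r[t]^2 Sin[th[t]]^2 ph'[t]^2) *)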
1.2.2 Example: 3 dimensional harmonic oscillator in spherical coordinates<br />
Here we take the potential energy to be a function <strong>of</strong> r alone (isotropic)<br />
V (r) = kr 2 /2.<br />
Thus, the Lagrangian in cartesian coordinates is
$$L = \frac{m}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - \frac{k}{2}r^2.$$
Since $r^2 = x^2 + y^2 + z^2$, we could easily solve this problem in cartesian space, since
$$L = \frac{m}{2}(\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - \frac{k}{2}(x^2 + y^2 + z^2) \tag{1.11}$$
$$= \left(\frac{m}{2}\dot{x}^2 - \frac{k}{2}x^2\right) + \left(\frac{m}{2}\dot{y}^2 - \frac{k}{2}y^2\right) + \left(\frac{m}{2}\dot{z}^2 - \frac{k}{2}z^2\right) \tag{1.12}$$
and we see that the system is separable into 3 independent oscillators. To convert to spherical polar coordinates, we use
$$x = r\sin(\theta)\cos(\phi) \tag{1.13}$$
$$y = r\sin(\theta)\sin(\phi) \tag{1.14}$$
$$z = r\cos(\theta) \tag{1.15}$$
and the arc length given above:
$$L = \frac{m}{2}(\dot{r}^2 + r^2\dot{\theta}^2 + r^2\sin^2\theta\,\dot{\phi}^2) - \frac{k}{2}r^2.$$
The equations of motion are
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{\phi}} - \frac{\partial L}{\partial\phi} = \frac{d}{dt}\left(mr^2\sin^2\theta\,\dot{\phi}\right) = 0 \tag{1.16}$$
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{\theta}} - \frac{\partial L}{\partial\theta} = \frac{d}{dt}\left(mr^2\dot{\theta}\right) - mr^2\sin\theta\cos\theta\,\dot{\phi}^2 = 0 \tag{1.17}$$
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{r}} - \frac{\partial L}{\partial r} = \frac{d}{dt}\left(m\dot{r}\right) - mr\dot{\theta}^2 - mr\sin^2\theta\,\dot{\phi}^2 + kr = 0 \tag{1.18}$$
We now prove that the motion <strong>of</strong> a particle in a central force field lies in a plane containing<br />
the origin. The force acting on the particle at any given time is in a direction towards the origin.<br />
Now, place an arbitrary cartesian frame centered about the particle with the z axis parallel to<br />
the direction of motion as sketched in Fig. 1.2. Note that the y axis is perpendicular to the plane
<strong>of</strong> the page and hence there is no force component in that direction. Consequently, the motion<br />
<strong>of</strong> the particle is constrained to lie in the zx plane, i.e. the plane <strong>of</strong> the page and there is no<br />
force component which will take the particle out <strong>of</strong> this plane.<br />
Let’s make a change <strong>of</strong> coordinates by rotating the original frame to a new one whereby the<br />
new z ′ is perpendicular to the plane containing the initial position and velocity vectors. In the<br />
sketch above, this new z ′ axis would be perpendicular to the page and would contain the y axis<br />
we placed on the moving particle. In terms <strong>of</strong> these new coordinates, the Lagrangian will have<br />
the same form as before since our initial choice <strong>of</strong> axis was arbitrary. However, now, we have<br />
some additional constraints. Because the motion is now constrained to lie in the x ′ y ′ plane,<br />
θ ′ = π/2 is a constant, and ˙ θ = 0. Thus cos(π/2) = 0 and sin(π/2) = 1 in the equations above.<br />
From the equations for φ we find
$$\frac{d}{dt}\left(mr^2\dot{\phi}\right) = 0$$
or
$$mr^2\dot{\phi} = \text{const} = p_\phi.$$
This we can put into the r equation:
$$\frac{d}{dt}(m\dot{r}) - mr\dot{\phi}^2 + kr = 0 \tag{1.19}$$
$$\frac{d}{dt}(m\dot{r}) - \frac{p_\phi^2}{mr^3} + kr = 0 \tag{1.20}$$
Figure 1.2: Vector diagram for motion in a central force. The particle's motion is along the Z axis, which lies in the plane of the page.
where we notice that $-p_\phi^2/(mr^3)$ is the centrifugal force. Taking the last equation, multiplying by $\dot{r}$ and then integrating with respect to time gives
$$\dot{r}^2 = -\frac{p_\phi^2}{m^2r^2} - kr^2 + b, \tag{1.21}$$
i.e.
$$\dot{r} = \sqrt{-\frac{p_\phi^2}{m^2r^2} - kr^2 + b}. \tag{1.22}$$
Integrating once again with respect to time,
$$t - t_o = \int\frac{dr}{\dot{r}} \tag{1.23}$$
$$= \int\frac{r\,dr}{\sqrt{-p_\phi^2/m^2 - kr^4 + br^2}} \tag{1.24}$$
$$= \frac{1}{2}\int\frac{dx}{\sqrt{a + bx + cx^2}} \tag{1.25}$$
where $x = r^2$, $a = -p_\phi^2/m^2$, $b$ is the constant of integration, and $c = -k$. This is a standard integral and we can evaluate it to find
$$r^2 = \frac{1}{2\omega}\left(b + A\sin(\omega(t - t_o))\right) \tag{1.26}$$
where
$$A = \sqrt{b^2 - \frac{\omega^2 p_\phi^2}{m^2}}.$$
What we see then is that $r$ follows an elliptical path in a plane determined by the initial velocity.
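This conclusion is easy to confirm numerically. The Mathematica sketch below (with m = k = 1 and an arbitrary initial condition chosen to put the motion in the x-y plane) integrates the planar equations of motion and plots the resulting closed elliptical orbit centered on the origin.

(* Planar isotropic oscillator: m x'' = -k x, m y'' = -k y. *)
m = 1; k = 1;
orbit = NDSolve[{m x''[t] == -k x[t], m y''[t] == -k y[t],
    x[0] == 1, x'[0] == 0, y[0] == 0, y'[0] == 0.5}, {x, y}, {t, 0, 4 Pi}];
ParametricPlot[Evaluate[{x[t], y[t]} /. orbit], {t, 0, 4 Pi}]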
This example also illustrates another important point which has tremendous impact on molecular<br />
quantum mechanics, namely, the angular momentum about the axis <strong>of</strong> rotation is conserved.<br />
We can choose any axis we want. In order to avoid confusion, let us define χ as the angular<br />
rotation about the body-fixed Z ′ axis and φ as angular rotation about the original Z axis. So<br />
our conservation equations are
$$mr^2\dot{\chi} = p_\chi$$
about the Z′ axis and
$$mr^2\sin\theta\,\dot{\phi} = p_\phi$$
for some arbitrary fixed Z axis. The angle θ will also have an angular momentum associated with it, $p_\theta = mr^2\dot{\theta}$, but we do not have an associated conservation principle for this term since it varies with φ. We can connect $p_\chi$ with $p_\theta$ and $p_\phi$ about the other axis via
$$p_\chi\,d\chi = p_\theta\,d\theta + p_\phi\,d\phi.$$
Consequently,
$$mr^2\dot{\chi}\,d\chi = mr^2\left(\dot{\phi}\sin\theta\,d\phi + \dot{\theta}\,d\theta\right).$$
Here we see that the angular momentum vector remains fixed in space in the absence of any external forces. Once an object starts spinning, its axis of rotation remains pointing in a given direction unless something acts upon it (torque); in essence, in classical mechanics we can fully specify $L_x$, $L_y$, and $L_z$ as constants of the motion since $d\vec{L}/dt = 0$. In a later chapter, we will cover the quantum mechanics of rotations in much more detail. In the quantum case, we will find that one cannot make such a precise specification of the angular momentum vector for systems with low angular momentum. We will, however, recover the classical limit in the end as we consider the limit of large angular momenta.
1.3 Conservation Laws
We just encountered one extremely important concept in mechanics, namely, that some quantities are conserved if there is an underlying symmetry. Next, we consider a conservation law arising from the homogeneity of time. For a closed dynamical system, the Lagrangian does not explicitly depend upon time. Thus we can write
dL/dt = (∂L/∂x) ẋ + (∂L/∂ẋ) ẍ (1.27)
Replacing ∂L/∂x with Lagrange's equation, we obtain
dL/dt = ẋ d/dt(∂L/∂ẋ) + (∂L/∂ẋ) ẍ (1.28)
= d/dt( ẋ ∂L/∂ẋ ) (1.29)
Now, rearranging this a bit,
d/dt( ẋ ∂L/∂ẋ − L ) = 0. (1.30)
So, we can take the quantity in the parentheses to be a constant:
E = ẋ ∂L/∂ẋ − L = const. (1.31)
is an integral of the motion. This is the energy of the system. Since L can be written in the form L = T − V, where T is a quadratic function of the velocities, Euler's theorem on homogeneous functions gives
ẋ ∂L/∂ẋ = ẋ ∂T/∂ẋ = 2T.
This gives
E = T + V,
which says that the energy of the system can be written as the sum of two different terms: the kinetic energy, or energy of motion, and the potential energy, or energy of location.
One can also prove that linear momentum is conserved when space is homogeneous. That is, when we translate our system by some arbitrary amount ɛ, our dynamical quantities must remain unchanged. We will prove this in the problem sets.
1.4 Hamiltonian Dynamics<br />
Hamiltonian dynamics is a further generalization <strong>of</strong> classical dynamics and provides a crucial link<br />
with quantum mechanics. Hamilton’s function, H, is written in terms <strong>of</strong> the particle’s position<br />
and momentum, H = H(p, q). It is related to the Lagrangian via
H = ẋ p − L(x, ẋ).
Taking the derivative of H w.r.t. x,
∂H/∂x = −∂L/∂x = −ṗ.
Differentiation with respect to p gives
∂H/∂p = q̇.
These last two equations give the conservation conditions in the Hamiltonian formalism. If H is independent of the position of the particle, then the generalized momentum p is constant in time. If the potential energy is independent of time, the Hamiltonian gives the total energy of the system,
H = T + V.
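As a quick numerical illustration (a minimal sketch, not part of the original notes), the following Mathematica code integrates Hamilton's equations for a one-dimensional harmonic oscillator and checks that H stays constant along the trajectory; the values m = k = 1 and the initial conditions are arbitrary choices.

(* Hamilton's equations xdot = dH/dp, pdot = -dH/dx for H = p^2/(2m) + k x^2/2 *)
m = 1; k = 1;
sol = First@NDSolve[{x'[t] == p[t]/m, p'[t] == -k x[t], x[0] == 1, p[0] == 0}, {x, p}, {t, 0, 20}];
Plot[Evaluate[(p[t]^2/(2 m) + k x[t]^2/2) /. sol], {t, 0, 20}]  (* the energy curve is flat at 0.5 *)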
1.4.1 Interaction between a charged particle and an electromagnetic<br />
field.<br />
We consider here a free particle with mass m and charge e in an electromagnetic field. The Hamiltonian is
H = px ẋ + py ẏ + pz ż − L (1.32)
= ẋ ∂L/∂ẋ + ẏ ∂L/∂ẏ + ż ∂L/∂ż − L. (1.33)
Our goal is to write this Hamiltonian in terms of momenta and coordinates.
For a charged particle in a field, the force acting on the particle is the Lorentz force. Here it is useful to introduce a vector and scalar potential and to work in cgs units:
F⃗ = (e/c) v⃗ × (∇⃗ × A⃗) − (e/c) ∂A⃗/∂t − e∇⃗φ.
The force in the x direction is given by
Fx = d(mẋ)/dt = (e/c)( ẏ ∂Ay/∂x + ż ∂Az/∂x ) − (e/c)( ẏ ∂Ax/∂y + ż ∂Ax/∂z + ∂Ax/∂t ) − e ∂φ/∂x
with the remaining components given by cyclic permutation. Since
dAx/dt = ∂Ax/∂t + ẋ ∂Ax/∂x + ẏ ∂Ax/∂y + ż ∂Ax/∂z,
we can write
Fx = −(e/c)( ∂Ax/∂t + ẋ ∂Ax/∂x + ẏ ∂Ax/∂y + ż ∂Ax/∂z ) + ∂/∂x( (e/c) v⃗·A⃗ − eφ )
= −(e/c) dAx/dt + ∂/∂x( (e/c) v⃗·A⃗ − eφ ).
Based upon this, we find that the Lagrangian is
L = (1/2)mẋ² + (1/2)mẏ² + (1/2)mż² + (e/c) v⃗·A⃗ − eφ
where φ is a velocity-independent and static potential.
Continuing on, the Hamiltonian is
H = (m/2)( ẋ² + ẏ² + ż² ) + eφ (1.34)
= (1/2m)( (mẋ)² + (mẏ)² + (mż)² ) + eφ. (1.35)
The velocities are derived from the Lagrangian via the canonical relation
p = ∂L/∂ẋ.
From this we find
mẋ = px − (e/c)Ax (1.36)
mẏ = py − (e/c)Ay (1.37)
mż = pz − (e/c)Az (1.38)
and the resulting Hamiltonian is
H = (1/2m)( (px − (e/c)Ax)² + (py − (e/c)Ay)² + (pz − (e/c)Az)² ) + eφ.
We see here an important concept relating the velocity and the momentum. In the absence <strong>of</strong> a<br />
vector potential, the velocity and the momentum are parallel. However, when a vector potential<br />
is included, the actual velocity <strong>of</strong> a particle is no longer parallel to its momentum and is in fact<br />
deflected by the vector potential.<br />
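To make the last point concrete (a small example that is not in the original notes), here is a Mathematica sketch in the symmetric gauge A⃗ = (−By/2, Bx/2, 0) for a uniform field along z, using units with e = c = m = 1; the particular momentum and position values are arbitrary choices.

(* kinetic velocity v = p - A(r) differs in direction from the canonical momentum p *)
b = 1.; avec[{x_, y_, z_}] := {-b y/2, b x/2, 0};
pvec = {0.3, 0.0, 0.1}; rvec = {1.0, 2.0, 0.0};
vvec = pvec - avec[rvec]   (* -> {1.3, -0.5, 0.1}, not parallel to pvec *)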
1.4.2 Time dependence <strong>of</strong> a dynamical variable<br />
One of the important applications of Hamiltonian mechanics is in the dynamical evolution of a variable which depends upon p and q, G(p, q). The total derivative of G is
dG/dt = ∂G/∂t + (∂G/∂q) q̇ + (∂G/∂p) ṗ.
From Hamilton's equations, we have the canonical definitions
q̇ = ∂H/∂p, ṗ = −∂H/∂q.
Thus,
dG/dt = ∂G/∂t + (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q) (1.39)
= ∂G/∂t + {G, H}, (1.40)
where {G, H} is called the Poisson bracket of the two dynamical quantities G and H:
{G, H} = (∂G/∂q)(∂H/∂p) − (∂G/∂p)(∂H/∂q).
We can also define a linear operator L as generating the Poisson bracket with the Hamiltonian:
LG = (1/i){G, H}
so that if G does not depend explicitly upon time,
G(t) = exp(iLt) G(0),
where exp(iLt) is the propagator which carries G(0) to G(t).
Also, note that if {G, H} = 0, then dG/dt = 0 so that G is a constant <strong>of</strong> the motion. This<br />
too, along with the construction <strong>of</strong> the Poisson bracket has considerable importance in the realm<br />
<strong>of</strong> quantum mechanics.<br />
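As an aside (not part of the original notes), the Poisson bracket is easy to experiment with symbolically; the minimal Mathematica sketch below checks {q, p} = 1 and evaluates {G, H} for the illustrative choices G = qp and H = p²/(2m) + kq²/2.

(* symbolic Poisson bracket for one degree of freedom *)
pb[f_, g_] := D[f, q] D[g, p] - D[f, p] D[g, q];
ham = p^2/(2 m) + k q^2/2;
pb[q, p]                 (* -> 1 *)
Simplify[pb[q p, ham]]   (* -> p^2/m - k q^2, i.e. q pdot + p qdot *)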
1.4.3 Virial Theorem<br />
Finally, we turn our attention to a concept which has played an important role in both quantum<br />
and classical mechanics. Consider a function G that is a product <strong>of</strong> linear momenta and<br />
coordinate,<br />
G = pq.
The time derivative is simply
dG/dt = q ṗ + p q̇.
Now, let's take a time average of both sides of this last equation:
⟨ d(pq)/dt ⟩ = lim_{T→∞} (1/T) ∫₀ᵀ ( d(pq)/dt ) dt (1.41)
= lim_{T→∞} (1/T) ∫₀ᵀ d(pq) (1.42)
= lim_{T→∞} (1/T) ( (pq)_T − (pq)₀ ). (1.43)
If the trajectories of the system are bounded, both p and q are periodic in time and are therefore finite. Thus, the average must vanish as T → ∞, giving
⟨ p q̇ + q ṗ ⟩ = 0. (1.44)
Since p q̇ = 2T and ṗ = F, we have
⟨2T⟩ = −⟨qF⟩. (1.45)
In cartesian coordinates this leads to
⟨2T⟩ = −⟨ Σᵢ xᵢ Fᵢ ⟩. (1.46)
For a conservative system F = −∇V . Thus, if we have a centro-symmetric potential given<br />
by V = Cr n , it is easy to show that<br />
〈2T 〉 = n〈V 〉.<br />
For the case <strong>of</strong> the Harmonic oscillator, n = 2 and 〈T 〉 = 〈V 〉. So, for example, if we have a<br />
total energy equal to kT in this mode, then 〈T 〉 + 〈V 〉 = kT and 〈T 〉 = 〈V 〉 = kT/2. Moreover,<br />
for the interaction between two opposite charges separated by r, n = −1 and<br />
〈2T 〉 = −〈V 〉.<br />
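As a quick numerical check (not in the original notes), the following Mathematica sketch time-averages T and V along a harmonic-oscillator trajectory with m = k = 1 and arbitrary initial conditions; both averages come out to roughly one quarter of the total energy, as the virial theorem requires.

(* time averages of kinetic and potential energy for H = p^2/2 + q^2/2 *)
sol = First@NDSolve[{q'[t] == p[t], p'[t] == -q[t], q[0] == 1, p[0] == 0}, {q, p}, {t, 0, 200}];
tavg[f_] := NIntegrate[f /. sol, {t, 0, 200}]/200;
{tavg[p[t]^2/2], tavg[q[t]^2/2]}   (* -> both approximately 0.25, so <T> = <V> *)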
Figure 1.3: Screen shot of using Mathematica to plot the phase plane for the harmonic oscillator. Here k/m = 1 and xo = 0.75.
Chapter 2<br />
Waves and Wavefunctions<br />
In the world of quantum physics, no phenomenon is a phenomenon until it is a recorded phenomenon.
– John Archibald Wheeler
The physical basis <strong>of</strong> quantum mechanics is<br />
1. That matter, such as electrons, always arrives at a point as a discrete chunk, but that the probability of finding a chunk at a specified position is like the intensity distribution of a wave.
2. The “quantum state” <strong>of</strong> a system is described by a mathematical object called a “wavefunction”<br />
or state vector and is denoted |ψ〉.<br />
3. The state |ψ⟩ can be expanded in terms of the basis states of a given vector space, {|φi⟩}, as
|ψ⟩ = Σᵢ |φi⟩⟨φi|ψ⟩ (2.1)
where ⟨φi|ψ⟩ denotes an inner product of the two vectors.
4. Observable quantities are associated with the expectation values of Hermitian operators, and the eigenvalues of such operators are always real.
5. If two operators commute, one can measure the two associated physical quantities simultaneously<br />
to arbitrary precision.<br />
6. The result of a physical measurement projects |ψ⟩ onto an eigenstate of the associated operator |φn⟩, yielding the measured value an with probability |⟨φn|ψ⟩|².
2.1 Position and Momentum Representation <strong>of</strong> |ψ〉<br />
Two common operators which we shall use extensively are the position and momentum operators.¹
¹The majority of this lecture comes from Cohen-Tannoudji, Chapter 1, and partly from Feynman & Hibbs.
The position operator acts on the state |ψ〉 to give the amplitude <strong>of</strong> the system to be at a<br />
given position:<br />
ˆx|ψ〉 = |x〉〈x|ψ〉 (2.2)<br />
= |x〉ψ(x) (2.3)<br />
We shall call ψ(x) the wavefunction <strong>of</strong> the system since it is the amplitude <strong>of</strong> |ψ〉 at point x. Here<br />
we can see that ψ(x) is an eigenstate <strong>of</strong> the position operator. We also define the momentum<br />
operator ˆp as a derivative operator:<br />
p̂ = −i¯h ∂/∂x. (2.4)
Thus,
p̂ψ(x) = −i¯h ψ′(x). (2.5)
Note that ψ ′ (x) �= ψ(x), thus an eigenstate <strong>of</strong> the position operator is not also an eigenstate <strong>of</strong><br />
the momentum operator.<br />
We can deduce this also from the fact that x̂ and p̂ do not commute. To see this, first consider
∂/∂x ( x f(x) ) = f(x) + x f′(x). (2.6)
Thus (using the shorthand ∂x for the partial derivative with respect to x),
[x̂, p̂] f(x) = −i¯h( x ∂x f(x) − ∂x( x f(x) ) ) (2.7)
= −i¯h( x f′(x) − f(x) − x f′(x) ) (2.8)
= i¯h f(x). (2.9)
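If you want to see this commutator come out of a symbolic calculation (an aside, not in the original notes), a short Mathematica sketch with a generic test function f and a symbolic ¯h does the job:

(* check [x, p] f(x) = I hbar f(x) with p acting as -I hbar d/dx *)
pop[f_] := -I hbar D[f, x];
Simplify[x pop[f[x]] - pop[x f[x]]]   (* -> I hbar f[x] *)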
What are the eigenstates <strong>of</strong> the ˆp operator? To find them, consider the following eigenvalue<br />
equation:<br />
p̂|φ(k)⟩ = k|φ(k)⟩. (2.10)
Inserting a complete set of position states using the idempotent operator
I = ∫ |x⟩⟨x| dx (2.11)
and using the "coordinate" representation of the momentum operator, we get
−i¯h ∂x φ(k, x) = k φ(k, x). (2.12)
Thus, the solution of this is (subject to normalization)
φ(k, x) = C exp(ikx/¯h) = ⟨x|φ(k)⟩. (2.13)
We can also use the |φ(k)⟩ = |k⟩ states as a basis for the state |ψ⟩ by writing
|ψ⟩ = ∫ dk |k⟩⟨k|ψ⟩ (2.14)
= ∫ dk |k⟩ ψ(k) (2.15)
where ψ(k) is related to ψ(x) via
ψ(k) = ⟨k|ψ⟩ = ∫ dx ⟨k|x⟩⟨x|ψ⟩ (2.16)
= C ∫ dx exp(ikx/¯h) ψ(x). (2.17)
This type of integral is called a "Fourier transform". There are a number of ways to define the normalization C when using this transform; for our purposes at the moment, we'll set C = 1/√(2π¯h) so that
ψ(x) = (1/√(2π¯h)) ∫ dk ψ(k) exp(−ikx/¯h) (2.18)
and
ψ(k) = (1/√(2π¯h)) ∫ dx ψ(x) exp(ikx/¯h). (2.19)
Using this choice <strong>of</strong> normalization, the transform and the inverse transform have symmetric forms<br />
and we only need to remember the sign in the exponential.<br />
2.2 The Schrödinger Equation<br />
Postulate 2.1 The quantum state <strong>of</strong> the system is a solution <strong>of</strong> the Schrödinger equation<br />
i¯h∂t|ψ(t)〉 = H|ψ(t)〉, (2.20)<br />
where H is the quantum mechanical analogue <strong>of</strong> the classical Hamiltonian.<br />
From classical mechanics, H is the sum of the kinetic and potential energy of a particle,
H = p²/(2m) + V(x). (2.21)
Thus, using the quantum analogues of the classical x and p, the quantum H is
H = p̂²/(2m) + V(x̂). (2.22)
To evaluate V(x̂) we need a theorem stating that a function of an operator is the function evaluated at the eigenvalue of the operator. The proof is straightforward: Taylor expand the function about some point. If
V(x) = V(0) + xV′(0) + (1/2)V″(0)x² + · · · (2.23)
then
V(x̂) = V(0) + x̂V′(0) + (1/2)V″(0)x̂² + · · · (2.24)
since for any operator
[f̂, f̂ᵖ] = 0 ∀ p. (2.25)
Thus, we have
⟨x|V(x̂)|ψ⟩ = V(x)ψ(x). (2.26)
So, in coordinate form, the Schrödinger equation is written as
i¯h ∂ψ(x, t)/∂t = ( −(¯h²/2m) ∂²/∂x² + V(x) ) ψ(x, t). (2.27)
2.2.1 Gaussian Wavefunctions
Let's assume that our initial state is a Gaussian in x with some initial momentum ko:
ψ(x, 0) = ( 2/(πa²) )^{1/4} exp(iko x) exp(−x²/a²). (2.28)
The momentum representation of this is
ψ(k, 0) = (1/√(2π¯h)) ∫ dx e^{−ikx} ψ(x, 0) (2.29)
= ( a²/(2π) )^{1/4} e^{−(k−ko)²a²/4}. (2.30)
In Fig. 2.1 we see a Gaussian wavepacket centered about x = 0 with ko = 10 and a = 1. For now we will use dimensionless units. The red and blue components correspond to the real and imaginary components of ψ and the black curve is |ψ(x)|². Notice that the wavefunction is pretty localized along the x axis.
In the next figure (Fig. 2.2) we have the momentum distribution of the wavefunction, ψ(k, 0). Again, we have chosen ko = 10. Notice that the center of the distribution is shifted to ko.
So, for f(x) = exp(−x²/b²), ∆x = b/√2. Thus, when x varies from 0 to ±∆x, f(x) is diminished by a factor of 1/√e. (∆x is the RMS deviation of f(x).)
For the Gaussian wavepacket:
∆x = a/2 (2.31)
∆k = 1/a (2.32)
or
∆p = ¯h/a. (2.33)
Thus, ∆x∆p = ¯h/2 for the initial wavefunction.
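For readers who want to verify the widths above (this check is not in the original notes), a short Mathematica sketch with ¯h = 1 and a kept symbolic integrates the probability density |ψ(x, 0)|² = √(2/(πa²)) e^{−2x²/a²} directly:

(* normalization and RMS width of the initial Gaussian packet *)
rho[x_] := Sqrt[2/(Pi a^2)] Exp[-2 x^2/a^2];
Integrate[rho[x], {x, -Infinity, Infinity}, Assumptions -> a > 0]              (* -> 1 *)
Sqrt[Integrate[x^2 rho[x], {x, -Infinity, Infinity}, Assumptions -> a > 0]]    (* -> a/2 *)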
Figure 2.1: Real (red), imaginary (blue) and absolute value (black) of the Gaussian wavepacket ψ(x).
Figure 2.2: Momentum-space distribution ψ(k).
2.2.2 Evolution <strong>of</strong> ψ(x)<br />
Now, let’s consider the evolution <strong>of</strong> a free particle. By a “free” particle, we mean a particle<br />
whose potential energy does not change, i.e. we set V(x) = 0 for all x and solve:
i¯h ∂ψ(x, t)/∂t = −(¯h²/2m) ∂²ψ(x, t)/∂x². (2.34)
This equation is actually easier to solve in k-space. Taking the Fourier transform,
i¯h ∂t ψ(k, t) = (k²/2m) ψ(k, t). (2.35)
Thus, the temporal solution of the equation is
ψ(k, t) = exp(−ik²t/(2m¯h)) ψ(k, 0). (2.36)
This is subject to some initial function ψ(k, 0). To get the coordinate x-representation of the solution, we can use the FT relations above:
ψ(x, t) = (1/√(2π¯h)) ∫ dk ψ(k, t) exp(−ikx/¯h) (2.37)
= ∫ dx′ ⟨x| exp(−ip̂²t/(2m¯h)) |x′⟩ ψ(x′, 0) (2.38)
= √( m/(2πi¯ht) ) ∫ dx′ exp( im(x − x′)²/(2¯ht) ) ψ(x′, 0) (2.39)
= ∫ dx′ Go(x, x′) ψ(x′, 0) (2.40)
(Homework: derive Go and show that Go is a solution of the free particle Schrödinger equation, HGo = i¯h∂tGo.) The function Go is called the "free particle propagator" or "Green's function" and tells us the amplitude for a particle to start off at x′ and end up at another point x at time t.
The sketch tells us that in order to get far away from the initial point in a time t we need a lot of energy (the wiggles get closer together, implying a higher Fourier component).
Here we see that the probability to find a particle at the initial point decreases with time. Since the period of oscillation (T) is the time required to increase the phase by 2π,
2π = mx²/(2¯ht) − mx²/(2¯h(t + T)) (2.41)
= ( mx²/(2¯ht²) ) T/(1 + T/t). (2.42)
Let ω = 2π/T and take the long time limit t ≫ T; we can estimate
ω ≈ (m/(2¯h)) (x/t)². (2.43)
Figure 2.3: Go for fixed t as a function of x.
Since the classical kinetic energy is given by E = (m/2)v², we obtain
E = ¯hω. (2.44)
Thus, the energy of the wave is proportional to the frequency of oscillation.
We can evaluate the evolution in x using either the Go we derived above, or by taking the<br />
FT <strong>of</strong> the wavefunction evolving in k-space. Recall that the solution in k-space was<br />
ψ(k, t) = exp(−ik 2 /(2m)t/¯h)ψ(k, 0) (2.45)<br />
Assuming a Gaussian form for ψ(k) as above,
ψ(x, t) = ( √a/(2π)^{3/4} ) ∫ dk e^{−a²(k−ko)²/4} e^{i(kx − ω(k)t)} (2.46)
where ω(k) is the dispersion relation for a free particle:
ω(k) = ¯hk²/(2m). (2.47)
Cranking through the integral:
ψ(x, t) = ( 2a²/π )^{1/4} ( e^{iφ} / ( a⁴ + 4¯h²t²/m² )^{1/4} ) e^{iko x} exp( −(x − ¯hko t/m)² / (a² + 2i¯ht/m) ) (2.48)
where φ = −θ − ¯hko²t/(2m) and tan 2θ = 2¯ht/(ma²).
Likewise, for the amplitude:
|ψ(x, t)|² = ( 1/(2π∆x(t)²) )^{1/2} exp( −(x − v◦t)²/(2∆x(t)²) ) (2.49)
Figure 2.4: Evolution <strong>of</strong> a free particle wavefunction. In this case we have given the initial state<br />
a kick in the +x direction. Notice that as the system moves, the center moves at a constant rate<br />
where as the width <strong>of</strong> the packet constantly spreads out over time.<br />
Where I define
∆x(t) = (a/2) √( 1 + 4¯h²t²/(m²a⁴) ) (2.50)
as the time-dependent RMS width of the wave and the group velocity:
vo = ¯hko/m. (2.51)
Now, since ∆p = ¯h∆k = ¯h/a is a constant for all time, the uncertainty relation becomes
∆x(t)∆p ≥ ¯h/2 (2.52)
corresponding to the particle's wavefunction becoming more and more diffuse as it evolves in time.
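To visualize this spreading (an illustration that is not in the original notes), the following Mathematica sketch plots |ψ(x, t)|² from Eqs. 2.49-2.51 at a few times, with ¯h = m = 1 and the same a = 1, ko = 10 used in the figures above:

(* spreading free-particle Gaussian: the center moves at vo while dx(t) grows *)
a = 1; ko = 10; vo = ko;
dx[t_] := a/2 Sqrt[1 + 4 t^2/a^4];
rho[x_, t_] := Exp[-(x - vo t)^2/(2 dx[t]^2)]/Sqrt[2 Pi dx[t]^2];
Plot[Evaluate[Table[rho[x, t], {t, 0, 0.4, 0.1}]], {x, -3, 8}, PlotRange -> All]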
2.3 Particle in a Box<br />
2.3.1 Infinite Box<br />
The Mathematica handout shows how one can use Mathematica to set up and solve some simple<br />
problems on the computer. (One good class problem would be to use Mathematica to carry<br />
out the symbolic manipulations for a useful or interesting problem and/or to solve the problem<br />
numerically.)<br />
The potential we’ll work with for this example consists <strong>of</strong> two infinitely steep walls placed<br />
at x = ℓ and x = 0 such that between the two walls, V (x) = 0. Within this region, we seek<br />
solutions to the differential equation<br />
∂ 2 xψ(x) = −2mE/¯h 2 ψ(x). (2.53)<br />
The solutions <strong>of</strong> this are plane waves traveling to the left and to the right,<br />
ψ(x) = A exp(−ikx) + B exp(+ikx) (2.54)<br />
The coefficients A and B we’ll have to determine. k is determined by substitution back into the<br />
differential equation<br />
ψ ′′ (x) = −k 2 ψ(x) (2.55)<br />
Thus, k 2 = 2mE/¯h 2 , or ¯hk = √ 2mE. Let’s work in units in which ¯h = 1 and me = 1. Energy in<br />
these units is the Hartree (≈ 27.2 eV). Posted on the web-page is a file (a C header file) which has a number of useful conversion factors.
Since ψ(x) must vanish at x = 0 and x = ℓ,
A + B = 0 (2.56)
A exp(ikℓ) + B exp(−ikℓ) = 0. (2.57)
We can see immediately that A = −B and that the solutions must correspond to a family of sine functions:
ψ(x) = A sin(nπx/ℓ). (2.58)
Just as a check,
ψ(ℓ) = A sin(nπℓ/ℓ) = A sin(nπ) = 0. (2.59)
To obtain the coefficient, we simply require that the wavefunctions be normalized over the range x = [0, ℓ]:
∫₀^ℓ sin²(nπx/ℓ) dx = ℓ/2. (2.60)
Thus, the normalized solutions are
ψn(x) = √(2/ℓ) sin(nπx/ℓ). (2.61)
The eigenenergies are obtained by applying the Hamiltonian to the wavefunction solution,
En ψn(x) = −(¯h²/2m) ∂²x ψn(x) (2.62)
= ( ¯h²n²π²/(2mℓ²) ) ψn(x). (2.63)
Thus we can write En as a function of n:
En = ( ¯h²π²/(2mℓ²) ) n² (2.64)
for n = 0, 1, 2, .... What about the case where n = 0? Clearly it's an allowed solution of the Schrödinger equation. However, we also required that the probability to find the particle anywhere must be 1. Thus, the n = 0 solution cannot be permitted.
Note also that the cosine functions are also allowed solutions. However, the restriction ψ(0) = 0 and ψ(ℓ) = 0 discounts these solutions.
In Fig. 2.5 we show the first few eigenstates for an electron trapped in a well <strong>of</strong> length a = π.<br />
The potential is shown in gray. Notice that the number <strong>of</strong> nodes increases as the energy increases.<br />
In fact, one can determine the state <strong>of</strong> the system by simply counting nodes.<br />
What about orthonormality? We stated that the solutions of the eigenvalue problem form an orthonormal basis. In Dirac notation we can write
⟨ψn|ψm⟩ = ∫ dx ⟨ψn|x⟩⟨x|ψm⟩ (2.65)
= ∫₀^ℓ dx ψn*(x) ψm(x) (2.66)
= (2/ℓ) ∫₀^ℓ dx sin(nπx/ℓ) sin(mπx/ℓ) (2.67)
= δnm. (2.68)
Figure 2.5: Particle in a box states.
Thus, we can see in fact that these solutions do form a complete set <strong>of</strong> orthogonal states on<br />
the range x = [0, ℓ]. Note that it’s important to specify “on the range...” since clearly the sin<br />
functions are not a set <strong>of</strong> orthogonal functions over the entire x axis.<br />
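As a quick symbolic check of Eq. 2.68 (not part of the original notes), the Mathematica sketch below integrates products of the first few box states over [0, l] (with l written as a symbol) and returns the identity matrix:

(* orthonormality of the particle-in-a-box states on [0, l] *)
psi[n_, x_] := Sqrt[2/l] Sin[n Pi x/l];
Table[Integrate[psi[n, x] psi[m, x], {x, 0, l}, Assumptions -> l > 0], {n, 1, 3}, {m, 1, 3}]
(* -> {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}} *)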
2.3.2 Particle in a finite Box<br />
Now, suppose our box is finite. That is,
V(x) = −Vo for −a < x < a, and V(x) = 0 otherwise. (2.69)
Let's consider the case for E < 0. The case E > 0 will correspond to scattering solutions. Inside the well, the wavefunction oscillates, much like in the previous case,
ψW(x) = A sin(ki x) + B cos(ki x) (2.70)
where ki comes from the equation for the momentum inside the well,
¯h ki = √( 2m(En + Vo) ). (2.71)
We actually have two classes of solution: a symmetric solution when A = 0 and an antisymmetric solution when B = 0.
Outside the well the potential is 0 and we have the solutions
ψO(x) = c1 e^{ρx} and c2 e^{−ρx}. (2.72)
We will choose the coefficients c1 and c2 so as to create two cases, ψL and ψR, on the left and right hand sides of the well. Also,
¯hρ = √(−2mE). (2.73)
Thus, we have three pieces of the full solution which we must hook together:
ψL(x) = C e^{ρx} for x < −a (2.74)
ψR(x) = D e^{−ρx} for x > a (2.75)
ψW(x) = A sin(ki x) + B cos(ki x) inside the well. (2.77)
To find the coefficients, we need to set up a series of simultaneous equations by applying the conditions that (a) the wavefunction be a continuous function of x and (b) it have continuous first derivatives with respect to x. Applying the two matching conditions at x = −a and x = a gives
ψL(−a) − ψW(−a) = 0 (2.78)
ψR(a) − ψW(a) = 0 (2.80)
ψ′L(−a) − ψ′W(−a) = 0 (2.82)
ψ′R(a) − ψ′W(a) = 0 (2.84)
The final results are (after the chalk dust settles):
1. For A = 0. B = D sec(aki)e −aρ and C = D. (Symmetric Solution)<br />
2. For B = 0, A = C csc(aki)e −aρ and C = −D. (Antisymmetric Solution)<br />
So, now we have all the coefficients expressed in terms <strong>of</strong> D, which we can determine by normalization<br />
(if so inclined). We’ll not do that integral, as it is pretty straightforward.<br />
For the energies, we substitute the symmetric and antisymmetric solutions into the eigenvalue equation and obtain
ρ cos(a ki) = ki sin(a ki) (2.85)
or
ρ/ki = tan(a ki), (2.86)
i.e.
√( E/(Vo − E) ) = tan( a√(2m(Vo − E))/¯h ) (2.87)
for the symmetric case, and
ρ sin(a ki) = −ki cos(a ki) (2.88)
or
ρ/ki = −cot(a ki), (2.89)
i.e.
√( E/(Vo − E) ) = −cot( a√(2m(Vo − E))/¯h ) (2.91)
for the antisymmetric case.
Substituting the expressions for ki and ρ into the final results for each case we find a set of matching conditions: for the symmetric case, eigenvalues occur whenever the two curves
√(1 − Vo/E) = tan( a√(2m(E − Vo))/¯h ) (2.92)
and for the anti-symmetric case,
√(1 − Vo/E) = cot( a√(2m(E − Vo))/¯h ) (2.93)
cross. These are called "transcendental" equations and closed form solutions are generally impossible to obtain. Graphical solutions are helpful. In Fig. 2.6 we show the graphical solution to the transcendental equations for an electron in a Vo = −10 well of width a = 2. The black dots indicate the presence of two bound states, one symmetric and one anti-symmetric, at E = 2.03 and 3.78 respectively.
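The graphical solution is easy to reproduce (this snippet is not in the original notes); the Mathematica sketch below plots the left- and right-hand sides of Eqs. 2.92-2.93 for the same parameters as Fig. 2.6, with ¯h = m = 1, Vo = −10, a = 2, and the bound states sit where the curves cross:

(* graphical solution of the transcendental matching conditions *)
vo = -10; a = 2;
Plot[{Sqrt[1 - vo/e], Tan[a Sqrt[2 (e - vo)]], Cot[a Sqrt[2 (e - vo)]]}, {e, 0.1, 10},
 PlotRange -> {-4, 4}]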
2.3.3 Scattering states and resonances.<br />
Now let’s take the same example as above, except look at states for which E > 0. In this case, we<br />
have to consider where the particles are coming from and where they are going. We will assume<br />
that the particles are emitted with precise energy E towards the well from −∞ and travel from<br />
left to right. As in the case above we have three distinct regions,<br />
1. x < −a where ψ(x) = e^{+ik1 x} + R e^{−ik1 x} = ψL(x)
2. −a ≤ x ≤ +a where ψ(x) = Ae −ik2x + Be +ik2x = ψW (x)<br />
Figure 2.6: Graphical solution to the transcendental equations for an electron in a truncated hard well of depth Vo = 10 and width a = 2. The short-dashed blue curve corresponds to the symmetric case and the long-dashed blue curve corresponds to the antisymmetric case. The red line is √(1 − Vo/E). Bound state solutions are such that the red and blue curves cross.
3. x > +a where ψ(x) = T e^{+ik1 x} = ψR(x)
where k1 = √(2mE)/¯h is the momentum outside the well, k2 = √(2m(E − V))/¯h is the momentum inside the well, and A, B, T, and R are coefficients we need to determine. We also have the matching conditions:
ψL(−a) − ψW(−a) = 0
ψ′L(−a) − ψ′W(−a) = 0
ψR(a) − ψW(a) = 0
ψ′R(a) − ψ′W(a) = 0
This can be solved by hand; however, Mathematica makes it easy. The results are a series of rules which we can use to determine the transmission and reflection coefficients:
T → −4 e^{−2iak1 + 2iak2} k1 k2 / D,
A → 2 e^{−iak1 + 3iak2} k1 (k1 − k2) / D,
B → −2 e^{−iak1 + iak2} k1 (k1 + k2) / D,
R → ( −1 + e^{4iak2} )( k1² − k2² ) / ( e^{2iak1} D ),
where D = −k1² + e^{4iak2} k1² − 2k1k2 − 2e^{4iak2} k1k2 − k2² + e^{4iak2} k2².
Figure 2.7: Transmission (blue) and reflection (red) coefficients for an electron scattering over a square well (V = −40 and a = 1).
The R and T coefficients are related to the ratios of the reflected and transmitted flux to the incoming flux. The current operator is given by
j(x) = (¯h/2mi)( ψ* ∇ψ − ψ ∇ψ* ). (2.94)
Inserting the wavefunctions above yields:
jin = ¯hk1/m
jref = −¯hk1 R²/m
jtrans = ¯hk1 T²/m
Thus, R² = −jref/jin and T² = jtrans/jin. In Fig. 2.7 we show the transmission and reflection coefficients for an electron passing over a well of depth V = −40, a = 1, as a function of incident energy E.
Notice that the transmission and reflection coefficients undergo a series of oscillations as the incident energy is increased. These are due to resonance states which lie in the continuum. The condition for these states is that an integer number of de Broglie half-wavelengths of the wave in the well matches the total length of the well,
nλ/2 = 2a.
Fig. 2.8 shows the transmission coefficient as a function of both incident energy and the well depth (or height) over a wide range, indicating that resonances can occur for both wells and bumps. Figure 2.9 shows various scattering wavefunctions for on- and off-resonance cases. Lastly, Fig. 2.10 shows an Argand plot of both complex components of ψ.
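Rather than quoting the closed-form rules, one can let Mathematica solve the four matching conditions numerically (this sketch is not in the original notes); with ¯h = m = 1, V = −40 and a = 1 it reproduces the oscillatory transmission of Fig. 2.7. The symbols rr, aa, bb, tt are just internal names for R, A, B, T.

(* numerical transmission coefficient for scattering over a square well *)
v = -40; a = 1;
trans[e_?NumericQ] := Module[{k1 = Sqrt[2. e], k2 = Sqrt[2. (e - v)], x, rr, aa, bb, tt, wL, wW, wR, sol},
  wL = Exp[I k1 x] + rr Exp[-I k1 x];      (* incident + reflected, x < -a *)
  wW = aa Exp[-I k2 x] + bb Exp[I k2 x];   (* inside the well *)
  wR = tt Exp[I k1 x];                     (* transmitted, x > a *)
  sol = First@Solve[{(wL /. x -> -a) == (wW /. x -> -a), (D[wL, x] /. x -> -a) == (D[wW, x] /. x -> -a),
      (wR /. x -> a) == (wW /. x -> a), (D[wR, x] /. x -> a) == (D[wW, x] /. x -> a)}, {rr, aa, bb, tt}];
  Abs[tt /. sol]^2];
Plot[trans[e], {e, 0.5, 40}, PlotRange -> {0, 1.05}]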
Figure 2.8: Transmission coefficient for a particle passing over a bump. Here we have plotted T as a function of V and incident energy En. The oscillations correspond to resonance states which occur as the particle passes over the well (for V < 0) or bump (V > 0).
2.3.4 Application: <strong>Quantum</strong> Dots<br />
One <strong>of</strong> the most active areas <strong>of</strong> research in s<strong>of</strong>t condensed matter is that <strong>of</strong> designing physical<br />
systems which can confine a quantum state in some controllable way. The idea <strong>of</strong> engineering a<br />
quantum state is extremely appealing and has numerous technological applications from small<br />
logic gates in computers to optically active materials for biomedical applications. The basic<br />
physics <strong>of</strong> these materials is relatively simple and we can use the basic ideas presented in this<br />
chapter. The basic idea is to layer a series <strong>of</strong> materials such that electrons can be trapped in a<br />
geometrically confined region. This can be accomplished by insulator-metal-insulator layers and<br />
etching, creating disclinations in semiconductors, growing semi-conductor or metal clusters, and<br />
so on. A quantum dot can even be a defect site.<br />
We will assume through out that our quantum well contains a single electron so that we can<br />
treat the system as simply as possible. For a square or cubic quantum well, energy levels are<br />
simply those <strong>of</strong> an n-dimensional particle in a box. For example for a three dimensional system,<br />
E_{nx,ny,nz} = ( ¯h²π²/2m )( (nx/Lx)² + (ny/Ly)² + (nz/Lz)² ) (2.95)
where Lx, Ly, and Lz are the lengths of the box and m is the mass of an electron.
The density of states is the number of energy levels per unit energy. If we take the box to be
Figure 2.9: Scattering waves for a particle passing over a well. In the top graphic, the particle is partially reflected from the well (V < 0); in the bottom graphic, the particle passes over the well with a slightly different energy than above, this time with little reflection.
a cube, Lx = Ly = Lz = L, we can relate n to the radius of a sphere and write the density of states as
ρ(n) = 4π²n² (dn/dE) = 4π²n² (dE/dn)^{−1}.
Thus, for a 3D cube, the density of states is
ρ(n) = ( 4mL²/(π¯h²) ) n,
i.e. for a three dimensional cube, the density of states increases as n and hence as E^{1/2}.
Figure 2.10: Argand plot of a scattering wavefunction passing over a well. (Same parameters as in the top figure in Fig. 2.9.)
Note that the scaling of the density of states with energy depends strongly upon the dimensionality of the system. For example, in one dimension
ρ(n) = ( 2mL²/(¯h²π²) )( 1/n )
and in two dimensions
ρ(n) = const.
The reason for this lies in the way the volume element for linear, circular, and spherical integration<br />
scales with radius n. Thus, measuring the density <strong>of</strong> states tells us not only the size <strong>of</strong> the system,<br />
but also its dimensionality.<br />
We can generalize the results here by realizing that the volume of a d dimensional sphere in k space is given by
Vd = k^d π^{d/2} / Γ(1 + d/2)
where Γ(x) is the gamma function. The total number of states per unit volume in a d-dimensional space is then
nk = 2 Vd/(2π²)
and the density is then the number of states per unit energy. The relation between energy and k is
Ek = ¯h²k²/(2m),
i.e.
k = √(2Ek m)/¯h
which gives
ρd(ɛ) = 2^{d/2−1} d π^{d/2−2} ( √(mɛ)/¯h )^d / ( ɛ Γ(1 + d/2) ).
Figure 2.11: Density of states for a 1-, 2-, and 3-dimensional space.
A quantum well is typically constructed so that the system is confined in one dimension and unconfined in the other two. Thus, a quantum well will typically have discrete states only in the confined direction. The density of states for this system will be identical to that of the 3-dimensional system at energies where the k vectors coincide. If we take the thickness to be s, then the density of states for the quantum well is
ρ = (L/s) ρ₂(E) ⌊ L ρ₃(E) / ( L ρ₂(E)/s ) ⌋
where ⌊x⌋ is the "floor" function, which means take the largest integer less than x. This is plotted in Fig. 2.12 and the stair-step DOS is indicative of the embedded confined structure.
Next, we consider a quantum wire <strong>of</strong> thickness s along each <strong>of</strong> its 2 confined directions. The<br />
DOS along the unconfined direction is one-dimensional. As above, the total DOS will be identical<br />
Figure 2.12: Density of states for a quantum well (left) and a quantum wire (right) compared to a 3d body. Here L = 5 and s = 2 for comparison.
to the 3D case when the wavevectors coincide. Increasing the radius of the wire eventually leads to the case where the steps decrease and merge into the 3D curve.
ρ = (L/s)² ρ₁(E) ⌊ L² ρ₂(E) / ( L² ρ₁(E)/s ) ⌋
For a spherical dot, we consider the case in which the radius of the quantum dot is small enough to support discrete rather than continuous energy levels. In a later chapter we will derive this result in more detail; for now we consider just the results. First, an electron in a spherical dot obeys the Schrödinger equation
−(¯h²/2m) ∇²ψ = Eψ (2.96)
where ∇² is the Laplacian operator in spherical coordinates,
∇² = (1/r) ∂²/∂r² (r ·) + ( 1/(r² sin θ) ) ∂/∂θ ( sin θ ∂/∂θ ) + ( 1/(r² sin²θ) ) ∂²/∂φ².
The solution of the Schrödinger equation is subject to the boundary condition that for r ≥ R, ψ(r) = 0, where R is the radius of the sphere. The solutions are given in terms of the spherical Bessel functions, jl(r), and spherical harmonic functions, Ylm:
ψnlm = ( 2^{1/2}/R^{3/2} ) ( jl(αr/R)/j_{l+1}(α) ) Ylm(Ω), (2.97)
with energy
E = ¯h²α²/(2mR²). (2.98)
Note that the spherical Bessel functions (of the first kind) are related to the Bessel functions via
jl(x) = √( π/(2x) ) J_{l+1/2}(x). (2.99)
Figure 2.13: Spherical Bessel functions j0, j1, and j2 (red, blue, green).
The first few of these are
j0(x) = sin x / x (2.100)
j1(x) = sin x / x² − cos x / x (2.101)
j2(x) = ( 3/x³ − 1/x ) sin x − ( 3/x² ) cos x (2.102)
jn(x) = (−1)ⁿ xⁿ ( (1/x) d/dx )ⁿ j0(x) (2.103)
where the last line provides a way to generate jn from j0.
The α's appearing in the wavefunction and in the energy expression are determined by the boundary condition that ψ(R) = 0. Thus, for the lowest energy state we require
j0(α) = 0, (2.104)
i.e. α = π. For the next state (l = 1),
j1(α) = sin α / α² − cos α / α = 0. (2.105)
This can be solved to give α = 4.4934. These correspond to where the spherical Bessel functions pass through zero. The first 6 of these are 3.14159, 4.49341, 5.76346, 6.98793, 8.18256, 9.35581. These correspond to where the first zeros occur and give the condition for the radial quantization, n = 1, with angular momentum l = 0, 1, 2, 3, 4, 5. There are more zeros, and these correspond to the cases where n > 1.
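These zeros are easy to generate (an aside, not in the original notes): the Mathematica sketch below finds the first zero of jl for l = 0 through 5 with FindRoot (the starting guesses 3 + 1.3 l are just rough estimates) and converts them to dot energies E = ¯h²α²/(2mR²) in units with ¯h = m = 1 for R = 0.5.

(* first zeros of the spherical Bessel functions and the corresponding dot energies *)
alphas = Table[x /. FindRoot[SphericalBesselJ[l, x] == 0, {x, 3 + 1.3 l}], {l, 0, 5}]
(* -> {3.14159, 4.49341, 5.76346, 6.98793, 8.18256, 9.35581} *)
rq = 0.5; energies = alphas^2/(2 rq^2)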
In the next set of figures (Fig. 2.14) we look at the radial wavefunctions for an electron in a 0.5 Å quantum dot. First, consider the cases (n, l) = (1, 0) and (1, 1). In both cases the wavefunctions vanish at the radius of the dot. The radial probability distribution function (PDF) is given by P = r²|ψnl(r)|². Note that increasing the angular momentum l from 0 to 1 causes the electron's most probable position to shift outwards. This is due to the centrifugal force arising from the angular motion of the electron. For the (n, l) = (2, 0) and (2, 1) states, we have 1 node in the system and two peaks in the PDF functions.
Figure 2.14: Radial wavefunctions (left column) and corresponding PDFs (right column) for an electron in an R = 0.5 Å quantum dot. The upper two correspond to (n, l) = (1, 0) (solid) and (n, l) = (1, 1) (dashed) while the lower correspond to (n, l) = (2, 0) (solid) and (n, l) = (2, 1) (dashed).
2.4 Tunneling and transmission in a 1D chain<br />
In this example, we are going to generalize the ideas presented here and look at what happens if<br />
we discretize the space in which a particle can move. This happens physically when we consider<br />
what happens when a particle (eg. an electron) can hop from one site to another. If an electron<br />
is on a given site, it has a certain energy ε to be there and it takes energy β to move the electron<br />
from the site to its neighboring site. We can write the Schrödinger equation for this system as<br />
ujε + βuj+1 + βuj−1 = Euj.<br />
for the case where the energy depends upon where the electron is located. If the chain is infinite,<br />
we can write uj = T e ikdj and find that the energy band goes as E = ε + 2β cos(kd) where k is<br />
now the momentum <strong>of</strong> the electron.<br />
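As a numerical aside (not part of the original notes), one can diagonalize a finite chain directly in Mathematica and see that its eigenvalues fill in the band E = ε + 2β cos(kd); the values ε = 0, β = −1 and a 100-site chain are arbitrary choices.

(* tight-binding chain: nearest-neighbor Hamiltonian matrix and its spectrum *)
n = 100; eps = 0; beta = -1;
h = SparseArray[{{i_, i_} -> eps, {i_, j_} /; Abs[i - j] == 1 -> beta}, {n, n}];
Sort[Eigenvalues[Normal[N[h]]]]   (* all eigenvalues lie between eps - 2 Abs[beta] and eps + 2 Abs[beta] *)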
2.5 Summary<br />
We’ve covered a lot <strong>of</strong> ground. We now have enough tools at hand to begin to study some physical<br />
systems. The traditional approach to studying quantum mechanics is to progressively solve a series of differential equations related to physical systems (harmonic oscillators, angular momentum,
hydrogen atom, etc...). We will return to those models in a week or so. Next week, we’re going to<br />
look at 2 and 3 level systems using both time dependent and time-independent methods. We’ll<br />
develop a perturbative approach for computing the transition amplitude between states. We will<br />
also look at the decay of a state when it's coupled to a continuum. These are useful models for
a wide variety <strong>of</strong> phenomena. After this, we will move on to the harmonic oscillator.<br />
2.6 Problems and Exercises<br />
Exercise 2.1 1. Derive the expression for
Go(x, x′) = ⟨x| exp(−ihot/¯h)|x′⟩ (2.106)
where ho is the free particle Hamiltonian,
ho = −(¯h²/2m) ∂²/∂x². (2.107)
2. Show that Go is a solution of the free particle Schrödinger equation
i¯h∂tGo(t) = hoGo(t). (2.108)
Exercise 2.2 Show that the normalization <strong>of</strong> a wavefunction is independent <strong>of</strong> time.<br />
Solution:<br />
i∂t〈ψ(t)|ψ(t)〉 = (i〈 ˙ ψ(t)|)(|ψ(t)〉) + (〈ψ(t)|)(i| ˙ ψ(t)〉) (2.109)<br />
= −〈ψ(t)| ˆ H † |ψ(t)〉 + 〈ψ(t)| ˆ H|ψ(t)〉 (2.110)<br />
= −〈ψ(t)| ˆ H|ψ(t)〉 + 〈ψ(t)| ˆ H|ψ(t)〉 = 0 (2.111)<br />
Exercise 2.3 Compute the bound state solutions (E < 0) for a square well of depth Vo where
V(x) = −Vo for −a/2 ≤ x ≤ a/2, and 0 otherwise. (2.112)
1. How many energy levels are supported by a well of width a?
2. Show that a very narrow well can support only 1 bound state, and that this state is an even function of x.
3. Show that the energy of the lowest bound state is
E ≈ mVo²a²/(2¯h²). (2.113)
4. Show that as
ρ = √(−2mE/¯h²) → 0 (2.114)
the probability of finding the particle inside the well vanishes.
Exercise 2.4 Consider a particle with the potential
V(x) = 0 for x > a, −Vo for 0 ≤ x ≤ a, and ∞ for x < 0. (2.115)
1. Let φ(x) be a stationary state. Show that φ(x) can be extended to give an odd wavefunction<br />
corresponding to a stationary state <strong>of</strong> the symmetric well <strong>of</strong> width 2a (i.e the one studied<br />
above) and depth Vo.<br />
2. Discuss with respect to a and Vo the number <strong>of</strong> bound states and argue that there is always<br />
at least one such state.<br />
3. Now turn your attention toward the E > 0 states <strong>of</strong> the well. Show that the transmission<br />
<strong>of</strong> the particle into the well region vanishes as E → 0 and that the wavefunction is perfectly<br />
reflected <strong>of</strong>f the sudden change in potential at x = a.<br />
Exercise 2.5 Which of the following are eigenfunctions of the kinetic energy operator
T̂ = −(¯h²/2m) ∂²/∂x²: (2.116)
e^x, x², xⁿ, 3 cos(2x), sin(x) + cos(x), e^{−ikx},
f(x − x′) = ∫_{−∞}^{∞} dk e^{−ik(x−x′)} e^{−ik²/(2m)}? (2.117)
Solution Going in order:
1. e^x
2. x²
3. xⁿ
4. 3 cos(2x)
5. sin(x) + cos(x)
6. e^{−ikx}
Exercise 2.6 Which of the following would be acceptable one dimensional wavefunctions for a bound particle (upon normalization): f(x) = e^{−x}, f(x) = e^{−x²}, f(x) = x e^{−x²}, and
f(x) = e^{−x²} for x ≥ 0, 2e^{−x²} for x < 0? (2.118)
Solution In order:
1. f(x) = e^{−x}
2. f(x) = e^{−x²}
3. f(x) = x e^{−x²}
4. f(x) = e^{−x²} for x ≥ 0, 2e^{−x²} for x < 0
Exercise 2.7 For a one dimensional problem, consider a particle with wavefunction
ψ(x) = N exp(ipo x/¯h) / √(x² + a²)
where a and po are real constants and N the normalization.
1. Determine N so that ψ(x) is normalized.
∫_{−∞}^{∞} dx |ψ(x)|² = N² ∫_{−∞}^{∞} dx/(x² + a²) (2.119)
= N² π/a. (2.120)
Thus ψ(x) is normalized when
N = √(a/π). (2.121)
2. The position of the particle is measured. What is the probability of finding a result between −a/√3 and +a/√3?
∫_{−a/√3}^{+a/√3} dx |ψ(x)|² = (a/π) ∫_{−a/√3}^{+a/√3} dx/(x² + a²) (2.124)
= (1/π) tan⁻¹(x/a) |_{−a/√3}^{+a/√3} (2.125)
= 1/3. (2.126)
3. Compute the mean value of the position of a particle which has ψ(x) as its wavefunction.
⟨x⟩ = (a/π) ∫_{−∞}^{∞} dx x/(x² + a²) (2.127)
= 0. (2.128)
Exercise 2.8 Consider the Hamiltonian of a particle in a 1 dimensional well given by
H = (1/2m) p̂² + x̂² (2.129)
where x̂ and p̂ are position and momentum operators. Let |φn⟩ be a solution of
H|φn⟩ = En|φn⟩ (2.130)
for n = 0, 1, 2, · · ·. Show that
⟨φn|p̂|φm⟩ = αnm ⟨φn|x̂|φm⟩ (2.131)
where αnm is a coefficient depending upon En − Em. Compute αnm. (Hint: you will need to use the commutation relations of [x̂, H] and [p̂, H] to get this.) Finally, from all this, deduce that
Σm (En − Em)² |⟨φn|x̂|φm⟩|² = (¯h²/m²) ⟨φn|p̂²|φn⟩ (2.132)
Exercise 2.9 The state space of a certain physical system is three-dimensional. Let |u1⟩, |u2⟩, and |u3⟩ be an orthonormal basis of the space in which the kets |ψ1⟩ and |ψ2⟩ are defined by
|ψ1⟩ = (1/√2)|u1⟩ + (i/2)|u2⟩ + (1/2)|u3⟩ (2.133)
|ψ2⟩ = (1/√3)|u1⟩ + (i/√3)|u3⟩. (2.134)
1. Are the states normalized?
2. Calculate the matrices ρ1 and ρ2 representing, in the {|ui⟩} basis, the projection operators onto |ψ1⟩ and |ψ2⟩. Verify that these matrices are Hermitian.
Exercise 2.10 Let ψ(r) = ψ(x, y, z) be the normalized wavefunction of a particle. Express in terms of ψ(r) the probability for:
1. a measurement along the x-axis to yield a result between x1 and x2;
2. a measurement of the momentum component px to yield a result between p1 and p2;
3. simultaneous measurements of x and pz to yield x1 ≤ x ≤ x2 and pz > 0;
4. simultaneous measurements of px, py, and pz to yield
p1 ≤ px ≤ p2 (2.135)
p3 ≤ py ≤ p4 (2.136)
p5 ≤ pz ≤ p6. (2.137)
Show that this result is equal to the result of part 2 when p3, p5 → −∞ and p4, p6 → +∞.
Exercise 2.11 Consider a particle <strong>of</strong> mass m whose potential energy is<br />
V (x) = −α(δ(x + l/2) + δ(x − l/2))<br />
1. Calculate the bound states of the particle, setting
E = −¯h²ρ²/(2m).
Show that the possible energies are given by
e^{−ρl} = ±( 1 − 2ρ/µ )
where µ = 2mα/¯h². Give a graphic solution of this equation.
(a) The Ground State. Show that the ground state is even about the origin and that its energy, ES, is less than the energy of the bound state of a particle in a single δ-function potential, −EL. Interpret this physically. Plot the corresponding wavefunction.
(b) Excited State. Show that when l is greater than some value (which you need to determine),<br />
there exists an odd excited state <strong>of</strong> energy EA with energy greater than −EL.<br />
Determine and plot the corresponding wavefunction.<br />
(c) Explain how the preceding calculations enable us to construct a model for an ionized
diatomic molecule, eg. H + 2 , whose nuclei are separated by l. Plot the energies <strong>of</strong> the<br />
two states as functions <strong>of</strong> l, what happens as l → ∞ and l → 0?<br />
(d) If we take Coulombic repulsion <strong>of</strong> the nuclei into account, what is the total energy<br />
<strong>of</strong> the system? Show that a curve which gives the variation with respect to l <strong>of</strong> the<br />
energies thus obtained enables us to predict in certain cases the existence <strong>of</strong> bound<br />
states <strong>of</strong> H + 2 and to determine the equilibrium bond length.<br />
2. Calculate the reflection and transmission coefficients for this system. Plot R and T as<br />
functions <strong>of</strong> l. Show that resonances occur when l is an integer multiple <strong>of</strong> the de Broglie<br />
wavelength <strong>of</strong> the particle. Why?<br />
Chapter 3<br />
Semi-Classical <strong>Quantum</strong> <strong>Mechanics</strong><br />
Good actions ennoble us, and we are the sons <strong>of</strong> our own deeds.<br />
–Miguel de Cervantes<br />
The use of classical mechanical analogs for quantum behavior holds a long and proud tradition in
the development and application <strong>of</strong> quantum theory. In Bohr’s original formulation <strong>of</strong> quantum<br />
mechanics to explain the spectra <strong>of</strong> the hydrogen atom, Bohr used purely classical mechanical<br />
notions <strong>of</strong> angular momentum and rotation for the basic theory and imposed a quantization<br />
condition that the angular momentum should come in integer multiples <strong>of</strong> ¯h. Bohr worked under<br />
the assumption that at some point the laws <strong>of</strong> quantum mechanics which govern atoms and<br />
molecules should correspond to the classical mechanical laws <strong>of</strong> ordinary objects like rocks and<br />
stones. Bohr’s Principle <strong>of</strong> Correspondence states that quantum mechanics was not completely<br />
separate from classical mechanics; rather, it incorporates classical theory.<br />
From a computational viewpoint, this is an extremely powerful notion since performing a<br />
classical trajectory calculation (even running 1000’s <strong>of</strong> them) is simpler than a single quantum<br />
calculation <strong>of</strong> a similar dimension. Consequently, the development <strong>of</strong> semi-classical methods has<br />
and remains an important part of the development and utilization of quantum theory. In fact,
even in the most recent issues <strong>of</strong> the Journal <strong>of</strong> Chemical Physics, Phys. Rev. Lett, and other<br />
leading physics and chemical physics journals, one finds new developments and applications <strong>of</strong><br />
this very old idea.<br />
In this chapter we will explore this idea in some detail. The field <strong>of</strong> semi-classical mechanics<br />
is vast and I would recommend the following for more information:<br />
1. Chaos in Classical and <strong>Quantum</strong> <strong>Mechanics</strong>, Martin Gutzwiller (Springer-Verlag, 1990).<br />
Chaos in quantum mechanics is a touchy subject and really has no clear-cut definition that<br />
anyone seems to agree upon. Gutzwiller is one <strong>of</strong> the key figures in sorting all this out. This<br />
is a very nice and not too technical monograph on quantum and classical correspondence.
2. Semiclassical Physics, M. Brack and R. Bhaduri (Addison-Wesley, 1997). Very interesting<br />
book, mostly focusing upon many-body applications and Thomas-Fermi approximations.<br />
3. Computer Simulations <strong>of</strong> Liquids, M. P. Allen and D. J. Tildesley (Oxford, 1994). This<br />
book mostly focuses upon classical MD methods, but has a nice chapter on quantum methods
which were state <strong>of</strong> the art in 1994. Methods come and methods go.<br />
There are many others, <strong>of</strong> course. These are just the ones on my bookshelf.<br />
3.1 Bohr-Sommerfeld quantization
Let's first review Bohr's original derivation of the hydrogen atom. We will go through this a bit differently than Bohr since we already know part of the answer. In the chapter on the hydrogen atom we derived the energy levels in terms of the principal quantum number, n:
En = −( me⁴/2¯h² )( 1/n² ). (3.1)
In Bohr's correspondence principle, the quantum energy must equal the classical energy. So for an electron moving about a proton, that energy is inversely proportional to the distance of separation. So, we can write
−( me⁴/2¯h² )( 1/n² ) = −e²/(2r). (3.2)
Now we need to figure out how angular momentum gets pulled into this. For an orbiting body the centrifugal force which pulls the body outward is counterbalanced by the inward tug of the centripetal force coming from the attractive Coulomb potential. Thus,
mrω² = e²/r², (3.3)
where ω is the angular frequency of the rotation. Rearranging this a bit, we can plug this into the RHS of Eq. 3.2 and write
−( me⁴/2¯h² )( 1/n² ) = −mr³ω²/(2r). (3.4)
The numerator now looks almost like the classical definition of angular momentum: L = mr²ω. So we can write the last equation as
−( me⁴/2¯h² )( 1/n² ) = −L²/(2mr²). (3.5)
Solving for L²:
L² = ( me⁴/2¯h² )( 2mr²/n² ). (3.6)
Now, we need to pull in another one of Bohr's results for the orbital radius of the H-atom:
r = ( ¯h²/me² ) n². (3.7)
Plug this into Eq. 3.6 and after the dust settles, we find
L = ¯hn. (3.8)
But why should electrons be confined to circular orbits? Eq. 3.8 should be applicable to any closed path the electron chooses to take. If the quantization condition only holds for circular orbits, then the theory itself is in deep trouble. At least that's what Sommerfeld thought.
The numerical units of ¯h are energy times time. That is the unit of action in classical mechanics. In classical mechanics, the action of a mechanical system is given by the integral of the classical momentum along a classical path:
S = ∫_{x1}^{x2} p dx. (3.9)
For an orbit, the initial point and the final point must coincide, x1 = x2, so the action integral must describe the area circumscribed by a closed loop on the p−x plane, called phase space:
S = ∮ p dx. (3.10)
So, Bohr and Sommerfeld's idea was that the circumscribed area in phase space was quantized as well.
As a check, let us consider the harmonic oscillator. The classical energy is given by

E(p,q) = \frac{p^2}{2m} + \frac{k}{2}q^2.

This is the equation for an ellipse in phase space, since we can rearrange it to read

1 = \frac{p^2}{2mE} + \frac{k}{2E}q^2 = \frac{p^2}{a^2} + \frac{q^2}{b^2},    (3.11)

where a = \sqrt{2mE} and b = \sqrt{2E/k} describe the major and minor axes of the ellipse. The area of an ellipse is A = \pi a b, so the area circumscribed by a classical trajectory with energy E is

S(E) = 2\pi E\sqrt{m/k}.    (3.12)

Since \sqrt{k/m} = \omega, S = 2\pi E/\omega = E/\nu. Finally, since E/\nu must be an integer multiple of h, the Bohr-Sommerfeld condition for quantization becomes

\oint p\,dx = nh,    (3.13)

where p is the classical momentum for a path of energy E, p = \sqrt{2m(E - V(x))}. Taking this a bit farther, the de Broglie wavelength is \lambda = h/p, so the Bohr-Sommerfeld rule basically states that stationary energies correspond to classical paths containing an integer number of de Broglie wavelengths.
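As a quick numerical illustration (my own sketch, not part of the original notes), the code below evaluates the phase-space loop integral for the harmonic oscillator by quadrature and confirms that choosing E = nh\nu makes the enclosed area exactly nh; the mass and force constant are arbitrary choices.

```python
# Numerical check of Eq. 3.13 for the harmonic oscillator (illustrative sketch).
import numpy as np
from scipy.integrate import quad

hbar = 1.0
h = 2.0 * np.pi * hbar
m, k = 1.0, 1.0                       # arbitrary oscillator parameters
omega = np.sqrt(k / m)
nu = omega / (2.0 * np.pi)

def loop_action(E):
    """Closed-loop integral of p dx = 2 * integral of sqrt(2m(E - k x^2/2))."""
    xt = np.sqrt(2.0 * E / k)         # classical turning point
    p = lambda x: np.sqrt(2.0 * m * (E - 0.5 * k * x**2))
    val, _ = quad(p, -xt, xt)
    return 2.0 * val                  # out-and-back gives the full loop

for n in range(1, 5):
    E = n * h * nu                    # Bohr-Sommerfeld guess for the energy
    print(n, loop_action(E) / h)      # prints the enclosed area in units of h: n
```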
Now, perhaps you can see where the problem with quantum chaos arises. In classical chaos, chaotic trajectories never return to their exact starting point in phase space. They may come close, but there are no closed orbits. For 1D systems this does not occur, since the trajectories are the contours of the energy function. In higher dimensions, the dimensionality of the system makes it possible to have extremely complex trajectories which never return to their starting point.
Exercise 3.1 Apply the Bohr-Sommerfeld procedure to determine the stationary energies for a particle in a box of length l.
3.2 The WKB Approximation<br />
The original Bohr-Sommerfeld idea can be improved upon considerably to produce an asymptotic (\hbar \to 0) approximation to the Schrödinger wave function. The idea was put forward at about the same time by three different theoreticians, Brillouin (in Belgium), Kramers (in the Netherlands), and Wentzel (in Germany). Depending upon your point of origin, this method is the WKB (US & Germany), BWK (France, Belgium), or JWKB (UK) approximation; you get the idea. The original references are
1. "La mécanique ondulatoire de Schrödinger; une méthode générale de résolution par approximations successives", L. Brillouin, Comptes rendus (Paris) 183, 24 (1926).
2. "Wellenmechanik und halbzahlige Quantisierung", H. A. Kramers, Zeitschrift für Physik 39, 828 (1926).
3. "Eine Verallgemeinerung der Quantenbedingungen für die Zwecke der Wellenmechanik", G. Wentzel, Zeitschrift für Physik 38, 518 (1926).
We will first go through how one can use the approach to determine the eigenvalues of the
Schrödinger equation via semi-classical methods, then show how one can approximate the actual<br />
wavefunctions themselves.<br />
3.2.1 Asymptotic expansion for eigenvalue spectrum<br />
The WKB procedure is initiated by writing the solution to the Schrödinger equation

\psi'' + \frac{2m}{\hbar^2}(E - V(x))\psi = 0

as

\psi(x) = \exp\left[\frac{i}{\hbar}\int \chi\,dx\right].    (3.14)

We will soon discover that \chi is the classical momentum of the system, but for now let's consider it to be a function of the energy of the system. Substituting into the Schrödinger equation produces a new differential equation for \chi,

\frac{\hbar}{i}\frac{d\chi}{dx} = 2m(E - V) - \chi^2.    (3.15)

If we take \hbar \to 0, it follows then that

\chi = \chi_o = \sqrt{2m(E - V)} = |p|,    (3.16)

which is the magnitude of the classical momentum of a particle. So, if we assume that this is simply the leading order term in a series expansion in \hbar, we would have

\chi = \chi_o + \frac{\hbar}{i}\chi_1 + \left(\frac{\hbar}{i}\right)^2\chi_2 + \ldots    (3.17)
Substituting Eq. 3.17 into

\chi = \frac{\hbar}{i}\frac{1}{\psi}\frac{\partial\psi}{\partial x}    (3.18)

and equating to zero the coefficients of the different powers of \hbar, one obtains equations which determine the \chi_n corrections in succession:

\frac{d}{dx}\chi_{n-1} = -\sum_{m=0}^{n}\chi_{n-m}\chi_m    (3.19)

for n = 1, 2, 3, \ldots. For example,

\chi_1 = -\frac{1}{2}\frac{\chi_o'}{\chi_o} = \frac{1}{4}\frac{V'}{E-V}    (3.20)

\chi_2 = -\frac{\chi_1^2 + \chi_1'}{2\chi_o} = -\frac{1}{2\chi_o}\left[\frac{V'^2}{16(E-V)^2} + \frac{V'^2}{4(E-V)^2} + \frac{V''}{4(E-V)}\right]
       = -\frac{5V'^2}{32(2m)^{1/2}(E-V)^{5/2}} - \frac{V''}{8(2m)^{1/2}(E-V)^{3/2}}    (3.21)

and so forth.

Exercise 3.2 Verify Eq. 3.19 and derive the first order correction in Eq. 3.20.
Now, to use these equations to determine the spectrum, we replace x everywhere by a complex coordinate z and suppose that V(z) is a regular and analytic function of z in any physically relevant region. Consequently, we can then say that \psi(z) is an analytic function of z. So, we can write the phase integral as

n = \frac{1}{h}\oint_C \chi(z)\,dz = \frac{1}{2\pi i}\oint_C \frac{\psi_n'(z)}{\psi_n(z)}\,dz,    (3.22)

where \psi_n is the nth discrete stationary solution to the Schrödinger equation and C is some contour of integration in the z plane. If there is a discrete spectrum, we know that the number of zeros, n, in the wavefunction is related to the quantum number of the corresponding energy level: if \psi has no real zeros, it is the ground state wavefunction with energy E_o; one real zero corresponds to energy level E_1, and so forth.

Suppose the contour of integration C is taken such that it includes only these zeros and no others; then we can write

n = \frac{1}{h}\oint_C \chi_o\,dz + \frac{1}{2\pi i}\oint_C \chi_1\,dz + \frac{1}{2\pi i}\oint_C \frac{\hbar}{i}\chi_2\,dz + \ldots    (3.23)
Each of these terms involves E - V in the denominator. At the classical turning points, where V(z) = E, we have poles, and we can use the residue theorem to evaluate the integrals. For example, \chi_1 has a pole at each turning point with residue -1/4; hence,

\frac{1}{2\pi i}\oint_C \chi_1\,dz = -\frac{1}{2}.    (3.24)

The next term we evaluate by integration by parts:

\oint_C \frac{V''}{(E - V(z))^{3/2}}\,dz = -\frac{3}{2}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\,dz.    (3.25)

Hence, we can write

\oint_C \chi_2(z)\,dz = \frac{1}{32(2m)^{1/2}}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\,dz.    (3.26)

Putting it all together,

n + \frac{1}{2} = \frac{1}{h}\oint_C \sqrt{2m(E - V(z))}\,dz - \frac{h}{128\pi^2(2m)^{1/2}}\oint_C \frac{V'^2}{(E - V(z))^{5/2}}\,dz + \ldots    (3.27)
Granted, the above analysis is pretty formal! But what we have is something new. Notice that we have an extra 1/2 added here that we did not have in the original Bohr-Sommerfeld (BS) theory. What we have is something even more general. The original BS idea came from the notion that energies and frequencies were related by integer multiples of h, but this is really only valid for transitions between states. If we go back and ask what happens at n = 0 in the Bohr-Sommerfeld theory, this corresponds to a phase-space ellipse with major and minor axes both of length 0, which violates the Heisenberg uncertainty principle. The new quantization condition forces the system to have a lowest energy state with phase-space area h/2.

Where did this extra 1/2 come from? It originates from the classical turning points where V(x) = E. Recall that for a 1D system bound by a potential, there are at least two such points. Each contributes a \pi/4 to the phase. We will see this more explicitly in the next section.
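The half-integer condition is easy to test numerically. The following sketch (my own illustration, with arbitrary units, not from the notes) solves the condition \oint p\,dx = (n + 1/2)h for a harmonic well by root finding; for this particular potential the WKB energies coincide with the exact ones, E_n = (n + 1/2)\hbar\omega.

```python
# Sketch: solve the WKB quantization condition loop-integral(p dx) = (n + 1/2) h
# for the energies of a 1D well (here V = k x^2 / 2, where WKB happens to be exact).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, k = 1.0, 1.0, 1.0
omega = np.sqrt(k / m)
V = lambda x: 0.5 * k * x**2

def loop_action(E):
    xt = np.sqrt(2.0 * E / k)                        # turning points at +/- xt
    p = lambda x: np.sqrt(np.maximum(2.0 * m * (E - V(x)), 0.0))
    val, _ = quad(p, -xt, xt)
    return 2.0 * val                                 # full closed loop in phase space

def wkb_energy(n):
    target = (n + 0.5) * 2.0 * np.pi * hbar          # (n + 1/2) h
    return brentq(lambda E: loop_action(E) - target, 1e-6, 100.0)

for n in range(4):
    print(n, wkb_energy(n), (n + 0.5) * hbar * omega)   # the two columns agree
```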
3.2.2 WKB Wavefunction<br />
Going back to our original wavefunction in Eq. 3.14 and writing

\psi = e^{iS/\hbar},

where S is the integral of \chi, we can derive equations for S:

\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^2 - \frac{i\hbar}{2m}\frac{\partial^2 S}{\partial x^2} + V(x) = E.    (3.28)

Again, as above, one can seek a series expansion of S in powers of \hbar. The result is simply the integral of Eq. 3.17:

S = S_o + \frac{\hbar}{i}S_1 + \ldots    (3.29)
If we make the approximation that \hbar = 0 we have the classical Hamilton-Jacobi equation for the action, S. This, along with the definition of the momentum, p = dS_o/dx = \chi_o, allows us to make very firm contact between quantum mechanics and the motion of a classical particle.
Looking at Eq. 3.28, it is clear that the classical approximation is valid when the second term is very small compared to the first, i.e.

\hbar\,\frac{d}{dx}\!\left(\frac{dS}{dx}\right)\left(\frac{dx}{dS}\right)^{2} \ll 1, \qquad \hbar\,\frac{S''}{S'^2} \ll 1, \qquad \hbar\left|\frac{d}{dx}\frac{1}{p}\right| \ll 1,    (3.30)

where we equate dS/dx = p. Since p is related to the de Broglie wavelength of the particle, \lambda = h/p, the same condition implies that

\left|\frac{1}{2\pi}\frac{d\lambda}{dx}\right| \ll 1.    (3.31)

Thus the semi-classical approximation is only valid when the wavelength of the particle, as determined by \lambda(x) = h/p(x), varies only slightly over distances on the order of the wavelength itself.
Written another way, noting that the gradient of the momentum is

\frac{dp}{dx} = \frac{d}{dx}\sqrt{2m(E - V(x))} = -\frac{m}{p}\frac{dV}{dx},

we can write the classical condition as

m\hbar|F|/p^3 \ll 1,    (3.32)

where F = -dV/dx is the classical force. Consequently, the semi-classical approximation can only be used in regions where the momentum is not too small. This is especially important near the classical turning points where p \to 0. In classical mechanics, the particle rolls to a stop at the top of the potential hill. When this happens the de Broglie wavelength heads off to infinity and is certainly not small!
Exercise 3.3 Verify the force condition given by Eq. 3.32.<br />
Going back to the expansion for \chi,

\chi_1 = -\frac{1}{2}\frac{\chi_o'}{\chi_o} = \frac{1}{4}\frac{V'}{E-V},    (3.33)

or, equivalently, for S_1,

S_1' = -\frac{S_o''}{2S_o'} = -\frac{p'}{2p}.    (3.34)

So,

S_1(x) = -\frac{1}{2}\log p(x).
If we stick to regions where the semi-classical condition is met, then the wavefunction becomes

\psi(x) \approx \frac{C_1}{\sqrt{p(x)}}\,e^{\frac{i}{\hbar}\int p(x)\,dx} + \frac{C_2}{\sqrt{p(x)}}\,e^{-\frac{i}{\hbar}\int p(x)\,dx}.    (3.35)

The 1/\sqrt{p} prefactor has a remarkably simple interpretation. The probability of finding the particle in some region between x and x + dx is given by |\psi|^2, so the classical probability is essentially proportional to 1/p. The faster the particle is moving, the less likely it is to be found in some small region of space. Conversely, the slower a particle moves, the more likely it is to be found in that region. So the time spent in a small interval dx is inversely proportional to the momentum of the particle. We will return to this concept in a bit when we consider the idea of time in quantum mechanics.
The C_1 and C_2 coefficients are yet to be determined. If we take x = a to be one classical turning point, so that x > a corresponds to the classically inaccessible region where E < V(x), then the wavefunction in that region must be exponentially damped:

\psi(x) \approx \frac{C}{\sqrt{|p|}}\exp\left[-\frac{1}{\hbar}\int_a^x |p(x)|\,dx\right].    (3.36)

To the left of x = a, we have a combination of incoming and reflected components:

\psi(x) = \frac{C_1}{\sqrt{p}}\exp\left[\frac{i}{\hbar}\int_x^a p\,dx\right] + \frac{C_2}{\sqrt{p}}\exp\left[-\frac{i}{\hbar}\int_x^a p\,dx\right].    (3.37)

3.2.3 Semi-classical Tunneling and Barrier Penetration
Before solving the general problem of how to use this in an arbitrary well, let's consider the case of tunneling through a potential barrier that has some bumpy top or corresponds to some simple potential. To the left of the barrier the wavefunction has incoming and reflected components:

\psi_L(x) = Ae^{ikx} + Be^{-ikx}.    (3.38)

Inside the barrier we have

\psi_B(x) = \frac{C}{\sqrt{|p(x)|}}\,e^{+\frac{1}{\hbar}\int |p|\,dx} + \frac{D}{\sqrt{|p(x)|}}\,e^{-\frac{1}{\hbar}\int |p|\,dx},    (3.39)

and to the right of the barrier,

\psi_R(x) = Fe^{+ikx}.    (3.40)

If F is the transmitted amplitude, then the tunneling probability is the ratio of the transmitted probability to the incident probability, T = |F|^2/|A|^2. If we assume that the barrier is high or broad, then C = 0 and we obtain the semi-classical estimate for the tunneling probability:

T \approx \exp\left[-\frac{2}{\hbar}\int_a^b |p(x)|\,dx\right],    (3.41)
where a and b are the turning points on either side of the barrier.

Mathematically, we can "flip the potential upside down" and work in imaginary time. In this case the action integral becomes

S = \int_a^b \sqrt{2m(V(x) - E)}\,dx.    (3.42)

So we can think of tunneling as motion under the barrier in imaginary time.
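As an illustration of Eq. 3.41 (my own sketch, not from the notes), the code below evaluates the under-barrier action for an Eckart-type barrier V(x) = V_0\,\mathrm{sech}^2(x/a) and prints the resulting WKB transmission probabilities; the barrier form and all parameter values are assumptions made purely for the example.

```python
# Sketch of the semiclassical tunneling estimate, Eq. 3.41:
#   T ~ exp( -(2/hbar) * integral_a^b |p(x)| dx ),
# with a, b the classical turning points where V(x) = E.
import numpy as np
from scipy.integrate import quad

hbar, m = 1.0, 1.0
V0, a0 = 5.0, 1.0                                 # assumed barrier height and width
V = lambda x: V0 / np.cosh(x / a0)**2             # Eckart-type barrier

def wkb_transmission(E):
    xt = a0 * np.arccosh(np.sqrt(V0 / E))         # turning points at +/- xt
    absp = lambda x: np.sqrt(np.maximum(2.0 * m * (V(x) - E), 0.0))
    integral, _ = quad(absp, -xt, xt)
    return np.exp(-2.0 * integral / hbar)

for E in [1.0, 2.0, 3.0, 4.0]:
    print(E, wkb_transmission(E))
```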
There are a number of useful applications of this formula. Gamow's theory of alpha-decay is a common example. Another useful application is in the theory of reaction rates, where we want to determine tunneling corrections to the rate constant for a particular reaction. Close to the top of the barrier, where tunneling may be important, we can expand the potential and approximate the peak as an upside-down parabola,

V(x) \approx V_o - \frac{k}{2}x^2,

where +x represents the product side and -x represents the reactant side (see Fig. 3.1). Set the zero in energy to be the barrier height, V_o, so that any transmission for E < 0 corresponds to tunneling.¹
Figure 3.1: Eckart barrier and parabolic approximation of the transition state
At sufficiently large distances from the turning point, the motion is purely quasi-classical and we can write the momentum as

p = \sqrt{2m(E + kx^2/2)} \approx x\sqrt{mk} + E\sqrt{m/k}/x,    (3.43)

and the asymptotic form of the Schrödinger wavefunction is

\psi = Ae^{+i\xi^2/2}\,\xi^{+i\epsilon - 1/2} + Be^{-i\xi^2/2}\,\xi^{-i\epsilon - 1/2},    (3.44)

where A and B are coefficients we need to determine by the matching condition, and \xi and \epsilon are dimensionless lengths and energies given by \xi = x(mk/\hbar^2)^{1/4} and \epsilon = (E/\hbar)\sqrt{m/k}.

¹The analysis is from Kemble (1935), as discussed in Landau and Lifshitz, Quantum Mechanics.
The particular case we are interested in is a particle coming from the left and passing to the right with the barrier in between. So the wavefunctions in each of these regions must be

\psi_R = Be^{+i\xi^2/2}\,\xi^{i\epsilon - 1/2}    (3.45)

and

\psi_L = e^{-i\xi^2/2}(-\xi)^{-i\epsilon - 1/2} + Ae^{+i\xi^2/2}(-\xi)^{i\epsilon - 1/2},    (3.46)

where the first term is the incident wave and the second term is the reflected component. So |A|^2 is the reflection coefficient and |B|^2 is the transmission coefficient, normalized so that

|A|^2 + |B|^2 = 1.

Let's move to the complex plane and write a new coordinate, \xi = \rho e^{i\phi}, and consider what happens as we rotate around in \phi, taking \rho to be large. Since i\xi^2 = \rho^2(i\cos 2\phi - \sin 2\phi), we have at \phi = 0

\psi_R(\phi = 0) = Be^{i\rho^2/2}\,\rho^{+i\epsilon - 1/2},
\psi_L(\phi = 0) = Ae^{i\rho^2/2}\,(-\rho)^{+i\epsilon - 1/2},    (3.47)

and at \phi = \pi

\psi_R(\phi = \pi) = Be^{i\rho^2/2}\,(-\rho)^{+i\epsilon - 1/2},
\psi_L(\phi = \pi) = Ae^{i\rho^2/2}\,\rho^{+i\epsilon - 1/2}.    (3.48)

In other words, \psi_R(\phi = \pi) looks like \psi_L(\phi = 0) when

A = B(e^{i\pi})^{i\epsilon - 1/2}.

So we have the relation A = -iBe^{-\pi\epsilon}. Finally, after we normalize this we get the transmission coefficient,

T = |B|^2 = \frac{1}{1 + e^{-2\pi\epsilon}},

which must hold for any energy. If the energy is large and negative, then

T \approx e^{-2\pi\epsilon}.

Also, we can compute the reflection coefficient for E > 0 as 1 - T,

R = \frac{1}{1 + e^{+2\pi\epsilon}}.

Exercise 3.4 Verify these last relationships by taking \psi_R and \psi_L and performing the analytic continuation.
This gives us the transmission probability as a function of incident energy. But normal chemical reactions are not carried out at constant energy; they are carried out at constant temperature. To get the thermal transmission coefficient, we need to take a Boltzmann-weighted average of the transmission coefficients,

T_{th}(\beta) = \frac{1}{Z}\int dE\, e^{-E\beta}\,T(E),    (3.49)

where \beta = 1/kT and Z is the partition function. If E represents a continuum of energy states, then

T_{th}(\beta) = -\frac{\beta\omega\hbar}{4\pi}\left[\psi^{(0)}\!\left(\frac{\beta\omega\hbar}{4\pi}\right) - \psi^{(0)}\!\left(\frac{1}{4}\left(\frac{\beta\omega\hbar}{\pi} + 2\right)\right)\right],    (3.50)

where \psi^{(n)}(z) is the polygamma function, the nth derivative of the digamma function \psi^{(0)}(z), which is the logarithmic derivative of Euler's gamma function, \psi^{(0)}(z) = \Gamma'(z)/\Gamma(z).²

²See the Mathematica Book, sec. 3.2.10.
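A numerical version of the Boltzmann average in Eq. 3.49 is sketched below for the parabolic-barrier T(E). Note the assumptions, which are mine and not fixed by the notes: energies are measured from the barrier top, the integration limits are finite cutoffs, and the average is normalized against a purely classical (step-function) transmission, so the printed number is a tunneling correction factor rather than Eq. 3.50 itself.

```python
# Sketch: Boltzmann-weighted transmission for the parabolic barrier,
# T(E) = 1 / (1 + exp(-2 pi eps)) with eps = E / (hbar * omega).
import numpy as np
from scipy.integrate import quad

hbar, omega, kB = 1.0, 1.0, 1.0

def T_parabolic(E):
    eps = E / (hbar * omega)
    return 1.0 / (1.0 + np.exp(-2.0 * np.pi * eps))

def kappa(temperature, Emin=-30.0, Emax=60.0):
    """Quantum thermal transmission divided by the classical (E > 0 only) result."""
    beta = 1.0 / (kB * temperature)
    num, _ = quad(lambda E: np.exp(-beta * E) * T_parabolic(E), Emin, Emax)
    den, _ = quad(lambda E: np.exp(-beta * E), 0.0, Emax)
    return num / den

for temperature in [0.5, 1.0, 2.0, 5.0]:
    print(temperature, kappa(temperature))   # approaches 1 at high temperature
```

The correction grows as the temperature drops, as expected; below the crossover temperature (where \beta\hbar\omega \approx 2\pi) the parabolic approximation itself breaks down and the integral no longer converges.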
3.3 Connection Formulas<br />
In what we have considered thus far, we have assumed that up until the turning point the<br />
wavefunction was well behaved and smooth. We can think of the problem as having two domains:
an exterior and an interior. The exterior part we assumed to be simple and the boundary<br />
conditions trivial to impose. The next task is to figure out the matching condition at the turning<br />
point for an arbitrary system. So far what we have are two pieces, ψL and ψR, in the notation<br />
above. What we need is a patch. To do so, we make a linearizing assumption for the force at<br />
the classical turning point:<br />
E - V(x) \approx F_o(x - a),    (3.51)

where F_o = -dV/dx evaluated at x = a. Thus, the phase integral is easy:

\frac{1}{\hbar}\int_a^x p\,dx = \frac{2}{3\hbar}\left[2mF_o(x - a)\right]^{1/2}(x - a).    (3.52)
But we can do better than that. We can actually solve the Schrödinger equation for the linear potential and use the linearized solutions as our patch. The Mathematica notebook AiryFunctions.nb goes through the solution of the linearized Schrödinger equation,

-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + (V'(0)\,x - E)\psi = 0,    (3.53)

which, taking the turning point as the origin so that E may be set to zero there, can be re-written as

\psi'' = \alpha^3 x\,\psi    (3.54)

with

\alpha = \left(\frac{2m}{\hbar^2}V'(0)\right)^{1/3}.
Absorbing the coefficient into a new variable y, we get Airy's equation,

\psi''(y) = y\,\psi.

The solutions of Airy's equation are the Airy functions, Ai(y) and Bi(y), for the regular and irregular cases. The integral representations of Ai and Bi are

Ai(y) = \frac{1}{\pi}\int_0^\infty \cos\left(\frac{s^3}{3} + sy\right)ds    (3.55)

and

Bi(y) = \frac{1}{\pi}\int_0^\infty\left[e^{-s^3/3 + sy} + \sin\left(\frac{s^3}{3} + sy\right)\right]ds.    (3.56)

Plots of these functions are shown in Fig. 3.2.

Figure 3.2: Airy functions, Ai(y) (red) and Bi(y) (blue)
Since both Ai and Bi are acceptable solutions, we will take a linear combination of the two as our patching function and figure out the coefficients later:

\psi_P = a\,Ai(\alpha x) + b\,Bi(\alpha x).    (3.57)
We now have to determine those coefficients. We need to make two assumptions: first, that the overlap zones are sufficiently close to the turning point that a linearized potential is reasonable; second, that the overlap zones are far enough from the turning point (at the origin) that the WKB approximation is accurate and reliable. You can certainly cook up some potential for which this will not work, but we will assume it's reasonable. In the linearized region, the momentum is

p(x) = \hbar\alpha^{3/2}(-x)^{1/2}.    (3.58)

So for +x (the classically forbidden side),

\int_0^x |p(x)|\,dx = \frac{2}{3}\hbar(\alpha x)^{3/2},    (3.59)
and the WKB wavefunction becomes

\psi_R(x) = \frac{D}{\sqrt{\hbar}\,\alpha^{3/4}x^{1/4}}\,e^{-2(\alpha x)^{3/2}/3}.    (3.60)

In order to extend into this region, we will use the asymptotic forms of the Ai and Bi functions for y \gg 0,

Ai(y) \approx \frac{e^{-2y^{3/2}/3}}{2\sqrt{\pi}\,y^{1/4}},    (3.61)

Bi(y) \approx \frac{e^{+2y^{3/2}/3}}{\sqrt{\pi}\,y^{1/4}}.    (3.62)

Clearly, the Bi(y) term will not contribute, so b = 0 and

a = \sqrt{\frac{4\pi}{\alpha\hbar}}\,D.

Now, for the other side, we do the same procedure, except this time x < 0, so the phase integral is

\int_x^0 p\,dx = \frac{2}{3}\hbar(-\alpha x)^{3/2}.    (3.63)

Thus the WKB wavefunction on the left hand side is

\psi_L(x) = \frac{1}{\sqrt{p}}\left[Be^{2i(-\alpha x)^{3/2}/3} + Ce^{-2i(-\alpha x)^{3/2}/3}\right]    (3.64)
          = \frac{1}{\sqrt{\hbar}\,\alpha^{3/4}(-x)^{1/4}}\left[Be^{2i(-\alpha x)^{3/2}/3} + Ce^{-2i(-\alpha x)^{3/2}/3}\right].    (3.65)
That's the WKB part. To connect with the patching part, we again use the asymptotic forms, this time for y \ll 0, and take only the regular solution:

Ai(y) \approx \frac{1}{\sqrt{\pi}(-y)^{1/4}}\sin\left(\frac{2}{3}(-y)^{3/2} + \frac{\pi}{4}\right)
      \approx \frac{1}{2i\sqrt{\pi}(-y)^{1/4}}\left[e^{i\pi/4}e^{i2(-y)^{3/2}/3} - e^{-i\pi/4}e^{-i2(-y)^{3/2}/3}\right].    (3.66)

Comparing the WKB wave and the patching wave, we can match term by term:

\frac{a}{2i\sqrt{\pi}}e^{i\pi/4} = \frac{B}{\sqrt{\hbar\alpha}},    (3.67)

\frac{-a}{2i\sqrt{\pi}}e^{-i\pi/4} = \frac{C}{\sqrt{\hbar\alpha}}.    (3.68)
Since we know a in terms of the normalization constant D, B = -ie^{i\pi/4}D and C = ie^{-i\pi/4}D. This is the connection! We can write the WKB function across the turning point as

\psi_{WKB}(x) = \begin{cases}
\dfrac{2D}{\sqrt{p(x)}}\,\sin\!\left(\dfrac{1}{\hbar}\displaystyle\int_x^0 p\,dx + \dfrac{\pi}{4}\right), & x < 0,\\[2ex]
\dfrac{D}{\sqrt{|p(x)|}}\,\exp\!\left(-\dfrac{1}{\hbar}\displaystyle\int_0^x |p|\,dx\right), & x > 0.
\end{cases}    (3.69)
node xn<br />
1 -2.33811<br />
2 -4.08795<br />
3 -5.52056<br />
4 -6.78671<br />
5 -7.94413<br />
6 -9.02265<br />
7 -10.0402<br />
Table 3.1: Location of the nodes of the Airy function Ai(x).
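The entries in Table 3.1 can be regenerated directly; the quick sketch below (my own, not from the notes) uses SciPy's Airy-zero routine:

```python
# Quick check of Table 3.1: the first seven nodes of Ai(x).
from scipy.special import ai_zeros

nodes, _, _, _ = ai_zeros(7)          # first 7 zeros of Ai(x)
for n, x in enumerate(nodes, start=1):
    print(n, x)                       # -2.33811, -4.08795, ... as in Table 3.1
```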
Example: Bound states in the linear potential

Since we worked so hard, we have to use the results. So, consider a model problem for a particle in a gravitational field. Actually, this problem is not so far-fetched, since one can prepare trapped atoms above a parabolic reflector and make a quantum bouncing ball. Here the potential is V(x) = mgx, where m is the particle mass and g is the gravitational acceleration (g = 9.80 m/s²). We'll take the case where the reflector is infinite, so that the particle cannot penetrate into it. The Schrödinger equation for this potential is

-\frac{\hbar^2}{2m}\psi'' + (mgx - E)\psi = 0.    (3.70)
The solutions are the Airy functions Ai(x). Setting \beta = mg and c = \hbar^2/2m, the solutions are

\psi = C\,Ai\!\left(\left(\frac{\beta}{c}\right)^{1/3}\left(x - \frac{E}{\beta}\right)\right).    (3.71)

However, there is one caveat: \psi(0) = 0, thus the Airy function must have a node at x = 0. So we have to systematically shift the Ai(x) function in x until a node lines up at x = 0. The nodes of the Ai(x) function can be determined numerically, and the first 7 of them are listed in Table 3.1. To find the energy levels, we systematically solve the equation

\left(\frac{\beta}{c}\right)^{1/3}\left(-\frac{E_n}{\beta}\right) = x_n.    (3.72)

So the ground state is where the first node lands at x = 0 (recall that the x_n are negative),

E_1 = \frac{2.33811\,\beta}{(\beta/c)^{1/3}} = \frac{2.33811\,mg}{(2m^2g/\hbar^2)^{1/3}},

and so on. Of course, we still have to normalize the wavefunction to complete the solution.
We can make life a bit easier by using the quantization condition derived from the WKB approximation. Since we require the wavefunction to vanish exactly at x = 0, we have

\frac{1}{\hbar}\int_0^{x_t} p(x)\,dx + \frac{\pi}{4} = n\pi.    (3.73)
Figure 3.3: Bound states in a gravitational well

This ensures that the wave vanishes at x = 0; x_t in this case is the turning point, E = mgx_t (see Figure 3.3). As a consequence,
\int_0^{x_t} p(x)\,dx = (n - 1/4)\pi\hbar.

Since p(x) = \sqrt{2m(E_n - mgx)}, the integral can be evaluated:

\int_0^{x_t}\sqrt{2m(E_n - mgx)}\,dx = \frac{2\sqrt{2m}}{3mg}\left[E_n^{3/2} - (E_n - mgx_t)^{3/2}\right].    (3.74)

Since x_t = E_n/(mg) at the classical turning point, the phase integral becomes

\frac{2\sqrt{2}\,E_n^{3/2}}{3g\sqrt{m}} = (n - 1/4)\pi\hbar.

Solving for E_n yields the semi-classical approximation for the eigenvalues:

E_n = \frac{1}{2}\left[3\pi\left(n - \tfrac{1}{4}\right)\right]^{2/3}(\hbar g)^{2/3}\,m^{1/3}.    (3.75)
In atomic units, the gravitational acceleration is g = 1.08563 × 10⁻²² bohr/au² (can you guess why we rarely talk about gravitational effects on molecules?). For the lowest state we get, for an electron, E_o^{sc} = 2.014 × 10⁻¹⁵ hartree, or about 12.6 Hz. So gravitational effects on electrons are extremely tiny compared to the electron's total energy.
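The sketch below (my own illustration, in arbitrary units with \hbar = m = g = 1) compares the exact bouncer energies obtained from the Airy zeros of Table 3.1 with the WKB estimate of Eq. 3.75; the two agree to about a percent even for the lowest state.

```python
# Sketch: exact quantum-bouncer energies (from Airy zeros) vs the WKB result, Eq. 3.75.
import numpy as np
from scipy.special import ai_zeros

hbar, m, g = 1.0, 1.0, 1.0
beta, c = m * g, hbar**2 / (2.0 * m)

nodes = ai_zeros(5)[0]                                  # x_1 ... x_5 (all negative)
E_exact = -nodes * beta / (beta / c)**(1.0 / 3.0)       # Eq. 3.72 solved for E_n
n = np.arange(1, 6)
E_wkb = 0.5 * (3.0 * np.pi * (n - 0.25))**(2.0 / 3.0) * (hbar * g)**(2.0 / 3.0) * m**(1.0 / 3.0)

for ni, Ee, Ew in zip(n, E_exact, E_wkb):
    print(ni, Ee, Ew)
```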
3.4 Scattering<br />
Figure 3.4: Elastic scattering trajectory for a classical collision
The collision between two particles plays an important role in the dynamics of reactive molecules. We consider here the collision between two particles interacting via a central force, V(r). Working in the center of mass frame, we consider the motion of a point particle with mass \mu and position vector \vec{r}. We will first examine the process in a purely classical context, since it is intuitive, and then apply what we know to the quantum and semiclassical cases.
3.4.1 Classical Scattering

The angular momentum of the particle about the origin is given by

\vec{L} = \vec{r}\times\vec{p} = \mu(\vec{r}\times\dot{\vec{r}}).    (3.76)

We know that angular momentum is a conserved quantity, and it is easy to show that \dot{\vec{L}} = 0, viz.

\dot{\vec{L}} = \frac{d}{dt}(\vec{r}\times\vec{p}) = (\dot{\vec{r}}\times\vec{p}) + (\vec{r}\times\dot{\vec{p}}).    (3.77)

Since \dot{\vec{r}} = \vec{p}/\mu, the first term vanishes; likewise, the force vector \dot{\vec{p}} = -dV/dr\,\hat{r} is along \vec{r}, so the second term vanishes. Thus L = const, meaning that angular momentum is conserved during the course of the collision.
In cartesian coordinates, the total energy of the collision is given by

E = \frac{\mu}{2}(\dot{x}^2 + \dot{y}^2) + V.    (3.78)

To convert from cartesian to polar coordinates, we use

x = r\cos\theta,    (3.79)
y = r\sin\theta,    (3.80)
\dot{x} = \dot{r}\cos\theta - r\dot{\theta}\sin\theta,    (3.81)
\dot{y} = \dot{r}\sin\theta + r\dot{\theta}\cos\theta.    (3.82)

Thus,

E = \frac{\mu}{2}\dot{r}^2 + V(r) + \frac{L^2}{2\mu r^2},    (3.83)

where we use the fact that

L = \mu r^2\dot{\theta},    (3.84)
where L is the angular momentum. What we see here is that we have two potential contributions. The first is the physical attraction (or repulsion) between the two scattering bodies. The second is a purely repulsive centrifugal potential which depends upon the angular momentum and ultimately upon the impact parameter. For large impact parameters, this can be the dominant contribution. The effective radial force is given by

\mu\ddot{r} = \frac{L^2}{\mu r^3} - \frac{\partial V}{\partial r}.    (3.85)

Again, we note that the centrifugal contribution is always repulsive, while the physical interaction V(r) is typically attractive at long range and repulsive at short range.
We can derive the solutions to the scattering motion by integrating the velocity equations for r and \theta,

\dot{r} = \pm\left[\frac{2}{\mu}\left(E - V(r) - \frac{L^2}{2\mu r^2}\right)\right]^{1/2},    (3.86)

\dot{\theta} = \frac{L}{\mu r^2},    (3.87)

taking into account the starting conditions for r and \theta. In general, we could solve the equations numerically and obtain the complete scattering path. However, what we are really interested in is the deflection angle \chi, since this is what is ultimately observed. So, we integrate the last two equations and derive \theta in terms of r:

\theta(r) = \int_0^\theta d\theta = -\int_\infty^r \frac{d\theta}{dr}\,dr    (3.88)
          = -\int_\infty^r \frac{L}{\mu r^2}\,\frac{1}{\sqrt{(2/\mu)(E - V - L^2/2\mu r^2)}}\,dr,    (3.89)

where the collision starts at t = -\infty with r = \infty and \theta = 0. What we want to do is recast this in terms of an impact parameter, b, and the scattering angle \chi. These are illustrated in Fig. 3.4 and can be derived from basic kinematic considerations. First, energy is conserved throughout, so if
we know the asymptotic velocity, v, then E = \mu v^2/2. Second, angular momentum is conserved, so L = \mu|r\times v| = \mu v b. Thus the integral above becomes

\theta(r) = -\int_\infty^r \frac{d\theta}{dr}\,dr    (3.90)
          = -b\int_\infty^r \frac{dr}{r^2\sqrt{1 - V/E - b^2/r^2}}.    (3.91)

Finally, the angle of deflection is related to the angle of closest approach by 2\theta_c + \chi = \pi; hence,

\chi = \pi - 2b\int_{r_c}^\infty \frac{dr}{r^2\sqrt{1 - V/E - b^2/r^2}}.    (3.92)

The radial distance of closest approach is determined by

E = \frac{L^2}{2\mu r_c^2} + V(r_c),    (3.93)

which can be restated as

b^2 = r_c^2\left(1 - \frac{V(r_c)}{E}\right).    (3.94)

Once we have specified the potential, we can compute the deflection angle using Eqs. 3.92-3.94. If V(r_c) < 0, then r_c < b and the potential is attractive at the turning point; if V(r_c) > 0, then r_c > b and the potential is repulsive at the turning point.
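As an illustration of Eqs. 3.92-3.94 (a sketch of mine, not from the notes), the code below computes the classical deflection function \chi(b) by quadrature for a Lennard-Jones potential; the potential form, its parameters, and the collision energy are all assumed for the example.

```python
# Sketch: classical deflection angle chi(b), Eq. 3.92, for a Lennard-Jones potential.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

eps, sigma, E = 1.0, 1.0, 2.0                   # assumed well depth, size, collision energy
V = lambda r: 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def closest_approach(b):
    F = lambda r: 1.0 - V(r) / E - (b / r)**2
    r_grid = np.linspace(20.0, 0.3, 4000)       # scan inward for the outermost sign change
    for r1, r2 in zip(r_grid[:-1], r_grid[1:]):
        if F(r1) > 0.0 and F(r2) <= 0.0:
            return brentq(F, r2, r1)
    raise ValueError("no turning point found")

def deflection(b):
    rc = closest_approach(b)
    integrand = lambda r: 1.0 / (r**2 * np.sqrt(max(1.0 - V(r) / E - (b / r)**2, 1e-14)))
    val, _ = quad(integrand, rc, np.inf, limit=400)
    return np.pi - 2.0 * b * val

for b in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(b, deflection(b))                     # positive (repulsive) to negative (attractive)
```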
If we have a beam of particles incident on some scattering center, then collisions will occur with all possible impact parameters (hence angular momenta) and will give rise to a distribution of scattering angles. We can describe this by a differential cross-section. If I_o is the incident intensity of particles in our beam, that is, the incident flux or the number of particles passing through a unit area normal to the beam direction per unit time, then the differential cross-section, I(\chi), is defined so that I(\chi)d\Omega is the number of particles per unit time scattered into the solid angle d\Omega, divided by the incident flux.

The deflection pattern will be axially symmetric about the incident beam direction due to the spherical symmetry of the interaction potential; thus, I(\chi) depends only upon the scattering angle, and d\Omega can be constructed from the cones defining \chi and \chi + d\chi, i.e. d\Omega = 2\pi\sin\chi\,d\chi. Even if the interaction potential is not spherically symmetric (most molecules are not spherical), the scattering would still be axially symmetric, since we would be scattering from a homogeneous distribution of all possible orientations of the colliding molecules. Hence any azimuthal dependence must vanish unless we can orient one or both of the colliding species.

Given an initial velocity v, the fraction of the incoming flux with impact parameter between b and b + db is 2\pi b\,db. These particles will be deflected between \chi and \chi + d\chi if d\chi/db > 0, or between \chi and \chi - d\chi if d\chi/db < 0. Thus, I(\chi)d\Omega = 2\pi b\,db, and it follows that

I(\chi) = \frac{b}{\sin\chi\,|d\chi/db|}.    (3.95)
Thus, once we know \chi(b) for a given v, we can get the differential cross-section. The total cross-section is obtained by integrating,

\sigma = 2\pi\int_0^\pi I(\chi)\sin\chi\,d\chi.    (3.96)

This is a measure of the attenuation of the incident beam by the scattering target and has the units of area.
3.4.2 Scattering at small deflection angles

Our calculations will be greatly simplified if we consider collisions that result in small deflections in the forward direction. If we let the initial beam be along the x axis with momentum p, then the scattered momentum, p', will be related to the scattering angle by p'\sin\chi = p'_y. Taking \chi to be small,

\chi \approx \frac{p'_y}{p'} = \frac{\text{momentum transfer}}{\text{momentum}}.    (3.97)

Since the time derivative of the momentum is the force, the momentum transferred perpendicular to the incident beam is obtained by integrating the perpendicular force,

F'_y = -\frac{\partial V}{\partial y} = -\frac{\partial V}{\partial r}\frac{\partial r}{\partial y} = -\frac{\partial V}{\partial r}\frac{b}{r},    (3.98)

where we used r^2 = x^2 + y^2 and y \approx b. Thus we find

\chi = \frac{p'_y}{\mu(2E/\mu)^{1/2}}    (3.99)
     = -b(2\mu E)^{-1/2}\int_{-\infty}^{+\infty}\frac{\partial V}{\partial r}\frac{dt}{r}    (3.100)
     = -b(2\mu E)^{-1/2}\left(\frac{2E}{\mu}\right)^{-1/2}\int_{-\infty}^{+\infty}\frac{\partial V}{\partial r}\frac{dx}{r}    (3.101)
     = -\frac{b}{E}\int_b^\infty \frac{\partial V}{\partial r}(r^2 - b^2)^{-1/2}\,dr,    (3.102)

where we used x = (2E/\mu)^{1/2}t, and x varies from -\infty to +\infty as r goes from \infty to b and back.
Let us use this in a simple example of the potential V = C/r^s for s > 0. Substituting V into the integral above and solving yields

\chi = \frac{sC\pi^{1/2}}{2b^sE}\,\frac{\Gamma((s+1)/2)}{\Gamma(s/2+1)}.    (3.103)

This indicates that \chi E \propto b^{-s} and |d\chi/db| = \chi s/b. Thus, we can conclude by deriving the differential cross-section,

I(\chi) = \frac{1}{s}\,\chi^{-(2+2/s)}\left(\frac{sC\pi^{1/2}}{2E}\,\frac{\Gamma((s+1)/2)}{\Gamma(s/2+1)}\right)^{2/s},    (3.104)

for small values of the scattering angle. Consequently, a log-log plot of the center of mass differential cross-section as a function of the scattering angle at fixed energy should give a straight line with slope -(2 + 2/s), from which one can determine the value of s. For the van der Waals potential, s = 6 and I(\chi) \propto E^{-1/3}\chi^{-7/3}.
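The small-angle formula is easy to check numerically. The sketch below (my own; the values of C, E, and b are arbitrary choices) compares Eq. 3.103 against direct quadrature of Eq. 3.102 for the van der Waals case s = 6.

```python
# Sketch: check Eq. 3.103 against the small-angle integral, Eq. 3.102, for V = C/r^s.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

C, E, s = 0.05, 1.0, 6

def chi_integral(b):
    dVdr = lambda r: -s * C / r**(s + 1)
    integrand = lambda r: dVdr(r) / np.sqrt(r**2 - b**2)
    val, _ = quad(integrand, b, np.inf, limit=400)
    return -(b / E) * val

def chi_formula(b):
    return (s * C * np.sqrt(np.pi) / (2.0 * b**s * E)) * gamma((s + 1) / 2) / gamma(s / 2 + 1)

for b in [2.0, 3.0, 4.0]:
    print(b, chi_integral(b), chi_formula(b))    # the two columns agree
```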
3.4.3 Quantum treatment

The quantum mechanical case is a bit more complex. Here we will develop a brief overview of quantum scattering and then move on to the semiclassical evaluation. The quantum scattering is determined by the asymptotic form of the wavefunction,

\psi(r,\chi) \xrightarrow{r\to\infty} A\left[e^{ikz} + \frac{f(\chi)}{r}e^{ikr}\right],    (3.105)

where A is some normalization constant and k = 1/\lambda = \mu v/\hbar is the initial wave vector along the incident beam direction (\chi = 0). The first term represents a plane wave incident upon the scatterer, and the second represents an outgoing spherical wave. Notice that the outgoing amplitude is reduced as r increases; this is because the wavefunction spreads as r increases. If we can collimate the incoming and outgoing components, then the scattering amplitude f(\chi) is related to the differential cross-section by

I(\chi) = |f(\chi)|^2.    (3.106)

What we have, then, is that the asymptotic form of the wavefunction carries within it information about the scattering process. As a result, we do not need to solve the wave equation for all of space; we just need to be able to connect the scattering amplitude to the interaction potential. We do so by expanding the wave as a superposition of Legendre polynomials,

\psi(r,\chi) = \sum_{l=0}^{\infty} R_l(r)P_l(\cos\chi).    (3.107)

R_l(r) must remain finite at r = 0; this determines the form of the solution.
When V(r) = 0, then \psi = A\exp(ikz), and we can expand the exponential in terms of spherical waves:

e^{ikz} = \sum_{l=0}^\infty (2l+1)e^{il\pi/2}\,\frac{\sin(kr - l\pi/2)}{kr}\,P_l(\cos\chi)    (3.108)
        = \frac{1}{2i}\sum_{l=0}^\infty (2l+1)\,i^l\left[\frac{e^{i(kr - l\pi/2)}}{kr} - \frac{e^{-i(kr - l\pi/2)}}{kr}\right]P_l(\cos\chi).    (3.109)

We can interpret this equation in the following intuitive way: the incident plane wave is equivalent to an infinite superposition of incoming and outgoing spherical waves, in which each term corresponds to a particular angular momentum state with

L = \hbar\sqrt{l(l+1)} \approx \hbar(l + 1/2).    (3.110)

From our analysis above, we can relate L to the impact parameter, b,

b = \frac{L}{\mu v} \approx \frac{l + 1/2}{k} = (l + 1/2)\lambda.    (3.111)

In essence, the incoming beam is divided into cylindrical zones in which the lth zone contains particles with impact parameters (and hence angular momenta) between l\lambda and (l+1)\lambda.
Exercise 3.5 The impact parameter, b, is treated as continuous; however, in quantum mechanics we allow only discrete values of the angular momentum, l. How will this affect our results, since b = (l + 1/2)\lambda from above?
If V(r) is short-ranged (i.e. it falls off more rapidly than 1/r for large r), we can derive a general solution for the asymptotic form,

\psi(r,\chi) \longrightarrow \sum_{l=0}^\infty (2l+1)\exp\left[i\left(\frac{l\pi}{2} + \eta_l\right)\right]\frac{\sin(kr - l\pi/2 + \eta_l)}{kr}\,P_l(\cos\chi).    (3.112)

The significant difference between this equation and the one above for the V(r) = 0 case is the addition of a phase shift \eta_l. This shift only occurs in the outgoing part of the wavefunction, and so we conclude that the primary effect of a potential in quantum scattering is to introduce a phase into the asymptotic form of the scattering wave. This phase must be a real number and has the physical interpretation illustrated in Fig. 3.5. A repulsive potential will cause a decrease in the relative velocity of the particles at small r, resulting in a longer de Broglie wavelength. This causes the wave to be "pushed out" relative to that for V = 0, and the phase shift is negative. An attractive potential produces a positive phase shift and "pulls" the wavefunction in a bit. Furthermore, the centrifugal part produces a negative shift of -l\pi/2.
Comparing the various forms for the asymptotic waves, we can deduce that the scattering amplitude is given by

f(\chi) = \frac{1}{2ik}\sum_{l=0}^\infty (2l+1)\left(e^{2i\eta_l} - 1\right)P_l(\cos\chi).    (3.113)

From this, the differential cross-section is

I(\chi) = \lambda^2\left|\sum_{l=0}^\infty (2l+1)\,e^{i\eta_l}\sin(\eta_l)\,P_l(\cos\chi)\right|^2.    (3.114)
What we see here is the possibility for interference between different angular momentum components. Moving forward at this point requires some rather sophisticated treatments, which we reserve for a later course. However, we can use the semiclassical methods developed in this chapter to estimate the phase shifts.
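Given a set of phase shifts, Eq. 3.114 is just a finite sum to evaluate. The sketch below (my own illustration; the phase-shift values are made up purely to show the bookkeeping, not computed from any potential) assembles the partial-wave series:

```python
# Sketch: differential cross-section from a set of phase shifts via Eq. 3.114.
import numpy as np
from scipy.special import eval_legendre

k = 1.0                                         # incident wave vector (arbitrary)
lam = 1.0 / k
eta = np.array([1.2, 0.8, 0.4, 0.15, 0.05])     # eta_l for l = 0..4 (illustrative only)

def dsigma_domega(chi):
    l = np.arange(len(eta))
    terms = (2 * l + 1) * np.exp(1j * eta) * np.sin(eta) * eval_legendre(l, np.cos(chi))
    return lam**2 * np.abs(terms.sum())**2

for chi in np.linspace(0.1, np.pi, 8):
    print(chi, dsigma_domega(chi))
```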
3.4.4 Semiclassical evaluation of phase shifts

The exact scattering wave is not so important. What is important is the asymptotic extent of the wavefunction, since that is the part which carries the information from the scattering center to the detector. What we want is a measure of the shift in phase between a scattering event with and without the potential. From the WKB treatment above, we know that the phase is related to the classical action along a given path. Thus, in computing the semiclassical phase shifts, we are really looking at the difference between the classical actions for a system with the potential switched on and a system with the potential switched off:

\eta_l^{SC} = \lim_{R\to\infty}\left[\int_{r_c}^R \frac{dr}{\lambda(r)} - \int_b^R \frac{dr}{\lambda(r)}\right],    (3.115)

where the second integral is evaluated with V = 0.
Figure 3.5: Form of the radial wave for repulsive (short dashed) and attractive (long dashed) potentials. The form for V = 0 is the solid curve for comparison.
Here R is the radius of a sphere about the scattering center, and \lambda(r) is the de Broglie wavelength

\lambda(r) = \frac{\hbar}{p} = \frac{1}{k(r)} = \frac{\hbar}{\mu v\left(1 - V(r)/E - b^2/r^2\right)^{1/2}}    (3.116)

associated with the radial motion. Putting this together:
\eta_l^{SC} = \lim_{R\to\infty} k\left[\int_{r_c}^R\left(1 - \frac{V(r)}{E} - \frac{b^2}{r^2}\right)^{1/2}dr - \int_b^R\left(1 - \frac{b^2}{r^2}\right)^{1/2}dr\right]    (3.117)
            = \lim_{R\to\infty}\left[\int_{r_c}^R k(r)\,dr - k\int_b^R\left(1 - \frac{b^2}{r^2}\right)^{1/2}dr\right]    (3.118)

(k is the incoming wave vector.) The last integral we can evaluate:

k\int_b^R \frac{(r^2 - b^2)^{1/2}}{r}\,dr = k\left[(r^2 - b^2)^{1/2} - b\cos^{-1}\frac{b}{r}\right]_b^R = kR - \frac{kb\pi}{2}    (3.119)

in the limit of large R.
Now, to clean things up a bit, we add and subtract an integral over k. (We do this to get rid of the R dependence, which will cause problems when we take the limit R \to \infty.)

\eta_l^{SC} = \lim_{R\to\infty}\left[\int_{r_c}^R k(r)\,dr - \int_{r_c}^R k\,dr + \int_{r_c}^R k\,dr - \left(kR - \frac{kb\pi}{2}\right)\right]    (3.120)
            = \int_{r_c}^\infty \left(k(r) - k\right)dr - k\left(r_c - \frac{b\pi}{2}\right)    (3.121)
            = \int_{r_c}^\infty \left(k(r) - k\right)dr - kr_c + \frac{\pi(l + 1/2)}{2}.    (3.122)

This last expression is the standard form of the phase shift.
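The phase-shift integral in Eq. 3.122 is straightforward to evaluate numerically. The sketch below (my own illustration; the Lennard-Jones potential, its parameters, and the collision energy are assumptions, not from the notes) computes \eta_l^{SC} for a few values of l:

```python
# Sketch: semiclassical phase shifts, Eq. 3.122, for an assumed Lennard-Jones potential.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, mu = 1.0, 1.0
eps, sigma, E = 0.3, 1.0, 1.0
k = np.sqrt(2.0 * mu * E) / hbar
V = lambda r: 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def eta_sc(l):
    b = (l + 0.5) / k                                        # Eq. 3.111
    F = lambda r: 1.0 - V(r) / E - (b / r)**2
    r_grid = np.linspace(30.0, 0.3, 6000)                    # find outermost turning point
    rc = next(brentq(F, r2, r1) for r1, r2 in zip(r_grid[:-1], r_grid[1:])
              if F(r1) > 0.0 and F(r2) <= 0.0)
    kr = lambda r: k * np.sqrt(max(F(r), 0.0))               # local wave vector k(r)
    integral, _ = quad(lambda r: kr(r) - k, rc, np.inf, limit=400)
    return integral - k * rc + 0.5 * np.pi * (l + 0.5)

for l in range(0, 12, 2):
    print(l, eta_sc(l))
```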
The deflection angle can be determined in a similar way:

\chi = \lim_{R\to\infty}\left[\left(\pi - 2\int d\theta\right)_{\text{actual path}} - \left(\pi - 2\int d\theta\right)_{V=0\ \text{path}}\right].    (3.123)

We transform this into an integral over r:

\chi = -2b\left[\int_{r_c}^\infty\left(1 - \frac{V(r)}{E} - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2} - \int_b^\infty\left(1 - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2}\right].    (3.124)
Agreed, this is a weird way to express the scattering angle. But let's keep pushing forward. The last integral can be evaluated:

\int_b^\infty\left(1 - \frac{b^2}{r^2}\right)^{-1/2}\frac{dr}{r^2} = \left[\frac{1}{b}\cos^{-1}\frac{b}{r}\right]_b^\infty = \frac{\pi}{2b},    (3.125)

which yields the classical result we obtained before. So, why did we bother? From this we can derive a simple and useful connection between the classical deflection angle and the rate of change of the semiclassical phase shift with angular momentum, d\eta_l^{SC}/dl. First, recall the Leibniz rule for taking derivatives of integrals:

\frac{d}{dx}\int_{a(x)}^{b(x)} f(x,y)\,dy = \frac{db}{dx}f(x,b(x)) - \frac{da}{dx}f(x,a(x)) + \int_{a(x)}^{b(x)}\frac{\partial f(x,y)}{\partial x}\,dy.    (3.126)

Taking the derivative of \eta_l^{SC} with respect to l, using the last equation and the relation (\partial b/\partial l)_E = 1/k, we find that

\frac{d\eta_l^{SC}}{dl} = \frac{\chi}{2}.    (3.127)
Next, we examine the differential cross-section, I(\chi). The scattering amplitude is

f(\chi) = \frac{\lambda}{2i}\sum_{l=0}^\infty (2l+1)\,e^{2i\eta_l}\,P_l(\cos\chi),    (3.128)

where we use \lambda = 1/k and exclude the singular point \chi = 0, since this contributes nothing to the total flux.
Now, we need a mathematical identity to take this to the semiclassical limit, where the potential varies slowly with wavelength. What we do first is relate the Legendre polynomial P_l(\cos\theta) to a zeroth-order Bessel function for small values of \theta (\theta \ll 1),

P_l(\cos\theta) = J_0((l + 1/2)\theta).    (3.129)

Now, when x = (l + 1/2)\theta \gg 1 (i.e. large angular momentum), we can use the asymptotic expansion of J_0(x),

J_0(x) \to \left(\frac{2}{\pi x}\right)^{1/2}\sin\left(x + \frac{\pi}{4}\right).    (3.130)

Pulling this together,

P_l(\cos\theta) \to \left(\frac{2}{\pi(l+1/2)\theta}\right)^{1/2}\sin\left((l+1/2)\theta + \pi/4\right) \approx \left(\frac{2}{\pi(l+1/2)}\right)^{1/2}\frac{\sin\left((l+1/2)\theta + \pi/4\right)}{(\sin\theta)^{1/2}}    (3.131)
for \theta(l + 1/2) \gg 1. Thus, we can write the semi-classical scattering amplitude as

f(\chi) = -\lambda\sum_{l=0}^\infty\left(\frac{l + 1/2}{2\pi\sin\chi}\right)^{1/2}\left[e^{i\phi^+} + e^{i\phi^-}\right],    (3.132)

where

\phi^\pm = 2\eta_l \pm (l + 1/2)\chi \pm \frac{\pi}{4}.    (3.133)

The phases are rapidly oscillating functions of l. Consequently, the majority of the terms must cancel, and the sum is determined by the ranges of l for which either \phi^+ or \phi^- is extremized (stationary phase). This implies that the scattering amplitude is determined almost exclusively by phase shifts which satisfy

2\frac{d\eta_l}{dl} \pm \chi = 0,    (3.134)

where the + is for d\phi^+/dl = 0 and the - is for d\phi^-/dl = 0. This demonstrates that only the phase shifts corresponding to the classical impact parameter b contribute significantly to the differential cross-section in the semi-classical limit. Thus, the classical condition for scattering at a given deflection angle \chi is that l be large enough for Eq. 3.134 to apply.
3.4.5 Resonance Scattering<br />
3.5 Problems and Exercises<br />
Exercise 3.6 In this problem we will (again) consider the ammonia inversion problem; this time we will proceed in a semi-classical context.

Recall that the ammonia inversion potential consists of two symmetrical potential wells separated by a barrier. If the barrier were impenetrable, one would find energy levels corresponding to motion in one well or the other. Since the barrier is not infinite, there can be passage between wells via tunneling. This causes the otherwise degenerate energy levels to split.

In this problem, we will make life a bit easier by taking

V(x) = \alpha(x^4 - x^2),

as in the examples in Chapter 5.

Let \psi_o be the semi-classical wavefunction describing the motion in one well with energy E_o. Assume that \psi_o is exponentially damped on both sides of the well and that the wavefunction is normalized so that the integral over \psi_o^2 is unity. When tunneling is taken into account, the wavefunctions corresponding to the new energy levels, E_1 and E_2, are the symmetric and antisymmetric combinations of \psi_o(x) and \psi_o(-x),

\psi_1 = (\psi_o(x) + \psi_o(-x))/\sqrt{2},
\psi_2 = (\psi_o(x) - \psi_o(-x))/\sqrt{2},

where \psi_o(-x) can be thought of as the contribution from the zeroth-order wavefunction in the other well. In well 1, \psi_o(-x) is very small, and in well 2, \psi_o(+x) is very small, so the product \psi_o(x)\psi_o(-x) is vanishingly small everywhere. Also, by construction, \psi_1 and \psi_2 are normalized.
1. Assume that \psi_o and \psi_1 are solutions of the Schrödinger equations

\psi_o'' + \frac{2m}{\hbar^2}(E_o - V)\psi_o = 0

and

\psi_1'' + \frac{2m}{\hbar^2}(E_1 - V)\psi_1 = 0.

Multiply the former by \psi_1 and the latter by \psi_o, combine and subtract equivalent terms, and integrate over x from 0 to \infty to show that

E_1 - E_o = -\frac{\hbar^2}{m}\psi_o(0)\psi_o'(0).

Perform a similar analysis to show that

E_2 - E_o = +\frac{\hbar^2}{m}\psi_o(0)\psi_o'(0).
2. Show that the unperturbed semiclassical wavefunction is

\psi_o(0) = \sqrt{\frac{\omega}{2\pi v_o}}\exp\left[-\frac{1}{\hbar}\int_0^a |p|\,dx\right]

and

\psi_o'(0) = \frac{m v_o}{\hbar}\psi_o(0),

where v_o = \sqrt{2(E_o - V(0))/m} and a is the classical turning point at E_o = V(a).
3. Combining your results, show that the tunneling splitting is

\Delta E = \frac{\hbar\omega}{\pi}\exp\left[-\frac{1}{\hbar}\int_{-a}^{+a}|p|\,dx\right],

where the integral is taken between the classical turning points on either side of the barrier.

4. Assuming that the potential in the barrier is an upside-down parabola,

V(x) \approx V_o - kx^2/2,

what is the tunneling splitting?

5. Now, taking \alpha = 0.1, expand the potential about the barrier and determine the harmonic force constant for the upside-down parabola. Using the equations you derived, compute the tunneling splitting for a proton in this well. How does this compare with the calculations presented in Chapter 5?
Chapter 4<br />
Postulates of Quantum Mechanics
When I hear the words "Schrödinger's cat", I wish I were able to reach for my gun.
Stephen Hawking
The dynamics of physical processes at the microscopic level is very much beyond the realm of our macroscopic comprehension. In fact, it is difficult to imagine what it is like to move about on the length and time scales for which quantum mechanics is important. However, for molecules, quantum mechanics is an everyday reality. Thus, in order to understand how molecules move and behave, we must develop a model of that dynamics in terms which we can comprehend. Making a model means developing a consistent mathematical framework in which the mathematical operations and constructs mimic the physical processes being studied.

Before moving on to develop the mathematical framework required for quantum mechanics, let us consider a simple thought experiment. We could do the experiment; however, we would have to deal with some additional technical terms, like funding. The experiment I want to consider goes as follows: take a machine gun which shoots bullets at a target. It's not a very accurate gun; in fact, it sprays bullets randomly in the general direction of the target.
The distribution of bullets, or histogram of the amount of lead accumulated in the target, is roughly a Gaussian, C\exp(-x^2/a). The probability of finding a bullet at x is given by

P(x) = Ce^{-x^2/a}.    (4.1)

Here C is a normalization factor such that the probability of finding a bullet anywhere is 1, i.e.

\int_{-\infty}^\infty dx\,P(x) = 1.    (4.2)

The probability of finding a bullet in a small interval between a and b is

\int_a^b dx\,P(x) > 0.    (4.3)
Now suppose we place a bunker with 2 windows between the machine gun and the target, such that the bunker is thick enough that the bullets coming through the windows rattle around a few times before emerging in random directions. Also, let's suppose we can "color" the bullets by some magical (or mundane) means such that bullets going through one slit are colored "red" and
Figure 4.1: Gaussian distribution function

Figure 4.2: Combination of two distributions.
bullets going through the other slit are colored "blue". Thus the distribution of bullets at a target behind the bunker is now

P_{12}(x) = P_1(x) + P_2(x),    (4.4)

where P_1 is the distribution of bullets from window 1 (the red bullets) and P_2 that from window 2 (the blue bullets). Thus, the probability of finding a bullet that passed through either 1 or 2 is the sum of the probabilities of going through 1 and 2. This is shown in Fig. 4.2.
Now, let's make an "electron gun" by taking a tungsten filament heated up so that electrons boil off and can be accelerated toward a phosphor screen after passing through a metal foil with a pinhole in the middle. We start to see little pinpoints of light flicker on the screen; these are the individual electron "bullets" crashing into the phosphor.

If we count the number of electrons which strike the screen over a period of time, just as in the machine gun experiment, we get a histogram as before. The reason we get a histogram is slightly different than before: if we make the pinhole smaller, the distribution gets wider. This is a manifestation of the Heisenberg Uncertainty Principle, which states

\Delta x\cdot\Delta p \ge \hbar/2.    (4.5)

In other words, the more I restrict where the electron can be (via the pinhole), the more uncertain I am about which direction it is going (i.e. its momentum parallel to the foil). Thus, I wind up with a distribution of momenta leaving the foil.
Now, let's poke another hole in the foil and consider the distribution of electrons on the screen. Based upon our experience with bullets, we would expect

P_{12} = P_1 + P_2.    (4.6)

BUT electrons obey quantum mechanics! And in quantum mechanics we represent a particle by an amplitude, and one of the rules of quantum mechanics is that we first add amplitudes and that probabilities are akin to the intensity of the combined amplitude, i.e.

P = |\psi_1 + \psi_2|^2,    (4.7)
Figure 4.3: Constructive and destructive interference from the electron two-slit experiment. The superimposed red and blue curves are P1 and P2 from the classical probabilities.
where \psi_1 and \psi_2 are the complex amplitudes associated with going through hole 1 and hole 2. Since they are complex numbers,

\psi_1 = a_1 + ib_1 = |\psi_1|e^{i\phi_1},    (4.8)
\psi_2 = a_2 + ib_2 = |\psi_2|e^{i\phi_2}.    (4.9)

Thus,

\psi_1 + \psi_2 = |\psi_1|e^{i\phi_1} + |\psi_2|e^{i\phi_2},    (4.10)

|\psi_1 + \psi_2|^2 = \left(|\psi_1|e^{i\phi_1} + |\psi_2|e^{i\phi_2}\right)\left(|\psi_1|e^{-i\phi_1} + |\psi_2|e^{-i\phi_2}\right),    (4.11)

P_{12} = |\psi_1|^2 + |\psi_2|^2 + 2|\psi_1||\psi_2|\cos(\phi_1 - \phi_2),    (4.12)

P_{12} = P_1 + P_2 + 2\sqrt{P_1P_2}\,\cos(\phi_1 - \phi_2).    (4.13)
In other words, I get the same envelope as before, but it is modulated by the \cos(\phi_1 - \phi_2) "interference" term. This is shown in Fig. 4.3, where the actual experimental data is shown as a dashed line and the red and blue curves are P_1 and P_2. It is just as if a wave of electrons struck the two slits and diffracted (or interfered) with itself. However, we know that electrons come in definite chunks: we can observe individual specks on the screen, and only whole lumps arrive. There are no fractional electrons.
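A short numerical sketch of Eq. 4.13 (my own illustration; the Gaussian envelopes and the linear phase model are assumptions, not data) makes the point concrete: the interference term modulates, and locally exceeds, the classical sum P_1 + P_2.

```python
# Sketch of Eq. 4.13: two single-slit envelopes plus the interference term.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
a = 8.0
P1 = np.exp(-(x - 2.0)**2 / a)          # classical pattern from slit 1 alone
P2 = np.exp(-(x + 2.0)**2 / a)          # classical pattern from slit 2 alone
dphi = 2.5 * x                          # assumed phase difference phi1 - phi2 across the screen

P_classical = P1 + P2                                          # Eq. 4.4 (bullets)
P_quantum = P1 + P2 + 2.0 * np.sqrt(P1 * P2) * np.cos(dphi)    # Eq. 4.13 (electrons)

print(P_classical.max(), P_quantum.max())   # the fringes overshoot the classical envelope
```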
Conjecture 1 Electrons, being indivisible chunks of matter, either go through slit 1 or slit 2.

Assuming Conjecture 1 is true, we can divide the electrons into two classes:

1. Those that go through slit 1.
2. Those that go through slit 2.<br />
We can check this conjecture by plugging up hole 1, in which case we get P_2 as the resulting distribution; plugging up hole 2, we get P_1. Perhaps our conjecture is wrong and electrons can be split in half, with half going through slit 1 and half through slit 2. No! Perhaps the electron went through slit 1, wound about, went through slit 2, and by some round-about way made its way to the screen.

Notice that in the center region of P_{12}, P_{12} > P_1 + P_2, as if closing one hole actually decreased the number of electrons going through the other hole. It seems very hard to justify both observations by proposing that the electrons travel in complicated pathways.
In fact, it is very mysterious. And the more you study quantum mechanics, the more mysterious<br />
it seems. Many ideas have been cooked up which try to get the P12 curve in terms <strong>of</strong><br />
electrons going in complicated paths–all have failed.<br />
Surprisingly, the math is simple (in this case). It’s just adding complex valued amplitudes.<br />
So we conclude the following:<br />
Electrons always arrive in discrete, indivisible chunks–like particles. However, the<br />
probability <strong>of</strong> finding a chunk at a given position is like the distribution <strong>of</strong> the intensity<br />
<strong>of</strong> a wave.<br />
We could conclude that our conjecture is false since P12 �= P1 + P2. This we can test.<br />
Let’s put a laser behind the slits so that an electron going through either slit scatters a bit<br />
<strong>of</strong> light which we can detect. So, we can see flashes <strong>of</strong> light from electrons going through slit<br />
1, flashes <strong>of</strong> light from electrons going through slit 2, but NEVER two flashes at the same<br />
time. Conjecture 1 is true. But if we look at the resulting distribution: we get P12 = P1 + P2.<br />
Measuring which slit the electron passes through destroys the phase information. When we make
a measurement in quantum mechanics, we really disturb the system. There is always the same<br />
amount <strong>of</strong> disturbance because electrons and photons always interact in the same way every time<br />
and produce the same sized effects. These effects “rescatter” the electrons and the phase info is<br />
smeared out.<br />
It is totally impossible to devise an experiment to measure any quantum phenomenon without disturbing the system you're trying to measure. This is one of the most fundamental and perhaps
most disturbing aspects <strong>of</strong> quantum mechanics.<br />
So, once we have accepted the idea that matter comes in discrete bits but that its behavior
is much like that <strong>of</strong> waves, we have to adjust our way <strong>of</strong> thinking about matter and dynamics<br />
away from the classical concepts we are used to dealing with in our ordinary life.<br />
These are the basic building blocks of quantum mechanics. Needless to say, they are stated in a rather formal language. However, each postulate has a specific physical reason for its existence. For any physical theory, we need to be able to say what the system is, how it moves, and what are the possible outcomes of a measurement. These postulates provide a sufficient basis for
the development <strong>of</strong> a consistent theory <strong>of</strong> quantum mechanics.<br />
4.0.1 The description <strong>of</strong> a physical state:<br />
The state <strong>of</strong> a physical system at time t is defined by specifying a vector |ψ(t)〉 belonging to a<br />
state space H. We shall assume that this state vector can be normalized to one:<br />
〈ψ|ψ〉 = 1<br />
4.0.2 Description <strong>of</strong> Physical Quantities:<br />
Every measurable physical quantity, A, is described by an operator acting in H; this operator is<br />
an observable.<br />
A consequence of this is that any operator related to a physical observable must be Hermitian. This we can prove. Hermitian means that
〈x|O|y〉 = 〈y|O|x〉*    (4.14)
Thus, if O is a Hermitian operator and 〈O〉 = 〈ψ|O|ψ〉 = λ〈ψ|ψ〉, then writing |ψ〉 = |x〉 + |y〉,
〈O〉 = 〈x|O|x〉 + 〈x|O|y〉 + 〈y|O|x〉 + 〈y|O|y〉.    (4.15)
Likewise,
〈O〉* = 〈x|O|x〉* + 〈x|O|y〉* + 〈y|O|x〉* + 〈y|O|y〉*
     = 〈x|O|x〉 + 〈y|O|x〉 + 〈x|O|y〉 + 〈y|O|y〉
     = 〈O〉    (4.16)
     = λ,    (4.17)
so the expectation value of a Hermitian operator is real. If O is Hermitian, we can also write
〈ψ|O = λ〈ψ|.    (4.18)
which shows that 〈ψ| is an eigenbra <strong>of</strong> O with real eigenvalue λ. Therefore, for an arbitrary ket,<br />
〈ψ|O|φ〉 = λ〈ψ|φ〉 (4.19)<br />
Now, consider eigenvectors of a Hermitian operator, |ψ〉 and |φ〉. Obviously we have:
O|ψ〉 = λ|ψ〉    (4.20)
O|φ〉 = µ|φ〉    (4.21)
Since O is Hermitian, we also have
〈ψ|O = λ〈ψ|    (4.22)
〈φ|O = µ〈φ|    (4.23)
Thus, we can write:
〈φ|O|ψ〉 = λ〈φ|ψ〉    (4.24)
〈φ|O|ψ〉 = µ〈φ|ψ〉    (4.25)
Subtracting the two: (λ − µ)〈φ|ψ〉 = 0. Thus, if λ ≠ µ, |ψ〉 and |φ〉 must be orthogonal.
4.0.3 <strong>Quantum</strong> Measurement:<br />
The only possible result <strong>of</strong> the measurement <strong>of</strong> a physical quantity is one <strong>of</strong> the eigenvalues <strong>of</strong><br />
the corresponding observable. To any physical observable we ascribe an operator, O. The result<br />
<strong>of</strong> a physical measurement must be an eigenvalue, a. With each eigenvalue, there corresponds<br />
an eigenstate of O, |φa〉. This function is such that if the state vector |ψ(t◦)〉 = |φa〉, where t◦ corresponds to the time at which the measurement was performed, then O|ψ〉 = a|ψ〉 and the measurement will yield a.
Suppose the state-function <strong>of</strong> our system is not an eigenfunction <strong>of</strong> the operator we are<br />
interested in. Using the superposition principle, we can write an arbitrary state function as a<br />
linear combination <strong>of</strong> eigenstates <strong>of</strong> O<br />
|ψ(t◦)〉 = Σ_a 〈φa|ψ(t◦)〉|φa〉 = Σ_a ca|φa〉,    (4.26)
where the sum is over all eigenstates of O. Thus, the probability of observing answer a is |ca|². IF the measurement DOES INDEED YIELD ANSWER a, the wavefunction of the system an infinitesimal instant after the measurement must be in an eigenstate of O:
|ψ(t◦⁺)〉 = |φa〉.    (4.27)
4.0.4 The Principle of Spectral Decomposition:
For a discrete, non-degenerate spectrum: When the physical quantity, A, is measured on a system in a normalized state |ψ〉, the probability P(an) of obtaining the non-degenerate eigenvalue an of the corresponding observable is given by
P(an) = |〈un|ψ〉|²    (4.28)
where |un〉 is a normalized eigenvector of A associated with the eigenvalue an, i.e.
A|un〉 = an|un〉.
For a discrete, degenerate spectrum: the same principle applies, except we sum over the gn degenerate eigenvectors associated with an:
P(an) = Σ_{i=1}^{gn} |〈unⁱ|ψ〉|².
Finally, for the case of a continuous spectrum: the probability of obtaining a result between α and α + dα is
dPα = |〈α|ψ〉|² dα.
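As a quick numerical illustration of these rules (a sketch only; the 4×4 observable and the test state below are arbitrary choices, not anything from the text), one can expand a normalized state in the eigenbasis of a Hermitian matrix and check that the probabilities |ca|² sum to one and reproduce 〈O〉:

import numpy as np
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
O = (A + A.conj().T)/2                       # a Hermitian "observable"
evals, U = np.linalg.eigh(O)                 # columns of U are the eigenvectors |u_a>
psi = rng.standard_normal(4) + 1j*rng.standard_normal(4)
psi /= np.linalg.norm(psi)                   # normalized state
c = U.conj().T @ psi                         # coefficients c_a = <u_a|psi>
P = np.abs(c)**2                             # measurement probabilities
print(P.sum())                               # 1.0
print(np.vdot(psi, O @ psi).real, (P*evals).sum())   # <O> computed two ways agree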
4.0.5 The Superposition Principle<br />
Let's formalize the above discussion a bit and write the electron's state as |ψ〉 = a|1〉 + b|2〉, where |1〉 and |2〉 are "basis states" corresponding to the electron passing through slit 1 or 2. The coefficients a and b are just the complex numbers ψ1 and ψ2 written above. This |ψ〉 is a vector in a 2-dimensional complex space with unit length since |ψ1|² + |ψ2|² = 1.¹
Let us define a Vector Space by defining a set of objects {|ψ〉}, an addition rule |φ〉 = |ψ〉 + |ψ′〉 which allows us to construct new vectors, and a scalar multiplication rule |φ〉 = a|ψ〉 which scales the length of a vector. A non-trivial example of a vector space is the x, y plane. Adding two vectors gives another vector also in the x, y plane, and multiplying a vector by a constant gives another vector pointed in the same direction but with a new length.
The inner product of two vectors is written as
〈φ|ψ〉 = (φx* φy*) (ψx, ψy)ᵀ    (4.29)
       = φx*ψx + φy*ψy    (4.30)
       = 〈ψ|φ〉*.    (4.31)
The length of a vector is just the inner product of the vector with itself, i.e. 〈ψ|ψ〉 = 1 for the state vector we defined above.
The basis vectors for the slits can be used as a basis for an arbitrary state |ψ〉 by writing it<br />
as a linear combination <strong>of</strong> the basis vectors.<br />
|ψ〉 = ψ1|1〉 + ψ2|2〉    (4.32)
In fact, any vector in the vector space can always be written as a linear combination <strong>of</strong> basis<br />
vectors. This is the superposition principle.<br />
The different ways of writing the vector |ψ〉 are termed representations. Often it is easier to work in one representation than another, knowing full well that one can always switch back and forth at will. Each different basis defines a unique representation. An example of a representation is the set of unit vectors on the x, y plane. We can also define another orthonormal representation of the x, y plane by introducing the unit vectors |r〉, |θ〉, which define a polar coordinate system. One can write the vector v = a|x〉 + b|y〉 as v = √(a² + b²)|r〉 + tan⁻¹(b/a)|θ〉, or as v = r cos θ|x〉 + r sin θ|y〉, and be perfectly correct. Usually experience and insight are the only way to determine a priori which basis (or representation) best suits the problem at hand.
Transforming between representations is accomplished by first defining an object called an<br />
operator which has the form:<br />
I = Σ_i |i〉〈i|.    (4.33)
The sum means "sum over all members of a given basis". For the xy basis,
I = |x〉〈x| + |y〉〈y|    (4.34)
1 The notation we are introducing here is known as “bra-ket” notation and was invented by Paul Dirac. The<br />
vector |ψ〉 is called a “ket”. The corresponding “bra” is the vector 〈ψ| = (ψ ∗ xψ ∗ y), where the ∗ means complex<br />
conjugation. The notation is quite powerful and we shall use it extensively throughout this course.
This operator is called the "idempotent" operator and is similar to multiplying by 1. For example,
I|ψ〉 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉    (4.35)
     = ψ1|1〉 + ψ2|2〉    (4.36)
     = |ψ〉    (4.37)
We can also write the following:
|ψ〉 = |1〉〈1|ψ〉 + |2〉〈2|ψ〉    (4.38)
The state <strong>of</strong> a system is specified completely by the complex vector |ψ〉 which can be written<br />
as a linear superposition <strong>of</strong> basis vectors spanning a complex vector space (Hilbert space). Inner<br />
products <strong>of</strong> vectors in the space are as defined above and the length <strong>of</strong> any vector in the space<br />
must be finite.<br />
Note that for state vectors in continuous representations, the inner product relation can be written as an integral:
〈φ|ψ〉 = ∫ dq φ*(q) ψ(q)    (4.39)
and normalization is given by
〈ψ|ψ〉 = ∫ dq |ψ(q)|² < ∞.    (4.40)
The functions, ψ(q) are termed square integrable because <strong>of</strong> the requirement that the inner<br />
product integral remain finite. The physical motivation for this will become apparent in a<br />
moment when we ascribe physical meaning to the mathematical objects we are defining. The class
<strong>of</strong> functions satisfying this requirement are also known as L 2 functions. (L is for Lebesgue,<br />
referring to the class <strong>of</strong> integral.)<br />
The action of the laser can also be represented mathematically as an object of the form
P1 = |1〉〈1|    (4.41)
and
P2 = |2〉〈2|    (4.42)
and note that P1 + P2 = I.
When P1 acts on |ψ〉 it projects out only the |1〉 component of |ψ〉:
P1|ψ〉 = ψ1|1〉.    (4.43)
The expectation value of an operator is formed by writing:
〈P1〉 = 〈ψ|P1|ψ〉    (4.44)
Let's evaluate this:
〈P1〉 = 〈ψ|1〉〈1|ψ〉 = |ψ1|²    (4.45)
Similarly for P2.<br />
Part of our job is to ensure that the operators which we define have physical counterparts. We defined the projection operator P1 = |1〉〈1| knowing that the physical polarization filter removed all "non-|1〉" components of the wave. We could have also written it in another basis; the math would have been slightly more complex, but the result the same. |ψ1|² is a real number which we presumably could set out to measure in a laboratory.
4.0.6 Reduction <strong>of</strong> the wavepacket:<br />
If a measurement <strong>of</strong> a physical quantity A on the system in the state |ψ〉 yields the result, an,<br />
the state <strong>of</strong> the physical system immediately after the measurement is the normalized projection<br />
Pn|ψ〉 onto the eigen subspace associated with an.<br />
In more plain language, if you observe the system at x, then it is at x. This is perhaps the most controversial postulate since it implies that the act of observing the system somehow changes the state of the system.
Suppose the state-function <strong>of</strong> our system is not an eigenfunction <strong>of</strong> the operator we are<br />
interested in. Using the superposition principle, we can write an arbitrary state function as a<br />
linear combination <strong>of</strong> eigenstates <strong>of</strong> O<br />
|ψ(t◦)〉 = Σ_a 〈φa|ψ(t◦)〉|φa〉 = Σ_a ca|φa〉,    (4.46)
where the sum is over all eigenstates of O. Thus, the probability of observing answer a is |ca|². IF the measurement DOES INDEED YIELD ANSWER a, the wavefunction of the system an infinitesimal instant after the measurement must be in an eigenstate of O:
|ψ(t◦⁺)〉 = |φa〉.    (4.47)
This postulate is a bit touchy since it deals with the reduction of the wavepacket as the result of a measurement. On one hand, you could simply accept this as the way one goes about business and simply state that quantum mechanics is an algorithm for predicting the outcome of experiments and that's that. It says nothing about the inner workings of the universe. This is what is known as the "Reductionist" view point. In essence, the Reductionist view point simply wants to know the answer: "How many?", "How wide?", "How long?".
On the other hand, in the Holistic view, quantum mechanics is the underlying physical theory of the universe, and the process of measurement does play an important role in how the universe works. In other words, the Holist wants the (w)hole picture.
The Reductionist vs. Holist argument has been the subject <strong>of</strong> numerous articles and books<br />
in both the popular and scholarly arenas. We may return to the philosophical discussion, but<br />
for now we will simply take a reductionist view point and first learn to use quantum mechanics<br />
as a way to make physical predictions.<br />
4.0.7 The temporal evolution <strong>of</strong> the system:<br />
The time evolution <strong>of</strong> the state vector is given by the Schrödinger equation<br />
iℏ ∂/∂t |ψ(t)〉 = H(t)|ψ(t)〉
where H(t) is the operator/observable associated with the total energy of the system. As we shall see, H is the Hamiltonian operator and can be obtained from the classical Hamiltonian of the system.
4.0.8 Dirac <strong>Quantum</strong> Condition<br />
One <strong>of</strong> the crucial aspects <strong>of</strong> any theory is that we need to be able to construct physical observables.<br />
Moreover, we would like to be able to connect the operators and observables in quantum<br />
mechanics to the observables in classical mechanics. At some point there must be a correspondence.<br />
This connection can be made formally by relating what is known as the Poisson bracket<br />
in classical mechanics:
{f(p, q), g(p, q)} = (∂f/∂q)(∂g/∂p) − (∂g/∂q)(∂f/∂p)    (4.48)
which looks a lot like the commutation relation between two linear operators:
[Â, B̂] = ÂB̂ − B̂Â    (4.49)
Of course, f(p, q) and g(p, q) are functions over the classical position and momentum <strong>of</strong> the<br />
physical system. For position and momentum, it is easy to show that the classical Poisson<br />
bracket is<br />
{q, p} = 1.<br />
Moreover, the quantum commutation relation between the observable x and p is<br />
[ˆx, ˆp] = i¯h.<br />
Dirac proposed that the two are related and that this relation defines an acceptable set of quantum operators.
The quantum mechanical operators f̂ and ĝ, which in quantum theory replace the classically defined functions f and g, must always be such that the commutator of f̂ and ĝ corresponds to the Poisson bracket of f and g according to
iℏ{f, g} = [f̂, ĝ]    (4.50)
To see how this works, we write the momentum operator as
p̂ = (ℏ/i) ∂/∂x    (4.51)
Thus,
p̂ψ(x) = (ℏ/i) ∂ψ(x)/∂x    (4.52)
Let's see if x̂ and p̂ commute. First of all,
(∂/∂x)(x f(x)) = f(x) + x f′(x)    (4.53)
Thus,
[x̂, p̂]f(x) = (ℏ/i)( x (∂/∂x)f(x) − (∂/∂x)(x f(x)) )
           = (ℏ/i)( x f′(x) − f(x) − x f′(x) )
           = iℏ f(x)    (4.54)
The fact that x̂ and p̂ do not commute has a rather significant consequence: if two operators do not commute, one cannot devise an experiment to simultaneously measure the physical quantities associated with each operator. This in fact limits the precision with which we can perform any physical measurement.
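The commutator algebra above can also be checked symbolically. Here is a minimal SymPy sketch (an illustration, not part of the original notes) that applies x̂ and p̂ = (ℏ/i)∂/∂x to an arbitrary test function:

import sympy as sp
x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)
p_hat = lambda g: (hbar/sp.I)*sp.diff(g, x)   # momentum operator acting on a function of x
x_hat = lambda g: x*g                          # position operator
comm = x_hat(p_hat(f)) - p_hat(x_hat(f))       # [x, p] acting on f(x)
print(sp.simplify(comm))                       # I*hbar*f(x), i.e. Eq. (4.54)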
The principle result <strong>of</strong> the postulates is that the wavefunction or state vector <strong>of</strong> the system<br />
carries all the physical information we can obtain regarding the system and allows us to make<br />
predictions regarding the probable outcomes <strong>of</strong> any experiment. As you may well know, if one<br />
makes a series of experimental measurements on identically prepared systems, one obtains a
distribution <strong>of</strong> results–usually centered about some peak in the distribution.<br />
When we report data, we usually don't report the result of every single experiment. For a spectroscopy experiment, we may have made upwards of a million individual measurements, all distributed about some average value. From statistics, we know that the average of any distribution is the expectation value of some quantity, in this case x:
E(x) = ∫ P(x) x dx    (4.55)
For the case of a discrete spectrum, we would write
E[h] = Σ_n hn Pn    (4.56)
where hn is some value and Pn the number of times you got that value, normalized so that Σ_n Pn = 1. In the language above, the hn's are the possible eigenvalues of the h operator.
A similar relation holds in quantum mechanics:
Postulate 4.1 Observable quantities are computed as the expectation value <strong>of</strong> an operator 〈O〉 =<br />
〈ψ|O|ψ〉. The expectation value <strong>of</strong> an operator related to a physical observable must be real.<br />
For example, the expectation value of x̂, the position operator, is computed by the integral
〈x〉 = ∫_{−∞}^{+∞} ψ*(x) x ψ(x) dx,
or for the discrete case:
〈O〉 = Σ_n on |〈n|ψ〉|².
Of course, simply reporting the average or expectation values <strong>of</strong> an experiment is not enough,<br />
the data is usually distributed about either side <strong>of</strong> this value. If we assume the distribution is<br />
Gaussian, then we have the position <strong>of</strong> the peak center xo = 〈x〉 as well as the width <strong>of</strong> the<br />
Gaussian σ 2 .<br />
The mean-squared width or uncertainty of any measurement can be computed by taking
σ²_A = 〈(A − 〈A〉)²〉.
In statistical mechanics, this is the fluctuation about the average of some physical quantity, A. In quantum mechanics, we can push this definition a bit further.
Writing the uncertainty relation as
σ²_A = 〈(A − 〈A〉)(A − 〈A〉)〉    (4.57)
     = 〈ψ|(A − 〈A〉)(A − 〈A〉)|ψ〉    (4.58)
     = 〈f|f〉    (4.59)
where the new vector |f〉 is simply shorthand for |f〉 = (A − 〈A〉)|ψ〉. Likewise, for a different operator B,
σ²_B = 〈ψ|(B − 〈B〉)(B − 〈B〉)|ψ〉    (4.60)
     = 〈g|g〉.    (4.61)
We now invoke what is called the Schwartz inequality
σ²_A σ²_B = 〈f|f〉〈g|g〉 ≥ |〈f|g〉|²    (4.62)
So if we write 〈f|g〉 as a complex number z, then
|〈f|g〉|² = |z|² = ℜ(z)² + ℑ(z)² ≥ ℑ(z)² = ( (1/2i)(z − z*) )² = ( (1/2i)(〈f|g〉 − 〈g|f〉) )²    (4.63)
So we conclude
σ²_A σ²_B ≥ ( (1/2i)(〈f|g〉 − 〈g|f〉) )²    (4.64)
Now, we reinsert the definitions <strong>of</strong> |f〉 and |g〉.<br />
Likewise<br />
Combining these results, we obtain<br />
〈f|g〉 = 〈ψ|(A − 〈A〉)(B − 〈B〉)|ψ〉<br />
= 〈ψ|(AB − 〈A〉B − A〈B〉 + 〈A〉〈B〉)|ψ〉<br />
= 〈ψ|AB|ψ〉 − 〈A〉〈ψ|B|ψ〉 − 〈B〉〈ψ|A|ψ〉 + 〈A〉〈B〉<br />
= 〈AB〉 − 〈A〉〈B〉 (4.65)<br />
〈g|f〉 = 〈BA〉 − 〈A〉〈B〉. (4.66)<br />
〈f|g〉 − 〈g|f〉 = 〈AB〉 − 〈BA〉 = 〈AB − BA〉 = 〈[A, B]〉. (4.67)<br />
So we finally conclude that the general uncertainty product between any two operators is given by
σ²_A σ²_B ≥ ( (1/2i)〈[A, B]〉 )²    (4.68)
This is commonly referred to as the Generalized Uncertainty Principle. What it means is that
for any pair <strong>of</strong> observables whose corresponding operators do not commute there will always be<br />
some uncertainty in making simultaneous measurements. In essence, if you try to measure two<br />
non-commuting properties simultaneously, you cannot have an infinitely precise determination <strong>of</strong><br />
both. A precise determination <strong>of</strong> one implies that you must give up some certainty in the other.<br />
In the language <strong>of</strong> matrices and linear algebra this implies that if two matrices do not commute,<br />
then one can not bring both matrices into diagonal form using the same transformation<br />
matrix. In other words, they do not share a common set of eigenvectors. Matrices which do
commute share a common set <strong>of</strong> eigenvectors and the transformation which diagonalizes one will<br />
also diagonalize the other.<br />
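A concrete check of the inequality (again, just a sketch, using the spin-1/2 Pauli matrices as an assumed example of two non-commuting observables):

import numpy as np
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
psi = np.array([1.0, 0.0], dtype=complex)            # spin-up along z
def expval(op): return np.vdot(psi, op @ psi)
var_x = expval(sx @ sx).real - expval(sx).real**2
var_y = expval(sy @ sy).real - expval(sy).real**2
comm = sx @ sy - sy @ sx                             # = 2i*sigma_z
rhs = (expval(comm)/(2j))**2                         # ((1/2i)<[A,B]>)^2, Eq. (4.68)
print(var_x*var_y, rhs.real)                         # 1.0 >= 1.0

For this particular state the bound is saturated; for other states the left side can strictly exceed the right.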
Theorem 4.1 If two operators A and B commute and if |ψ〉 is an eigenvector <strong>of</strong> A, then B|ψ〉<br />
is also an eigenvector <strong>of</strong> A with the same eigenvalue.<br />
Proof: If |ψ〉 is an eigenvector of A, then A|ψ〉 = a|ψ〉. Thus,
BA|ψ〉 = aB|ψ〉    (4.69)
Assuming A and B commute, i.e. [A, B] = AB − BA = 0,
AB|ψ〉 = a(B|ψ〉)    (4.70)
Thus, (B|ψ〉) is an eigenvector of A with eigenvalue a.
Exercise 4.1 1. Show that matrix multiplication is associative, i.e. A(BC) = (AB)C, but<br />
not commutative (in general), i.e. BC �= CB<br />
2. Show that (A + B)(A − B) = A² − B² only if A and B commute.
3. Show that if A and B are both Hermitian matrices, AB + BA and i(AB − BA) are also<br />
Hermitian. Note that Hermitian matrices are defined such that Aij = A ∗ ji where ∗ denotes<br />
complex conjugation.<br />
4.1 Dirac Notation and Linear Algebra<br />
Part of the difficulty in learning quantum mechanics comes from the fact that one must also learn a
new mathematical language. It seems very complex from the start. However, the mathematical<br />
objects which we manipulate actually make life easier. Let’s explore the Dirac notation and the<br />
related mathematics.<br />
We have stated all along that the physical state <strong>of</strong> the system is wholly specified by the<br />
state-vector |ψ〉 and that the probability <strong>of</strong> finding a particle at a given point x is obtained via<br />
|ψ(x)| 2 . Say at some initial time |ψ〉 = |s〉 where s is some point along the x axis. Now, the<br />
amplitude to find the particle at some other point is 〈x|s〉. If something happens between the<br />
two points we write<br />
〈x|operator describing process|s〉 (4.71)<br />
The braket is always read from right to left, and we interpret this as the amplitude for "starting off at s, something happens, and winding up at x". An example of this is the Go function in the
homework. Here, I ask “what is the amplitude for a particle to start <strong>of</strong>f at x and to wind up at<br />
x ′ after some time interval t?”<br />
Another Example: Electrons have an intrinsic angular momentum called "spin". Accordingly, they have an associated magnetic moment which causes electrons to align with or against an imposed magnetic field (e.g. this gives rise to ESR). Let's say we have an electron source which produces spin up and spin down electrons with equal probability. Thus, my initial state is:
|i〉 = a|+〉 + b|−〉    (4.72)
Since I've stated that P(a) = P(b), |a|² = |b|². Also, since P(a) + P(b) = 1, we can take a = b = 1/√2 (choosing the phases to be real). Thus,
|i〉 = (1/√2)(|+〉 + |−〉)    (4.73)
Let's say that the spin ups can be separated from the spin downs via a magnetic field, B, and we filter off the spin-down states. Our new state is |i′〉 and is related to the original state by
〈i′|i〉 = a〈+|+〉 + b〈+|−〉 = a.    (4.74)
4.1.1 Transformations and Representations
If I know the amplitudes for |ψ〉 in a representation with a basis |i〉 , it is always possible to<br />
find the amplitudes describing the same state in a different basis |µ〉. Note, that the amplitude<br />
between two states will not change. For example:<br />
|a〉 = Σ_i |i〉〈i|a〉    (4.75)
and also
|a〉 = Σ_µ |µ〉〈µ|a〉    (4.76)
Therefore,
〈µ|a〉 = Σ_i 〈µ|i〉〈i|a〉    (4.77)
and
〈i|a〉 = Σ_µ 〈i|µ〉〈µ|a〉.    (4.78)
Thus, the coefficients in the |µ〉 basis are related to the coefficients in the |i〉 basis by 〈µ|i〉 = 〈i|µ〉*. Thus, we can define a transformation matrix Sµi as
Sµi = ⎡ 〈µ|i〉 〈µ|j〉 〈µ|k〉 ⎤
      ⎢ 〈ν|i〉 〈ν|j〉 〈ν|k〉 ⎥
      ⎣ 〈λ|i〉 〈λ|j〉 〈λ|k〉 ⎦    (4.79)
and a set of column vectors
ai = ⎡ 〈i|a〉 ⎤
     ⎢ 〈j|a〉 ⎥
     ⎣ 〈k|a〉 ⎦    (4.80)
aµ = ⎡ 〈µ|a〉 ⎤
     ⎢ 〈ν|a〉 ⎥
     ⎣ 〈λ|a〉 ⎦    (4.81)
Thus, we can see that
aµ = Σ_i Sµi ai    (4.82)
Now, we can also write the reverse transformation
ai = Σ_µ Siµ aµ    (4.83)
where Siµ = 〈i|µ〉 = 〈µ|i〉* = S*µi, i.e.
⎡ 〈µ|i〉* 〈µ|j〉* 〈µ|k〉* ⎤
⎢ 〈ν|i〉* 〈ν|j〉* 〈ν|k〉* ⎥
⎣ 〈λ|i〉* 〈λ|j〉* 〈λ|k〉* ⎦    (4.84)
Since 〈i|µ〉 = 〈µ|i〉*, the matrix S̄ which takes us back to the |i〉 basis is the Hermitian conjugate of S. So we write
S̄ = S†    (4.86)
So, in short,
S† = (Sᵀ)*    (4.87)
(S†)ij = S*ji    (4.88)
Transforming to the |µ〉 basis and back must leave the coefficients unchanged,
a = S̄ S a = S†S a,    (4.90)
thus
S†S = 1    (4.91)
and S is called a unitary transformation matrix.
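As a small numerical sanity check (a sketch; the rotated basis below is an arbitrary illustrative choice), one can build Sµi = 〈µ|i〉 from two orthonormal bases of C² and verify that it is unitary and transforms expansion coefficients as above:

import numpy as np
theta = 0.3
i_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
mu_basis = [np.array([np.cos(theta),  np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]
S = np.array([[np.vdot(m, i) for i in i_basis] for m in mu_basis])   # S_{mu,i} = <mu|i>
print(np.allclose(S.conj().T @ S, np.eye(2)))      # S†S = 1, Eq. (4.91)
a_i = np.array([0.2, 0.7])                          # coefficients <i|a>
a_mu = S @ a_i                                      # Eq. (4.82)
print(np.allclose(S.conj().T @ a_mu, a_i))          # back-transform, Eq. (4.83)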
4.1.2 Operators<br />
A linear operator Â maps a vector in the space H onto another vector in the same space. We can write this in a number of ways:
|φ〉 ↦ |χ〉  (under Â)    (4.92)
or
|χ〉 = Â|φ〉    (4.93)
Linear operators have the property that
Â(a|φ〉 + b|χ〉) = aÂ|φ〉 + bÂ|χ〉    (4.94)
Since superposition is rigidly enforced in quantum mechanics, all QM operators are linear operators.<br />
The Matrix Representation <strong>of</strong> an operator is obtained by writing<br />
Aij = 〈i| Â|j〉 (4.95)<br />
For example, say we know the representation of A in the |i〉 basis; we can then write
|χ〉 = Â|φ〉 = Σ_i Â|i〉〈i|φ〉 = Σ_j |j〉〈j|χ〉    (4.96)
Thus,
〈j|χ〉 = Σ_i 〈j|A|i〉〈i|φ〉    (4.97)
We can keep going if we want by continuing to insert 1's wherever we need them.
The matrix A is Hermitian if A = A†. If it is Hermitian, then I can always find a basis |µ〉 in which it is diagonal, i.e.
Aµν = aµ δµν    (4.98)
So, what is Â|µ〉?
Â|µ〉 = Σ_{ij} |i〉〈i|A|j〉〈j|µ〉    (4.99)
     = Σ_{ij} |i〉 Aij δjµ
     = Σ_i |i〉 Aiµ
     = Σ_i |i〉 aµ δiµ
     = aµ|µ〉    (4.107)
An important example of this is the "time-independent" Schrödinger Equation:
Ĥ|ψ〉 = E|ψ〉    (4.108)
which we spent some time solving above.
Finally, if Â|φ〉 = |χ〉 then 〈φ|Â† = 〈χ|.
4.1.3 Products of Operators
An operator product is defined as
(ÂB̂)|ψ〉 = Â[B̂|ψ〉]    (4.109)
where we operate in order from right to left. We proved that in general the ordering <strong>of</strong> the<br />
operations is important. In other words, we cannot in general write  ˆ B = ˆ BÂ. An example <strong>of</strong><br />
this is the position and momentum operators. We have also defined the “commutator”<br />
[ Â, ˆ B] = Â ˆ B − ˆ BÂ. (4.110)<br />
Let’s now briefly go over how to perform algebraic manipulations using operators and commutators.<br />
These are straightforward to prove<br />
1. [ Â, ˆ B] = −[ ˆ B, Â]<br />
2. [ Â, Â] = −[Â, Â] = 0<br />
3. [ Â, ˆ BĈ] = [Â, ˆ B] Ĉ + ˆ B[ Â, Ĉ]<br />
4. [ Â, ˆ B + Ĉ] = [Â, ˆ B] + [ Â, Ĉ]<br />
5. [ Â, [ ˆ B, Ĉ]] + [ ˆ B, [ Ĉ, Â]] + [Ĉ, [Â, ˆ B]] = 0 (Jacobi Identity)<br />
6. [Â, B̂]† = [B̂†, Â†]
All of these are easy to check numerically with matrices, as in the sketch below.
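Here is a minimal NumPy sketch of that check, using random complex matrices (an illustration only; any matrices will do):

import numpy as np
rng = np.random.default_rng(0)
def rand_op(n=4):
    return rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
A, B, C = rand_op(), rand_op(), rand_op()
def comm(X, Y): return X @ Y - Y @ X
print(np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C)))                      # identity 3
print(np.allclose(comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)), 0))   # Jacobi identity
print(np.allclose(comm(A, B).conj().T, comm(B.conj().T, A.conj().T)))                    # identity 6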
4.1.4 Functions Involving Operators<br />
Another property <strong>of</strong> linear operators is that the inverse operator always can be found. I.e. if<br />
|χ〉 = Â|φ〉 then there exists another operator ˆ B such that |φ〉 = ˆ B|χ〉. In other words ˆ B = Â−1 .<br />
We also need to know how to evaluate functions <strong>of</strong> operators. Say we have a function, F (z)<br />
which can be expanded as a series<br />
F(z) = Σ_{n=0}^{∞} fn zⁿ    (4.111)
Thus, by analogy,
F(Â) = Σ_{n=0}^{∞} fn Âⁿ.    (4.112)
For example, take exp(Â):
exp(x) = Σ_{n=0}^{∞} xⁿ/n! = 1 + x + x²/2 + · · ·    (4.113)
thus
exp(Â) = Σ_{n=0}^{∞} Âⁿ/n!    (4.114)
If Â is Hermitian, then F(Â) is also Hermitian. Also, note that
[Â, F(Â)] = 0.
Likewise, if
Â|φa〉 = a|φa〉    (4.115)
then
Âⁿ|φa〉 = aⁿ|φa〉.    (4.116)
Thus, we can show that
F(Â)|φa〉 = Σ_n fn Âⁿ|φa〉    (4.117)
         = Σ_n fn aⁿ|φa〉    (4.119)
         = F(a)|φa〉    (4.121)
Note, however, that care must be taken when we evaluate F ( Â + ˆ B) if the two operators<br />
do not commute. We ran into this briefly in breaking up the propagator for the Schroedinger<br />
Equation in the last lecture (Trotter Product). For example,<br />
exp( Â + ˆ B) �= exp( Â) exp( ˆ B) (4.122)<br />
unless [Â, B̂] = 0. One can derive, however, a useful formula (due to Glauber), valid when [Â, B̂] commutes with both Â and B̂:
exp(Â + B̂) = exp(Â) exp(B̂) exp(−[Â, B̂]/2)    (4.123)
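The failure of exp(Â + B̂) = exp(Â)exp(B̂) for non-commuting operators is easy to see numerically. A minimal sketch (illustrative 2×2 matrices only; note that for finite matrices [Â, B̂] is never a pure number, so the Glauber formula itself is not what is being tested here):

import numpy as np
from scipy.linalg import expm
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # False: A and B do not commute
C = np.diag([1.0, 2.0]); D = np.diag([3.0, -1.0])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # True: diagonal matrices commute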
Exercise 4.2 Let H be the Hamiltonian of a physical system and |φn〉 the solutions of
H|φn〉 = En|φn〉    (4.124)
1. For an arbitrary operator, Â, show that
〈φn|[Â, H]|φn〉 = 0    (4.125)
2. Let
H = p̂²/2m + V(x̂)    (4.126)
(a) Compute [H, p̂], [H, x̂], and [H, x̂p̂].
(b) Show 〈φn|p̂|φn〉 = 0.
(c) Establish a relationship between the average of the kinetic energy given by
Ekin = 〈φn| p̂²/2m |φn〉    (4.127)
and the average force on a particle given by
F = 〈φn| x̂ ∂V(x)/∂x |φn〉.    (4.128)
Finally, relate the average of the potential for a particle in state |φn〉 to the average kinetic energy.
Exercise 4.3 Consider the following Hamiltonian for 1d motion with a potential obeying a simple<br />
power-law<br />
H = p²/2m + α xⁿ    (4.129)
where α is a constant and n is an integer. Calculate
〈A〉 = 〈ψ|[xp, H]|ψ〉    (4.130)
and use the result to relate the average potential energy to the average kinetic energy <strong>of</strong> the<br />
system.<br />
4.2 Constants <strong>of</strong> the Motion<br />
In a dynamical system (quantum, classical, or otherwise) a constant <strong>of</strong> the motion is any quantity<br />
such that
∂tA = 0.    (4.131)
For quantum systems, this means that
[A, H] = 0    (4.132)
(What’s the equivalent relation for classical systems?) In other words, any quantity which<br />
commutes with H is a constant of the motion. Furthermore, for any conservative system (in which there is no net flow of energy to or from the system),
[H, H] = 0.    (4.133)
From Eq. 4.131, we can write that
∂t〈A〉 = ∂t〈ψ(t)|A|ψ(t)〉    (4.134)
Since [A, H] = 0, we know that if the state |φn〉 is an eigenstate <strong>of</strong> H,<br />
then<br />
H|φn〉 = En|φn〉 (4.135)<br />
A|φn〉 = an|φn〉 (4.136)<br />
The an are <strong>of</strong>ten referred to as “good quantum numbers”. What are some constants <strong>of</strong> the<br />
motion for systems that we have studied thus far? (Bonus: how are constants <strong>of</strong> motion related<br />
to particular symmetries <strong>of</strong> the system?)<br />
A state which is in an eigenstate of H is also in an eigenstate of A. Thus, I can simultaneously measure quantities associated with H and A. Also, after I measure with A, the system remains in the original state.
4.3 Bohr Frequency and Selection Rules<br />
What if I have another operator, B, which does not commute with H? What is 〈B(t)〉? This we<br />
can compute by first writing
|ψ(t)〉 = Σ_n cn e^{−iEn t/ℏ}|φn〉.    (4.137)
Then
〈B(t)〉 = 〈ψ(t)|B|ψ(t)〉    (4.138)
       = Σ_{n,m} cn c*m e^{−i(En−Em)t/ℏ}〈φm|B|φn〉.    (4.139)
Let's define the "Bohr Frequency" as ωnm = (En − Em)/ℏ. Then
〈B(t)〉 = Σ_{n,m} cn c*m e^{−iωnm t}〈φm|B|φn〉.    (4.140)
Now, the observed expectation value of B oscillates in time at a number of frequencies corresponding to the energy differences between the stationary states. The matrix elements Bnm = 〈φm|B|φn〉 do not change with time. Neither do the coefficients, {cn}. Thus, let's write
B(ω) = Σ_{n,m} cn c*m 〈φm|B|φn〉 δ(ω − ωnm)    (4.141)
and transform the discrete sum into a continuous integral
〈B(t)〉 = (1/2π) ∫_0^∞ e^{−iωt} B(ω) dω    (4.142)
where B(ω) is the power spectrum of B. In other words, say I monitor 〈B(t)〉 with my
instrument for a long period <strong>of</strong> time, then take the Fourier Transform <strong>of</strong> the time-series. I get<br />
the power-spectrum. What is the power spectrum for a set <strong>of</strong> discrete frequencies: If I observe<br />
the time-sequence for an infinite amount <strong>of</strong> time, I will get a series <strong>of</strong> discretely spaced sticks<br />
along the frequency axis at precisely the energy difference between the n and m states. The<br />
intensity is related to the probability <strong>of</strong> making a transition from n to m under the influence<br />
<strong>of</strong> B. Certainly, some transitions will not be allowed because 〈φn|B|φm〉 = 0. These are the<br />
“selection rules”.<br />
We now prove an important result regarding the integrated intensity <strong>of</strong> a series <strong>of</strong> transitions:<br />
Exercise 4.4 Prove the Thomas-Reiche-Kuhn sum rule: ²
Σ_n (2m|xn0|²/ℏ²)(En − Eo) = 1    (4.143)
where the sum is over a complete set of states, |ψn〉, of energy En of a particle of mass m which moves in a potential; |ψo〉 represents a bound state, and xn0 = 〈ψn|x|ψo〉. (Hint: use the commutator identity [x, [H, x]] = ℏ²/m.)
2 This is perhaps one of the most important results of quantum mechanics since it gives the total spectral intensity for a series of transitions. cf. Bethe and Jackiw for a great description of sum-rules.
Figure 4.4: The diffraction function sin(x)/x.
4.4 Example using the particle in a box states<br />
What are the constants <strong>of</strong> motion for a particle in a box?<br />
Recall that the energy levels and wavefunctions for this system are
En = n²π²ℏ²/(2ma²)    (4.144)
φn(x) = √(2/a) sin(nπx/a)    (4.145)
Say our system is in the nth state. What's the probability of measuring the momentum and obtaining a result between p and p + dp?
Pn(p) dp = |φn(p)|² dp    (4.146)
where
φn(p) = (1/√(2πℏ)) ∫_0^a dx √(2/a) sin(nπx/a) e^{−ipx/ℏ}    (4.147)
      = (1/2i)(1/√(2πℏ))√(2/a) [ (e^{i(nπ/a − p/ℏ)a} − 1)/(i(nπ/a − p/ℏ)) − (e^{−i(nπ/a + p/ℏ)a} − 1)/(−i(nπ/a + p/ℏ)) ]    (4.148)
      = (1/2i)√(a/(πℏ)) e^{i(nπ/2 − pa/(2ℏ))} [ F(p − nπℏ/a) + (−1)^{n+1} F(p + nπℏ/a) ]    (4.149)
where the F(p) are "diffraction functions"
F(p) = sin(pa/(2ℏ)) / (pa/(2ℏ))    (4.150)
Note that the width 4πℏ/a does not change as I change n. Nor does the amplitude. However, note that (F(x + n) ± F(x − n))² is always an even function of x. Thus, we can say
〈p〉n = ∫_{−∞}^{+∞} Pn(p) p dp = 0    (4.151)
We can also compute:
〈p²〉 = ℏ² ∫_0^a dx |∂φn(x)/∂x|²    (4.152)
     = ℏ² (2/a) ∫_0^a (nπ/a)² cos²(nπx/a) dx    (4.153)
     = (nπℏ/a)² = 2mEn    (4.154)
Thus, the RMS deviation of the momentum is
∆pn = √(〈p²〉n − 〈p〉n²) = nπℏ/a    (4.155)
Thus, as n increases, the relative accuracy with which we can measure p increases due to the fact that we can resolve the wavefunction into two distinct peaks corresponding to the particle either going to the left or to the right. ∆p increases due to the fact that the two possible choices for the measurement are becoming farther and farther apart, and hence reflects the distance between the two most likely values.
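These results can be reproduced numerically. A minimal sketch (units ℏ = m = 1 with a = 1 and n = 3 are assumed; the grids are arbitrary numerical choices):

import numpy as np
hbar, a, n = 1.0, 1.0, 3
x, dx = np.linspace(0.0, a, 2001, retstep=True)
phi = np.sqrt(2.0/a)*np.sin(n*np.pi*x/a)
p, dp = np.linspace(-60.0, 60.0, 1201, retstep=True)
# momentum amplitude, Eq. (4.147), by direct quadrature
phi_p = (np.exp(-1j*np.outer(p, x)/hbar) @ phi)*dx/np.sqrt(2.0*np.pi*hbar)
Pn = np.abs(phi_p)**2
print(np.sum(Pn)*dp)        # ~1: the distribution is normalized
print(np.sum(p*Pn)*dp)      # ~0: Eq. (4.151)
dphi = np.gradient(phi, dx)
print(np.sqrt(hbar**2*np.sum(dphi**2)*dx), n*np.pi*hbar/a)   # Δp_n ≈ nπħ/a, Eq. (4.155)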
4.5 Time Evolution <strong>of</strong> Wave and Observable<br />
Now, suppose we put our system into a superposition of box-states:
|ψ(0)〉 = (1/√2)(|φ1〉 + |φ2〉)    (4.156)
What is the time evolution of this state? We know the eigen-energies, so we can immediately write:
|ψ(t)〉 = (1/√2)(e^{−iE1t/ℏ}|φ1〉 + e^{−iE2t/ℏ}|φ2〉)    (4.157)
Let's factor out a common phase factor of e^{−iE1t/ℏ} and write this as
|ψ(t)〉 ∝ (1/√2)(|φ1〉 + e^{−i(E2−E1)t/ℏ}|φ2〉)    (4.158)
and call (E2 − E1)/ℏ = ω21 the Bohr frequency:
|ψ(t)〉 ∝ (1/√2)(|φ1〉 + e^{−iω21 t}|φ2〉)    (4.159)
where
ω21 = 3π²ℏ/(2ma²).    (4.160)
The overall phase factor is relatively unimportant and cancels out when I make a measurement, e.g. in the probability density:
|ψ(x, t)|² = |〈x|ψ(t)〉|²    (4.161)
           = ½ φ1²(x) + ½ φ2²(x) + φ1(x) φ2(x) cos(ω21 t)    (4.163)
Now, let's compute 〈x(t)〉 for the two-state system. To do so, let's first define x′ = x − a/2 as the center of the well to make the integrals easier. The first two matrix elements are easy:
〈φ1|x′|φ1〉 ∝ ∫_0^a dx (x − a/2) sin²(πx/a) = 0    (4.164)
〈φ2|x′|φ2〉 ∝ ∫_0^a dx (x − a/2) sin²(2πx/a) = 0    (4.165)
which we can do by symmetry. Thus,
〈x′(t)〉 = Re{ e^{−iω21 t} 〈φ1|x′|φ2〉 }    (4.166)
The cross term is
〈φ1|x′|φ2〉 = 〈φ1|x|φ2〉 − (a/2)〈φ1|φ2〉    (4.167)
           = (2/a) ∫_0^a dx x sin(πx/a) sin(2πx/a)    (4.168)
           = −16a/(9π²)    (4.169)
Thus,
〈x(t)〉 = a/2 − (16a/9π²) cos(ω21 t)    (4.170)
Compare this to the classical trajectory. Also, what about 〈E(t)〉?<br />
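A short numerical check of Eqs. (4.169)–(4.170) (a sketch; units ℏ = m = 1 and a = 1 are assumed):

import numpy as np
hbar, m, a = 1.0, 1.0, 1.0
x, dx = np.linspace(0.0, a, 5001, retstep=True)
phi1 = np.sqrt(2/a)*np.sin(np.pi*x/a)
phi2 = np.sqrt(2/a)*np.sin(2*np.pi*x/a)
x12 = np.sum(phi1*x*phi2)*dx                 # <phi1|x|phi2>
print(x12, -16*a/(9*np.pi**2))               # both ≈ -0.1801, Eq. (4.169)
w21 = 3*np.pi**2*hbar/(2*m*a**2)             # Bohr frequency, Eq. (4.160)
t = np.linspace(0.0, 2*np.pi/w21, 5)
print(a/2 + x12*np.cos(w21*t))               # <x(t)> oscillates about the well center

Unlike a classical particle bouncing between the walls, 〈x(t)〉 is a single smooth cosine at the Bohr frequency.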
4.6 “Unstable States”<br />
So far in this course, we have been talking about systems which are totally isolated from the<br />
rest <strong>of</strong> the universe. In these systems, there is no influx or efflux <strong>of</strong> energy and all our dynamics<br />
are governed by the three principal postulates I mentioned at the start of the lecture. In essence,
if at t = 0 I prepare the system in an eigenstate <strong>of</strong> H, then for all times later, it’s still in that<br />
state (to within a phase factor). Thus, in a strictly conservative system, a system prepared in<br />
an eigenstate <strong>of</strong> H will remain in an eigenstate forever.<br />
However, this is not exactly what is observed in nature. We know from experience that atoms<br />
and molecules, if prepared in an excited state (say via the absorption <strong>of</strong> a photon) can relax<br />
back to the ground state or some lower state via the emission of a photon or a series of photons.
Thus, these eigenstates are “unstable”.<br />
What's wrong here? The problem is not so much that there is something wrong with our description of the isolated system; it is that the full description is not included. An isolated atom or molecule can still interact with the electromagnetic field (unless we do some tricky confinement experiments). Thus, there is always some interaction with an outside "environment". Thus, while it is totally correct to describe the evolution of the global system in terms of some global "atom" + "environment" Hamiltonian, it is NOT totally rigorous to construct a Hamiltonian
which describes only part <strong>of</strong> the story. But, as the great <strong>Pr<strong>of</strong></strong>. Karl Freed (at U. Chicago) once<br />
told me “Too much rigor makes rigor mortis”.<br />
Thankfully, the coupling between an atom and the electromagnetic field is pretty weak. Each<br />
photon emission probability is weighted by the fine-structure constant, α ≈ 1/137. Thus a 2<br />
photon process is weighted by α 2 . Thus, the isolated system approximation is pretty good. Also,<br />
we can pretty much say that most photon emission processes occur as single photon events.<br />
Let’s play a bit “fast and loose” with this idea. We know from experience that if we prepare<br />
the system in an excited state at t = 0, the probability <strong>of</strong> finding it still in the excited state at<br />
some time t later, is
P(t) = e^{−t/τ}    (4.171)
where τ is some time constant which we'll take as the lifetime of the state. One way to "prove" this relation is to go back to Problem Set 0. Let's say we have a large number N of identical systems, each prepared in the excited state at t = 0. At time t, there are
N(t) = N e^{−t/τ}    (4.172)
systems in the excited state. Between time t and t + dt a certain number, dn(t), will leave the excited state via photon emission. Thus,
dn(t) = N(t) − N(t + dt) = −(dN(t)/dt) dt = N(t) dt/τ    (4.173)
so that
dn(t)/N(t) = dt/τ    (4.174)
Thus, 1/τ is the probability per unit time for leaving the unstable state.
The average time a system spends in the unstable state is given by
(1/τ) ∫_0^∞ dt t e^{−t/τ} = τ    (4.175)
For a stable state P(t) = 1 and thus τ → ∞.
The time a system spends in the state is independent <strong>of</strong> its history. This is a characteristic <strong>of</strong><br />
an unstable state. (This also has to do with the fact that the various systems involved do not interact
with each other. )<br />
Finally, according to the time-energy uncertainty relation:<br />
∆Eτ ≈ ¯h. (4.176)<br />
Thus, an unstable system has an intrinsic “energy width” associated with the finite time the<br />
system spends in the state.
For a stable state:
|ψ(t)〉 = e^{−iEn t/ℏ}|φn〉    (4.177)
and
Pn(t) = |e^{−iEn t/ℏ}|² = 1    (4.178)
for real energies. What if I instead write E′n = En − iℏγn/2? Then
Pn(t) = |e^{−iEn t/ℏ} e^{−γn t/2}|² = e^{−γn t}    (4.179)
Thus,
γn = 1/τn    (4.180)
is the “Energy Width” <strong>of</strong> the unstable state.<br />
The surprising part <strong>of</strong> all this is that in order to include dissipative effects (photon emission,<br />
etc..) the Eigenvalues <strong>of</strong> H become complex. In other words, the system now evolves under a<br />
non-hermitian Hamiltonian! Recall the evolution operator for an isolated system:
U(t) = e^{−iHt/ℏ}    (4.181)
U†(t) = e^{+iHt/ℏ}    (4.183)
where the first is the forward evolution of the system and the second corresponds to the backwards evolution of the system. Unitarity is thus related to the time-reversal symmetry of conservative systems. The inclusion of an "environment" breaks the intrinsic time-reversal symmetry of an isolated system.
4.7 Problems and Exercises<br />
Exercise 4.5 Find the eigenvalues and eigenvectors of the matrix:
M = ⎡ 0 0 0 1 ⎤
    ⎢ 0 0 1 0 ⎥
    ⎢ 0 1 0 0 ⎥
    ⎣ 1 0 0 0 ⎦    (4.184)
Solution: You can either do this the hard way by solving the secular determinant and then finding the eigenvectors by Gram–Schmidt orthogonalization, or realize that since M = M⁻¹ and M = M†, M is both unitary and Hermitian; thus its eigenvalues can only be ±1. Furthermore, since the trace of M is 0, the sum of the eigenvalues must be 0 as well. Thus, λ = (1, 1, −1, −1) are
the eigenvalues. To get the eigenvectors, consider the following. Let φµ be an eigenvector of M; thus,
φµ = ⎡ x1 ⎤
     ⎢ x2 ⎥
     ⎢ x3 ⎥
     ⎣ x4 ⎦.    (4.185)
Since Mφµ = λµφµ, x1 = λµx4 and x2 = λµx3. Thus, 4 eigenvectors are
⎡ −1 ⎤   ⎡  0 ⎤   ⎡ 1 ⎤   ⎡ 0 ⎤
⎢  0 ⎥   ⎢  1 ⎥   ⎢ 0 ⎥   ⎢ 1 ⎥
⎢  0 ⎥ , ⎢ −1 ⎥ , ⎢ 0 ⎥ , ⎢ 1 ⎥
⎣  1 ⎦   ⎣  0 ⎦   ⎣ 1 ⎦   ⎣ 0 ⎦
for λ = (−1, −1, 1, 1).
Exercise 4.6 Let λi be the eigenvalues of the matrix:
H = ⎡  2 −1 −3 ⎤
    ⎢ −1  1  2 ⎥
    ⎣ −3  2  3 ⎦    (4.186)
Calculate the sums
Σ_{i=1}^{3} λi    (4.187)
and
Σ_{i=1}^{3} λi²    (4.188)
Hint: use the fact that the trace of a matrix is invariant to the choice of representation.
Solution: Using the hint,
tr H = Σ_i λi = Σ_i Hii = 2 + 1 + 3 = 6    (4.190)
and
Σ_i λi² = tr H² = Σ_{ij} Hij Hji = Σ_{ij} Hij² = 42    (4.191)
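Both solutions are easy to confirm numerically (a quick sketch, not part of the original notes):

import numpy as np
M = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]], dtype=float)
print(np.linalg.eigvalsh(M))                  # [-1, -1, 1, 1]
H = np.array([[2,-1,-3],[-1,1,2],[-3,2,3]], dtype=float)
lam = np.linalg.eigvalsh(H)
print(lam.sum(), np.trace(H))                 # both 6
print((lam**2).sum(), np.trace(H @ H))        # both 42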
Exercise 4.7 1. Let |φn〉 be the eigenstates of the Hamiltonian, H, of some arbitrary system, which form a discrete, orthonormal basis:
H|φn〉 = En|φn〉.
Define the operator Unm as
Unm = |φn〉〈φm|.
(a) Calculate the adjoint U†nm of Unm.
(b) Calculate the commutator, [H, Unm].
(c) Prove: Umn U†pq = δnq Ump
(d) For an arbitrary operator, A, prove that
〈φn|[A, H]|φn〉 = 0.
(e) Now consider some arbitrary one dimensional problem for a particle of mass m and potential V(x). From here on, let
H = p²/2m + V(x).
i. In terms of p, x, and V(x), compute: [H, p], [H, x], and [H, xp].
ii. Show that 〈φn|p|φn〉 = 0.
iii. Establish a relation between the average value of the kinetic energy of a state,
〈T〉 = 〈φn| p²/2m |φn〉,
and
〈φn| x dV/dx |φn〉.
The average potential energy in the state |φn〉 is
〈V〉 = 〈φn|V|φn〉;
find a relation between 〈V〉 and 〈T〉 when V(x) = Vo x^λ for λ = 2, 4, 6, . . ..
(f) Show that
〈φn|p|φm〉 = α〈φn|x|φm〉
where α is some constant which depends upon En − Em. Calculate α (hint: consider the commutator [x, H] which you computed above).
(g) Deduce the following sum-rule for the linear-response function:
〈φ0|[x, [H, x]]|φ0〉 = 2 Σ_{n>0} (En − E0)|〈φ0|x|φn〉|²
Here |φ0〉 is the ground state of the system. Give a physical interpretation of this last result.
Exercise 4.8 For this section, consider the following 5 × 5 matrix:
H = ⎡ 0 0 0 0 1 ⎤
    ⎢ 0 0 0 1 0 ⎥
    ⎢ 0 0 1 0 0 ⎥
    ⎢ 0 1 0 0 0 ⎥
    ⎣ 1 0 0 0 0 ⎦    (4.192)
1. Using Mathematica determine the eigenvalues, λj, and eigenvectors, φn, <strong>of</strong> H using the<br />
Eigensystem[] command. Determine the eigenvalues only by solving the secular determinant.<br />
|H − Iλ| = 0<br />
Compare the computational effort required to perform both calculations. Note: in entering<br />
H into Mathematica, enter the numbers as real numbers rather than as integers (i.e. 1.0<br />
vs 1 ).<br />
2. Show that the column matrix <strong>of</strong> the eigenvectors <strong>of</strong> H,<br />
T = {φ1, . . . , φ5},<br />
provides a unitary transformation <strong>of</strong> H between the original basis and the eigenvector basis.<br />
T † HT = Λ<br />
where Λ is the diagonal matrix <strong>of</strong> the eigenvalues λj. i.e. Λij = λiδij.<br />
3. Show that the trace of a matrix is invariant to representation.
4. First, without using Mathematica, compute: T r(H 2 ). Now check your result with Mathematica.<br />
Chapter 5<br />
Bound States <strong>of</strong> The Schrödinger<br />
Equation<br />
A #2 pencil and a dream can take you anywhere.<br />
– Joyce A. Myers<br />
Thus far we have introduced a series <strong>of</strong> postulates and discussed some <strong>of</strong> their physical implications.<br />
We have introduced a powerful notation (Dirac Notation) and have been studying how<br />
we describe dynamics at the atomic and molecular level where ¯h is not a small number, but is<br />
<strong>of</strong> order unity. We now move to a topic which will serve as the bulk <strong>of</strong> our course, the study<br />
<strong>of</strong> stationary systems for various physical systems. We shall start with some general principles,<br />
(most <strong>of</strong> which we have seen already), and then tackle the following systems in roughly this<br />
order:<br />
1. Harmonic Oscillators: Molecular vibrational spectroscopy, phonons, photons, equilibrium<br />
quantum dynamics.<br />
2. Angular Momentum: Spin systems, molecular rotations.<br />
3. Hydrogen Atom: Hydrogenic Systems, basis for atomic theory<br />
5.1 Introduction to Bound States<br />
Before moving on to these systems, let’s first consider what is meant by a “bound state”. Say<br />
we have a potential well which has an arbitrary shape except that at x = ±a, V (x) = 0 and<br />
remains so in either direction. Also, in the range of −a ≤ x ≤ a, V(x) < 0. The Schrödinger equation for the stationary states is:
[ −(ℏ²/2m) ∂²/∂x² + V(x) ] φn(x) = En φn(x)    (5.1)
Rather than solve this exactly (which we can not do since we haven’t specified more about<br />
V ) let’s examine the topology <strong>of</strong> the allowed bound state solutions. As we have done with the<br />
square well cases, let’s cut the x axis into three domains: Domain 1 for x < −a, Domain 2 for<br />
−a ≤ x ≤ a, Domain 3 for x > a. What are the matching conditions that must be met?<br />
For Domain 1 we have:
∂²/∂x² φI(x) = −(2mE/ℏ²) φI(x)  ⇒  (+)φI(x)    (5.2)
For Domain 2 we have:
∂²/∂x² φII(x) = (2m(V(x) − E)/ℏ²) φII(x)  ⇒  (−)φII(x)    (5.3)
For Domain 3 we have:
∂²/∂x² φIII(x) = −(2mE/ℏ²) φIII(x)  ⇒  (+)φIII(x)    (5.4)
At the rightmost end <strong>of</strong> each equation, the (±) indicates the sign <strong>of</strong> the second derivative <strong>of</strong><br />
the wavefunction. (i.e. the curvature must have the same or opposite sign as the function itself.)<br />
For the + curvature functions, the wavefunctions curve away from the x-axis. For − curvature,<br />
the wavefunctions are curved towards the x-axis.<br />
Therefore, we can conclude that for regions outside the well, the solutions behave much like<br />
exponentials, and within the well they behave like superpositions of sine and cosine functions. Thus, we adopt the asymptotic solutions
φ(x) ≈ exp(+αx) for x < −a as x → −∞    (5.5)
and
φ(x) ≈ exp(−αx) for x > a as x → +∞    (5.6)
Finally, in the well region, φ(x) oscillates about the x-axis. We can try to obtain a more complete<br />
solution by combining the solutions that we know. To do so, we must find solutions which are<br />
both continuous functions <strong>of</strong> x and have continuous first derivatives <strong>of</strong> x.<br />
Say we pick an arbitrary energy, E, and seek a solution at this energy. If we fix the left-hand part of the solution to within a multiplicative factor, the right-hand solution is then a complicated function of the exact potential curve and can be written as
φIII(x) = B(E)e^{+ρx} + B′(E)e^{−ρx}    (5.7)
where B(E) and B′(E) are both real functions of E and depend upon the potential function. Since the solutions must be L², the only appropriate bound states are those for which B(E) = 0. Any other value of B(E) leads to diverging solutions.
Thus we make the following observations concerning bound states:<br />
1. They have negative energy.<br />
2. They vanish exponentially outside the potential well and oscillate within.<br />
3. They form a discrete spectrum as the result <strong>of</strong> the boundary conditions imposed by the<br />
potential.<br />
5.2 The Variational Principle<br />
Often the interaction potential is so complicated that an exact solution is not possible. This<br />
is <strong>of</strong>ten the case in molecular problems in which the potential energy surface is a complicated<br />
multidimensional function which we know only at a few points connected by some interpolation<br />
function. We can, however, make some approximations. The method we shall use is the "variational method".
5.2.1 Variational Calculus<br />
The basic principle <strong>of</strong> the variational method lies at the heart <strong>of</strong> most physical principles. The<br />
idea is that we represent a quantity as a stationary integral
J = ∫_{x1}^{x2} f(y, yx, x) dx    (5.8)
where f(y, yx, x) is some known function which depends upon three variables, which are also functions of x: y(x), yx = dy/dx, and x itself. The dependency of y on x is generally unknown.
This means that while we have fixed the end-points <strong>of</strong> the integral, the path that we actually<br />
take between the endpoints is not known.<br />
Picking different paths leads to different values of J. However, certain paths will minimize, maximize, or find the saddle points of J. For most cases of physical interest, it is the extrema that we are interested in. Let's say that there is one path, y_o(x), which minimizes J (see Fig. 5.1). If we distort that path slightly, we get another path y(x) which is not too unlike y_o(x), and we will write it as y(x) = y_o(x) + η(x), where η(x1) = η(x2) = 0 so that the two paths meet at the terminal points. If η(x) differs from 0 only over a small region, we can write the new path as
y(x, α) = y_o(x) + αη(x)
and the variation from the minimum as
δy = y(x, α) − y_o(x, 0) = αη(x).
Since y_o is the path which minimizes J, and y(x, α) is some other path, J is also a function of α,
J(α) = ∫_{x1}^{x2} f(y(x, α), y′(x, α), x) dx
and will be minimized when
(∂J/∂α)_{α=0} = 0
Because J depends upon α, we can examine the α dependence of the integral,
(∂J/∂α)_{α=0} = ∫_{x1}^{x2} [ (∂f/∂y)(∂y/∂α) + (∂f/∂y′)(∂y′/∂α) ] dx
Since
∂y/∂α = η(x)
and
∂y′/∂α = ∂η/∂x
we have
(∂J/∂α)_{α=0} = ∫_{x1}^{x2} [ (∂f/∂y) η(x) + (∂f/∂y′)(∂η/∂x) ] dx.
Now, we need to integrate the second term by parts to get η as a common factor. Remember integration by parts?
∫ u dv = uv − ∫ v du
From this,
∫_{x1}^{x2} (∂f/∂y′)(∂η/∂x) dx = η(x)(∂f/∂y′) |_{x1}^{x2} − ∫_{x1}^{x2} η(x) (d/dx)(∂f/∂y′) dx
The boundary term vanishes since η vanishes at the end points. So, putting it all together and setting it equal to zero:
∫_{x1}^{x2} η(x) [ ∂f/∂y − (d/dx)(∂f/∂y′) ] dx = 0
We're not done yet, since we still have to evaluate this. Notice that α has disappeared from the expression. In effect, we can take an arbitrary variation and still find the desired path that minimizes J. Since η(x) is arbitrary subject to the boundary conditions, we can make it have the same sign as the remaining part of the integrand so that the integrand is always non-negative. Thus, the only way for the integral to vanish is if the bracketed term is zero everywhere:
∂f/∂y − (d/dx)(∂f/∂y′) = 0   (5.9)
This is known as the Euler equation and it has an enormous number of applications. Perhaps the simplest is the proof that the shortest distance between two points is a straight line (or, on a curved space, a geodesic). The differential element of distance in the xy plane is ds = √((dx)² + (dy)²) = √(1 + y_x²) dx. Thus, we can write the distance along some curve in the xy plane between two points as
J = ∫_{x1,y1}^{x2,y2} ds = ∫_{x1,y1}^{x2,y2} √(1 + y_x²) dx.
If we knew y(x), then J would be the arc length or path length along the function y(x) between the two points. Sort of like how many steps you would take along a trail between two points: the trail may be curvy or straight, and there is certainly a single trail which is the shortest. So, setting
f(y, y_x, x) = √(1 + y_x²)
and substituting it into the Euler equation, one gets (since ∂f/∂y = 0)
(d/dx)(∂f/∂y_x) = (d/dx)[ y_x / √(1 + y_x²) ] = 0.   (5.10)
So, the only way for this to be true is if
y_x / √(1 + y_x²) = constant.   (5.11)
Solving for y_x produces a second constant, y_x = a, which immediately yields y(x) = ax + b. In other words, it's a straight line! Not too surprising.
An important application of this principle is when the integrand f is the classical Lagrangian for a mechanical system. The Lagrangian is related to the Hamiltonian and is defined as the difference between the kinetic and potential energy,
L = T − V   (5.12)
whereas H is the sum T + V. Rather than taking x as the independent variable, we take time, t, and the position and velocity of a particle as the dependent variables. The statement δJ = 0 is then a mathematical statement of Hamilton's principle of least action,
δ ∫_{t1}^{t2} L(x, ẋ, t) dt = 0.   (5.13)
In essence, Hamilton's principle asserts that the motion of the system from one point to another is along a path which minimizes the integral of the Lagrangian. The equations of motion for that path come from the Euler-Lagrange equations,
(d/dt)(∂L/∂ẋ) − ∂L/∂x = 0.   (5.14)
So if we write the Lagrangian as
L = (1/2) m ẋ² − V(x)   (5.15)
and substitute this into the Euler-Lagrange equation, we get
m ẍ = −∂V/∂x   (5.16)
which is Newton's law of motion, F = ma.
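As a quick, hedged check of this machinery, the following Mathematica fragment carries out the Euler-Lagrange derivative symbolically for L = T − V (a minimal sketch; V is left as an undefined potential function) and recovers Newton's equation:
lag = m/2 x'[t]^2 - V[x[t]];                      (* L = T - V for a single particle *)
eom = D[D[lag, x'[t]], t] - D[lag, x[t]] == 0;    (* Euler-Lagrange equation, Eq. (5.14) *)
Simplify[eom]                                     (* equivalent to m x''[t] == -V'[x[t]], i.e. Eq. (5.16) *)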
5.2.2 Constraints and Lagrange Multipliers
Before we can apply this principle to a quantum mechanical problem, we need to ask ourselves what happens if there is a constraint on the system which excludes certain values or paths, so that not all of the η's may be varied arbitrarily. Typically, we can write the constraint as
φi(y, x) = 0   (5.17)
For example, for a bead on a wire we need to constrain the path to always lie on the wire, or for a pendulum, the path must lie on the hemisphere defined by the length of the pendulum from
the pivot point. In any case, the general procedure is to introduce another function, λi(x), and integrate
∫_{x1}^{x2} λi(x) φi(y, x) dx = 0   (5.18)
so that
δ ∫_{x1}^{x2} λi(x) φi(y, x) dx = 0   (5.19)
as well. In fact, it turns out that the λi(x) can even be taken to be constants, λi, for this whole procedure to work.
Regardless of the case, we can always write the new stationary integral as
δ ∫ ( f(y, y_x, x) + Σ_i λi φi(y, x) ) dx = 0.   (5.20)
The multiplying constants are called Lagrange multipliers. In your statistical mechanics course, these will occur when you minimize various thermodynamic functions subject to the various extensive constraints, such as the total number of particles in the system, the average energy or temperature, and so on.
In a sense, we have redefined the original function or Lagrangian to incorporate the constraint into the dynamics. So, in the presence of a constraint, the Euler-Lagrange equations become
(d/dt)(∂L/∂ẋ) − ∂L/∂x = Σ_i λi ∂φi/∂x   (5.21)
where the term on the right-hand side of the equation represents a force due to the constraint. The next issue is that we still need to be able to determine the λi Lagrange multipliers.
Figure 5.1: Variational paths between endpoints. The thick line is the stationary path, y_o(x), and the dashed blue curves are variations y(x, α) = y_o(x) + αη(x).
5.2.3 Variational method applied to the Schrödinger equation
The goal of all this is to develop a procedure for computing the ground state of some quantum mechanical system. What this means is that we want to minimize the energy of the system with respect to arbitrary variations in the state function, subject to the constraint that the state function is normalized (i.e. the number of particles remains fixed). This means we want to construct the variation
δ〈ψ|H|ψ〉 = 0   (5.22)
with the constraint 〈ψ|ψ〉 = 1.
In the coordinate representation, the integral involves taking the expectation value of the kinetic energy operator...which is a second derivative operator. That form is not too convenient for our purposes since it will not allow us to write Eq. 5.22 in a form suitable for the Euler-Lagrange equations. But, we can integrate by parts!
∫ ψ* (∂²ψ/∂x²) dx = ψ* (∂ψ/∂x) | − ∫ (∂ψ*/∂x)(∂ψ/∂x) dx   (5.23)
Assuming that the wavefunction vanishes at the limits of the integration, the surface term vanishes, leaving only the second term. We can now write the energy expectation value in terms of two dependent variables, ∇ψ and ψ. OK, they're functions, but we can still treat them as dependent variables just like we treated the y(x)'s above.
E = ∫ [ (ħ²/2m)(∇ψ*)(∇ψ) + V ψ*ψ ] dx   (5.24)
Adding on the constraint and defining the Lagrangian as
L = (ħ²/2m)(∇ψ*)(∇ψ) + V ψ*ψ − λ ψ*ψ,   (5.25)
we can substitute this into the Euler-Lagrange equations
∂L/∂ψ* − ∇·( ∂L/∂(∇ψ*) ) = 0.   (5.26)
This produces the result
(V − λ)ψ = (ħ²/2m) ∇²ψ,   (5.27)
which we immediately recognize as the Schrödinger equation.
While this may be a rather academic result, it gives us the key to recognize that we can make an expansion of ψ in an arbitrary basis and take variations with respect to the coefficients of that basis to find the lowest energy state. This is the basis of a number of powerful numerical methods used to solve the Schrödinger equation for extremely complicated systems.
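To make this concrete, here is a minimal, hedged Mathematica sketch of the basis-expansion idea: we expand in particle-in-a-box sine functions and diagonalize the resulting Hamiltonian matrix. The choices L, nmax, phi, and the harmonic test potential (with ħ = m = ω = 1) are illustrative assumptions, not part of the text above.
L = 20; nmax = 30;                          (* box length and basis size, illustrative choices *)
phi[n_, x_] := Sqrt[2/L] Sin[n Pi x/L];     (* particle-in-a-box basis functions *)
V[x_] := (x - L/2)^2/2;                     (* harmonic well centered in the box *)
Tkin = DiagonalMatrix[Table[(n Pi/L)^2/2., {n, nmax}]];
Vmat = Table[NIntegrate[phi[n, x] V[x] phi[m, x], {x, 0, L}], {n, nmax}, {m, nmax}];
Min[Eigenvalues[Tkin + Vmat]]               (* approaches the exact ground-state energy 1/2 as nmax grows *)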
5.2.4 Variational theorems: Rayleigh-Ritz Technique
We now discuss two important theorems:
Theorem 5.1 The expectation value of the Hamiltonian is stationary in the neighborhood of its eigenstates.
To demonstrate this, let |ψ〉 be a state in which we compute the expectation value of H. Also, let's modify the state just a bit and write
|ψ〉 → |ψ〉 + |δψ〉.   (5.28)
Expectation values are computed as
〈H〉 = 〈ψ|H|ψ〉 / 〈ψ|ψ〉   (5.29)
(where we assume arbitrary normalization). In other words,
〈ψ|ψ〉〈H〉 = 〈ψ|H|ψ〉   (5.30)
Now, insert the variation,
〈ψ|ψ〉δ〈H〉 + 〈δψ|ψ〉〈H〉 + 〈ψ|δψ〉〈H〉 = 〈δψ|H|ψ〉 + 〈ψ|H|δψ〉   (5.31)
or
〈ψ|ψ〉δ〈H〉 = 〈δψ|H − 〈H〉|ψ〉 + 〈ψ|H − 〈H〉|δψ〉   (5.32)
If the expectation value is to be stationary, then δ〈H〉 = 0. Thus the RHS must vanish for an arbitrary variation in the wavefunction. Let's pick
|δψ〉 = ε|ψ〉.   (5.33)
Thus,
(H − 〈H〉)|ψ〉 = 0   (5.34)
That is to say, |ψ〉 is an eigenstate of H, thus proving the theorem.
The second theorem goes:
Theorem 5.2 The expectation value of the Hamiltonian in an arbitrary state is greater than or equal to the ground-state energy.
The proof goes as this: assume that H has a discrete spectrum of states (which we demonstrated that it must) such that
H|n〉 = En|n〉   (5.35)
Thus, we can expand any state |ψ〉 as
|ψ〉 = Σ_n c_n |n〉.   (5.36)
Consequently,
〈ψ|ψ〉 = Σ_n |c_n|²,   (5.37)
and
〈ψ|H|ψ〉 = Σ_n |c_n|² E_n.   (5.38)
Thus (assuming that |ψ〉 is normalized),
〈H〉 = Σ_n E_n |c_n|² ≥ E_o Σ_n |c_n|² = E_o   (5.39)
quod erat demonstrandum.
Using these two theorems, we can estimate the ground state energy and wavefunctions for a variety of systems. Let's first look at the harmonic oscillator.
Exercise 5.1 Use the variational principle to estimate the ground-state energy of a particle in the potential
V(x) = { Cx for x > 0;  +∞ for x ≤ 0 }   (5.40)
Use x e^{−ax} as the trial function.
5.2.5 Variational solution of the harmonic oscillator ground state
The Schrödinger equation for the harmonic oscillator (HO) is
[ −(ħ²/2m) ∂²/∂x² + (k/2) x² ] φ(x) − E φ(x) = 0   (5.41)
Take as a trial function
φ(x) = exp(−λx²)   (5.42)
where λ is a positive, non-zero constant to be determined. The variational principle states that the energy reaches a minimum,
∂〈H〉/∂λ = 0,   (5.43)
when φ(x) is the ground state solution. Let us first derive 〈H〉(λ),
〈H〉(λ) = 〈φ|H|φ〉 / 〈φ|φ〉   (5.44)
To evaluate this, we break the problem into a series of integrals:
〈φ|φ〉 = ∫_{−∞}^{∞} dx |φ(x)|² = √(π/2λ)   (5.45)
〈φ|∂²/∂x²|φ〉 = ∫_{−∞}^{∞} dx φ′′(x) φ(x) = −2λ〈φ|φ〉 + 4λ²〈φ|x²|φ〉   (5.46)
and
〈φ|x²|φ〉 = ∫_{−∞}^{∞} dx x² |φ(x)|² = (1/4λ) 〈φ|φ〉.   (5.47)
Putting it all together:
〈φ|H|φ〉/〈φ|φ〉 = −(ħ²/2m)[ −2λ + 4λ²(1/4λ) ] + (k/2)(1/4λ)   (5.48)
〈φ|H|φ〉/〈φ|φ〉 = (ħ²/2m) λ + k/(8λ)   (5.49)
Taking the derivative with respect to λ:
∂〈H〉/∂λ = ħ²/(2m) − k/(8λ²) = 0   (5.50)
Thus,
λ = ± √(mk)/(2ħ)   (5.51)
Since only positive values of λ are allowed,
λ = √(mk)/(2ħ).   (5.52)
Using this we can calculate the ground state energy by substituting λ back into 〈H〉(λ):
〈H〉(λ) = (ħ²/2m)(√(mk)/2ħ) + (k/8)(2ħ/√(mk)) = (ħ/2) √(k/m)   (5.53)
Now, define the angular frequency ω = √(k/m). Then
〈H〉(λ) = ħω/2   (5.54)
which (as we can easily prove) is the ground state energy of the harmonic oscillator.
Furthermore, we can write the HO ground state wavefunction as
φ_o(x) = (1/√〈φ|φ〉) φ(x)   (5.55)
φ_o(x) = (2λ/π)^{1/4} exp( −(√(mk)/2ħ) x² )   (5.56)
φ_o(x) = (√(mk)/(ħπ))^{1/4} exp( −(√(mk)/2ħ) x² )   (5.57)
To compute the “error” in our estimate, let's substitute the variational solution back into the Schrödinger equation:
[ −(ħ²/2m) ∂²/∂x² + (k/2) x² ] φ_o(x) = −(ħ²/2m) φ_o′′(x) + (k/2) x² φ_o(x)   (5.58)
−(ħ²/2m) φ_o′′(x) + (k/2) x² φ_o(x) = −(ħ²/2m)[ (km/ħ²) x² − √(km)/ħ ] φ_o(x) + (k/2) x² φ_o(x)   (5.59)
−(ħ²/2m) φ_o′′(x) + (k/2) x² φ_o(x) = (ħ/2) √(k/m) φ_o(x)   (5.60)
Thus, φ_o(x) is in fact the correct ground state wavefunction for this system. If it were not the correct function, we could re-introduce the solution as a new trial function, re-compute the energy, etc., and iterate until we either find a solution or run out of patience! (Usually it's the latter rather than the former.)
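The following hedged Mathematica sketch reproduces this variational calculation symbolically; the symbols hbar, m, k, lambda stand for the quantities above and are otherwise undefined.
phi[x_] := Exp[-lambda x^2];                              (* trial function, Eq. (5.42) *)
num = Integrate[hbar^2/(2 m) D[phi[x], x]^2 + k/2 x^2 phi[x]^2,
    {x, -Infinity, Infinity}, Assumptions -> lambda > 0];
den = Integrate[phi[x]^2, {x, -Infinity, Infinity}, Assumptions -> lambda > 0];
energy = Simplify[num/den]                                (* -> hbar^2 lambda/(2 m) + k/(8 lambda), Eq. (5.49) *)
Solve[D[energy, lambda] == 0, lambda]                     (* positive root Sqrt[m k]/(2 hbar), Eq. (5.52) *)
Note that the kinetic term is written in the integrated-by-parts form of Eq. (5.24).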
5.3 The Harmonic Oscillator
Now that we have the HO ground state and the HO ground state energy, let us derive the whole HO energy spectrum. To do so, we introduce “dimensionless” quantities X and P related to the physical position and momentum by
X = (mω/2ħ)^{1/2} x   (5.61)
P = (1/2ħmω)^{1/2} p   (5.62)
This will save us from carrying around a bunch of coefficients. In these units, the HO Hamiltonian is
H = ħω(P² + X²).   (5.63)
The X and P obey the canonical commutation relation:
[X, P] = (1/2ħ)[x, p] = i/2   (5.64)
We can also write the following:
(X + iP)(X − iP) = X² + P² + 1/2   (5.65)
(X − iP)(X + iP) = X² + P² − 1/2.   (5.66)
Thus, I can construct the commutator:
[(X + iP), (X − iP)] = (X + iP)(X − iP) − (X − iP)(X + iP) = 1/2 + 1/2 = 1   (5.67)
Let's define the following two operators:
a = (X + iP)   (5.68)
a† = (X + iP)† = (X − iP).   (5.69)
Therefore, a and a† commute as
[a, a†] = 1   (5.70)
Let's write H in terms of the a and a† operators:
H = ħω(X² + P²) = ħω(X − iP)(X + iP) + ħω/2   (5.71)
or, in terms of the a and a† operators,
H = ħω(a†a + 1/2)   (5.72)
Now, consider that |φn〉 is the nth eigenstate of H. Thus, we write
ħω(a†a + 1/2)|φn〉 = En|φn〉   (5.73)
What happens when I multiply the whole equation by a? Thus, we write
a ħω(a†a + 1/2)|φn〉 = a En|φn〉   (5.74)
ħω(aa† + 1/2)(a|φn〉) = En(a|φn〉)   (5.75)
Now, since aa† − a†a = 1,
ħω(a†a + 1 + 1/2)(a|φn〉) = En(a|φn〉)   (5.76)
so that ħω(a†a + 1/2)(a|φn〉) = (En − ħω)(a|φn〉). In other words, a|φn〉 is an eigenstate of H with energy E = En − ħω.
What happens if I do the same procedure, this time using a†? Thus, we write
a† ħω(a†a + 1/2)|φn〉 = a† En|φn〉   (5.77)
Since
[a, a†] = aa† − a†a   (5.78)
we have
a†a = aa† − 1   (5.79)
and we can write
a† a† a = a†(a a† − 1)   (5.80)
        = (a†a − 1) a†.   (5.81)
Thus,
a† ħω(a†a + 1/2)|φn〉 = ħω( (a†a − 1 + 1/2) a† )|φn〉   (5.82)
or
ħω(a†a − 1/2)(a†|φn〉) = En(a†|φn〉).   (5.83)
Thus, a†|φn〉 is an eigenstate of H with energy E = En + ħω.
Since a† and a act on harmonic oscillator eigenstates to give eigenstates with one more or one less ħω quantum of energy, these are termed “creation” and “annihilation” operators, since they act to create additional quanta of excitation or decrease the number of quanta of excitation in the system. Using these operators, we can effectively “ladder” our way up the energy scale and determine any eigenstate once we know just one.
Well, we know the ground state solution. That we got via the variational calculation. What happens when I apply a† to the φ_o(x) we derived above? In coordinate form,
(X − iP) φ_o(x) = [ (mω/2ħ)^{1/2} x − (ħ/2mω)^{1/2} ∂/∂x ] φ_o(x)   (5.84)
               = (mω/2ħ)^{1/2} [ x − (ħ/mω) ∂/∂x ] φ_o(x)   (5.85)
X acting on φ_o is
X φ_o(x) = (mω/2ħ)^{1/2} x φ_o(x)   (5.86)
and iP acting on φ_o is
iP φ_o(x) = (ħ/2mω)^{1/2} ∂/∂x φ_o(x)   (5.87)
After cleaning things up (using ∂φ_o/∂x = −(mω/ħ) x φ_o):
iP φ_o(x) = −(mω/2ħ)^{1/2} x φ_o(x)   (5.88)
          = −X φ_o(x)   (5.89)
so that
(X − iP) φ_o(x) = 2X φ_o(x)   (5.90)
                = 2 (mω/2ħ)^{1/2} x φ_o(x)   (5.91)
                = 2 (mω/2ħ)^{1/2} x (mω/ħπ)^{1/4} exp(−mω x²/2ħ)   (5.92, 5.93)
5.3.1 Harmonic Oscillators and Nuclear Vibrations
We introduced one of the most important applications of quantum mechanics...the solution of the Schrödinger equation for harmonic systems. These are systems in which the amplitude of motion is small enough that the physical potential energy operator can be expanded about its minimum to second order in the displacement from the minimum. When we do so, the Hamiltonian can be written in the form
H = ħω(P² + X²)   (5.94)
where P and X are dimensionless operators related to the physical momentum and position operators via
X = (mω/2ħ)^{1/2} x   (5.95)
and
P = (1/2ħmω)^{1/2} p.   (5.96)
We also used the variational method to deduce the ground state wavefunction and demonstrated that the spectrum of H is a series of levels separated by ħω and that the ground-state energy is ħω/2 above the energy minimum of the potential.
We also defined a new set of operators by taking linear combinations of X and P,
a = X + iP   (5.97)
a† = X − iP.   (5.98)
We also showed that the commutation relation for these operators is
[a, a†] = 1.   (5.99)
These operators are non-Hermitian and hence do not correspond to a physical observable. However, we demonstrated that when a acts on an eigenstate of H, it produces another eigenstate with energy En − ħω. Also, a† acting on an eigenstate of H produces another eigenstate with energy En + ħω. Thus, we called a the destruction or annihilation operator, since it removes a quantum of excitation from the system, and a† the creation operator, since it adds a quantum of excitation to the system. We also wrote H using these operators as
H = ħω(a†a + 1/2)   (5.100)
Finally, ω is the angular frequency of the classical harmonic motion, as obtained via Hooke's law:
ẍ = −(k/m) x.   (5.101)
Solving this produces
x(t) = x_o sin(ωt + φ)   (5.102)
and
p(t) = p_o cos(ωt + φ).   (5.103)
Thus, the classical motion in the (x, p) phase space traces out the circumference of a circle every period 2π/ω, regardless of the initial amplitude.
The great advantage of using the a and a† operators is that we can replace a differential equation with an algebraic equation. Furthermore, since we can represent any Hermitian operator acting on the HO states as a combination of the creation/annihilation operators, we can replace a potentially complicated series of differentiations, integrations, etc., with simple algebraic manipulations. We just have to remember a few simple rules regarding the commutation of the two operators. Two operators which we may want to construct are:
• the position operator: x = (ħ/2mω)^{1/2} (a† + a)
• the momentum operator: p = i (ħmω/2)^{1/2} (a† − a).
Another important operator is
N = a†a   (5.104)
with
H = ħω(N + 1/2).   (5.105)
Since [H, N] = 0, the eigenvalues of N are “good quantum numbers” and N is a constant of the motion. Also, since
H|φn〉 = En|φn〉 = ħω(N + 1/2)|φn〉   (5.106)
then if
N|φn〉 = n|φn〉,   (5.107)
n must be an integer, n = 0, 1, 2, · · ·, corresponding to the number of quanta of excitation in the state. This gets N the name “number operator”.
Some useful relations (that you should prove):
1. [N, a] = [a†a, a] = −a
2. [N, a†] = [a†a, a†] = a†
To summarize, we have the following relations using the a and a† operators:
1. a|φn〉 = √n |φn−1〉
2. a†|φn〉 = √(n+1) |φn+1〉
3. 〈φn|a = √(n+1) 〈φn+1| = (a†|φn〉)†
4. 〈φn|a† = √n 〈φn−1| = (a|φn〉)†
5. N|φn〉 = n|φn〉
6. 〈φn|N = n〈φn|
Using the second of these relations we can write
|φn+1〉 = (a†/√(n+1)) |φn〉   (5.108)
which can be iterated back to the ground state to produce
|φn〉 = ((a†)ⁿ/√n!) |φ_o〉   (5.109)
This is the “generating relation” for the eigenstates.
Now, let's look at x and p acting on |φn〉:
x|φn〉 = (ħ/2mω)^{1/2} (a† + a)|φn〉   (5.110)
      = (ħ/2mω)^{1/2} ( √(n+1) |φn+1〉 + √n |φn−1〉 )   (5.111)
Also,
p|φn〉 = i (mħω/2)^{1/2} (a† − a)|φn〉   (5.112)
      = i (mħω/2)^{1/2} ( √(n+1) |φn+1〉 − √n |φn−1〉 )   (5.113)
Thus, the matrix elements of x and p in the HO basis are:
〈φm|x|φn〉 = (ħ/2mω)^{1/2} [ √(n+1) δ_{m,n+1} + √n δ_{m,n−1} ]   (5.114)
〈φm|p|φn〉 = i (mωħ/2)^{1/2} [ √(n+1) δ_{m,n+1} − √n δ_{m,n−1} ]   (5.115)
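A hedged numerical check of these matrix elements in Mathematica: build a and a† in a truncated basis and verify the spectrum and commutator. The truncation size nmax and the choice of units ħ = m = ω = 1 are illustrative assumptions; the highest few levels suffer from truncation error.
nmax = 12;                                          (* truncated basis |0>, ..., |nmax-1> *)
a  = Table[If[j == i + 1, Sqrt[i], 0.], {i, nmax}, {j, nmax}];  (* <m|a|n> = Sqrt[n] delta_{m,n-1} *)
ad = Transpose[a];
xop = (ad + a)/Sqrt[2.]; pop = I (ad - a)/Sqrt[2.]; (* Eqs. (5.110) and (5.112) with hbar = m = omega = 1 *)
h = Chop[(pop.pop + xop.xop)/2];
Sort[Eigenvalues[h]][[1 ;; 6]]                      (* -> roughly 0.5, 1.5, 2.5, ..., apart from truncation error *)
Chop[(xop.pop - pop.xop)[[1 ;; 3, 1 ;; 3]]]         (* -> I times the identity in the low block, i.e. [x, p] = i hbar *)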
The harmonic oscillator wavefunctions can be obtained by solving the equation
〈x|a|φ_o〉 = (X + iP) φ_o(x) = 0   (5.116)
which in coordinate form reads
( (mω/ħ) x + ∂/∂x ) φ_o(x) = 0   (5.117)
The solution of this first-order differential equation is easy:
φ_o(x) = c exp(−(mω/2ħ) x²)   (5.118)
where c is a constant of integration which we can obtain via normalization:
∫ dx |φ_o(x)|² = 1   (5.119)
Doing the integration produces
φ_o(x) = (mω/ħπ)^{1/4} e^{−(mω/2ħ) x²}   (5.120)
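As a hedged one-line check, Mathematica's DSolve integrates Eq. (5.117) directly (the symbols m, omega, hbar are treated as constants):
DSolve[m omega/hbar x f[x] + f'[x] == 0, f[x], x]
(* -> {{f[x] -> C[1] E^(-(m omega x^2)/(2 hbar))}}, i.e. the Gaussian of Eq. (5.118) *)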
Figure 5.2: Hermite polynomials, Hn, up to n = 3.
Since we know that a† acting on |φ_o〉 gives the next eigenstate, we can write
φ₁(x) = (mω/2ħ)^{1/2} ( x − (ħ/mω) ∂/∂x ) φ_o(x)   (5.121)
Finally, using the generating relation, we can write
φ_n(x) = (1/√n!) [ (mω/2ħ)^{1/2} ( x − (ħ/mω) ∂/∂x ) ]ⁿ φ_o(x).   (5.122)
Lastly, we have the “recursion relations” which generate the next solution one step higher or lower in energy given any other solution:
φ_{n+1}(x) = (1/√(n+1)) (mω/2ħ)^{1/2} ( x − (ħ/mω) ∂/∂x ) φ_n(x)   (5.123)
and
φ_{n−1}(x) = (1/√n) (mω/2ħ)^{1/2} ( x + (ħ/mω) ∂/∂x ) φ_n(x).   (5.124)
These are the recursion relationships for a class of polynomials called Hermite polynomials, after the 19th-century French mathematician who studied such functions. These are also termed “Gauss-Hermite” polynomials and form a set of orthogonal polynomials. The first few Hermite polynomials, Hn(x), are {1, 2x, −2 + 4x², −12x + 8x³, 12 − 48x² + 16x⁴} for n = 0 to 4. Some of these are plotted in Fig. 5.2.
The functions themselves are defined by the generating function
g(x, t) = e^{−t² + 2tx} = Σ_{n=0}^{∞} Hn(x) tⁿ/n!.   (5.125)
Differentiating the generating function n times and setting t = 0 produces the nth Hermite polynomial,
Hn(x) = dⁿ/dtⁿ g(x, t) |_{t=0} = (−1)ⁿ e^{x²} (dⁿ/dxⁿ) e^{−x²}.   (5.126)
Another useful relation is the Fourier transform relation
(1/√2π) ∫_{−∞}^{∞} e^{itx} e^{−x²/2} Hn(x) dx = iⁿ e^{−t²/2} Hn(t)   (5.127)
which is useful in generating the momentum-space representation of the harmonic oscillator functions. Also, from the generating function we can arrive at the recurrence relations
H_{n+1} = 2x Hn − 2n H_{n−1}   (5.128)
and
H′_n(x) = 2n H_{n−1}(x).   (5.129)
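A hedged check of the recurrence and the orthogonality integral using Mathematica's built-in HermiteH (which uses this same "physicists'" convention):
Simplify[HermiteH[5, x] - (2 x HermiteH[4, x] - 2*4 HermiteH[3, x])]   (* -> 0, Eq. (5.128) with n = 4 *)
Table[Integrate[HermiteH[i, x] HermiteH[j, x] Exp[-x^2], {x, -Infinity, Infinity}],
  {i, 0, 3}, {j, 0, 3}]    (* diagonal entries 2^n Sqrt[Pi] n!, zero off the diagonal *)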
Consequently, the Hermite polynomials are solutions of the second-order differential equation
H′′_n − 2x H′_n + 2n Hn = 0   (5.130)
which is not self-adjoint! To put this into self-adjoint form, we multiply by the weighting function w = e^{−x²}, which leads to the orthogonality integral
∫_{−∞}^{∞} Hn(x) Hm(x) e^{−x²} dx = 2ⁿ π^{1/2} n! δnm.   (5.131)
For the harmonic oscillator functions, we absorb the weighting function into the wavefunction itself,
ψn(x) = e^{−x²/2} Hn(x).
When we substitute this function into the differential equation for Hn, we get
ψ′′_n + (2n + 1 − x²) ψn = 0.   (5.132)
To normalize the functions, we first multiply g by itself and then multiply by w,
e^{−x²} e^{−s²+2sx} e^{−t²+2tx} = Σ_{m,n} e^{−x²} Hn(x) Hm(x) sᵐ tⁿ/(n! m!)   (5.133)
When we integrate over −∞ to ∞, the cross terms drop out by orthogonality and we are left with
Σ_n ((st)ⁿ/(n! n!)) ∫_{−∞}^{∞} e^{−x²} H²_n(x) dx = ∫_{−∞}^{∞} e^{−x²−s²+2sx−t²+2tx} dx = e^{2st} ∫_{−∞}^{∞} e^{−(x−s−t)²} dx = π^{1/2} e^{2st} = Σ_n 2ⁿ (st)ⁿ/n!.   (5.134)
Equating like powers of st we obtain
∫_{−∞}^{∞} e^{−x²} H²_n(x) dx = 2ⁿ π^{1/2} n!.   (5.135)
When we apply this technology to the SHO, the solutions are
ψn(z) = 2^{−n/2} π^{−1/4} (n!)^{−1/2} e^{−z²/2} Hn(z)   (5.136)
where z = αx and α² = mω/ħ.
A few gratuitous solutions:
φ₁(x) = (4/π)^{1/4} (mω/ħ)^{3/4} x exp(−mωx²/2ħ)   (5.137)
φ₂(x) = (mω/4πħ)^{1/4} ( 2(mω/ħ) x² − 1 ) exp(−mωx²/2ħ)   (5.138)
Fig. 5.3 shows the first 4 of these functions.
5.3.2 Classical interpretation
In Fig. 5.3 are a few of the lowest energy states for the harmonic oscillator. Notice that as the quantum number increases, the amplitude of the wavefunction is pushed more and more towards larger values of ±x. This becomes more pronounced when we look at the actual probability distribution functions, |ψn(x)|², for the same 4 states, as shown in Fig. 5.4.
Here, in blue are the actual quantum distributions for the ground state through n = 3. In gray are the classical probability distributions for the corresponding energies. The gray curves tell us the probability per unit length of finding the classical particle at some point x at any point in time. This is inversely proportional to how long a particle spends at a given point, i.e. P_c(x) ∝ 1/v(x). Since E = mv²/2 + V(x),
v(x) = √( 2(E − V(x))/m )
and
P(x) ∝ √( m / (2(E − V(x))) ).
For the harmonic oscillator,
P_n(x) ∝ √( m / (2(ħω(n + 1/2) − kx²/2)) ).
Notice that the denominator goes to zero at the classical turning points; in other words, the particle comes to a dead stop at the turning points, and consequently we have the greatest likelihood of finding the particle in these regions. Likewise, in the quantum case, as we increase the quantum number the quantum distribution function becomes more and more like its classical counterpart. This is shown in the last four frames of Fig. 5.4, where we have the same plots as before except at much higher quantum numbers. For the last case, where n = 19, the classical and quantum distributions are nearly identical. This is an example of the correspondence principle: as the quantum number increases, we expect the quantum system to look more and more like its classical counterpart.
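A hedged Mathematica sketch of this comparison (units ħ = m = ω = 1; the classical curve is only defined between the turning points, so the plot range is pulled slightly inside them):
psi[n_, x_] := HermiteH[n, x] Exp[-x^2/2]/Sqrt[2^n n! Sqrt[Pi]];  (* normalized HO state *)
pcl[n_, x_] := 1/(Pi Sqrt[2 (n + 1/2) - x^2]);                    (* classical distribution for E = n + 1/2 *)
With[{n = 9, xt = Sqrt[2 (9 + 1/2)]},
  Plot[{psi[n, x]^2, pcl[n, x]}, {x, -xt + 0.01, xt - 0.01}, PlotRange -> All]]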
Figure 5.3: Harmonic oscillator functions for n = 0 to 3.
Figure 5.4: Quantum and classical probability distribution functions for the harmonic oscillator for n = 0, 1, 2, 3, 4, 5, 9, 14, 19.
5.3.3 Molecular Vibrations
The fully quantum mechanical treatment of both the electronic and nuclear dynamics of even a diatomic molecule is a complicated affair. The reason for this is that we are forced to find the stationary states for a potentially large number of particles, all of which interact, constrained by a number of symmetry relations (such as the fact that no two electrons can be in the same state at the same time). In general, the exact solution of a many-body problem such as this is impossible. (In fact, believe it or not, it is rigorously impossible for even three classically interacting particles...although many have tried.) However, the mass of the electron is on the order of 10³ to 10⁴ times smaller than the mass of a typical nucleus. Thus, the typical velocities of the electrons are much larger than the typical nuclear velocities. We can then assume that the electronic cloud surrounding the nuclei will respond instantaneously to small and slow changes in the nuclear positions. Thus, to a very good approximation, we can separate the nuclear motion from the electronic motion. This separation of the nuclear and electronic motion is called the Born-Oppenheimer Approximation or the Adiabatic Approximation. This approximation is one of the MOST important concepts in chemical physics and is covered in more detail in Section 8.4.1.
The fundamental notion is that the nuclear motion of a molecule occurs in the average field of the electrons. In other words, the electronic charge distribution acts as an extremely complex multi-dimensional potential energy surface which governs the motion and dynamics of the atoms in a molecule. Consequently, since chemistry is the science of chemical structure, changes, and dynamics, nearly all chemical reactions can be described in terms of nuclear motion on one (or more) potential energy surface. In Fig. 5.5 is the London-Eyring-Polanyi-Sato (LEPS) [1] surface for the F + H2 → HF + H reaction using the Muckerman V set of parameters. [3] The LEPS surface is an empirical potential energy surface based upon the Heitler-London valence bond theory. Highly accurate potential functions are typically obtained by performing high-level ab initio electronic structure calculations, sampling over numerous configurations of the molecule. [2]
For diatomic molecules, the nuclear stretching potential can be approximated as a Morse potential curve,
V(r) = De (1 − e^{−α(r−r_eq)})² − De   (5.139)
where De is the dissociation energy, α sets the range of the potential, and r_eq is the equilibrium bond length. The Morse potential for HF is shown in Fig. 5.6 and is parameterized by De = 591.1 kcal/mol, α = 2.2189 Å⁻¹, and r_eq = 0.917 Å.
Close to the very bottom of the potential well, where r − r_e is small, the potential is nearly harmonic and we can replace the nuclear Schrödinger equation with the HO equation by simply writing that the angular frequency is
ω = √( V′′(r_e)/µ )   (5.140)
So, measuring the vibrational spectrum of the well will give us the curvature of the well, since (En − Em)/ħ is always an integer multiple of ω for harmonic systems. The red curve in Fig. 5.6 is a parabolic approximation for the bottom of the well,
V(r) = De(−1 + α²(r − r_e)² + . . .)   (5.141)
Figure 5.5: London-Eyring-Polanyi-Sato (LEPS) empirical potential for the F + H2 → FH + H chemical reaction. (Contours plotted as a function of rFH and rHH.)
Figure 5.6: Morse well and harmonic approximation for HF. (V in kcal/mol as a function of rFH.)
Clearly, k = 2Deα² is the force constant, so the harmonic frequency for the well is ω = √(k/µ), where µ is the reduced mass, µ = m1m2/(m1 + m2), and one would expect the vibrational energy levels to be evenly spaced according to a harmonic progression. Deviations from this are due to anharmonic effects introduced by the inclusion of higher-order terms in the Taylor expansion of the well. As one might expect, the harmonic expansion provides a decent estimate of the potential energy surface close to the equilibrium geometry.
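A hedged symbolic check of this statement in Mathematica (De, alpha, re, and mu stand for the Morse parameters and the reduced mass and are otherwise undefined):
V[r_] := De (1 - Exp[-alpha (r - re)])^2 - De;        (* Morse potential, Eq. (5.139) *)
Series[V[r], {r, re, 2}]                              (* -> -De + De alpha^2 (r - re)^2 + ..., Eq. (5.141) *)
Sqrt[(D[V[r], {r, 2}] /. r -> re)/mu] // Simplify     (* -> Sqrt[2 De alpha^2/mu], i.e. omega = alpha Sqrt[2 De/mu] *)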
5.4 Numerical Solution of the Schrödinger Equation
5.4.1 Numerov Method
Clearly, finding bound state solutions of the Schrödinger equation is an important task. Unfortunately, we can only solve a few systems exactly. For the vast majority of systems which we cannot handle exactly, we need to turn to approximate means to find solutions. In later chapters, we will examine variational methods and perturbative methods. Here, we will look at a very simple scheme based upon propagating a trial solution from a given point. This method is called the Numerov approach, after the Russian astronomer who developed it. It can be used to solve any differential equation of the form
f′′(r) = f(r) u(r)   (5.142)
where u(r) is simply a function of r and f′′ is the second derivative of f(r) with respect to r, f(r) being the solution we are looking for. For the Schrödinger equation we would write
ψ′′ = (2m/ħ²)(V(r) − E) ψ   (5.143)
Figure 5.7: Model potential for proton tunneling. (V in cm⁻¹ as a function of x in bohr.)
The basic procedure is to expand the second derivative as a finite difference, so that if we know the solution at one point, x_n, we can get the solution at a point a small distance h away, x_{n+1} = x_n + h:
f[n+1] = f[n] + h f′[n] + (h²/2!) f′′[n] + (h³/3!) f′′′[n] + (h⁴/4!) f⁽⁴⁾[n] + . . .   (5.144)
f[n−1] = f[n] − h f′[n] + (h²/2!) f′′[n] − (h³/3!) f′′′[n] + (h⁴/4!) f⁽⁴⁾[n]   (5.145)
If we combine these two equations and solve for f[n+1], we get the result
f[n+1] = −f[n−1] + 2f[n] + f′′[n] h² + (h⁴/12) f⁽⁴⁾[n] + O[h⁶]   (5.146)
Since f is a solution to the Schrödinger equation, f′′ = G f where
G = (2m/ħ²)(V − E),
we can get the second derivative of f very easily. However, for the higher-order term we have to work a bit harder. So let's expand f′′ in the same way, which gives
f⁽⁴⁾[n] = ( f′′[n+1] − 2f′′[n] + f′′[n−1] )/h²
after truncating at order h⁶. Now, substituting f′′ = Gf, we get
f[n+1] = ( 2f[n] − f[n−1] + (h²/12)( G[n−1]f[n−1] + 10 G[n]f[n] ) ) / ( 1 − (h²/12) G[n+1] )   (5.147)
which is the working equation.
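A hedged Mathematica sketch of one Numerov sweep of Eq. (5.147), written for the proton-tunneling double well introduced next; the function name numerov, the grid spacing, range, and starting values are illustrative assumptions.
m = 1836; alpha = 0.1; V[x_] := alpha (x^4 - x^2);       (* double-well potential of Fig. 5.7, atomic units *)
numerov[e_, h_: 0.005, xmin_: -1.5, xmax_: 1.5] := Module[{xs, g, f},
  xs = Range[xmin, xmax, h];
  g = 2 m (V[#] - e) & /@ xs;                            (* G = 2m(V - E)/hbar^2 with hbar = 1 *)
  f = ConstantArray[0., Length[xs]]; f[[2]] = 10.^-6;    (* boundary values: psi[1] = 0, psi[2] small *)
  Do[f[[n + 1]] = (2 f[[n]] - f[[n - 1]] +
       h^2/12 (g[[n - 1]] f[[n - 1]] + 10 g[[n]] f[[n]]))/(1 - h^2/12 g[[n + 1]]),
    {n, 2, Length[xs] - 1}];
  f]
(* scan e and watch whether the tail diverges to +Infinity or -Infinity; a sign change brackets an eigenvalue *)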
Here we take a case of proton tunneling in a double-well potential. The potential in this case is the V(x) = α(x⁴ − x²) function shown in Fig. 5.7, where we have taken the parameter α = 0.1 and m = 1836 (the proton mass) and use atomic units throughout.
Figure 5.8: Double-well tunneling states as determined by the Numerov approach. On the left is the approximate lowest-energy (symmetric) state with no nodes, and on the right is the next lowest (antisymmetric) state with a single node. The fact that the wavefunctions head off towards infinity indicates the introduction of an additional node coming in from x = ∞.
Also shown in Fig. 5.7 are effective harmonic oscillator wells for each side of the barrier. Notice that the harmonic approximation is pretty crude, since the harmonic well tends to overestimate the steepness of the inner portion and underestimate the steepness of the outer portions. Nonetheless, we can use the harmonic oscillator ground states in each well as starting points.
To use the Numerov method, one starts by guessing an initial energy, E, and then propagating a trial solution to the Schrödinger equation. The curve you obtain is in fact a solution to the equation, but it will usually not obey the correct boundary conditions. For bound states, the boundary condition is that ψ must vanish exponentially outside the well. So, we initialize the method by forcing ψ[1] to be exactly 0 and ψ[2] to be some small number; the exact values really make no difference. If we are off by a bit, the Numerov wave will diverge towards ±∞ as x increases. As we close in on a physically acceptable solution, the Numerov solution will begin to exhibit the correct asymptotic behavior for a while before diverging. We know we have hit upon an eigenstate when the divergence flips from +∞ to −∞ or vice versa, signaling the presence of an additional node in the wavefunction. The procedure then is to back up in energy a bit, change the energy step, and gradually narrow in on the exact energy. In Figs. 5.8a and 5.8b are the results of a Numerov search for the lowest two states in the double well potential: one at −3946.59 cm⁻¹ and the other at −3943.75 cm⁻¹. Notice that the lowest energy state is symmetric about the origin and the next state is antisymmetric about the origin. Also, in both cases the Numerov function diverges since we are not precisely at a stationary solution of the Schrödinger equation...but we are within 0.01 cm⁻¹ of the true eigenvalue.
The advantage of the Numerov method is that it is really easy to code. In fact, you can even code it in Excel. Another advantage is that for radial scattering problems the outgoing boundary conditions occur naturally, making it a method of choice for simple scattering problems. In the Mathematica notebooks, I show how one can use the Numerov method to compute scattering phase shifts and locate resonances for atomic collisions. The disadvantage is that you have to search by hand for the eigenvalues, which can be extremely tedious.
5.4.2 Numerical Diagonalization
A more general approach is based upon the variational principle (discussed in Section 5.2) and the use of matrix representations. If we express the Hamiltonian operator in matrix form in some suitable basis, then the eigenfunctions of H can also be expressed as linear combinations of those basis functions, subject to the constraint that the eigenfunctions be orthonormal. So, what we do is write
〈φn|H|φm〉 = Hnm
and
|ψj〉 = Σ_n 〈φn|ψj〉 |φn〉
The 〈φn|ψj〉 coefficients are also elements of a matrix, Tnj, which transforms a vector in the φ basis to the ψ basis. Consequently, there is a one-to-one relation between the number of basis functions in the ψ basis and the number of basis functions in the φ basis.
If |ψj〉 is an eigenstate of H, then
H|ψj〉 = Ej|ψj〉.
Multiplying by 〈φm| and resolving the identity,
Σ_n 〈φm|H|φn〉〈φn|ψj〉 = 〈φm|ψj〉 Ej
Thus,
Σ_n Hmn Tnj = Ej Tmj   (5.148)
or, writing Σ_{mn} T*mj Hmn Tnj = Ej in more compact form,
T† H T = E   (5.149)
where E is the diagonal matrix of eigenvalues. In other words, the T matrix is simply the matrix which brings H to diagonal form.
Diagonalizing a matrix by hand is very tedious for anything beyond a 3×3 matrix. Since this is an extremely common numerical task, there are some very powerful numerical diagonalization routines available. Most of the common ones are in the Lapack package and are included as part of the Mathematica kernel. So, all we need to do is pick a basis, cast our Hamiltonian into that basis, truncate the basis (usually determined by some energy cut-off), and diagonalize away. Usually the diagonalization part is the most time consuming. Of course, you have to be prudent in choosing your basis.
A useful set of basis functions are the trigonometric forms of the Tchebychev polynomials.¹ These are a set of orthogonal functions which obey the following recurrence relation:
T_{n+1}(x) − 2x T_n(x) + T_{n−1}(x) = 0   (5.150)
Figure 5.9: Tchebyshev polynomials for n = 1 to 5.
Table 5.1: Tchebychev polynomials of the first type
T0 = 1
T1 = x
T2 = 2x² − 1
T3 = 4x³ − 3x
T4 = 8x⁴ − 8x² + 1
T5 = 16x⁵ − 20x³ + 5x
Table 5.1 lists the first few of these polynomials as functions of x, and a few of these are plotted in Fig. 5.9.
It is important to realize that these functions are orthogonal on a finite range and that integrals over these functions must include a weighting function w(x) = 1/√(1 − x²). The orthogonality relation for the Tn polynomials is
∫_{−1}^{+1} Tm(x) Tn(x) w(x) dx = { 0 for m ≠ n;  π/2 for m = n ≠ 0;  π for m = n = 0 }   (5.151)
Arfken's Mathematical Methods for Physicists has a pretty complete overview of these special functions as well as many others. As usual, these are incorporated into the kernel of Mathematica, and the Mathematica book and on-line help pages have some useful information regarding these functions as well as a plethora of other functions.
From the recurrence relation it is easy to show that the Tn(x) polynomials satisfy the differential equation
(1 − x²) T′′_n − x T′_n + n² Tn = 0   (5.152)
If we make a change of variables x = cos θ, dx = −sin θ dθ, then the differential equation reads
d²Tn/dθ² + n² Tn = 0   (5.153)
This is a harmonic oscillator equation and has solutions sin nθ and cos nθ. From the boundary conditions we have two linearly independent solutions,
Tn = cos nθ = cos(n arccos x)
and
Vn = sin nθ.
The normalization condition then becomes
∫_{−1}^{+1} Tm(x) Tn(x) w(x) dx = ∫_0^π cos(mθ) cos(nθ) dθ   (5.154)
and
∫_{−1}^{+1} Vm(x) Vn(x) w(x) dx = ∫_0^π sin(mθ) sin(nθ) dθ   (5.155)
which is precisely the normalization integral we perform for the particle-in-a-box states, assuming the width of the box is π. For more generic applications, we can scale θ and its range to any range.
¹ There are at least 10 ways to spell Tchebychev's last name: Tchebychev, Tchebyshev, and Chebyshev are the most common, as well as Tchebysheff, Tchebycheff, Chebysheff, Chevychef, . . .
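A hedged check of the orthogonality relation Eq. (5.151) using Mathematica's built-in ChebyshevT:
w[x_] := 1/Sqrt[1 - x^2];                               (* Tchebychev weighting function *)
Table[Integrate[ChebyshevT[i, x] ChebyshevT[j, x] w[x], {x, -1, 1}], {i, 0, 3}, {j, 0, 3}]
(* -> Pi for i = j = 0, Pi/2 for i = j > 0, and 0 off the diagonal *)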
The way we use this is to take the φn = N sin(nx) functions as a finite basis and truncate any expansion in this basis at some point. For example, since we are usually interested in low-lying energy states, setting an energy cut-off on the basis is exactly equivalent to keeping only the lowest n_cut states. The kinetic energy part of the Hamiltonian is diagonal in this basis, so we get that part for free. However, the potential energy part is not diagonal in the φn = N sin(nx) basis, so we have to compute its matrix elements:
Vnm = ∫ φn(x) V(x) φm(x) dx   (5.156)
To calculate this integral, let us first realize that [V, x] = 0, so the eigenstates of x are also eigenstates of the potential. Taking matrix elements of x in the finite basis,
xnm = N² ∫ φn(x) x φm(x) dx,
and diagonalizing it yields a finite set of "position" eigenvalues, {xi}, and a transformation for converting between the "position representation" and the "basis representation",
Tin = 〈xi|φn〉,
which is simply a matrix of the basis functions evaluated at each of the eigenvalues. The special set of points defined by the eigenvalues of the position operator are the Gaussian quadrature points over some finite range.
This procedure, termed the "discrete variable representation" (DVR), was developed by Light and coworkers in the 80's and is a very powerful way to generate coordinate representations of Hamiltonian matrices. Any matrix in the basis representation (termed the FBR, for finite basis representation) can be transformed to the discrete variable representation via the transformation matrix T. Moreover, there is a 1-1 correspondence between the number of DVR points and the number of FBR basis functions. Here we have used only the Tchebychev functions; one can generate DVRs for any set of orthogonal polynomial functions. The Mathematica code below generates the required transformations, the points, the eigenvalues of the second-derivative operator, and a set of quadrature weights for the Tchebychev sine functions over a specified range:
(* transform a matrix between the discrete variable (DVR) and finite basis (FBR) representations *)
dv2fb[DVR_, T_] := T.DVR.Transpose[T];
fb2dv[FBR_, T_] := Transpose[T].FBR.T;

(* DVR points, transformation matrix, FBR kinetic-energy eigenvalues, and quadrature weights
   for the Tchebychev (particle-in-a-box sine) basis on [xmin, xmax] *)
tcheby[npts_, xmin_, xmax_] := Module[{pts, del, fbrke, w, T},
  del = xmax - xmin;
  pts = Table[i*del*(1/(npts + 1)) + xmin, {i, npts}] // N;
  fbrke = Table[(i*(Pi/del))^2, {i, npts}] // N;
  w = Table[del/(npts + 1), {i, npts}] // N;
  T = Table[Sqrt[2.0/(npts + 1)]*Sin[(i*j)*Pi/(npts + 1)],
      {i, npts}, {j, npts}] // N;
  Return[{pts, T, fbrke, w}]
  ]
To use this, we first define a potential surface, set up the Hamiltonian matrix, and simply diagonalize. For this example, we will take the same double-well system described above and compare results and timings.
V[x_] := a*(x^4 - x^2);
cmm = 8064*27.3;                                      (* approximate hartree -> cm^-1 conversion *)
params = {a -> 0.1, m -> 1836};
{x, T, K, w} = tcheby[100, -1.3, 1.3];
Kdvr = fb2dv[DiagonalMatrix[K], T]/(2 m) /. params;   (* kinetic energy k^2/(2m) in atomic units *)
Vdvr = DiagonalMatrix[V[x]] /. params;
Hdvr = Kdvr + Vdvr;
tt = Timing[{w, psi} = Transpose[
      Sort[Transpose[Eigensystem[Hdvr]]]]];
Print[tt]
(Select[w*cmm, (# < 3000) &]) // TableForm
This code sets up the DVR points x, the transformation T, and the FBR kinetic energy eigenvalues K using the tcheby[npts, xmin, xmax] Mathematica module defined above. We then generate the kinetic energy matrix in the DVR using the transformation
K_DVR = T† K_FBR T
and form the DVR Hamiltonian
H_DVR = K_DVR + V_DVR.
The eigenvalues and eigenvectors are computed via the Eigensystem[] routine. These are then sorted according to their energy. Finally, we print out only those states with energy less than 3000 cm⁻¹ and check how long it took. On my 300 MHz G3 laptop, this took 0.3333 seconds to complete. The first few of these are shown in Table 5.2 below. For comparison, each Numerov iteration took roughly 1 second for each trial function. Even then, the eigenvalues we found are probably not as accurate as those computed here.
Table 5.2: Eigenvalues for the double well potential computed via DVR and Numerov approaches
i    ωi (cm⁻¹)      Numerov
1    −3946.574      −3946.59
2    −3943.7354     −3943.75
3    −1247.0974
4    −1093.5204
5    591.366
6    1617.424
5.5 Problems and Exercises
Exercise 5.2 Consider a harmonic oscillator of mass m and angular frequency ω. At time t = 0, the state of this system is given by
|ψ(0)〉 = Σ_n c_n |φ_n〉   (5.157)
where the states |φ_n〉 are stationary states with energy E_n = (n + 1/2)ħω.
1. What is the probability, P, that a measurement of the energy of the oscillator at some later time will yield a result greater than 2ħω? When P = 0, what are the non-zero coefficients c_n?
2. From now on, let only c_o and c_1 be non-zero. Write the normalization condition for |ψ(0)〉 and the mean value 〈H〉 of the energy in terms of c_o and c_1. With the additional requirement that 〈H〉 = ħω, calculate |c_o|² and |c_1|².
3. As the normalized state vector |ψ〉 is defined only to within an arbitrary global phase factor, we can fix this factor by setting c_o to be real and positive. We set c_1 = |c_1|e^{iφ}. We assume also that 〈H〉 = ħω and show that
〈x〉 = (1/2) √(ħ/mω).   (5.158)
Calculate φ.
4. With |ψ〉 so determined, write |ψ(t)〉 for t > 0 and calculate the value of φ at time t. Deduce the mean value 〈x〉(t) of the position at time t.
Exercise 5.3 Find 〈x〉, 〈p〉, 〈x²〉 and 〈p²〉 for the ground state of a simple harmonic oscillator. What is the uncertainty relation for the ground state?
Exercise 5.4 In this problem we consider the interaction between a molecule adsorbed on a surface and the surface phonons. Represent the vibrational motion of the molecule (with reduced mass µ) as harmonic with force constant K,
H_o = −(ħ²/2µ) ∂²/∂x² + (K/2) x²   (5.159)
and the coupling to the phonons as
H′ = −x Σ_k V_k cos(Ω_k t)   (5.160)
where V_k is the coupling between the molecule and a phonon of wavevector k and frequency Ω_k.
1. Express the total Hamiltonian as a displaced harmonic well. What happens to the well as a function of time?
2. What is the Golden-Rule transition rate between the ground state and the nth excited state of the system due to phonon interactions? Are there any restrictions as to which final state can be reached? Which phonons are responsible for this process?
3. From now on, let the perturbing force be constant in time,
H′ = x Σ_k V_k   (5.161)
where V_k is the interaction with a phonon of wavevector k. Use the lowest order of perturbation theory necessary to construct the transition probability between the ground state and the second excited state.
Exercise 5.5 Let
$$X = \left(\frac{m\omega}{2\hbar}\right)^{1/2} x, \qquad (5.162)$$
$$P = \left(\frac{1}{2\hbar m\omega}\right)^{1/2} p. \qquad (5.163)$$
Show that the harmonic oscillator Hamiltonian is
$$H = \hbar\omega(P^2 + X^2). \qquad (5.164)$$
Now define the operator $a^\dagger = X - iP$. Show that $a^\dagger$ acting on the harmonic oscillator ground state is also an eigenstate of H. What is the energy of this state? Use $a^\dagger$ to define a generating relationship for all the eigenstates of H.
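A quick numerical sanity check of this exercise, using a truncated number-state basis (the basis size and the choice ħ = ω = m = 1 are made here only for illustration):

```python
import numpy as np

# Check Exercise 5.5 in a truncated harmonic-oscillator number basis (hbar = omega = 1).
N = 30
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator

X = (a + ad) / 2.0                           # dimensionless X of Eq. 5.162
P = (a - ad) / (2.0j)                        # dimensionless P of Eq. 5.163
H = X @ X + P @ P                            # H / (hbar omega), Eq. 5.164

ground = np.zeros(N); ground[0] = 1.0
raised = (X - 1j * P) @ ground               # a^dagger acting on the ground state
# The raised state should be an eigenstate with energy 3/2 (in units of hbar omega).
print(np.allclose(H @ raised, 1.5 * raised))  # True
```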
Exercise 5.6 Show that if one expands an arbitrary potential V(x) about its minimum at $x_{min}$ and neglects terms of order $(x - x_{min})^3$ and above, one always obtains a harmonic well. Show that a harmonic oscillator subject to a linear perturbation can be expressed as an unperturbed harmonic oscillator shifted from the origin.
Exercise 5.7 Consider the one-dimensional Schrödinger equation with potential
$$V(x) = \begin{cases} \frac{m}{2}\omega^2 x^2 & x > 0 \\ +\infty & x \le 0 \end{cases} \qquad (5.165)$$
Find the energy eigenvalues and wavefunctions.
Exercise 5.8 An electron is contained inside a hard sphere of radius R. The radial components of the lowest S and P state wavefunctions are approximately
$$\psi_S(r) \approx \frac{\sin(kr)}{kr}, \qquad (5.166)$$
$$\psi_P(r) \approx \frac{\cos(kr)}{kr} - \frac{\sin(kr)}{(kr)^2} = \frac{\partial\psi_S(kr)}{\partial(kr)}. \qquad (5.167)$$

1. What boundary conditions must each state obey?

2. Using $E = k^2\hbar^2/(2m)$ and the above boundary conditions, what are the energies of each state?

3. What is the pressure exerted on the surface of the sphere if the electron is in (a) the S state, (b) the P state? (Hint: recall from thermodynamics that $dW = P\,dV = -(dE(R)/dR)\,dR$.)

4. For a solvated electron in water, the S to P energy gap is about 1.7 eV. Estimate the hard-sphere radius for the aqueous electron. If the ground state is fully solvated, the pressure of the solvent on the electron must equal the pressure of the electron on the solvent. What happens to the system when the electron is excited to the P state from the equilibrated S state? What happens to the energy gap between the S and P states as a result of this?
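As a sketch of how parts 2 and 4 might be checked numerically — assuming the boundary condition is simply ψ(R) = 0 for both states — the P-state condition tan(kR) = kR can be solved with a standard root finder and the 1.7 eV gap inverted for R:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.constants import hbar, m_e, electron_volt, angstrom

# psi(R) = 0 gives kR = pi for the S state and tan(kR) = kR for the P state,
# i.e. sin(x) - x*cos(x) = 0 with x = kR.
xS = np.pi
xP = brentq(lambda x: np.sin(x) - x * np.cos(x), 3.2, 4.6)   # ~4.4934

# E = hbar^2 k^2 / (2 m_e); invert the 1.7 eV S-P gap for the hard-sphere radius.
gap = 1.7 * electron_volt
R = np.sqrt((xP**2 - xS**2) * hbar**2 / (2.0 * m_e * gap))
print(xP, R / angstrom)   # radius comes out to a few angstroms
```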
Exercise 5.9 A particle moves in a three-dimensional potential well of the form
$$V(x) = \begin{cases} \infty & z^2 > a^2 \\ \frac{m\omega^2}{2}(x^2 + y^2) & \text{otherwise} \end{cases} \qquad (5.168)$$
Obtain an equation for the eigenvalues and the associated eigenfunctions.
Exercise 5.10 A particle moving in one dimension has a (non-normalized) ground state wavefunction of the form
$$\psi_o(x) = e^{-\alpha^4 x^4/4}, \qquad (5.169)$$
where α is a real constant, with eigenvalue $E_o = \hbar^2\alpha^2/m$. Determine the potential in which the particle moves. (You do not have to determine the normalization.)
Exercise 5.11 A two-dimensional oscillator has the Hamiltonian
$$H = \frac{1}{2}(p_x^2 + p_y^2) + \frac{1}{2}(1 + \delta x y)(x^2 + y^2), \qquad (5.170)$$
where $\hbar = 1$ and $\delta \ll 1$.
Figure 5.10: Ammonia inversion and tunneling.

1. Using the Spartan electronic structure package (or any other one you have access to), build a model of NH₃ and determine its ground-state geometry using various levels of ab initio theory. Make a table of N−H bond lengths and θ = ∠H−N−H bond angles for the equilibrium geometries as a function of at least 2 or 3 different basis sets. Looking in the literature, find experimental values for the equilibrium configuration. Which method comes closest to the experimental values? Which method has the lowest energy for its equilibrium configuration?

2. Using the method which you deemed best in part 1, repeat the calculations you performed above by systematically constraining the H−N−H bond angle to sample configurations around the equilibrium configuration and up to the planar D₃ₕ configuration. Note, it may be best to constrain two H−N−H angles and then optimize the bond lengths. Sample enough points on either side of the minimum to get a decent potential curve. This is your Born-Oppenheimer potential as a function of θ.

3. Defining the origin of a coordinate system to be the θ = 120° D₃ₕ point on the surface, fit your ab initio data to the "W" potential
$$V(x) = -\alpha x^2 + \beta x^4 \qquad (\alpha, \beta > 0).$$
What are the theoretical values of α and β?
4. We will now use perturbation theory to compute the tunneling dynamics.

(a) Show that the points of minimum potential energy are at
$$x_{min} = \pm\left(\frac{\alpha}{2\beta}\right)^{1/2} \qquad (5.171)$$
and that the energy difference between the top of the barrier and the minimum energy is given by
$$V = V(0) - V(x_{min}) = \frac{\alpha^2}{4\beta}. \qquad (5.172, 5.173)$$
(b) We first consider the barrier to be infinitely high, so that we can expand the potential function around each $x_{min}$. Show that by truncating the Taylor series expansion above the $(x - x_{min})^2$ terms, the potentials for the left- and right-hand sides are given by
$$V_L = 2\alpha(x + x_{min})^2 - V$$
and
$$V_R = 2\alpha(x - x_{min})^2 - V.$$
What are the vibrational energy levels for each well?

(c) The wavefunctions for the lowest energy states in each well are given by
$$\psi(x) = \frac{\gamma^{1/2}}{\pi^{1/4}}\exp\left[-\frac{\gamma^2}{2}(x \pm x_{min})^2\right]$$
with
$$\gamma = \left(\frac{(4\mu\alpha)^{1/2}}{\hbar}\right)^{1/2}.$$
The energy levels for the two sides are degenerate in the limit that the barrier height is infinite. The total ground-state wavefunction for this case is
$$\Psi(x) = \begin{cases}\psi_L(x) \\ \psi_R(x)\end{cases}$$
However, as the barrier height decreases, the degenerate states begin to mix, causing the energy levels to split. Define the "high barrier" Hamiltonian as
$$H = -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} + V_L(x)$$
for x < 0 and
$$H = -\frac{\hbar^2}{2\mu}\frac{\partial^2}{\partial x^2} + V_R(x)$$
for x > 0. Calculate the matrix elements of H which mix the two degenerate left- and right-hand ground-state wavefunctions, i.e.
$$\langle\Psi|H|\Psi\rangle = \begin{pmatrix} H_{RR} & H_{LR} \\ H_{RL} & H_{LL} \end{pmatrix},$$
where $H_{RR} = \langle\psi_R|H|\psi_R\rangle$, with similar definitions for $H_{RL}$, $H_{LL}$, and $H_{LR}$. Obtain numerical values (in cm⁻¹) of each matrix element using the values of α and β you determined above. Use the mass of an H atom for the reduced mass µ.
(d) Since the $\psi_L$ and $\psi_R$ basis functions are non-orthogonal, you will need to consider the overlap matrix, S, when computing the eigenvalues of H. The eigenvalues for this system can be determined by solving the secular equation
$$\begin{vmatrix} \alpha - \lambda & \beta - \lambda S \\ \beta - \lambda S & \alpha - \lambda \end{vmatrix} = 0, \qquad (5.174)$$
where $\alpha = H_{RR} = H_{LL}$ and $\beta = H_{LR} = H_{RL}$ (not to be confused with the potential parameters above). Using Eq. 5.174, solve for λ and determine the energy splitting in the ground state as a function of the unperturbed harmonic frequency and the barrier height, V. Calculate this splitting using the parameters you computed above. What is the tunneling frequency? The experimental result is ΔE = 0.794 cm⁻¹ (from Molecular Structure and Dynamics, by W. Flygare, Prentice Hall, 1978).
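Part (d) amounts to a 2×2 generalized eigenvalue problem. A minimal sketch follows, with placeholder values of α, β, and the overlap s standing in for the matrix elements you actually compute from your fitted potential:

```python
import numpy as np
from scipy.linalg import eigh

# Solve the 2x2 secular problem of part (d) in a non-orthogonal basis.
# alpha, beta, and s are placeholders; use your own Gaussian matrix elements.
alpha = -500.0   # <psi_R|H|psi_R> = <psi_L|H|psi_L>  (cm^-1, illustrative)
beta  = -520.0   # <psi_L|H|psi_R>                     (cm^-1, illustrative)
s     = 0.2      # overlap <psi_L|psi_R>

H = np.array([[alpha, beta], [beta, alpha]])
S = np.array([[1.0, s], [s, 1.0]])
evals = eigh(H, S, eigvals_only=True)   # generalized problem H c = lambda S c
print(evals)                            # lambda_± = (alpha ± beta)/(1 ± s)
print(evals[1] - evals[0])              # ground-state tunneling splitting
```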
Exercise 5.14 Consider a system in which the Lagrangian is given by
$$L(q_i, \dot{q}_i) = T(q_i, \dot{q}_i) - V(q_i), \qquad (5.175)$$
where we assume T is quadratic in the velocities. The potential is independent of the velocity, and neither T nor V carries any explicit time dependence. Show that
$$\frac{d}{dt}\left(\sum_j \frac{\partial L}{\partial\dot{q}_j}\dot{q}_j - L\right) = 0.$$
The constant quantity in the parentheses defines a Hamiltonian, H. Show that under the assumed conditions, H = T + V.
Exercise 5.15 The Fermat principle in optics states that a light ray will follow the path y(x) which minimizes its optical length, S, through a medium,
$$S = \int_{x_1,y_1}^{x_2,y_2} n(y,x)\,ds,$$
where n is the index of refraction. For $y_2 = y_1 = 1$ and $-x_1 = x_2 = 1$, find the ray path for

1. $n = \exp(y)$

2. $n = a(y - y_o)$ for $y > y_o$

Make plots of each of these trajectories.
Exercise 5.16 In a quantum mechanical system there are $g_i$ distinct quantum states between energy $E_i$ and $E_i + dE_i$. In this problem we will use the variational principle and Lagrange multipliers to determine how $n_i$ particles are distributed amongst these states, subject to the constraints:

1. The number of particles is fixed:
$$n = \sum_i n_i.$$

2. The total energy is fixed:
$$\sum_i n_i E_i = E.$$

We consider two cases:

1. For identical particles obeying the Pauli exclusion principle, the probability of a given configuration is
$$W_{FD} = \prod_i \frac{g_i!}{n_i!(g_i - n_i)!}.$$
Show that maximizing $W_{FD}$ subject to the constraints above leads to
$$n_i = \frac{g_i}{e^{\lambda_1 + \lambda_2 E_i} + 1} \qquad (5.176)$$
with the Lagrange multipliers $\lambda_1 = -E_o/kT$ and $\lambda_2 = 1/kT$. Hint: try working with $\log W$ and use Stirling's approximation in the limit of a large number of particles.

2. In this case we still consider identical particles, but relax the restriction on the fixed number of particles in a given state. The probability for a given distribution is then
$$W_{BE} = \prod_i \frac{(n_i + g_i - 1)!}{n_i!(g_i - 1)!}.$$
Show that maximizing $W_{BE}$ subject to the constraints above leads to the occupation numbers
$$n_i = \frac{g_i}{e^{\lambda_1 + \lambda_2 E_i} - 1},$$
where again the Lagrange multipliers are $\lambda_1 = -E_o/kT$ and $\lambda_2 = 1/kT$. This yields the Bose-Einstein statistics. Note: assume that $g_i \gg 1$.

3. Photons satisfy the Bose-Einstein distribution and the constraint that the total energy is constant. However, there is no constraint on the total number of photons. Show that eliminating the fixed-number constraint leads to the foregoing result with $\lambda_1 = 0$.
Chapter 6

Quantum Mechanics in 3D

In the next few lectures, we will focus upon one particular symmetry, the isotropy of free space. As a collection of particles rotates about an arbitrary axis, the Hamiltonian does not change. If the Hamiltonian does in fact depend explicitly upon the choice of axis, the system is "gauged", meaning all measurements will depend upon how we set up the coordinate frame. A Hamiltonian with a potential function which depends only upon the coordinates, e.g. V = f(x, y, z), is gauge invariant, meaning any measurement that I make will not depend upon my choice of reference frame. On the other hand, if our Hamiltonian contains terms which couple one reference frame to another (as in the case of non-rigid-body rotations), we have to be careful in how we select the "gauge". While this sounds like a fairly specialized case, it turns out that many ordinary phenomena depend upon this, e.g. figure skaters, falling cats, and floppy molecules. We focus upon rigid-body rotations first.

For further insight and information into the quantum mechanics of angular momentum, I recommend the following texts and references:

1. The Theory of Atomic Spectra, E. Condon and G. Shortley. This is the classic book on atomic physics and the theory of atomic spectroscopy and has inspired generations since it came out in 1935.

2. Angular Momentum: Understanding Spatial Aspects in Chemistry and Physics, R. N. Zare. This book is the text for the second-semester quantum mechanics course at Stanford taught by Zare (when he's not out looking for Martians). It's a great book with loads of examples in spectroscopy.

3. Quantum Theory of Angular Momentum, D. A. Varshalovich, A. Moskalev, and V. Khersonskii. Not too much physics in this book, but if you need to know some relation between Wigner D-functions and Racah coefficients, or how to derive 12j symbols, this book is for you.

First, we need to look at what happens to a Hamiltonian under rotation. In order to show that H is invariant to any rotation, we need only show that it is invariant under an infinitesimal rotation.
6.1 Quantum Theory of Rotations

Let $\delta\vec\phi$ be the vector of a small rotation, equal in magnitude to the angle δφ and directed along an arbitrary axis. Rotating the system by $\delta\vec\phi$ changes the position vectors $\vec r_\alpha$ by
$$\delta\vec r_\alpha = \delta\vec\phi \times \vec r_\alpha.$$
Note that the × denotes the vector "cross" product. Since we will be using cross products throughout these lectures, we pause to review the operation. A cross product between two vectors is computed as
$$\vec c = \vec a\times\vec b = \begin{vmatrix}\hat i & \hat j & \hat k \\ a_i & a_j & a_k \\ b_i & b_j & b_k\end{vmatrix} = \hat i(a_jb_k - b_ja_k) - \hat j(a_ib_k - b_ia_k) + \hat k(a_ib_j - b_ia_j), \qquad (6.1)$$
or, in component form,
$$c_i = \epsilon_{ijk} a_j b_k, \qquad (6.2)$$
where $\epsilon_{ijk}$ is the Levi-Civita symbol, or "antisymmetric unit tensor", defined as
$$\epsilon_{ijk} = \begin{cases} 0 & \text{if any of the indices are the same} \\ +1 & \text{for even permutations of the indices} \\ -1 & \text{for odd permutations of the indices} \end{cases}$$
(Note that we have also assumed a "summation convention" whereby we sum over all repeated indices. Some elementary properties are $\epsilon_{ikl}\epsilon_{ikm} = 2\delta_{lm}$ and $\epsilon_{ikl}\epsilon_{ikl} = 6$.)
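A short numerical check of this cross-product/Levi-Civita bookkeeping (the test vectors below are arbitrary):

```python
import numpy as np

# Check that c_i = eps_ijk a_j b_k (summation convention) reproduces np.cross,
# and verify the contraction identities quoted above.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 4.0, -1.0])
c = np.einsum("ijk,j,k->i", eps, a, b)
print(np.allclose(c, np.cross(a, b)))       # True
print(np.einsum("ikl,ikm->lm", eps, eps))   # 2 * identity, i.e. eps_ikl eps_ikm = 2 delta_lm
print(np.einsum("ikl,ikl->", eps, eps))     # 6
```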
So, an arbitrary function $\psi(r_1, r_2, \cdots)$ is transformed by the rotation into
$$\psi_1(r_1 + \delta r_1, r_2 + \delta r_2, \cdots) = \psi(r_1, r_2, \cdots) + \sum_a \delta\vec r_a\cdot\vec\nabla_a\psi \qquad (6.3)$$
$$= \psi(r_1, r_2, \cdots) + \sum_a (\delta\vec\phi\times\vec r_a)\cdot\vec\nabla_a\psi \qquad (6.4)$$
$$= \left(1 + \delta\vec\phi\cdot\sum_a \vec r_a\times\vec\nabla_a\right)\psi. \qquad (6.5)$$
Thus, we conclude that the operator
$$1 + \delta\vec\phi\cdot\sum_a \vec r_a\times\vec\nabla_a$$
is the operator for an infinitesimal rotation of a system of particles. Since δφ is a constant, we can show that this operator commutes with the Hamiltonian,
$$\left[\sum_a \vec r_a\times\vec\nabla_a,\; H\right] = 0. \qquad (6.6)$$
This implies a particular conservation law related to the isotropy of space. This is of course angular momentum, so that
$$\sum_a \vec r_a\times\vec\nabla_a \qquad (6.7)$$
must be at least proportional to the angular momentum operator, L. The exact relation is
$$\hbar\vec L = \vec r\times\vec p = -i\hbar\,\vec r\times\vec\nabla, \qquad (6.8)$$
which is much like its classical counterpart
$$\vec L_{cl} = \vec r\times\vec p = m\,\vec r\times\vec v. \qquad (6.9)$$
The operator is of course a vector quantity, meaning that it has direction. The components of the angular momentum vector are
$$\hbar L_x = y p_z - z p_y, \qquad (6.10)$$
$$\hbar L_y = z p_x - x p_z, \qquad (6.11)$$
$$\hbar L_z = x p_y - y p_x, \qquad (6.12)$$
or, compactly,
$$\hbar L_i = \epsilon_{ijk} x_j p_k. \qquad (6.13)$$
For a system in an external field, angular momentum is in general not conserved. However, if the field possesses spherical symmetry about a central point, all directions in space are equivalent and the angular momentum about this point is conserved. Likewise, in an axially symmetric field, motion about the axis is conserved. In fact, all the conservation laws which apply in classical mechanics have quantum mechanical analogues.

We now move on to compute the commutation rules between the $L_i$ operators and the x and p operators. First we note
$$[L_x, x] = [L_y, y] = [L_z, z] = 0, \qquad (6.14)$$
$$[L_x, y] = \frac{1}{\hbar}\left((yp_z - zp_y)y - y(yp_z - zp_y)\right) = -\frac{z}{\hbar}[p_y, y] = iz. \qquad (6.15)$$
In shorthand,
$$[L_i, x_k] = i\epsilon_{ikl} x_l. \qquad (6.16)$$
We also need to know how the various components commute with one another:
$$\hbar[L_x, L_y] = L_x(zp_x - xp_z) - (zp_x - xp_z)L_x \qquad (6.17)$$
$$= (L_xz - zL_x)p_x - x(L_xp_z - p_zL_x) \qquad (6.18)$$
$$= -iyp_x + ixp_y \qquad (6.19)$$
$$= i\hbar L_z, \qquad (6.20)$$
which we can summarize as
$$[L_y, L_z] = iL_x, \quad [L_z, L_x] = iL_y, \quad [L_x, L_y] = iL_z, \qquad (6.21-6.23)$$
or, compactly,
$$[L_i, L_j] = i\epsilon_{ijk} L_k. \qquad (6.24)$$
Now denote the square of the modulus of the total angular momentum by L², where
$$L^2 = L_x^2 + L_y^2 + L_z^2. \qquad (6.25)$$
Notice that this operator commutes with all the other $L_j$ operators,
$$[L^2, L_x] = [L^2, L_y] = [L^2, L_z] = 0. \qquad (6.26)$$
For example,
$$[L_x^2, L_z] = L_x[L_x, L_z] + [L_x, L_z]L_x = -i(L_xL_y + L_yL_x); \qquad (6.27)$$
also,
$$[L_y^2, L_z] = i(L_xL_y + L_yL_x). \qquad (6.28)$$
Thus,
$$[L^2, L_z] = 0. \qquad (6.29)$$
Thus, I can measure L² and $L_z$ simultaneously. (Actually, I can measure L² and any one component $L_k$ simultaneously. However, we usually pick this one as the z axis to make the math easier, as we shall soon see.)
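These commutation relations are easy to verify numerically; a minimal check using the l = 1 matrix representation (ħ = 1, basis ordered m = +1, 0, −1) is sketched below:

```python
import numpy as np

# Verify [Lx, Ly] = i Lz, [L^2, Lz] = 0, and L^2 = l(l+1) for l = 1 (hbar = 1).
s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]], dtype=complex)
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Lx, Ly), 1j * Lz))   # True
print(np.allclose(comm(L2, Lz), 0))         # True
print(np.allclose(L2, 2.0 * np.eye(3)))     # True: l(l+1) = 2
```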
A consequence of the fact that $L_x$, $L_y$, and $L_z$ do not commute is that the angular momentum vector $\vec L$ can never lie exactly along the z axis (or exactly along any other axis, for that matter). We can interpret this in a classical context as a vector of length $|L| = \hbar\sqrt{L(L+1)}$ with z component $\hbar m$. The vector is then constrained to lie in a cone, as shown in Fig. 6.1. We will take up this model at the end of this chapter in the semi-classical context.

It is also convenient to write $L_x$ and $L_y$ as the linear combinations
$$L_+ = L_x + iL_y, \qquad L_- = L_x - iL_y. \qquad (6.30)$$
(Recall what we did for harmonic oscillators?) It is easy to see that
$$[L_+, L_-] = 2L_z. \qquad (6.31)$$
Figure 6.1: Vector model for the quantum angular momentum state |jm⟩, represented here by a vector of length $|j| = (j(j+1))^{1/2}$ which precesses about the z axis (the axis of quantization) with projection m.

Likewise,
$$[L_z, L_+] = L_+, \qquad (6.32)$$
$$[L_z, L_-] = -L_-, \qquad (6.33)$$
$$L^2 = L_+L_- + L_z^2 - L_z = L_-L_+ + L_z^2 + L_z. \qquad (6.34)$$
We now give some frequently used expressions for the angular momentum operators of a single particle in spherical polar (SP) coordinates. In SP coordinates,
$$x = r\sin\theta\cos\phi, \qquad (6.35)$$
$$y = r\sin\theta\sin\phi, \qquad (6.36)$$
$$z = r\cos\theta. \qquad (6.37)$$
It is straightforward to demonstrate that
$$L_z = -i\frac{\partial}{\partial\phi} \qquad (6.38)$$
and
$$L_\pm = e^{\pm i\phi}\left(\pm\frac{\partial}{\partial\theta} + i\cot\theta\frac{\partial}{\partial\phi}\right). \qquad (6.39)$$
Thus,
$$L^2 = -\left[\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2} + \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}\right], \qquad (6.40)$$
which is the angular part of the Laplacian in SP coordinates,
$$\nabla^2 = \frac{1}{r^2}\left[\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} + \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right] \qquad (6.41)$$
$$= \frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} - \frac{L^2}{r^2}. \qquad (6.42)$$
In other words, the kinetic energy operator in SP coordinates is
$$-\frac{\hbar^2}{2m}\nabla^2 = -\frac{\hbar^2}{2m}\left[\frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} - \frac{L^2}{r^2}\right]. \qquad (6.43)$$

6.2 Eigenvalues of the Angular Momentum Operator

Using the SP form of $L_z$,
$$L_z\psi = -i\frac{\partial\psi}{\partial\phi} = l_z\psi. \qquad (6.44)$$
Thus, we conclude that $\psi = f(r,\theta)e^{il_z\phi}$. This must be single valued and thus periodic in φ with period 2π. Thus,
$$l_z = m = 0, \pm 1, \pm 2, \cdots \qquad (6.45)$$
Thus, we write the azimuthal solutions as
$$\Phi_m(\phi) = \frac{1}{\sqrt{2\pi}}e^{im\phi}, \qquad (6.46)$$
which are orthonormal functions:
$$\int_0^{2\pi}\Phi_m^*(\phi)\Phi_{m'}(\phi)\,d\phi = \delta_{mm'}. \qquad (6.47)$$
In a centrally symmetric case, stationary states which differ only in their m quantum number must have the same energy.

We now look for the eigenvalues and eigenfunctions of the L² operator belonging to a set of degenerate energy levels distinguished only by m. Since the +z axis is physically equivalent to the −z axis, for every +m there must be a −m. Let l denote the greatest possible m for a given L² eigenstate. This upper limit must exist because $L^2 - L_z^2 = L_x^2 + L_y^2$ is an operator for an essentially positive quantity, so its eigenvalues cannot be negative. We now apply $L_zL_\pm$ to $\psi_m$:
$$L_z(L_\pm\psi_m) = L_\pm(L_z \pm 1)\psi_m = (m \pm 1)(L_\pm\psi_m). \qquad (6.48)$$
(Note: we used $[L_z, L_\pm] = \pm L_\pm$.) Thus, $L_\pm\psi_m$ is an eigenfunction of $L_z$ with eigenvalue $m \pm 1$, i.e.
$$\psi_{m+1} \propto L_+\psi_m, \qquad (6.49)$$
$$\psi_{m-1} \propto L_-\psi_m. \qquad (6.50)$$
If m = l, then we must have $L_+\psi_l = 0$. Thus,
$$L_-L_+\psi_l = (L^2 - L_z^2 - L_z)\psi_l = 0, \qquad (6.51)$$
$$L^2\psi_l = (L_z^2 + L_z)\psi_l = l(l+1)\psi_l. \qquad (6.52)$$
Thus, the eigenvalues of the L² operator are l(l+1) for l any non-negative integer (including 0). For a given value of l, the component $L_z$ can take the values
$$l, l-1, \cdots, 0, \cdots, -l, \qquad (6.53)$$
or 2l+1 different values. Thus an energy level with angular momentum l has 2l+1 degenerate states.
6.3 Eigenstates of L²

Since l and m are the good quantum numbers, we denote the eigenstates of L² as
$$L^2|lm\rangle = l(l+1)|lm\rangle. \qquad (6.54)$$
After specifying l, we will often write this in shorthand as
$$L^2|m\rangle = l(l+1)|m\rangle. \qquad (6.55)$$
Since $L^2 = L_+L_- + L_z^2 - L_z$, we have
$$\langle m|L^2|m\rangle = m^2 - m + \sum_{m'}\langle m|L_+|m'\rangle\langle m'|L_-|m\rangle = l(l+1). \qquad (6.56)$$
Also, note that
$$\langle m-1|L_-|m\rangle = \langle m|L_+|m-1\rangle^*, \qquad (6.57)$$
thus we have
$$|\langle m|L_+|m-1\rangle|^2 = l(l+1) - m(m-1). \qquad (6.58)$$
Choosing the phase (the Condon-Shortley phase convention) so that
$$\langle m-1|L_-|m\rangle = \langle m|L_+|m-1\rangle, \qquad (6.59)$$
we obtain
$$\langle m|L_+|m-1\rangle = \sqrt{l(l+1) - m(m-1)} = \sqrt{(l+m)(l-m+1)}. \qquad (6.60)$$
Using this relation, we note that
$$\langle m|L_x|m-1\rangle = \langle m-1|L_x|m\rangle = \frac{1}{2}\sqrt{(l+m)(l-m+1)}, \qquad (6.61)$$
$$\langle m|L_y|m-1\rangle = -\langle m-1|L_y|m\rangle = -\frac{i}{2}\sqrt{(l+m)(l-m+1)}. \qquad (6.62)$$
Thus, the diagonal elements of $L_x$ and $L_y$ are zero in states with a definite value of $L_z = m$.
6.4 Eigenfunctions of L²

The wavefunction of a particle is not entirely determined when l and m are prescribed; we still need to specify the radial component. None of the angular momentum operators (in SP coordinates) contains any explicit r dependence, so for the time being we take r to be fixed and denote the angular momentum eigenfunctions in SP coordinates as $Y_{lm}(\theta,\phi)$ with normalization
$$\int |Y_{lm}(\theta,\phi)|^2\,d\Omega = 1, \qquad (6.63)$$
where $d\Omega = \sin\theta\,d\theta\,d\phi = d(\cos\theta)\,d\phi$ and the integral is over all solid angles. Since we can determine common eigenfunctions for L² and $L_z$, there must be a separation of the variables θ and φ, so we seek solutions of the form
$$Y_{lm}(\theta,\phi) = \Phi_m(\phi)\Theta_{lm}(\theta). \qquad (6.64)$$
The normalization requirements are
$$\int_0^\pi |\Theta_{lm}(\theta)|^2\sin\theta\,d\theta = 1 \qquad (6.65)$$
and
$$\int_0^{2\pi}\int_0^\pi Y^*_{l'm'}Y_{lm}\,d\Omega = \delta_{ll'}\delta_{mm'}. \qquad (6.66)$$
I thus seek solutions of
$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right]\psi + l(l+1)\psi = 0, \qquad (6.67)$$
i.e.
$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} - \frac{m^2}{\sin^2\theta} + l(l+1)\right]\Theta_{lm}(\theta) = 0, \qquad (6.68)$$
which is well known from the theory of spherical harmonics:
$$\Theta_{lm}(\theta) = (-1)^m i^l\sqrt{\frac{(2l+1)(l-m)!}{2(l+m)!}}\,P_l^m(\cos\theta) \qquad (6.69)$$
for m ≥ 0, where the $P_l^m$ are associated Legendre polynomials. For m < 0 we get
$$\Theta_{l,-|m|} = (-1)^m\Theta_{l,|m|}. \qquad (6.70)$$
Thus, the angular momentum eigenfunctions are the spherical harmonics, normalized so that the matrix relations defined above hold true. The complete expression is
$$Y_{lm} = (-1)^{(m+|m|)/2}\, i^l\left[\frac{2l+1}{4\pi}\frac{(l-|m|)!}{(l+|m|)!}\right]^{1/2} P_l^{|m|}(\cos\theta)\,e^{im\phi}. \qquad (6.71)$$
Table 6.1: Spherical harmonics (Condon-Shortley phase convention).
$$Y_{00} = \frac{1}{\sqrt{4\pi}}$$
$$Y_{1,0} = \left(\frac{3}{4\pi}\right)^{1/2}\cos\theta$$
$$Y_{1,\pm 1} = \mp\left(\frac{3}{8\pi}\right)^{1/2}\sin\theta\,e^{\pm i\phi}$$
$$Y_{2,0} = \left(\frac{5}{4\pi}\right)^{1/2}\left(\frac{3}{2}\cos^2\theta - \frac{1}{2}\right)$$
$$Y_{2,\pm 1} = \mp 3\left(\frac{5}{24\pi}\right)^{1/2}\sin\theta\cos\theta\,e^{\pm i\phi}$$
$$Y_{2,\pm 2} = 3\left(\frac{5}{96\pi}\right)^{1/2}\sin^2\theta\,e^{\pm 2i\phi}$$
These can also be generated by the SphericalHarmonicY[l, m, θ, φ] function in Mathematica.
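The same entries can be spot-checked in Python with scipy. Note scipy's argument convention for sph_harm, which lists the azimuthal angle before the polar angle and includes the Condon-Shortley phase; the test angles below are arbitrary:

```python
import numpy as np
from scipy.special import sph_harm

theta, phi = 0.7, 1.9   # polar, azimuthal test angles

# scipy convention: sph_harm(m, l, azimuthal, polar)
Y10 = sph_harm(0, 1, phi, theta)
print(np.isclose(Y10, np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)))   # True

Y11 = sph_harm(1, 1, phi, theta)
print(np.isclose(Y11, -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)))  # True
```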
Figure 6.2: Spherical harmonic functions up to l = 2. The color indicates the phase of the function.
For the case of m = 0,
$$Y_{l0} = i^l\left(\frac{2l+1}{4\pi}\right)^{1/2} P_l(\cos\theta). \qquad (6.72)$$
Other useful relations are the Cartesian forms, obtained by using the relations
$$\cos\theta = \frac{z}{r}, \qquad (6.73)$$
$$\sin\theta\cos\phi = \frac{x}{r}, \qquad (6.74)$$
$$\sin\theta\sin\phi = \frac{y}{r}: \qquad (6.75)$$
$$Y_{1,0} = \left(\frac{3}{4\pi}\right)^{1/2}\frac{z}{r}, \qquad (6.76)$$
$$Y_{1,1} = -\left(\frac{3}{8\pi}\right)^{1/2}\frac{x+iy}{r}, \qquad (6.77)$$
$$Y_{1,-1} = \left(\frac{3}{8\pi}\right)^{1/2}\frac{x-iy}{r}. \qquad (6.78)$$
The orthogonality integral of the $Y_{lm}$ functions is given by
$$\int_0^{2\pi}\int_0^\pi Y^*_{lm}(\theta,\phi)Y_{l'm'}(\theta,\phi)\sin\theta\,d\theta\,d\phi = \delta_{ll'}\delta_{mm'}. \qquad (6.79)$$
Another useful relation is that
$$Y_{l,-m} = (-1)^m Y^*_{lm}. \qquad (6.80)$$
This relation is useful in deriving real-valued combinations of the spherical harmonic functions.
Exercise 6.1 Demonstrate the following:

1. $[L_+, L^2] = 0$

2. $[L_-, L^2] = 0$

Exercise 6.2 Derive the following relations:
$$\psi_{l,m}(\theta,\phi) = \sqrt{\frac{(l+m)!}{(2l)!\,(l-m)!}}\,(L_-)^{l-m}\,\psi_{l,l}(\theta,\phi)$$
and
$$\psi_{l,m}(\theta,\phi) = \sqrt{\frac{(l-m)!}{(2l)!\,(l+m)!}}\,(L_+)^{l+m}\,\psi_{l,-l}(\theta,\phi),$$
where $\psi_{l,m} = Y_{l,m}$ are eigenstates of the L² operator.
6.5 Addition theorem and matrix elements

In the quantum mechanics of rotations, we will come across integrals of the general form
$$\int Y^*_{l_1m_1}Y_{l_2m_2}Y_{l_3m_3}\,d\Omega$$
or
$$\int Y^*_{l_1m_1}P_{l_2}Y_{l_3m_3}\,d\Omega$$
in computing matrix elements between angular momentum states. For example, we may be asked to compute the matrix elements for dipole-induced transitions between rotational states of a spherical molecule or between different orbital angular momentum states of an atom. In either case, we need to evaluate an integral/matrix element of the form
$$\langle l_1m_1|z|l_2m_2\rangle = \int Y^*_{l_1m_1}\,z\,Y_{l_2m_2}\,d\Omega. \qquad (6.81)$$
Realizing that $z = r\cos\theta = r\sqrt{4\pi/3}\,Y_{10}(\theta,\phi)$, Eq. 6.81 becomes
$$\langle l_1m_1|z|l_2m_2\rangle = \sqrt{\frac{4\pi}{3}}\,r\int Y^*_{l_1m_1}Y_{10}Y_{l_2m_2}\,d\Omega. \qquad (6.82)$$
Integrals of this form can be evaluated by group theoretical analysis and involve the introduction of Clebsch-Gordan coefficients, $C^{LM}_{l_1m_1l_2m_2}$,¹ which are tabulated in various places or can be computed using Mathematica. In short, some basic rules will always apply:

1. The integral will vanish unless the vector sum of the angular momenta sums to zero, i.e. $|l_1 - l_3| \le l_2 \le (l_1 + l_3)$. This is the "triangle" rule and basically means you have to be able to make a triangle with the length of each side being $l_1$, $l_2$, and $l_3$.

2. The integral will vanish unless $m_2 + m_3 = m_1$. This reflects the conservation of the z component of the angular momentum.

3. The integral vanishes unless $l_1 + l_2 + l_3$ is an even integer. This is a parity conservation law.

So the general procedure for performing any calculation involving spherical harmonics is to first check whether the matrix element violates any of the three symmetry rules; if so, then the answer is 0 and you're done.²
To actually perform the integration, we first write the product of two of the $Y_{lm}$'s as a Clebsch-Gordan expansion:
$$Y_{l_1m_1}Y_{l_2m_2} = \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\,Y_{LM}. \qquad (6.83)$$

¹ Our notation is based upon Varshalovich's book. There are at least 13 different notations that I know of for expressing these coefficients, which I list in a table at the end of this chapter.
² In Mathematica, the Clebsch-Gordan coefficients are computed using the function ClebschGordan[{j1,m1},{j2,m2},{j,m}] for the decomposition of |jm⟩ into |j1m1⟩ and |j2m2⟩.
We can use this to write
$$\int Y^*_{lm}Y_{l_1m_1}Y_{l_2m_2}\,d\Omega = \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\int Y^*_{lm}Y_{LM}\,d\Omega$$
$$= \sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\,C^{L0}_{l_10l_20}\,C^{LM}_{l_1m_1l_2m_2}\,\delta_{lL}\delta_{mM}$$
$$= \sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2l+1)}}\,C^{l0}_{l_10l_20}\,C^{lm}_{l_1m_1l_2m_2}. \qquad (6.84)$$
In fact, the expansion we have done above for the product of two spherical harmonics can be inverted to yield the decomposition of one angular momentum state into a pair of coupled angular momentum states, such as would be the case for combining the orbital angular momentum of a particle with, say, its spin angular momentum. In Dirac notation this becomes fairly apparent:
$$|LM\rangle = \sum_{m_1m_2}\langle l_1m_1l_2m_2|LM\rangle\,|l_1m_1l_2m_2\rangle, \qquad (6.85)$$
where the state $|l_1m_1l_2m_2\rangle$ is the product of the two angular momentum states $|l_1m_1\rangle$ and $|l_2m_2\rangle$. The expansion coefficients are the Clebsch-Gordan coefficients
$$C^{LM}_{l_1m_1l_2m_2} = \langle l_1m_1l_2m_2|LM\rangle. \qquad (6.86)$$
Now, let's go back to the problem of computing the dipole transition matrix element between two angular momentum states in Eq. 6.81. The integral we wish to evaluate is
$$\langle l_1m_1|z|l_2m_2\rangle = \int Y^*_{l_1m_1}\,z\,Y_{l_2m_2}\,d\Omega, \qquad (6.87)$$
and we noted that z is related to the $Y_{10}$ spherical harmonic, so the integral over the angular coordinates involves
$$\int Y^*_{l_1m_1}Y_{10}Y_{l_2m_2}\,d\Omega. \qquad (6.88)$$
First, we evaluate which matrix elements are going to be permitted by symmetry.

1. Clearly, by the triangle inequality, $|l_1 - l_2| = 1$. In other words, we change the angular momentum quantum number by only ±1.

2. Also, by the second criterion, $m_1 = m_2$.

3. Finally, by the third criterion, $l_1 + l_2 + 1$ must be even, which again implies that $l_1$ and $l_2$ differ by 1.

Thus the integral becomes
$$\int Y^*_{l+1,m}Y_{10}Y_{lm}\,d\Omega = \sqrt{\frac{(2l+1)(2+1)}{4\pi(2l+3)}}\,C^{l+1,0}_{l010}\,C^{1m}_{l+1,m\,l0}. \qquad (6.89)$$
From tables,
$$C^{l+1,0}_{l010} = -\frac{\sqrt{2}\,(1+l)}{\sqrt{2+2l}\,\sqrt{3+2l}}$$
and
$$C^{1m}_{l+1,m\,l0} = -\frac{\sqrt{2}\,\sqrt{1+l-m}\,\sqrt{1+l+m}}{\sqrt{2+2l}\,\sqrt{3+2l}},$$
so
$$C^{l+1,0}_{l010}\,C^{1m}_{l+1,m\,l0} = \frac{2(1+l)\sqrt{1+l-m}\,\sqrt{1+l+m}}{(2+2l)(3+2l)}.$$
Thus,
$$\int Y^*_{l+1,m}Y_{10}Y_{lm}\,d\Omega = \sqrt{\frac{3}{4\pi}}\sqrt{\frac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}}. \qquad (6.90)$$
Finally, we can construct the matrix element for dipole transitions as
$$\langle l_1m_1|z|l_2m_2\rangle = r\sqrt{\frac{(l+m+1)(l-m+1)}{(2l+1)(2l+3)}}\;\delta_{l_1\pm 1,\,l_2}\,\delta_{m_1,m_2}. \qquad (6.91)$$
Physically, this makes sense because a photon carries a single quantum of angular momentum, so in order for a molecule or atom to emit or absorb a photon, its angular momentum can only change by ±1.
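Eq. 6.90 can be verified symbolically; the sketch below uses sympy's gaunt function, which returns the integral of three unconjugated spherical harmonics, together with the relation $Y^*_{lm} = (-1)^m Y_{l,-m}$:

```python
from sympy import sqrt, pi, Rational
from sympy.physics.wigner import gaunt

# Spot-check Eq. 6.90 for l = 0..3 and all allowed m.
for l in range(4):
    for m in range(-l, l + 1):
        sign = (-1) ** (m % 2)                       # from Y*_{l+1,m} = (-1)^m Y_{l+1,-m}
        lhs = sign * gaunt(l + 1, 1, l, -m, 0, m)    # integral of Y*_{l+1,m} Y_10 Y_lm
        rhs = sqrt(3 / (4 * pi)) * sqrt(Rational((l + m + 1) * (l - m + 1),
                                                 (2 * l + 1) * (2 * l + 3)))
        assert (lhs - rhs).equals(0)
print("Eq. 6.90 checked for l = 0 ... 3")
```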
Exercise 6.3 Verify the following relations:
$$\int Y^*_{l+1,m+1}Y_{11}Y_{lm}\,d\Omega = \sqrt{\frac{3}{8\pi}}\sqrt{\frac{(l+m+1)(l+m+2)}{(2l+1)(2l+3)}}, \qquad (6.92)$$
$$\int Y^*_{l-1,m-1}Y_{11}Y_{lm}\,d\Omega = -\sqrt{\frac{3}{8\pi}}\sqrt{\frac{(l-m)(l-m-1)}{(2l-1)(2l+1)}}, \qquad (6.93)$$
$$\int Y^*_{lm}Y_{00}Y_{lm}\,d\Omega = \frac{1}{\sqrt{4\pi}}. \qquad (6.94)$$
6.6 Legendre Polynomials and Associated Legendre Polynomials

Ordinary Legendre polynomials are generated by
$$P_l(\cos\theta) = \frac{1}{2^l l!}\frac{d^l}{(d\cos\theta)^l}(\cos^2\theta - 1)^l, \qquad (6.95)$$
i.e. (with $x = \cos\theta$)
$$P_l(x) = \frac{1}{2^l l!}\frac{\partial^l}{\partial x^l}(x^2 - 1)^l, \qquad (6.96)$$
and satisfy
$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + l(l+1)\right]P_l = 0. \qquad (6.97)$$
The associated Legendre polynomials are derived from the Legendre polynomials via
$$P_l^m(\cos\theta) = \sin^m\theta\,\frac{\partial^m}{(\partial\cos\theta)^m}P_l(\cos\theta). \qquad (6.98)$$
6.7 Quantum rotations in a semi-classical context

Earlier we established the fact that the angular momentum vector can never lie exactly along a single spatial axis. By convention we take the quantization axis to be the z axis, but this is arbitrary: we can pick any axis as the quantization axis, it is just that picking the z axis makes the mathematics much simpler. Furthermore, we established that the maximum projection of the angular momentum vector along the z axis is the eigenvalue of $L_z$ when m = l, so $\langle L_z\rangle = l$, which is less than $\sqrt{l(l+1)}$. Note, however, that we can write the eigenvalue of L² as $l^2(1 + 1/l)$. As l becomes very large, the eigenvalue of $L_z$ and the square root of the eigenvalue of L² become nearly identical. The 1/l term is, in a sense, a quantum mechanical effect resulting from the uncertainty in determining the precise direction of $\vec L$.

We can develop a more quantitative model for this by examining both the uncertainty product and the semi-classical limit of the angular momentum distribution function. First, recall that if we have an observable A, then the spread in measurements of A is given by the variance,
$$\Delta A^2 = \langle(A - \langle A\rangle)^2\rangle = \langle A^2\rangle - \langle A\rangle^2. \qquad (6.99)$$
In any representation in which A is diagonal, $\Delta A^2 = 0$ and we can determine A to any level of precision. But if we look at the sum of the variances of $L_x$ and $L_y$, we see
$$\Delta L_x^2 + \Delta L_y^2 = l(l+1) - m^2. \qquad (6.100)$$
So for fixed values of l and m, the sum of the two variances is constant and reaches its minimum when |m| = l, corresponding to the case when the vector points as close to the ±z axis as it possibly can. The conclusion we reach is that the angular momentum vector lies somewhere on a cone whose apex half-angle θ satisfies the relation
$$\cos\theta = \frac{m}{\sqrt{l(l+1)}}, \qquad (6.101)$$
which we can verify geometrically. So for m = l, as l becomes very large,
$$\frac{l}{\sqrt{l(l+1)}} = \frac{1}{\sqrt{1 + 1/l}} \to 1 \qquad (6.102)$$
and θ → 0, corresponding to the case in which the angular momentum vector lies perfectly along the z axis.

Exercise 6.4 Prove Eq. 6.100 by writing $\langle L^2\rangle = \langle L_x^2\rangle + \langle L_y^2\rangle + \langle L_z^2\rangle$.
To develop this further, let's look at the asymptotic behaviour of the spherical harmonics at large values of the angular momentum. The angular part of the spherical harmonic function satisfies
$$\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta} + l(l+1) - \frac{m^2}{\sin^2\theta}\right]\Theta_{lm} = 0. \qquad (6.103)$$
For m = 0 this reduces to the differential equation for the Legendre polynomials,
$$\left[\frac{\partial^2}{\partial\theta^2} + \cot\theta\frac{\partial}{\partial\theta} + l(l+1)\right]P_l(\cos\theta) = 0. \qquad (6.104)$$
If we make the substitution
$$P_l(\cos\theta) = \frac{\chi_l(\theta)}{(\sin\theta)^{1/2}}, \qquad (6.105)$$
then we wind up with a similar equation for $\chi_l(\theta)$,
$$\left[\frac{\partial^2}{\partial\theta^2} + (l + 1/2)^2 + \frac{\csc^2\theta}{4}\right]\chi_l = 0. \qquad (6.106)$$
For very large l, the $(l+1/2)^2$ term dominates and we can ignore the $\csc^2\theta$ term everywhere except for angles close to θ = 0 or θ = π. If we do so, then our differential equation becomes
$$\left[\frac{\partial^2}{\partial\theta^2} + (l + 1/2)^2\right]\chi_l = 0, \qquad (6.107)$$
which has the solution
$$\chi_l(\theta) = A_l\sin((l + 1/2)\theta + \alpha), \qquad (6.108)$$
where $A_l$ and α are constants we need to determine from the boundary conditions of the problem. For large l and for $\theta \gg l^{-1}$ and $\pi - \theta \gg l^{-1}$, one obtains
$$P_l(\cos\theta) \approx A_l\frac{\sin((l + 1/2)\theta + \alpha)}{(\sin\theta)^{1/2}}. \qquad (6.109)$$
Similarly,
$$Y_{l0}(\theta,\phi) \approx \left(\frac{l + 1/2}{2\pi}\right)^{1/2} A_l\frac{\sin((l + 1/2)\theta + \alpha)}{(\sin\theta)^{1/2}}, \qquad (6.110)$$
so that the angular probability distribution is
$$|Y_{l0}|^2 = \left(\frac{l + 1/2}{2\pi}\right) A_l^2\,\frac{\sin^2((l + 1/2)\theta + \alpha)}{\sin\theta}. \qquad (6.111)$$
When l is very large, the $\sin^2((l+1/2)\theta + \alpha)$ factor is extremely oscillatory and we can replace it by its average value of 1/2. Then, if we require the integral of our approximation for $|Y_{l0}|^2$ to be normalized, one obtains
$$|Y_{l0}|^2 \approx \frac{1}{2\pi^2\sin\theta}, \qquad (6.112)$$
which holds for large values of l and all values of θ except θ = 0 or θ = π.

We can also recover this result from a purely classical model. In classical mechanics, the particle moves in a circular orbit in a plane perpendicular to the angular momentum vector. For m = 0 this vector lies in the xy plane, and we will define θ as the angle between the particle and the z axis and φ as the azimuthal angle of the angular momentum vector in the xy plane. Since the particle's speed is uniform, its distribution in θ is uniform. Thus the probability of finding the particle at any instant in time between θ and θ + dθ is dθ/π. Furthermore, we have not specified the azimuthal angle, so we assume that the probability distribution is also uniform over φ, and the angular probability dθ/π must be smeared over a band on the unit sphere defined by the angles θ and θ + dθ. The area of this band is $2\pi\sin\theta\,d\theta$. Thus, we can define the "classical estimate" of the distribution as a probability per unit area,
$$P(\theta) = \frac{d\theta/\pi}{2\pi\sin\theta\,d\theta} = \frac{1}{2\pi^2\sin\theta},$$
which is in agreement with the estimate we made above.
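The agreement can also be seen numerically: for large l the rapidly oscillating exact $|Y_{l0}|^2$ averages to the smooth classical estimate. The value of l and the angular window below are arbitrary choices:

```python
import numpy as np
from scipy.special import sph_harm

# Compare |Y_l0|^2 with 1/(2 pi^2 sin(theta)) for a large l, away from the poles.
l = 40
theta = np.linspace(0.2, np.pi - 0.2, 400)          # polar angle grid
exact = np.abs(sph_harm(0, l, 0.0, theta))**2        # scipy: sph_harm(m, l, azim, polar)
classical = 1.0 / (2.0 * np.pi**2 * np.sin(theta))

# The window averages of the oscillatory exact curve and the smooth classical
# curve nearly coincide.
print(np.mean(exact), np.mean(classical))
```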
For m ≠ 0 we have to work a bit harder, since the angular momentum vector is tilted out of the plane. For this we define two new angles: γ, the azimuthal rotation of the particle's position about the $\vec L$ vector, and α, which is constrained by the length of the angular momentum vector and its projection onto the z axis,
$$\cos\alpha = \frac{m}{\sqrt{l(l+1)}} \approx \frac{m}{l}.$$
The analysis is identical to before, with the addition of the fact that the probability in γ (taken to be uniform) is spread over a zone $2\pi\sin\theta\,d\theta$. Thus the probability of finding the particle at some angle θ is
$$P(\theta) = \frac{d\gamma}{d\theta}\frac{1}{2\pi^2\sin\theta}.$$
Since γ is the dihedral angle between the plane containing z and $\vec L$ and the plane containing $\vec L$ and $\vec r$ (the particle's position vector), we can relate γ to θ and α by
$$\cos\theta = \cos\alpha\cos\frac{\pi}{2} + \sin\alpha\sin\frac{\pi}{2}\cos\gamma = \sin\alpha\cos\gamma.$$
Thus,
$$\sin\theta\,d\theta = \sin\alpha\sin\gamma\,d\gamma.$$
This allows us to generalize our probability distribution to any value of m:
$$|Y_{lm}(\theta,\phi)|^2 = \frac{1}{2\pi^2\sin\alpha\sin\gamma} \qquad (6.113)$$
$$= \frac{1}{2\pi^2(\sin^2\alpha - \cos^2\theta)^{1/2}}, \qquad (6.114)$$
which holds so long as $\sin^2\alpha > \cos^2\theta$. This corresponds to the spatial region $(\pi/2 - \alpha) < \theta < (\pi/2 + \alpha)$. Outside this region the distribution blows up and corresponds to the classically forbidden region.

In Fig. 6.3 we compare the results of our semi-classical model with the exact results for l = 4 and l = 10. All in all, we do pretty well with the semi-classical model; we miss some of the wiggles and the distribution is sharp close to the boundaries, but the generic features are all there.

Figure 6.3: Classical and quantum probability distribution functions for angular momentum, comparing $|Y_{4,0}|^2$, $|Y_{4,2}|^2$, $|Y_{4,4}|^2$ and $|Y_{10,0}|^2$, $|Y_{10,2}|^2$, $|Y_{10,10}|^2$ (plotted versus θ from 0 to π) with the semi-classical estimates.
Table 6.2: Relation between various notations for Clebsch-Gordan coefficients in the literature.

Symbol — Author
$C^{jm}_{j_1m_1j_2m_2}$ — Varshalovich (a)
$S^{j_1j_2}_{jm_1jm_2}$ — Wigner (b)
$A^{jj_1j_2}_{mm_1m_2}$ — Eckart (c)
$C^{j}_{m_1m_2}$ — Van der Waerden (d)
$(j_1j_2m_1m_2|j_1j_2jm)$ — Condon and Shortley (e)
$C^{j_1j_2}_{jm}(m_1m_2)$ — Fock (f)
$X(j, m, j_1, j_2, m_1)$ — Boys (g)
$C(jm; m_1m_2)$ — Blatt and Weisskopf (h)
$C^{j_1j_2j}_{m_1m_2m}$ — Biedenharn (i)
$C(j_1j_2j, m_1m_2)$ — Rose (j)
$\left[\begin{smallmatrix} j_1 & j_2 & j \\ m_1 & m_2 & m \end{smallmatrix}\right]$ — Yutsis and Bandzaitis (k)
$\langle j_1m_1j_2m_2|(j_1j_2)jm\rangle$ — Fano (l)

(a) D. A. Varshalovich et al., Quantum Theory of Angular Momentum (World Scientific, 1988).
(b) E. Wigner, Group Theory (Academic Press, 1959).
(c) C. Eckart, "The application of group theory to the quantum dynamics of monatomic systems," Rev. Mod. Phys. 2, 305 (1930).
(d) B. L. Van der Waerden, Die gruppentheoretische Methode in der Quantenmechanik (Springer, 1932).
(e) E. Condon and G. Shortley, Theory of Atomic Spectra (Cambridge, 1932).
(f) V. A. Fock, "New deduction of the vector model," JETP 10, 383 (1940).
(g) S. F. Boys, "Electronic wave functions IV," Proc. Roy. Soc. London A207, 181 (1951).
(h) J. M. Blatt and V. F. Weisskopf, Theoretical Nuclear Physics (McGraw-Hill, 1952).
(i) L. C. Biedenharn, "Tables of Racah Coefficients," ORNL-1098 (1952).
(j) M. E. Rose, Multipole Fields (Wiley, 1955).
(k) A. P. Yutsis and A. A. Bandzaitis, The Theory of Angular Momentum in Quantum Mechanics (Mintis, Vilnius, 1965).
(l) U. Fano, "Statistical matrix techniques and their application to the directional correlation of radiation," US Nat'l Bureau of Standards Report 1214 (1951).
6.8 Motion in a central potential: The Hydrogen Atom

(under development)

The solution of the Schrödinger equation for the hydrogen atom was perhaps one of the most significant developments in quantum theory. Since it is one of the few problems in nature for which we can derive an exact solution to the equations of motion, it deserves special attention and focus. Perhaps more importantly, the hydrogen atomic orbitals form the basis of atomic physics and quantum chemistry.

The potential energy function between the proton and the electron is the centrosymmetric Coulomb potential
$$V(r) = -\frac{Ze^2}{r}.$$
Since the potential is centrosymmetric and has no angular dependence, the hydrogen atom Hamiltonian separates into radial and angular components,
$$H = -\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} - \frac{L^2}{\hbar^2 r^2}\right] - \frac{e^2}{r}, \qquad (6.115)$$
where L² is the angular momentum operator we all know and love by now, and µ is the reduced mass of the electron-proton system,
$$\mu = \frac{m_e m_p}{m_e + m_p} \approx m_e = 1 \text{ (in atomic units)}.$$
Since [H, L] = 0, the total angular momentum and one component of the angular momentum are constants of the motion. Since there are three separable degrees of freedom, we have one other constant of motion, which must correspond to the radial motion. As a consequence, the hydrogen wavefunction is separable into radial and angular components,
$$\psi_{nlm} = R_{nl}(r)Y_{lm}(\theta,\phi). \qquad (6.116)$$
Using the Hamiltonian in Eq. 6.115 and this wavefunction, the radial Schrödinger equation reads (in atomic units)
$$\left[-\frac{\hbar^2}{2}\left(\frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} - \frac{l(l+1)}{r^2}\right) - \frac{1}{r}\right]R_{nl}(r) = E\,R_{nl}(r). \qquad (6.117)$$
At this point, we introduce atomic units to make the notation more compact and to drastically simplify calculations. In atomic units, ħ = 1 and e = 1. A list of conversions for energy, length, etc. to SI units is given in the appendix. The motivation is so that all of our numbers are of order 1.

The kinetic energy term can be rearranged a bit,
$$\frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}, \qquad (6.118)$$
and the radial equation written as
$$\left[-\frac{\hbar^2}{2}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{l(l+1)}{r^2}\right) - \frac{1}{r}\right]R_{nl}(r) = E\,R_{nl}(r). \qquad (6.119)$$
To solve this equation, we first have to figure out what approximate form the wavefunction must have. For large values of r, the 1/r terms disappear and the asymptotic equation is
$$-\frac{\hbar^2}{2}\frac{\partial^2}{\partial r^2}R_{nl}(r) = E\,R_{nl}(r), \qquad (6.120)$$
or
$$\frac{\partial^2 R}{\partial r^2} = \alpha^2 R, \qquad (6.121)$$
where $\alpha^2 = -2mE/\hbar^2$. We have seen this differential equation before for the free particle, so the solution must have the same form, except that in this case the function is real. Furthermore, for bound states with E < 0 the radial solution must go to zero as r → ∞, so of the two possible asymptotic solutions, the exponentially damped term is the correct one,
$$R(r) \sim e^{-\alpha r}. \qquad (6.122)$$
Now we have to check whether this is a solution everywhere. So, we take the asymptotic solution and plug it into the complete equation:
$$\alpha^2 e^{-\alpha r} + \frac{2}{r}(-\alpha e^{-\alpha r}) + \frac{2m}{\hbar^2}\left(\frac{e^2}{r} + E\right)e^{-\alpha r} = 0. \qquad (6.123)$$
Eliminating $e^{-\alpha r}$,
$$\left(\alpha^2 + \frac{2mE}{\hbar^2}\right) + \frac{1}{r}\left(\frac{2me^2}{\hbar^2} - 2\alpha\right) = 0. \qquad (6.124)$$
For the solution to hold everywhere, it must also hold at r = 0, so two conditions must be met:
$$\alpha^2 = -2mE/\hbar^2, \qquad (6.125)$$
which we defined above, and
$$\frac{2me^2}{\hbar^2} - 2\alpha = 0. \qquad (6.126)$$
If these conditions are met, then $e^{-\alpha r}$ is a solution. The last equation also sets the length scale of the system, since
$$\alpha = me^2/\hbar^2 = 1/a_o, \qquad (6.127)$$
where $a_o$ is the Bohr radius. In atomic units, $a_o = 1$. Likewise, the energy can be determined:
$$E = -\frac{\hbar^2}{2ma_o^2} = -\frac{e^2}{2a_o}. \qquad (6.128)$$
In atomic units, the ground state energy is E = −1/2 hartree.
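This little calculation is easy to confirm symbolically; the sketch below checks that $e^{-r}$ solves the l = 0 radial equation in atomic units with E = −1/2:

```python
import sympy as sp

# Verify that R = exp(-r) satisfies -(1/2)(R'' + (2/r) R') - R/r = E R with E = -1/2.
r = sp.symbols("r", positive=True)
R = sp.exp(-r)
lhs = -sp.Rational(1, 2) * (sp.diff(R, r, 2) + 2 / r * sp.diff(R, r)) - R / r
print(sp.simplify(lhs / R))   # -> -1/2, the ground-state energy in hartrees
```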
Finally, we have to normalize R:
$$\int d^3r\,e^{-2\alpha r} = 4\pi\int_0^\infty r^2 e^{-2\alpha r}\,dr. \qquad (6.129)$$
The angular normalization can be absorbed into the spherical harmonic term in the total wavefunction, since $Y_{00} = 1/\sqrt{4\pi}$. So, the ground state wavefunction is
$$\psi_{100} = N e^{-r/a_o}\,Y_{00}. \qquad (6.130)$$
The radial integral can be evaluated using Leibniz' theorem for differentiation of a definite integral,
$$\frac{\partial}{\partial\beta}\int_a^b f(\beta, x)\,dx = \int_a^b\frac{\partial f(\beta,x)}{\partial\beta}\,dx. \qquad (6.131)$$
Thus,
$$\int_0^\infty r^2 e^{-\beta r}\,dr = \frac{\partial^2}{\partial\beta^2}\int_0^\infty e^{-\beta r}\,dr = \frac{\partial^2}{\partial\beta^2}\frac{1}{\beta} = \frac{2}{\beta^3}. \qquad (6.132)$$

Exercise 6.5 Generalize this result to show that
$$\int_0^\infty r^n e^{-\beta r}\,dr = \frac{n!}{\beta^{n+1}}. \qquad (6.133)$$

Thus, using this result and putting it all together, the normalized radial wavefunction is
$$R_{10} = 2\left(\frac{1}{a_o}\right)^{3/2} e^{-r/a_o}. \qquad (6.134)$$

For the higher energy states, we examine what happens as r → 0. Using an analysis similar to the one above, one can show that close in, the radial solution must behave like a polynomial,
$$R \sim r^l,$$
which leads to a general solution of the form
$$R = r^l e^{-\alpha r}\sum_{s=0}^\infty a_s r^s.$$
The procedure is to substitute this back into the Schrödinger equation and evaluate term by term. In the end, one finds that the energies of the bound states are (in atomic units)
$$E_n = -\frac{1}{2n^2} \qquad (6.135)$$
and the radial wavefunctions are
$$R_{nl} = \left(\frac{2r}{na_o}\right)^l\left(\frac{2}{na_o}\right)^{3/2}\sqrt{\frac{(n-l-1)!}{2n\left[(n+l)!\right]^3}}\;e^{-r/na_o}\,L^{2l+1}_{n+l}\!\left(\frac{2r}{na_o}\right),$$
where the $L^b_a$ are the associated Laguerre polynomials.
6.8.1 Radial Hydrogenic Functions

The radial wavefunctions for a nucleus with atomic number Z are modified hydrogenic wavefunctions with the Bohr radius scaled by Z, i.e. $a = a_o/Z$. The energy for one electron about a nucleus with Z protons is
$$E_n = -\frac{Z^2}{n^2}\frac{e^2}{2a_o} = -\frac{Z^2}{n^2}\,\mathrm{Ry}. \qquad (6.136)$$
Some radial wavefunctions are
$$R_{1s} = 2\left(\frac{Z}{a_o}\right)^{3/2} e^{-Zr/a_o}, \qquad (6.137)$$
$$R_{2s} = \frac{1}{\sqrt{2}}\left(\frac{Z}{a_o}\right)^{3/2}\left(1 - \frac{Zr}{2a_o}\right)e^{-Zr/2a_o}, \qquad (6.138)$$
$$R_{2p} = \frac{1}{2\sqrt{6}}\left(\frac{Z}{a_o}\right)^{5/2} r\,e^{-Zr/2a_o}. \qquad (6.139)$$
6.9 Spin 1/2 Systems

In this section we are going to illustrate the various postulates and concepts we have been developing over the past few weeks. Rather than choosing as examples problems which are pedagogical (such as the particle in a box and its variations) or which are chosen for their mathematical simplicity, we are going to focus upon systems which are physically important. We are going to examine, without much theoretical introduction, the case in which the state space is limited to two states. The quantum mechanical behaviour of these systems can be verified experimentally and, in fact, was and still is used to test various assumptions regarding quantum behaviour.

Recall from undergraduate chemistry that particles such as the electron, proton, and so forth possess an intrinsic angular momentum, $\vec S$, called spin. This is a property which has no analogue in classical mechanics. Without going into all the details of angular momentum and how it gets quantized (don't worry, it's a coming event!), we are going to look at a spin 1/2 system, such as
a neutral paramagnetic Ag atom in its ground electronic state. We are going to dispense with treating the other variables (the nuclear position and momentum, the motion of the electrons, etc.) and focus only upon the spin states of the system.

The paramagnetic Ag atom possesses an electronic magnetic moment, $\vec M$. This magnetic moment can couple to an externally applied magnetic field, $\vec B$, resulting in a net force being applied to the atom. The potential energy for this is
$$W = -\vec M\cdot\vec B. \qquad (6.140)$$
We take this without further proof. We also take without proof that the magnetic moment and the intrinsic angular momentum are proportional,
$$\vec M = \gamma\vec S. \qquad (6.141)$$
The proportionality constant γ is the gyromagnetic ratio of the level under consideration. When the atoms traverse the magnetic field, they are deflected according to how their angular momentum vector is oriented with respect to the applied field. The force on the atom is
$$\vec F = \vec\nabla(\vec M\cdot\vec B), \qquad (6.142)$$
and the torque (moment) relative to the center of the atom is
$$\vec\Gamma = \vec M\times\vec B. \qquad (6.143)$$
Thus, the time evolution of the angular momentum of the particle is
$$\frac{\partial}{\partial t}\vec S = \vec\Gamma, \qquad (6.144)$$
that is to say,
$$\frac{\partial}{\partial t}\vec S = \gamma\vec S\times\vec B. \qquad (6.145)$$
Thus, the rate of change of the angular momentum is perpendicular to $\vec S$, and the angular momentum vector acts like a gyroscope.

We can also show that for a field whose gradient lies along z, the force acts parallel to z and is proportional to $M_z$. Thus, the atoms are deflected according to how their angular momentum vector is oriented with respect to the z axis. Experimentally, we observe two distinct distributions, meaning that a measurement of $M_z$ can give rise to two possible results.

6.9.1 Theoretical Description

We associate an observable, $S_z$, with the experimental observations. This has two eigenvalues, at ±ħ/2, which we shall assume are not degenerate. We write the eigenvectors of $S_z$ as |±⟩, corresponding to
$$S_z|+\rangle = +\frac{\hbar}{2}|+\rangle \qquad (6.146)$$
with
$$S_z|-\rangle = -\frac{\hbar}{2}|-\rangle \qquad (6.147)$$
and
$$\langle +|+\rangle = \langle -|-\rangle = 1, \qquad (6.148)$$
$$\langle +|-\rangle = 0. \qquad (6.149)$$
The closure (completeness) relation is thus
$$|+\rangle\langle +| + |-\rangle\langle -| = 1. \qquad (6.150)$$
The most general state vector is
$$|\psi\rangle = \alpha|+\rangle + \beta|-\rangle \qquad (6.151)$$
with
$$|\alpha|^2 + |\beta|^2 = 1. \qquad (6.152)$$
In the |±⟩ basis, the matrix representation of $S_z$ is diagonal and is written as
$$S_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (6.153)$$

6.9.2 Other Spin Observables

We can also measure $S_x$ and $S_y$. In the |±⟩ basis these are written as
$$S_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad (6.154)$$
and
$$S_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}. \qquad (6.155)$$
You can verify that the eigenvalues of each of these are ±ħ/2.

6.9.3 Evolution of a state

The Hamiltonian for a spin 1/2 particle in a magnetic field $\vec B = B\hat z$ is given by
$$H = -\gamma|B|S_z, \qquad (6.156)$$
Where B is the magnitude <strong>of</strong> the field. This operator is time-independent, thus, we can solve the<br />
Schrodinger Equation and see that the eigenvectors <strong>of</strong> H are also the eigenvectors <strong>of</strong> Sz. (This<br />
the eigenvalues <strong>of</strong> Sz are “good quantum numbers”.) Let’s write ω = −γ|B| so that<br />
H|+〉 = + ¯hω<br />
|+〉 (6.157)<br />
2<br />
H|−〉 = − ¯hω<br />
|−〉 (6.158)<br />
2<br />
Therefore there are two energy levels, E± = ±¯hω/2. The separation is proportional to the<br />
magnetic field. They define a single “Bohr Frequency”.<br />
6.9.4 Larmor Precession

Using the |±〉 states, we can write any arbitrary angular momentum state as

|ψ(0)〉 = cos(θ/2) e^{−iφ/2}|+〉 + sin(θ/2) e^{+iφ/2}|−〉  (6.159)

where θ and φ are the polar angles specifying the direction of the angular momentum vector at the initial time. The time evolution under H is

|ψ(t)〉 = cos(θ/2) e^{−iφ/2} e^{−iE+t/ℏ}|+〉 + sin(θ/2) e^{+iφ/2} e^{−iE−t/ℏ}|−〉,  (6.160)

or, using the values of E+ and E−,

|ψ(t)〉 = cos(θ/2) e^{−i(φ+ωt)/2}|+〉 + sin(θ/2) e^{+i(φ+ωt)/2}|−〉.  (6.161)

In other words, we can write

θ(t) = θ  (6.162)
φ(t) = φ + ωt.  (6.163)

This corresponds to the precession of the angular momentum vector about the z axis at an angular frequency ω. Moreover, the expectation values of Sz, Sx, and Sy can also be computed:

〈Sz(t)〉 = (ℏ/2) cos(θ)  (6.164)
〈Sx(t)〉 = (ℏ/2) sin(θ) cos(φ + ωt)  (6.165)
〈Sy(t)〉 = (ℏ/2) sin(θ) sin(φ + ωt)  (6.166)

Finally, what are the "populations" of the |±〉 states as a function of time?

|〈+|ψ(t)〉|² = cos²(θ/2)  (6.167)
|〈−|ψ(t)〉|² = sin²(θ/2)  (6.168)

Thus, the populations do not change, and neither does the normalization of the state.
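A minimal numerical sketch of this precession (assumed values θ = π/3, φ = 0, and ω = 1, in units where ℏ = 1) propagates the spinor of Eq. (6.159) under H = ωSz and reproduces Eqs. (6.164)-(6.166):

    import numpy as np

    hbar, omega = 1.0, 1.0          # units where hbar = 1; assumed field strength
    theta, phi = np.pi / 3, 0.0     # assumed initial orientation of the spin

    Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

    # Initial spinor, Eq. (6.159)
    psi0 = np.array([np.cos(theta / 2) * np.exp(-1j * phi / 2),
                     np.sin(theta / 2) * np.exp(+1j * phi / 2)])

    for t in np.linspace(0.0, 2 * np.pi / omega, 5):
        # Under H = omega*Sz, |+> and |-> pick up phases exp(-/+ i*omega*t/2)
        psi_t = np.array([np.exp(-1j * omega * t / 2),
                          np.exp(+1j * omega * t / 2)]) * psi0
        sx = np.real(psi_t.conj() @ Sx @ psi_t)
        sz = np.real(psi_t.conj() @ Sz @ psi_t)
        # Compare with Eqs. (6.164) and (6.165)
        print(f"t={t:5.2f}  <Sx>={sx:+.3f} "
              f"(exact {0.5*np.sin(theta)*np.cos(phi+omega*t):+.3f})  "
              f"<Sz>={sz:+.3f} (exact {0.5*np.cos(theta):+.3f})")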
6.10 Problems and Exercises

Exercise 6.6 A molecule (A) with spin angular momentum S = 3/2 decomposes into two products: product (B) with spin 1/2 and product (C) with spin 0. We place ourselves in the rest frame of A, and angular momentum is conserved throughout:

A(3/2) → B(1/2) + C(0).  (6.169)

1. What values can be taken on by the relative orbital angular momentum of the two final products? Show that there is only one possible value if the parity of the relative orbital state is fixed. Would this result remain the same if the spin of A were 3/2?

2. Assume that A is initially in the spin state characterized by the eigenvalue maℏ of its spin component along the z-axis. We know that the final orbital state has a definite parity. Is it possible to determine this parity by measuring the probabilities of finding B in either state |+〉 or state |−〉?
Exercise 6.7 The quadrupole moment of a charge distribution ρ(r) is given by

Qij = (1/e) ∫ (3 xi xj − δij r²) ρ(r) d³r  (6.170)

where the total charge is e = ∫ ρ(r) d³r. The quantum mechanical equivalent of this can be written in terms of the angular momentum operators as

Qij = (1/e) ∫ r² [ (3/2)(Ji Jj + Jj Ji) − δij J² ] ρ(r) d³r.  (6.171)

The quadrupole moment of a stationary state |n, j〉, where n denotes the other (non-angular-momentum) quantum numbers of the system, is given by the expectation value of Qzz in the state with m = j.

1. Evaluate

Qo = 〈Qzz〉 = 〈n j m=j|Qzz|n j m=j〉  (6.172)

in terms of j and 〈r²〉 = 〈nj|r²|nj〉.

2. Can a proton (j = 1/2) have a quadrupole moment? What about a deuteron (j = 1)?

3. Evaluate the matrix element

〈n j m|Qxy|n j′ m′〉.  (6.173)

What transitions are induced by this operator?

4. The quantum mechanical expression for the dipole moment is

po = 〈n j m=j| (r/e) Jz |n j m=j〉.  (6.174)

Can an eigenstate of a Hamiltonian with a centrally symmetric potential have an electric dipole moment?
Exercise 6.8 The σx matrix is given by

σx = [[0, 1], [1, 0]].  (6.175)

Prove that

exp(iασx) = I cos(α) + iσx sin(α)  (6.176)

where α is a constant and I is the unit matrix.

Solution: To solve this you need to expand the exponential. To order α⁴ this is

e^{iασx} = I + iασx − (α²/2)σx² − i(α³/3!)σx³ + (α⁴/4!)σx⁴ + ···  (6.177)

Also, note that σx·σx = I; thus σx^{2n} = I and σx^{2n+1} = σx. Collecting all the real terms and all the imaginary terms gives

e^{iασx} = I ( 1 − α²/2 + α⁴/4! − ··· ) + iσx ( α − α³/3! + ··· ).  (6.178)

These are the series expansions for cos and sin, so

e^{iασx} = I cos(α) + iσx sin(α).  (6.179)
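A quick numerical sanity check of this identity, sketched with SciPy's matrix exponential and an arbitrary test value of α:

    import numpy as np
    from scipy.linalg import expm

    alpha = 0.731  # arbitrary test value
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    lhs = expm(1j * alpha * sigma_x)                         # exp(i*alpha*sigma_x)
    rhs = I2 * np.cos(alpha) + 1j * sigma_x * np.sin(alpha)  # Eq. (6.176)

    print(np.allclose(lhs, rhs))  # True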
Exercise 6.9 Because of the interaction between the proton and the electron in the ground state of the hydrogen atom, the atom has hyperfine structure. The energy matrix is of the form

H = [[A, 0, 0, 0], [0, −A, 2A, 0], [0, 2A, −A, 0], [0, 0, 0, A]]  (6.180)

in the basis defined by

|1〉 = |e+, p+〉  (6.181)
|2〉 = |e+, p−〉  (6.182)
|3〉 = |e−, p+〉  (6.183)
|4〉 = |e−, p−〉  (6.184)

where the notation e+ means that the electron's spin is along the +Z axis and e− means it points along the −Z axis; i.e., |e+, p+〉 is the state in which both the electron spin and the proton spin are along the +Z axis.

1. Find the energies of the stationary states and sketch an energy level diagram relating the energies and the coupling.
2. Express the stationary states as linear combinations of the basis states.

3. A magnetic field of strength B is applied in the +Z direction and couples the |e+, p+〉 and |e−, p−〉 states. Write the new Hamiltonian matrix in the |e±, p±〉 basis. What happens to the energy levels of the stationary states as a result of the coupling? Add this information to the energy level diagram you sketched in part 1.
Exercise 6.10 Consider a spin-1/2 particle with magnetic moment M = γS. The spin space is spanned by the basis of |+〉 and |−〉 vectors, which are eigenvectors of Sz with eigenvalues ±ℏ/2. At time t = 0, the state of the system is given by

|ψ(0)〉 = |+〉.

1. If the observable Sx is measured at time t = 0, what results can be found and with what probabilities?

2. Taking |ψ(0)〉 as the initial state, we apply a magnetic field parallel to the y axis with strength Bo. Calculate the state of the system at some later time t in the {|±〉} basis.

3. Plot as a function of time the expectation values of the observables Sx, Sy, and Sz. What are the values and probabilities? Is there a relation between Bo and t for the result of one of the measurements to be certain? Give a physical interpretation of this condition.

4. Again consider the same initial state; this time, at t = 0 we measure Sy and find +ℏ/2. What is the state vector |ψ(0⁺)〉 immediately after this measurement?

5. Now we take |ψ(0⁺)〉 and apply a uniform time-dependent field parallel to the z-axis. The Hamiltonian operator of the spin is then given by

H(t) = ω(t)Sz.

Assume that prior to t = 0, ω(t) = 0, and that for t > 0 it increases linearly from 0 to ωo at time t = T. Show that for 0 ≤ t ≤ T the state vector can be written as

|ψ(t)〉 = (1/√2) [ e^{iθ(t)}|+〉 + i e^{−iθ(t)}|−〉 ]

where θ(t) is a real function of t (which you need to determine).

6. Finally, at time t = τ > T, we measure Sy. What results can we find and with what probabilities? Determine the relation which must exist between ωo and T in order for us to be sure of the result. Give the physical interpretation.
Chapter 7

Perturbation theory

If you perturbate too much, you will go blind.
– T. A. Albright

In previous lectures, we discussed how, say through the application of an external driving force, the stationary states of a molecule or other quantum mechanical system can become coupled so that the system can make transitions from one state to another. We can write the transition amplitude exactly as

G(i → j, t) = 〈j| exp(−iH(tj − ti)/ℏ)|i〉  (7.1)

where H is the full Hamiltonian of the uncoupled system plus the applied perturbation. Thus, G is the amplitude for the system, prepared in state |i〉 at time ti, to evolve under the applied Hamiltonian for a time tj − ti and be found in state |j〉. In general this is a complicated quantity to calculate, and the coupling is often very complex. In fact, we can determine G exactly for only a few systems: linearly driven harmonic oscillators and coupled two-level systems, to name the more important ones.

In today's lecture and the following lectures, we shall develop a series of well defined and systematic approximations which are widely used in all applications of quantum mechanics. We start with a general solution of the time-independent Schrödinger equation in terms of a perturbation series, which we eventually expand to infinite order. We will then look at what happens if we have a perturbation or coupling which depends explicitly upon time, and derive perhaps the most important rule in quantum mechanics: "Fermi's Golden Rule".¹

¹ During a seminar, the speaker mentioned Fermi's Golden Rule. Prof. Wenzel raised his arm and in German-spiked English chided the speaker that it was in fact HIS golden rule!

7.1 Perturbation Theory

In most cases, it is simply impossible to obtain the exact solution to the Schrödinger equation. In fact, the vast majority of problems which are of physical interest cannot be solved exactly, and one is forced to make a series of well posed approximations. The simplest approximation is to say that the system we want to solve looks a lot like a much simpler system which we can
solve, plus some additional complexity (which hopefully is quite small). In other words, we want to be able to write our total Hamiltonian as

H = Ho + V

where Ho represents the part of the problem we can solve exactly and V some extra part which we cannot. This we take as a correction, or perturbation, to the exactly solvable problem.

Perturbation theory can be formulated in a variety of ways; we begin with what is typically termed Rayleigh-Schrödinger perturbation theory, which is the most commonly used approach. Let Ho|φn〉 = Wn|φn〉 and (Ho + λV)|ψn〉 = En|ψn〉 be the Schrödinger equations for the uncoupled and perturbed systems. In what follows, we take λ as a small parameter and expand the exact energy in terms of this parameter. That is, we write En as a function of λ:

En(λ) = En(0) + λEn(1) + λ²En(2) + ···  (7.2)

Likewise, we can expand the exact wavefunction in terms of λ:

|ψn〉 = |ψn(0)〉 + λ|ψn(1)〉 + λ²|ψn(2)〉 + ···  (7.3)

Since we require that |ψn〉 be a solution of the exact Hamiltonian with energy En,

H|ψn〉 = (Ho + λV) ( |ψn(0)〉 + λ|ψn(1)〉 + λ²|ψn(2)〉 + ··· )  (7.4)
= ( En(0) + λEn(1) + λ²En(2) + ··· ) ( |ψn(0)〉 + λ|ψn(1)〉 + λ²|ψn(2)〉 + ··· ).  (7.5)

Now, we collect terms order by order in λ:

• λ⁰: Ho|ψn(0)〉 = En(0)|ψn(0)〉

• λ¹: Ho|ψn(1)〉 + V|ψn(0)〉 = En(0)|ψn(1)〉 + En(1)|ψn(0)〉

• λ²: Ho|ψn(2)〉 + V|ψn(1)〉 = En(0)|ψn(2)〉 + En(1)|ψn(1)〉 + En(2)|ψn(0)〉

and so on.

The λ⁰ problem is just the unperturbed problem we can solve. Taking the λ¹ terms and multiplying by 〈ψn(0)| we obtain

〈ψn(0)|Ho|ψn(1)〉 + 〈ψn(0)|V|ψn(0)〉 = En(0)〈ψn(0)|ψn(1)〉 + En(1)〈ψn(0)|ψn(0)〉.  (7.6)

Since the Ho term on the left cancels the first term on the right, we obtain the first-order correction to the nth eigenenergy:

En(1) = 〈ψn(0)|V|ψn(0)〉.

Note that we have also taken 〈ψn(1)|ψn(0)〉 = 0. This is easy to check by performing a similar calculation, multiplying instead by 〈ψm(0)| for m ≠ n and noting that the unperturbed states are orthogonal, 〈ψn(0)|ψm(0)〉 = 0:

〈ψm(0)|Ho|ψn(1)〉 + 〈ψm(0)|V|ψn(0)〉 = En(0)〈ψm(0)|ψn(1)〉.  (7.7)
Rearranging things a bit, one obtains an expression for the overlap between the unperturbed and perturbed states:

〈ψm(0)|ψn(1)〉 = 〈ψm(0)|V|ψn(0)〉 / (En(0) − Em(0)).  (7.8)

Now, we use the resolution of the identity to project the perturbed state onto the unperturbed states:

|ψn(1)〉 = Σm |ψm(0)〉〈ψm(0)|ψn(1)〉 = Σ_{m≠n} [ 〈ψm(0)|V|ψn(0)〉 / (En(0) − Em(0)) ] |ψm(0)〉  (7.9)

where we explicitly exclude the m = n term to avoid the singularity. Thus, the first-order correction to the wavefunction is

|ψn〉 ≈ |ψn(0)〉 + Σ_{m≠n} [ 〈ψm(0)|V|ψn(0)〉 / (En(0) − Em(0)) ] |ψm(0)〉.  (7.10)

Since this sum contains no |ψn(0)〉 component, it also justifies our assumption above.
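As a concrete sketch, consider an assumed four-level model with a diagonal Ho and a small Hermitian coupling V; the snippet below compares the first-order energies Wn + λ〈n|V|n〉 of Eq. (7.6), together with the standard second-order sum (which reappears in Sec. 7.3), against exact diagonalization of Ho + λV:

    import numpy as np

    # Assumed model: diagonal unperturbed Hamiltonian plus a small symmetric coupling
    W = np.array([0.0, 1.0, 2.5, 4.0])          # unperturbed energies W_n
    H0 = np.diag(W)
    rng = np.random.default_rng(0)
    V = rng.normal(scale=0.05, size=(4, 4))
    V = 0.5 * (V + V.T)                          # make the perturbation Hermitian

    lam = 1.0
    E_exact = np.linalg.eigvalsh(H0 + lam * V)

    # First order: E_n ~ W_n + <n|V|n>
    E_pt1 = W + lam * np.diag(V)

    # Second order: add sum_{m != n} |V_mn|^2 / (W_n - W_m)
    E_pt2 = E_pt1.copy()
    for n in range(4):
        for m in range(4):
            if m != n:
                E_pt2[n] += lam**2 * abs(V[m, n])**2 / (W[n] - W[m])

    print("exact     :", np.round(np.sort(E_exact), 5))
    print("1st order :", np.round(np.sort(E_pt1), 5))
    print("2nd order :", np.round(np.sort(E_pt2), 5))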
7.2 Two level systems subject to a perturbation

Let's say that in the |±〉 basis our total Hamiltonian is given by

H = ωSz + V Sx.  (7.11)

In matrix form (dropping a common factor of ℏ/2):

H = [[ω, V], [V, −ω]].  (7.12)

Diagonalization of the matrix is easy; the eigenvalues are

E+ = +√(ω² + V²)  (7.13)
E− = −√(ω² + V²).  (7.14)

We can also determine the eigenvectors:

|φ+〉 = cos(θ/2)|+〉 + sin(θ/2)|−〉  (7.15)
|φ−〉 = −sin(θ/2)|+〉 + cos(θ/2)|−〉  (7.16)

where

tan θ = |V|/ω.  (7.17)

For constant coupling, the unperturbed energy gap 2ω between the coupled states determines how strongly the states are mixed as a result of the coupling; a short script for plotting the splitting as a function of the unperturbed gap is sketched below.
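A minimal matplotlib sketch of that plot (assuming a fixed coupling V = 1 and scanning the unperturbed gap parameter ω):

    import numpy as np
    import matplotlib.pyplot as plt

    V = 1.0                                 # assumed constant coupling
    omega = np.linspace(0.0, 5.0, 200)      # unperturbed gap parameter

    # Exact eigenvalues of [[omega, V], [V, -omega]], Eqs. (7.13)-(7.14)
    E_plus = np.sqrt(omega**2 + V**2)
    E_minus = -np.sqrt(omega**2 + V**2)

    plt.plot(omega, E_plus, label="E+")
    plt.plot(omega, E_minus, label="E-")
    plt.plot(omega, omega, "k--", label="unperturbed +w")
    plt.plot(omega, -omega, "k--", label="unperturbed -w")
    plt.xlabel("unperturbed gap parameter w")
    plt.ylabel("energy")
    plt.legend()
    plt.show()   # the levels repel; the minimum splitting 2|V| occurs at w = 0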
7.2.1 Expansion of Energies in terms of the coupling

We can expand the exact expressions for E± in terms of the coupling, assuming that the coupling is small compared to ω. To leading order in the coupling,

E+ = +ω( 1 + (1/2)|V/ω|² + ··· )  (7.18)
E− = −ω( 1 + (1/2)|V/ω|² + ··· ).  (7.19)

On the other hand, when the two unperturbed states are degenerate we cannot do this expansion, and

E+ = +|V|  (7.20)
E− = −|V|.  (7.21)

We can do the same trick on the wavefunctions. When ω ≪ |V| (strong coupling), θ ≈ π/2, and

|ψ+〉 = (1/√2)( |+〉 + |−〉 )  (7.22)
|ψ−〉 = (1/√2)( −|+〉 + |−〉 ).  (7.23)

In the weak coupling regime, we have to first order in the coupling

|ψ+〉 ≈ |+〉 + (|V|/2ω)|−〉  (7.24)
|ψ−〉 ≈ |−〉 − (|V|/2ω)|+〉.  (7.25)

In other words, in the weak coupling regime the perturbed states look very much like the unperturbed states, whereas in the region of strong mixing they are an equal admixture of the unperturbed states.
7.2.2 Dipole molecule in a homogeneous electric field

Here we take the example of ammonia inversion in the presence of an electric field. From the problem sets, we know that the NH3 molecule can tunnel between two equivalent C3v configurations and that, as a result of the coupling between the two configurations, the unperturbed energy levels Eo are split by an energy A. Defining the unperturbed states as |1〉 and |2〉, we can define the tunneling Hamiltonian as

H = [[Eo, −A], [−A, Eo]]  (7.26)

or, in terms of Pauli matrices,

H = Eo σo − A σx.  (7.27)

Taking |ψ〉 to be the solution of the time-dependent Schrödinger equation

H|ψ(t)〉 = iℏ|ψ̇〉,  (7.28)

we can insert the identity |1〉〈1| + |2〉〈2| = 1 and re-write this as

iℏ ċ1 = Eo c1 − A c2
iℏ ċ2 = Eo c2 − A c1  (7.29)

where c1 = 〈1|ψ〉 and c2 = 〈2|ψ〉 are the projections of the time-evolving wavefunction onto the two basis states. Adding and subtracting these two equations yields two new equations for the time evolution:

iℏ ċ+ = (Eo − A) c+
iℏ ċ− = (Eo + A) c−  (7.30)

where c± = c1 ± c2 (we'll normalize this later). These two new equations are easy to solve:

c±(t) = A± exp[ −(i/ℏ)(Eo ∓ A)t ].

Thus,

c1(t) = (1/2) e^{−iEot/ℏ} [ A+ e^{+iAt/ℏ} + A− e^{−iAt/ℏ} ]

and

c2(t) = (1/2) e^{−iEot/ℏ} [ A+ e^{+iAt/ℏ} − A− e^{−iAt/ℏ} ].

Now we have to specify an initial condition. Let's take c1(0) = 1 and c2(0) = 0, corresponding to the system starting off in the |1〉 state. For this initial condition, A+ = A− = 1 and

c1(t) = e^{−iEot/ℏ} cos(At/ℏ)

and

c2(t) = i e^{−iEot/ℏ} sin(At/ℏ).
So the time evolution of the state vector is given by

|ψ(t)〉 = e^{−iEot/ℏ} [ cos(At/ℏ)|1〉 + i sin(At/ℏ)|2〉 ].

So, left alone, the molecule will oscillate between the two configurations at the tunneling frequency, A/ℏ.

Now we apply an electric field. When the dipole moment of the molecule is aligned parallel with the field, the molecule is in a lower energy configuration, whereas in the anti-parallel case the system is in a higher energy configuration. Denote the contribution to the Hamiltonian from the electric field as

H′ = μe E σz.

The total Hamiltonian in the {|1〉, |2〉} basis is thus

H = [[Eo + μeE, −A], [−A, Eo − μeE]].  (7.31)

Solving the eigenvalue problem

|H − λI| = 0

we find two eigenvalues:

λ± = Eo ± √(A² + μe²E²).

These are the exact eigenvalues. In Fig. 7.1 we show the variation of the energy levels as a function of the field strength.

Figure 7.1: Variation of the energy level splitting as a function of the applied field for an ammonia molecule in an electric field.
Weak field limit

If μeE/A ≪ 1, then we can use the binomial expansion

√(1 + x²) ≈ 1 + x²/2 + ···

to write

√(A² + μe²E²) = A( 1 + (μeE/A)² )^{1/2} ≈ A( 1 + (1/2)(μeE/A)² ).  (7.32)

Thus, in the weak field limit the system can still tunnel between configurations, and the energy levels are given by

E± ≈ (Eo ∓ A) ∓ μe²E²/(2A).
To understand this a bit further, let us use perturbation theory in the regime where the tunneling dominates and treat the external field as the perturbation. The unperturbed Hamiltonian can be diagonalized by taking symmetric and anti-symmetric combinations of the |1〉 and |2〉 basis functions. This is exactly what we did above with the time-dependent coefficients. Here the stationary states are

|±〉 = (1/√2)( |1〉 ± |2〉 )

with energies E± = Eo ∓ A. So in the |±〉 basis, the unperturbed Hamiltonian becomes

H = [[Eo − A, 0], [0, Eo + A]].

The first order correction to the ground state energy is given by

E(1) = E(0) + 〈+|H′|+〉.

To compute 〈+|H′|+〉 we need to transform H′ from the uncoupled {|1〉, |2〉} basis to the new coupled |±〉 basis. This is accomplished by inserting the identity on either side of H′ and collecting terms:

〈+|H′|+〉 = 〈+|( |1〉〈1| + |2〉〈2| ) H′ ( |1〉〈1| + |2〉〈2| )|+〉  (7.33)
= (1/2)( 〈1| + 〈2| ) H′ ( |1〉 + |2〉 )  (7.34)
= 0.  (7.35)

Likewise 〈−|H′|−〉 = 0. Thus, the first order corrections vanish. However, since 〈+|H′|−〉 = μeE does not vanish, we can use second order perturbation theory to find the energy correction:

W(2)+ = Σ_{m≠i} H′im H′mi / (Ei − Em)  (7.36)
= 〈+|H′|−〉〈−|H′|+〉 / (E(0)+ − E(0)−)  (7.37)
= (μeE)² / (Eo − A − Eo − A)  (7.38)
= −μe²E²/(2A).  (7.39)

Similarly, W(2)− = +μe²E²/(2A). So we get the same variation as we estimated above by expanding the exact energy levels in the weak field limit.

Now let us examine the wavefunctions. Recall that the first order correction to the eigenstate is given by

|+(1)〉 = [ 〈−|H′|+〉 / (E+ − E−) ] |−〉  (7.40)
= −(μeE/2A)|−〉.  (7.41)
Thus,

|+〉 ≈ |+(0)〉 − (μeE/2A)|−〉  (7.42)
|−〉 ≈ |−(0)〉 + (μeE/2A)|+〉.  (7.43)

So we see that by turning on the field, we begin to mix the two tunneling states. However, since we have assumed μeE/A ≪ 1, the final state is not too unlike our initial tunneling states.
Strong field limit

In the strong field limit, we expand the square root assuming (A/μeE)² ≪ 1:

√(A² + μe²E²) = μeE ( (A/μeE)² + 1 )^{1/2}
≈ μeE ( 1 + (1/2)(A/μeE)² + ··· )
= μeE + A²/(2μeE).  (7.44)

For very strong fields, the first term dominates and the energy splitting becomes linear in the field strength. In this limit, the tunneling has been effectively suppressed.

Let us analyze this limit using perturbation theory. Here we work in the {|1〉, |2〉} basis and treat the tunneling as the perturbation. Since the electric field part of the Hamiltonian is diagonal in the {|1〉, |2〉} basis, our unperturbed strong-field Hamiltonian is simply

H = [[Eo + μeE, 0], [0, Eo − μeE]]  (7.45)

and the perturbation is the tunneling term. As before, the first-order corrections to the energy vanish and we must resort to second order perturbation theory to get the lowest order energy correction. The results are

W(2) = ±A²/(2μeE),

which is exactly what we obtained by expanding the exact eigenenergies above. Likewise, the lowest-order corrections to the state vectors are

|1〉 ≈ |1(0)〉 − (A/2μeE)|2(0)〉  (7.46)
|2〉 ≈ |2(0)〉 + (A/2μeE)|1(0)〉.  (7.47)

So, for large E the second order correction to the energy vanishes, the correction to the wavefunction vanishes, and we are left with the unperturbed (i.e. non-tunneling) states.
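A short numerical sketch (with assumed values Eo = 0, A = 1, μe = 1 in arbitrary units) comparing the exact eigenvalues of Eq. (7.31) with the weak-field and strong-field perturbative estimates:

    import numpy as np

    E0, A, mu = 0.0, 1.0, 1.0      # assumed parameters (arbitrary units)

    def exact_levels(field):
        H = np.array([[E0 + mu * field, -A],
                      [-A, E0 - mu * field]])
        return np.linalg.eigvalsh(H)

    for field in (0.1, 10.0):       # weak and strong field cases
        lo, hi = exact_levels(field)
        # Weak field: E ~ (E0 -/+ A) -/+ mu^2 E^2 / (2A)
        weak = (E0 - A - (mu * field)**2 / (2 * A),
                E0 + A + (mu * field)**2 / (2 * A))
        # Strong field: E ~ E0 +/- (mu E + A^2 / (2 mu E))
        strong = (E0 - mu * field - A**2 / (2 * mu * field),
                  E0 + mu * field + A**2 / (2 * mu * field))
        print(f"field={field:5.1f}  exact=({lo:+.4f}, {hi:+.4f})"
              f"  weak-PT=({weak[0]:+.4f}, {weak[1]:+.4f})"
              f"  strong-PT=({strong[0]:+.4f}, {strong[1]:+.4f})")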
7.3 Dyson Expansion of the Schrödinger Equation

The Rayleigh-Schrödinger approach is useful for discrete spectra. However, it is not very useful for scattering problems or for systems with continuous spectra. On the other hand, the Dyson expansion of the wavefunction can be applied to both cases. Its development is similar to the Rayleigh-Schrödinger case. We begin by writing the Schrödinger equation as usual:

(Ho + V)|ψ〉 = E|ψ〉  (7.48)

where we define |φ〉 and W to be the eigenvectors and eigenvalues of the part of the full problem we can easily solve, which we shall call the "uncoupled" problem:

Ho|φ〉 = W|φ〉.  (7.49)

We want to write the solution of the fully coupled problem in terms of the solution of the uncoupled problem. First we note that

(E − Ho)|ψ〉 = V|ψ〉.  (7.50)

Using the "uncoupled" problem as a "homogeneous" solution and the coupling as an inhomogeneous term, we can solve the Schrödinger equation and obtain |ψ〉 EXACTLY as

|ψ〉 = |φ〉 + [1/(E − Ho)] V|ψ〉.  (7.51)

This may seem a bit circular, but we can iterate the solution:

|ψ〉 = |φ〉 + [1/(E − Ho)] V|φ〉 + [1/(E − Ho)] V [1/(E − Ho)] V|ψ〉,  (7.52)

or, out to all orders,

|ψ〉 = |φ〉 + Σ_{n=1}^{∞} ( [1/(E − Ho)] V )ⁿ |φ〉.  (7.53)

Assuming that the series converges rapidly (true for V ≪ Ho), we can truncate it at successive orders to obtain systematic approximations to the exact wavefunction.
Expanding the resolvent in the unperturbed basis, the first-order truncation is

|ψn(1)〉 = |φn〉 + Σ_{m≠n} [ 1/(Wn − Wm) ] 〈φm|V|φn〉 |φm〉.  (7.58)

Likewise, the second-order truncation is

|ψn(2)〉 = |ψn(1)〉 + Σ_{l,m≠n} [ 1/((Wn − Wl)(Wn − Wm)) ] Vlm Vmn |φl〉  (7.59)

where

Vlm = 〈φl|V|φm〉  (7.60)

is the matrix element of the coupling in the uncoupled basis. These last two expressions are the first- and second-order corrections to the wavefunction.
Note two things. First, we can actually sum the perturbation series exactly by noting that it has the form of a geometric progression, which for x < 1 converges uniformly:

1/(1 − x) = 1 + x + x² + ··· = Σ_{n=0}^{∞} xⁿ.  (7.61)

Thus, we can write

|ψ〉 = Σ_{n=0}^{∞} ( [1/(E − Ho)] V )ⁿ |φ〉  (7.62)
= Σ_{n=0}^{∞} (Go V)ⁿ |φ〉  (7.63)
= [ 1/(1 − Go V) ] |φ〉  (7.64)

where Go = (E − Ho)⁻¹ (this is the "time-independent" form of the propagator, or Green's function, for the uncoupled system). This analysis is particularly powerful in deriving the propagator for the fully coupled problem.
We now calculate the first-order and second-order corrections to the energy of the system. To do so, we make use of the wavefunctions we just derived and write

En(1) = 〈ψn(0)|H|ψn(0)〉 = Wn + 〈φn|V|φn〉 = Wn + Vnn.  (7.65)

So the lowest order correction to the energy is simply the matrix element of the perturbation in the uncoupled or unperturbed basis. That was easy. What about the next order correction? Same procedure as before (assuming the states are normalized):

En(2) = 〈ψn(1)|H|ψn(1)〉
= 〈φn|H|φn〉 + Σ_{m≠n} 〈φn|H|φm〉 [ 1/(Wn − Wm) ] 〈φm|V|φn〉 + O[V³]
= Wn + Vnn + Σ_{m≠n} |Vnm|² / (Wn − Wm).  (7.66)

Notice that we avoid the case m = n, which would make the denominator zero and lead to an infinity. This must be avoided: the so-called "degenerate case" must be handled via explicit matrix diagonalization. Closed forms can be obtained easily for the doubly degenerate case.

Also note that the successive approximations to the energy require one less level of approximation to the wavefunction. Thus, second-order energy corrections are obtained from first-order wavefunctions.
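A small numerical sketch of the geometric-series result, Eq. (7.64), for an assumed 3×3 model: summing (GoV)ⁿ|φ〉 term by term and comparing with the closed form (1 − GoV)⁻¹|φ〉. The resolvent is evaluated at the unperturbed energy and restricted to the m ≠ n subspace, mirroring the restricted sums above.

    import numpy as np

    # Assumed model: unperturbed energies W and a weak coupling V
    W = np.array([0.0, 1.0, 2.0])
    V = 0.05 * np.array([[0.0, 1.0, 0.5],
                         [1.0, 0.0, 1.0],
                         [0.5, 1.0, 0.0]])

    n = 0                       # build the perturbed state on |phi_0>
    phi = np.eye(3)[n]
    E = W[n]                    # evaluate the resolvent at the unperturbed energy

    # Go = (E - H0)^(-1), restricted to the space orthogonal to |phi_n>
    Go = np.zeros((3, 3))
    for m in range(3):
        if m != n:
            Go[m, m] = 1.0 / (E - W[m])

    # Sum the Dyson / Born series  |psi> = sum_k (Go V)^k |phi>
    psi_series = np.zeros(3)
    term = phi.copy()
    for _ in range(50):
        psi_series += term
        term = Go @ V @ term

    # Closed form |psi> = (1 - Go V)^(-1) |phi>, Eq. (7.64)
    psi_closed = np.linalg.solve(np.eye(3) - Go @ V, phi)

    print(np.allclose(psi_series, psi_closed))  # True when the series converges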
7.4 Van der Waals forces

7.4.1 Origin of long-ranged attractions between atoms and molecules

One of the underlying principles in chemistry is that molecules at long range are attracted to one another. This is clearly true for polar and oppositely charged species. It is also true for non-polar, neutral species such as methane and the noble gases. These forces are due to polarization forces, or van der Waals forces; the resulting force is attractive and decreases as 1/R⁷, i.e. the attractive part of the potential goes as −1/R⁶. In this section we will use perturbation theory to understand the origin of this force, restricting our attention to the interaction between two hydrogen atoms separated by some distance R.

Let us take the two atoms to be motionless and separated by a distance R, with n̂ the unit vector pointing from atom A to atom B. Let ra be the vector connecting nucleus A to its electron, and likewise rb for atom B. Each atom then has an instantaneous electric dipole moment

μa = q ra  (7.67)
μb = q rb.  (7.68)

We will assume that R ≫ ra, rb, so that the electronic orbitals on the two atoms do not overlap.

Atom A creates an electrostatic potential, U, with which the charges of atom B can interact; this produces an interaction energy W. Since both atoms are neutral, the most important contribution to the interaction comes from the dipole-dipole term: the dipole of A interacts with the electric field E = −∇U generated by the dipole of B, and vice versa. To calculate the dipole-dipole interaction, we start with the expression for the electrostatic potential created by μa at B:

U(R) = (1/4πɛo) (μa · R)/R³.

Thus,

E = −∇U = −(q/4πɛo)(1/R³) [ ra − 3(ra · n̂)n̂ ],

and the dipole-dipole interaction energy is

W = −μb · E = (e²/R³) [ ra · rb − 3(ra · n̂)(rb · n̂) ]  (7.70)
where e² = q²/4πɛo. Now let's set the z axis to be along n̂ so that we can write

W = (e²/R³)( xa xb + ya yb − 2 za zb ).

This will be our perturbing potential, which we add to the total Hamiltonian:

H = Ha + Hb + W

where Ha and Hb are the unperturbed Hamiltonians of the two atoms. Let's take, for example, two hydrogen atoms, each in the 1s state. The unperturbed system has energy

(Ha + Hb)|1sa; 1sb〉 = (Ea + Eb)|1sa; 1sb〉 = −2EI|1sa; 1sb〉,

where EI is the ionization energy of the hydrogen 1s state (EI = 13.6 eV). The first order correction vanishes since it involves integrals over odd functions. We could have anticipated this: the 1s orbitals are spatially isotropic, so the time-averaged value of each dipole moment is zero. So, we have to look to the second order correction.

The second order energy correction is

E(2) = Σ_{nlm} Σ_{n′l′m′} |〈nlm; n′l′m′|W|1sa; 1sb〉|² / (−2EI − En − En′)

where the summation excludes the |1sa; 1sb〉 state itself. Since W ∝ 1/R³ and the denominator is negative, we can write

E(2) = −C/R⁶,

which explains the origin of the 1/R⁶ attraction.

Now we evaluate the proportionality constant C. Written explicitly,

C = e⁴ Σ_{nlm} Σ_{n′l′m′} |〈nlm; n′l′m′|(xaxb + yayb − 2zazb)|1sa; 1sb〉|² / (2EI + En + En′).  (7.71)

Since n, n′ ≥ 2 and |En| = EI/n² < EI, we can replace En and En′ by 0 without appreciable error. Now we can use the resolution of the identity

1 = Σ_{nlm} Σ_{n′l′m′} |nlm; n′l′m′〉〈nlm; n′l′m′|

to remove the summation, and we get

C = (e⁴/2EI) 〈1sa; 1sb|(xaxb + yayb − 2zazb)²|1sa; 1sb〉  (7.72)

where EI is the ionization potential of the 1s state (EI = 1/2 in atomic units). Surprisingly, this is simple to evaluate, since we can use symmetry to our advantage. The 1s orbitals are spherically symmetric, so any cross term of the sort
〈1sa|xa ya|1sa〉 vanishes. This leaves only terms of the sort

〈1s|x²|1s〉,

all of which are equal to 1/3 of the mean value of 〈r²〉 = 〈x² + y² + z²〉. Thus,

C = 6 (e⁴/2EI) |〈1s|r²/3|1s〉|² = 6 (e⁴/2EI) ao⁴ = 6 e² ao⁵,

where ao is the Bohr radius and we have used EI = e²/2ao. Thus,

E(2) = −6 e² ao⁵ / R⁶.

What does all this mean? We stated at the beginning that the average dipole moment of a hydrogen 1s atom is zero. That does not mean that every single measurement of μa will yield zero. What it means is that the probability of finding the atom with a given dipole moment is the same as that of finding the dipole vector pointed in the opposite direction; adding the two together produces a net zero dipole moment. So it is the fluctuations about the mean which give the atom an instantaneous dipole field. Moreover, the fluctuations in A are independent of the fluctuations in B, so the first order effect must be zero since the average interaction is zero.

Just because the fluctuations are independent does not mean they are not correlated. Consider the field generated by A as felt by B. This field is due to the fluctuating dipole at A. It induces a dipole at B, and this induced dipole field is in turn felt by A. As a result, the fluctuations become correlated, which explains why this is a second order effect. In a sense, A interacts with its own dipole field through "reflection" off B.
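A small symbolic check of the matrix element used above, assuming the normalized hydrogen 1s orbital ψ1s = e^{−r/ao}/√(πao³): it confirms 〈1s|x²|1s〉 = 〈r²〉/3 = ao², from which C = 6(e⁴/2EI)ao⁴ = 6e²ao⁵ follows with EI = e²/2ao.

    import sympy as sp

    r, theta, phi, a0 = sp.symbols('r theta phi a0', positive=True)

    # |psi_1s|^2 for psi_1s = exp(-r/a0) / sqrt(pi a0^3)
    psi2 = sp.exp(-2 * r / a0) / (sp.pi * a0**3)
    dV = r**2 * sp.sin(theta)                      # spherical volume element
    x2 = (r * sp.sin(theta) * sp.cos(phi))**2      # x^2 in spherical coordinates

    mean_x2 = sp.integrate(psi2 * x2 * dV,
                           (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
    mean_r2 = sp.integrate(psi2 * r**2 * dV,
                           (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))

    print(sp.simplify(mean_x2))   # a0**2
    print(sp.simplify(mean_r2))   # 3*a0**2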
7.4.2 Attraction between an atom and a conducting surface

The interaction between an atom or molecule and a surface is a fundamental physical process in surface chemistry. In this example, we will use perturbation theory to understand the long-ranged attraction between an atom (again taking a hydrogen 1s atom as our species for simplicity) and a conducting surface. We take the z axis to be normal to the surface and assume that the atom sits at an altitude d above the surface which is much larger than atomic dimensions. Furthermore, we assume that the surface is a metallic conductor and we ignore any atomic-level detail of the surface. Consequently, the atom interacts only with its dipole image on the opposite side of the surface.

We can use the same dipole-dipole interaction as before with the following substitutions:

e² −→ −e²  (7.73)
R −→ 2d  (7.74)
xb −→ x′a = xa  (7.75)
yb −→ y′a = ya  (7.76)
zb −→ z′a = −za  (7.77)

where the sign change reflects the sign difference of the image charges. So we get

W = −(e²/8d³)( xa² + ya² + 2za² )

as the interaction between the dipole and its image. Taking the atom to be in the 1s ground state, the first order term is non-zero:

E(1) = 〈1s|W|1s〉.

Again, using spherical symmetry to our advantage,

E(1) = −(e²/8d³) 〈1s|(x² + y² + 2z²)|1s〉 = −(e²/8d³)(4ao²) = −e²ao²/(2d³).

Thus the atom is attracted to the wall with an interaction energy which varies as 1/d³. This is a first order effect because there is perfect correlation between the dipole and its image.
7.5 Perturbations Acting over a Finite amount of Time

Perhaps the most important application of perturbation theory is to cases in which the coupling acts for a finite amount of time, such as the coupling of a molecule to a laser field: the field impinges upon the molecule at some instant in time and is turned off some time later. Alternatively, we can consider cases in which the perturbation is slowly ramped up from 0 to a final value.

7.5.1 General form of time-dependent perturbation theory

In general, we can find the time evolution of the coefficients by solving

iℏ ċn(t) = En cn(t) + Σk λWnk(t) ck(t)  (7.78)

where λWnk are the matrix elements of the perturbation. Now, let's write cn(t) as

cn(t) = bn(t) e^{−iEnt/ℏ}

and assume that bn(t) changes slowly in time. Thus, we can write a set of new equations for the bn(t):

iℏ ḃn = Σk e^{iωnk t} λWnk bk(t)  (7.79)

where ωnk = (En − Ek)/ℏ. Now we assume that the bn(t) can be written as a perturbation expansion,

bn(t) = bn(0)(t) + λ bn(1)(t) + λ² bn(2)(t) + ···

where, as before, λ is a dimensionless order-counting parameter. Taking the time derivative and equating powers of λ, one finds

iℏ ḃn(i) = Σk e^{iωnk t} Wnk(t) bk(i−1)(t)

and, at zeroth order, ḃn(0)(t) = 0, i.e. the zeroth-order coefficients are constant in time.
Now, we calculate the first order solution. For t < 0 the system is assumed to be in some well-defined initial state, |φi〉. Thus, only one bn(t < 0) coefficient can be non-zero, and it must be independent of time since the coupling has not yet been turned on. Thus,

bn(t = 0) = δni.

At t = 0 we turn on the coupling, and λW jumps from 0 to λW(0). This must hold at every order in λ, so we immediately get

bn(0)(0) = δni  (7.80)
bn(i)(0) = 0 for i ≥ 1.  (7.81)

Consequently, for all t > 0, bn(0)(t) = δni, which completely specifies the zeroth order result. This also gives us the first order equation,

iℏ ḃn(1)(t) = Σk e^{iωnk t} Wnk δki = e^{iωni t} Wni(t),  (7.82)

which is simple to integrate:

bn(1)(t) = −(i/ℏ) ∫₀ᵗ e^{iωni s} Wni(s) ds.

Thus, our perturbed wavefunction is written as

|ψ(t)〉 = e^{−iEit/ℏ}|φi〉 + Σ_{n≠i} λ bn(1)(t) e^{−iEnt/ℏ}|φn〉.  (7.83)
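A minimal numerical sketch of this first-order result for an assumed two-level model with a constant coupling W suddenly switched on at t = 0 (in units where ℏ = 1), compared against exact propagation of the Schrödinger equation:

    import numpy as np
    from scipy.linalg import expm

    # Assumed two-level model: unperturbed energies 0 and 1, constant coupling W
    E1, E2, W = 0.0, 1.0, 0.05
    H = np.array([[E1, W], [W, E2]], dtype=complex)
    omega21 = E2 - E1

    for t in (1.0, 5.0, 20.0):
        # Exact: propagate |phi_1> with U(t) = exp(-iHt)
        psi_t = expm(-1j * H * t) @ np.array([1.0, 0.0])
        P_exact = abs(psi_t[1])**2

        # First order: b2(t) = -i * integral_0^t exp(i*w21*s) * W ds
        b2 = -1j * W * (np.exp(1j * omega21 * t) - 1.0) / (1j * omega21)
        P_pt = abs(b2)**2

        print(f"t={t:5.1f}   exact P(1->2)={P_exact:.5f}   1st-order={P_pt:.5f}")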
7.5.2 Fermi's Golden Rule

Let's consider the time evolution of a state under a small perturbation which varies very slowly in time. The expansion coefficients of the state in some basis of eigenstates of the unperturbed Hamiltonian evolve according to

iℏ ċs(t) = Σn Hsn cn(t)  (7.84)

where Hsn is the matrix element of the full Hamiltonian in that basis,

Hsn = Es δns + Vsn(t).  (7.85)

Assuming that V(t) is slowly varying and that Vsn ≪ Es, we write cs(t) = As(t) e^{−iEst/ℏ} with As(t) slowly varying, so that

iℏ ċs(t) = Es cs(t) + Σn Vsn An(t) e^{−iEnt/ℏ}.  (7.89)

We now proceed to solve this equation via a series of well-defined approximations. Our first assumption is that Vsn is weak enough that, to first order, only the initial-state amplitude An(t) ≈ 1 contributes on the right-hand side. For a harmonic perturbation switched on at t = 0, integrating the resulting first-order equation for As(t) gives the transition probability

Pns(t) = |As(t)|² = 4|〈s|V̂|n〉|² [ sin((Es − En − ℏω)t/(2ℏ)) / (Es − En − ℏω) ]².  (7.102)
This is the general form for a harmonic perturbation. The function

sin²(ax)/x²  (7.103)

is a sinc-squared lineshape, sharply peaked about x = 0 (corresponding to Es − En − ℏω = 0). Thus, the transition is only significant when the energy difference matches the frequency of the applied perturbation. As t → ∞ (with a = t/(2ℏ)), the peak becomes very sharp and approaches a δ-function. The width of the peak is

∆x = 2πℏ/t.  (7.104)

Thus, the longer we observe a transition, the better resolved the measurement becomes. This has profound implications for making measurements on metastable systems.

We now have an expression for the transition probability between two discrete states. We have not yet taken into account the fact that there may be more than one final state close by, nor have we accounted for the finite "width" of the perturbation (the instrument function). When there are many states close to the transition, we must take into account the density of nearby states. Thus, we define

ρ(E) = ∂N(E)/∂E  (7.105)

as the density of states close to energy E, where N(E) is the number of states with energy below E.
Thus, the total transition probability out of the original state is

P(t) = Σs Pns(t) = Σs |As(t)|².  (7.106)

To go from a discrete sum to an integral, we replace

Σs → ∫ dE (dN/dE) = ∫ ρ(E) dE.  (7.107)

Thus,

P(t) = ∫ dE |As(t)|² ρ(E).  (7.108)

Since |As(t)|² is sharply peaked at Es = En + ℏω, we can treat ρ(E) and V̂sn as constants and write

P(t) = 4|〈s|V̂|n〉|² ρ(Es) ∫ [ sin²(ax)/x² ] dx.  (7.109)

Taking the limits of integration to ±∞,

∫_{−∞}^{+∞} dx sin²(ax)/x² = πa = πt/(2ℏ).  (7.110)
In other words,

P(t) = (2πt/ℏ) |〈s|V̂|n〉|² ρ(Es).  (7.111)

We can also define a transition rate as

R(t) = P(t)/t,  (7.112)

so the Golden Rule transition rate from state n to state s is

Rn→s = (2π/ℏ) |〈s|V̂|n〉|² ρ(Es).  (7.113)

This is perhaps one of the most important approximations in quantum mechanics, in that it has implications and applications in all areas, especially spectroscopy and other problems of matter interacting with electromagnetic radiation.
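A small numerical sketch of how the golden rule emerges: for an assumed constant coupling |V| and a flat density of final states ρ, summing the sinc-squared probabilities of Eq. (7.102) over a dense grid of final-state energies reproduces the linear-in-t growth P(t) ≈ (2πt/ℏ)|V|²ρ (here ℏ = 1):

    import numpy as np

    hbar = 1.0
    V = 0.01        # assumed coupling matrix element
    rho = 50.0      # assumed flat density of final states (states per unit energy)

    # Dense grid of final-state energy mismatches Es - En - hbar*omega
    E = np.linspace(-2.0, 2.0, 200001)
    dE = E[1] - E[0]

    def P_total(t):
        # sin^2(E t / 2 hbar) / E^2 = (t / 2 hbar)^2 * sinc^2(E t / (2 pi hbar))
        sinc2 = (t / (2 * hbar))**2 * np.sinc(E * t / (2 * np.pi * hbar))**2
        return np.sum(4 * V**2 * sinc2 * rho * dE)

    for t in (5.0, 10.0, 20.0, 40.0):
        print(f"t={t:5.1f}   P(t)={P_total(t):.5f}"
              f"   golden rule 2*pi*t*V^2*rho/hbar = {2*np.pi*t*V**2*rho/hbar:.5f}")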
7.6 Interaction between an atom and light

What I am going to tell you about is what we teach our physics students in the third or fourth year of graduate school... It is my task to convince you not to turn away because you don't understand it. You see my physics students don't understand it... That is because I don't understand it. Nobody does.
– Richard P. Feynman, QED: The Strange Theory of Light and Matter

Here we explore the basis of spectroscopy. We will consider how an atom interacts with a photon field in the low intensity limit, in which dipole interactions dominate. We will then examine non-resonant excitation and discuss the concept of oscillator strength. Finally we will look at resonant emission and absorption, concluding with a discussion of spontaneous emission. In the next section, we will look at non-linear interactions.

7.6.1 Fields and potentials of a light wave

An electromagnetic wave consists of two oscillating vector field components which are perpendicular to each other and oscillate at an angular frequency ω = ck, where k is the magnitude of the wavevector, which points in the direction of propagation, and c is the speed of light. For such a wave we can always set the scalar part of its potential to zero with a suitable choice of gauge, and describe the fields associated with the wave in terms of a vector potential A given by

A(r, t) = Ao ez e^{i(ky−ωt)} + Ao* ez e^{−i(ky−ωt)}.

Here the wave-vector points in the +y direction, the electric field E is polarized along z, and the magnetic field B lies along x. Using Maxwell's relations,

E(r, t) = −∂A/∂t = iω ez ( Ao e^{i(ky−ωt)} − Ao* e^{−i(ky−ωt)} )

and

B(r, t) = ∇ × A = ik ex ( Ao e^{i(ky−ωt)} − Ao* e^{−i(ky−ωt)} ).

We are free to choose the time origin, so we choose it so as to make Ao purely imaginary and set

iωAo = E/2  (7.114)
ikAo = B/2  (7.115)

where E and B are real quantities such that

E/B = ω/k = c.

Thus

E(r, t) = E ez cos(ky − ωt)  (7.116)
B(r, t) = B ex cos(ky − ωt)  (7.117)

where E and B are the magnitudes of the electric and magnetic field components of the plane wave.

Lastly, we define what is known as the Poynting vector (yes, it's pronounced "pointing"), which is parallel to the direction of propagation:

S = ɛo c² E × B.  (7.118)

Using the expressions for E and B above and averaging over several oscillation periods,

⟨S⟩ = ɛo c (E²/2) ey.  (7.119)

7.6.2 Interactions at Low Light Intensity
The electromagnetic wave we just discussed can interact with an atomic electron. The Hamiltonian of this electron can be written as

H = (1/2m)( P − qA(r, t) )² + V(r) − (q/m) S · B(r, t)

where the first term contains the interaction between the electron and the electric field of the wave and the last term represents the interaction between the magnetic moment of the electron and the magnetic field of the wave. In expanding the kinetic energy term we have to remember that momentum and position do not commute. However, in the present case A is parallel to the z axis and Pz commutes with y, so we wind up with the following:

H = Ho + W

where

Ho = P²/2m + V(r)

is the unperturbed (atomic) Hamiltonian and

W = −(q/m) P · A − (q/m) S · B + (q²/2m) A².

The first two terms depend linearly upon A and the third is quadratic in A. So, for low intensity we can take

W = −(q/m) P · A − (q/m) S · B.
Before moving on, we evaluate the relative importance of each term by order of magnitude for transitions between bound states. In the second term, the contribution of the spin operator is on the order of ℏ and the contribution from B is on the order of kA. Thus,

WB/WE = |(q/m) S · B| / |(q/m) P · A| ≈ ℏk/p.

Now, ℏ/p is on the order of an atomic radius, ao, and k = 2π/λ, where λ is the wavelength of the light, typically on the order of 1000 ao. Thus,

WB/WE ≈ ao/λ ≪ 1.

So the magnetic coupling is not at all important, and we focus only upon the coupling to the electric field.

Using the expressions we derived previously, the coupling to the electric field component of the light wave is given by

WE = −(q/m) pz ( Ao e^{iky} e^{−iωt} + Ao* e^{−iky} e^{+iωt} ).

Now we expand the exponential in powers of y,

e^{±iky} = 1 ± iky − (1/2)k²y² + ···

Since ky ≈ ao/λ ≪ 1, we can, to a good approximation, keep only the first term. Thus we obtain the dipole coupling operator

WD = (qE/mω) pz sin(ωt).

In the electric dipole approximation, W(t) = WD(t).

Note that one might have expected WD to be written as

WD = −qEz cos(ωt),

since we are, after all, talking about a dipole moment associated with the motion of the electron about the nucleus. Actually, the two expressions are equivalent! The reason is that we can always choose a different gauge to represent the physical problem without changing the physical result. To get the present result, we used

A = (E/ω) ez sin(ωt)
and

U(r) = 0

as the scalar potential. A gauge transformation is introduced by taking a function f and defining a new vector potential and a new scalar potential as

A′ = A + ∇f
U′ = U − ∂f/∂t.

We are free to choose f however we desire. Let's take f = zE sin(ωt)/ω. Thus,

A′ = ez (E/ω) ( sin(ky − ωt) + sin(ωt) )

and

U′ = −zE cos(ωt)

is the new scalar potential. In the electric dipole approximation ky is small, so we set ky = 0 everywhere and obtain A′ = 0. Thus, the total Hamiltonian becomes

H = Ho + qU′(r, t)

with perturbation

W′D = −qzE cos(ωt).

This is the usual form of the dipole coupling operator. However, when we perform the gauge transformation we must transform the state vector as well.

Next, let us consider the matrix elements of the dipole operator between two stationary states of Ho, |ψi〉 and |ψf〉, with eigenenergies Ei and Ef respectively. The matrix elements of WD are given by

Wfi(t) = (qE/mω) sin(ωt) 〈ψf|pz|ψi〉.

We can evaluate this by noting that

[z, Ho] = iℏ ∂Ho/∂pz = iℏ pz/m.

Thus,

〈ψf|pz|ψi〉 = i m ωfi 〈ψf|z|ψi〉,

and consequently

Wfi(t) = i q E ωfi (sin(ωt)/ω) zfi.

Thus, the matrix elements of the dipole operator are proportional to those of the position operator. This determines the selection rules for the transition.
Before going through any specific details, let us consider what happens if the frequency ω does not coincide with ωfi. Specifically, we limit ourselves to transitions originating from the ground state of the system, |ψo〉. We will assume that the field is weak and that in the field the atom acquires a time-dependent dipole moment which oscillates at the same frequency as the field via a forced oscillation. To simplify matters, let's assume that the electron is harmonically bound to the nucleus in a classical potential

V(r) = (1/2) m ωo² r²

where ωo is the natural frequency of the electron.

The classical motion of the electron is given by the equation of motion (via the Ehrenfest theorem)

z̈ + ωo² z = (qE/m) cos(ωt).

This is the equation of motion for a harmonic oscillator subject to a periodic force. This inhomogeneous differential equation can be solved (e.g. using Fourier transform methods), and the result is

z(t) = A cos(ωot − φ) + [ qE/(m(ωo² − ω²)) ] cos(ωt)

where the first term represents the harmonic motion of the electron in the absence of the driving force. The two coefficients, A and φ, are determined by the initial conditions. If there is a very slight damping of the natural motion, the first term disappears after a while, leaving only the second, forced oscillation, so we write

z = [ qE/(m(ωo² − ω²)) ] cos(ωt).

Thus, we can write the classical induced electric dipole moment of the atom in the field as

D = qz = [ q²E/(m(ωo² − ω²)) ] cos(ωt).

Typically this is written in terms of a susceptibility, χ, where

χ = q² / ( m(ωo² − ω²) ).
Now we look at this from a quantum mechanical point of view. Again, take the initial state to be the ground state and H = Ho + WD as the Hamiltonian. The time-evolved state can be written as a superposition of eigenstates of Ho,

|ψ(t)〉 = Σn cn(t)|φn〉.

To evaluate this we can use the results derived previously in our derivation of the golden rule:

|ψ(t)〉 = |φo〉 + Σ_{n≠0} (qE/2imℏω) 〈n|pz|φo〉 [ (e^{−iωnot} − e^{iωt})/(ωno + ω) − (e^{−iωnot} − e^{−iωt})/(ωno − ω) ] |φn〉  (7.120)

where we have removed a common phase factor. We can then calculate the expectation value of the dipole moment, 〈D(t)〉, as

〈D(t)〉 = (2q²E/ℏ) cos(ωt) Σn ωno |〈φn|z|φo〉|² / (ωno² − ω²).  (7.121)
Oscillator Strength

We can now notice the similarity between a driven harmonic oscillator and the expectation value of the dipole moment of an atom in an electric field. We define the oscillator strength as a dimensionless, real number characterizing the transition between |φo〉 and |φn〉:

fno = (2mωno/ℏ) |〈φn|z|φo〉|².

In Exercise 2.4 we proved the Thomas-Reiche-Kuhn sum rule, which we can write in terms of the oscillator strengths as

Σn fno = 1.

This can also be written in the very compact form

(m/ℏ²) 〈φo|[x, [H, x]]|φo〉 = 1.
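A quick numerical sketch verifying the sum rule for an assumed model system, the one-dimensional harmonic oscillator discretized on a grid (ℏ = m = ωo = 1): summing fno = 2ωno|〈n|x|0〉|² over the low-lying eigenstates gives 1 to within the discretization error.

    import numpy as np

    # Assumed model: 1D harmonic oscillator on a grid (hbar = m = omega0 = 1)
    N, L = 600, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]

    # Finite-difference kinetic energy plus harmonic potential
    T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                          - 2 * np.eye(N)
                          + np.diag(np.ones(N - 1), -1))
    H = T + np.diag(0.5 * x**2)

    E, psi = np.linalg.eigh(H)   # columns of psi are discrete-normalized eigenvectors

    # Oscillator strengths f_n0 = 2 * (E_n - E_0) * |<n|x|0>|^2
    f = [2.0 * (E[n] - E[0]) * (psi[:, n] @ (x * psi[:, 0]))**2 for n in range(1, 30)]
    print(sum(f))   # ~1.0  (Thomas-Reiche-Kuhn sum rule)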
7.6.3 Photoionization of Hydrogen 1s

Up until now we have considered transitions between discrete states induced by some external perturbation. Here we consider the single-photon photoionization of the hydrogen 1s orbital to illustrate how the golden rule formalism can be used to calculate photoionization cross-sections as a function of the photon frequency. We already have an expression for the dipole coupling,

WD = (qE/mω) pz sin(ωt),  (7.122)

and we have derived the golden rule rate for transitions between states:

Rif = (2π/ℏ) |〈f|V|i〉|² δ(Ei − Ef + ℏω).  (7.123)

For transitions to the continuum, the final states are plane waves,

ψk(r) = (1/Ω^{1/2}) e^{ik·r},  (7.124)

where Ω is the normalization volume. Thus the matrix element 〈1s|V|k〉 can be written in terms of

〈1s|pz|k〉 = (ℏkz/Ω^{1/2}) ∫ ψ1s(r) e^{ik·r} dr.  (7.125)

To evaluate the integral, we need to expand the plane wave in spherical coordinates. This can be done via the expansion

e^{ik·r} = Σl i^l (2l + 1) jl(kr) Pl(cos θ)  (7.126)

where jl(kr) is a spherical Bessel function and Pl(x) is a Legendre polynomial, which we can also write in terms of a spherical harmonic,

Pl(cos θ) = √( 4π/(2l + 1) ) Yl0(θ, φ).  (7.127)
Thus, the integral we need to perform is

〈1s|k〉 = (1/√(πΩ)) Σl [ ∫ Y00* Yl0 dΩ′ ] i^l √(4π(2l + 1)) ∫₀^∞ r² e^{−r} jl(kr) dr.  (7.128)

The angular integral is done by orthogonality and produces a delta function which restricts the sum to l = 0 only, leaving

〈1s|k〉 = (1/√Ω) ∫₀^∞ r² e^{−r} j0(kr) dr.  (7.129)

The radial integral can be performed easily using

j0(kr) = sin(kr)/(kr),  (7.130)

leaving

〈1s|k〉 = (1/Ω^{1/2}) 2/(1 + k²)².  (7.131)

Thus, the matrix element is given by

〈1s|V|k〉 = (qEℏ/mω) (1/Ω^{1/2}) 2/(1 + k²)².  (7.132)

This we can insert directly into the golden rule formula to get the photoionization rate to a given k-state,

R0k = (2πℏ/Ω) (qE/mω)² 4/(1 + k²)⁴ δ(Eo − Ek + ℏω),  (7.133)

which we can manipulate into the form

R0k = (16πm/ℏΩ) (qE/mω)² δ(k² − K²)/(1 + k²)⁴  (7.134)

where we write K² = 2m(ℏω − EI)/ℏ² to make the notation a bit more compact. Eventually we want the rate as a function of the photon frequency, so let's collect everything except the frequency and the volume element into a single constant, I, which is related to the intensity of the incident photon field:

R0k = (I/Ω)(1/ω²) δ(k² − K²)/(1 + k²)⁴.  (7.135)

Now we sum over all possible final states to get the total photoionization rate. To do this, we turn the sum over final states into an integral via

Σk → [ Ω/(2π)³ ] 4π ∫₀^∞ k² dk.  (7.136)
Thus,

R = (I/Ω)(1/ω²) [ Ω/(2π)³ ] 4π ∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk
= (I/ω²)(1/2π²) ∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk.

Now we change variables, y = k² with dy = 2k dk, so that the integral becomes

∫₀^∞ k² δ(k² − K²)/(1 + k²)⁴ dk = (1/2) ∫₀^∞ [ y^{1/2}/(1 + y)⁴ ] δ(y − K²) dy = K/( 2(1 + K²)⁴ ).

Pulling everything together, we see that the total photoionization rate is given by

R = (I/ω²)(1/2π²) K/( 2(1 + K²)⁴ )  (7.137)
= I √(2ω − 1) / ( 64 π² ω⁶ )  (7.138)

where in the last line we have converted to atomic units (ℏ = m = 1, EI = 1/2, so that K² = 2ω − 1 and 1 + K² = 2ω) to clean things up a bit. This expression is clearly valid only when ℏω > EI = 1/2 hartree (13.6 eV); a plot of the photoionization rate is given in Fig. 7.2.
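A short matplotlib sketch reproducing the shape of Fig. 7.2 from the final expression above (atomic units, with the intensity prefactor I set to 1):

    import numpy as np
    import matplotlib.pyplot as plt

    I0 = 1.0                             # intensity prefactor (arbitrary units)
    w = np.linspace(0.501, 2.0, 400)     # photon energy in hartree; threshold at 0.5

    K = np.sqrt(2.0 * w - 1.0)           # K^2 = 2(w - 1/2) in atomic units
    R = I0 * K / (4 * np.pi**2 * w**2 * (1 + K**2)**4)

    plt.plot(w, R / R.max())             # normalized, arbitrary units as in Fig. 7.2
    plt.xlabel("photon energy (hartree)")
    plt.ylabel("photoionization rate (arb. units)")
    plt.show()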
7.6.4 Spontaneous Emission <strong>of</strong> Light<br />
The emission and absorption of light by an atom or molecule is perhaps the most spectacular and important phenomenon in the universe. It happens when an atom or molecule undergoes a transition from one state to another due to its interaction with the electromagnetic field. Because the electromagnetic field cannot be entirely eliminated from any so-called isolated system (except for certain quantum confinement experiments), no atom or molecule is ever really isolated. Thus, even in the absence of an explicitly applied field, an excited system can spontaneously emit a photon and relax to a lower energy state. Since we have all done spectroscopy experiments at one point in our education or another, we all know that the transitions are between discrete energy levels. In fact, it was in the examination of light passing through glass and light emitted from flames that people in the 19th century began to speculate that atoms can absorb and emit light only at specific wavelengths.
We will use the GR to deduce the probability of a transition under the influence of an applied light field (laser or otherwise). We will argue that the system is in equilibrium with the electromagnetic field and that the laser drives the system out of equilibrium. From this we can deduce the rate of spontaneous emission in the absence of the field.
[Figure 7.2: Photo-ionization spectrum for the hydrogen atom; R (arb. units) vs. ħω (a.u.).]
The electric field associated with a monochromatic light wave of average intensity I is determined from
\langle I\rangle = c\langle\rho\rangle   (7.139)
             = c\left(\frac{\epsilon_o}{2}\langle E_o^2\rangle + \frac{1}{2\mu_o}\langle B_o^2\rangle\right)   (7.140)
             = \left(\frac{\epsilon_o}{\mu_o}\right)^{1/2}\frac{E_o^2}{2}   (7.141)
             = c\epsilon_o\frac{E_o^2}{2}   (7.142)
where ρ is the energy density of the field, and |E_o| and |B_o| = (1/c)|E_o| are the maximum amplitudes of the E and B fields of the wave. Units are MKS units.
The em wave in reality contains a spread of frequencies, so we must also specify the intensity in a definite frequency interval,
dI = c\,u(\omega)\,d\omega   (7.143)
where u(ω) is the energy density per unit frequency at ω.
Within the “semi-classical” dipole approximation, the coupling between a molecule and the light wave is
\vec\mu\cdot\vec E(t) = \vec\mu\cdot\hat\epsilon\,\frac{E_o}{2}\cos(\omega t)   (7.144)
where \vec\mu is the dipole moment vector and \hat\epsilon is the polarization vector of the wave. Using this result, we can go back to last week's lecture and plug directly into the GR and deduce that
P_{fi}(\omega,t) = 4|\langle f|\vec\mu\cdot\hat\epsilon|i\rangle|^2\,\frac{E_o^2}{4}\,\frac{\sin^2\!\big((E_f-E_i-\hbar\omega)t/2\hbar\big)}{(E_f-E_i-\hbar\omega)^2}   (7.145)
Now, we can take into account the spread of frequencies of the em wave around the resonant value of ω_o = (E_f − E_i)/ħ. To do this we note that
E_o^2 = \frac{2\langle I\rangle}{c\epsilon_o}   (7.146)
and replace ⟨I⟩ with (dI/dω)dω. Then
P_{fi}(t) = \int_0^\infty d\omega\,P_{fi}(t,\omega)   (7.147)
          = \frac{2}{c\epsilon_o}\left(\frac{dI}{d\omega}\right)_{\omega_o}|\langle f|\vec\mu\cdot\hat\epsilon|i\rangle|^2\int_0^\infty \frac{\sin^2\!\big((\hbar\omega_o-\hbar\omega)t/2\hbar\big)}{(\hbar\omega_o-\hbar\omega)^2}\,d\omega   (7.148)
To get this we assume that dI/dω and the matrix element of the coupling vary slowly with frequency compared to the sin²(x)/x² term, so that as far as the integration is concerned they are both constants. With ω_o so fixed, we can do the integral over dω to get πt/(2ħ²), and we obtain the GR transition rate
k_{fi} = \frac{\pi}{c\epsilon_o\hbar^2}\,|\langle f|\vec\mu\cdot\hat\epsilon|i\rangle|^2\left(\frac{dI}{d\omega}\right)_{\omega_o}   (7.149)
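As a quick numerical sanity check (my addition, with ħ set to 1 and arbitrary test values), the frequency integral above does indeed approach πt/(2ħ²) once the limits are extended:

import numpy as np
from scipy.integrate import quad

t, w0 = 10.0, 50.0            # arbitrary test values; hbar = 1

def integrand(w):
    x = w0 - w                # hbar*(w0 - w) with hbar = 1
    if x == 0.0:
        return (t/2.0)**2     # limiting value of sin^2(xt/2)/x^2 at x = 0
    return np.sin(x*t/2.0)**2 / x**2

val, _ = quad(integrand, w0 - 100.0, w0 + 100.0, limit=2000)
print(val, np.pi*t/2.0)       # both ~ 15.7, i.e. pi*t/(2*hbar^2) with hbar = 1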
Notice also that this equation predicts that the rate for excitation is identical to the rate for de-excitation. Because the radiation field contains both a +ω and a −ω term (unless the field is circularly polarized), the transition rate from a lower-energy state to a higher-energy state is the same as that from a higher-energy state to a lower-energy state.
However, we know that systems can emit spontaneously, in which a state of higher energy goes to a state of lower energy in the absence of an external field. This is difficult to explain in the present framework since we have assumed that |i⟩ is stationary.
Let's assume that we have an ensemble of atoms in a cavity containing em radiation and the system is in thermodynamic equilibrium. (Thought you could escape thermodynamics, eh?) Let E_1 and E_2 be the energies of two states of the atom with E_2 > E_1. When equilibrium has been established, the number of atoms in the two states is determined by the Boltzmann equation:
\frac{N_2}{N_1} = \frac{Ne^{-E_2\beta}}{Ne^{-E_1\beta}} = e^{-\beta(E_2-E_1)}   (7.150)
where β = 1/kT. The number of atoms (per unit time) undergoing the transition from 1 to 2 is proportional to the rate k_{21} induced by the radiation and to the number of atoms in the initial state, N_1:
\frac{dN}{dt}(1\to 2) = N_1 k_{21}   (7.151)
The number <strong>of</strong> atoms going from 2 to 1 is proportional to N2 and to k12 + A where A is the<br />
spontaneous transition rate<br />
At equilibrium, these two rates must be equal. Thus,<br />
dN<br />
dt (2 → 1) = N2(k21 + A) (7.152)<br />
k21 + A<br />
k21<br />
= N1<br />
N2<br />
= e ¯hωβ<br />
(7.153)<br />
Now, let's refer to the result for the induced rate and express it in terms of the energy density per unit frequency of the cavity, u(ω):
k_{21} = \frac{\pi}{\epsilon_o\hbar^2}|\langle 2|\vec\mu\cdot\hat\epsilon|1\rangle|^2\,u(\omega) = B_{21}u(\omega)   (7.154)
where
B_{21} = \frac{\pi}{\epsilon_o\hbar^2}|\langle 2|\vec\mu\cdot\hat\epsilon|1\rangle|^2.   (7.155)
For em radiation in equilibrium at temperature T, the energy density per unit frequency is given by Planck's Law:
u(\omega) = \frac{\hbar\omega^3}{\pi^2c^3}\,\frac{1}{e^{\hbar\omega\beta}-1}   (7.156)
Combining the results we obtain
\frac{B_{12}}{B_{21}} + \frac{A}{B_{21}}\frac{1}{u(\omega)} = e^{\hbar\omega\beta}   (7.157)
\frac{B_{12}}{B_{21}} + \frac{A}{B_{21}}\frac{\pi^2c^3}{\hbar\omega^3}\left(e^{\hbar\omega\beta}-1\right) = e^{\hbar\omega\beta}   (7.158)
which must hold for all temperatures. Since
\frac{B_{21}}{B_{12}} = 1,   (7.160)
we get
\frac{A}{B_{21}}\frac{\pi^2c^3}{\hbar\omega^3} = 1   (7.161)
and thus the spontaneous emission rate is
A = \frac{\hbar\omega^3}{\pi^2c^3}B_{12}   (7.162)
  = \frac{\omega^3}{\epsilon_o\pi\hbar c^3}|\langle 2|\vec\mu\cdot\hat\epsilon|1\rangle|^2   (7.163)
This is a key result in that it determines the probability for the emission <strong>of</strong> light by atomic<br />
and molecular systems. We can use it to compute the intensity <strong>of</strong> spectral lines in terms <strong>of</strong> the<br />
electric dipole moment operator. The lifetime <strong>of</strong> the excited state is then inversely proportional<br />
to the spontaneous decay rate.<br />
\tau = \frac{1}{A}   (7.164)
To compute the matrix elements, we can make a rough approximation that ⟨μ⟩ ∝ ⟨x⟩e, where e is the charge of an electron and ⟨x⟩ is on the order of atomic dimensions. We also must include a factor of 1/3 for averaging over all orientations of (\vec\mu\cdot\hat\epsilon), since at any given time the moments are not all aligned. Thus
\frac{1}{\tau} = A = \frac{4}{3}\frac{\omega^3}{\hbar c^3}\frac{e^2}{4\pi\epsilon_o}|\langle x\rangle|^2.   (7.165)
The factor
\frac{e^2}{4\pi\epsilon_o\hbar c} = \alpha \approx \frac{1}{137}   (7.166)
is the fine structure constant. Also, ω/c = 2π/λ. So, setting ⟨x⟩ ≈ 1 Å,
A = \frac{4}{3}\frac{1}{137}\,c\left(\frac{2\pi}{\lambda}\right)^3(1\,{\rm\AA})^2 \approx \frac{6\times 10^{18}}{[\lambda({\rm\AA})]^3}\ {\rm sec}^{-1}.   (7.167)
So, for a typical wavelength, λ ≈ 4 × 10³ Å,
\tau = 10^{-8}\ {\rm sec}   (7.168)
which is consistent with observed lifetimes.<br />
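For concreteness, here is a two-line numerical version of the estimate above (my addition, assuming ⟨x⟩ = 1 Å and expressing c in Å/s); the numbers are of course only order-of-magnitude.

import numpy as np

c = 3.0e18           # speed of light in Angstrom/s
lam = 4.0e3          # wavelength in Angstrom
x = 1.0              # <x> ~ 1 Angstrom

# Eq. (7.167): A = (4/3)(1/137) c (2*pi/lambda)^3 <x>^2
A = (4.0/3.0)*(1.0/137.0)*c*(2.0*np.pi/lam)**3 * x**2
print(A, 1.0/A)      # A ~ 1e8 s^-1, so tau ~ 1e-8 s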
We can also compare with classical radiation theory. The power radiated by an accelerated particle of charge e is given by the Larmor formula (c.f. Jackson),
P = \frac{2}{3}\frac{e^2}{4\pi\epsilon_o}\frac{(\dot v)^2}{c^3}   (7.169)
where \dot v is the acceleration of the charge. Assuming the particle moves in a circular orbit of radius r with angular velocity ω, the acceleration is \dot v = ω²r. Thus, the time required to radiate energy ħω/2 is equivalent to the lifetime τ:
\frac{1}{\tau_{class}} = \frac{P}{\hbar\omega/2} = \frac{2P}{\hbar\omega}   (7.170)
                      = \frac{4}{3}\frac{e^2}{4\pi\epsilon_o}\frac{\omega^4 r^2}{\hbar\omega c^3}   (7.171)
                      = \frac{4}{3}\frac{\omega^3}{\hbar c^3}\frac{e^2}{4\pi\epsilon_o}r^2.   (7.172)
This qualitative agreement between the classical and quantum result is a manifestation <strong>of</strong> the<br />
correspondence principle. However, it must be emphasized that the MECHANISM for radiation<br />
is entirely different. The classical result will never predict a discrete spectrum. This was in fact<br />
a very early indication that something was certainly amiss with the classical electro-magnetic<br />
field theories <strong>of</strong> Maxwell and others.<br />
7.7 Time-dependent golden rule<br />
In the last lecture we derived the Golden Rule (GR) transition rate as
k(t) = \frac{2\pi}{\hbar}|\langle s|\hat V|n\rangle|^2\rho(E_s)   (7.173)
This is perhaps one of the most important approximations in quantum mechanics in that it has implications and applications in all areas, especially spectroscopy and other applications of matter interacting with electro-magnetic radiation. In today's lecture, I want to show how we can use the Golden Rule to simplify some very complex problems and, moreover, how we used the GR to solve a real problem. The GR has been used by a number of people in chemistry to look at a wide variety of problems. In fact, most of this lecture comes right out of the Journal of Chemical Physics. Some papers you may be interested in knowing about include:
1. B. J. Schwartz, E. R. <strong>Bittner</strong>, O. V. Prezhdo, and P. J. Rossky, J. Chem. Phys. 104, 5242<br />
(1996).<br />
2. E. Neria and A. Nitzan, J. Chem. Phys. 99, 1109 (1993).<br />
3. A. Staib and D. Borgis, J. Chem. Phys. 103, 2642 (1995).
4. E. J. Heller, J. Chem. Phys. 75, 2923 (1981).<br />
5. W. Gelbart, K. Spears, K. F. Freed, J. Jortner, S. A. Rice, Chem. Phys. Lett. 6 345<br />
(1970).<br />
The focus <strong>of</strong> the lecture will be to use the GR to calculate the transition rate from one<br />
adiabatic potential energy surface to another via non-radiative decay. Recall in a previous lecture,<br />
we talked about potential energy curves <strong>of</strong> molecule and that these are obtained by solving the<br />
Schrodinger equation for the electronic degrees <strong>of</strong> freedom assuming that the nuclei move very<br />
slowly. We defined the adiabatic or Born-Oppenheimer potential energy curves for the nuclei in a<br />
molecule by solving the Schrodinger equation for the electrons for fixed nuclear positions. These<br />
potential curves are thus the electronic eigenvalues parameterized by the nuclear positions.<br />
Vi(R) = Ei(R) (7.174)<br />
209
Under the BO approximation, the nuclei move about on a single energy surface and the electronic wavefunction is simply |Ψ_i(R)⟩, parameterized by the nuclear positions. However, when the nuclei are moving fast, this assumption is no longer true since
i\hbar\frac{d}{dt}|\Psi_i(R)\rangle = \left[i\hbar\,\frac{\partial R}{\partial t}\frac{\partial}{\partial R} + E_i(R)\right]|\Psi_i(R)\rangle.   (7.175)
That is to say, when the nuclear velocities are large in a direction along which the wavefunction changes a lot with varying nuclear position, the Born-Oppenheimer approximation is not so good and the electronic states become coupled by the nuclear motion. This leads to a wide variety of physical phenomena, including non-radiative decay and intersystem crossing, and is an important mechanism in electron transfer dynamics.
The picture I want to work with today is a hybrid quantum/classical (or semiclassical) picture. I want to treat the nuclear dynamics as being mostly “classical” with some “quantum aspects”. In this picture I will derive a semi-classical version of the Golden-Rule transition rate which can be used in concert with a classical molecular dynamics simulation.
We start with the GR transition rate we derived in the last lecture. We shall for now assume that the states we are coupling are the vibrational-electronic states of the system, written as
|\psi_i\rangle = |\alpha_i(R)I(R)\rangle   (7.176)
where R denotes the nuclear positions, |\alpha(R)\rangle is the adiabatic electronic eigenstate obtained at position R, and |I(R)\rangle is the initial nuclear vibrational state on the \alpha(R) potential energy surface. Let this denote the initial quantum state and denote by
|\psi_f\rangle = |\alpha_f(R)F(R)\rangle   (7.177)
the final quantum state. The GR transition rate at nuclear position R is thus
k_{if} = \frac{2\pi}{\hbar}\sum_f |\langle\psi_i|\hat V|\psi_f\rangle|^2\,\delta(E_i - E_f)   (7.178)
where the sum is over the final (vibrational) density of states and the energy in the δ-function is the electronic energy gap measured with respect to a common origin. We can also define a “thermal” rate constant by ensemble averaging over a collective set of initial states.
7.7.1 Non-radiative transitions between displaced Harmonic Wells<br />
An important application <strong>of</strong> the GR comes in evaluating electron transfer rates between electronic<br />
state <strong>of</strong> a molecule. Let’s approximate the diabatic electronic energy surfaces <strong>of</strong> a molecule as<br />
harmonic wells <strong>of</strong>f-set by by energy and with the well minimums displaced by some amount xo.<br />
Let the curves cross at xs and assume there is a spatial dependent coupling V (x) which couples<br />
the diabatic electronic states. Let T1 denote the upper surface and So denote the ground-state<br />
surface. The diabatic coupling is maximized at the crossing point and decays rapidly as we move<br />
away. Because these electronic states are coupled, the vibrational states on T1 become coupled<br />
to the vibrational states on So and vibrational amplitude can tunnel from one surface to the<br />
210
other. The tunneling rate can be estimated very well using the Golden Rule (assuming that the<br />
amplitude only crosses from one surface to another once.)<br />
kT S = 2π<br />
¯h |〈ΨT |V |ΨS〉| 2 ρ(Es) (7.179)<br />
The wavefunction on each surface is the “vibronic” function mentioned above. This we will write as a product of an electronic term |ψ_T⟩ (or |ψ_S⟩) and a vibrational term |n_T⟩ (or |n_S⟩). For shorthand, let's write the electronic contribution as
V_{TS} = \langle\psi_T|V|\psi_S\rangle   (7.180)
Say I want to know the probability of finding the system in some initial vibronic state after some time. The rate of decay of this state is the sum over all possible decay channels, so I must sum over all final states that I can decay into. The decay rate is thus
k_{TS} = \frac{2\pi}{\hbar}\sum_{m_S}|\langle n_T|V_{TS}|m_S\rangle|^2\rho(E_s)   (7.181)
This equation is completely exact (within the GR approximation) and can be used in this form. However, let's make a series of approximations, derive a set of approximate rates, and compare the various approximations.
Condon Approximation<br />
First I note that the density of states can be rewritten as
\rho(E_s) = {\rm Tr}\,(H_o - E_s)^{-1}   (7.182)
so that
k_{TS} = \frac{2\pi}{\hbar}\sum_{m_S}\frac{|\langle n_T|V_{TS}|m_S\rangle|^2}{E_n - E_s}   (7.183)
where E_n − E_s is the energy gap between the initial and final states, including the electronic energy gap. What this means is that if the energy difference between the bottoms of the respective wells is large, then the initial state will be coupled to the high-lying vibrational states in S_o. Next I make the “Condon Approximation” that
\langle n_T|V_{TS}|m_S\rangle \approx V_{TS}\langle n_T|m_S\rangle   (7.184)
where ⟨n_T|m_S⟩ is the overlap between vibrational state |n_T⟩ in the T_1 well and state |m_S⟩ in the S_o well. These are called Franck-Condon factors.
Define the Franck-Condon factor as
\langle n_T|m_S\rangle = \int dx\,\varphi_n^{(T)*}(x)\,\varphi_m^{(S)}(x)   (7.185)
Evaluation of Franck Condon Factors
where \varphi_n^{(T)}(x) is the coordinate representation of a harmonic-oscillator state. We shall assume that the two wells are offset by x_s and have frequencies ω_1 and ω_2. We can write the HO state for each well as a Gauss-Hermite polynomial (c.f. Complement B_V in the text),
\varphi_n(x) = \left(\frac{\beta^2}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\exp\!\left(-\frac{\beta^2}{2}x^2\right)H_n(\beta x)   (7.186)
where \beta = \sqrt{m\omega/\hbar} and H_n(z) is a Hermite polynomial,
H_n(z) = (-1)^n e^{z^2}\frac{\partial^n}{\partial z^n}e^{-z^2}.   (7.187)
Thus the FC factor is
\langle n_T|m_S\rangle = \left(\frac{\beta_T^2}{\pi}\right)^{1/4}\left(\frac{\beta_S^2}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\frac{1}{\sqrt{2^m m!}}\int dx\,\exp\!\left(-\frac{\beta_T^2}{2}x^2\right)\exp\!\left(-\frac{\beta_S^2}{2}(x-x_s)^2\right)H_n(\beta_T x)\,H_m(\beta_S(x-x_s)).   (7.188)
This integral is pretty difficult to solve analytically. In fact, Mathematica even choked on this one. Let's try a simplification: we can expand the Hermite polynomials as
H_n(z) = 2^n\sqrt{\pi}\left[\frac{1}{\Gamma\!\big(\tfrac{1-n}{2}\big)} - \frac{2z}{\Gamma\!\big(-\tfrac{n}{2}\big)} + \cdots\right]   (7.189)
Thus, any integral we want to do involves doing an integral of the form
I_n = \int dx\,x^n\exp\!\left(-\frac{\beta_1^2}{2}x^2 - \frac{\beta_2^2}{2}(x-x_s)^2\right)
    = \int dx\,x^n\exp\!\left(-\frac{\beta_1^2}{2}x^2 - \frac{\beta_2^2}{2}(x^2 - 2xx_s + x_s^2)\right)
    = \exp\!\left(-\frac{\beta_2^2}{2}x_s^2\right)\int dx\,x^n\exp\!\left(-\frac{\beta_1^2+\beta_2^2}{2}x^2\right)\exp\!\left(\beta_2^2\,x\,x_s\right)   (7.190)
    = \exp\!\left(-\frac{\beta_2^2}{2}x_s^2\right)\int dx\,x^n\exp\!\left(-\frac{a}{2}x^2\right)\exp(bx)   (7.191)
where I defined a = \beta_1^2 + \beta_2^2 and b = \beta_2^2 x_s. Performing the integral,
I_n = \exp\!\left(-\frac{\beta_2^2}{2}x_s^2\right)2^{\frac{n-1}{2}}\,a^{-\frac{2+n}{2}}\left[(1+(-1)^n)\sqrt{a}\,\Gamma\!\big(\tfrac{1+n}{2}\big)\,{}_1F_1\!\big(\tfrac{1+n}{2},\tfrac{1}{2},\tfrac{b^2}{2a}\big) - \sqrt{2}\,(-1+(-1)^n)\,b\,\Gamma\!\big(\tfrac{2+n}{2}\big)\,{}_1F_1\!\big(\tfrac{2+n}{2},\tfrac{3}{2},\tfrac{b^2}{2a}\big)\right]   (7.192)
where {}_1F_1(a,b,z) is the hypergeometric function (c.f. Landau and Lifshitz, QM) and Γ(z) is the Gamma function. Not exactly the easiest integral in the world to evaluate. (In other words, don't worry about having to solve this integral on an exam!) To make matters even worse, this is only one term. In order to compute, say, the FC factor between n_i = 10 and m_f = 12, I would need to sum over 120 terms! However, Mathematica knows how to evaluate these functions, and we can use it to compute FC factors very easily.
If the harmonic frequencies are the same in each well, life gets much easier. Furthermore, if I take the initial vibrational state in the T_1 well to be the ground vibrational state, we can evaluate the overlap exactly for this case. The answer is (see Mathematica hand-out)
M[n] = \frac{\beta^n x_s^n}{\sqrt{2^n n!}}   (7.193)
Note that this is different from the FCF calculated in Ref. 5 by Gelbart, et al., who do not have the square-root factor (their denominator is my denominator squared).²
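As a sanity check on these expressions, the short sketch below (not part of the original notes) evaluates the overlap ⟨0_T|n_S⟩ between the ground state of the T1 well and the n-th level of an S0 well displaced by x_s, by direct numerical quadrature of Eq. (7.185); it can be compared against Eq. (7.193) or the hypergeometric result above. Equal frequencies (β_T = β_S = β) are assumed, and the parameter values are arbitrary.

import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi, sqrt

def phi(n, x, beta, x0=0.0):
    """Harmonic-oscillator eigenfunction of Eq. (7.186), centered at x0."""
    z = beta*(x - x0)
    coeffs = np.zeros(n + 1); coeffs[n] = 1.0            # selects H_n
    norm = sqrt(beta/(sqrt(pi)*2.0**n*factorial(n)))
    return norm*np.exp(-z**2/2.0)*hermval(z, coeffs)

beta, xs = 1.0, 1.5     # equal frequencies in both wells; displacement xs
for n in range(6):
    fc, _ = quad(lambda x: phi(0, x, beta)*phi(n, x, beta, xs), -20, 20)
    print(n, fc)        # numerical Franck-Condon overlaps <0_T|n_S>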
Finally, we can evaluate the matrix element as
\langle n_T|V_{TS}|m_S\rangle \approx V_{TS}\,\frac{\beta^n x_s^n}{\sqrt{2^n n!}}   (7.194)
Thus, the GR survival rate for the ground vibrational state of the T_1 surface is
k = \frac{2\pi}{\hbar}V_{TS}^2\sum_n \frac{1}{\hbar\Omega - n\hbar\omega}\,\frac{\beta^n x_s^n}{\sqrt{2^n n!}}   (7.195)
where Ω is the energy difference between the T_1 and S_o potential minima.
“Steep So Approximation”
In this approximation we assume that the potential well of the S_o state is very steep and intersects the T_1 surface at x_s. We also assume that the diabatic coupling is a function of x. Thus, the GR survival rate is
k = \frac{2\pi}{\hbar}\langle V_{TS}\rangle^2\rho_S   (7.196)
where
\langle V_{TS}\rangle = \int dx\,\psi_T^*(x)\psi_S(x)V(x)   (7.197)
When the S_o surface is steeply repulsive, the wavefunction on the S_o surface will be very oscillatory at the classical turning point, which is nearly identical with x_s for very steep potentials. Thus, for purposes of doing integrations, we can assume that
\psi_S(x) = C\,\delta(x - x_s) + \cdots   (7.198)
² I believe this may be a mistake in their paper. I'll have to call Karl Freed about this one.
where x_s is the classical turning point at energy E on the S_o surface. The justification for this comes from the expansion of the “semi-classical” wavefunction on a linear potential, which gives the Airy functions, Ai(x). These can be expanded as
{\rm Ai}(-\zeta) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}ds\,\exp(is^3/3 - i\zeta s)   (7.199)
Expansions of this form also let us estimate the coefficient C.³ Using the δ-function approximation
a\,{\rm Ai}(ax) = \delta(x) + \cdots   (7.200)
we have
\int dx\,\psi_T^*(x)\psi_S(x)V(x) = C\int dx\,\psi_T^*(x)\delta(x-x_s)V(x) = C\,\psi_T^*(x_s)V(x_s)   (7.201)
Now, again assuming that we are in the ground vibrational state on the T_1 surface,
\psi_T(x) = \left(\frac{\beta^2}{\pi}\right)^{1/4}e^{-\beta^2x^2/2}   (7.202)
we have
\int dx\,\psi_T^*(x)\psi_S(x)V(x) = C\left(\frac{\beta^2}{\pi}\right)^{1/4}e^{-\beta^2x_s^2/2}\,V(x_s).   (7.203)
So, we get the approximation
k = \frac{2\pi}{\hbar}\,C\,V(x_s)\left(\frac{\beta^2}{\pi}\right)^{1/4}e^{-\beta^2x_s^2/2}\,\frac{1}{\hbar\Omega}   (7.204)
where C remains to be determined. For that, refer to the Heller-Brown paper.
Time-Dependent Semi-Classical Evaluation
We next do something tricky. There are a number of ways one can represent the δ-function. We will use the Fourier representation of the function and write
\delta(E_i - E_f) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dt\,e^{i(E_i - E_f)t/\hbar}   (7.205)
Thus, we can write
k_{if} = \int_{-\infty}^{\infty}dt\sum_f\langle\alpha_i(R)I(R)|V|\alpha_f(R)F(R)\rangle\,\langle\alpha_f(R)F(R)|e^{+iH_f t/\hbar}\,V\,e^{-iH_i t/\hbar}|\alpha_i(R)I(R)\rangle   (7.206)
³ See Heller and Brown, JCP 79, 3336 (1983).
Integrating over the electronic degrees of freedom, we can define
V_{if}(R) = \langle\alpha_i(R)|V|\alpha_f(R)\rangle   (7.207)
and thus write
k_{if} = \int_{-\infty}^{\infty}dt\sum_f\langle I(R)|V_{if}(R)|F(R)\rangle\,\langle F(R)|e^{+iH_f t/\hbar}\,V_{if}(R)\,e^{-iH_i t/\hbar}|I(R)\rangle   (7.208)
where H_i(R) is the nuclear Hamiltonian for the initial state and H_f(R) is the nuclear Hamiltonian for the final state.
At this point I can remove the sum over f and obtain
k_{if} = \int_{-\infty}^{\infty}dt\,\langle I(R)|V_{if}(R)\,e^{+iH_f t/\hbar}\,V_{if}(R)\,e^{-iH_i t/\hbar}|I(R)\rangle.   (7.209)
Next, we note that
V_{if}(R) = \dot R\cdot\langle\alpha_f(R)|i\hbar\nabla_R|\alpha_i(R)\rangle = \dot R(0)\cdot D_{if}(R(0))   (7.210)
is proportional to the nuclear velocity at the initial time. Likewise, the term in the middle represents the non-adiabatic coupling at some later time. Thus, we can re-write the transition rate as
k_{if} = \int_{-\infty}^{\infty}dt\,(\dot R(0)\cdot D_{if}(R(0)))(\dot R(t)\cdot D_{if}(R(t)))   (7.211)
        \times\langle I(R)|e^{+iH_f t/\hbar}e^{-iH_i t/\hbar}|I(R)\rangle.   (7.212)
Finally, we do an ensemble average over the initial positions of the nuclei and obtain almost the final result:
k_{if} = \int_{-\infty}^{\infty}dt\,\Big\langle(\dot R(0)\cdot D_{if}(R(0)))(\dot R(t)\cdot D_{if}(R(t)))\,J_{if}(t)\Big\rangle,   (7.213)
where I define
J_{if}(t) = \langle I(R)|e^{+iH_f t/\hbar}e^{-iH_i t/\hbar}|I(R)\rangle.   (7.214)
This term represents the evolution of the initial nuclear vibrational state moving forward in time on the initial energy surface and backwards in time on the final energy surface:
|I(R(t))\rangle = e^{-iH_i t/\hbar}|I(R(0))\rangle   (7.215)
\langle I(R(t))| = \langle I(R(0))|e^{iH_f t/\hbar}   (7.216)
So, J(t) represents the time-dependent overlap integral between nuclear wavefunctions evolving<br />
on the different potential energy surfaces.<br />
Let us assume that the potential wells are harmonic with the centers offset by some amount x_s. We can define in each well a set of harmonic-oscillator eigenstates, which we'll write in shorthand as |n_i⟩, where the subscript i denotes the electronic state. At time t = 0, we can expand the initial nuclear wavefunction as a superposition of these states:
|I(R(0))\rangle = \sum_n\gamma_n|n_i\rangle   (7.217)
where \gamma_n = \langle n_i|I(R(0))\rangle. The time evolution in the well is
|I(R(t))\rangle = \sum_n\gamma_n\exp(-i(n+1/2)\omega_i t)|n_i\rangle.   (7.218)
We can also express the evolution of the bra as a superposition of states in the other well,
\langle I(R(t))| = \sum_m\xi_m^*\exp(+i(m+1/2)\omega_f t)\langle m_f|   (7.219)
where \xi_m = \langle m_f|I(R)\rangle are the coefficients. Thus, J(t) is obtained by hooking the two results together:
J(t) = \sum_{mn}\xi_m^*\gamma_n\,e^{+i(m+1/2)\omega_f t}\,e^{-i(n+1/2)\omega_i t}\,\langle m_f|n_i\rangle   (7.220)
Now, we must compute the overlap between harmonic states in one well with harmonic states in<br />
another well. This type <strong>of</strong> overlap is termed a Franck-Condon factor (FC). We will evaluate the<br />
FC factor using two different approaches.<br />
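Before doing so, here is a small sketch (my addition) of how Eq. (7.220) is assembled in practice for the special case where the initial state is the T1 ground vibrational state, so that γ_n = δ_n0 and ξ_m = ⟨m_S|0_T⟩. The overlap values fc[m] and the two well frequencies are placeholder numbers for illustration only; in a real calculation they would come from a quadrature such as the sketch earlier in this section.

import numpy as np

fc = np.array([0.72, 0.55, 0.33, 0.17, 0.08, 0.03])   # hypothetical <m_S|0_T>
w_i, w_f = 1.0, 0.8                                   # well frequencies (arb. units)

def J(t):
    # Eq. (7.220) with gamma_n = delta_n0: only the n = 0 term survives.
    m = np.arange(len(fc))
    return np.sum(np.abs(fc)**2 *
                  np.exp(1j*(m + 0.5)*w_f*t) * np.exp(-1j*0.5*w_i*t))

for t in np.linspace(0.0, 10.0, 5):
    print(t, abs(J(t)))   # |J(0)| ~ 1 up to truncation of the basis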
7.7.2 Semi-Classical Evaluation<br />
I want to make a series of simplifying assumptions about the nuclear wavefunction. Many of these assumptions follow from Heller's paper referenced at the beginning. The assumptions are as follows:
1. At the initial time, ⟨x|I(R)⟩ can be written as a Gaussian of width β centered about R,
\langle x|I(R)\rangle = \left(\frac{\beta^2}{\pi}\right)^{1/4}\exp\!\left(-\frac{\beta^2}{2}(x-R(t))^2 + \frac{i}{\hbar}p(t)(x-R(t))\right)   (7.221)
where p(t) is the classical momentum (c.f. Heller).
2. We know that for a Gaussian wavepacket (especially one in a harmonic well), the center of the Gaussian tracks the classical prediction. Thus, we can write that R(t), the center of the Gaussian, evolves under Newton's equation
m\ddot R(t) = F_i(R)   (7.222)
where F_i(R) is the force computed as the derivative of the i-th energy surface w.r.t. R, evaluated at the current nuclear position (i.e. the force we would get using the Born-Oppenheimer approximation),
F_i(R) = -\frac{\partial}{\partial R}E(R).   (7.223)
3. At t = 0, the initial classical velocities and positions of the nuclear waves evolving on the i and f surfaces are the same.
4. For very short times, we assume that the wave does not spread appreciably. We can fix this assumption very easily if need be.
Using these assumptions, we can approximate the time-dependent FC factor as⁴
J(t) \approx \exp\!\left(-\frac{\beta^2}{4}(R_f(t)-R_i(t))^2\right)\exp\!\left(-\frac{1}{4\beta^2\hbar^2}(p_f(t)-p_i(t))^2\right)\exp\!\left(+\frac{i}{2\hbar}(R_f(t)-R_i(t))\cdot(p_f(t)-p_i(t))\right)   (7.224)
Next, we expand R_i(t) and p_i(t) as a Taylor series about t = 0,
R_i(t) = R_i(0) + t\dot R_i(0) + \frac{t^2}{2!}\ddot R_i(0) + \cdots   (7.225)
Using Newton's equation:
R_i(t) = R_i(0) + t\frac{p(0)}{m} - \frac{t^2}{2!}\frac{F_i(0)}{m} + \cdots   (7.226)
p_i(t) = p_i(0) + F_i(0)\,t + \frac{1}{2}\frac{\partial F_i}{\partial t}t^2 + \cdots   (7.227)
Thus the difference between the nuclear positions after a short amount of time will be
R_i(t) - R_f(t) \approx -t^2\left(\frac{F_i(0)}{2m} - \frac{F_f(0)}{2m}\right) + \cdots   (7.228)
Also, the momentum difference is
p_f(t) - p_i(t) \approx (F_f(0) - F_i(0))\,t + \cdots   (7.229)
Thus,
J(t) \approx \exp\!\left(-\frac{\beta^2 t^4}{16m^2}(F_i(0)-F_f(0))^2\right)\exp\!\left(-\frac{t^2}{4\beta^2\hbar^2}(F_f(0)-F_i(0))^2\right)\exp\!\left(+\frac{i}{4m\hbar}(F_f(0)-F_i(0))^2 t^3\right)   (7.230)
If we include the oscillatory term, the integral does not converge (so much for a short-time approximation!). However, when we do the ensemble average, each member of the ensemble contributes a slightly different phase, so we can safely ignore it. Furthermore, for short times, the decay of the overlap will be dominated by the term proportional to t². Thus, we define the approximate decay curve as
J(t) = \exp\!\left(-\frac{t^2}{4\beta^2\hbar^2}(F_f(0)-F_i(0))^2\right)   (7.231)
Now, pulling everything together, we write the GR rate constant as
k_{if} = \int_{-\infty}^{\infty}dt\,\Big\langle(\dot R(0)\cdot D_{if}(R(0)))(\dot R(t)\cdot D_{if}(R(t)))\exp\!\left(-\frac{t^2}{4\beta^2\hbar^2}(F_f(0)-F_i(0))^2\right)\Big\rangle.   (7.232)
⁴ B. J. Schwartz, E. R. Bittner, O. V. Prezhdo, and P. J. Rossky, J. Chem. Phys. 104, 5242 (1996).
The assumptions are that the overlap decays more rapidly than the oscillations in the autocorrelation<br />
<strong>of</strong> the matrix element. This actually bears itself out in reality.<br />
Let's assume that the overlap decay and the correlation function are uncorrelated (Condon approximation). Under this we can write
k_{if} = \int_{-\infty}^{\infty}dt\,\Big\langle(\dot R(0)\cdot D_{if}(R(0)))(\dot R(t)\cdot D_{if}(R(t)))\Big\rangle\exp\!\left(-\frac{t^2}{4\beta^2\hbar^2}\Big\langle(F_f(0)-F_i(0))^2\Big\rangle\right),   (7.233)
or, defining
C_{if}(t) = \Big\langle(\dot R(0)\cdot D_{if}(R(0)))(\dot R(t)\cdot D_{if}(R(t)))\Big\rangle   (7.234)
and using (c.f. Chandler, Stat. Mech.)
\langle e^A\rangle = e^{\langle A\rangle},   (7.235)
the desired result is
k_{if} = \int_{-\infty}^{\infty}dt\,C_{if}(t)\exp\!\left(-t^2\,\frac{\langle(F_f(0)-F_i(0))^2\rangle}{4\beta^2\hbar^2}\right).   (7.236)
Now, let's assume that the correlation function is an oscillatory function of time,
C_{if}(t) = |V_{if}|^2\cos(\Omega t).   (7.237)
Then
k_{if} = \int_{-\infty}^{\infty}dt\,|V_{if}|^2\cos(\Omega t)\exp\!\left(-t^2\,\frac{\langle(F_f(0)-F_i(0))^2\rangle}{4\beta^2\hbar^2}\right)   (7.238)
       = \sqrt{\frac{\pi}{b}}\,e^{-\Omega^2/(4b)}\,|V_{if}|^2   (7.239)
where
b = \frac{\langle(F_f(0)-F_i(0))^2\rangle}{4\beta^2\hbar^2}.   (7.240)
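A quick numerical confirmation of the last step (my addition, with arbitrary test values): the Gaussian-damped cosine integral indeed equals √(π/b) e^{−Ω²/4b}.

import numpy as np
from scipy.integrate import quad

Omega, b, V2 = 2.0, 0.5, 1.0    # test values; V2 stands for |V_if|^2

num, _ = quad(lambda t: V2*np.cos(Omega*t)*np.exp(-b*t**2), -np.inf, np.inf)
ana = V2*np.sqrt(np.pi/b)*np.exp(-Omega**2/(4.0*b))
print(num, ana)                 # the two values agree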
In Ref. 1 we used this equation (actually one a few lines up) to compute the non-radiative relaxation rates between the p and s states of an aqueous electron in H2O and in D2O to estimate the isotopic dependence of the transition rate. Briefly, the story goes like this. Paul Barbara's group at U. Minnesota and Yann Gauduel's group in France measured the fluorescence decay of an excited excess electron in H2O and in D2O and noted that there was no resolvable difference between the two solvents, i.e. the non-radiative decay was not at all sensitive to isotopic changes in the solvent. (The experimental lifetimes are roughly 310 fs for both solvents, with a resolution of about 80 fs.) This is very surprising since, looking at the non-adiabatic coupling operator above, you will notice that the matrix element coupling the states is proportional to the nuclear velocities. (The electronic matrix element is between the s and p states of the electron.) Since the velocity of a proton is roughly √2 times that of a deuteron with the same kinetic energy, the non-adiabatic coupling matrix element between the s and p states in water should be about √2 times that in heavy water, and the transition rate in water should be roughly twice that in heavy water. It turns out that since the D's move more slowly than the H's, the nuclear overlap decays roughly twice as slowly. Thus we get competing factors of two which cancel out.
7.8 Problems and Exercises<br />
Exercise 7.1 A one dimensional harmonic oscillator, with frequency ω, in its ground state is<br />
subjected to a perturbation <strong>of</strong> the form<br />
H ′ (t) = C ˆpe −α|t| cos(Ωt) (7.241)<br />
where \hat p is the momentum operator and C, α, and Ω are constants. What is the probability that as t → ∞ the oscillator will be found in its first excited state, to first order in perturbation theory? Discuss the result as a function of Ω, ω, and α.
Exercise 7.2 A particle is in a one-dimensional infinite well <strong>of</strong> width 2a. A time-dependent<br />
perturbation <strong>of</strong> the form<br />
H'(t) = T_oV_o\sin\!\left(\frac{\pi x}{a}\right)\delta(t)   (7.242)
acts on the system, where To and Vo are constants. What is the probability that the system will<br />
be in the first excited state afterwards?<br />
Exercise 7.3 Because <strong>of</strong> the finite size <strong>of</strong> the nucleus, the actual potential seen by the electron<br />
is more like:<br />
[Figure: sketch of the modified potential, V (hartree) vs. r (Bohr), near the nucleus.]
1. Calculate this effect on the ground state energy <strong>of</strong> the H atom using first order perturbation<br />
theory with<br />
H' = \begin{cases}\dfrac{e^2}{r} - \dfrac{e^2}{R} & \text{for } r \le R\\[4pt] 0 & \text{otherwise}\end{cases}   (7.243)
2. Explain this choice for H'.
3. Expand your results in powers of R/a_o ≪ 1. (Be careful!)
4. Evaluate numerically your result for R = 1 fm and R = 100 fm.
5. Give the fractional shift of the energy of the ground state.
6. A more rigorous approach is to take into account the fact that the nucleus has a homogeneous charge distribution. In this case, the potential energy experienced by the electron goes as
V(r) = -\frac{Ze^2}{r}
when r > R and
V(r) = -\frac{Ze^2}{2R}\left[3 - \left(\frac{r}{R}\right)^2\right]
for r ≤ R. What is the perturbation in this case? Calculate the energy shift for the H (1s)
energy level for R = 1fm and compare to the result you obtained above.<br />
Note that this effect is the “isotope shift” and can be observed in the spectral lines <strong>of</strong> the heavy<br />
elements.<br />
Chapter 8<br />
Many Body <strong>Quantum</strong> <strong>Mechanics</strong><br />
It is <strong>of</strong>ten stated that <strong>of</strong> all the theories proposed in this century, the silliest is<br />
quantum theory. In fact, some say that the only thing that quantum theory has<br />
going for it is that it is unquestionably correct.<br />
–M. Kaku (Hyperspace, Oxford <strong>University</strong> Press, 1995)<br />
8.1 Symmetry with respect to particle Exchange<br />
Up to this point we have primarily dealt with quantum mechanical systems of one particle or with systems of distinguishable particles. By distinguishable we mean that one can assign a unique label to a particle which distinguishes it from the other particles in the system. Electrons in molecules and other systems are identical and cannot be assigned a unique label. Thus, we must concern ourselves with the consequences of exchanging the labels we use. To establish a firm formalism and notation, we shall write the many-particle wavefunction for a system as
\langle\psi_N|\psi_N\rangle = \int\cdots\int d^3r_1\cdots d^3r_N\,|\psi_N(r_1,r_2,\cdots,r_N)|^2 < +\infty   (8.1)
                            = \int\cdots\int d1\,d2\cdots dN\,|\psi_N(1,2,\cdots,N)|^2.   (8.2)
We will define the N-particle state space as the product of the individual single-particle state spaces thusly:
|\psi_N\rangle = |a_1a_2\cdots a_N) = |a_1\rangle\otimes|a_2\rangle\otimes\cdots\otimes|a_N\rangle   (8.3)
For future reference, we will write the multiparticle state with a curved bracket: |\cdots). These states have wavefunctions
\langle r|\psi_N\rangle = (r_1\cdots r_N|a_1a_2\cdots a_N) = \langle r_1|a_1\rangle\langle r_2|a_2\rangle\cdots\langle r_N|a_N\rangle   (8.4)
                        = \phi_{a_1}(r_1)\phi_{a_2}(r_2)\cdots\phi_{a_N}(r_N)   (8.5)
These states obey analogous rules for constructing overlap (projections) and idempotent relations.<br />
They form a complete set <strong>of</strong> states (hence form a basis) and any multi-particle state in the state<br />
space can be constructed as a linear combination <strong>of</strong> the basis states.<br />
Thus far we have not taken into account the symmetry property <strong>of</strong> the wavefunction. There<br />
are a multitude <strong>of</strong> possible states which one can construct using the states we defined above.<br />
However, only symmetric and antisymmetric combinations of these states are actually observed in
nature. Particles occurring in symmetric or anti-symmetric states are called Bosons and Fermions<br />
respectively.<br />
Let’s define the permutation operator Pαβ which swaps the positions <strong>of</strong> particles α and β.<br />
e.g.<br />
Also,<br />
P12|1, 2) = |2, 1) (8.6)<br />
P12P12ψ(1, 2) = P 2 12ψ(1, 2) = ψ(1, 2) (8.7)<br />
thus ψ(1, 2) is an eigenstate <strong>of</strong> P12 with eigenvalue ±1. In other words, we can also write<br />
P12ψ(1, 2) = ζψ(1, 2) (8.8)<br />
where ζ = ±1.<br />
A wavefunction <strong>of</strong> N bosons is totally symmetric and thus satisfies<br />
ψ(P 1, P 2, · · · , P N) = ψ(1, 2, · · · , N) (8.9)<br />
where (P 1, P 2, · · · , P N) represents any permutation P <strong>of</strong> the set (1, 2, · · · , N). A wavefunction<br />
<strong>of</strong> N fermions is totally antisymmetric and thus satisfies<br />
ψ(P 1, P 2, · · · , P N) = (−1) P ψ(1, 2, · · · , N). (8.10)<br />
Here, (−1)^P denotes the sign or parity of the permutation, where P is the number of binary transpositions which brings the permutation (P1, P2, ...) back to its original form (1, 2, 3, ...).
For example: what is the parity <strong>of</strong> the permutation (4,3,5,2,1)? A sequence <strong>of</strong> binary transpositions<br />
is<br />
(4, 3, 5, 2, 1) → (2, 3, 5, 4, 1) → (3, 2, 5, 4, 1) → (5, 2, 3, 4, 1) → (1, 2, 3, 4, 5) (8.11)<br />
So P = 4. Thus, for a system <strong>of</strong> 5 fermions<br />
ψ(4, 3, 5, 2, 1) = ψ(1, 2, 3, 4, 5) (8.12)<br />
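A small helper (my addition, not from the notes) that counts binary transpositions by sorting reproduces P = 4, and hence an even parity, for the permutation above.

def parity(perm):
    """Count binary transpositions in one sorting sequence of perm (labels 1..N)."""
    p = list(perm)
    n_swaps = 0
    for i in range(len(p)):
        while p[i] != i + 1:
            j = p[i] - 1
            p[i], p[j] = p[j], p[i]   # one binary transposition
            n_swaps += 1
    return n_swaps

P = parity((4, 3, 5, 2, 1))
print(P, (-1)**P)   # 4 transpositions, sign +1, so psi(4,3,5,2,1) = psi(1,2,3,4,5)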
In cases where we want to develop the many-body theory for both Fermions and Bosons<br />
simultaneously, we will adopt the notation that ζ = ±1 and any wavefunction can be written as<br />
where ζ = −1 for fermions and +1 for bosons.<br />
ψ(P 1, P 2, · · · , P N) = (ζ) P ψ(1, 2, · · · , N). (8.13)<br />
While these symmetry requirements are observed in nature, they can also be derived in the context of quantum field theory: given general assumptions of locality, causality and Lorentz invariance, particles with half-integer spin are fermions and those with integer spin are bosons. Some examples of bosons are photons, pions, mesons, gluons and the ⁴He atom. Some examples of fermions are protons, electrons, neutrons, muons, neutrinos, quarks, and the ³He atom. Composite particles composed of any number of bosons and even or odd numbers of fermions behave as bosons or fermions, respectively, at temperatures low compared to their binding energy. An example of this is superconductivity, where electron-phonon coupling induces the pairing of electrons (Cooper pairs) which form a Bose condensate.
Now, consider what happens if I place two fermion particles in the same state:<br />
|ψ(1, 2)〉 = |α(1)α(2)) (8.14)<br />
where α(1) is a state with the “spin up” quantum number. This state must be an eigenstate <strong>of</strong><br />
the permutation operator with eigenvalue ζ = −1.<br />
P12|ψ(1, 2)〉 = −|α(2)α(1)) (8.15)<br />
However, |α(1)α(2)) = |α(2)α(1)), thus the wavefunction <strong>of</strong> the state must vanish everywhere.<br />
For the general case <strong>of</strong> a system with N particles, the normalized wavefunction is<br />
� �1/2 N1!N2!... �<br />
ψ =<br />
ψp1(1)ψp2(2) · · · ψpN(N) (8.16)<br />
N!<br />
where the sum is over all permutations <strong>of</strong> different p1, p2... and the numbers Ni indicate how<br />
many <strong>of</strong> these have the same value (i.e. how many particles are in each state) with � Ni = N.<br />
For a system <strong>of</strong> 2 fermions, the wavefunction is<br />
Thus, in the example above:<br />
Likewise,<br />
ψ(1, 2) = (ψp1(1)ψp2(2) − ψp1(2)ψp2(1))/ √ 2 (8.17)<br />
ψ(1, 2) = (α(1)α(2) − α(2)α(1))/ √ 2 = 0 (8.18)<br />
ψ(1, 2) = (β(1)α(2) − β(2)α(1))/ √ 2<br />
= (β(1)α(2) − P12β(1)α(2))/ √ 2<br />
= (β(1)α(2))/ √ 2 (8.19)<br />
We will write such symmetrized states using the curly brackets
|\psi\} = |a_1a_2\cdots a_N\} = \left(\frac{N_1!N_2!\cdots}{N!}\right)^{1/2}\sum\psi_{p_1}(1)\psi_{p_2}(2)\cdots\psi_{p_N}(N)   (8.20)
For the general case <strong>of</strong> N particles, the fully anti-symmetrized form <strong>of</strong> the wavefunction takes<br />
the form <strong>of</strong> a determinant<br />
ψ = 1<br />
�<br />
�<br />
�<br />
� φa(1) φa(1) φa(1) �<br />
�<br />
�<br />
�<br />
√ � φb(1) φb(2) φb(2) �<br />
(8.21)<br />
N! �<br />
�<br />
� φc(1) φc(3) φc(3) �<br />
224
where the columns represent the particles and the rows are the different states. The interchange<br />
<strong>of</strong> any two particles corresponds to the interchange <strong>of</strong> two columns, as a result, the determinant<br />
changes sign. Consequently, if two rows are identical, corresponding to two particles occupying<br />
the same state, the determinant vanishes.<br />
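The two properties just stated are easy to see numerically. The toy sketch below (my addition) builds the matrix M_{ij} = φ_i(x_j) for some made-up orbitals and particle coordinates: swapping two particles (columns) flips the sign of the determinant, and putting two particles in the same orbital (two identical rows) makes it vanish.

import numpy as np

rng = np.random.default_rng(0)
N = 3
x = rng.normal(size=N)                       # toy particle coordinates

def orbital(i, x):
    """Toy single-particle orbitals: a Gaussian times a power of x."""
    return x**i * np.exp(-x**2/2.0)

M = np.array([[orbital(i, x[j]) for j in range(N)] for i in range(N)])

Mswap = M[:, [1, 0, 2]]                       # exchange particles 1 and 2 (columns)
print(np.linalg.det(M), np.linalg.det(Mswap)) # equal magnitude, opposite sign

Msame = M.copy()
Msame[1, :] = Msame[0, :]                     # two particles in the same orbital
print(np.linalg.det(Msame))                   # ~ 0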
As another example, let's consider the possible states for the He atom ground state. Let's assume that the ground state wavefunction is the product of two single-particle hydrogenic 1s states with a spin wavefunction, written thus:
|\psi\rangle = |1s(1)\alpha(1), 1s(2)\beta(2))   (8.22)
Let's denote |α⟩ as the spin-up state and |β⟩ as the spin-down state. We have the following possible spin combinations:
α(1)α(2)   ↑↑   symmetric
α(1)β(2)   ↑↓   neither
β(1)α(2)   ↓↑   neither
β(1)β(2)   ↓↓   symmetric   (8.23)
The ↑↑ and the ↓↓ states are clearly symmetric w.r.t. particle exchange. However, note that the other two are neither symmetric nor anti-symmetric. Since we can construct linear combinations of these states, we can use the two allowed spin configurations
\frac{1}{\sqrt{2}}\big(|\alpha(1)\beta(2)) \pm |\beta(1)\alpha(2))\big)   (8.24)
to define combined spin states. Thus, the possible two-particle spin states are
α(1)α(2)   ↑↑   symmetric
β(1)β(2)   ↓↓   symmetric
(1/√2)(α(1)β(2) + β(1)α(2))   ↑↓ + ↓↑   symmetric
(1/√2)(α(1)β(2) − β(1)α(2))   ↑↓ − ↓↑   anti-symmetric   (8.25)
These spin states multiply the spatial state, and the full wavefunction must be anti-symmetric w.r.t. exchange. For example, for the ground state of the He atom, the zeroth-order spatial state is |1s(1)1s(2)). This is symmetric w.r.t. exchange. Thus, the full ground-state wavefunction must be the product
|\psi\rangle = |1s(1)1s(2))\,\frac{1}{\sqrt{2}}(\alpha(1)\beta(2) - \beta(1)\alpha(2))   (8.26)
The full state is an eigenstate <strong>of</strong> P12 with eigenvalue -1, which is correct for a system <strong>of</strong> fermions.<br />
What about the other states; where can we use them? What if we could construct a spatial wavefunction that was anti-symmetric w.r.t. particle exchange? Consider the first excited state of He. The electron configuration for this state is
|1s(1)2s(2))   (8.27)
However, we could have also written
|1s(2)2s(1))   (8.28)
Taking the symmetric and anti-symmetric combinations,
|\psi_\pm\rangle = \frac{1}{\sqrt{2}}\big(|1s(1)2s(2)) \pm |1s(2)2s(1))\big)   (8.29)
The + state is symmetric w.r.t. particle exchange. Thus, the full state (including spin) must be
|\psi_1\rangle = \frac{1}{2}\big(|1s(1)2s(2)) + |1s(2)2s(1))\big)(\alpha(1)\beta(2) - \beta(1)\alpha(2)).   (8.30)
The other three states must be
|\psi_2\rangle = \frac{1}{2}\big(|1s(1)2s(2)) - |1s(2)2s(1))\big)(\alpha(1)\beta(2) + \beta(1)\alpha(2))   (8.31)
|\psi_3\rangle = \frac{1}{\sqrt{2}}\big(|1s(1)2s(2)) - |1s(2)2s(1))\big)(\alpha(1)\alpha(2))   (8.32)
|\psi_4\rangle = \frac{1}{\sqrt{2}}\big(|1s(1)2s(2)) - |1s(2)2s(1))\big)(\beta(1)\beta(2)).   (8.33)
These states can also be constructed using the determinant wavefunction. For example, the ground state configuration is generated using
|\psi_g\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1)&1s(1)\beta(1)\\ 1s(2)\alpha(2)&1s(2)\beta(2)\end{vmatrix}   (8.34)
          = \frac{1}{\sqrt{2}}|1s(1)1s(2))\,[\alpha(1)\beta(2) - \alpha(2)\beta(1)]   (8.35)
Likewise, for the excited states we have 4 possible determinant states:
|\psi_1\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1)&2s(1)\alpha(1)\\ 1s(2)\alpha(2)&2s(2)\alpha(2)\end{vmatrix}\qquad
|\psi_2\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\alpha(1)&2s(1)\beta(1)\\ 1s(2)\alpha(2)&2s(2)\beta(2)\end{vmatrix}
|\psi_3\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\beta(1)&2s(1)\alpha(1)\\ 1s(2)\beta(2)&2s(2)\alpha(2)\end{vmatrix}\qquad
|\psi_4\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)\beta(1)&2s(1)\beta(1)\\ 1s(2)\beta(2)&2s(2)\beta(2)\end{vmatrix}   (8.36)
The |\psi_m\rangle defined above are related to the determinant states as follows:
|\psi_1\} = \frac{1}{\sqrt{2}}[1s(1)\alpha(1)2s(2)\alpha(2) - 1s(2)\alpha(2)2s(1)\alpha(1)]
          = \frac{1}{\sqrt{2}}[1s(1)2s(2) - 1s(2)2s(1)]\,\alpha(1)\alpha(2) = |\psi_3\rangle
|\psi_4\} = \frac{1}{\sqrt{2}}[1s(1)2s(2) - 1s(2)2s(1)]\,\beta(1)\beta(2) = |\psi_4\rangle   (8.37)
The remaining two must be constructed from linear combinations <strong>of</strong> the determinant states:<br />
|ψ2} = 1<br />
√ 2 [1s(1)α(1)2s(2)β(2) − 1s(2)α(2)2s(1)β(1)] (8.38)<br />
|ψ3} = 1<br />
√ 2 [1s(1)β(1)2s(2)α(2) − 1s(2)β(2)2s(1)α(1)] (8.39)<br />
|ψ2〉 = 1<br />
√ 2 [|ψ2} + |ψ3}] (8.40)<br />
|ψ1〉 = 1<br />
√ 2 [|ψ2} − |ψ3}] (8.41)<br />
When dealing with spin functions, a shorthand notation is often used to reduce the notation a bit. The notation
|1s\rangle \equiv |1s\,\alpha\rangle   (8.42)
is used to denote a spin-up state and
|\overline{1s}\rangle \equiv |1s\,\beta\rangle   (8.43)
a spin-down state. Using these, the above determinant functions can be written as
|\psi_1\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)&2s(1)\\ 1s(2)&2s(2)\end{vmatrix}   (8.44)
          = \frac{1}{\sqrt{2}}\Big[|1s(1)2s(2)) - |1s(2)2s(1))\Big]   (8.45)
|\psi_2\} = \frac{1}{\sqrt{2}}\begin{vmatrix}1s(1)&\overline{2s}(1)\\ 1s(2)&\overline{2s}(2)\end{vmatrix}   (8.46)
          = \frac{1}{\sqrt{2}}\Big[|1s(1)\overline{2s}(2)) - |1s(2)\overline{2s}(1))\Big]   (8.47)
The symmetrization principle for fermions is often expressed as the Pauli exclusion principle, which states: no two fermions can occupy the same state at the same time. This, as we all well know, gives rise to the periodic table and is the basis of all atomic structure.
8.2 Matrix Elements <strong>of</strong> Electronic Operators<br />
We can write the Hamiltonian for an N-body problem as follows. Say our N-body Hamiltonian consists of a sum of N single-particle Hamiltonians, H_o, and two-body interactions:
H = \sum_n^N H_o^{(n)} + \sum_{i\neq j}V(r_i - r_j)   (8.48)
Using the field operators, the expectation values of the H_o terms are
\sum_\alpha\int dx\,\phi_\alpha^*(x)\,H_o\,\phi_\alpha(x) = \sum_\alpha n_\alpha W_\alpha   (8.49)
since φ_α(x) is an eigenstate of H_o with eigenvalue W_α, with n_α the occupation of that state. Here α should be regarded as a collection of all quantum numbers used to describe the H_o eigenstate.
For example, say we want a zeroth-order approximation to the ground state of He and we use hydrogenic functions,
|\psi_o\rangle = |1s(1)1s(2)).   (8.50)
This state is symmetric w.r.t. electron exchange, so the spin component must be anti-symmetric. For now this will not contribute to the calculation. The zeroth-order Schrödinger equation is
(H_o(1) + H_o(2))|\psi_o\rangle = E_o|\psi_o\rangle   (8.51)
where H_o(1) is the zeroth-order Hamiltonian for particle 1. This is easy to solve:
(H_o(1) + H_o(2))|\psi_o\rangle = -Z^2|\psi_o\rangle   (8.52)
Z = 2, so the zeroth-order guess for the He ground state energy is −4 (in Hartree units; recall 1 hartree = 27.2 eV). The correct ground state energy is more like −2.90 hartree. Let's now evaluate to first order in perturbation theory the direct Coulombic interaction between the electrons,
E^{(1)} = (1s(1)1s(2)|\frac{1}{r_{12}}|1s(1)1s(2))   (8.53)
The spatial wavefunction for the |1s(1)1s(2)) state is the product of two hydrogenic functions,
(r_1r_2|1s(1)1s(2)) = \frac{Z^3}{\pi}e^{-Z(r_1+r_2)}.   (8.54)
Therefore,
E^{(1)} = \int dV_1\int dV_2\,\frac{1}{r_{12}}\left(\frac{Z^3}{\pi}\right)^2 e^{-2Z(r_1+r_2)}   (8.55)
where dV = 4πr²dr is the volume element. This integral is most readily solved if we instead write it as the energy of a charge distribution ρ(2) = |ψ(2)|² in the field of a charge distribution ρ(1) = |ψ(1)|² for r_2 > r_1:
E^{(1)} = 2\left(\frac{Z^3}{\pi}\right)^2\int dV_2\,e^{-2Zr_2}\,\frac{1}{r_2}\int_0^{r_2}dV_1\,e^{-2Zr_1}   (8.56)
The factor of 2 takes into account that we get the same result when r_1 > r_2. Doing the integral (see subsection 8.3.1),
E^{(1)} = \frac{5Z}{8}   (8.57)
or 1.25 hartree. Thus, for the He atom
E = E_o + E^{(1)} = -Z^2 + \frac{5Z}{8} = -4 + 1.25 = -2.75   (8.58)
Not too bad; the actual result is −2.90 hartree.
What we have not taken into consideration is that there is an additional contribution to the energy from the exchange interaction. In other words, we need to compute the Coulomb integral exchanging electron 1 with electron 2. We really need to compute the perturbation energy with respect to the determinant wavefunction:
E^{(1)} = \{1s(1)\overline{1s}(2)|v|1s(1)\overline{1s}(2)\}   (8.59)
        = \frac{1}{2}\Big[(1s(1)\overline{1s}(2)| - (\overline{1s}(1)1s(2)|\Big]\,v\,\Big[|1s(1)\overline{1s}(2)) - |\overline{1s}(1)1s(2))\Big]   (8.60)
        = \frac{1}{2}\Big[(1s\overline{1s}|v|1s\overline{1s}) - (1s\overline{1s}|v|\overline{1s}1s) - (\overline{1s}1s|v|1s\overline{1s}) + (\overline{1s}1s|v|\overline{1s}1s)\Big]   (8.61)
        = (1s(1)\overline{1s}(2)|v|1s(1)\overline{1s}(2)) - (1s(1)\overline{1s}(2)|v|\overline{1s}(1)1s(2))   (8.62)
However, the potential does not depend upon spin. Thus any matrix element which exchanges a spin must vanish, and so we have no exchange contribution to the energy here. We can in fact move on to higher orders in perturbation theory and solve accordingly.
8.3 The Hartree-Fock Approximation<br />
We just saw how to estimate the ground state energy <strong>of</strong> a system in the presence <strong>of</strong> interactions<br />
using first order perturbation theory. To get this result, we assumed that the zeroth-order<br />
wavefunctions were pretty good and calculated our results using these wavefunctions. Of course,<br />
the true ground state energy is obtained by summing over all diagrams in the perturbation<br />
expansion:<br />
E - W = \langle\psi_o|E - W|\psi_o\rangle   (8.63)
The second-order term contains explicit two-body correlation interactions, i.e. the motion of one electron affects the motion of the other electron.
Let's make a rather bold assumption that we can exclude connected two-body interactions and treat the electrons as independent but moving in an averaged field of the other electrons. First we make some standard definitions,
J_{\alpha\beta} = (\alpha\beta|v|\alpha\beta)   (8.64)
K_{\alpha\beta} = (\alpha\beta|v|\beta\alpha)   (8.65)
and write that
E_{HF} - W = \sum_{\alpha\beta}(2J_{\alpha\beta} - K_{\alpha\beta})   (8.66)
where the sum is over all occupied (n/2) spatial orbitals of an n-electron system. The J integral is the direct interaction (Coulomb integral) and K is the exchange interaction.
We now look for a set of orbitals which minimize the variational integral E_{HF}, subject to the constraint that the wavefunction solutions be orthogonal. One can show (rather straightforwardly) that if we write the Hamiltonian as a functional of the electron density ρ, where ρ(1,2) = φ(1)*φ(2),
H[\rho] = H_o[1,2] + \int d3\,d4\,\{12|v|34\}\rho(3,4)   (8.67)
        = H_o[1] + 2J(1) - K(1)   (8.68)
with
J(1) = \int d2\,|\beta(2)|^2\,v(12)   (8.69)
K(1) = \int d2\,\alpha(2)^*\beta(2)\,v(12)   (8.70)
then the Hartree-Fock wavefunctions satisfy
H[\rho]\psi(1) = E(1)\psi(1)   (8.71)
In other words, we diagonalize the Fock-matrix H[ρ] given an initial guess <strong>of</strong> the electron density.<br />
This gives a new set <strong>of</strong> electron orbitals, which we use to construct a new guess for the electron<br />
densities. This procedure is iterated to convergence.<br />
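Schematically, the self-consistent-field cycle just described looks like the sketch below (my addition). It is deliberately generic: `build_fock` stands for assembling H_o + 2J − K from the current density, `h_core` and `n_occ` are assumed inputs rather than any specific package's API, and an orthonormal basis is assumed so that diagonalizing the Fock matrix is an ordinary eigenvalue problem.

import numpy as np

def scf_loop(h_core, build_fock, n_occ, max_iter=50, tol=1e-8):
    """Generic closed-shell SCF sketch: iterate density -> Fock -> orbitals."""
    n = h_core.shape[0]
    density = np.zeros((n, n))                 # initial guess: zero density
    energy_old = 0.0
    for _ in range(max_iter):
        fock = build_fock(h_core, density)     # H[rho] = Ho + 2J - K
        eps, orbitals = np.linalg.eigh(fock)   # diagonalize the Fock matrix
        occ = orbitals[:, :n_occ]              # fill the lowest n_occ orbitals
        density = occ @ occ.T                  # new density from new orbitals
        energy = np.sum(eps[:n_occ])           # crude orbital-energy sum
        if abs(energy - energy_old) < tol:     # converged?
            break
        energy_old = energy
    return eps, orbitals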
8.3.1 Two electron integrals<br />
One <strong>of</strong> the difficulties encountered is in evaluating the Jαβ and Kαβ two electron integrals. Let’s<br />
take the case that the φα and φβ orbitals are centred on the same atom. Two centred terms can<br />
be evaluated, but the analysis is more difficult. Writing the J integral in Eq. 8.65 out explicitly<br />
we have:<br />
Jαβ = (φα(1)φβ(2)|v(12)|φα(1)φβ(1))<br />
� �<br />
= φ ∗ α(1)φ ∗ β(2)v(1 − 2)φα(1)φ 2 �<br />
βd1d2<br />
= φ ∗ �<br />
α(r1)φα(r1) φ ∗ β(r2)φβ(r2)v(r1 − r2)dr2dr1<br />
(8.72)<br />
If we can factor the single particle orbitals as φα(r, θ, φ) = Rnl(r)Ylm(θ, φ), then we can separate<br />
the radial and angular integrals. Before we do that, we have to resolve the pair-interaction into<br />
radial and angular components as well. For the Coulomb potential, we can use the expansion<br />
1<br />
|r1 − r2|<br />
= �<br />
+l<br />
�<br />
l=0 m=−l<br />
4π<br />
2l + 1<br />
r l <<br />
r l+1<br />
<<br />
Y ∗<br />
lm(θ1φ1)Ylm(θ2φ2) (8.73)<br />
where the notation r< denotes which is the smaller <strong>of</strong> r1 and r2 and r> the greater. For the<br />
hydrogen 1s orbitals (normalized and in atomic units)<br />
φ1s =<br />
�<br />
Z 3<br />
π e−Zr , (8.74)<br />
230
the J integral for the 1s1s configuration is
J = \frac{Z^6}{\pi^2}\int d^3r_1\int d^3r_2\,e^{-2Zr_1}e^{-2Zr_2}\,\frac{1}{r_{12}}.   (8.75)
Inserting the expansion and using Y_{00} = 1/\sqrt{4\pi},
J = 16Z^6\sum_l\sum_{m=-l}^{l}\frac{1}{2l+1}\int_0^\infty\!\!\int_0^\infty e^{-2Zr_1}e^{-2Zr_2}\,\frac{r_<^l}{r_>^{l+1}}\,r_1^2dr_1\,r_2^2dr_2\;\times\int Y_{lm}^*(1)Y_{00}(1)\,d\Omega_1\int Y_{lm}(2)Y_{00}^*(2)\,d\Omega_2.   (8.76)
The last two integrals are easy due to the orthogonality <strong>of</strong> the spherical harmonics. This leaves<br />
the double integral,<br />
J = 16Z 6<br />
� ∞ � ∞<br />
e −2Zr1 −2Zr2 e 1<br />
r 2 1dr1r 2 2dr2<br />
(8.77)<br />
0<br />
We evaluate Eq. 8.77 by splitting the r2 integration at r1 (so that r> = r1 in the first piece and r> = r2 in the second),

J = 16 Z^6 [ ∫_0^∞ e^{−2Z r1} r1 ( ∫_0^{r1} e^{−2Z r2} r2² dr2 ) dr1 + ∫_0^∞ e^{−2Z r1} r1² ( ∫_{r1}^∞ e^{−2Z r2} r2 dr2 ) dr1 ]. (8.78)
In this case the integrals are easy to evaluate and we obtain

J = (5/8) Z. (8.79)

8.3.2 Koopmans' Theorem
Koopmans' theorem states that if the single-particle energies are not affected by adding or removing a single electron, then the ionization energy is the negative of the energy of the highest occupied single-particle orbital (the HOMO) and the electron affinity is the negative of the energy of the lowest unoccupied orbital (the LUMO). For Hartree-Fock orbitals the theorem holds exactly in this frozen-orbital sense, and in practice errors from orbital relaxation and from the neglect of correlation tend to cancel. For small molecules and atoms the theorem can fail badly since correlation plays a significant role. On the other hand, for large polyatomic molecules, Koopmans' theorem is extremely useful in predicting ionization energies and spectra. From a physical point of view, the theorem is never exact since it discounts relaxation of both the electrons and the nuclei.
8.4 Quantum Chemistry
Quantum chemical concepts play a crucial role in how we think about and describe chemical processes. In particular, the term quantum chemistry usually denotes the field of electronic structure theory. There is no possible way to cover this field to any depth in a single course and this one section will certainly not prepare anyone for doing research in quantum chemistry. The topic itself can be divided into two sub-fields:
• Method development: the development and implementation of new theories and computational strategies to take advantage of the increasing power of computational hardware (bigger, stronger, faster calculations).
• Application: the use of established methods for developing theoretical models of chemical processes.
Here we will go into a brief amount of detail on various levels of theory and their implementation in standard quantum chemical packages. For more in-depth coverage, refer to
1. Quantum Chemistry, Ira Levine. The updated version of this text has a nice overview of methods, basis sets, theories, and approaches for quantum chemistry.
2. Modern Quantum Chemistry, A. Szabo and N. S. Ostlund.
3. Ab Initio Molecular Orbital Theory, W. J. Hehre, L. Radom, P. v. R. Schleyer, and J. A. Pople.
4. Introduction to Quantum Mechanics in Chemistry, M. Ratner and G. Schatz.
8.4.1 The Born-Oppenheimer Approximation<br />
The fundamental approximation in quantum chemistry is the Born-Oppenheimer approximation we discussed earlier. The idea is that because the mass of an electron is roughly 10^−4 that of a typical nucleus or smaller, the motion of the nuclei can be effectively frozen and we can write an electronic Schrödinger equation in the field of fixed nuclei. If we write r for the electronic coordinates and R for the nuclear coordinates, the complete electronic/nuclear wavefunction becomes
Ψ(r, R) = ψ(r; R)χ(R) (8.80)<br />
where ψ(r; R) is the electronic part and χ(R) the nuclear part. The full Hamiltonian is

H = Tn + Te + Ven + Vnn + Vee (8.81)

where Tn is the nuclear kinetic energy, Te is the electronic kinetic energy, and the V's are the electron-nuclear, nuclear-nuclear, and electron-electron Coulomb potential interactions. We want Ψ to be a solution of the Schrödinger equation,

HΨ = (Tn + Te + Ven + Vnn + Vee)ψχ = Eψχ. (8.82)
So, we divide through by ψχ and take advantage of the fact that Te does not depend upon the nuclear component of the total wavefunction,

(1/ψχ) Tn(ψχ) + (1/ψ) Te ψ + Ven + Vnn + Vee = E.
On the other hand, Tn operates on both components, and involves terms which look like<br />
Tn ψχ = Σ_n −(1/2Mn) (ψ ∇n²χ + χ ∇n²ψ + 2 ∇nχ · ∇nψ)
where the sum is over the nuclei and ∇n is the gradient with respect to nuclear position n. The crudest approximation we can make is to neglect the last two terms, namely those which involve derivatives of the electronic wavefunction with respect to the nuclear coordinates. When we neglect those terms, the Schrödinger equation is almost separable into nuclear and electronic parts,
(1/χ)(Tn + Vn)χ + (1/ψ)(Te + Ven + Vee)ψ = E. (8.83)
The equation is not really separable, since the second term depends upon the nuclear position. What we do is say that the electronic part depends parametrically upon the nuclear position, giving an electronic energy Eel(R) that is a function of R:
(Te + Ven(R) + Vee)ψ(r; R) = Eel(R)ψ(r; R). (8.84)<br />
The function Eel depends upon the particular electronic state. Since it is an eigenvalue of Eq. 8.84, there may be a manifold of these functions stacked upon each other.
Turning towards the nuclear part, we have the nuclear Schrödinger equation<br />
(Tn + Vn(R) + Eel^(α)(R)) χ = W χ. (8.85)

Here, the potential governing the nuclear motion contains the electronic contribution Eel^(α)(R), which is the αth eigenvalue of Eq. 8.84, and the nuclear repulsion energy Vn. Taken together, these form a potential energy surface

V(R) = Vn + Eel^(α)(R)
for the nuclear motion. Thus, the electronic energy serves as the interaction potential between<br />
the nuclei and the motion <strong>of</strong> the nuclei occurs on an energy surface generated by the electronic<br />
state.<br />
Exercise 8.1 Derive the diagonal non-adiabatic correction term ⟨ψα|Tn|ψα⟩ to produce a slightly more accurate potential energy surface,

V(R) = Vn + Eel^(α)(R) + ⟨ψα|Tn|ψα⟩.
The BO approximation breaks down when the nuclear motion becomes very fast, so that the electronic states become coupled via the nuclear kinetic energy operator. (One can see by inspection that the electronic states are not eigenstates of the nuclear kinetic energy, since Hele does not commute with ∇N².)
Let's assume that the nuclear position is a time-dependent quantity, R(t). Now, take the time derivative of |Ψn^e⟩:

(d/dt)|Ψn^e(R(t))⟩ = (∂R(t)/∂t)(∂/∂R)|Ψn^e(R)⟩ + (∂/∂t)|Ψn^e(R(t))⟩ (8.86)
Now, multiply on the left by ⟨Ψm^e(R(t))| where m ≠ n:

⟨Ψm^e(R(t))| (d/dt) |Ψn^e(R(t))⟩ = (∂R(t)/∂t) ⟨Ψm^e(R(t))| (∂/∂R) |Ψn^e(R)⟩ (8.87)
Cleaning things up,

⟨Ψm^e(R(t))| (d/dt) |Ψn^e(R(t))⟩ = Ṙ(t) · ⟨Ψm^e(R(t))| ∇N |Ψn^e(R)⟩, (8.88)
we see that the nuclear motion couples electronic states when the nuclear velocity vector ˙ R is<br />
large in a direction in which the electronic wavefunction changes most rapidly with R.<br />
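A minimal numerical illustration of this coupling (assuming NumPy is available; the two-state model and the constants k, V, h are invented for illustration only) builds the adiabatic states of a simple avoided crossing and evaluates ⟨ψ1|∂/∂R|ψ2⟩ by finite differences:

import numpy as np

# Two-state avoided crossing: diabatic energies +/- k*R, constant coupling V.
k, V, h = 1.0, 0.1, 1e-4

def adiabatic_states(R):
    H = np.array([[ k * R,  V],
                  [ V,     -k * R]])
    E, C = np.linalg.eigh(H)
    # fix the arbitrary sign of each eigenvector so the R-dependence is smooth
    for i in range(2):
        if C[0, i] < 0:
            C[:, i] *= -1
    return E, C

R = 0.05
_, C0 = adiabatic_states(R)
_, Cp = adiabatic_states(R + h)
_, Cm = adiabatic_states(R - h)
d12 = C0[:, 0] @ (Cp[:, 1] - Cm[:, 1]) / (2 * h)   # <psi_1| d/dR |psi_2>
print(d12)

The coupling is largest near R = 0, where the adiabatic wavefunctions change character most rapidly, which is exactly the regime in which the Born-Oppenheimer separation becomes suspect.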
For diatomic molecules, we can separate out the center <strong>of</strong> mass motion and write m as the<br />
reduced mass<br />
m = m1 m2 / (m1 + m2) (8.89)

and write the nuclear Schrödinger equation (in one dimension)

[ −(ℏ²/2m) ∂²/∂r² + V(r) ] φ(r) = E φ(r) (8.90)
where V(r) is the adiabatic or Born-Oppenheimer energy surface discussed above. Since V(r) is a smooth function of r, we can make a Taylor expansion about its minimum at re,
V(r) = −Vo + (1/2) V''(re)(r − re)² + (1/6) V'''(re)(r − re)³ + ⋯ (8.91)
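Truncating Eq. 8.91 at second order gives a harmonic oscillator with frequency ω = √(V''(re)/m). A minimal sketch of extracting ω numerically from a sampled surface (assuming NumPy; the Morse curve and the values of De, a, re, m are illustrative stand-ins, not data from these notes):

import numpy as np

# Morse potential used purely as a stand-in for a Born-Oppenheimer surface V(r).
De, a, re, m = 0.10, 1.0, 2.0, 1836.0   # invented values, atomic units

def V(r):
    return De * (1 - np.exp(-a * (r - re)))**2 - De   # minimum value -De at r = re

h = 1e-3
Vpp = (V(re + h) - 2 * V(re) + V(re - h)) / h**2      # finite-difference V''(re)
omega = np.sqrt(Vpp / m)                              # harmonic frequency from Eq. 8.91
print(omega, a * np.sqrt(2 * De / m))                 # compare with the exact Morse value

For a real molecule one would difference the computed Born-Oppenheimer energies themselves in the same way.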
As an example of molecular bonding, and of how one computes the structure and dynamics of a simple molecule, we turn to the simplest molecular ion, H2+. For a fixed H−H distance R, the electronic Schrödinger equation reads (in atomic units)

[ −(1/2)∇² − 1/r1 − 1/r2 ] ψ(r1, r2) = E ψ(r1, r2), (8.92)

where r1 and r2 are the distances from the electron to the two nuclei.
The problem can be solved exactly in elliptical coordinates, but the derivation of the result is not terribly enlightening. What we will do instead is use a variational approach, combining hydrogenic 1s orbitals centered on each H nucleus. This procedure is termed the Linear Combination of Atomic Orbitals (LCAO) and is the underlying idea behind most quantum chemical calculations. The basis functions are the hydrogen 1s orbitals. The rationale for this basis is that as R becomes large, we have an H atom and a proton, a system we can handle pretty easily. Since the electron can be on either nucleus, we take a linear combination of the two choices.
|ψ〉 = c1|φ1〉 + c2|φ2〉 (8.93)<br />
We then use the variational procedure to find the lowest energy subject to the constraint that ⟨ψ|ψ⟩ = 1. This is an eigenvalue problem which we can write as
Σ_{j=1,2} ⟨φi|H|φj⟩ cj = E Σ_{j=1,2} ⟨φi|φj⟩ cj, (8.94)

or in matrix form

[ H11  H12 ] [ c1 ]       [ S11  S12 ] [ c1 ]
[ H21  H22 ] [ c2 ]  =  E [ S21  S22 ] [ c2 ]   (8.95)
where Sij is the overlap between the two basis functions,

S12 = ⟨φ1|φ2⟩ = ∫ d³r φ1(r)* φ2(r),
and assuming the basis functions are normalized to begin with, S11 = S22 = 1. For the hydrogenic orbitals

ψ1(r1) = e^{−r1}/√π

and

ψ2(r2) = e^{−r2}/√π.

A simple calculation yields¹

S = e^{−R} (1 + R + R²/3).
The matrix elements of the Hamiltonian also need to be computed. The diagonal terms are easy and correspond to the hydrogen 1s energy plus the internuclear repulsion plus the Coulomb interaction between nucleus 2 and the electron distribution about nucleus 1,
H11 = −EI + 1/R − J11 (8.96)

where

J11 = ⟨φ1| 1/r2 |φ1⟩ (8.97)
    = ∫ d³r |φ1(r)|² / r2. (8.98)
This too, we evaluate in elliptic coordinates and the result reads<br />
J11 = (2 EI / R) [ 1 − e^{−2R} (1 + R) ]. (8.99)
1 To derive this result, you need first to transform to elliptic coordinates u, v where

r1 = (u + v) R/2
r2 = (u − v) R/2;

the volume element is then d³r = R³(u² − v²)/8 du dv dφ, where φ is the azimuthal angle for rotation about the H−H axis. The resulting integral reads

S = (1/π) ∫_1^∞ du ∫_{−1}^{+1} dv ∫_0^{2π} dφ (R³/8)(u² − v²) e^{−uR}.
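The footnote integral is easy to check symbolically. A minimal sketch, assuming SymPy is available:

import sympy as sp

u, v, phi, R = sp.symbols('u v phi R', positive=True)

# Overlap integral in elliptic coordinates, as written in the footnote above
integrand = sp.Rational(1, 8) * R**3 * (u**2 - v**2) * sp.exp(-u * R) / sp.pi
S = sp.integrate(integrand, (phi, 0, 2 * sp.pi), (v, -1, 1), (u, 1, sp.oo))
print(sp.simplify(S))   # expect exp(-R)*(1 + R + R**2/3), possibly written as (R**2 + 3*R + 3)*exp(-R)/3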
[Figure 8.1: Various contributions to the H2+ Hamiltonian: S, J, and A plotted as functions of R (bohr).]
By symmetry, H11 = H22 and we have the diagonal elements.<br />
We can think of J as a modification of the nuclear repulsion due to screening by the electron distribution about one of the atoms. |φ1(r)|² is the charge density of the hydrogen 1s orbital and is spherically symmetric about nucleus 1. For large internuclear distances,
J ≈ 1/R (8.100)

and the positive charge of nucleus 1 is completely counterbalanced by the negative charge distribution about it. At shorter range,

1/R − J > 0. (8.101)
However, screening alone cannot explain a chemical bond, since J does not go through a minimum at some distance R. Figure 8.1 shows the variation of J, H11, and S as functions of R.
We now look at the off-diagonal elements, H12 = H21. Written explicitly,

H12 = ⟨φ1|h(2)|φ2⟩ + (1/R) S12 − ⟨φ1| 1/r1 |φ2⟩
    = (−EI + 1/R) S12 − A (8.102)

where

A = ⟨φ1| 1/r1 |φ2⟩ = ∫ d³r φ1(r) (1/r1) φ2(r), (8.103)

which can also be evaluated using elliptical coordinates:

A = R² EI ∫_1^∞ du 2u e^{−uR} (8.104)
  = 2 EI e^{−R} (1 + R). (8.105)
Exercise 8.2 Verify the expressions for J, S, and A by performing the transformation to elliptic coordinates and carrying out the integrations.
A is termed the resonance integral and gives the energy for moving an electron from one nucleus to the other. When H12 ≠ 0, there is a finite probability for the electron to hop from one site to the other and back. This oscillation results in the electron being delocalized between the nuclei and is the primary contribution to the formation of a chemical bond.
To wrap this up, the terms in the Hamiltonian are

S11 = S22 = 1 (8.106)
S12 = S21 = S (8.107)
H11 = H22 = −EI + 1/R − J (8.108)
H12 = H21 = (−EI + 1/R) S − A (8.109)
Since EI appears in each, we use it as our energy scale and set

E = ε EI (8.110)
A = α EI (8.111)
J = γ EI (8.112)

so that the secular equation now reads

| −1 + 2/R − γ − ε              (−1 + 2/R) S − α − ε S |
| (−1 + 2/R) S − α − ε S        −1 + 2/R − γ − ε        |  = 0. (8.113)

Solving the secular equation yields two eigenvalues:

ε± = −1 + 2/R − (γ ∓ α)/(1 ∓ S). (8.114)
For large internuclear separations, ε± ≈ −1, i.e. E ≈ −EI, which is the ground-state energy of an isolated H atom (EI = 1/2 hartree). Choosing this as the energy origin, and putting it all back together:
E± = EI [ 2/R − ( 2(1 − e^{−2R}(1 + R))/R ∓ 2 e^{−R}(1 + R) ) / ( 1 ∓ e^{−R}(1 + R + R²/3) ) ] (8.115)
Plots of these two energy surfaces are shown in Fig. 8.2. The energy minimum for the lower state (E−) is E− = −0.064831 hartree at Req = 2.49283 ao (or −0.5648 hartree if we do not set our zero to be the dissociation limit). These results are qualitatively correct but quantitatively well off the mark: the experimental values are De = 0.1025 hartree and Re = 2.00 ao. The results can be improved upon by using better basis functions, treating the nuclear charge as a variational parameter, and so forth. The important point is that even at this simple level of theory we can get chemical bonds and equilibrium geometries.
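The numbers quoted above are easy to reproduce. The sketch below (assuming NumPy and SciPy are available) builds the 2x2 H and S matrices from Eqs. 8.99 and 8.105-8.109, solves the generalized eigenvalue problem of Eq. 8.95 at each R, and minimizes the lower root:

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

EI = 0.5   # hydrogen 1s ionization energy, hartree (atomic units)

def lcao_energies(R):
    S   = np.exp(-R) * (1 + R + R**2 / 3)
    J   = (2 * EI / R) * (1 - np.exp(-2 * R) * (1 + R))
    A   = 2 * EI * np.exp(-R) * (1 + R)
    H11 = -EI + 1 / R - J
    H12 = (-EI + 1 / R) * S - A
    H    = np.array([[H11, H12], [H12, H11]])
    Smat = np.array([[1.0, S], [S, 1.0]])
    return eigh(H, Smat, eigvals_only=True)      # generalized problem H c = E S c

res = minimize_scalar(lambda R: lcao_energies(R)[0], bounds=(1.0, 6.0), method="bounded")
print("R_eq ~", res.x, "bohr;  E- ~", res.fun + EI, "hartree relative to H + H+")

Running this recovers Req of about 2.49 bohr and a well depth of about 0.065 hartree relative to H + H+, in agreement with the values quoted above.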
For the orbitals, we have a symmetric and anti-symmetric combination <strong>of</strong> the two 1s orbitals.<br />
ψ± = N±(φ1 ± φ2). (8.116)<br />
[Figure 8.2: Potential energy surfaces E+ and E− (hartree) for the H2+ molecular ion as functions of R (bohr).]
Figure 8.3: Three dimensional representations <strong>of</strong> ψ+ and ψ− for the H + 2 molecular ion generated<br />
using the Spartan ab initio quantum chemistry program.<br />
In Fig. 8.3 we show the orbitals from an ab initio calculation using the 6-31* basis set. The first figure corresponds to the occupied ground-state orbital, which forms a σ bond between the two H atoms. The second shows the anti-bonding σ* orbital formed by the anti-symmetric combination of the 1s basis functions. The red and blue mesh indicates the phase of the wavefunction.
Appendix: Creation and Annihilation Operators<br />
Creation and annihilation operators are a convenient way to represent many-particle states and many-particle operators. Recall from the harmonic oscillator that a creation operator, a†, acts on the ground state to produce one quantum of excitation in the system. We can generalize this to many particles by saying that a†λ creates a particle in state λ, e.g.

a†λ |λ1...λN⟩ = √(nλ + 1) |λ λ1...λN⟩ (8.117)
where nλ is the occupation of the |λ⟩ state. Physically, the operator a†λ creates a particle in state |λ⟩ and symmetrizes or antisymmetrizes the state as needed. For bosons the case is simple, since any number of particles can occupy a given state. For fermions, the operation takes the form:
a†λ |λ1...λN⟩ = { |λ λ1...λN⟩ if the state |λ⟩ is not present in |λ1...λN⟩ ; 0 otherwise } (8.118)

The anti-symmetrized basis vectors can be constructed using the a†j operators as

|λ1...λN} = a†λ1 a†λ2 ··· a†λN |0⟩ (8.119)
Note that when we write the |·} states we do not need to keep track of the normalization factors. We do need to keep track of them when we use the |·⟩ or |·) vectors.
|λ1...λN} = a†λ1 a†λ2 ··· a†λN |0⟩ (8.120)

|λ1...λN⟩ = (1/√(Πλ nλ!)) a†λ1 a†λ2 ··· a†λN |0⟩ (8.121)
The symmetry requirement places certain requirements on the commutation of the creation operators. For example,

a†μ a†ν |0⟩ = |μν} (8.122)
           = ζ |νμ} (8.123)
           = ζ a†ν a†μ |0⟩. (8.124)

Thus,

a†μ a†ν − ζ a†ν a†μ = 0. (8.125)

In other words, for ζ = 1 (bosons) the two operators commute, while for ζ = −1 (fermions) the operators anti-commute.
We can prove similar results for the adjoint of the a† operator. In short, for bosons we have a commutation relation between a†λ and aλ:

[aμ, a†ν] = δμν, (8.126)

while for fermions we have an anti-commutation relation:

{aμ, a†ν} = aμ a†ν + a†ν aμ = δμν. (8.127)
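These algebraic relations are easy to verify with explicit matrices. A minimal sketch (assuming NumPy) represents two fermionic modes in the four-dimensional occupation basis, using a parity string so that the cross-mode anticommutators come out right; this Jordan-Wigner-style construction is one simple choice, not the only one:

import numpy as np

a  = np.array([[0.0, 1.0],
               [0.0, 0.0]])        # single-mode annihilation in the {|0>, |1>} basis
I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])          # parity operator

a1 = np.kron(a, I2)                # annihilates mode 1
a2 = np.kron(Z, a)                 # annihilates mode 2 (parity string keeps the sign right)

def anti(x, y):
    return x @ y + y @ x

print(np.allclose(anti(a1, a1.T.conj()), np.eye(4)))          # {a1, a1+} = 1
print(np.allclose(anti(a2, a2.T.conj()), np.eye(4)))          # {a2, a2+} = 1
print(np.allclose(anti(a1, a2), np.zeros((4, 4))))            # {a1, a2} = 0
print(np.allclose(anti(a1, a2.T.conj()), np.zeros((4, 4))))   # {a1, a2+} = 0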
Finally, we define a set of field operators which are related to the creation/annihilation operators as

ψ̂†(x) = Σα ⟨α|x⟩ a†α = Σα φα(x)* a†α (8.128)

ψ̂(x) = Σα ⟨x|α⟩ aα = Σα φα(x) aα (8.129)
These particular operators are useful in deriving various tight-binding approximations.<br />
Say that a†α places a particle in state α and aα deletes a particle from state α. The occupation of state |α⟩ is thus

n̂α |α⟩ = a†α aα |α⟩ = nα |α⟩. (8.130)

If the state is unoccupied, nα = 0; if the state is occupied, the first operation removes the particle and the second puts it back, and nα = 1.
One-body operators can be evaluated as

U = Σα nα Uα = Σα ⟨α|U|α⟩ a†α aα. (8.131)

Likewise,

Ûα = ⟨α|U|α⟩ n̂α. (8.132)

Two-body operators are written as

V̂ = (1/2) Σ_{αβγδ} [ ∫dx ∫dy φα(x)* φβ(y)* V(x − y) φγ(x) φδ(y) ] a†α a†β aγ aδ (8.133)
   = (1/2) Σ_{αβγδ} (αβ|v|γδ) a†α a†β aγ aδ. (8.134)

Occasionally, it is useful to write the symmetrized variant of this operator,

V̂ = (1/4) Σ_{αβγδ} (αβ|v [ |γδ) − |δγ) ]) a†α a†β aγ aδ = (1/4) Σ_{αβγδ} {αβ|v|γδ} a†α a†β aγ aδ. (8.135)
8.5 Problems and Exercises<br />
Exercise 8.3 Consider the case of two identical particles with positions r1 and r2 trapped in a central harmonic well with a mutually repulsive harmonic interaction. The Hamiltonian for this case can be written as

H = −(ℏ²/2m)(∇1² + ∇2²) + (1/2) m ω² (r1² + r2²) − (λ/4) m ω² |r1 − r2|² (8.136)

where λ is a dimensionless scaling factor. This can be a model for two bosons or fermions trapped in an optical trap, where λmω² simply tunes the s-wave scattering cross-section for the two atoms.
1. Show that upon an appropriate change of variables

u = (r1 + r2)/√2 (8.137)
v = (r1 − r2)/√2 (8.138)

the Hamiltonian simplifies to two separable three-dimensional harmonic oscillators:

H = [ −(ℏ²/2m) ∇u² + (1/2) m ω² u² ] + [ −(ℏ²/2m) ∇v² + (1/2)(1 − λ) m ω² v² ] (8.139)

2. What is the exact ground state of this system?
3. Assuming the particles are spin-1/2 fermions, what are the lowest energy triplet and singlet states for this system?

4. What is the average distance of separation between the two particles in both the singlet and triplet configurations?
5. Now, solve this problem via the variational approach by taking your trial wavefunction to be a Slater determinant of the two lowest single-particle states:

ψ(r1, r2) = | α(1)  β(2) |
            | φ(1)  φ(2) |   (8.140)

where φ(1) and φ(2) are the lowest energy 3D harmonic oscillator states, modified so that we can take the width as a variational parameter:

φ(r) = N(ζ) exp(−r²/(2ζ²)),

where N(ζ) is the normalization factor. Construct the Hamiltonian and determine the lowest energy state by taking the variation

δ⟨ψ|H|ψ⟩ = 0.

How does your variational estimate compare with the exact value for the energy?
Exercise 8.4 In this problem we consider an electron in a linear tri-atomic molecule formed by three equidistant atoms. We will denote by |A⟩, |B⟩, and |C⟩ the three orthonormal states of the electron, corresponding to three wavefunctions localized about the three nuclei, A, B, and C. While there may be more than these states in the physical system, we will confine ourselves to the subspace spanned by these three vectors.
If we neglect the transfer of an electron from one site to another site, its energy is described by the Hamiltonian Ho. The eigenstates of Ho are the three orthonormal states above with energies EA, EB, and EC. For now, take EA = EB = EC = Eo. The coupling (i.e. electron hopping) between the states is described by an additional term W, defined by its action on the basis vectors:

W|A⟩ = −a|B⟩
W|B⟩ = −a(|A⟩ + |C⟩)
W|C⟩ = −a|B⟩

where a is a real positive constant.
1. Write both Ho and W in matrix form in the orthonormal basis and determine the eigenvalues {E1, E2, E3} and eigenvectors {|1⟩, |2⟩, |3⟩} for H = Ho + W. To do this numerically, pick your energy scale in terms of Eo/a.

2. Using the eigenvectors and eigenvalues you just determined, calculate the unitary time-evolution operator in the original basis, e.g.

⟨A|U(t)|B⟩ = ⟨A| exp(−iHt/ℏ) |B⟩.

3. If at time t = 0 the electron is localized on site A (in state |A⟩), calculate the probability of finding the electron in any other state at some later time t (i.e. PA, PB, and PC). Plot your results. Is there some later time at which the probability of finding the electron back in the original state is exactly 1? Give a physical interpretation of this result.

4. Repeat your calculations in parts 1 and 2, this time setting EA = EC = Eo but EB = 3Eo. Again, plot your results for PA, PB, and PC and give a physical interpretation of your results.
In the next series <strong>of</strong> exercises, we will use Spartan ’02 for performing some elementary quantum<br />
chemistry calculations.<br />
Exercise 8.5 Formaldehyde<br />
1. Using the Builder on Spartan, build formaldehyde H2CO and perform an energy minimization.<br />
Save this. When this is done, use Geometry > Measure Distance and Geometry ><br />
Measure Angle to measure the C-H and C-O bond lengths and the H-C-O bond angle.<br />
Figure 8.4: Setup calculation dialog screen<br />
2. Set up a Geometry Optimization calculation (Setup > Calculation). This will open a dialog screen that looks like Fig. 8.4. This will set up a Hartree-Fock calculation using the 6-31G** basis and print out the orbitals and their energies. It also forces Spartan to preserve the symmetry point-group of the initial configuration. After you do this, also set up some calculations for generating pictures of orbitals: Setup > Graphics will open a dialog window for adding graphics calculations. Add the following: HOMO, LUMO, and potential. Close the dialog and submit the job (Setup > Submit). Open the Spartan Monitor and wait until the job finishes. When this is done, use Geometry > Measure Distance and Geometry > Measure Angle to measure the C-H and C-O bond lengths and the H-C-O bond angle. This is the geometry predicted by a HF/6-31G** calculation.
3. Open Display > Surfaces and plot the HOMO and LUMO orbitals. Open the text output (Display > Output) generated by the calculation and figure out which molecular orbitals correspond to the HOMO and LUMO orbitals you plotted. What are their energies and irreducible representations? Are these σ or π orbitals? Considering the IRREPs of each orbital, what is the lowest energy optical transition for this molecule? What are the atomic orbitals used for the O and C atoms in each of these orbitals?
4. Repeat the calculation you did above, this time including a calculation of the vibrational frequencies for both the ground state and the first excited state (using CIS). Which vibrational modes undergo the largest change upon electronic excitation? Offer an explanation of this result, noting that the excited state is pyramidal and that the So → S1 electronic transition is an n → π* transition. (This calculation will take some time.)
Table 8.1: Vibrational Frequencies of Formaldehyde
Symmetry    Description          S0    S1
            sym CH str
            CO str
            CH2 bend
            out-of-plane bend
            anti-sym CH str
            CH2 rock
5. H2 + C=O → CH2=O transition state. Using the builder, make a model of the H2 + C=O → CH2=O transition state. For this you will need to make a model that looks something like what is shown in Fig. 8.6. Then go to the Search > Transition State menu. You will need to click on the H−H bond (head) and then on the C for the tail of the reaction path. Once you have done this, open Setup > Calculations and calculate the Transition State Geometry for the ground state with Hartree-Fock/6-31G**. Also compute the frequencies. Close and submit the calculation. When the calculation finishes, examine the vibrational frequencies. Is there at least one imaginary frequency? Why do you expect only one such frequency? What does this tell you about the topology of the potential energy surface at this point? Record the energy at the transition state. Now do two separate calculations of the isolated reactants. Combine these with the calculation you did above for the formaldehyde equilibrium geometry and sketch a reaction energy profile.
Figure 8.5: HOMO-1, HOMO and LUMO for CH2 = O.<br />
Figure 8.6: Transition state geometry for H2 + C=O → CH2=O. The arrow indicates the reaction path.
Appendix A<br />
Physical Constants and Conversion<br />
Factors<br />
Table A.1: Physical Constants
Constant                  Symbol   SI Value
Speed of light            c        299 792 458 m/s (exact)
Charge of proton          e        1.6021764 ×10^−19 C
Permittivity of vacuum    ε0       8.8541878 ×10^−12 J^−1 C^2 m^−1
Avogadro's number         NA       6.022142 ×10^23 mol^−1
Rest mass of electron     me       9.109382 ×10^−31 kg
Table A.2: Atomic Units. In atomic units, the following quantities are equal to one: ℏ, e, me, ao.
Quantity                   Symbol or expression       CGS or SI equivalent
Mass                       me                         9.109382 ×10^−31 kg
Charge                     e                          1.6021764 ×10^−19 C
Angular momentum           ℏ                          1.05457 ×10^−34 J s
Length (bohr)              ao = ℏ²/(me e²)            0.5291772 ×10^−10 m
Energy (hartree)           Eh = e²/ao                 4.35974 ×10^−18 J
Time                       to = ℏ³/(me e⁴)            2.41888 ×10^−17 s
Velocity                   e²/ℏ                       2.18770 ×10^6 m/s
Force                      e²/ao²                     8.23872 ×10^−8 N
Electric field             e/ao²                      5.14221 ×10^11 V/m
Electric potential         e/ao                       27.2114 V
Fine-structure constant    α = e²/(ℏc)                1/137.036
Magnetic moment            βe = eℏ/(2me)              9.27399 ×10^−24 J/T
Permittivity of vacuum     εo = 1/(4π)                8.8541878 ×10^−12 J^−1 C^2 m^−1
Hydrogen atom IP           −α² me c²/2 = −Eh/2        −13.60580 eV
Table A.3: Useful orders of magnitude
Quantity                      Approximate value        Exact value
Electron rest mass            me c² ≈ 0.5 MeV          0.511003 MeV
Proton rest mass              mp c² ≈ 1000 MeV         938.280 MeV
Neutron rest mass             Mn c² ≈ 1000 MeV         939.573 MeV
Proton/electron mass ratio    mp/me ≈ 2000             1836.1515

One electron volt corresponds to a:
Quantity                      Relation     Exact value
frequency: ν ≈ 2.4 ×10^14 Hz  E = hν       2.417970 ×10^14 Hz
wavelength: λ ≈ 12000 Å       λ = c/ν      12398.52 Å
wave number: 1/λ ≈ 8000 cm^−1              8065.48 cm^−1
temperature: T ≈ 12000 K      E = kT       11604.5 K
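The "one electron volt corresponds to" entries are easy to regenerate. A minimal sketch in Python, using the constants as quoted in Tables A.1 and A.2:

e  = 1.6021764e-19   # C, so 1 eV = e joules
h  = 6.6261e-34      # J s
c  = 2.99792458e8    # m/s
kB = 1.3807e-23      # J/K

E = e                                          # one electron volt, in joules
print("frequency  :", E / h, "Hz")             # ~2.418e14 Hz
print("wavelength :", h * c / E * 1e10, "A")   # ~12399 Angstrom
print("wavenumber :", E / (h * c) / 100, "cm^-1")  # ~8065 cm^-1
print("temperature:", E / kB, "K")             # ~11605 K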
Appendix B<br />
Mathematical Results and Techniques<br />
to Know and Love<br />
B.1 The Dirac Delta Function<br />
B.1.1 Definition<br />
The Dirac delta-function is not really a function per se; it is a generalized function defined by the relation

f(xo) = ∫_{−∞}^{+∞} dx δ(x − xo) f(x). (B.1)
The integral picks out the value of f at the point xo, and this relation must hold for any function of x. For example, let's take a function which is zero at some arbitrary point xo. Then the integral becomes

∫_{−∞}^{+∞} dx δ(x − xo) f(x) = 0. (B.2)

For this to be true for any such function, we have to conclude that

δ(x) = 0, for x ≠ 0. (B.3)
Furthermore, from the Riemann-Lebesgue theory of integration,

∫ f(x) g(x) dx = lim_{h→0} [ h Σ_n f(xn) g(xn) ], (B.4)

the only way for the defining relation to hold is for

δ(0) = ∞. (B.5)
This is a very odd function: it is zero everywhere except at one point, at which it is infinite. So it is not a function in the regular sense; it is more like a distribution which is infinitely narrow. If we set f(x) = 1, then we can see that the δ-function is normalized to unity,

∫_{−∞}^{+∞} dx δ(x − xo) = 1. (B.6)
B.1.2 Properties
Some useful properties of the δ-function are as follows:
1. It is real: δ*(x) = δ(x).
2. It is even: δ(x) = δ(−x).
3. δ(ax) = δ(x)/a for a > 0.
4. ∫ δ′(x) f(x) dx = −f′(0).
5. δ′(−x) = −δ′(x).
6. x δ(x) = 0.
7. δ(x² − a²) = (1/2a) [ δ(x + a) + δ(x − a) ].
8. f(x) δ(x − a) = f(a) δ(x − a).
9. ∫ δ(x − a) δ(x − b) dx = δ(a − b).
Exercise B.1 Prove the above relations.
B.1.3 Spectral representations
The δ-function can be thought of as the limit of a sequence of regular functions. For example,

δ(x) = lim_{a→∞} (1/π) sin(ax)/x.

This is the "sinc" or diffraction function, with a width proportional to 1/a. For any finite value of a the function is regular. As we make a larger, the width decreases and the function focuses about x = 0. This is shown in Fig. B.1 for increasing values of a. Notice that as a increases, the peak grows and the function itself becomes extremely oscillatory.
Another extremely useful representation is the Fourier representation<br />
δ(x) = (1/2π) ∫_{−∞}^{+∞} e^{ikx} dk. (B.7)
We used this representation in Eq. 7.205 to go from an energy representation to an integral over<br />
time.<br />
Finally, another form is in terms of Gaussian functions, as shown in Fig. B.2:

δ(x) = lim_{a→∞} √(a/π) e^{−a x²}. (B.8)
[Figure B.1: sin(ax)/(πx) representation of the Dirac δ-function.]
[Figure B.2: Gaussian representation of the δ-function.]
Here the height is proportional to √a and the width to the standard deviation, 1/√(2a).
Other representations include the Lorentzian form,

δ(x) = lim_{a→0} (1/π) a/(x² + a²),

and the derivative form

δ(x) = (d/dx) θ(x),

where θ(x) is the Heaviside step function

θ(x) = { 0, x < 0 ; 1, x ≥ 0 }. (B.9)

This can be understood as the cumulative distribution function

θ(x) = ∫_{−∞}^{x} δ(y) dy. (B.10)
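A quick numerical check of the limiting behavior, using the Gaussian representation of Eq. B.8 (a minimal sketch, assuming NumPy and SciPy are available; the test function is arbitrary):

import numpy as np
from scipy.integrate import quad

def f(x):
    return np.cos(x) * np.exp(-x**2 / 10.0)   # arbitrary smooth test function, f(0) = 1

def delta_a(x, a):
    # Gaussian representation of the delta function, Eq. B.8, at finite a
    return np.sqrt(a / np.pi) * np.exp(-a * x**2)

for a in (10.0, 100.0, 1000.0):
    val, _ = quad(lambda x: delta_a(x, a) * f(x), -np.inf, np.inf)
    print(a, val)      # approaches f(0) = 1 as a grows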
B.2 Coordinate systems<br />
In each case U is a function of coordinates and A is a vector.
B.2.1 Cartesian
• U = U(x, y, z)
• A = Ax î + Ay ĵ + Az k̂
• Volume element: dV = dx dy dz
• Dot product: A · B = Ax Bx + Ay By + Az Bz
• Gradient: ∇U = (∂U/∂x) î + (∂U/∂y) ĵ + (∂U/∂z) k̂
• Laplacian: ∇²U = ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z²
• Divergence: ∇·A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z
• Curl: ∇×A = (∂Az/∂y − ∂Ay/∂z) î + (∂Ax/∂z − ∂Az/∂x) ĵ + (∂Ay/∂x − ∂Ax/∂y) k̂
B.2.2 Spherical
• Coordinates: (r, θ, φ)
• Transformation to cartesian: x = r cos φ sin θ, y = r sin φ sin θ, z = r cos θ
• U = U(r, θ, φ)
• A = Ar r̂ + Aθ θ̂ + Aφ φ̂
• Arc length: ds² = dr² + r² dθ² + r² sin²θ dφ²
• Volume element: dV = r² sin θ dr dθ dφ
• Dot product: A · B = Ar Br + Aθ Bθ + Aφ Bφ
• Vector components (in terms of the cylindrical components Aρ, Az and the cartesian Ax, Ay):
  Ar = Aρ sin θ + Az cos θ (B.11)
  Aθ = Aρ cos θ − Az sin θ (B.12)
  Aφ = −Ax sin φ + Ay cos φ (B.13)
  Aρ = Ax cos φ + Ay sin φ (B.14)
• Gradient: ∇U = (∂U/∂r) r̂ + (1/r)(∂U/∂θ) θ̂ + (1/(r sin θ))(∂U/∂φ) φ̂
• Laplacian: ∇²U = (1/r²)(∂/∂r)(r² ∂U/∂r) + (1/(r² sin θ))(∂/∂θ)(sin θ ∂U/∂θ) + (1/(r² sin²θ)) ∂²U/∂φ²
• Divergence: ∇·A = (1/r²)(∂(r² Ar)/∂r) + (1/(r sin θ))(∂(sin θ Aθ)/∂θ) + (1/(r sin θ))(∂Aφ/∂φ)
• Curl: ∇×A = (1/(r sin θ))[∂(Aφ sin θ)/∂θ − ∂Aθ/∂φ] r̂ + (1/r)[(1/sin θ) ∂Ar/∂φ − ∂(r Aφ)/∂r] θ̂ + (1/r)[∂(r Aθ)/∂r − ∂Ar/∂θ] φ̂
B.2.3 Cylindrical
• Coordinates: (ρ, φ, z)
• Transformation to cartesian: x = ρ cos φ, y = ρ sin φ, z = z
• U = U(ρ, φ, z)
• A = Aρ ρ̂ + Aφ φ̂ + Az ẑ
• Volume element: dV = ρ dρ dφ dz
• Dot product: A · B = Aρ Bρ + Aφ Bφ + Az Bz
• Gradient: ∇U = (∂U/∂ρ) ρ̂ + (1/ρ)(∂U/∂φ) φ̂ + (∂U/∂z) ẑ
• Laplacian: ∇²U = (1/ρ)(∂/∂ρ)(ρ ∂U/∂ρ) + (1/ρ²) ∂²U/∂φ² + ∂²U/∂z²
• Divergence: ∇·A = (1/ρ)(∂(ρ Aρ)/∂ρ) + (1/ρ)(∂Aφ/∂φ) + ∂Az/∂z
• Curl: ∇×A = [(1/ρ)(∂Az/∂φ) − ∂Aφ/∂z] ρ̂ + [∂Aρ/∂z − ∂Az/∂ρ] φ̂ + (1/ρ)[∂(ρ Aφ)/∂ρ − ∂Aρ/∂φ] ẑ
Appendix C<br />
Mathematica Notebook Pages<br />
[Periodic Table: Atomic Properties of the Elements (NIST, March 1999). For each element the chart lists the ground-state electron configuration, ground-state level, atomic weight, and ionization energy (eV), along with a box of frequently used fundamental physical constants. See physics.nist.gov/atomic and physics.nist.gov/constants.]