Modern Physics notes
Paul Fendley fendley@virginia.edu
Lecture 6
• Size of the atom
• A digression on hand-waving arguments
• Spectral lines
• Feynman, 2.4-5
• Fowler, “Spectra”, “The Bohr atom”
The size of the atom
Let’s follow Feynman and use the uncertainty principle to make an argument about the size of an atom. Classically, you can think of an atom as being a mini-solar system. There are neutrons and protons in the middle (the nucleus). Orbiting the nucleus are the electrons, which are much less massive. They don’t feel the force (the “strong” force) which holds the nucleus together. Instead, they are bound by electric forces, just like the planets are bound by gravity. Because electromagnetism is different from gravity, this picture ends up not working classically. For example, an accelerating electron radiates classically, and so would emit all its energy and crash into the nucleus. You need quantum mechanics to fix this up, and we’ll explain how.
Say the electron moves in an orbit of radius around a. If we do experiments, we’ll see that the electron is in a different position each time we look. So its uncertainty in position ∆x is about a. By the uncertainty principle, we then know that the uncertainty in momentum is about h/a. (All of these numbers can be multiplied or divided by factors of 2 or 3 or π: what we are really doing is dimensional analysis combined with quantum mechanics.) This means the momentum itself is probably around h/a. (If it were larger, we could do scattering experiments with light of momentum larger than h/a without disturbing the electron. We could then determine its position to better accuracy than a, contradicting our original assumption that ∆x ∼ a.) If its momentum is about h/a, then its kinetic energy is about (h/a)²/2M.
I just discovered that Feynman was a little sloppy. He writes h in the text, but when he gets the numbers out, he actually doesn’t use h. You’ll remember that in our example, the uncertainty was of order h, but the careful derivation of the uncertainty principle says that the uncertainty is of order h/4π. For this reason, and others I’ll mention soon, physicists usually use the constant ℏ, which is merely Planck’s constant divided by 2π:

ℏ ≡ h/2π = 1.055 × 10⁻³⁴ J · s

The symbol ≡ means “is defined as”. (One reason is that it’s easier to say “h-bar” than “Planck’s constant over 2π”.) So to get the same numerical answer as Feynman, let’s say the momentum is of order ℏ/a instead of h/a. This is a hand-waving argument anyway!
So now let’s look at the energy of the hydrogen atom. The charge on the nucleus is just −e, and the charge of the electron is e. (We define e to be negative; the answers will depend only on e², so this definition doesn’t affect the results, a useful check.) In addition to the kinetic energy, there’s an electrostatic potential energy coming from the two charges being a distance a apart. The total energy is therefore about

E = ℏ²/(2Ma²) − e²/(4πɛ₀a)
The minus sign in the second term is because the force between the electron and the nucleus is attractive. Notice that as a decreases, we lower the potential energy, but we also increase the kinetic energy (the uncertainty principle means that cramming a particle into a smaller box increases its energy). The value of a is determined by finding the minimum of this energy. The reason is that in the absence of any external sources of energy, it’s possible to get rid of energy by, say, emitting light, but there’s no way of getting new energy. Thus the system will settle down into its lowest energy state, and then be stable. To find this minimum value of energy, we find where dE/da = 0. Precisely, at the minimum a = a₀, we have

0 = dE/da = −ℏ²/(Ma₀³) + e²/(4πɛ₀a₀²)

Solving for a₀ gives

a₀ = 4πɛ₀ℏ²/(Me²) = (1.055 × 10⁻³⁴)² / [(8.988 × 10⁹)(9.106 × 10⁻³¹)(1.602 × 10⁻¹⁹)²] m = 0.529 × 10⁻¹⁰ m
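The minimization above is simple enough to check symbolically; here is a quick sketch using Python’s sympy (the symbol names are ours, chosen to match the notes):

```python
import sympy as sp

# Symbols from the energy estimate in the notes (all taken positive)
hbar, M, e, eps0, a = sp.symbols('hbar M e epsilon_0 a', positive=True)

# Total energy: kinetic estimate plus electrostatic potential
E = hbar**2 / (2*M*a**2) - e**2 / (4*sp.pi*eps0*a)

# The minimum satisfies dE/da = 0; solve for a
a0 = sp.solve(sp.diff(E, a), a)[0]
print(a0)  # the Bohr radius: 4*pi*eps0*hbar^2 / (M*e^2)
```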
So the size of the hydrogen atom is about 10⁻¹⁰ m, which is called an angstrom. (People today tend to use nm = 10⁻⁹ m instead of angstroms.) Using a simple hand-waving argument, we know the approximate size of an atom! (Note that the factor of 2π Feynman is sloppy about changes the answer by a factor of about 40, because it appears squared!)
To find the energy of this electron at this minimum, we can now plug our value of a₀ back into the expression for E. After a little algebra, we get

E = −e²/(8πɛ₀a₀) = −13.6 eV
The negative energy means that the electron has less energy when it is in the atom than it has if it is not in the atom (E = 0 when a → ∞). This is good, because it means that electrons want to stay in atoms, instead of moving away! This number 13.6 eV is called a Rydberg. It turns out that we’ve been lucky. We’ve ignored factors of two and π and so forth, so there’s no reason this should be the exact answer. However, for the hydrogen atom this is the correct binding energy: it takes one Rydberg of energy to pull the electron away from the hydrogen atom.
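Plugging in numbers is a one-liner; this sketch checks both the radius and the binding energy, using the same constants as the notes:

```python
# Constants as used in the notes
hbar = 1.055e-34   # J*s
k    = 8.988e9     # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
M    = 9.106e-31   # electron mass, kg
e    = 1.602e-19   # elementary charge magnitude, C

a0 = hbar**2 / (k * M * e**2)   # a0 = 4*pi*eps0*hbar^2/(M*e^2)
E  = -k * e**2 / (2 * a0)       # E = -e^2/(8*pi*eps0*a0)

print(a0)               # about 5.29e-11 m, half an angstrom
print(E / 1.602e-19)    # about -13.6 eV, one Rydberg
```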
By the way, note that Feynman didn’t forget the 4πɛ₀: he’s working in units of charge where this is 1. In these units (called stat-coulombs), the charge of the electron is 1.6 × 10⁻¹⁹ √(9 × 10⁹) stat-coulombs, so you of course get the same answer at the end.
So quantum mechanics explains why atoms are stable. If the electron were to spiral into the nucleus, ∆x would be smaller, and the uncertainty principle would require that the momentum be larger. This increase in kinetic energy balances the decrease in potential energy right at a₀, as we’ve shown.
Quantum mechanics also explains why we don’t fall through the floor. The atoms in our shoes push against the atoms in the floor. When the atoms are squashed into a smaller space, their momentum and hence energy must increase, because of the uncertainty principle. Remember that energy wants to be minimized, so the force resists the compression. Classically, the squashing would lower the potential energy, but without the uncertainty principle, there would be no corresponding increase in kinetic energy.
A digression
Probably you’re not yet used to hearing “hand-waving” arguments like the one I just gave, so let me digress and say what I mean by that. These are arguments where you’re not being very precise, but rather are giving more of a plausibility argument. In some sciences, that’s pretty much all you can ever do (biology is in an interesting transitional period). In physics, we often can go much farther: we can write down precise mathematical equations which describe the experimental reality. Probably all of your physics classes so far were done in this fashion. But all scientists, including physicists, give plausibility arguments all the time.
One reason is that when you’re doing research, it’s usually a good idea to try to know the answer before you find it. Of course, the best researchers are prepared for when they don’t get the answer they were looking for – often the greatest discoveries (e.g. the X-ray, and as we’ll see, the cosmic microwave background) were made this way. But knowing what to expect is important in these cases – previously-unknown physics is necessary to explain why you didn’t see what you thought you would see.
A second reason for giving plausibility arguments is that before you understand the math describing what’s going on, you need to decide what’s going on. (Even good mathematicians do it this way, by the way.) So you have to take a plausibility argument the way it is intended: as a short-cut to the right answer. As long as you don’t take it too seriously (i.e. get upset if it ends up giving the wrong answer), it’s a useful thing.
In this case we know the hand-waving argument is essentially correct because we know how to do the computations correctly, and we will explain this later in the course. Trouble starts in research when people don’t know the correct answer, but write papers anyway containing only the hand-waving argument! Then you get to have fun yelling at your colleagues whose hand-waving arguments give a different answer than yours. You then all go to conferences in nice parts of the world to continue the arguments, so we all win.
Spectral lines
The existence of photons was our first reason for the word “quantum” in quantum mechanics. Light at a given frequency occurs in discrete lumps, “quanta”. These lumps can be seen easily experimentally, although not quite by the naked eye (your eye apparently would need to be only about 10 times more sensitive to see individual photons).
A second interesting thing is also apparent in the photoelectric effect experiment we did. You’ll notice something about the light which was coming from the mercury vapor. Instead of just giving out light of all colors, there were noticeable lines of distinct frequency (you can check experimentally that these are of distinct frequency, and measure the wavelength, by doing interference experiments). Since we now know that light frequency is proportional to photon energy, this tells us that the mercury atoms are not emitting energy in arbitrary amounts. Instead, the energy seems to be coming out in chunks. Thus again, where classically you might expect something continuous, here you get something quantized.
Since the entire subject is named “quantum mechanics”, obviously the word quantum is rather important. To give a first shot at understanding quanta, let’s first understand one way the wavelength of light can be quantized classically. Say we put light into a box of length L, so that the light cannot get out. This is called a standing wave: you can think of it as the light just bouncing back and forth off of mirrors.
Let’s just do a one-dimensional box. The spatial dependence of light of a given wavelength λ is either

e^(−2πix/λ)  or  e^(+2πix/λ)

Remember, we can always add two solutions of Maxwell’s equations in a vacuum and get another, so we can add these two together with arbitrary coefficients A⃗ and B⃗. In an equation:

E⃗ = A⃗ e^(−2πix/λ) + B⃗ e^(+2πix/λ)
However, here we need to take into account the presence of the box. This says that A⃗, B⃗ and λ are not arbitrary. Since no light is escaping, there can be no light at the edges of the box, which we put at x = 0 and x = L. This means E⃗ = 0 at the edges. To satisfy this, we need to make

E⃗ ∝ sin(πnx/L)

where n is some integer. Because sin nπ vanishes for any integer n, this E⃗ behaves appropriately at the edges. So note that we have A⃗ = −B⃗, and more importantly

λ = 2L/n

for some integer n. The wavelength is quantized in units of the size of the box.
This quantization shouldn’t be so shocking if you’ve ever played a stringed instrument. When you attach the ends of a string so that they do not move, the string can only vibrate with certain wavelengths. But now, let’s remember that electrons are waves, too. The wave function/probability amplitude squared gives the probability that an electron is in a given place. If we’re trapping the electron in a box, then the wave function must vanish at the edges of the box. This yields the same quantization condition that we saw for the electric field of light. This means that the momentum (and hence the kinetic energy) of an electron is quantized! Thus the velocity of the electron can only take on certain values, if the electron is trapped in a box. On the homework, you’ll check which values of velocity these are, for typical box sizes.
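As a rough sketch of that calculation (the box sizes below are our own illustrative choices, not values from the notes), the allowed speeds follow from λ = 2L/n and p = h/λ:

```python
import math

hbar = 1.055e-34   # J*s
M    = 9.11e-31    # electron mass, kg

def quantized_speed(n, L):
    """Speed of the n-th allowed state of an electron in a 1-d box of length L.

    lambda = 2L/n gives p = h/lambda = n*pi*hbar/L, so v = p/M."""
    return n * math.pi * hbar / (L * M)

# An atom-sized box versus a macroscopic one (illustrative sizes):
print(quantized_speed(1, 1e-10))   # ~3.6e6 m/s: enormous
print(quantized_speed(1, 1e-3))    # ~0.36 m/s: levels so finely spaced they look continuous
```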
Note that this is all a little easier to write out if, instead of using the wavelength, we use the wavenumber k = 2π/λ. Then our wave is ∝ sin(kx), and the boundary conditions require that

k = πn/L.

The two ways of writing this are obviously equivalent; you’ll notice that Feynman tends to use k instead of λ. To get rid of the factors of 2π this introduces, people introduced the constant ℏ. In terms of these new variables,

p = h/λ = ℏk

Once we start using ℏ, it’s easier to use

ω ≡ 2πν

for frequency, so that for photons

E = ℏω

You should have seen ω before, when you were doing rotational motion. There ω is the angular frequency as measured in radians. The point of all of these definitions is to absorb factors of 2π: in terms of k, ω and ℏ, you don’t see any factors of 2π in E = ℏω, p = ℏk, and a wave looks like e^(ikx−iωt).
The fact that the wavelength and hence the momentum and energy of a particle in a box are quantized is quite important. Let’s study the atom by making some more hand-waving arguments. First think about the radial direction. The arguments above indicated that to minimize the energy and hence be stable, the electron should remain within some radius a. This doesn’t violate the uncertainty principle: we can’t predict precisely where it appears in this region (although later we will be able to compute the probability amplitude). Thus this is like confining a particle to a box of radius a, and so the radial momentum is quantized. The angular component of the momentum is quantized for a similar reason. Think of an orbit at fixed radius and momentum. Since we are now in quantum mechanics, this is a probability wave going around the atom. Instead of demanding that the probability vanish at the edges, we demand that the probability be periodic: if you go around once, the probability must be the same. Put another way, one must have an integer number of wavelengths n around the orbit. Thus

nλ ∼ 2πa

where a is the radius of the electron’s orbit.
Don’t take this argument too seriously, but the result is very important. The momentum and energy of the electron in an atom are quantized. By similar reasoning, the angular momentum of the electron is quantized as well. (The angular component of the ordinary momentum is related to the angular momentum, but the two are not the same. How do they differ?)
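To make the angular-momentum statement a bit more concrete, combining nλ ∼ 2πa with p = h/λ gives, for a circular orbit (a quick sketch in the same hand-waving spirit, not a rigorous derivation):

```latex
n\lambda \sim 2\pi a
\quad\text{and}\quad
\lambda = \frac{h}{p}
\quad\Longrightarrow\quad
p\,a \sim n\,\frac{h}{2\pi} = n\hbar .
```

Since pa is the angular momentum of a circular orbit, this is Bohr’s condition that the angular momentum comes in units of ℏ.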
This now explains why we saw the distinct spectral lines in the mercury vapor. The vapor in the lamp is fairly dilute, so the atoms are far enough apart that they can be treated individually. This means that the electrons in the atoms have distinct energy levels. To illustrate this, physicists often draw diagrams of the type you see in Fig. 2-9 of Feynman.
So how do discrete energy levels translate into what we see? When you heat a gas, or run an electrical current through the gas, it puts energy into the system. This “excites” the atom. This means that the energy you put in lifts the electrons out of the lowest energy levels and into higher ones. But as we said before, the electron won’t stay there, if it can avoid it. It will fall down into the levels with lower energy. So what happens to the now-excess energy? When the electron falls down levels, a photon is emitted! The energy of this photon is not arbitrary: when an electron falls from energy E₁ to the lowest-energy state (the “ground” state) with energy E₀, the photon emitted must have energy E₁ − E₀. But now quantum mechanics comes in again: we know that this photon of fixed energy must have a fixed frequency. Thus this transition will emit a photon of frequency

ν = (E₁ − E₀)/h.

The discrete lines we saw in mercury arise from the various transitions in mercury.
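As an illustration, here is this formula applied to hydrogen, whose levels turn out to be Eₙ = −13.6 eV/n² (we state this here without derivation; mercury’s levels are more complicated):

```python
h  = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19    # J per electron-volt
c  = 3.0e8        # speed of light, m/s

def hydrogen_level(n):
    """Hydrogen energy levels E_n = -13.6 eV / n^2 (stated, not derived, here)."""
    return -13.6 * eV / n**2

# Photon emitted when the electron falls from n=3 to n=2:
nu = (hydrogen_level(3) - hydrogen_level(2)) / h
print(nu)              # ~4.6e14 Hz
print(c / nu * 1e9)    # ~657 nm: the red H-alpha line
```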
Quantum mechanics therefore explains something completely obscure classically: the existence of spectral lines. The existence of spectral lines with known frequencies was of vital importance in understanding relativity experimentally, and is still of vital importance in understanding cosmology.
Typically, the spacing between these levels in an atom is on the order of an eV. This is because the values of these energies are related to the size of the atom, not the size of the box the mercury vapor is in. If they were related to the size of the box, you wouldn’t see the quantization. Remember that k ∝ 1/L for a system in a box of size L, while k ∝ 1/a for the atom. Since a/L is around 10⁻¹⁰, the levels due to the size of the box would be spaced much closer together than the levels due to the size of the atom.
Another thing to note is that if the atoms are close enough together that different atoms interact with each other, the behavior can change dramatically. For example, in a metal, there are electrons which are not bound to a given atom. This is why metals conduct current!
We’ll derive all this much more carefully later. The important point to take home is that we have now seen two kinds of quantization unsuspected classically. Light comes in bundles called photons, and electrons in atoms have discrete energy levels.