Inventions and Inventors Volume 1 - Online Public Access Catalog
Inventions
and
Inventors
MAGILL’S CHOICE

Inventions
and
Inventors

Volume 1

Abortion pill — Laminated glass
1 – 458

edited by
Roger Smith

Salem Press, Inc.
Pasadena, California • Hackensack, New Jersey
Copyright © 2002, by Salem Press, Inc.

All rights in this book are reserved. No part of this work may be used or reproduced in any manner whatsoever or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without written permission from the copyright owner except in the case of brief quotations embodied in critical articles and reviews. For information address the publisher, Salem Press, Inc., P.O. Box 50062, Pasadena, California 91115.

Essays originally appeared in Twentieth Century: Great Events (1992, 1996), Twentieth Century: Great Scientific Achievements (1994), and Great Events from History II: Business and Commerce Series (1994). New material has been added.

∞ The paper used in these volumes conforms to the American National Standard for Permanence of Paper for Printed Library Materials, Z39.48-1992 (R1997).

Library of Congress Cataloging-in-Publication Data
Inventions and inventors / edited by Roger Smith
p. cm. — (Magill’s choice)
Includes bibliographical references and index
ISBN 1-58765-016-9 (set : alk. paper) — ISBN 1-58765-017-7 (vol. 1 : alk. paper) — ISBN 1-58765-018-5 (vol. 2 : alk. paper)
1. Inventions—History—20th century—Encyclopedias. 2. Inventors—Biography—Encyclopedias.
I. Smith, Roger, 1953- .
II. Series.
T20 .I59 2001
609—dc21 2001049412

Printed in the United States of America
Table of Contents
Publisher’s Note
Editor’s Foreword

Abortion pill
Airplane
Alkaline storage battery
Ammonia
Amniocentesis
Antibacterial drugs
Apple II computer
Aqualung
Artificial blood
Artificial chromosome
Artificial heart
Artificial hormone
Artificial insemination
Artificial kidney
Artificial satellite
Aspartame
Assembly line
Atomic bomb
Atomic clock
Atomic-powered ship
Autochrome plate
BASIC programming language
Bathyscaphe
Bathysphere
BINAC computer
Birth control pill
Blood transfusion
Breeder reactor
Broadcaster guitar
Brownie camera
Bubble memory
Bullet train
Buna rubber
CAD/CAM
Carbon dating
Cassette recording
CAT scanner
Cell phone
Cloning
Cloud seeding
COBOL computer language
Color film
Color television
Colossus computer
Communications satellite
Community antenna television
Compact disc
Compressed-air-accumulating power plant
Computer chips
Contact lenses
Coronary artery bypass surgery
Cruise missile
Cyclamate
Cyclotron
Diesel locomotive
Differential analyzer
Dirigible
Disposable razor
Dolby noise reduction
Electric clock
Electric refrigerator
Electrocardiogram
Electroencephalogram
Electron microscope
Electronic synthesizer
ENIAC computer
Fax machine
Fiber-optics
Field ion microscope
Floppy disk
Fluorescent lighting
FM radio
Food freezing
FORTRAN programming language
Freeze-drying
Fuel cell
Gas-electric car
Geiger counter
Genetic “fingerprinting”
Genetically engineered insulin
Geothermal power
Gyrocompass
Hard disk
Hearing aid
Heart-lung machine
Heat pump
Holography
Hovercraft
Hydrogen bomb
IBM Model 1401 computer
In vitro plant culture
Infrared photography
Instant photography
Interchangeable parts
Internal combustion engine
The Internet
Iron lung
Laminated glass
Publisher’s Note
To many people, the word “invention” brings to mind cleverly contrived gadgets and devices, such as safety pins, zippers, typewriters, and telephones—all of which have fascinating stories of invention behind them. However, the word actually has a much broader meaning, one that goes back to the Latin word invenire, “to come upon.” In its broad sense, an invention can be any tangible device or contrivance, or even a process, that is brought into being by human imagination. It is in this broad sense that the term is used in Inventions and Inventors, the latest contribution to the Magill’s Choice series of reference books.

This two-volume set contains articles on 195 twentieth century inventions, which span the full range of human imagination—from simple gadgets, such as disposable razors, to unimaginably complex medical breakthroughs, such as genetically engineered insulin. This set is not an encyclopedic catalog of the past century’s greatest inventions but rather a selective survey of noteworthy breakthroughs in the widest possible variety of fields.

A combination of several features sets Inventions and Inventors apart from other reference works on this subject: the diversity of its subject matter, the depth of its individual articles, and its emphasis on the people behind the inventions. The range of subjects covered here is unusually wide. In addition to articles on what might be considered “classic” inventions—such as airplanes, television, and satellites—the set has articles on inventions in fields as diverse as agriculture, biology, chemistry, computer science, consumer products, drugs and vaccines, energy, engineering, food science, genetic engineering, medical procedures, music, photography, physics, synthetics, transportation, and weapons technology.

Most of this set’s essays appeared earlier in Twentieth Century: Great Events (1992, 1996) and Twentieth Century: Great Scientific Achievements (1994). Its longest essays are taken from Great Events from History II: Business and Commerce Series (1994). Information in the articles has been updated, and completely new bibliographical notes have been added to all of them. Half the essays also have original sidebars on the people behind the inventions.
At least one thousand words in length, each essay opens with a brief summary of the invention and its significance, followed by an annotated list of important personages behind it—including scientists, engineers, technicians, and entrepreneurs. The essay then examines the background to the invention, its process of discovery and innovation, and its impact on the world. Half the articles have entirely new sidebars on individuals who played important roles in the inventions’ development and promotion.

Users can find topics by any of several methods. Articles are alphabetically arranged under their titles, which use the names of the inventions themselves, such as “Abortion pill,” “Airplane,” “Alkaline storage battery,” “Ammonia,” and “Amniocentesis.” Many inventions are known by more than one name, however, and users may find what they are looking for in the general index, which lists topics under multiple terms.

Several systems of cross-referencing direct users to articles of interest. Appended to every essay is a list of articles on related or similar inventions. Further help can be found in appendices at the end of volume two. The first, a Time Line, lists essay topics chronologically, by the years in which the inventions were first made. The second, a Topics by Category list, organizes essay topics under broader headings, with most topics appearing under at least two category headings. Allowing for the many topics counted more than once, these categories include Consumer products (36 essays), Electronics (28), Communications (27), Medicine (25), Measurement and detection (24), Computer science (23), Home products (20), Materials (18), Medical procedures (17), Synthetics (17), Photography (16), Energy (16), Engineering (16), Physics (13), Food science (13), Drugs and vaccines (13), Transportation (11), Weapons technology (11), Genetic engineering (11), Aviation and space (10), Biology (9), Chemistry (9), Exploration (8), Music (7), Earth science (6), Manufacturing (6), and Agriculture (5).

More than one hundred scholars wrote the original articles used in these volumes. Because their names did not appear with their articles in the Twentieth Century sets, we cannot, unfortunately, list them here. However, we extend our thanks for their contributions. We also are indebted to Roger Smith for his help in assembling the topic list and in writing all the biographical sidebars.
Editor’s Foreword
The articles in Inventions and Inventors recount the birth and growth of important components of twentieth century technology. They concern inventions ranging from processes, methods, sensors, and tests to appliances, tools, machinery, vehicles, electronics, and materials. To explain these various inventions, the essays deal with principles of physics, chemistry, engineering, biology, and computer science—all intended for general readers. From complex devices, such as electron microscopes, and phenomena difficult to define, such as the Internet, to things so familiar that they are seldom thought of as having individual histories at all, such as Pyrex glass and Velcro, all the inventions described here increased the richness of technological life. Some of these inventions, such as the rotary-dial telephone, have passed out of common use, at least in the United States and Europe, while others, such as the computer, are now so heavily relied upon that mass technological culture could scarcely exist without them. Each article, then, is at once a historical sketch and a technical explanation of an invention, written to inform and, I hope, intrigue.

Brief biographical sidebars accompany half the articles. The sidebars outline the lives of people who are in some way responsible for the inventions discussed: the original inventor, a person who made important refinements, an entrepreneur, or even a social crusader who fostered acceptance of a controversial invention, as Margaret Sanger did for the birth control pill. These little biographies, although offering only basic information, call forth the personal struggles behind inventions. And that is a facet of inventions that needs emphasizing, because it shows that technology, which can seem bewilderingly impersonal and complex, is always rooted in human need and desire.

Roger Smith
Portland, Oregon
MAGILL’S CHOICE

Inventions
and
Inventors

Volume 2

Laser — Yellow fever vaccine
Index
459 – 936

edited by
Roger Smith

Salem Press, Inc.
Pasadena, California • Hackensack, New Jersey
Table of Contents
Laser
Laser-diode recording process
Laser eye surgery
Laser vaporization
Long-distance radiotelephony
Long-distance telephone
Mammography
Mark I calculator
Mass spectrograph
Memory metal
Microwave cooking
Neoprene
Neutrino detector
Nuclear magnetic resonance
Nuclear power plant
Nuclear reactor
Nylon
Oil-well drill bit
Optical disk
Orlon
Pacemaker
Pap test
Penicillin
Personal computer
Photoelectric cell
Photovoltaic cell
Plastic
Pocket calculator
Polio vaccine (Sabin)
Polio vaccine (Salk)
Polyester
Polyethylene
Polystyrene
Propeller-coordinated machine gun
Pyrex glass
Radar
Radio
Radio crystal sets
Radio interferometer
Refrigerant gas
Reserpine
Rice and wheat strains
Richter scale
Robot (household)
Robot (industrial)
Rocket
Rotary dial telephone
SAINT
Salvarsan
Scanning tunneling microscope
Silicones
Solar thermal engine
Sonar
Stealth aircraft
Steelmaking process
Supercomputer
Supersonic passenger plane
Synchrocyclotron
Synthetic amino acid
Synthetic DNA
Synthetic RNA
Syphilis test
Talking motion pictures
Teflon
Telephone switching
Television
Tevatron accelerator
Thermal cracking process
Tidal power plant
Touch-tone telephone
Transistor
Transistor radio
Tuberculosis vaccine
Tungsten filament
Tupperware
Turbojet
Typhus vaccine
Ultracentrifuge
Ultramicroscope
Ultrasound
UNIVAC computer
Vacuum cleaner
Vacuum tube
Vat dye
Velcro
Vending machine slug rejector
Videocassette recorder
Virtual machine
Virtual reality
V-2 rocket
Walkman cassette player
Washing machine
Weather satellite
Xerography
X-ray crystallography
X-ray image intensifier
Yellow fever vaccine

Time Line
Topics by Category
Index
Abortion pill
The invention: RU-486, the first commercially available drug that prevented fertilized eggs from implanting themselves in the uterine wall.

The people behind the invention:
Étienne-Émile Baulieu (1926- ), a French biochemist and endocrinologist
Georges Teutsch, a French chemist
Alain Bélanger, a French chemist
Daniel Philibert, a French physicist and pharmacologist
Developing and Testing
In 1980, Alain Bélanger, a research chemist, was working with Georges Teutsch at Roussel Uclaf, a French pharmaceutical company. Teutsch and Bélanger were interested in understanding how changes in steroids affect the steroids’ ability to bind to their receptors. (Receptors are molecules on cells that can bind with certain chemical substances, such as hormones. Receptors therefore act as connecting links to promote or prevent specific bodily activities or processes.) Bélanger synthesized several steroids that bonded to steroid receptors. Among these steroids was a compound that came to be called “RU-486.”

Another member of the research project, Daniel Philibert, found that RU-486 blocked the activities of progesterone by binding tightly to the progesterone receptor. Progesterone is a naturally occurring steroid hormone that prepares the wall of the uterus to accept a fertilized egg. Once this is done, the egg can become implanted and can begin to develop. The hormone also prevents the muscles of the uterus from contracting, which might cause the uterus to reject the egg. Therefore RU-486, by acting as a kind of shield between hormone and receptor, essentially stopped progesterone from doing its job.

At the time, Teutsch’s group did not consider that RU-486 might be useful for deliberately interrupting human pregnancy. It was
Étienne-Émile Baulieu, a biochemist and endocrinologist and a consultant for Roussel Uclaf, who made this connection. He persuaded the company to test RU-486 for its effects on fertility control.

Many tests were performed on rabbits, rats, and monkeys; they showed that, even in the presence of progesterone, RU-486 could prevent secretory tissue from forming in the uterus, could change the timing of the menstrual cycle, and could terminate a pregnancy—that is, cause an abortion. The compound also seemed to be nontoxic, even in high doses.

In October of 1981, Baulieu began testing the drug with human volunteers. By 1985, major tests of RU-486 were being done in
Étienne-Émile Baulieu

Étienne-Émile Baulieu was born in Strasbourg, France, in 1926. He moved to Paris for his advanced studies at the Faculty of Medicine and Faculty of Science of Pasteur College. He was an intern in Paris from 1951 until he received a medical degree in 1955. He passed examinations qualifying him to become a teacher at state schools in 1958, and during the 1961-1962 academic year he was a visiting scientist in Columbia University’s Department of Biochemistry.

In 1963 Baulieu was made a Doctor of Science and appointed director of a research unit at France’s National Institute of Health and Medical Science, a position he held until he retired in 1997. He also served as Head of Service of Hormonal Biochemistry of the Hospital of Bicêtre (1970-1997), professor of biochemistry at the University of Paris-South (1970-1993), and consultant for Roussel Uclaf (1963-1997).

Among his many honors are the Gregory Pincus Memorial Award (1978), awards from the National Academy of Medicine, the Christopher Columbus Discovery Award in Biomedical Research (1992), the Joseph Bolivar DeLee Humanitarian Award (1994), and Commander of the Legion of Honor (1990). Although busy with research and teaching duties, Baulieu served on the editorial boards of several French and international newspapers, as a member of scientific councils, and as a participant in the Special Program in Human Reproduction of the World Health Organization.
France, Great Britain, The Netherlands, Sweden, and China. When a relatively low dose of RU-486 was given orally, there was an 85 percent success rate in ending pregnancy; the woman’s body expelled the embryo and all the endometrial surface. Researchers found that if a low dose of a prostaglandin (a hormonelike substance that causes the smooth muscles of the uterus to contract, thereby expelling the embryo) was given two days later, the success rate rose to 96 percent. There were few side effects, and the low doses of RU-486 did not interfere with the actions of other steroid hormones that are necessary to keep the body working.

In the March, 1990, issue of The New England Journal of Medicine, Baulieu and his coworkers reported that with one dose of RU-486, followed in thirty-six to forty-eight hours with a low dose of prostaglandin, 96 percent of the 2,040 women they studied had a complete abortion with few side effects. The women were monitored after receiving the prostaglandin to watch for side effects, which included nausea, vomiting, abdominal pain, and diarrhea. When they returned for a later checkup, fewer than 2 percent of the women complained of side effects. The researchers used two different prostaglandins; they found that one caused a quicker abortion but also brought about more pain and a longer period of bleeding.
Using the Drug

In September, 1988, the French government approved the distribution of RU-486 for use in government-controlled clinics. The next month, however, Roussel Uclaf stopped selling the drug because people opposed to abortion did not want RU-486 to be available and were threatening to boycott the company.

Then, however, there were threats and pressure from the other side. For example, members of the World Congress of Obstetrics and Gynecology announced that they might boycott Roussel Uclaf if it did not make RU-486 available. The French government, which controlled a 36 percent interest in Roussel Uclaf, ordered the company to start distributing the drug once more.

By the fall of 1989, more than one-fourth of all early abortions in France were being done with RU-486 and a prostaglandin. The French government began helping to pay the cost of using RU-486 in 1990.
Testing for approval of RU-486 was completed in Great Britain and The Netherlands, but Roussel Uclaf’s parent company, Hoechst AG, did not try to market the drug there or in any other country outside France. (In the United States, government regulations did not allow RU-486 to be tested using government funds.)

Medical researchers believe that RU-486 may be useful not only for abortions but also in other ways. For example, it may help in treating certain breast cancers and other tumors. RU-486 is also being investigated as a possible treatment for glaucoma—to lower pressure in the eye that may be caused by a high level of steroid hormone. It may be useful in promoting the healing of skin wounds and in softening the cervix at birth, easing delivery. Researchers hope as well that some form of RU-486 may prove useful as a contraceptive—that is, not to prevent a fertilized egg from implanting itself in the mother’s uterus but to prevent ovulation in the first place.
Impact

Groups opposed to abortion rights have spoken out against RU-486, while those who favor the right to abortion have urged its acceptance. The drug has been approved for use in China as well as in France. In the United States, however, the government has avoided giving its approval to the drug. Officials of the World Health Organization (WHO) have argued that RU-486 could prevent the deaths of women who undergo botched abortions. Under international law, WHO has the right to take control of the drug and make it available in poor countries at low cost. Because of the controversy surrounding the drug, however, WHO has called for more testing to ensure that RU-486 is safe for women.
See also Amniocentesis; Antibacterial drugs; Artificial hormone; Birth control pill; Salvarsan.
Further Reading

Baulieu, Étienne-Émile, and Mort Rosenblum. The “Abortion Pill”: RU-486, a Woman’s Choice. New York: Simon & Schuster, 1991.

Butler, John Douglas, and David F. Walbert. Abortion, Medicine, and the Law. 4th ed. New York: Facts on File, 1992.
Lyall, Sarah. “Britain Allows Over-the-Counter Sales of Morning-After Pill.” New York Times (January 15, 2001).

McCuen, Gary E. RU 486: The Abortion Pill Controversy. Hudson, Wis.: GEM Publications, 1992.

Nemecek, Sasha. “The Second Abortion Pill.” Scientific American 283, no. 6 (December, 2000).

Zimmerman, Rachel. “Ads for Controversial Abortion Pill Set to Appear in National Magazines.” Wall Street Journal (May 23, 2001).
Airplane
The invention: The first heavier-than-air craft to fly, the airplane revolutionized transportation and symbolized the technological advances of the twentieth century.

The people behind the invention:
Wilbur Wright (1867-1912), an American inventor
Orville Wright (1871-1948), an American inventor
Octave Chanute (1832-1910), a French-born American civil engineer

A Careful Search

Although people have dreamed about flying since the time of the ancient Greeks, it was not until the late eighteenth century that hot-air balloons and gliders made human flight possible. It was not until the late nineteenth century that enough experiments had been done with kites and gliders that people could begin to think seriously about powered, heavier-than-air flight. Two of these people were Wilbur and Orville Wright.
The Wright brothers making their first successful powered flight, at Kitty Hawk, North Carolina. (Library of Congress)
The Wright brothers were more than just tinkerers who accidentally found out how to build a flying machine. In 1899, Wilbur wrote to the Smithsonian Institution for a list of books to help them learn about flying. The brothers used the research of people such as George Cayley, Octave Chanute, Samuel Langley, and Otto Lilienthal to help them plan their own experiments with birds, kites, and gliders. They even built their own wind tunnel. They never fully trusted the results of other people’s research, so they repeated the experiments of others and drew their own conclusions. They shared these results with Octave Chanute, who was able to offer them much good advice. They were continuing a tradition of excellence in engineering that began with careful research and avoided dangerous trial and error.
Slow Success
Before the brothers had set their minds to flying, they had built and repaired bicycles. This experience was a great help to them when they put their research into practice and actually built an airplane. From building bicycles, they knew how to work with wood and metal to make a lightweight but sturdy machine. Just as important, from riding bicycles, they got ideas about how an airplane needed to work. They could see that both bicycles and airplanes needed to be fast and light. They could also see that airplanes, like bicycles, needed to be kept under constant control to stay balanced, and that this control would probably take practice. This was a unique idea. Instead of building something solid that was controlled by levers and wheels, like a car, the Wright brothers built a flexible airplane that was controlled partly by the movement of the pilot, like a bicycle.

The result was the 1903 Wright Flyer. The Flyer had two sets of wings, one above the other, which were about 12 meters from tip to tip. The brothers made their own 12-horsepower engine, as well as the two propellers the engine spun. The craft had skids instead of wheels. On December 14, 1903, the Wright brothers took the Wright Flyer to the shores of Kitty Hawk, North Carolina, where Wilbur Wright made the first attempt to fly the airplane.

The first thing Wilbur found was that flying an airplane was not as easy as riding a bicycle. One wrong move sent him tumbling into
The Wright Brothers

Orville and his older brother Wilbur first became interested in aircraft when their father gave them a toy helicopter in 1878. Theirs was a large, supportive family. Their father, a minister, and their mother, a college graduate and inventor of household gadgets, encouraged all five of the children to be creative. Although Wilbur, born in 1867, was four years older than Orville, they were close as children. While in high school, they put out a weekly newspaper together, the West Side News, and they opened their bicycle shop in 1892. Orville was the mechanically adept member of the team, the tinkerer; Wilbur was the deliberative one, the planner and designer.

Since the bicycle business was seasonal, they had time to pursue their interest in aircraft, puzzling out the technical problems and studying the successes and failures of others. They started with gliders, flying their first, which had a five-foot wingspan, in 1899. They developed their own technique to control the gliders, the “wing-warping” technique, after watching how birds fly. They attached wires to the trailing edges of the wings and pulled the wires to deform the wings’ shape. They built a sixteen-foot glider in 1900 and spent a vacation in North Carolina gaining flying experience. Further designs and many more tests followed, including more than two hundred shapes of wing studied in their home-built wind tunnel, before their first successful engine-powered flight in 1903.

Neither man ever married. After Wilbur died of typhoid in 1912, Orville was stricken by the loss of his brother but continued to run their business until 1915. He last piloted an airplane himself in 1918 and died thirty years later.

Their first powered airplane, the Wright Flyer, lives on at the National Air and Space Museum in Washington, D.C. Small parts from the aircraft were taken to the Moon by Neil Armstrong and Edwin Aldrin when they made the first landing there in 1969.
the sand only moments after takeoff. Wilbur was not seriously hurt, but a few more days were needed to repair the Wright Flyer.

On December 17, 1903, at 10:35 a.m., after eight years of research and planning, Orville Wright took to the air for a historic twelve seconds. He covered 37 meters of ground and 152 meters of air space. Both brothers took two flights that morning. On the fourth flight, Wilbur flew for fifty-nine seconds over 260 meters of ground and through more than 800 meters of air space. After he had landed, a sudden gust of wind struck the plane, damaging it beyond repair. Yet no one was able to beat their record for three years.
Impact
Those first flights in 1903 got little publicity. Only a few people, such as Octave Chanute, understood the significance of the Wright brothers’ achievement. For the next two years, the brothers continued to work on their design, and by 1905 they had built the Wright Flyer III. Although Chanute tried to get them to enter flying contests, the brothers decided to be cautious and to get their machine patented first, so that no one would be able to steal their ideas.

News of their success spread slowly through the United States and Europe, giving hope to others who were working on airplanes of their own. When the Wright brothers finally went public with the Wright Flyer III, they inspired many new advances. By 1910, when the brothers started flying in air shows and contests, their feats were being matched by another American, Glenn Hammond Curtiss. The age of the airplane had arrived.

Later in the decade, the Wright brothers began to think of military uses for their airplanes. They signed a contract with the U.S. Army Signal Corps and agreed to train military pilots.
Aside from these achievements, the brothers from Dayton, Ohio, set the standard for careful research and practical experimentation. They taught the world not only how to fly but also how to design airplanes. Indeed, their methods of purposeful, meaningful, and highly organized research had an impact not only on airplane design but also on the field of aviation science in general.
See also Bullet train; Cruise missile; Dirigible; Gas-electric car; Propeller-coordinated machine gun; Rocket; Stealth aircraft; Supersonic passenger plane; Turbojet; V-2 rocket.
Further Reading

Brady, Tim. The American Aviation Experience: A History. Carbondale: Southern Illinois University Press, 2000.

Chanute, Octave, Marvin Wilks, Orville Wright, and Wilbur Wright. The Papers of Wilbur and Orville Wright: Including the Chanute-Wright Letters and Other Papers of Octave Chanute. New York: McGraw-Hill, 2000.

Culick, Fred, and Spencer Dunmore. On Great White Wings: The Wright Brothers and the Race for Flight. Toronto: McArthur, 2001.

Howard, Fred. Wilbur and Orville: A Biography of the Wright Brothers. Mineola, N.Y.: Dover Publications, 1998.
Alkaline storage battery
The invention: The nickel-iron alkaline battery was a lightweight, inexpensive portable power source for vehicles with electric motors.

The people behind the invention:
Thomas Alva Edison (1847-1931), American chemist, inventor, and industrialist
Henry Ford (1863-1947), American inventor and industrialist
Charles F. Kettering (1876-1958), American engineer and inventor
A Three-Way Race

The earliest automobiles were little more than pairs of bicycles harnessed together within a rigid frame, and there was little agreement at first regarding the best power source for such contraptions. The steam engine, which was well established for railroad and ship transportation, required an external combustion area and a boiler. Internal combustion engines required hand cranking, which could cause injury if the motor backfired. Electric motors were attractive because they did not require the burning of fuel, but they required batteries that could store a considerable amount of energy and could be repeatedly recharged. Ninety percent of the motorcabs in use in New York City in 1899 were electrically powered.
The first practical storage battery, which was invented by the French physicist Gaston Planté in 1859, employed electrodes (conductors that bring electricity into and out of a conducting medium) of lead and lead oxide and a sulfuric acid electrolyte (a solution that conducts electricity). In somewhat improved form, this remained the only practical rechargeable battery at the beginning of the twentieth century. Edison considered the lead-acid cell (battery) unsuitable as a power source for electric vehicles because using lead, one of the densest metals known, resulted in a heavy battery that added substantially to the power requirements of a motorcar. In addition, the use of an acid electrolyte required that the battery container be either nonmetallic or coated with a nonmetal, and thus less dependable than a steel container.
The Edison Battery

In 1900, Edison began experiments aimed at developing a rechargeable battery with inexpensive and lightweight metal electrodes and an alkaline electrolyte, so that a metal container could be used. He had already been involved in manufacturing the nonrechargeable battery known as the Lalande cell, which had zinc and copper oxide electrodes and a highly alkaline sodium hydroxide electrolyte. Zinc electrodes could not be used in a rechargeable cell because the zinc would dissolve in the electrolyte. The copper electrode also turned out to be unsatisfactory. After much further experimentation, Edison settled on the nickel-iron system for his new storage battery. In this system, the power-producing reaction involved the conversion of nickel oxide to nickel hydroxide together with the oxidation of iron metal to iron oxide, with both materials in contact with a potassium hydroxide solution. When the battery was recharged, the nickel hydroxide was converted into oxide and the iron oxide was converted back to the pure metal.
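The chemistry just described is conventionally summarized in a single overall cell reaction. The equation below is the standard textbook formulation rather than a quotation from this article; the "nickel oxide" of the text corresponds to the nickel oxyhydroxide (NiOOH) shown here, and the iron "oxide" to ferrous hydroxide.

```latex
% Nickel-iron (Edison) cell in potassium hydroxide electrolyte.
% Discharge runs left to right; charging reverses the reaction.
\mathrm{Fe} \;+\; 2\,\mathrm{NiOOH} \;+\; 2\,\mathrm{H_2O}
\;\underset{\text{charge}}{\overset{\text{discharge}}{\rightleftharpoons}}\;
\mathrm{Fe(OH)_2} \;+\; 2\,\mathrm{Ni(OH)_2}
```

Note that the potassium hydroxide does not appear in the overall equation: it is not consumed, which is why, unlike the sulfuric acid of a lead-acid cell, it is compatible with the steel container Edison wanted. The cell delivers a nominal voltage of about 1.2 volts.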
Although the basic ingredients of the Edison cell were inexpensive, they could not readily be obtained in adequate purity for battery use. Edison set up a new chemical works to prepare the needed materials. He purchased impure nickel alloy, which was then dissolved in acid, purified, and converted to the hydroxide. He prepared pure iron powder by using a multiple-step process. For use in the battery, the reactant powders had to be packed in pockets made of nickel-plated steel that had been perforated to allow the iron and nickel powders to come into contact with the electrolyte. Because the nickel compounds were poor electrical conductors, a flaky type of graphite was mixed with the nickel hydroxide at this stage.

Thomas A. Edison. (Library of Congress)
Sales of the new Edison storage battery began in 1904, but within six months it became apparent that the battery was subject to losses in power and a variety of other defects. Edison took the battery off
Thomas Alva Edison

Thomas Alva Edison (1847-1931) was America’s most famous and prolific inventor. His astonishing success story, rising from a home-schooled child who worked as a newsboy to a leader in American industry, was celebrated in children’s books, biographies, and movies. Corporations still bear his name, and his inventions and improvements of others’ inventions—such as the light bulb, phonograph, and motion picture—shaped the way Americans live, work, and entertain themselves. The U.S. Patent Office issued Edison 1,093 patents during his lifetime, the most granted to one person.

Hailed as a genius, Edison himself emphasized the value of plain determination. “Genius is one percent inspiration and 99 percent perspiration,” he insisted. He also understood the value of working with others. In fact, one of his greatest contributions to American technology involved organized research. At age twenty-three he sold the rights to his first major invention, an improved ticker-tape machine for Wall Street brokers, for $40,000. He invested the money in building an industrial research laboratory, the first ever. It led to his large facilities at Menlo Park, New Jersey, and, later, labs in other locations. At times as many as one hundred people worked for him, some of whom, such as Nikola Tesla and Reginald Fessenden, became celebrated inventors in their own right.

At his labs Edison not only developed electrical items, such as the light bulb and storage battery; he also produced an efficient mimeograph and worked on innovations in metallurgy, organic chemistry, photography and motion pictures, and phonography. The phonograph, he once said, was his favorite invention. Edison never stopped working. He was still receiving patents the year he died.
the market in 1905 and offered full-price refunds for the defective batteries. Not a man to abandon an invention, however, he spent the next five years examining the failed batteries and refining his design. He discovered that the repeated charging and discharging of the battery caused a shift in the distribution of the graphite in the nickel hydroxide electrode. By using a different type of graphite, he was able to eliminate this problem and produce a very dependable power source.
The Ford Motor Company, founded by Henry Ford, a former Edison employee, began the large-scale production of gasoline-powered automobiles in 1903 and introduced the inexpensive, easy-to-drive Model T in 1908. The introduction of the improved Edison battery in 1910 gave a boost to electric car manufacturers, but their new position in the market would be short-lived. In 1911, Charles Kettering invented an electric starter for gasoline-powered vehicles that eliminated the need for troublesome and risky hand cranking. By 1915, this device was available on all gasoline-powered automobiles, and public interest in electrically powered cars rapidly diminished. Although the Kettering starter required a battery, it required much less capacity than an electric motor would have and was almost ideally suited to the six-volt lead-acid battery.
Impact

Edison lost the race to produce an electrical power source that would meet the needs of automotive transportation. Instead, the internal combustion engine, mass-produced by Henry Ford, became the standard. Interest in electrically powered transportation diminished as immense reserves of crude oil, from which gasoline could be obtained, were discovered first in the southwestern United States and then on the Arabian peninsula. Nevertheless, the Edison cell found a variety of uses and has been manufactured continuously throughout most of the twentieth century much as Edison designed it.
Electrically powered trucks proved to be well suited for local deliveries, and some department stores maintained fleets of such trucks into the mid-1920’s. Electrical power is still preferable to internal combustion for indoor use, where exhaust fumes are a significant problem, so forklifts in factories and passenger transport vehicles at airports still make use of the Edison-type power source. The Edison battery also continues to be used in mines, in railway signals, in some communications equipment, and as a highly reliable source of standby emergency power.

See also Compressed-air-accumulating power plant; Internal combustion engine; Photoelectric cell; Photovoltaic cell.
Further Reading
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.

Boyd, Thomas Alvin. Professional Amateur: The Biography of Charles Franklin Kettering. New York: Arno Press, 1972.

Bryan, Ford R. Beyond the Model T: The Other Ventures of Henry Ford. Rev. ed. Detroit: Wayne State University Press, 1997.

Cramer, Carol. Thomas Edison. San Diego, Calif.: Greenhaven Press, 2001.

Israel, Paul. Edison: A Life of Invention. New York: Wiley, 2000.
Ammonia
The invention: The first successful method for converting nitrogen from the atmosphere and combining it with hydrogen to synthesize ammonia, a valuable compound used as a fertilizer.

The person behind the invention:
Fritz Haber (1868-1934), a German chemist who won the 1918 Nobel Prize in Chemistry
The Need for Nitrogen

The nitrogen content of the soil, essential to plant growth, is normally maintained by the deposition and decay of old vegetation and by nitrates in rainfall. If, however, the soil is used extensively for agricultural purposes, more intensive methods must be used to maintain soil nutrients such as nitrogen. One such method is crop rotation, in which successive divisions of a farm are planted in rotation with clover, corn, or wheat, for example, or allowed to lie fallow for a year or so. The clover is able to absorb nitrogen from the air and deposit it in the soil through its roots. As population has increased, however, farming has become more intensive, and the use of artificial fertilizers—some containing nitrogen—has become almost universal.
Nitrogen-bearing compounds, such as potassium nitrate and ammonium chloride, have been used for many years as artificial fertilizers. Much of the nitrate used, mainly potassium nitrate, came from Chilean saltpeter, of which half a million tons was imported yearly into Europe and the United States at the beginning of the twentieth century for use in agriculture. Ammonia was produced by dry distillation of bituminous coal and other low-grade fuel materials. Originally, coke ovens discharged this valuable material into the atmosphere, but more economical methods were later found to collect and condense these ammonia-bearing vapors.

At the beginning of the twentieth century, Germany had practically no source of fertilizer-grade nitrogen; almost all of its supply came from the deserts of northern Chile. As demand for nitrates increased, it became apparent that the supply from these vast deposits would not be enough. Other sources needed to be found, and the almost unlimited supply of nitrogen in the atmosphere (nearly 80 percent nitrogen) was an obvious source.
Temperature and Pressure
When Fritz Haber and his coworkers began their experiments on ammonia production in 1904, Haber decided to repeat the experiments of the British chemists Sir William Ramsay and Sydney Young, who in 1884 had studied the decomposition of ammonia at about 800 degrees Celsius. They had found that a certain amount of ammonia was always left undecomposed. In other words, the reaction between ammonia and its constituent elements—nitrogen and hydrogen—had reached a state of equilibrium.

Haber decided to determine the point at which this equilibrium took place at temperatures near 1,000 degrees Celsius. He tried several approaches, reacting pure hydrogen with pure nitrogen, and starting with pure ammonia gas and using iron filings as a catalyst. (Catalytic agents speed up a reaction without otherwise affecting it.) Having determined the point of equilibrium, he next tried different catalysts and found nickel to be as effective as iron, and calcium and manganese even better. At 1,000 degrees Celsius, the rate of reaction was high enough to produce practical amounts of ammonia continuously.
Further work by Haber showed that increasing the pressure also<br />
increased the percentage of ammonia at equilibrium. For example,<br />
at 300 degrees Celsius, the percentage of ammonia at equilibrium at<br />
1 atmosphere of pressure was very small, but at 200 atmospheres,<br />
the percentage of ammonia at equilibrium was far greater. A pilot<br />
plant was constructed <strong>and</strong> was successful enough to impress a<br />
chemical company, Badische Anilin- und Soda-Fabrik (BASF). BASF<br />
agreed to study Haber’s process <strong>and</strong> to investigate different catalysts<br />
on a large scale. Soon thereafter, the process became a commercial<br />
success.
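The equilibrium Haber studied can be summarized in a standard reaction equation; the enthalpy figure given below is the commonly cited textbook value and is not taken from this article:<br />

```latex
% The Haber equilibrium: four moles of gas on the left combine into
% two on the right, so by Le Chatelier's principle raising the pressure
% shifts the balance toward ammonia. Because the forward reaction
% releases heat, lower temperatures also favor the product, though a
% catalyst is then needed to reach equilibrium at a practical rate.
\begin{equation*}
  \mathrm{N_2(g) + 3\,H_2(g) \;\rightleftharpoons\; 2\,NH_3(g)},
  \qquad \Delta H \approx -92\ \mathrm{kJ/mol}
\end{equation*}
```

This is consistent with the observation above that 200 atmospheres yields a far higher equilibrium percentage of ammonia than 1 atmosphere does.<br />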
Impact<br />
Fritz Haber<br />
Fritz Haber’s career is a warning to inventors: Beware of<br />
what you create, even if your intentions are honorable.<br />
Considered a leading chemist of his age, Haber was born in<br />
Breslau (now Wroclaw, Poland) in 1868. A brilliant student, he<br />
earned a doctorate quickly, specializing in organic chemistry,<br />
<strong>and</strong> briefly worked as an industrial chemist. Although he soon<br />
took an academic job, throughout his career Haber believed<br />
that science must benefit society—new theoretical discoveries<br />
must find practical applications.<br />
Beginning in 1904, he applied new chemical techniques<br />
to fix atmospheric nitrogen in the form of ammonia.<br />
Nitrogen in the form of nitrates was urgently<br />
sought because nitrates were necessary to fertilize<br />
crops <strong>and</strong> natural sources were becoming rare. Only<br />
artificial nitrates could sustain the amount of agriculture<br />
needed to feed expanding populations. In 1908<br />
Haber succeeded in finding an efficient, cheap process<br />
to make ammonia <strong>and</strong> convert it to nitrates, <strong>and</strong><br />
by 1910 German manufacturers had built large plants<br />
to exploit his techniques. He was lauded as a great benefactor to<br />
humanity.<br />
However, his efforts to help Germany during World War I,<br />
even though he hated war, turned his life into a nightmare. His<br />
wife committed suicide because of his chlorine gas research,<br />
which also poisoned his international reputation <strong>and</strong> tainted<br />
his 1918 Nobel Prize in Chemistry. After the war he redirected<br />
his energies to helping Germany rebuild its economy. Eight<br />
years of experiments in extracting gold from seawater ended in<br />
failure, but he did raise the Kaiser Wilhelm Institute for Physical<br />
Chemistry, which he directed, to international prominence.<br />
Nonetheless, Haber had to flee Adolf Hitler’s Nazi regime in<br />
1933 <strong>and</strong> died a year later, better known for his war research<br />
than for his fundamental service to agriculture <strong>and</strong> industry.<br />
With the beginning of World War I, nitrates were needed more<br />
urgently for use in explosives than in agriculture. After the fall of<br />
Antwerp, 50,000 tons of Chilean saltpeter were discovered in the
harbor and fell into German hands. Because the ammonia from<br />
Haber’s process could be converted readily into nitrates, it became<br />
an important war resource. Haber’s other contribution to the German<br />
war effort was his development of poison gas, which was used<br />
for the chlorine gas attack on Allied troops at Ypres in 1915. He also<br />
directed research on gas masks <strong>and</strong> other protective devices.<br />
At the end of the war, the 1918 Nobel Prize in Chemistry was<br />
awarded to Haber for his development of the process for making<br />
synthetic ammonia. Because the war was still fresh in everyone’s<br />
memory, it became one of the most controversial Nobel awards ever<br />
made. A headline in The New York Times for January 26, 1920, stated:<br />
“French Attack Swedes for Nobel Prize Award: Chemistry Honor<br />
Given to Dr. Haber, Inventor of German Asphyxiating Gas.” In a letter<br />
to the Times on January 28, 1920, the Swedish legation in Washington,<br />
D.C., defended the award.<br />
Haber left Germany in 1933 under duress from the anti-Semitic<br />
policies of the Nazi authorities. He was invited to accept a position<br />
with the University of Cambridge, England, and died on a trip to<br />
Basel, Switzerland, a few months later, a great man whose spirit had<br />
been crushed by the actions of an evil regime.<br />
See also Fuel cell; Refrigerant gas; Silicones; Thermal cracking<br />
process.<br />
Further Reading<br />
Goran, Morris Herbert. The Story of Fritz Haber. Norman: University<br />
of Oklahoma Press, 1967.<br />
Jansen, Sarah. “Chemical-Warfare Techniques for Insect Control: Insect<br />
‘Pests’ in Germany Before <strong>and</strong> After World War I.” Endeavour<br />
24, no. 1 (March, 2000).<br />
Smil, Vaclav. Enriching the Earth: Fritz Haber, Carl Bosch, <strong>and</strong> the Transformation<br />
of World Food Production. Cambridge, Mass.: MIT Press,<br />
2001.
Amniocentesis<br />
The invention: A technique for removing amniotic fluid from<br />
pregnant women, amniocentesis became a life-saving tool for diagnosing<br />
fetal maturity, health, <strong>and</strong> genetic defects.<br />
The people behind the invention:<br />
Douglas Bevis, an English physician<br />
Aubrey Milunsky (1936- ), an American pediatrician<br />
How Babies Grow<br />
For thous<strong>and</strong>s of years, the inability to see or touch a fetus in the<br />
uterus was a staggering problem in obstetric care <strong>and</strong> in the diagnosis<br />
of the future mental <strong>and</strong> physical health of human offspring. A<br />
beginning to the solution of this problem occurred on February 23,<br />
1952, when The Lancet published a study called “The Antenatal Prediction<br />
of a Hemolytic Disease of the Newborn.” This study, carried<br />
out by physician Douglas Bevis, described the use of amniocentesis<br />
to assess the risk factors found in the fetuses of Rh-negative women<br />
impregnated by Rh-positive men. The article is viewed by many as a<br />
landmark in medicine that led to the wide use of amniocentesis as a<br />
tool for diagnosing fetal maturity, fetal health, and fetal genetic<br />
defects.<br />
At the beginning of a human pregnancy (conception) an egg <strong>and</strong><br />
a sperm unite to produce the fertilized egg that will become a new<br />
human being. After conception, the fertilized egg passes from the<br />
oviduct into the uterus, while dividing <strong>and</strong> becoming an organized<br />
cluster of cells capable of carrying out different tasks in the nine-month-long<br />
series of events leading up to birth.<br />
About a week after conception, the cluster of cells, now a “vesicle”<br />
(a fluid-filled sac containing the new human cells), attaches<br />
to the uterine lining, penetrates it, <strong>and</strong> becomes intimately intertwined<br />
with uterine tissues. In time, the merger between the vesicle<br />
<strong>and</strong> the uterus results in formation of a placenta that connects the<br />
mother <strong>and</strong> the embryo, <strong>and</strong> an amniotic sac filled with the amniotic<br />
fluid in which the embryo floats.
[Figure: cross-section of the uterus showing the amniotic sac, amniotic<br />
fluid, and placenta. Caption: Physicians extract amniotic fluid directly<br />
from the womb and examine it to determine the health of the fetus.]<br />
Eight weeks after conception, the embryo (now a fetus) is about 2.5<br />
centimeters long and possesses all the anatomic elements it will have<br />
when it is born. At this time, about two and one-half months after her<br />
last menstruation, the expectant mother typically visits a physician and<br />
finds out she is pregnant. Also at this time, expecting mothers often<br />
begin to worry about possible birth defects in the babies they carry.<br />
Diabetic mothers and mothers older than thirty-five years have higher<br />
than usual chances of delivering babies who have birth defects.<br />
Many other factors inferred from the medical history an expecting<br />
mother provides to her physician can indicate the possible appearance<br />
of birth defects. In some cases, knowledge of possible<br />
physical problems in a fetus may allow their treatment in the uterus<br />
<strong>and</strong> save the newborn from problems that could persist throughout<br />
life or lead to death in early childhood. Information is obtained<br />
through the examination of the amniotic fluid in which the fetus is<br />
suspended throughout pregnancy. The process of obtaining this<br />
fluid is called “amniocentesis.”<br />
Diagnosing Diseases Before Birth<br />
Amniocentesis is carried out in several steps. First, the placenta<br />
<strong>and</strong> the fetus are located by the use of ultrasound techniques. Next,<br />
the expecting mother may be given a local anesthetic; a long needle<br />
is then inserted carefully into the amniotic sac. As soon as amniotic<br />
fluid is seen, a small sample (about four teaspoons) is drawn into a<br />
hypodermic syringe <strong>and</strong> the syringe is removed. Amniocentesis is
nearly painless, <strong>and</strong> most patients feel only a little abdominal pressure<br />
during the procedure.<br />
The amniotic fluid of early pregnancy resembles blood serum.<br />
As pregnancy continues, its content of substances from fetal urine<br />
<strong>and</strong> other fetal secretions increases. The fluid also contains fetal cells<br />
from skin <strong>and</strong> from the gastrointestinal, reproductive, <strong>and</strong> respiratory<br />
tracts. Therefore, it is of great diagnostic use. Immediately after<br />
the fluid is withdrawn, the fetal cells are separated out.<br />
Then, the cells are used for genetic analysis <strong>and</strong> the amniotic fluid is<br />
examined by means of various biochemical techniques.<br />
One important use of the amniotic fluid from amniocentesis is<br />
the determination of its lecithin <strong>and</strong> sphingomyelin content. Lecithins<br />
<strong>and</strong> sphingomyelins are two types of body lipids (fatty molecules)<br />
that are useful diagnostic tools. Lecithins are important because<br />
they are essential components of the so-called pulmonary<br />
surfactant of mature lungs. The pulmonary surfactant acts at lung<br />
surfaces to prevent the collapse of the lung air sacs (alveoli) when a<br />
person exhales.<br />
Subnormal lecithin production in a fetus indicates that it most<br />
likely will exhibit respiratory distress syndrome or a disease called<br />
“hyaline membrane disease” after birth. Both diseases can be fatal,<br />
so it is valuable to determine whether fetal lecithin levels are adequate<br />
for appropriate lung function in the newborn baby. This is<br />
particularly important in fetuses being carried by diabetic mothers,<br />
who frequently produce newborns with such problems. Often, when<br />
the risk of respiratory distress syndrome is identified through amniocentesis,<br />
hormones that help the fetus develop mature lungs can be administered.<br />
This effect is then confirmed by the repeated<br />
use of amniocentesis. Many other problems can also be identified by<br />
the use of amniocentesis <strong>and</strong> corrected before the baby is born.<br />
Consequences<br />
In the years that have followed Bevis’s original observation, many<br />
improvements in the methodology of amniocentesis <strong>and</strong> in the techniques<br />
used in gathering <strong>and</strong> analyzing the genetic <strong>and</strong> biochemical<br />
information obtained have led to good results. Hundreds of debilitating<br />
hereditary diseases can be diagnosed—and some ameliorated—by<br />
the examination of amniotic fluid and fetal cells isolated by amniocentesis.<br />
For many parents who have had a child afflicted by some hereditary<br />
disease, the use of the technique has become a major consideration<br />
in family planning. Furthermore, many physicians recommend strongly<br />
that all mothers over the age of thirty-four be tested by amniocentesis<br />
to assist in the diagnosis of Down syndrome, a congenital but nonhereditary<br />
form of mental deficiency.<br />
There remains the question of whether such solutions are morally<br />
appropriate, but parents—<strong>and</strong> society—now have a choice resulting<br />
from the techniques that have developed since Bevis’s 1952<br />
observation. It is also hoped that these techniques will lead to<br />
means for correcting <strong>and</strong> preventing diseases <strong>and</strong> preclude the need<br />
for considering the therapeutic termination of any pregnancy.<br />
See also Abortion pill; Birth control pill; CAT scanner; Electrocardiogram;<br />
Electroencephalogram; Mammography; Nuclear magnetic<br />
resonance; Pap test; Ultrasound; X-ray image intensifier.<br />
Further Reading<br />
Milunsky, Aubrey. Genetic Disorders <strong>and</strong> the Fetus: Diagnosis, Prevention,<br />
<strong>and</strong> Treatment. 3d ed. Baltimore: Johns Hopkins University<br />
Press, 1992.<br />
Rapp, Rayna. Testing Women, Testing the Fetus: The Social Impact of<br />
Amniocentesis in America. New York: Routledge, 1999.<br />
Rothenberg, Karen H., <strong>and</strong> Elizabeth Jean Thomson. Women <strong>and</strong> Prenatal<br />
Testing: Facing the Challenges of Genetic Technology. Columbus:<br />
Ohio State University Press, 1994.<br />
Rothman, Barbara Katz. The Tentative Pregnancy: How Amniocentesis<br />
Changes the Experience of Motherhood. New York: Norton, 1993.
Antibacterial drugs<br />
The invention: Sulfonamides <strong>and</strong> other drugs that have proved effective<br />
in combating many previously untreatable bacterial diseases.<br />
The people behind the invention:<br />
Gerhard Domagk (1895-1964), a German physician who was<br />
awarded the 1939 Nobel Prize in Physiology or Medicine<br />
Paul Ehrlich (1854-1915), a German chemist <strong>and</strong> bacteriologist<br />
who was the cowinner of the 1908 Nobel Prize in Physiology<br />
or Medicine<br />
The Search for Magic Bullets<br />
Although quinine had been used to treat malaria long before the<br />
twentieth century, Paul Ehrlich, who discovered a large number of<br />
useful drugs, is usually considered the father of modern chemotherapy.<br />
Ehrlich was familiar with the technique of using dyes to stain<br />
microorganisms in order to make them visible under a microscope,<br />
<strong>and</strong> he suspected that some of these dyes might be used to poison<br />
the microorganisms responsible for certain diseases without hurting<br />
the patient. Ehrlich thus began to search for dyes that could act<br />
as “magic bullets” that would destroy microorganisms <strong>and</strong> cure<br />
diseases. From 1906 to 1910, Ehrlich tested numerous compounds<br />
that had been developed by the German dye industry. He eventually<br />
found that a number of complex trypan dyes would inhibit the<br />
protozoans that caused African sleeping sickness.<br />
Ehrlich <strong>and</strong> his coworkers also synthesized hundreds of organic<br />
compounds that contained arsenic. In 1910, he found that one of<br />
these compounds, salvarsan, was useful in curing syphilis, a sexually<br />
transmitted disease caused by the bacterium Treponema pallidum. This<br />
was an important discovery, because syphilis killed thousands of<br />
people each year. Salvarsan, however, was often toxic to patients,<br />
because it had to be taken in large doses for as long as two years to<br />
effect a cure. Ehrlich thus searched for <strong>and</strong> found a less toxic arsenic<br />
compound, neosalvarsan, which replaced salvarsan in 1912.
In 1915, tartar emetic (a compound containing the metal antimony)<br />
was found to be useful in treating kala-azar, which was<br />
caused by a protozoan. Kala-azar affected millions of people in Africa,<br />
India, <strong>and</strong> Asia, causing much suffering <strong>and</strong> many deaths each<br />
year. Two years later, it was discovered that injection of tartar emetic<br />
into the blood of persons suffering from bilharziasis killed the<br />
flatworms infecting the bladder, liver, <strong>and</strong> spleen. In 1920, suramin,<br />
a colorless compound developed from trypan red, was introduced<br />
to treat African sleeping sickness. It was much less toxic to the patient<br />
than any of the drugs Ehrlich had developed, <strong>and</strong> a single dose<br />
would give protection for more than a month. From the dye methylene<br />
blue, chemists made mepacrine, a drug that was effective<br />
against the protozoans that cause malaria. This chemical was introduced<br />
in 1933 <strong>and</strong> used during World War II; its principal drawback<br />
was that it could cause a patient’s skin to become yellow.<br />
Well Worth the Effort<br />
Gerhard Domagk had been trained in medicine, but he turned to<br />
research in an attempt to discover chemicals that would inhibit or<br />
kill microorganisms. In 1927, he became director of experimental<br />
pathology <strong>and</strong> bacteriology at the Elberfeld laboratories of the German<br />
chemical firm I. G. Farbenindustrie. Ehrlich’s discovery that<br />
trypan dyes selectively poisoned microorganisms suggested to Domagk<br />
that he look for antimicrobials in a new group of chemicals<br />
known as azo dyes. A number of these dyes were synthesized<br />
from sulfonamides <strong>and</strong> purified by Fritz Mietzsch <strong>and</strong> Josef Klarer.<br />
Domagk found that many of these dyes protected mice infected<br />
with the bacterium Streptococcus pyogenes. In 1932, he discovered that<br />
one of these dyes was much more effective than any tested previously.<br />
This red azo dye containing a sulfonamide was named prontosil<br />
rubrum.<br />
From 1932 to 1935, Domagk conducted a rigorous testing program to<br />
determine the effectiveness <strong>and</strong> dangers of prontosil use at different<br />
doses in animals. Since all chemicals injected into animals or humans<br />
are potentially dangerous, Domagk determined the doses that<br />
harmed or killed. In addition, he worked out the lowest doses that<br />
would eliminate the pathogen. The firm supplied samples of the
drug to physicians to carry out clinical trials on humans. (Animal<br />
experimentation can give only an indication of which chemicals<br />
might be useful in humans <strong>and</strong> which doses are required.)<br />
Domagk thus learned which doses were effective <strong>and</strong> safe. This<br />
knowledge saved his daughter’s life. One day while knitting, Domagk’s<br />
daughter punctured her finger with a needle <strong>and</strong> was infected<br />
with virulent bacteria, which quickly multiplied and spread<br />
from the wound into neighboring tissues. In an attempt to alleviate<br />
the swelling, the infected area was lanced <strong>and</strong> allowed to drain, but<br />
this did not stop the infection from spreading. The child became<br />
critically ill with developing septicemia, or blood poisoning.<br />
In those days, more than 75 percent of those who acquired blood<br />
infections died. Domagk realized that the chances for his daughter’s<br />
survival were poor. In desperation, he obtained some of the powdered<br />
prontosil that had worked so well on infected animals. He extrapolated<br />
from his animal experiments how much to give his<br />
daughter so that the bacteria would be killed but his daughter<br />
would not be poisoned. Within hours of the first treatment, her fever<br />
dropped, <strong>and</strong> she recovered completely after repeated doses of<br />
prontosil.<br />
Impact<br />
Directly <strong>and</strong> indirectly, Ehrlich’s <strong>and</strong> Domagk’s work served to<br />
usher in a new medical age. Prior to the discovery that prontosil<br />
could be used to treat bacterial infection and the subsequent development<br />
of a series of sulfonamides, or “sulfa drugs,” there was no<br />
chemical defense against this type of disease; as a result, illnesses<br />
such as streptococcal infection, gonorrhea, <strong>and</strong> pneumonia held terrors<br />
of which they have largely been shorn. A small injury could easily<br />
lead to death.<br />
By following the clues presented by the synthetic sulfa drugs <strong>and</strong><br />
how they worked to destroy bacteria, other scientists were able to<br />
develop an even more powerful type of drug, the antibiotic. When<br />
the American bacteriologist Rene Dubos discovered that natural organisms<br />
could also be used to fight bacteria, interest was renewed in<br />
an earlier discovery by the Scottish bacteriologist Sir Alexander Fleming:<br />
the development of penicillin.<br />
Antibiotics such as penicillin <strong>and</strong> streptomycin have become<br />
some of the most important tools in fighting disease. Antibiotics<br />
have replaced sulfa drugs for most uses, in part because they cause<br />
fewer side effects, but sulfa drugs are still used for a handful of purposes.<br />
Together, sulfonamides <strong>and</strong> antibiotics have offered the possibility<br />
of a cure to millions of people who previously would have<br />
had little chance of survival.<br />
See also Penicillin; Polio vaccine (Sabin); Polio vaccine (Salk);<br />
Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.<br />
Further Reading<br />
Alstaedter, Rosemarie. From Germanin to Acylureidopenicillin: Research<br />
That Made History: Documentation of a Scientific Revolution:<br />
Dedicated to Gerhardt Domagk on the Eighty-fifth Anniversary of His<br />
Birth. Leverkusen, West Germany: Bayer AG, 1980.<br />
Baumler, Ernst. Paul Ehrlich: Scientist for Life. New York: Holmes <strong>and</strong><br />
Meier, 1984.<br />
Galdston, Iago. Behind the Sulfa Drugs, a Short History of Chemotherapy.<br />
New York: D. Appleton-Century, 1943.<br />
Physiology or Medicine, 1922-1941. River Edge, N.J.: World Scientific,<br />
1999.
Apple II computer<br />
The invention: The first commercially available, preassembled<br />
personal computer, the Apple II helped move computers out of<br />
the workplace <strong>and</strong> into the home.<br />
The people behind the invention:<br />
Stephen Wozniak (1950- ), cofounder of Apple <strong>and</strong> designer<br />
of the Apple II computer<br />
Steven Jobs (1955- ), cofounder of Apple<br />
Regis McKenna (1939- ), owner of the Silicon Valley public<br />
relations <strong>and</strong> advertising company that h<strong>and</strong>led the Apple<br />
account<br />
Chris Espinosa (1961- ), the high school student who wrote<br />
the BASIC program shipped with the Apple II<br />
Randy Wigginton (1960- ), a high school student and Apple<br />
software programmer<br />
Inventing the Apple<br />
As late as the 1960’s, not many people in the computer industry<br />
believed that a small computer could be useful to the average person.<br />
It was through the effort of two friends from the Silicon Valley—the<br />
high-technology area between San Francisco <strong>and</strong> San Jose—<br />
that the personal computer revolution was started.<br />
Both Steven Jobs <strong>and</strong> Stephen Wozniak had attended Homestead<br />
High School in Los Altos, California, <strong>and</strong> both developed early interests<br />
in technology, especially computers. In 1971, Wozniak built<br />
his first computer from spare parts. Shortly after this, he was introduced<br />
to Jobs. Jobs had already developed an interest in electronics<br />
(he once telephoned William Hewlett, cofounder of Hewlett-<br />
Packard, to ask for parts), <strong>and</strong> he <strong>and</strong> Wozniak became friends.<br />
Their first business together was the construction <strong>and</strong> sale of “blue<br />
boxes,” illegal devices that allowed the user to make long-distance<br />
telephone calls for free.<br />
After attending college, the two took jobs within the electronics<br />
industry. Wozniak began working at Hewlett-Packard, where he
studied calculator design, <strong>and</strong> Jobs took a job at Atari, the video<br />
company. The friendship paid off again when Wozniak, at Jobs’s request,<br />
designed the game “Breakout” for Atari, <strong>and</strong> the pair was<br />
paid seven hundred dollars.<br />
In 1975, the Altair computer, a personal computer in kit form,<br />
was introduced by Micro Instrumentation <strong>and</strong> Telemetry Systems<br />
(MITS). Shortly thereafter, the first personal computer club, the<br />
Homebrew Computer Club, began meeting in Menlo Park, near<br />
Stanford University. Wozniak <strong>and</strong> Jobs began attending the meeting<br />
regularly. Wozniak eagerly examined the Altairs that others<br />
brought. He thought that the design could be improved. In only a<br />
few more weeks, he produced a circuit board <strong>and</strong> interfaces that<br />
connected it to a keyboard <strong>and</strong> a video monitor. He showed the machine<br />
at a Homebrew meeting <strong>and</strong> distributed photocopies of the<br />
design.<br />
In this new machine, which he named an “Apple,” Jobs saw a big<br />
opportunity. He talked Wozniak into forming a partnership to develop<br />
personal computers. Jobs sold his car, <strong>and</strong> Wozniak sold his<br />
two Hewlett-Packard calculators; with the money, they ordered<br />
printed circuit boards made. Their break came when Paul Terrell, a<br />
retailer, was so impressed that he ordered fifty fully assembled Apples.<br />
Within thirty days, the computers were completed, <strong>and</strong> they<br />
sold for a fairly high price: $666.66.<br />
During the summer of 1976, Wozniak kept improving the Apple.<br />
The new computer would come with a keyboard, an internal power<br />
supply, a built-in computer language called the “Beginner’s All-<br />
Purpose Symbolic Instruction Code” (BASIC), hookups for adding<br />
printers <strong>and</strong> other devices, <strong>and</strong> color graphics, all enclosed in a plastic<br />
case. The output would be seen on a television screen. The machine<br />
would sell for twelve hundred dollars.<br />
Selling the Apple<br />
Regis McKenna was the head of the Regis McKenna Public Relations<br />
agency, the best of the public relations firms serving the valley’s<br />
high-technology industries, and Jobs wanted it to handle<br />
the Apple account. At first, McKenna rejected the offer, but<br />
Jobs’s constant pleading finally convinced him. The agency’s first<br />
Steven Jobs<br />
While IBM <strong>and</strong> other corporations were devoting massive<br />
resources <strong>and</strong> talent to designing a small computer in 1975,<br />
Steven Paul Jobs <strong>and</strong> Stephen Wozniak, members of the tiny<br />
Homebrew Computer Club, put together the first truly user-friendly<br />
personal computer in Wozniak’s home. Jobs admitted<br />
later that “Woz” was the engineering brains. Jobs himself was<br />
the brains of design <strong>and</strong> marketing. Both had to scrape together<br />
money for the project from their small salaries as low-level electronics<br />
workers. Within eight years, Jobs headed the most progressive<br />
company in the new personal computer industry <strong>and</strong><br />
was worth an estimated $210 million.<br />
Little in his background foretold such fast, large material<br />
success. Jobs was born in 1955 and was given up for adoption. Adopted<br />
by Paul <strong>and</strong> Clara Jobs, he grew up in California towns near the<br />
area that became known as Silicon Valley. He did not like school<br />
much <strong>and</strong> was considered a loner, albeit one who always had a<br />
distinctive way of thinking about things. Still in high school, he<br />
impressed William Hewlett, cofounder of Hewlett-Packard in<br />
Palo Alto, <strong>and</strong> won a summer job at the company, as well as<br />
some free equipment for one of his school projects.<br />
However, he dropped out of Reed College after one semester<br />
<strong>and</strong> became a hippie. He studied philosophy <strong>and</strong> Chinese<br />
<strong>and</strong> Indian mysticism. He became a vegetarian <strong>and</strong> practiced<br />
meditation. He even shaved his head <strong>and</strong> traveled to India on a<br />
spiritual pilgrimage. When he returned to America, however,<br />
he also returned to his interest in electronics <strong>and</strong> computers.<br />
Through various jobs at his original company, Apple, and elsewhere,<br />
he remained in the computer industry.<br />
contributions to Apple were the colorful striped Apple logo <strong>and</strong> a<br />
color ad in Playboy magazine.<br />
In February, 1977, the first Apple Computer office was opened in<br />
Cupertino, California. By this time, two of Wozniak’s friends from<br />
Homebrew, Randy Wigginton and Chris Espinosa—both high school<br />
students—had joined the company. Their specialty was writing software.<br />
Espinosa worked through his Christmas vacation so that BASIC<br />
(the built-in computer language) could ship with the computer.<br />
The team pushed ahead to complete the new Apple in time to<br />
display it at the First West Coast Computer Faire in April, 1977. At<br />
this time, the name “Apple II” was chosen for the new model. The<br />
Apple II computer debuted at the convention <strong>and</strong> included many<br />
innovations. The “motherboard” was far simpler <strong>and</strong> more elegantly<br />
designed than that of any previous computer, <strong>and</strong> the ease of<br />
connecting the Apple II to a television screen made it that much<br />
more attractive to consumers.<br />
Consequences<br />
The introduction of the Apple II computer launched what was to<br />
be a wave of new computers aimed at the home <strong>and</strong> small-business<br />
markets. Within a few months of the Apple II’s introduction, Commodore<br />
introduced its PET computer and Tandy Corporation/Radio<br />
Shack brought out its TRS-80. Apple continued to increase the<br />
types of things that its computers could do <strong>and</strong> worked out a distribution<br />
deal with the new ComputerLand chain of stores.<br />
In December, 1977, Wozniak began work on creating a floppy<br />
disk system for the Apple II. (A floppy disk is a small, flexible plastic<br />
disk coated with magnetic material. The magnetized surface enables<br />
computer data to be stored on the disk.) The cassette tape storage<br />
on which all personal computers then depended was slow and<br />
unreliable. Floppy disks, which had been introduced for larger computers<br />
by the International Business Machines (IBM) Corporation in<br />
1970, were fast and reliable. As he did with everything that interested<br />
him, Wozniak spent almost all of his time learning about and<br />
designing a floppy disk drive. When the final drive shipped in June,<br />
1978, it made possible development of more powerful software for<br />
the computer.<br />
By 1980, Apple had sold 130,000 Apple II’s. That year, the company<br />
went public, and Jobs and Wozniak, among others, became<br />
wealthy. Three years later, Apple became the youngest company to<br />
make the Fortune 500 list of the largest industrial companies. By<br />
then, IBM had entered the personal computer field and had begun<br />
to dominate it, but the Apple II’s earlier success ensured that personal<br />
computers would not be a market fad. By the end of the<br />
1980’s, 35 million personal computers would be in use.
See also BINAC computer; Colossus computer; ENIAC computer;<br />
Floppy disk; Hard disk; IBM Model 1401 computer; Personal<br />
computer; UNIVAC computer.<br />
Further Reading<br />
Carlton, Jim. Apple: The Inside Story of Intrigue, Egomania, and Business<br />
Blunders. Rev. ed. London: Random House, 1999.<br />
Gold, Rebecca. Steve Wozniak: A Wizard Called Woz. Minneapolis:<br />
Lerner, 1994.<br />
Linzmayer, Owen W. Apple Confidential: The Real Story of Apple Computer,<br />
Inc. San Francisco: No Starch Press, 1999.<br />
Moritz, Michael. The Little Kingdom: The Private Story of Apple Computer.<br />
New York: Morrow, 1984.<br />
Rose, Frank. West of Eden: The End of Innocence at Apple Computer.<br />
New York: Viking, 1989.
Aqualung<br />
The invention: A device that allows divers to descend hundreds of<br />
meters below the surface of the ocean by enabling them to carry<br />
the air they breathe with them.<br />
The people behind the invention:<br />
Jacques-Yves Cousteau (1910-1997), a French navy officer,<br />
undersea explorer, inventor, and author<br />
Émile Gagnan, a French engineer who invented an automatic<br />
air-regulating device<br />
The Limitations of Early Diving<br />
Undersea dives have been made since ancient times for the purposes<br />
of spying, recovering lost treasures from wrecks, and obtaining<br />
natural treasures (such as pearls). Many attempts have been made<br />
since then to prolong the amount of time divers could remain underwater.<br />
The first device, described by the Greek philosopher Aristotle<br />
in 335 b.c.e., was probably the ancestor of the modern snorkel. It was<br />
a bent reed placed in the mouth, with one end above the water.<br />
In addition to depth limitations set by the length of the reed,<br />
pressure considerations also presented a problem. The pressure on<br />
a diver’s body increases by about one atmosphere (roughly 15 pounds<br />
per square inch) for every 10 meters ventured below the surface. After<br />
descending about 0.9 meter, inhaling surface air through a snorkel<br />
becomes difficult because the human chest muscles are no longer<br />
strong enough to inflate the chest. In order to breathe at or below this<br />
depth, a diver must breathe air that has been pressurized; moreover,<br />
that pressure must be able to vary as the diver descends or ascends.<br />
Few changes were possible in the technology of diving until air<br />
compressors were invented during the early nineteenth century.<br />
Fresh, pressurized air could then be supplied to divers. At first, the<br />
divers who used this method had to wear diving suits, complete<br />
with fishbowl-like helmets. This “tethered” diving made divers relatively<br />
immobile but allowed them to search for sunken treasure or<br />
do other complex jobs at great depths.<br />
The Development of Scuba Diving<br />
The invention of scuba gear gave divers more freedom to<br />
move about and made them less dependent on heavy equipment.<br />
(“Scuba” stands for self-contained underwater breathing apparatus.)<br />
Its development occurred in several stages. In 1880, Henry<br />
Fleuss of England developed an outfit that used a belt containing<br />
pure oxygen. Belt and diver were connected, and the diver breathed<br />
the oxygen over and over. A version of this system was used by the<br />
U.S. Navy in World War II spying efforts. Nevertheless, it had serious<br />
drawbacks: Pure oxygen was toxic to divers at depths greater<br />
than 9 meters, and divers could carry only enough oxygen for relatively<br />
short dives. It did have an advantage for spies, namely, that<br />
the oxygen—breathed over and over in a closed system—did not<br />
reach the surface in the form of telltale bubbles.<br />
The next stage of scuba development occurred with the design<br />
of metal tanks that were able to hold highly compressed air.<br />
This enabled divers to use air rather than the potentially toxic<br />
pure oxygen. More important, being hooked up to a greater supply<br />
of air meant that divers could stay under water longer. Initially,<br />
the main problem with the system was that the air flowed continuously<br />
through a mask that covered the diver’s entire face. This process<br />
wasted air, and the scuba divers expelled a continual stream<br />
of air bubbles that made spying difficult. The solution, according to<br />
Axel Madsen’s Cousteau (1986), was “a valve that would allow inhaling<br />
and exhaling through the same mouthpiece.”<br />
Jacques-Yves Cousteau’s father was an executive for Air Liquide—<br />
France’s main producer of industrial gases. He was able to direct<br />
Cousteau to Émile Gagnan, an engineer at the company’s Paris laboratory<br />
who had been developing an automatic gas shutoff valve for Air<br />
Liquide. This valve became the Cousteau-Gagnan regulator, a breathing<br />
device that fed air to the diver at just the right pressure whenever<br />
he or she inhaled.<br />
With this valve—and funding from Air Liquide—Cousteau and<br />
Gagnan set out to design what would become the Aqualung. The<br />
first Aqualungs could be used at depths of up to 68.5 meters. During<br />
testing, however, the dangers of Aqualung diving became apparent.<br />
For example, unless divers ascended and descended in slow stages,
Jacques-Yves Cousteau<br />
The son of a businessman who liked to travel, Jacques-Yves<br />
Cousteau acquired the same wanderlust. Born in 1910 in Saint-<br />
André-de-Cubzac, France, he was a sickly child, but he learned<br />
to love swimming and the ocean. He also took an interest in<br />
movies, producing his first film when he was thirteen.<br />
Cousteau graduated from France’s naval academy, but his<br />
career as an officer ended with a nearly fatal car accident in<br />
1936. He went to Toulon, where he returned to his interests in<br />
the sea and photography, a period that culminated in his invention<br />
of the aqualung with Émile Gagnan in 1944. During<br />
World War II he also won a Légion d’honneur for<br />
his photographic espionage. The French Navy established<br />
the Underwater Research Group for Cousteau<br />
in 1944, and after the war the venture evolved into the<br />
freewheeling, worldwide voyages that Cousteau became<br />
famous for. Aboard the Calypso, a converted<br />
U.S. minesweeper, he and his crew conducted research<br />
and pioneered underwater photography. His<br />
1957 documentary The Silent World (based on a 1953<br />
book) won an Oscar and the Palme d’Or of the Cannes film<br />
festival. Subsequent movies and The Undersea World of Jacques<br />
Cousteau, a television series, established Cousteau as a leading<br />
environmentalist and science educator. His Cousteau Society,<br />
dedicated to exploring and protecting the oceans, attracted<br />
millions of members worldwide. Through it he launched another<br />
innovative technology, “Turbosails,” towering non-rotating<br />
cylinders that act as sails to reduce ships’ dependency on oil-fueled<br />
engines. A new ship propelled by them, the Alcyone, eventually<br />
replaced the Calypso.<br />
Cousteau inspired legions of oceanographers and environmentalists<br />
while calling attention to pressing problems in the<br />
world’s oceans. Although his later years were marked by family<br />
tragedies and controversy, he was revered throughout the<br />
world <strong>and</strong> had received many honors when he died in 1997.<br />
it was likely that they would get “the bends” (decompression sickness),<br />
the feared disease of earlier, tethered deep-sea divers. Another<br />
problem was that, below 42.6 meters, divers encountered nitrogen<br />
narcosis. (This can lead to impaired judgment that may cause<br />
fatal actions, including removing a mouthpiece or developing an<br />
overpowering desire to continue diving downward, to dangerous<br />
depths.)<br />
Cousteau believed that the Aqualung had tremendous military<br />
potential. During World War II, he traveled to London soon after the<br />
Normandy invasion, hoping to persuade the Allied Powers of its<br />
usefulness. He was not successful. So Cousteau returned to Paris<br />
and convinced France’s new government to use Aqualungs to locate<br />
and neutralize underwater mines laid along the French coast by<br />
the German navy. Cousteau was commissioned to combine minesweeping<br />
with the study of the physiology of scuba diving. Further<br />
research revealed that the use of helium-oxygen mixtures increased<br />
to 76 meters the depth to which a scuba diver could go without suffering<br />
nitrogen narcosis.<br />
Impact<br />
One way to describe the effects of the development of the Aqualung<br />
is to summarize Cousteau’s continued efforts to the present. In<br />
1946, he and Philippe Tailliez established the Undersea Research<br />
Group of Toulon to study diving techniques and various aspects of<br />
life in the oceans. They studied marine life in the Red Sea from 1951<br />
to 1952. From 1952 to 1956, they engaged in an expedition supported<br />
by the National Geographic Society. By that time, the Research<br />
Group had developed many techniques that enabled them to<br />
identify life-forms and conditions at great depths.<br />
Throughout their undersea studies, Cousteau and his coworkers<br />
continued to develop better techniques for scuba diving, for recording<br />
observations by means of still and television photography, and<br />
for collecting plant and animal specimens. In addition, Cousteau<br />
participated (with Swiss physicist Auguste Piccard) in the construction<br />
of the deep-submergence research vehicle, or bathyscaphe. In<br />
the 1960’s, he directed a program called Conshelf, which tested a<br />
human’s ability to live in a specially built underwater habitat. He<br />
also wrote and produced films on underwater exploration that attracted,<br />
entertained, and educated millions of people.<br />
Cousteau won numerous medals and scientific distinctions.<br />
These included the Gold Medal of the National Geographic Society
(1963), the United Nations International Environment Prize (1977),<br />
membership in the American and Indian academies of science (1968<br />
and 1978, respectively), and honorary doctor of science degrees<br />
from the University of California, Berkeley (1970), Harvard University<br />
(1979), and Rensselaer Polytechnic Institute (1979).<br />
See also Bathyscaphe; Bathysphere.<br />
Further Reading<br />
Cousteau, Jacques-Yves. The Silent World. New York: Harper &<br />
Brothers, 1952.<br />
_____. “Lord of the Depths.” Time 153, no. 12 (March 29, 1999).<br />
_____, and James Dugan. The Living Sea. London: Elm Tree, 1988.<br />
Madsen, Axel. Cousteau: An Unauthorized Biography. New York:<br />
Beaufort Books, 1986.<br />
Munson, Richard. Cousteau: The Captain <strong>and</strong> His World. New York:<br />
Paragon House, 1991.<br />
Zanelli, Leo, and George T. Skuse. Sub-Aqua Illustrated Dictionary.<br />
New York: Oxford University Press, 1976.
Artificial blood<br />
The invention: A perfluorocarbon emulsion that serves as a blood<br />
plasma substitute in the treatment of human patients.<br />
The person behind the invention:<br />
Ryoichi Naito (1906-1982), a Japanese physician<br />
Blood Substitutes<br />
The use of blood and blood products in humans is a very complicated<br />
issue. Substances present in blood can be dangerous or even deadly,<br />
especially when blood or blood products are taken from one person<br />
and given to another. This fact, combined with the necessity for<br />
long-term blood storage, a shortage of donors, and some patients’<br />
refusal to use blood for religious reasons, brought about an intense<br />
search for a universal bloodlike substance: a synthetic mixture of<br />
known chemicals that could take over the life-sustaining properties<br />
of blood (for example, oxygen transport).<br />
Fluorocarbons are compounds that consist of molecules containing<br />
only fluorine and carbon atoms. These compounds are interesting<br />
to physiologists because they are chemically and pharmacologically<br />
inert and because they dissolve oxygen and other gases.<br />
Studies of fluorocarbons as blood substitutes began in 1966,<br />
when it was shown that a mouse breathing a fluorocarbon liquid<br />
treated with oxygen could survive. Subsequent research involved<br />
the use of fluorocarbons to play the role of red blood cells in transporting<br />
oxygen. Encouraging results led to the total replacement of<br />
blood in a rat, and the success of this experiment led in turn to trials<br />
in other mammals, culminating in 1979 with the use of fluorocarbons<br />
in humans.<br />
Clinical Studies<br />
The chemical selected for the clinical studies was Fluosol-DA,<br />
produced by the Japanese Green Cross Corporation. Fluosol-DA
consists of a 20 percent emulsion of two perfluorocarbons (perfluorodecalin<br />
and perfluorotripropylamine), emulsifiers, and salts<br />
that are included to give the chemical some of the properties of<br />
blood plasma. Fluosol-DA had been tested in monkeys, and it had<br />
shown a rapid reversible uptake and release of oxygen, a reasonably<br />
rapid excretion, no carcinogenicity or irreversible changes in the animals’<br />
systems, and the recovery of blood components to normal<br />
ranges within three weeks of administration.<br />
The clinical studies were divided into three phases. The first<br />
phase consisted of the administration of Fluosol-DA to normal human<br />
volunteers. Twelve healthy volunteers were administered the<br />
chemical, and the emulsion’s effects on blood pressure and composition<br />
and on heart, liver, and kidney functions were monitored. No<br />
adverse effects were found in any case. The first phase ended in<br />
March, 1979, and based on its positive results, the second and third<br />
phases were begun in April, 1979.<br />
Twenty-four Japanese medical institutions were involved in the<br />
next two phases. The reasons for the use of Fluosol-DA instead of<br />
blood in the patients involved were various, and they included refusal<br />
of transfusion for religious reasons, lack of compatible blood,<br />
“bloodless” surgery for protection from risk of hepatitis, and treatment<br />
of carbon monoxide intoxication.<br />
Among the effects noticed by the patients were the following: a<br />
small increase in blood pressure, with no corresponding effects on<br />
respiration and body temperature; an increase in blood oxygen content;<br />
bodily elimination of half the chemical within six to nineteen<br />
hours, depending on the initial dose administered; no change in<br />
red-cell count or hemoglobin content of blood; no change in<br />
whole-blood coagulation time; and no significant blood-chemistry<br />
changes. These results made the clinical trials a success and opened<br />
the door for other, more extensive ones.<br />
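The reported elimination figures (half of the chemical cleared within six to nineteen hours) can be put in perspective with a small calculation. The sketch below assumes simple exponential (first-order) elimination; that kinetic model is an illustrative assumption, not something stated in the article.<br />

```python
# If elimination is first-order, the fraction of the emulsion remaining
# after t hours, given a half-life of h hours, is 0.5 ** (t / h).
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of an initial dose still present after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

# The six-to-nineteen-hour range comes from the clinical observations above.
for h in (6, 19):
    print(f"half-life {h:>2} h: {fraction_remaining(24, h):.1%} left after 24 h")
```

Under this assumption, a dose with the shorter half-life is almost entirely gone within a day, while the longer half-life leaves a substantial fraction circulating.<br />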
Impact<br />
Perfluorocarbon emulsions were initially proposed as oxygen-carrying<br />
resuscitation fluids, or blood substitutes, and the results of<br />
the pioneering studies show their success as such. Their success in<br />
this area, however, led to advanced studies and expanded use of
these compounds in many areas of clinical medicine and biomedical<br />
research.<br />
Perfluorocarbon emulsions are useful in cancer therapy, because<br />
they increase the oxygenation of tumor cells and therefore sensitize<br />
them to the effects of radiation or chemotherapy. Perfluorocarbons<br />
can also be used as “contrasting agents” to facilitate magnetic resonance<br />
imaging studies of various tissues; for example, the uptake of<br />
particles of the emulsion by the cells of malignant tissues makes it<br />
possible to locate tumors. Perfluorocarbons also have a high nitrogen<br />
solubility and therefore can be used to alleviate the potentially<br />
fatal effects of decompression sickness by “mopping up” nitrogen<br />
gas bubbles from the circulation system. They can also be used to<br />
preserve isolated organs and amputated extremities until they can<br />
be reimplanted or reattached. In addition, the emulsions are used in<br />
cell cultures to regulate gas supply and to improve cell growth and<br />
productivity.<br />
The biomedical applications of perfluorocarbon emulsions are<br />
multidisciplinary, involving areas as diverse as tissue imaging, organ<br />
preservation, cancer therapy, and cell culture. The successful<br />
clinical trials opened the door for new applications of these<br />
compounds, which rank among the most versatile compounds exploited<br />
by humankind.<br />
See also Artificial heart; Artificial hormone; Artificial kidney;<br />
Blood transfusion; Coronary artery bypass surgery; Electrocardiogram;<br />
Heart-lung machine.<br />
Further Reading<br />
“Artificial Blood Product May Debut in Two Years.” Health Care<br />
Strategic Management 18, no. 8 (August, 2000).<br />
“The Business of Blood: Ryoichi Naito and Fluosol-DA Artificial<br />
Blood.” Forbes 131 (January 17, 1983).<br />
Glanz, James. “Pulse Quickens in Search for Blood Substitute.” Research<br />
& Development 34, no. 10 (September, 1992).<br />
Tsuchida, E. Artificial Red Cells: Materials, Performances, and Clinical<br />
Study as Blood Substitutes. New York: Wiley, 1997.
Artificial chromosome<br />
The invention: Originally developed for use in the study of natural<br />
chromosome behavior, the artificial chromosome proved to be a<br />
valuable tool for recombinant DNA technology.<br />
The people behind the invention:<br />
Jack W. Szostak (1952- ), a British-born Canadian professor<br />
at Harvard Medical School<br />
Andrew W. Murray (1956- ), a graduate student<br />
The Value of Artificial Chromosomes<br />
The artificial chromosome gives biologists insight into the fundamental<br />
mechanisms by which cells replicate and plays an important<br />
role as a tool in genetic engineering technology. Soon after its invention<br />
in 1983 by Andrew W. Murray and Jack W. Szostak, the artificial<br />
chromosome was judged by scientists to be important, and its value<br />
in the field of medicine was exploited.<br />
Chromosomes are essentially carriers of genetic information;<br />
that is, they possess the genetic code that is the blueprint for life. In<br />
higher organisms, the number and type of chromosomes that a cell<br />
contains in its nucleus are characteristic of the species. For example,<br />
each human cell has forty-six chromosomes, while the garden pea<br />
has fourteen <strong>and</strong> the guinea pig has sixty-four. The chromosome’s<br />
job in a dividing cell is to replicate and then distribute one copy of itself<br />
into each new “daughter” cell. This process, which is referred to<br />
as “mitosis” or “meiosis,” depending upon the actual mechanism<br />
by which the process occurs, is of supreme importance to the continuation<br />
of life.<br />
In 1953, when biophysicists James D. Watson and Francis Crick<br />
discovered the structure of deoxyribonucleic acid (DNA), an achievement<br />
for which they won the 1962 Nobel Prize in Physiology or<br />
Medicine, it was immediately apparent to them how the<br />
double-helical form of DNA (which looks something like a twisted ladder)<br />
might explain the mechanism behind cell division. During DNA<br />
replication, the chromosome unwinds to expose the thin threads of<br />
DNA. The two strands of the double helix separate, and each acts as<br />
a template for the formation of a new complementary strand, thus<br />
forming two complete and identical chromosomes that can be distributed<br />
to each new cell. This distribution process, which is referred<br />
to as “segregation,” relies on the chromosomes being pulled along a<br />
microtubule framework in the cell called the “mitotic spindle.”<br />
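The template mechanism described above, in which each separated strand determines its complement through fixed base pairing (adenine with thymine, guanine with cytosine), can be sketched in a few lines of Python. The snippet is purely illustrative and is not part of the original article; for simplicity it ignores the antiparallel orientation of real DNA strands.<br />

```python
# Each DNA base pairs with exactly one partner: A-T and G-C.
# Given one strand, the complementary strand is therefore fully
# determined, which is why each separated strand can serve as a
# template during replication.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand, base by base."""
    return "".join(PAIRS[base] for base in strand)

template = "ATGCCGTA"
print(complement(template))  # each base swapped for its partner
```

Applying the rule twice returns the original strand, mirroring how two rounds of replication reproduce the starting sequence.<br />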
Creating Artificial Chromosomes<br />
An artificial chromosome is a laboratory-designed chromosome<br />
that possesses only those functional elements its creators choose. In<br />
order to be a true working chromosome, however, it must, at minimum,<br />
maintain the machinery necessary for replication and segregation.<br />
By the early 1980’s, Murray and Szostak had recognized the possible<br />
advantages of using a simple, controlled model to study chromosome<br />
behavior, since there are several difficulties associated<br />
with studying chromosomes in their natural state. Since natural<br />
chromosomes are large and have poorly defined structures, it is almost<br />
impossible to sift out for study those elements that are essential<br />
for replication and segregation. Previous methods of altering a<br />
natural chromosome and observing the effects were difficult to use<br />
because the cells containing that altered chromosome usually died.<br />
Furthermore, even if the cell survived, analysis was complicated by<br />
the extensive amount of genetic information carried by the chromosome.<br />
Artificial chromosomes are simple and have known components,<br />
although the functions of those components may be poorly<br />
understood. In addition, since artificial chromosomes are extra chromosomes<br />
that are carried by the cell, their alteration does not kill the<br />
cell.<br />
Prior to the synthesis of the first artificial chromosome, the essential<br />
functional chromosomal elements of replication and segregation<br />
had to be identified and harvested. One of the three chromosomal<br />
elements thought to be required is the origin of replication,<br />
the site at which the synthesis of new DNA begins. The relatively<br />
weak interaction between DNA strands at this site facilitates their<br />
separation, making possible—with the help of appropriate enzymes—<br />
the subsequent replication of the strands into “sister chromatids.”<br />
The second essential element is the “centromere,” a thinner segment<br />
of the chromosome that serves as the attachment site for the mitotic<br />
spindle. Sister chromatids are pulled to opposite ends of the dividing<br />
cell by the spindle apparatus, thus forming two identical<br />
daughter cells. The final functional elements are repetitive sequences<br />
of DNA called “telomeres,” which are located at both ends of the<br />
chromosome. The telomeres are needed to protect the terminal<br />
genes from degradation.<br />
With all the functional elements at their disposal, Murray and<br />
Szostak proceeded to construct their first artificial chromosome.<br />
Once made, this chromosome would be inserted into yeast cells to<br />
replicate, since yeast cells are relatively simple <strong>and</strong> well characterized<br />
but otherwise resemble cells of higher organisms. Construction<br />
begins with a commonly used “bacterial plasmid,” a small, circular,<br />
autonomously replicating section of DNA. Enzymes are then called<br />
upon to create a gap in this “cloning vector” into which the three<br />
chromosomal elements are spliced. In addition, genes that confer<br />
some distinct trait, such as color, to yeast cells are also inserted, thus<br />
making it possible to determine which cells have actually taken up<br />
the new chromosome. Although their first attempt resulted in a<br />
chromosome that failed to segregate properly, by September, 1983,<br />
Murray and Szostak had announced in the prestigious British journal<br />
Nature their success in creating the first artificial chromosome.<br />
Consequences<br />
One of the most exciting aspects of the artificial chromosome is<br />
its application to recombinant DNA technology, which involves creating<br />
novel genetic materials by combining segments of DNA from<br />
various sources. For example, the artificial yeast chromosome can<br />
be used as a cloning vector. In this process, a segment of DNA containing<br />
some desired gene is inserted into an artificial chromosome<br />
and is then allowed to replicate in yeast until large amounts of the<br />
gene are produced. David T. Burke, Georges F. Carle, and Maynard<br />
Victor Olson at Washington University in St. Louis have pioneered<br />
the technique of combining human genes with artificial yeast chromosomes<br />
and have succeeded in cloning large segments of human<br />
DNA.
Although amplifying DNA in this manner has been done before,<br />
using bacterial plasmids as cloning vectors, the artificial yeast chromosome<br />
has the advantage of being able to hold much larger segments<br />
of DNA, thus allowing scientists to clone very large genes.<br />
This is of great importance, since the genes that cause diseases such<br />
as hemophilia <strong>and</strong> Duchenne’s muscular dystrophy are enormous.<br />
The most ambitious project for which the artificial yeast chromosome<br />
is being used is the national project whose intent is to clone the<br />
entire human genome.<br />
See also Artificial blood; Artificial hormone; Genetic “fingerprinting”;<br />
Genetically engineered insulin; In vitro plant culture;<br />
Synthetic DNA; Synthetic RNA.<br />
Further Reading<br />
“Evolving RNA with Enzyme-Like Action.” Science News 144 (August<br />
14, 1993).<br />
Freedman, David H. “Playing God: The H<strong>and</strong>made Cell.” Discover<br />
13, no. 8 (August, 1992).<br />
Varshavsky, Alex<strong>and</strong>er. “The 2000 Genetics Society of America<br />
Medal: Jack W. Szostak.” Genetics 157, no. 2 (February, 2001).
Artificial heart<br />
The invention: The first successful artificial heart, the Jarvik-7, has<br />
helped to keep patients suffering from otherwise terminal heart<br />
disease alive while they await human heart transplants.<br />
The people behind the invention:<br />
Robert Jarvik (1946- ), the main inventor of the Jarvik-7<br />
William Castle DeVries (1943- ), a surgeon at the University<br />
of Utah in Salt Lake City<br />
Barney Clark (1921-1983), a Seattle dentist, the first recipient of<br />
the Jarvik-7<br />
Early Success<br />
The Jarvik-7 artificial heart was designed and produced by researchers<br />
at the University of Utah in Salt Lake City; it is named for<br />
the leader of the research team, Robert Jarvik. An air-driven pump<br />
made of plastic and titanium, it is the size of a human heart. It is made<br />
up of two hollow chambers of polyurethane and aluminum, each<br />
containing a flexible plastic membrane. The heart is implanted in a<br />
human being but must remain connected to an external air pump by<br />
means of two plastic hoses. The hoses carry compressed air to the<br />
heart, which then pumps blood through the pulmonary artery to the<br />
lungs and oxygenated blood through the aorta to the rest of the body.<br />
The device is expensive, and initially the large, clumsy air compressor<br />
had to be wheeled from room to room along with the patient.<br />
The device was new in 1982, and that same year Barney Clark, a<br />
dentist from Seattle, was diagnosed as having only hours to live.<br />
His doctor, cardiac specialist William Castle DeVries, proposed surgically<br />
implanting the Jarvik-7 heart, and Clark and his wife agreed.<br />
The Food and Drug Administration (FDA), which regulates the use<br />
of medical devices, had already given DeVries and his coworkers<br />
permission to implant up to seven Jarvik-7 hearts for permanent use.<br />
The operation was performed on Clark, and at first it seemed quite<br />
successful. Newspapers, radio, and television reported this medical<br />
breakthrough: the first time a severely damaged heart had been re-
William C. DeVries<br />
William Castle DeVries did not invent the artificial heart<br />
himself; however, he did develop the procedure to implant it.<br />
The first attempt took him seven and a half hours, and he<br />
needed fourteen assistants. A success, the surgery made DeVries<br />
one of the most talked-about doctors in the world.<br />
DeVries was born in Brooklyn, New York, in 1943. His father,<br />
a Navy physician, was killed in action a few months later, <strong>and</strong><br />
his mother, a nurse, moved with her son to Utah. As a child<br />
DeVries showed both considerable mechanical aptitude <strong>and</strong><br />
athletic prowess. He won an athletic scholarship to the University<br />
of Utah, graduating with honors in 1966. He entered the<br />
state medical school <strong>and</strong> there met Willem Kolff, a pioneer in<br />
designing <strong>and</strong> testing artificial organs. Under Kolff’s guidance,<br />
DeVries began performing experimental surgeries on animals<br />
to test prototype mechanical hearts. He finished medical school<br />
in 1970 <strong>and</strong> from 1971 until 1979 was an intern <strong>and</strong> then a resident<br />
in surgery at the Duke University Medical Center in North<br />
Carolina.<br />
DeVries returned to the University of Utah as an assistant<br />
professor of cardiovascular <strong>and</strong> thoracic surgery. In the meantime,<br />
Robert K. Jarvik had devised the Jarvik-7 artificial heart.<br />
DeVries experimented, implanting it in animals <strong>and</strong> cadavers<br />
until, following approval from the Federal Drug Administration,<br />
Barney Clark agreed to be the first test patient. He died 115<br />
days after the surgery, having never left the hospital. Although<br />
controversy arose over the ethics <strong>and</strong> cost of the procedure,<br />
more artificial heart implantations followed, many by DeVries.<br />
Long administrative delays getting patients approved for<br />
surgery at Utah frustrated DeVries, so he moved to Humana<br />
Hospital-Audubon in Louisville, Kentucky, in 1984 <strong>and</strong> then<br />
took a professorship at the University of Louisville. In 1988 he<br />
left experimentation for a traditional clinical practice. The FDA<br />
withdrew its approval for the Jarvik-7 in 1990.<br />
In 1999 DeVries retired from practice, but not from medicine.<br />
The next year he joined the Army Reserve <strong>and</strong> began teaching<br />
surgery at the Walter Reed Army Medical Center.<br />
placed by a totally artificial heart. It seemed DeVries had proved that<br />
an artificial heart could be almost as good as a human heart.<br />
Soon after Clark’s surgery, DeVries went on to implant the device
in several other patients with serious heart disease. For a time, all of<br />
them survived the surgery. As a result, DeVries was offered a position<br />
at Humana Hospital in Louisville, Kentucky. Humana offered<br />
to pay for the first one hundred implant operations.<br />
The Controversy Begins
In the three years after DeVries’s operation on Barney Clark, however, doubts and criticism arose. Of the people who by then had received the plastic and metal device as a permanent replacement for their own diseased hearts, three had died (including Clark) and four had suffered serious strokes. The FDA asked Humana Hospital and Symbion (the company that manufactured the Jarvik-7) for complete, detailed histories of the artificial-heart recipients.

It was determined that each of the patients who had died or been disabled had suffered from infection. Life-threatening infection, or “foreign-body response,” is a danger with the use of any artificial organ. The Jarvik-7, with its metal valves, plastic body, and Velcro attachments, seemed to draw bacteria like a magnet—and these bacteria proved resistant to even the most powerful antibiotics.

By 1988, researchers had come to realize that severe infection was almost inevitable if a patient used the Jarvik-7 for a long period of time. As a result, experts recommended that the device be used for no longer than thirty days.
Questions of values and morality also became part of the controversy surrounding the artificial heart. Some people thought that it was wrong to offer patients a device that would extend their lives but leave them burdened with hardship and pain. At times DeVries claimed that it was worth the price for patients to be able to live another year; at other times, he admitted that if he thought a patient would have to spend the rest of his or her life in a hospital, he would think twice before performing the implant.
There were also questions about “informed consent”—the patient’s understanding that a medical procedure has a high risk of failure and may leave the patient in misery even if it succeeds. Getting truly informed consent from a dying patient is tricky, because, understandably, the patient is probably willing to try anything. The Jarvik-7 raised several questions in this regard: Was the ordeal worth the risk? Was the patient’s suffering justifiable? Who should make the decision for or against the surgery: the patient, the researchers, or a government agency?
Also there was the issue of cost. Should money be poured into expensive, high-technology devices such as the Jarvik heart, or should it be reserved for programs to help prevent heart disease in the first place? Expenses for each of DeVries’s patients had amounted to about one million dollars.

Humana’s and DeVries’s earnings were criticized in particular. Once the first one hundred free Jarvik-7 implantations had been performed, Humana Hospital could expect to make large amounts of money on the surgery. By that time, Humana would have so much expertise in the field that, though the surgical techniques could not be patented, it was expected to have a practical monopoly. DeVries himself owned thousands of shares of stock in Symbion. Many people wondered whether this was ethical.
Consequences

Given all the controversies, in December of 1985 a panel of experts recommended that the FDA allow the experiment to continue, but only with careful monitoring. Meanwhile, cardiac transplantation was becoming easier and more common. By the end of 1985, almost twenty-six hundred patients in various countries had received human heart transplants, and 76 percent of these patients had survived for at least four years. When the demand for donor hearts exceeded the supply, physicians turned to the Jarvik device and other artificial hearts to help see patients through the waiting period.

Experience with the Jarvik-7 made the world keenly aware of how far medical science still is from making the implantable permanent mechanical heart a reality. Nevertheless, the device was a breakthrough in the relatively new field of artificial organs. Since then, other artificial body parts have included heart valves, blood vessels, and inner ears that help restore hearing to the deaf.
See also Artificial blood; Artificial kidney; Blood transfusion; Coronary artery bypass surgery; Electrocardiogram; Heart-lung machine; Pacemaker; Velcro.
Further Reading
Fox, Renee C., and Judith P. Swazey. Spare Parts: Organ Replacement in American Society. New York: Oxford University Press, 1992.

Kunin, Calvin M., Joanne J. Debbins, and Julio C. Melo. “Infectious Complications in Four Long-Term Recipients of the Jarvik-7 Artificial Heart.” JAMA 259 (February 12, 1988).

Kunzig, Robert. “The Beat Goes On.” Discover 21, no. 1 (January, 2000).

Lawrie, Gerald M. “Permanent Implantation of the Jarvik-7 Total Artificial Heart: A Clinical Perspective.” JAMA 259 (February 12, 1988).
Artificial hormone
The invention: Synthesized oxytocin, a small polypeptide hormone from the pituitary gland that has shown how complex polypeptides and proteins may be synthesized and used in medicine.

The people behind the invention:
Vincent du Vigneaud (1901-1978), an American biochemist and winner of the 1955 Nobel Prize in Chemistry
Oliver Kamm (1888-1965), an American biochemist
Sir Edward Albert Sharpey-Schafer (1850-1935), an English physiologist
Sir Henry Hallett Dale (1875-1968), an English physiologist and winner of the 1936 Nobel Prize in Physiology or Medicine
John Jacob Abel (1857-1938), an American pharmacologist and biochemist
Body-Function Special Effects

In England in 1895, physician George Oliver and physiologist Edward Albert Sharpey-Schafer reported that a hormonal extract from the pituitary gland of a cow produced a rise in blood pressure (a pressor effect) when it was injected into animals. In 1901, Rudolph Magnus and Sharpey-Schafer discovered that extracts from the pituitary also could restrict the flow of urine (an antidiuretic effect). This observation was related to the fact that when a certain section of the pituitary was removed surgically from an animal, the animal excreted an abnormally large amount of urine.

In addition to the pressor and antidiuretic activities in the pituitary, two other effects were found in 1909. Sir Henry Hallett Dale, an English physiologist, was able to show that the extracts could cause the uterine muscle to contract (an oxytocic effect), and Isaac Ott and John C. Scott found that when lactating (milk-producing) animals were injected with the extracts, milk was released from the mammary gland.
Following the discovery of these various effects, attempts were made to concentrate and isolate the substance or substances that were responsible. John Jacob Abel was able to concentrate the pressor activity at The Johns Hopkins University using heavy metal salts and extraction with organic solvents. The results of the early work, however, were varied. Some investigators came to the conclusion that only one substance was responsible for all the activities, while others concluded that two or more substances were likely to be involved.

In 1928, Oliver Kamm and his coworkers at the drug firm of Parke, Davis and Company in Detroit reported a method for the separation of the four activities into two chemical fractions with high potency. One portion contained most of the pressor and antidiuretic activities, while the other contained the uterine-contracting and milk-releasing activities. Over the years, several names have been used for the two substances responsible for the effects. The generic name “vasopressin” generally has become the accepted term for the substance causing the pressor and antidiuretic effects, while the name “oxytocin” has been used for the other two effects. The two fractions that Kamm and his group had prepared were pure enough for the pharmaceutical firm to make them available for medical research related to obstetrics, surgical shock, and diabetes insipidus.
A Complicated Synthesis
The problem of these hormones and their nature interested Vincent du Vigneaud at the George Washington University School of Medicine. Working with Kamm, he was able to show that the sulfur content of both the oxytocin and the vasopressin fractions was a result of the amino acid cystine. This helped to strengthen the concept that these hormones were polypeptide, or proteinlike, substances. Du Vigneaud and his coworkers next tried to find a way of purifying oxytocin and vasopressin. This required not only separating the two hormones from each other but also separating them from the other impurities present in the preparations.
During World War II (1939-1945) and shortly thereafter, other techniques were developed that would give du Vigneaud the tools he needed to complete the job of purifying and characterizing the two hormonal factors. One of the most important was the countercurrent distribution method of chemist Lyman C. Craig at the Rockefeller Institute. Craig had developed an apparatus that could do multiple extractions, making possible separations of substances with similar properties. Du Vigneaud had used this technique in purifying his synthetic penicillin, and when he returned to the study of oxytocin and vasopressin in 1946, he used it on his purest preparations. The procedure worked well, and milligram quantities of pure oxytocin were available in 1949 for chemical characterization.
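The separating power of Craig’s method comes from repeating a simple two-phase extraction many times: at each transfer a solute splits between the two phases in a fixed ratio, so after many transfers its distribution across the tubes follows a binomial curve, and solutes with even slightly different partition behavior end up peaking in different tubes. The following short Python sketch illustrates the idea; the partition fractions are invented for the example, not measured values for oxytocin or vasopressin.

```python
from math import comb

def ccd_profile(n_transfers, p):
    """Fraction of a solute found in each tube after n transfers of
    countercurrent distribution, where p is the fraction of the solute
    carried forward by the mobile phase at each equilibration.
    The result is the binomial distribution B(n, p)."""
    return [comb(n_transfers, r) * p**r * (1 - p) ** (n_transfers - r)
            for r in range(n_transfers + 1)]

n = 100
a = ccd_profile(n, 0.45)  # hypothetical solute A
b = ccd_profile(n, 0.60)  # hypothetical solute B

# Each solute piles up in a narrow band of tubes centered near n * p.
peak_a = max(range(n + 1), key=lambda r: a[r])
peak_b = max(range(n + 1), key=lambda r: b[r])
```

With partition fractions of 0.45 and 0.60, the two peaks sit near tubes 45 and 60 with little overlap after one hundred transfers, which is why the method could resolve substances too similar for a single extraction step.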
Using the available techniques, du Vigneaud and his coworkers were able to determine the structure of oxytocin. It was du Vigneaud’s goal to make synthetic oxytocin by duplicating the structure his group had worked out. Eventually, du Vigneaud’s synthetic oxytocin was obtained and the method published in the Journal of the American Chemical Society in 1953.
Du Vigneaud’s oxytocin was next tested against naturally occurring oxytocin, and the two forms were found to act identically in every respect. In the final test, the synthetic form was found to induce labor when given intravenously to women about to give birth. Also, when microgram quantities of oxytocin were given intravenously to women who had recently given birth, milk was released from the mammary gland in less than a minute.
Consequences

The work of du Vigneaud and his associates demonstrated for the first time that it was possible to synthesize peptides that have properties identical to the natural ones and that these can be useful in certain medical conditions. Oxytocin has been used in the last stages of labor during childbirth. Vasopressin has been used in the treatment of diabetes insipidus, when an individual has an insufficiency in the natural hormone, much as insulin is used by persons having diabetes mellitus.

After receiving the Nobel Prize in Chemistry in 1955, du Vigneaud continued his work on synthesizing chemical variations of the two hormones. By making peptides that differed from oxytocin and vasopressin by one or more amino acids, it was possible to study how the structure of the peptide was related to its physiological activity.
After the structures of insulin and some of the smaller proteins were determined, they, too, were synthesized, although with greater difficulty. Other methods of carrying out the synthesis of peptides and proteins have been developed and are used today. The production of biologically active proteins, such as insulin and growth hormone, has been made possible by efficient methods of biotechnology. The genes for these proteins can be put inside microorganisms, which then make them in addition to their own proteins. The microorganisms are then harvested and the useful protein hormones isolated and purified.
See also Abortion pill; Artificial blood; Birth control pill; Genetically engineered insulin; Pap test.
Further Reading
Basava, Channa, and G. M. Anantharamaiah. Peptides: Design, Synthesis, and Biological Activity. Boston: Birkhäuser, 1994.

Bodanszky, Miklos. “Vincent du Vigneaud, 1901-1978.” Nature 279, no. 5710 (1979).

Vigneaud, Vincent du. “A Trail of Sulfur Research from Insulin to Oxytocin” [Nobel lecture]. In Chemistry, 1942-1962. River Edge, N.J.: World Scientific, 1999.
Artificial insemination
The invention: Practical techniques for the artificial insemination of farm animals that have revolutionized livestock breeding practices throughout the world.

The people behind the invention:
Lazzaro Spallanzani (1729-1799), an Italian physiologist
Ilya Ivanovich Ivanov (1870-1932), a Soviet biologist
R. W. Kunitsky, a Soviet veterinarian
Reproduction Without Sex

The tale is told of a fourteenth-century Arabian chieftain who sought to improve his mediocre breed of horses. Sneaking into the territory of a neighboring hostile tribe, he stimulated a prize stallion to ejaculate into a piece of cotton. Quickly returning home, he inserted this cotton into the vagina of his own mare, who subsequently gave birth to a high-quality horse. This may have been the first case of “artificial insemination,” the technique by which semen is introduced into the female reproductive tract without sexual contact.
The first scientific record of artificial insemination comes from Italy in the 1770’s. Lazzaro Spallanzani was one of the foremost physiologists of his time, well known for having disproved the theory of spontaneous generation, which states that living organisms can spring “spontaneously” from lifeless matter. There was some disagreement at that time about the basic requirements for reproduction in animals. It was unclear if the sex act was necessary for an embryo to develop, or if it was sufficient that the sperm and eggs come into contact. Spallanzani began by studying animals in which union of the sperm and egg normally takes place outside the body of the female. He stimulated males and females to release their sperm and eggs, then mixed these sex cells in a glass dish. In this way, he produced young frogs, toads, salamanders, and silkworms.

Next, Spallanzani asked whether the sex act was also unnecessary for reproduction in those species in which fertilization normally takes place inside the body of the female. He collected semen that had been ejaculated by a male spaniel and, using a syringe, injected the semen into the vagina of a female spaniel in heat. Two months later, she delivered a litter of three pups, which bore some resemblance to both the mother and the male that had provided the sperm.
It was in animal breeding that Spallanzani’s techniques were to have their most dramatic application. In the 1880’s, an English dog breeder, Sir Everett Millais, conducted several experiments on artificial insemination. He was interested mainly in obtaining offspring from dogs that would not normally mate with one another because of differences in size. He followed Spallanzani’s methods to produce a cross between a short, low basset hound and the much larger bloodhound.
Long-Distance Reproduction
Ilya Ivanovich Ivanov was a Soviet biologist who was commissioned by his government to investigate the use of artificial insemination on horses. Unlike previous workers who had used artificial insemination to get around certain anatomical barriers to fertilization, Ivanov began the use of artificial insemination to reproduce thoroughbred horses more effectively. His assistant in this work was the veterinarian R. W. Kunitsky.

In 1901, Ivanov founded the Experimental Station for the Artificial Insemination of Horses. As its director, he embarked on a series of experiments to devise the most efficient techniques for breeding these animals. Not content with the demonstration that the technique was scientifically feasible, he wished to ensure further that it could be practiced by Soviet farmers.
If sperm from a male were to be used to impregnate females in another location, potency would have to be maintained for a long time. Ivanov first showed that the secretions from the sex glands were not required for successful insemination; only the sperm itself was necessary. He demonstrated further that if a testicle were removed from a bull and kept cold, the sperm would remain alive.

More useful than preservation of testicles would be preservation of the ejaculated sperm. By adding certain salts to the sperm-containing fluids, and by keeping these at cold temperatures, Ivanov was able to preserve sperm for long periods.
Ivanov also developed instruments to inject the sperm, to hold the vagina open during insemination, and to hold the horse in place during the procedure. In 1910, Ivanov wrote a practical textbook with technical instructions for the artificial insemination of horses. He also trained some three hundred veterinary technicians in the use of artificial insemination, and the knowledge he developed quickly spread throughout the Soviet Union. Artificial insemination became the major means of breeding horses.
Until his death in 1932, Ivanov was active in researching many aspects of the reproductive biology of animals. He developed methods to treat reproductive diseases of farm animals and refined methods of obtaining, evaluating, diluting, preserving, and disinfecting sperm. He also began to produce hybrids between wild and domestic animals in the hope of producing new breeds that would be able to withstand extreme weather conditions better and that would be more resistant to disease. His crosses included hybrids of ordinary cows with aurochs, bison, and yaks, as well as some more exotic crosses of zebras with horses.

Ivanov also hoped to use artificial insemination to help preserve species that were in danger of becoming extinct. In 1926, he led an expedition to West Africa to experiment with the hybridization of different species of anthropoid apes.
Impact

The greatest beneficiaries of artificial insemination have been dairy farmers. Some bulls are able to sire genetically superior cows that produce exceptionally large volumes of milk. Under natural conditions, such a bull could father at most a few hundred offspring in its lifetime. Using artificial insemination, a prize bull can inseminate ten to fifteen thousand cows each year. Since frozen sperm may be purchased through the mail, this also means that dairy farmers no longer need to keep dangerous bulls on the farm. Artificial insemination has become the main method of reproduction of dairy cows, with about 150 million cows (as of 1992) produced this way throughout the world.
In the 1980’s, artificial insemination gained added importance as a method of breeding rare animals. Animals kept in zoos that are unable to take part in normal mating may still produce sperm that can be used to inseminate a female artificially. Some species require specific conditions of housing or diet for normal breeding to occur, conditions not available in all zoos. Such animals can still reproduce using artificial insemination.
See also Abortion pill; Amniocentesis; Artificial chromosome; Birth control pill; Cloning; Genetic “fingerprinting”; Genetically engineered insulin; In vitro plant culture; Rice and wheat strains; Synthetic DNA.
Further Reading
Bearden, Henry Joe, and John W. Fuquay. Applied Animal Reproduction. 5th ed. Upper Saddle River, N.J.: Prentice Hall, 2000.

Foote, Robert H. Artificial Insemination to Cloning: Tracing Fifty Years of Research. Ithaca, N.Y.: Cornell University Press, 1998.

Hafez, Elsayed Saad Eldin. Reproduction in Farm Animals. 6th ed. Philadelphia: Lea and Febiger, 1993.

Herman, Harry August. Improving Cattle by the Millions: NAAB and the Development and Worldwide Application of Artificial Insemination. Columbia: University of Missouri Press, 1981.
Artificial kidney
The invention: A machine that removes waste end-products and poisons from the blood when human kidneys are not working properly.

The people behind the invention:
John Jacob Abel (1857-1938), a pharmacologist and biochemist known as the “father of American pharmacology”
Willem Johan Kolff (1911- ), a Dutch American clinician who pioneered the artificial kidney and the artificial heart
Cleansing the Blood

In the human body, the kidneys are the dual organs that remove waste matter from the bloodstream and send it out of the system as urine. If the kidneys fail to work properly, this cleansing process must be done artificially, such as by a machine.

John Jacob Abel was the first professor of pharmacology at Johns Hopkins University School of Medicine. Around 1912, he began to study the by-products of metabolism that are carried in the blood. This work was difficult, he realized, because it was nearly impossible to detect even the tiny amounts of the many substances in blood. Moreover, no one had yet developed a method or machine for taking these substances out of the blood.
In devising a blood filtering system, Abel understood that he needed a saline solution and a membrane that would let some substances pass through but not others. Working with Leonard Rowntree and Benjamin B. Turner, he spent nearly two years figuring out how to build a machine that would perform dialysis—that is, remove metabolic by-products from blood. Finally their efforts succeeded.
The first experiments were performed on rabbits and dogs. In operating the machine, the blood leaving the patient was sent flowing through a celloidin tube that had been wound loosely around a drum. An anticlotting substance (hirudin, taken from leeches) was added to the blood as it flowed through the tube. The drum, which was immersed in a saline and dextrose solution, rotated slowly. As blood flowed through the immersed tubing, the pressure of osmosis removed urea and other substances, but not the plasma or cells, from the blood. The celloidin membranes allowed oxygen to pass from the saline and dextrose solution into the blood, so that purified, oxygenated blood then flowed back into the arteries.
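The principle behind such a machine can be illustrated with a toy model: a solute such as urea crosses the membrane at a rate proportional to the difference in concentration between the blood and the surrounding bath, so the blood concentration decays toward the bath’s level. The rate constant, time step, and concentrations in this Python sketch are invented for illustration, not measurements from Abel’s apparatus:

```python
def dialyze(c_blood, c_bath, k, dt, steps):
    """Toy two-compartment model of dialysis: solute diffuses across
    the membrane at a rate proportional to the concentration
    difference. The bath is assumed large and continuously refreshed,
    so its concentration stays fixed."""
    history = []
    for _ in range(steps):
        flux = k * (c_blood - c_bath) * dt  # Fick's-law-style transfer
        c_blood -= flux
        history.append(c_blood)
    return history

levels = dialyze(c_blood=2.0, c_bath=0.0, k=0.5, dt=0.1, steps=100)
# blood concentration falls exponentially toward the bath level
```

Each pass removes a fixed fraction of the remaining excess, which is why dialysis clears large amounts of urea quickly at first and then more slowly as the blood approaches the bath concentration.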
Abel studied the substances that his machine had removed from the blood, and he found that they included not only urea but also free amino acids. He quickly realized that his machine could be useful for taking care of people whose kidneys were not working properly. Reporting on his research, he wrote, “In the hope of providing a substitute in such emergencies, which might tide over a dangerous crisis . . . a method has been devised by which the blood of a living animal may be submitted to dialysis outside the body, and again returned to the natural circulation.” Abel’s machine removed large quantities of urea and other poisonous substances fairly quickly, so that the process, which he called “vividiffusion,” could serve as an artificial kidney during cases of kidney failure.
For his physiological research, Abel found it necessary to remove, study, and then replace large amounts of blood from living animals, all without dissolving the red blood cells, which carry oxygen to the body’s various parts. He realized that this process, which he called “plasmaphaeresis,” would make possible blood banks, where blood could be stored for emergency use.
In 1914, Abel published these two discoveries in a series of three articles in the Journal of Pharmacology and Experimental Therapeutics, and he demonstrated his techniques in London, England, and Groningen, The Netherlands. Though he had suggested that his techniques could be used for medical purposes, he himself was interested mostly in continuing his biochemical research. So he turned to other projects in pharmacology, such as the crystallization of insulin, and never returned to studying vividiffusion.
Refining the Technique
Georg Haas, a German biochemist working in Giessen, was also interested in dialysis; in 1915, he began to experiment with “blood washing.” After reading Abel’s 1914 writings, Haas tried substituting collodium for the celloidin that Abel had used as a filtering membrane and using commercially prepared heparin instead of the homemade hirudin Abel had used to prevent blood clotting. He then used this machine on a patient and found that it showed promise, but he knew that many technical problems had to be worked out before the procedure could be used on many patients.
In 1937, Willem Johan Kolff was a young physician at Groningen. He felt sad to see patients die from kidney failure, and he wanted to find a way to cure others. Having heard his colleagues talk about the possibility of using dialysis on human patients, he decided to build a dialysis machine.
John Jacob Abel

Born in 1857, John Jacob Abel grew up in Cleveland, Ohio, and then attended the University of Michigan. He graduated in 1883 and studied for six years in Germany, which boasted the finest medical researchers of the times. He received a medical degree in 1888 in Strasbourg, transferred to Vienna, Austria, for more clinical experience, and then returned to the United States in 1891 to teach pharmacology at the University of Michigan. He had to organize his own laboratory, journal, and course of instruction. His efforts attracted the notice of Johns Hopkins University, which then had the nation’s most progressive medical school. In 1893 Abel moved there and became the first American to hold the title of professor of pharmacology. He remained at Johns Hopkins until his retirement in 1932.

His biochemical research illuminated the complex interaction in the endocrine system. He isolated epinephrine (adrenaline), used his artificial kidney apparatus to demonstrate the presence of amino acids in the blood, and investigated pituitary gland hormones and insulin.

Abel died in 1938, but his influence did not. His many students took Abel’s interest in the biochemical basis of pharmacology to other universities and commercial laboratories, modernizing American drug research.

Kolff knew that cellophane was an excellent membrane for dialyzing, and that heparin was a good anticoagulant, but he also realized that his machine would need to be able to treat larger volumes of blood than Abel’s and Haas’s had. During World War II (1939-1945), with the help of the director of a nearby enamel factory, Kolff built an artificial kidney that was first tried on a patient on March 17, 1943. Between March, 1943, and July 21, 1944, Kolff used his secretly constructed dialysis machines on fifteen patients, of whom only one survived. He published the results of his research in Acta Medica Scandinavica. Even though most of his patients had not survived,
he had collected information <strong>and</strong> developed the technique<br />
until he was sure dialysis would eventually work.<br />
Kolff brought machines to Amsterdam and The Hague and encouraged other physicians to try them; meanwhile, he continued to study blood dialysis and to improve his machines. In 1947, he brought improved machines to London and the United States. By the time he reached Boston, however, he had given away all of his machines. He nevertheless explained the technique to John P. Merrill, a physician at the Harvard Medical School, who soon became the leading American developer of kidney dialysis and kidney-transplant surgery.

Kolff himself moved to the United States, where he became an expert not only in artificial kidneys but also in artificial hearts. He helped develop the Jarvik-7 artificial heart (named for its chief inventor, Robert Jarvik), which was implanted in a patient in 1982.
Impact
Abel’s work showed that the blood carried some substances that had not been previously known, and it led to the development of the first dialysis machine for humans. It also encouraged interest in the possibility of organ transplants.

After World War II, surgeons tried to transplant kidneys from one animal to another, but after a few days the recipient would begin to reject the kidney and die. In spite of these failures, researchers in Europe and America transplanted kidneys in several patients, and they used artificial kidneys to care for the patients who were waiting for transplants. In 1954, Merrill—to whom Kolff had demonstrated an artificial kidney—successfully transplanted kidneys in identical twins.

After immunosuppressant drugs (used to prevent the body from rejecting newly transplanted tissue) were discovered in 1962, transplantation surgery became much more practical. After kidney transplants became common, the artificial kidney became simply a way of keeping a person alive until a kidney donor could be found.
See also Artificial blood; Artificial heart; Blood transfusion; Genetically engineered insulin; Reserpine.

Further Reading

Cogan, Martin G., Patricia Schoenfeld, and Frank A. Gotch. Introduction to Dialysis. 2d ed. New York: Churchill Livingstone, 1991.
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Noordwijk, Jacob van. Dialysing for Life: The Development of the Artificial Kidney. Boston: Kluwer Academic Publishers, 2001.
Artificial satellite

The invention: Sputnik 1, the first object put into orbit around the earth, which began the exploration of space.

The people behind the invention:
Sergei P. Korolev (1907-1966), a Soviet rocket scientist
Konstantin Tsiolkovsky (1857-1935), a Soviet schoolteacher and the founder of rocketry in the Soviet Union
Robert H. Goddard (1882-1945), an American scientist and the founder of rocketry in the United States
Wernher von Braun (1912-1977), a German who worked on rocket projects
Arthur C. Clarke (1917- ), the author of more than fifty books and the visionary behind telecommunications satellites
A Shocking Launch

In Russian, sputnik means “satellite” or “fellow traveler.” On October 4, 1957, Sputnik 1, the first artificial satellite to orbit Earth, was placed into successful orbit by the Soviet Union. The launch of this small aluminum sphere, 0.58 meter in diameter and weighing 83.6 kilograms, opened the doors to the frontiers of space.

Orbiting Earth every 96 minutes at 28,962 kilometers per hour, Sputnik 1 came within 215 kilometers of Earth at its closest point and 939 kilometers away at its farthest point. It carried equipment to measure the atmosphere and to experiment with the transmission of electromagnetic waves from space. Equipped with two radio transmitters (at different frequencies) that broadcast for twenty-one days, Sputnik 1 was in orbit for ninety-two days, until January 4, 1958, when it disintegrated in the atmosphere.
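These orbital figures are mutually consistent. Using standard values for Earth's mean radius and gravitational parameter (assumptions here, since the text gives only the altitudes and period), Kepler's third law recovers the 96-minute period from the 215-kilometer perigee and 939-kilometer apogee:

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter (standard value, assumed)
R_EARTH = 6_371.0         # km, mean Earth radius (standard value, assumed)

def orbital_period_minutes(perigee_km: float, apogee_km: float) -> float:
    """Period of an elliptical Earth orbit given its perigee and apogee altitudes."""
    # Semi-major axis: Earth's radius plus the mean of the two altitudes.
    a = R_EARTH + (perigee_km + apogee_km) / 2.0
    # Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

print(round(orbital_period_minutes(215.0, 939.0), 1))  # ≈ 96 minutes
```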
Sputnik 1 was launched using a Soviet intercontinental ballistic missile (ICBM) modified by Soviet rocket expert Sergei P. Korolev. After the launch of Sputnik 2, less than a month later, Chester Bowles, a former United States ambassador to India and Nepal, wrote: “Armed with a nuclear warhead, the rocket which launched Sputnik 1 could destroy New York, Chicago, or Detroit 18 minutes after the button was pushed in Moscow.”

64 / Artificial satellite

Sergei P. Korolev

Sergei P. Korolev’s rocket launched the Space Age: Sputnik 1 climbed into outer space aboard one of his R-7 missiles. Widely considered the Soviet Union’s premier rocket scientist, he almost died in Joseph Stalin’s infamous Siberian prison camps before he could build the launchers that made his country a military superpower and a pioneer of space exploration.

Born in 1907, Korolev studied aeronautical engineering at the Kiev Polytechnic Institute. Upon graduation he helped found the Group for Investigation of Reactive Motion, which in the early 1930’s tested liquid-fuel rockets. His success attracted the military’s attention. It created the Reaction Propulsion Scientific Research Institute for him, and he was on the verge of testing a rocket-propelled airplane when he was arrested during a political purge in 1937 and sent as a prison laborer to the Kolyma gold mines. After Germany attacked Russia in World War II, Korolev was transferred to a prison research institute to help develop advanced aircraft.

After World War II, rehabilitated in the eyes of the Soviet authorities, Korolev was placed in charge of long-range ballistic missile research. In 1953 he began to build the R-7 intercontinental ballistic missile (ICBM). While other design bureaus concentrated on developing the ICBM into a Cold War weapon, Korolev built rockets that explored the Moon with probes. His goal was to send cosmonauts there too. With his designs and guidance, the Soviet space program proved in 1961 that human space flight was possible, and in 1962 he began development of the N-1, a booster that, like the American Saturn V, was powerful enough to send a crewed vehicle to the Moon. Tragically, Korolev died following minor surgery in 1966. The N-1 project was cancelled in 1971, along with Russian dreams of settling its citizens on the Moon.

Although the launch of Sputnik 1 came as a shock to the general public, it came as no surprise to those who followed rocketry. In June, 1957, the United States Air Force had issued a nonclassified memo stating that there was “every reason to believe that the Russian satellite shot would be made on the hundredth anniversary” of Konstantin Tsiolkovsky’s birth.
Thousands of Launches
Rockets have been used since at least the twelfth century, when Europeans and the Chinese were using black powder devices. In 1659, the Polish engineer Kazimir Semenovich published his Roketten für Luft und Wasser (rockets for air and water), which contained a drawing of a three-stage rocket. Rockets were used and perfected for warfare during the nineteenth and twentieth centuries. Nazi Germany’s V-2 rocket (thousands of which were launched against England during the closing years of World War II) was the model for American and Soviet rocket designers between 1945 and 1957. In the Soviet Union, Tsiolkovsky had been thinking and writing about space flight since the last decade of the nineteenth century, and in the United States, Robert H. Goddard had been thinking about and experimenting with rockets since the first decade of the twentieth century.
Wernher von Braun had worked on rocket projects for Nazi Germany during World War II, and, as the war was ending in May, 1945, von Braun and several hundred other people involved in German rocket projects surrendered to American troops in Europe. Hundreds of other German rocket experts ended up in the Soviet Union to continue their research. Tom Bower pointed out in his book The Paperclip Conspiracy: The Hunt for the Nazi Scientists (1987)—so named because American “recruiting officers had identified [Nazi] scientists to be offered contracts by slipping an ordinary paperclip onto their files”—that American rocketry research was helped tremendously by Nazi scientists who switched sides after World War II.

The successful launch of Sputnik 1 convinced people that space travel was no longer simply science fiction. The successful launch of Sputnik 2 on November 3, 1957, carrying the first space traveler, a dog named Laika (who was euthanized in orbit because there were no plans to retrieve her), showed that the launch of Sputnik 1 was only the beginning of greater things to come.
Consequences
After October 4, 1957, the Soviet Union and other nations launched more experimental satellites. On January 31, 1958, the United States sent up Explorer 1, after failing to launch a Vanguard satellite on December 6, 1957.
Arthur C. Clarke, most famous for his many books of science fiction, published a technical paper in 1945 entitled “Extra-Terrestrial Relays: Can Rocket Stations Give World-Wide Radio Coverage?” In that paper, he pointed out that a satellite placed in orbit at the correct height and speed above the equator would be able to hover over the same spot on Earth. The placement of three such “geostationary” satellites would allow radio signals to be transmitted around the world. By the 1990’s, communications satellites were numerous.
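Clarke's "correct height" can be computed rather than guessed: a circular orbit whose period matches one sidereal day sits at a single altitude. A sketch of the arithmetic, with standard constants that are assumptions rather than figures from the text:

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter (standard value, assumed)
R_EARTH = 6_371.0         # km, mean Earth radius (standard value, assumed)
SIDEREAL_DAY = 86_164.1   # s, Earth's rotation period relative to the stars

# Invert Kepler's third law T = 2*pi*sqrt(a^3/mu) for the semi-major
# axis of a circular orbit whose period is one sidereal day.
a = (MU_EARTH * (SIDEREAL_DAY / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)
altitude_km = a - R_EARTH

print(round(altitude_km))  # ≈ 35,800 km above the equator
```

A satellite at that altitude over the equator completes one orbit per rotation of Earth, so it appears to hover over a fixed spot, which is what makes Clarke's three-satellite relay work.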
In the first twenty-five years after Sputnik 1 was launched, from 1957 to 1982, more than two thousand objects were placed into various Earth orbits by more than twenty-four nations. On average, something was launched into space every 3.82 days during this twenty-five-year period, all beginning with Sputnik 1.
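The launch-rate figure is simple division: twenty-five years at one launch every 3.82 days comes to roughly 2,400 objects, matching the "more than two thousand" above. A quick check (365.25 days per year is the only assumption):

```python
DAYS_PER_YEAR = 365.25  # average calendar year, including leap days (assumed)
years = 25
interval_days = 3.82    # average spacing between launches, from the text

objects = years * DAYS_PER_YEAR / interval_days
print(round(objects))  # → 2390, i.e. "more than two thousand" objects
```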
See also Communications satellite; Cruise missile; Rocket; V-2 rocket; Weather satellite.

Further Reading

Dickson, Paul. Sputnik: The Shock of the Century. New York: Walker, 2001.
Heppenheimer, T. A. Countdown: A History of Space Flight. New York: John Wiley & Sons, 1997.
Logsdon, John M., Roger D. Launius, and Robert W. Smith. Reconsidering Sputnik: Forty Years Since the Soviet Satellite. Australia: Harwood Academic, 2000.
Aspartame

The invention: An artificial sweetener with a comparatively natural taste widely used in carbonated beverages.

The people behind the invention:
Arthur H. Hayes, Jr. (1933- ), a physician and commissioner of the U.S. Food and Drug Administration (FDA)
James M. Schlatter (1942- ), an American chemist
Michael Sveda (1912- ), an American chemist and inventor
Ludwig Frederick Audrieth (1901- ), an American chemist and educator
Ira Remsen (1846-1927), an American chemist and educator
Constantin Fahlberg (1850-1910), a German chemist
Sweetness Without Calories

People have sweetened food and beverages since before recorded history. The most widely used sweetener is sugar, or sucrose. The only real drawback to the use of sucrose is that it is a nutritive sweetener: In addition to adding a sweet taste, it adds calories. Because sucrose is readily absorbed by the body, an excessive amount can be life-threatening to diabetics. This fact alone would make the development of nonsucrose sweeteners attractive.

There are three common nonsucrose sweeteners in use around the world: saccharin, cyclamates, and aspartame. Saccharin was the first of this group to be discovered, in 1879. Constantin Fahlberg synthesized saccharin based on the previous experimental work of Ira Remsen using toluene (derived from petroleum). This product was found to be three hundred to five hundred times as sweet as sugar, although some people could detect a bitter aftertaste.

68 / Aspartame

In 1944, the chemical family of cyclamates was discovered by Ludwig Frederick Audrieth and Michael Sveda. Although these compounds are only thirty to eighty times as sweet as sugar, there was no detectable aftertaste. By the mid-1960’s, cyclamates had replaced saccharin as the leading nonnutritive sweetener in the United States. Although cyclamates are still in use throughout the world, in October, 1969, the FDA removed them from the list of approved food additives because of tests that indicated possible health hazards.
A Political Additive

Aspartame is the latest of the artificial sweeteners derived from natural ingredients—in this case, two amino acids, one from milk and one from bananas. Discovered by accident in 1965 by the American chemist James M. Schlatter when he licked his fingers during an experiment, aspartame is 180 times as sweet as sugar. In 1974, the FDA approved its use in dry foods such as gum and cereal and as a sugar replacement.

Shortly after its approval for this limited application, the FDA held public hearings on the safety concerns raised by John W. Olney, a professor of neuropathology at Washington University in St. Louis. There was some indication that aspartame, when combined with the common food additive monosodium glutamate, caused brain damage in children. These fears were confirmed, but the risk of brain damage was limited to a small percentage of individuals with a rare genetic disorder. At this point, the public debate took a political turn: Senator William Proxmire charged FDA Commissioner Alexander M. Schmidt with public misconduct. This controversy resulted in aspartame being taken off the market in 1975.

In 1981, the new FDA commissioner, Arthur H. Hayes, Jr., reapproved aspartame for use in the same applications: as a tabletop sweetener, as a cold-cereal additive, in chewing gum, and for other miscellaneous uses. In 1983, the FDA approved aspartame for use in carbonated beverages, its largest application to date. Later safety studies revealed that children with a rare metabolic disease, phenylketonuria, could not ingest this sweetener without severe health risks because of the presence of phenylalanine in aspartame. This condition results in a rapid buildup of phenylalanine in the blood. Laboratories simulated this condition in rats and found that high doses of aspartame inhibited the synthesis of dopamine, a neurotransmitter. Once this happens, an increase in the frequency of seizures can occur. There was no direct evidence, however, that aspartame actually caused seizures in these experiments.
Many other compounds are being tested for use as sugar replacements, the sweetest being a relative of aspartame. This compound is seventeen thousand to fifty-two thousand times sweeter than sugar.
Impact
The business fallout from the approval of a new low-calorie sweetener occurred over a short span of time. In 1981, sales of this artificial sweetener by G. D. Searle and Company were $74 million. In 1983, sales rose to $336 million and exceeded half a billion dollars the following year. These figures represent sales of more than 2,500 tons of the product. In 1985, 3,500 tons of aspartame were consumed. Clearly, the product’s introduction was a commercial success for Searle. During this same period, the percentage of reduced-calorie carbonated beverages containing saccharin declined from 100 percent to 20 percent in an industry that had $4 billion in sales. Consumers overwhelmingly preferred products containing aspartame; the bitter aftertaste of saccharin was rejected in favor of the new, less powerful sweetener.
There is a trade-off in using these products. The FDA found evidence linking both saccharin and cyclamates to an elevated incidence of cancer. Cyclamates were banned in the United States for this reason. Public resistance to this measure caused the agency to back away from its position. The rationale was that, compared to other health risks associated with the consumption of sugar (especially for diabetics and overweight persons), the chance of getting cancer was slight and therefore a risk that many people would choose to ignore. The total domination of aspartame in the sweetener market seems to support this assumption.
See also Cyclamate; Genetically engineered insulin.
Further Reading

Blaylock, Russell L. Excitotoxins: The Taste That Kills. Santa Fe, N.Mex.: Health Press, 1998.
Hull, Janet Starr. Sweet Poison: How the World’s Most Popular Artificial Sweetener Is Killing Us—My Story. Far Hills, N.J.: New Horizon Press, 1999.
Roberts, Hyman Jacob. Aspartame (NutraSweet®): Is It Safe? Philadelphia: Charles Press, 1990.
Stegink, Lewis D., and Lloyd J. Filer. Aspartame: Physiology and Biochemistry. New York: M. Dekker, 1984.
Stoddard, Mary Nash. Deadly Deception: Story of Aspartame, Shocking Expose of the World’s Most Controversial Sweetener. Dallas: Odenwald Press, 1998.
Assembly line

The invention: A manufacturing technique pioneered in the automobile industry by Henry Ford that lowered production costs and helped bring automobile ownership within the reach of millions of Americans in the early twentieth century.

The people behind the invention:
Henry Ford (1863-1947), an American carmaker
Eli Whitney (1765-1825), an American inventor
Elisha King Root (1808-1865), the developer of division of labor
Oliver Evans (1755-1819), the inventor of power conveyors
Frederick Winslow Taylor (1856-1915), an efficiency engineer
A Practical Man

Henry Ford built his first “horseless carriage” by hand in his home workshop in 1896. In 1903, the Ford Motor Company was born. Ford’s first product, the Model A, sold for less than one thousand dollars, while other cars at that time were priced at five to ten thousand dollars each. When Ford and his partners tried, in 1905, to sell a more expensive car, sales dropped. Then, in 1907, Ford decided that the Ford Motor Company would build “a motor car for the great multitude.” It would be called the Model T.

The Model T came out in 1908 and was everything that Henry Ford said it would be. It was a low-priced (about $850), practical car that came in one color only: black. In the twenty years during which the Model T was built, the basic design never changed. Yet the price of the Model T, or “Tin Lizzie,” as it was affectionately called, dropped over the years to less than half that of the original. As the price dropped, sales increased, and the Ford Motor Company quickly became the world’s largest automobile manufacturer.

72 / Assembly line

The last of more than 15 million Model T’s was made in 1927. Although it looked and drove almost exactly like the first Model T, these two automobiles were built in entirely different ways. The first was custom-built, while the last came off an assembly line.

At first, Ford had built his cars the same way everyone else did: one at a time. Skilled mechanics would work on a car from start to finish, while helpers and runners brought parts to these highly paid craftsmen as needed. After finishing one car, the mechanics and their helpers would begin the next.
The Quest for Efficiency

Custom-built products are good when there is little demand and buyers are willing to pay the high labor costs. This was not the case with the automobile. Ford realized that in order to make a large number of quality cars at a low price, he had to find a more efficient way to build cars. To do this, he looked to the past and the work of others. He found four ideas: interchangeable parts, continuous flow, division of labor, and elimination of wasted motion.
Eli Whitney, the inventor of the cotton gin, was the first person to use interchangeable parts successfully in mass production. In 1798, the United States government asked Whitney to make several thousand muskets in two years. Instead of finding and hiring gunsmiths to make the muskets by hand, Whitney used most of his time and money to design and build special machines that could make large numbers of identical parts—one machine for each part that was needed to build a musket. These tools, and others Whitney made for holding, measuring, and positioning the parts, made it easy for semiskilled, and even unskilled, workers to build a large number of muskets.

Model T assembly line in the Ford Motor Company’s Highland Park factory. (Library of Congress)
Production can be made more efficient by carefully arranging the different stages of production to create a “continuous flow.” Ford borrowed this idea from at least two places: the meat-packing houses of Chicago and an automatic grain mill run by Oliver Evans.

Ford’s idea for a moving assembly line came from Chicago’s great meat-packing houses of the late 1860’s. There, the bodies of animals were moved along an overhead rail past a number of workers, each of whom made a certain cut or handled one part of the packing job. This meant that many animals could be butchered and packaged in a single day.

Ford looked to Oliver Evans for an automatic conveyor system. In 1783, Evans had designed and operated an automatic grain mill that could be run by only two workers. As one worker poured grain into a funnel-shaped container, called a “hopper,” at one end of the mill, a second worker filled sacks with flour at the other end. Everything in between was done automatically, as Evans’s conveyors passed the grain through the different steps of the milling process without any human help.
The idea of “division of labor” is simple: When one complicated job is divided into several easier jobs, products can be made faster, with fewer mistakes, by workers who need fewer skills than before. Elisha King Root had used this principle to make the famous Colt “Six-Shooter.” In 1849, Root went to work for Samuel Colt at his Connecticut factory and proved to be a manufacturing genius. By dividing the work into very simple steps, with each step performed by one worker, Root was able to make many more guns in much less time.

Before Ford applied Root’s idea to the making of engines, it took one worker one day to make one engine. By breaking down the complicated job of making an automobile engine into eighty-four simpler jobs, Ford was able to make the process much more efficient. By assigning one person to each job, Ford’s company was able to make 352 engines per day—an increase of more than 400 percent.
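Taking these figures at face value, and assuming one worker per job (the text does not state the staffing), a back-of-envelope check of the gain in output per worker:

```python
workers = 84                     # one worker for each of the 84 simpler jobs (assumed)
engines_per_day_after = 352      # daily output of the divided-labor process
engines_per_worker_before = 1.0  # before: one worker built one engine in one day

per_worker_after = engines_per_day_after / workers
print(round(per_worker_after, 1))  # ≈ 4.2 engines per worker per day, versus 1.0 before
```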
Frederick Winslow Taylor has been called the “original efficiency expert.” His idea was that inefficiency was caused by wasted time and wasted motion, so he studied ways to eliminate both. He proved that, in the long run, doing a job too quickly was as bad as doing it too slowly. “Correct speed is the speed at which men can work hour after hour, day after day, year in and year out, and remain continuously in good health,” he said. Taylor also studied ways to streamline workers’ movements. In this way, he was able to keep wasted motion to a minimum.

Henry Ford

Henry Ford (1863-1947) was more of a synthesizer and innovator than an inventor. Others invented the gasoline-powered automobile and the techniques of mass production, but it was Ford who brought the two together. The result was the assembly-line-produced Model T that the Ford Motor Company turned out in the millions from 1908 until 1927. And it changed America profoundly.

Ford’s idea was to lower production costs enough that practically everyone could afford a car, not just the wealthy. He succeeded brilliantly. The first Model T’s cost $850, rock bottom for the industry, and by 1927 the price was down to $290. Americans bought them up like no other technological marvel in the nation’s history. For years, out of every one hundred cars on the road, almost forty were Model T’s. The basic version came with nothing on the dashboard but an ignition switch, and the cars were quirky—so much so that an entire industry grew up to outfit them for the road and keep them running. Even then, they could climb steep slopes only in reverse, and starting them was something of an art.

Americans took the Model T to heart, affectionately nicknaming it the flivver and the Tin Lizzie. This “democratization of the automobile,” as Ford called it, not only gave common people modern transportation and made them more mobile than ever before; it started the American love affair with the car. Even after production stopped in 1927, the Model T Ford remained the archetype of American automobiles. As the great essayist E. B. White wrote in “Farewell, My Lovely!” (1936), his eulogy for the Model T, “. . . to a few million people who grew up with it, the old Ford practically was the American scene.”
Impact

The changeover from custom production to mass production was an evolution rather than a revolution. Henry Ford applied the four basic ideas of mass production slowly and with care, testing each new idea before it was used. In 1913, the first moving assembly line for automobiles was being used to make Model T’s. Ford was able to make his Tin Lizzies faster than ever, and his competitors soon followed his lead. He had succeeded in making it possible for millions of people to buy automobiles.

Ford’s work gave a new push to the Industrial Revolution. It showed Americans that mass production could be used to improve quality, cut the cost of making an automobile, and improve profits. In fact, the Model T was so profitable that in 1914 Ford was able to double the minimum daily wage of his workers, so that they too could afford to buy Tin Lizzies.
Although Americans account for only about 6 percent of the world’s population, they now own about 50 percent of its wealth. There are more than twice as many radios in the United States as there are people. The roads are crowded with more than 180 million automobiles. Homes are filled with the sounds and sights emitted by more than 150 million television sets. Never have the people of one nation owned so much. Where did all these products—radios, cars, television sets—come from? The answer is industry, which still depends on the methods developed by Henry Ford.
See also CAD/CAM; Color television; Interchangeable parts; Steelmaking process.
Further Reading
Abernathy, William, Kim Clark, and Alan Kantrow. Industrial Renaissance. New York: Basic Books, 1983.
Bruchey, Stuart. Enterprise: The Dynamic Economy of a Free People. Cambridge, Mass.: Harvard University Press, 1990.
Flink, James. The Car Culture. Cambridge, Mass.: MIT Press, 1975.
Hayes, Robert. Restoring Our Competitive Edge. New York: Wiley, 1984.
Olson, Sidney. Young Henry Ford: A Picture History of the First Forty Years. Detroit: Wayne State University Press, 1997.
Atomic bomb
The invention: A weapon of mass destruction created during World War II that utilized nuclear fission to create explosions equivalent to thousands of tons of trinitrotoluene (TNT).

The people behind the invention:
J. Robert Oppenheimer (1904-1967), an American physicist
Leslie Richard Groves (1896-1970), an American engineer and Army general
Enrico Fermi (1901-1954), an Italian American nuclear physicist
Niels Bohr (1885-1962), a Danish physicist
Energy on a Large Scale

The first evidence of uranium fission (the splitting of uranium atoms) was observed by German chemists Otto Hahn and Fritz Strassmann in Berlin at the end of 1938. When these scientists discovered radioactive barium impurities in neutron-irradiated uranium, they wrote to their colleague Lise Meitner in Sweden. She and her nephew, physicist Otto Robert Frisch, calculated the large release of energy that would be generated during the nuclear fission of certain elements. This result was reported to Niels Bohr in Copenhagen.

Meanwhile, similar fission energies were measured by Frédéric Joliot and his associates in Paris, who demonstrated the release of up to three additional neutrons during nuclear fission. It was recognized immediately that if neutron-induced fission released enough additional neutrons to cause at least one more such fission, a self-sustaining chain reaction would result, yielding energy on a large scale.
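The chain-reaction condition described above can be put in numbers with a toy model (invented here for illustration, not part of the original text): let k be the average number of neutrons from each fission that go on to cause another fission. The reaction dies out when k is below 1 and grows explosively when k exceeds 1.

```python
def neutron_population(k: float, generations: int, start: float = 1.0) -> float:
    """Number of fission-causing neutrons after a number of generations,
    assuming each fission leads on average to k further fissions."""
    return start * k ** generations

# Subcritical (k < 1): the chain dies out within dozens of generations.
print(neutron_population(0.9, 50))  # about 0.005

# Supercritical (k > 1): the chain grows explosively.
print(neutron_population(2.0, 50))  # about 1.1e15
```

The sharp threshold at k = 1 is why the text stresses "at least one more such fission" per neutron released.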
While visiting the United States from January to May of 1939, Bohr derived a theory of fission with John Wheeler of Princeton University. This theory led Bohr to predict that the common isotope uranium 238 (which constitutes 99.3 percent of naturally occurring uranium) would require fast neutrons for fission, but that the rarer uranium 235 would fission with neutrons of any energy. This meant that uranium 235 would be far more suitable for use in any sort of bomb. Uranium bombardment in a cyclotron led to the discovery of plutonium in 1940 and the discovery that plutonium 239 was fissionable—and thus potentially good bomb material. Uranium 238 was then used to “breed” (create) plutonium 239, which was then separated from the uranium by chemical methods.
During 1942, the Manhattan District of the Army Corps of Engineers was formed under General Leslie Richard Groves, an engineer and Army general who contracted with E. I. Du Pont de Nemours and Company to construct three secret atomic cities at a total cost of $2 billion. At Oak Ridge, Tennessee, twenty-five thousand workers built a 1,000-kilowatt reactor as a pilot plant. A second city of sixty thousand inhabitants was built at Hanford, Washington, where three huge reactors and remotely controlled plutonium-extraction plants were completed in early 1945.
A Sustained and Awesome Roar
Studies of fast-neutron reactions for an atomic bomb were brought together in Chicago in June of 1942 under the leadership of J. Robert Oppenheimer. He soon became a personal adviser to Groves, who built for Oppenheimer a laboratory for the design and construction of the bomb at Los Alamos, New Mexico. In 1943, Oppenheimer gathered two hundred of the best scientists in what was by now being called the Manhattan Project to live and work in this third secret city.

Two bomb designs were developed. A gun-type bomb called “Little Boy” used 15 kilograms of uranium 235 in a 4,500-kilogram cylinder about 2 meters long and 0.5 meter in diameter, in which a uranium bullet could be fired into three uranium target rings to form a critical mass. An implosion-type bomb called “Fat Man” had a 5-kilogram spherical core of plutonium about the size of an orange, which could be squeezed inside a 2,300-kilogram sphere about 1.5 meters in diameter by properly shaped explosives to make the mass critical in the shorter time required for the faster plutonium fission process.
A flat scrub region 200 kilometers southeast of Alamogordo, called Trinity, was chosen for the test site, and observer bunkers were built about 10 kilometers from a 30-meter steel tower. On July 13, 1945, one of the plutonium bombs was assembled at the site; the next morning, it was raised to the top of the tower. Two days later, on July 16, after a short thunderstorm delay, the bomb was detonated at 5:30 a.m. The resulting implosion initiated a chain reaction of nearly 60 fission generations in about a microsecond. It produced an intense flash of light and a fireball that expanded to a diameter of about 600 meters in two seconds, rose to a height of more than 12 kilometers, and formed an ominous mushroom shape. Forty seconds later, an air blast hit the observer bunkers, followed by a sustained and awesome roar. Measurements confirmed that the explosion had the power of 18.6 kilotons of trinitrotoluene (TNT), nearly four times the predicted value.
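The yield figure can be put into absolute units with the standard convention (supplied here, not stated in the text) that one kiloton of TNT equals 4.184 × 10¹² joules:

```python
TNT_KILOTON_JOULES = 4.184e12  # standard definition of a "kiloton of TNT"

def yield_joules(kilotons: float) -> float:
    """Convert an explosive yield in kilotons of TNT to joules."""
    return kilotons * TNT_KILOTON_JOULES

# The Trinity measurement of 18.6 kilotons:
print(f"{yield_joules(18.6):.2e} J")  # about 7.8e13 joules
```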
Impact

On March 9, 1945, 325 American B-29 bombers dropped 2,000 tons of incendiary bombs on Tokyo, resulting in 100,000 deaths from the fire storms that swept the city. Nevertheless, the Japanese military refused to surrender, and American military plans called for an invasion of Japan, with estimates of up to a half million American casualties, plus as many as 2 million Japanese casualties. On August 6, 1945, after authorization by President Harry S. Truman, the B-29 Enola Gay dropped the uranium Little Boy bomb on Hiroshima at 8:15 a.m. On August 9, the remaining plutonium Fat Man bomb was dropped on Nagasaki. Approximately 100,000 people died at Hiroshima (out of a population of 400,000), and about 50,000 more died at Nagasaki. Japan offered to surrender on August 10, and after a brief attempt by some army officers to rebel, an official announcement by Emperor Hirohito was broadcast on August 15.
The development of the thermonuclear fusion bomb, in which hydrogen isotopes could be fused together by the force of a fission explosion to produce helium nuclei and almost unlimited energy, had been proposed early in the Manhattan Project by physicist Edward Teller. Little effort was invested in the hydrogen bomb until after the surprise explosion of a Soviet atomic bomb in September, 1949, which had been built with information stolen from the Manhattan Project. After three years of development under Teller’s guidance, the first successful H-bomb was exploded on November 1, 1952, obliterating the island of Elugelab in Enewetak Atoll in the Marshall Islands of the Pacific. The arms race then accelerated until each side had stockpiles of thousands of H-bombs.
The Manhattan Project opened a Pandora’s box of nuclear weapons that would plague succeeding generations, but it contributed more than merely weapons. About 19 percent of the electrical energy in the United States is generated by about 110 nuclear reactors producing more than 100,000 megawatts of power. More than 400 reactors in thirty countries provide 300,000 megawatts of the world’s power. Reactors have made possible the widespread use of radioisotopes in medical diagnosis and therapy. Many of the techniques for producing and using these isotopes were developed by the hundreds of nuclear physicists who switched to the field of radiation biophysics after the war, ensuring that the benefits of their wartime efforts would reach the public.
See also Airplane; Breeder reactor; Cruise missile; Hydrogen bomb; Rocket; Stealth aircraft; V-2 rocket.
Further Reading
Goudsmit, Samuel Abraham, and Albert E. Moyer. The History of Modern Physics, 1800-1950. Los Angeles: Tomash Publishers, 1983.
Henshall, Phillip. The Nuclear Axis: Germany, Japan, and the Atom Bomb Race, 1939-1945. Stroud: Sutton, 2000.
Krieger, David. Splitting the Atom: A Chronology of the Nuclear Age. Santa Barbara, Calif.: Nuclear Age Peace Foundation, 1998.
Smith, June. How the Atom Bombs Began, 1939-1946. London: Brockwell, 1988.
Atomic clock
The invention: A clock using the ammonia molecule as its oscillator that surpasses mechanical clocks in long-term stability, precision, and accuracy.

The person behind the invention:
Harold Lyons (1913-1984), an American physicist
Time Measurement

The accurate measurement of basic quantities, such as length, electrical charge, and temperature, is the foundation of science. The results of such measurements dictate whether a scientific theory is valid or must be modified or even rejected. Many experimental quantities change over time, but time cannot be measured directly. It must be measured by the occurrence of an oscillation or rotation, such as the twenty-four-hour rotation of the earth. For centuries, the rising of the Sun was sufficient as a timekeeper, but the need for more precision and accuracy increased as human knowledge grew.

Progress in science can be measured by how accurately time has been measured at any given point. In 1714, the British government, after the disastrous sinking of a British fleet in 1707 because of a miscalculation of longitude, offered a reward of 20,000 pounds for the invention of a ship’s chronometer (a very accurate clock). Latitude is determined by the altitude of the Sun above the southern horizon at noon local time, but the determination of longitude requires an accurate clock set at Greenwich, England, time. The difference between the ship’s clock and the local sun time gives the ship’s longitude. This permits the accurate charting of new lands, such as those that were being explored in the eighteenth century. John Harrison, an English instrument maker, eventually built a chronometer that was accurate within one minute after five months at sea. He received his reward from Parliament in 1765.
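The longitude rule comes from the Earth's rotation: 360 degrees in 24 hours, or 15 degrees for every hour of difference between Greenwich time and local sun time. A minimal sketch (the example times are invented):

```python
def longitude_degrees(greenwich_time_h: float, local_sun_time_h: float) -> float:
    """Longitude from the gap between a Greenwich-set chronometer and
    local solar time. Positive result means west of Greenwich, where
    local sun time lags behind Greenwich time."""
    return (greenwich_time_h - local_sun_time_h) * 15.0  # 15 degrees per hour

# Local noon observed while the Greenwich-set chronometer reads 16:00:
print(longitude_degrees(16.0, 12.0))  # 60.0 degrees west
```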
Atomic Clocks Provide Greater Stability

A clock contains four parts: energy to keep the clock operating, an oscillator, an oscillation counter, and a display. A grandfather
clock has weights that fall slowly, providing energy that powers the clock’s gears. The pendulum, a weight on the end of a rod, swings back and forth (oscillates) with a regular beat. The length of the rod determines the pendulum’s period of oscillation. The pendulum is attached to gears that count the oscillations and drive the display hands.
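The dependence of period on rod length follows the small-angle pendulum formula T = 2π√(L/g), which is not given in the text but makes the point concrete:

```python
import math

def pendulum_period_s(length_m: float, g: float = 9.81) -> float:
    """Small-angle oscillation period of a simple pendulum, in seconds."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A rod just under one meter long completes a full swing in two seconds,
# i.e., it "ticks" once per second in each direction:
print(round(pendulum_period_s(0.994), 3))  # close to 2.0
```

Because the period depends on the length L, any thermal expansion of the rod changes the beat, which is the stability limit discussed next.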
There are limits to a mechanical clock’s accuracy and stability. The length of the rod changes as the temperature changes, so the period of oscillation changes. Friction in the gears changes as they wear out. Making the clock smaller increases its accuracy, precision, and stability. Accuracy is how close the clock is to telling the actual time. Stability indicates how the accuracy changes over time, while precision is the number of accurate decimal places in the display. A grandfather clock, for example, might be accurate to ten seconds per day and precise to a second, while having a stability of minutes per week.
Applying an electrical signal to a quartz crystal will make the crystal oscillate at its natural vibration frequency, which depends on its size, its shape, and the way in which it was cut from the larger crystal. Since the faster a clock’s oscillator vibrates, the more precise the clock, a crystal-based clock is more precise than a large pendulum clock. By keeping the crystal under constant temperature, the clock is kept accurate, but it eventually loses its stability and slowly wears out.
In 1948, Harold Lyons and his colleagues at the National Bureau of Standards (NBS) constructed the first atomic clock, which used the ammonia molecule as its oscillator. Such a clock is called an atomic clock because, when it operates, a nitrogen atom vibrates. The pyramid-shaped ammonia molecule is composed of a triangular base; there is a hydrogen atom at each corner and a nitrogen atom at the top of the pyramid. The nitrogen atom does not remain at the top; if it absorbs radio waves of the right energy and frequency, it passes through the base to produce an upside-down pyramid and then moves back to the top. This oscillation frequency occurs at 23,870 megacycles (1 megacycle equals 1 million cycles) per second.
Lyons’s clock was actually a quartz-ammonia clock, since the signal from a quartz crystal produced radio waves of the crystal’s frequency that were fed into an ammonia-filled tube. If the radio waves were at 23,870 megacycles, the ammonia molecules absorbed the waves; a detector sensed this, and it sent no correction signal to the crystal. If radio waves deviated from 23,870 megacycles, the ammonia did not absorb them, the detector sensed the unabsorbed radio waves, and a correction signal was sent to the crystal. The atomic clock’s accuracy and precision were comparable to those of a quartz-based clock—one part in a hundred million—but the atomic clock was more stable because molecules do not wear out.
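The correction scheme just described is a negative-feedback loop. This sketch is invented for illustration (the real control electronics were analog): each step steers the crystal frequency back toward the ammonia absorption line at 23,870 megacycles.

```python
AMMONIA_LINE_MC = 23_870.0  # ammonia absorption frequency, in megacycles

def correct(crystal_mc: float, gain: float = 0.5) -> float:
    """One feedback step: when the ammonia cell is not fully absorbing,
    a correction signal nudges the crystal toward the absorption line."""
    error = AMMONIA_LINE_MC - crystal_mc
    return crystal_mc + gain * error

f = 23_800.0  # crystal has drifted low
for _ in range(20):
    f = correct(f)
print(round(f, 3))  # settles at the ammonia line, 23870.0
```

A crystal sitting exactly on the line receives no correction, which is the "sent no correction signal" case in the text.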
The atomic clock’s accuracy was improved by using cesium 133 atoms as the source of oscillation. These atoms oscillate at 9,192,631,770 plus or minus 20 cycles per second. They are accurate to a billionth of a second per day and precise to nine decimal places. A cesium clock is stable for years. Future developments in atomic clocks may see accuracies of one part in a million billion.
Impact

The development of stable, very accurate atomic clocks has far-reaching implications for many areas of science. Global positioning satellites send signals to receivers on ships and airplanes. By timing the signals, the receiver’s position is calculated to within several meters of its true location.
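The reason the clocks must be so good: the signals travel at the speed of light, so every nanosecond of timing error becomes roughly 30 centimeters of position error. A quick check:

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def range_error_m(timing_error_s: float) -> float:
    """Distance error produced by a given timing error in a
    speed-of-light ranging measurement, as in satellite positioning."""
    return SPEED_OF_LIGHT_M_S * timing_error_s

print(range_error_m(1e-9))  # about 0.3 meters per nanosecond of error
```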
Chemists are interested in finding the speed of chemical reactions, and atomic clocks are used for this purpose. The atomic clock led to the development of the maser (an acronym for microwave amplification by stimulated emission of radiation), which is used to amplify weak radio signals, and the maser led to the development of the laser, a light-frequency maser that has more uses than can be listed here.
Atomic clocks have been used to test Einstein’s theories of relativity, which state that time on a moving clock, as observed by a stationary observer, slows down, and that a clock slows down near a large mass (because of the effects of gravity). Under normal conditions of low velocities and low mass, the changes in time are very small, but atomic clocks are accurate and stable enough to detect even these small changes. In such experiments, three sets of clocks were used—one group remained on Earth, one was flown west around the earth on a jet, and the last set was flown east. By comparing the times of the in-flight sets with the stationary set, the predicted slowdowns of time were observed and the theories were verified.
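The size of the velocity effect can be estimated from special relativity's time-dilation factor √(1 − v²/c²). For an airliner the shift is tens of nanoseconds, which shows why atomic-clock stability is required (the flight figures below are illustrative, not taken from the actual experiment, and the gravitational effect is ignored):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dilation_lag_s(speed_m_s: float, flight_time_s: float) -> float:
    """Seconds by which a clock moving at the given speed falls behind a
    stationary one over the given flight time (velocity effect only)."""
    gamma_inv = math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return flight_time_s * (1.0 - gamma_inv)

# A 45-hour trip around the world at a jet speed of 250 m/s:
print(dilation_lag_s(250.0, 45 * 3600))  # on the order of tens of nanoseconds
```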
See also Carbon dating; Cyclotron; Electric clock; Laser; Synchrocyclotron; Tevatron accelerator.
Further Reading
Audoin, Claude, and Bernard Guinot. The Measurement of Time: Time, Frequency, and the Atomic Clock. New York: Cambridge University Press, 2001.
Barnett, Jo Ellen. Time’s Pendulum: The Quest to Capture Time—From Sundials to Atomic Clocks. New York: Plenum Trade, 1998.
Bendick, Jeanne. The First Book of Time. New York: F. Watts, 1970.
“Ultra-Accurate Atomic Clock Unveiled at NIST Laboratory.” Research and Development 42, no. 2 (February, 2000).
Atomic-powered ship
The invention: The world’s first atomic-powered merchant ship demonstrated a peaceful use of atomic power.

The people behind the invention:
Otto Hahn (1879-1968), a German chemist
Enrico Fermi (1901-1954), an Italian American physicist
Dwight D. Eisenhower (1890-1969), president of the United States, 1953-1961
Splitting the Atom

In 1938, Otto Hahn, working at the Kaiser Wilhelm Institute for Chemistry, discovered that bombarding uranium atoms with neutrons causes them to split into two smaller, lighter atoms. A large amount of energy is released during this process, which is called “fission.” When one kilogram of uranium is fissioned, it releases the same amount of energy as does the burning of 3,000 metric tons of coal. The fission process also releases new neutrons.
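That equivalence can be sanity-checked with round figures: roughly 8 × 10¹³ joules per kilogram of fissioned uranium and roughly 3 × 10⁷ joules per kilogram of coal (both values are approximations supplied here, not from the text):

```python
FISSION_J_PER_KG = 8.0e13  # approx. energy released by fissioning 1 kg of uranium
COAL_J_PER_KG = 2.9e7      # approx. energy released by burning 1 kg of coal

# How much coal matches the energy of one kilogram of fissioned uranium?
coal_equivalent_kg = FISSION_J_PER_KG / COAL_J_PER_KG
print(f"{coal_equivalent_kg / 1000:.0f} metric tons of coal")  # close to 3,000
```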
Enrico Fermi suggested that these new neutrons could be used to split more uranium atoms and produce a chain reaction. Fermi and his assistants produced the first human-made chain reaction at the University of Chicago on December 2, 1942. Although the first use of this new energy source was the atomic bombs that were used to defeat Japan in World War II, it was later realized that a carefully controlled chain reaction could produce useful energy. The submarine Nautilus, launched in 1954, used the energy released from fission to make steam to drive its turbines.
U.S. President Dwight David Eisenhower proposed his “Atoms for Peace” program in December, 1953. On April 25, 1955, President Eisenhower announced that the “Atoms for Peace” program would be expanded to include the design and construction of an atomic-powered merchant ship, and he signed the legislation authorizing the construction of the ship in 1956.
Savannah’s Design and Construction
A contract to design an atomic-powered merchant ship was awarded to George G. Sharp, Inc., on April 4, 1957. The ship was to carry approximately one hundred passengers (later reduced to sixty to reduce the ship’s cost) and 10,886 metric tons of cargo while making a speed of 21 knots, about 39 kilometers per hour. The ship was to be 181 meters long and 23.7 meters wide. The reactor was to provide steam for a 20,000-horsepower turbine that would drive the ship’s propeller. Most of the ship’s machinery was similar to that of existing ships; the major difference was that steam came from a reactor instead of a coal- or oil-burning boiler.
New York Shipbuilding Corporation of Camden, New Jersey, won the contract to build the ship on November 16, 1957. States Marine Lines was selected in July, 1958, to operate the ship. It was christened Savannah and launched on July 21, 1959. The name Savannah was chosen to honor the first ship to use steam power while crossing an ocean. This earlier Savannah was launched in New York City in 1818.

Ships are normally launched long before their construction is complete, and the new Savannah was no exception. It was finally turned over to States Marine Lines on May 1, 1962. After extensive testing by its operators and delays caused by labor union disputes, it began its maiden voyage from Yorktown, Virginia, to Savannah, Georgia, on August 20, 1962. The original budget for design and construction was $35 million, but by this time, the actual cost was about $80 million.
Savannah’s nuclear reactor was fueled with about 7,000 kilograms (15,400 pounds) of uranium. Uranium consists of two forms, or “isotopes.” These are uranium 235, which can fission, and uranium 238, which cannot. Naturally occurring uranium is less than 1 percent uranium 235, but the uranium in Savannah’s reactor had been enriched to contain nearly 5 percent of this isotope. Thus, there was less than 362 kilograms of usable uranium in the reactor. The ship was able to travel about 800,000 kilometers on this initial fuel load. Three and a half million kilograms of water per hour flowed through the reactor under a pressure of 5,413 kilograms per square centimeter. It entered the reactor at 298.8 degrees Celsius and left at 317.7 degrees Celsius. Water leaving the reactor passed through a heat exchanger called a “steam generator.” In the steam generator, reactor water flowed through many small tubes. Heat passed through the walls of these tubes and boiled water outside them. About 113,000 kilograms of steam per hour were produced in this way at a pressure of 1,434 kilograms per square centimeter and a temperature of 240.5 degrees Celsius.
Labor union disputes dogged Savannah’s early operations, and it did not start its first trans-Atlantic crossing until June 8, 1964. Savannah was never a moneymaker. Even in the 1960’s, the trend was toward much bigger ships. It was announced that the ship would be retired in August, 1967, but that did not happen. It was finally put out of service in 1971. Later, Savannah was placed on permanent display at Charleston, South Carolina.
Consequences

Following the United States’ lead, Germany and Japan built atomic-powered merchant ships. The Soviet Union is believed to have built several atomic-powered icebreakers. Germany’s Otto Hahn, named for the scientist who first split the atom, began service in 1968, and Japan’s Mutsu was under construction as Savannah retired.

Numerous studies conducted in the early 1970’s claimed to prove that large atomic-powered merchant ships were more profitable than oil-fired ships of the same size. Several conferences devoted to this subject were held, but no new ships were built.
Although the U.S. Navy has continued to use reactors to power submarines, aircraft carriers, and cruisers, atomic power has not been widely used for merchant-ship propulsion. Labor union problems such as those that haunted Savannah, high insurance costs, and high construction costs are probably the reasons. Public opinion after the reactor accidents at Three Mile Island (in 1979) and Chernobyl (in 1986) is also a factor.
See also Gyrocompass; Hovercraft; Nuclear reactor; Supersonic passenger plane.
Further Reading
Epstein, Sam, and Beryl Williams Epstein. Enrico Fermi, Father of Atomic Power. Illustrated by Raymond Burns. Champaign, Ill.: Garrard Publishing, 1970.
Hahn, Otto, and Willy Ley. Otto Hahn: A Scientific Autobiography. New York: C. Scribner’s Sons, 1966.
Hoffman, Klaus. Otto Hahn: Achievement and Responsibility. New York: Springer, 2001.
“The Race to Power Bigger, Faster Ships.” Business Week 2305 (November 10, 1973).
“Underway on Nuclear Power.” All Hands 979 (November, 1998).
Autochrome plate
The invention: The first commercially successful process in which a single exposure in a regular camera produced a color image.

The people behind the invention:
Louis Lumière (1864-1948), a French inventor and scientist
Auguste Lumière (1862-1954), an inventor, physician, physicist, chemist, and botanist
Alphonse Seyewetz, a skilled scientist and assistant of the Lumière brothers
Adding Color

In 1882, Antoine Lumière, painter, pioneer photographer, and father of Auguste and Louis, founded a factory to manufacture photographic gelatin dry-plates. After the Lumière brothers took over the factory’s management, they expanded production to include roll film and printing papers in 1887 and also carried out joint research that led to fundamental discoveries and improvements in photographic development and other aspects of photographic chemistry.

While recording and reproducing the actual colors of a subject was not possible at the time of photography’s inception (about 1822), the first practical photographic process, the daguerreotype, was able to render both striking detail and good tonal quality. Thus, the desire to produce full-color images, or some approximation to realistic color, occupied the minds of many photographers and inventors, including Louis and Auguste Lumière, throughout the nineteenth century.
Antoine Lumière and Sons

Antoine Lumière was explosive in temperament, loved a good fight, and despised Americans. With these qualities—and his sons to take care of the practicalities—he turned France into a leader of the early photography and film industries.

Lumière was born into a family of wine growers in 1840 and trained to be a sign painter. Bored with his job, he learned the new art of photography, set up a studio in Lyon, and began to experiment with ways to make his own photographic plates. Failures led to frustration, and frustration ignited his temper, which often ended in his smashing the furniture and glassware nearby. His sons, Auguste, born 1862, and Louis, born 1864, came to the rescue. Louis, a science whiz as a teenager, succeeded where his father had failed. The dry plate he invented, Blue Label, was the most sensitive yet. The Lumières set up a factory to manufacture the plates and quickly found themselves wealthy, but the old man’s love of extravagant spending and parties led them to the door of bankruptcy in 1882. His sons had to take control to save the family finances.

The father, an ardent French patriot, soon threw himself into a new crusade. American tariffs made it impossible for the Lumières to make a profit selling their photographic plates in the United States, which so angered the old man that he looked for revenge. He found it in the form of Thomas Edison’s Kinetoscope in 1894. He got hold of samples, and soon the family factory was making motion picture film of its own and could undersell Edison in France. Louis also invented a projector, adapted from a sewing machine, that made it possible for movies to be shown to audiences.

Before Antoine Lumière died in Paris in 1911, he had the satisfaction of seeing his beloved France producing better, cheaper photographic products than those available from America, as well as becoming a pioneer in film making.

As researchers set out to reproduce the colors of nature, the first process that met with any practical success was based on the additive color theory expounded by the Scottish physicist James Clerk Maxwell in 1861. He believed that any color can be created by adding together red, green, and blue light in certain proportions. Maxwell, in his experiments, had taken three negatives through screens or filters of these additive primary colors. He then took slides made from these negatives and projected the slides through the same filters onto a screen so that their images were superimposed. As a result, he found that it was possible to reproduce the exact colors as well as the form of an object.

Unfortunately, since colors could not be printed in their tonal relationships on paper before the end of the nineteenth century, Maxwell’s experiment was unsuccessful. Although Frederick E.
Ives of Philadelphia, in 1892, optically united three transparencies so that they could be viewed in proper alignment by looking through a peephole, viewing the transparencies was still not as simple as looking at a black-and-white photograph.
The Autochrome Plate<br />
The first practical method of making a single photograph that<br />
could be viewed without any apparatus was devised by John Joly of<br />
Dublin in 1893. Instead of taking three separate pictures through<br />
three colored filters, he took one negative through one filter minutely<br />
checkered with microscopic areas colored red, green, <strong>and</strong><br />
blue. The filter <strong>and</strong> the plate were exactly the same size <strong>and</strong> were<br />
placed in contact with each other in the camera. After the plate was<br />
developed, a transparency was made, <strong>and</strong> the filter was permanently<br />
attached to it. The black-<strong>and</strong>-white areas of the picture allowed<br />
more or less light to shine through the filters; if viewed from a<br />
proper distance, the colored lights blended to form the various colors<br />
of nature.<br />
In sum, the principles of additive color and other methods, and
their potential applications in photography, had been discovered
and even demonstrated experimentally by 1880. Yet a practical
process of color photography utilizing these principles could<br />
not be produced until a truly panchromatic emulsion was available,<br />
since making a color print required being able to record the primary<br />
colors of the light cast by the subject.<br />
Louis <strong>and</strong> Auguste Lumière, along with their research associate<br />
Alphonse Seyewetz, succeeded in creating a single-plate process<br />
based on this method in 1903. It was introduced commercially as the<br />
autochrome plate in 1907 <strong>and</strong> was soon in use throughout the<br />
world. This process is one of many that take advantage of the limited<br />
resolving power of the eye. Grains or dots too small to be recognized<br />
as separate units are accepted in their entirety <strong>and</strong>, to the<br />
sense of vision, appear as tones <strong>and</strong> continuous color.
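The additive principle behind such mosaic processes can be sketched numerically. In the toy example below, the eye's averaging of unresolvably small colored elements is modeled as a simple mean; the 8-bit RGB values and tiny mosaics are modern illustrations, not measurements of actual autochrome starch grains.

```python
# Additive-color sketch: a mosaic of pure red, green, and blue elements,
# each too small to resolve individually, is perceived as their average.

def perceived_color(dots):
    """Average a list of (r, g, b) elements, as the unaided eye would."""
    n = len(dots)
    return tuple(round(sum(dot[i] for dot in dots) / n) for i in range(3))

# Equal red, green, and blue elements blend toward neutral gray...
print(perceived_color([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))  # (85, 85, 85)

# ...while a red-heavy mosaic reads as a purplish red.
print(perceived_color([(255, 0, 0), (255, 0, 0), (0, 0, 255)]))  # (170, 0, 85)
```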
Impact<br />
The autochrome plate remained one of the most popular color
processes until the 1930's, when it was superseded by subtractive
color processes. Leopold Mannes and Leopold Godowsky,
both musicians <strong>and</strong> amateur photographic researchers who eventually<br />
joined forces with Eastman Kodak research scientists, did the<br />
most to perfect the Lumière brothers’ advances in making color<br />
photography practical. Their collaboration led to the introduction in<br />
1935 of Kodachrome, a subtractive process in which a single sheet of<br />
film is coated with three layers of emulsion, each sensitive to one<br />
primary color. A single exposure produces a color image.<br />
Color photography is now commonplace. The amateur market is<br />
enormous, <strong>and</strong> the snapshot is almost always taken in color. Commercial<br />
<strong>and</strong> publishing markets use color extensively. Even photography<br />
as an art form, which was done in black <strong>and</strong> white through<br />
most of its history, has turned increasingly to color.<br />
See also Color film; Instant photography; Xerography.<br />
Further Reading<br />
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams,<br />
1990.<br />
Glendinning, Peter. Color Photography: History, Theory, <strong>and</strong> Darkroom<br />
Technique. Englewood Cliffs, N.J.: Prentice-Hall, 1985.<br />
Lartigue, Jacques-Henri, <strong>and</strong> Georges Herscher. The Autochromes of<br />
J. H. Lartigue, 1912-1927. New York: Viking Press, 1981.<br />
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University<br />
of Chicago Press, 1982.<br />
Wood, John. The Art of the Autochrome: The Birth of Color Photography.<br />
Iowa City: University of Iowa Press, 1993.
BASIC programming language
The invention: An interactive computer system <strong>and</strong> simple programming<br />
language that made it easier for nontechnical people<br />
to use computers.<br />
The people behind the invention:<br />
John G. Kemeny (1926-1992), the chairman of Dartmouth’s<br />
mathematics department<br />
Thomas E. Kurtz (1928- ), the director of the Kiewit<br />
Computation Center at Dartmouth<br />
Bill Gates (1955- ), a cofounder and later chairman of the
board and chief executive officer of the Microsoft
Corporation
The Evolution of Programming<br />
The first digital computers were developed during World War II<br />
(1939-1945) to speed the complex calculations required for ballistics,<br />
cryptography, <strong>and</strong> other military applications. Computer technology<br />
developed rapidly, <strong>and</strong> the 1950’s <strong>and</strong> 1960’s saw computer systems<br />
installed throughout the world. These systems were very large<br />
<strong>and</strong> expensive, requiring many highly trained people for their operation.<br />
The calculations performed by the first computers were determined<br />
solely by their electrical circuits. In the 1940's, the American
mathematician John von Neumann and others pioneered the idea of
computers storing their instructions in a program, so that changes<br />
in calculations could be made without rewiring their circuits. The<br />
programs were written in machine language, long lists of zeros <strong>and</strong><br />
ones corresponding to on <strong>and</strong> off conditions of circuits. During the<br />
1950’s, “assemblers” were introduced that used short names for<br />
common sequences of instructions <strong>and</strong> were, in turn, transformed<br />
into the zeros <strong>and</strong> ones intelligible to the computer. The late 1950’s<br />
saw the introduction of high-level languages, notably Formula Translation<br />
(FORTRAN), Common Business Oriented Language (COBOL),<br />
<strong>and</strong> Algorithmic Language (ALGOL), which used English words to
communicate instructions to the computer. Unfortunately, these<br />
high-level languages were complicated; they required some knowledge<br />
of the computer equipment <strong>and</strong> were designed to be used by<br />
scientists, engineers, <strong>and</strong> other technical experts.<br />
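What an assembler contributed can be sketched in a few lines of modern Python: it translates short mnemonic names into the bit patterns a machine actually executes. The mnemonics and opcodes below are invented for illustration, not taken from any real instruction set.

```python
# Sketch of an assembler: map mnemonics to opcode bits and append the
# operand as a binary field. Everything here is illustrative.

OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011", "HALT": "1111"}

def assemble(lines):
    """Turn ["LOAD 3", ...] into ["0001 0011", ...] (4-bit operands)."""
    out = []
    for line in lines:
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        out.append(OPCODES[mnemonic] + " " + format(operand, "04b"))
    return out

print(assemble(["LOAD 3", "ADD 4", "HALT"]))
# ['0001 0011', '0010 0100', '1111 0000']
```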
Developing BASIC<br />
John G. Kemeny was chairman of the department of mathematics<br />
at Dartmouth College in Hanover, New Hampshire. In 1962,<br />
Thomas E. Kurtz, Dartmouth’s computing director, approached<br />
Kemeny with the idea of implementing a computer system at Dartmouth<br />
College. Both men were dedicated to the idea that liberal arts<br />
students should be able to make use of computers. Although the English<br />
comm<strong>and</strong>s of FORTRAN <strong>and</strong> ALGOL were a tremendous improvement<br />
over the cryptic instructions of assembly language, they<br />
were both too complicated for beginners. Kemeny convinced Kurtz<br />
that they needed a completely new language, simple enough for beginners<br />
to learn quickly, yet flexible enough for many different<br />
kinds of applications.<br />
The language they developed was known as the “Beginner’s All-purpose
Symbolic Instruction Code,” or BASIC. The original language
consisted of fourteen different statements. Each line of a<br />
BASIC program was preceded by a number. Line numbers were referenced
by control flow statements, such as “IF X=9 THEN GOTO
200.” Line numbers were also used as an editing reference. If line 30
of a program contained an error, the programmer could make the<br />
necessary correction merely by retyping line 30.<br />
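The two jobs a BASIC line number did, serving as a target for control flow and as a handle for editing, can be sketched in modern Python. The miniature statement set below is invented for illustration and is far simpler than Dartmouth's actual interpreter.

```python
# Sketch: a program is a dict keyed by line number, executed in
# ascending order, with GOTO-style jumps to a line number.

def run(program):
    """Execute {line_number: statement} tuples; return PRINT output."""
    env = {}
    output = []
    lines = sorted(program)
    i = 0
    while i < len(lines):
        stmt = program[lines[i]]
        op = stmt[0]
        if op == "LET":                  # LET var = value
            env[stmt[1]] = stmt[2]
        elif op == "ADD":                # var = var + amount
            env[stmt[1]] += stmt[2]
        elif op == "PRINT":
            output.append(env[stmt[1]])
        elif op == "IF_LT_GOTO":         # IF var < limit THEN GOTO target
            if env[stmt[1]] < stmt[2]:
                i = lines.index(stmt[3])
                continue
        elif op == "END":
            break
        i += 1
    return output

# Count 1..3, like "IF X<4 THEN GOTO 20" in BASIC.
prog = {
    10: ("LET", "X", 1),
    20: ("PRINT", "X"),
    30: ("ADD", "X", 1),
    40: ("IF_LT_GOTO", "X", 4, 20),
    50: ("END",),
}
print(run(prog))        # [1, 2, 3]

# "Retyping line 30" is just replacing one entry by its line number.
prog[30] = ("ADD", "X", 2)
print(run(prog))        # [1, 3]
```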
Programming in BASIC was first taught at Dartmouth in the fall<br />
of 1964. Students were ready to begin writing programs after two<br />
hours of classroom lectures. By June of 1968, more than 80 percent of<br />
the undergraduates at Dartmouth could write a BASIC program.<br />
Most of them were not science majors <strong>and</strong> used their programs in<br />
conjunction with other nontechnical courses.<br />
Kemeny <strong>and</strong> Kurtz, <strong>and</strong> later others under their supervision,<br />
wrote more powerful versions of BASIC that included support for<br />
graphics on video terminals <strong>and</strong> structured programming. The creators<br />
of BASIC, however, always tried to maintain their original design<br />
goal of keeping BASIC simple enough for beginners.
Consequences<br />
Kemeny and Kurtz encouraged the widespread adoption of
BASIC by allowing other institutions to use their computer system and
by placing BASIC in the public domain. Over time, they shaped
BASIC into a powerful language with numerous features added in response
to the needs of its users. What Kemeny <strong>and</strong> Kurtz had not<br />
foreseen was the advent of the microprocessor chip in the early<br />
1970’s, which revolutionized computer technology. By 1975, microcomputer
kits were being sold to hobbyists for well under a thousand
dollars. The earliest of these was the Altair.
That same year, prelaw student William H. Gates (1955- ) was<br />
persuaded by a friend, Paul Allen, to drop out of Harvard University<br />
<strong>and</strong> help create a version of BASIC that would run on the Altair.<br />
Gates <strong>and</strong> Allen formed a company, Microsoft Corporation, to sell<br />
their BASIC interpreter, which was designed to fit into the tiny<br />
memory of the Altair. It was about as simple as the original Dartmouth<br />
BASIC but had to depend heavily on the computer hardware.<br />
Most computers purchased for home use still include a version<br />
of Microsoft Corporation’s BASIC.<br />
See also BINAC computer; COBOL computer language;
FORTRAN programming language; SAINT; Supercomputer.
Further Reading<br />
Kemeny, John G., and Thomas E. Kurtz. True BASIC: The Structured
Language System for the Future. Reference Manual. West Lebanon,
N.H.: True BASIC, 1988.
Kurtz, Thomas E., and John G. Kemeny. BASIC. 5th ed. Hanover,
N.H., 1970.
Spencer, Donald D. Great Men <strong>and</strong> Women of Computing. 2d ed. Ormond<br />
Beach, Fla.: Camelot Publishing, 1999.
Bathyscaphe
The invention: A submersible vessel capable of exploring the<br />
deepest trenches of the world’s oceans.<br />
The people behind the invention:<br />
William Beebe (1877-1962), an American biologist <strong>and</strong> explorer<br />
Auguste Piccard (1884-1962), a Swiss-born Belgian physicist<br />
Jacques Piccard (1922- ), a Swiss ocean engineer<br />
Early Exploration of the Deep Sea<br />
The first human penetration of the deep ocean was made by William<br />
Beebe in 1934, when he descended 923 meters into the Atlantic<br />
Ocean near Bermuda. His diving chamber was a 1.5-meter steel ball<br />
that he named Bathysphere, from the Greek word bathys (deep) <strong>and</strong><br />
the word sphere, for its shape. He found that a sphere resists pressure<br />
in all directions equally <strong>and</strong> is not easily crushed if it is constructed<br />
of thick steel. The bathysphere weighed 2.5 metric tons. It<br />
had no buoyancy <strong>and</strong> was lowered from a surface ship on a single<br />
2.2-centimeter cable; a broken cable would have meant certain<br />
death for the bathysphere’s passengers.<br />
Numerous deep dives by Beebe <strong>and</strong> his engineer colleague, Otis<br />
Barton, were the first uses of submersibles for science. Through two<br />
small viewing ports, they were able to observe <strong>and</strong> photograph<br />
many deep-sea creatures in their natural habitats for the first time.<br />
They also made valuable observations on the behavior of light as<br />
the submersible descended, noting that the green surface water became<br />
pale blue at 100 meters, dark blue at 200 meters, <strong>and</strong> nearly<br />
black at 300 meters. A technique called “contour diving” was particularly<br />
dangerous. In this practice, the bathysphere was slowly<br />
towed close to the seafloor. On one such dive, the bathysphere narrowly<br />
missed crashing into a coral crag, but the explorers learned a<br />
great deal about the submarine geology of Bermuda <strong>and</strong> the biology<br />
of a coral-reef community. Beebe wrote several popular <strong>and</strong> scientific<br />
books about his adventures that did much to arouse interest in<br />
the ocean.<br />
95
96 / Bathyscaphe<br />
Testing the Bathyscaphe<br />
The next important phase in the exploration of the deep ocean<br />
was led by the Swiss physicist Auguste Piccard. In 1948, he launched<br />
a new type of deep-sea research craft that did not require a cable <strong>and</strong><br />
that could return to the surface by means of its own buoyancy. He<br />
called the craft a bathyscaphe, which is Greek for “deep boat.”<br />
Piccard began work on the bathyscaphe in 1937, supported by a<br />
grant from the Belgian National Scientific Research Fund. The German<br />
occupation of Belgium early in World War II cut the project<br />
short, but Piccard continued his work after the war. The finished<br />
bathyscaphe was named FNRS 2, for the initials of the Belgian fund<br />
that had sponsored the project. The vessel was ready for testing in<br />
the fall of 1948.<br />
The first bathyscaphe, as well as later versions, consisted of<br />
two basic components: first, a heavy steel cabin to accommodate<br />
observers, which looked somewhat like an enlarged version of<br />
Beebe’s bathysphere; <strong>and</strong> second, a light container called a float,<br />
filled with gasoline, that provided lifting power because it was<br />
lighter than water. Enough iron shot was stored in silos to cause<br />
the vessel to descend. When this ballast was released, the gasoline<br />
in the float gave the bathyscaphe sufficient buoyancy to return to<br />
the surface.<br />
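The buoyancy arithmetic works like a balance sheet: the seawater displaced by the float supports the craft, while the gasoline, the cabin, and the iron shot weigh it down. Every figure in the sketch below is an illustrative assumption, not a specification of the FNRS 2 or any real vessel.

```python
# Buoyancy balance sheet for a bathyscaphe-style craft.

RHO_SEAWATER = 1025.0   # kg/m^3
RHO_GASOLINE = 700.0    # kg/m^3; lighter than water, nearly incompressible

def net_lift_kg(float_volume_m3, structure_mass_kg, ballast_mass_kg):
    """Positive result: the craft rises; negative: it sinks."""
    support = float_volume_m3 * RHO_SEAWATER     # mass of seawater displaced
    gasoline = float_volume_m3 * RHO_GASOLINE    # mass of the float's fill
    return support - gasoline - structure_mass_kg - ballast_mass_kg

# Loaded with iron shot the craft descends; releasing the shot
# restores positive buoyancy and the craft returns to the surface.
print(net_lift_kg(100.0, 30000.0, 9000.0))   # -6500.0 (descends)
print(net_lift_kg(100.0, 30000.0, 0.0))      #  2500.0 (ascends)
```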
Piccard’s bathyscaphe had a number of ingenious devices. Jacques-<br />
Yves Cousteau, inventor of the Aqualung six years earlier, contributed<br />
a mechanical claw that was used to take samples of rocks, sediment,<br />
<strong>and</strong> bottom creatures. A seven-barreled harpoon gun, operated<br />
by water pressure, was attached to the sphere to capture<br />
specimens of giant squids or other large marine animals for study.<br />
The harpoons had electrical-shock heads to stun the “sea monsters,”<br />
<strong>and</strong> if that did not work, the harpoon could give a lethal injection of<br />
strychnine poison. Inside the sphere were various instruments for<br />
measuring the deep-sea environment, including a Geiger counter<br />
for monitoring cosmic rays. The air-purification system could support<br />
two people for up to twenty-four hours. The bathyscaphe had a<br />
radar mast to broadcast its location as soon as it surfaced. This was<br />
essential because there was no way for the crew to open the sphere<br />
from the inside.
Auguste Piccard<br />
Auguste Piccard used balloons to set records both above sea
level and below it. However, setting records
was not his purpose: He went where no one had gone before for<br />
the sake of science.<br />
Born in Basel, Switzerland, in 1884, Auguste and
his twin brother, Jean-Félix Piccard, studied in Zurich.<br />
After university in 1913, Auguste, a physicist,<br />
<strong>and</strong> Jean-Félix, a chemist, took up hot-air ballooning,<br />
<strong>and</strong> they joined the balloon section of the Swiss Army<br />
in 1915.<br />
Auguste moved to Brussels, Belgium, in 1922 to<br />
take a professorship of applied physics, <strong>and</strong> there he<br />
continued his ballooning. His subject of interest was<br />
cosmic rays, <strong>and</strong> in order to study them he had to get above the<br />
thick lower layer of atmosphere. Accordingly, he designed hydrogen-filled<br />
balloons that could reach high altitude. A ball-shaped,
pressurized gondola carried him, his instruments, and
one colleague to 51,775 feet altitude in 1931 <strong>and</strong> to 53,152 feet in<br />
1932. Both were records.<br />
Auguste, working with his son Jacques, then turned his attention<br />
to the sea. In order to explore the largely unknown world<br />
underwater, he built the bathyscaphe. It was really just another<br />
type of balloon, one which was made of steel <strong>and</strong> carried him<br />
inside. His dives with his son in various models of bathyscaphe<br />
set record after record. Their 1953 dive down 10,300 feet into the<br />
Mediterranean Sea was the deepest until Jacques, accompanied<br />
by a U.S. Navy officer, descended to the deepest spot on Earth<br />
seven years later.<br />
The FNRS 2 was first tested off the Cape Verde Islands with the
assistance of the French navy. Although Piccard descended to only<br />
25 meters, the dive demonstrated the potential of the bathyscaphe.<br />
On the second dive, the vessel was severely damaged by waves, <strong>and</strong><br />
further tests were suspended. A redesigned <strong>and</strong> rebuilt bathyscaphe,<br />
renamed FNRS 3 <strong>and</strong> operated by the French navy, descended to a<br />
depth of 4,049 meters off Dakar, Senegal, on the west coast of Africa<br />
in early 1954.<br />
In August, 1953, Auguste Piccard, with his son Jacques, launched a<br />
greatly improved bathyscaphe, the Trieste, which they named for the<br />
Italian city in which it was built. In September of the same year, the<br />
Trieste successfully dived to 3,150 meters in the Mediterranean Sea. The<br />
Piccards glimpsed, for the first time, animals living on the seafloor at<br />
that depth. In 1958, the U.S. Navy purchased the Trieste <strong>and</strong> transported<br />
it to California, where it was equipped with a new cabin designed<br />
to enable the vessel to reach the seabed of the great oceanic<br />
trenches. Several successful descents were made in the Pacific by<br />
Jacques Piccard, <strong>and</strong> on January 23, 1960, Piccard, accompanied by<br />
Lieutenant Donald Walsh of the U.S. Navy, dived a record 10,916 meters<br />
to the bottom of the Mariana Trench near the island of Guam.
Impact<br />
The oceans have always raised formidable barriers to humanity’s<br />
curiosity <strong>and</strong> underst<strong>and</strong>ing. In 1960, two events demonstrated the<br />
ability of humans to travel underwater for prolonged periods <strong>and</strong> to<br />
observe the extreme depths of the ocean. The nuclear submarine<br />
Triton circumnavigated the world while submerged, <strong>and</strong> Jacques<br />
Piccard <strong>and</strong> Lieutenant Donald Walsh descended nearly 11 kilometers<br />
to the bottom of the ocean’s greatest depression aboard the<br />
Trieste. After sinking for four hours <strong>and</strong> forty-eight minutes, the<br />
Trieste l<strong>and</strong>ed in the Challenger Deep of the Mariana Trench, the<br />
deepest known spot on the ocean floor. The explorers remained on<br />
the bottom for only twenty minutes, but they answered one of the<br />
biggest questions about the sea: Can animals live in the immense<br />
cold <strong>and</strong> pressure of the deep trenches? Observations of red shrimp<br />
<strong>and</strong> flatfishes proved that the answer was yes.<br />
The Trieste played another important role in undersea exploration<br />
when, in 1963, it located <strong>and</strong> photographed the wreckage of the<br />
nuclear submarine Thresher. The Thresher had mysteriously disappeared<br />
on a test dive off the New England coast, and the Navy had
been unable to find a trace of the lost submarine using surface vessels<br />
equipped with sonar <strong>and</strong> remote-control cameras on cables.<br />
Only the Trieste could actually search the bottom. On its third dive,<br />
the bathyscaphe found a piece of the wreckage, <strong>and</strong> it eventually<br />
photographed a 3,000-meter trail of debris that led to Thresher’s hull,
at a depth of 2.5 kilometers.
These exploits showed clearly that scientific submersibles could<br />
be used anywhere in the ocean. Piccard’s work thus opened the last<br />
geographic frontier on Earth.<br />
See also Aqualung; Bathysphere; Sonar; Ultrasound.<br />
Further Reading<br />
Ballard, Robert D., <strong>and</strong> Will Hively. The Eternal Darkness: A Personal<br />
History of Deep-Sea Exploration. Princeton, N.J.: Princeton University<br />
Press, 2000.<br />
Piccard, Jacques, <strong>and</strong> Robert S. Dietz. Seven Miles Down: The Story of<br />
the Bathyscaphe Trieste. New York: Longmans, 1962.<br />
Welker, Robert Henry. Natural Man: The Life of William Beebe. Bloomington:<br />
Indiana University Press, 1975.
Bathysphere
The invention: The first successful chamber for manned deep-sea<br />
diving missions.<br />
The people behind the invention:<br />
William Beebe (1877-1962), an American naturalist <strong>and</strong> curator<br />
of ornithology<br />
Otis Barton (1899- ), an American engineer<br />
John Tee-Van (1897-1967), an American general associate with<br />
the New York Zoological Society<br />
Gloria Hollister Anable (1903?-1988), an American research<br />
associate with the New York Zoological Society<br />
Inner Space<br />
Until the 1930’s, the vast depths of the oceans had remained<br />
largely unexplored, although people did know something of the<br />
ocean’s depths. Soundings <strong>and</strong> nettings of the ocean bottom had<br />
been made many times by a number of expeditions since the 1870’s.<br />
Diving helmets had allowed humans to descend more than 91 meters<br />
below the surface, <strong>and</strong> the submarine allowed them to reach a<br />
depth of nearly 120 meters. There was no firsth<strong>and</strong> knowledge,<br />
however, of what it was like in the deepest reaches of the ocean: inner<br />
space.<br />
The person who gave the world the first account of life at great<br />
depths was William Beebe. When he announced in 1926 that he was<br />
attempting to build a craft to explore the ocean, he was already a<br />
well-known naturalist. Although his only degrees had been honorary<br />
doctorates, he was graduated as a special student in the Department<br />
of Zoology of Columbia University in 1898. He began his lifelong<br />
association with the New York Zoological Society in 1899.<br />
It was during a trip to the Galápagos Islands off the west coast of
South America that Beebe turned his attention to oceanography. He<br />
became the first scientist to use a diving helmet in fieldwork, swimming<br />
in the shallow waters. He continued this shallow-water work<br />
at the new station he established in 1928, with the permission of
English authorities, on the tiny island of Nonesuch in the Bermudas.
Beebe realized, however, that he had reached the limits of the current<br />
technology <strong>and</strong> that to study the animal life of the ocean depths<br />
would require a new approach.<br />
A New Approach<br />
While he was considering various cylindrical designs for a new<br />
deep-sea exploratory craft, Beebe was introduced to Otis Barton.<br />
Barton, a young New Englander who had been trained as an engineer
at Harvard University, had turned to the problems of ocean<br />
diving while doing postgraduate work at Columbia University. In<br />
December, 1928, Barton brought his blueprints to Beebe. Beebe immediately<br />
saw that Barton’s design was what he was looking for,<br />
<strong>and</strong> the two went ahead with the construction of Barton’s craft.<br />
The “bathysphere,” as Beebe named the device, weighed 2,268<br />
kilograms <strong>and</strong> had a diameter of 1.45 meters <strong>and</strong> steel walls 3.8 centimeters<br />
thick. The door, weighing 180 kilograms, would be fastened<br />
over a manhole with ten bolts. Four windows, made of fused<br />
quartz, were ordered from the General Electric Company at a cost of<br />
$500 each. A 250-watt water spotlight lent by the Westinghouse<br />
Company provided the exterior illumination, <strong>and</strong> a telephone lent<br />
by the Bell Telephone Laboratory provided a means of communicating<br />
with the surface. The breathing apparatus consisted of two oxygen<br />
tanks that allowed 2 liters of oxygen per minute to escape into<br />
the sphere. During the dive, the carbon dioxide <strong>and</strong> moisture were<br />
removed, respectively, by trays containing soda lime <strong>and</strong> calcium<br />
chloride. A winch would lower the bathysphere on a steel cable.<br />
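The pressure such a sphere had to resist follows from the hydrostatic relation P = ρgh. A rough check for the eventual record depth of 923 meters, using standard approximate values for seawater density and gravity (assumed here, not quoted from Beebe's records):

```python
# Hydrostatic pressure at depth: P = rho * g * h.

RHO = 1025.0    # seawater density, kg/m^3 (approximate)
G = 9.81        # gravitational acceleration, m/s^2

def pressure_pa(depth_m):
    """Pressure in pascals at the given depth of seawater."""
    return RHO * G * depth_m

p = pressure_pa(923.0)
print(round(p / 1e6, 1), "MPa")   # roughly 9.3 MPa
print(round(p / 101325))          # about 92 atmospheres
```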
In early July, 1930, after several test dives, the first manned dive<br />
commenced. Beebe <strong>and</strong> Barton descended to a depth of 244 meters.<br />
A short circuit in one of the switches showered them with sparks<br />
momentarily, but the descent was largely a success. Beebe <strong>and</strong><br />
Barton had descended farther than any human.<br />
Two more days of diving yielded a final dive record of 435 meters<br />
below sea level. Beebe <strong>and</strong> the other members of his staff (ichthyologist<br />
John Tee-Van <strong>and</strong> zoologist Gloria Hollister Anable) saw many<br />
species of fish <strong>and</strong> other marine life that previously had been seen<br />
only after being caught in nets. These first dives proved that an
undersea exploratory craft had potential value, at least for deep water.
After 1932, the bathysphere went on display at the Century of Progress<br />
Exhibition in Chicago.<br />
In late 1933, the National Geographic Society offered to sponsor<br />
another series of dives. Although a new record was not a stipulation,<br />
Beebe was determined to supply one. The bathysphere was<br />
completely refitted before the new dives.<br />
An unmanned test dive to 920 meters was made on August 7,<br />
1934, once again off Nonesuch Island. Minor adjustments were
made, <strong>and</strong> on the morning of August 11, the first dive commenced,<br />
attaining a depth of 765 meters <strong>and</strong> recording a number of new scientific<br />
observations. Several days later, on August 15, the weather<br />
was again right for the dive.<br />
This dive also paid rich dividends in the number of species of<br />
deep-sea life observed. Finally, with only a few turns of cable left on<br />
the winch spool, the bathysphere reached a record depth of 923 meters—almost<br />
a kilometer below the ocean’s surface.<br />
Impact<br />
Barton continued to work on the bathysphere design for some<br />
years. It was not until 1948, however, that his new design, the<br />
benthoscope, was finally constructed. It was similar in basic design<br />
to the bathysphere, though the walls were increased to withst<strong>and</strong><br />
greater pressures. Other improvements were made, but the essential<br />
strengths <strong>and</strong> weaknesses remained. On August 16, 1949, Barton,<br />
diving alone, broke the record he <strong>and</strong> Beebe had set earlier,<br />
reaching a depth of 1,372 meters off the coast of Southern California.<br />
The bathysphere effectively marked the end of the tethered exploration<br />
of the deep, but it pointed the way to other possibilities.<br />
The first advance in this area came in 1943, when undersea explorer<br />
Jacques-Yves Cousteau <strong>and</strong> engineer Émile Gagnan developed the<br />
Aqualung underwater breathing apparatus, which made possible<br />
unfettered <strong>and</strong> largely unencumbered exploration down to about<br />
60 meters. This was by no means deep diving, but it was clearly a<br />
step along the lines that Beebe had envisioned for underwater research.<br />
A further step came in the development of the bathyscaphe by
Auguste Piccard, the renowned Swiss physicist, who, in the 1930’s,<br />
had conquered the stratosphere in high-altitude balloons. The bathyscaphe<br />
was a balloon that operated in reverse. A spherical steel passenger<br />
cabin was attached beneath a large float filled with gasoline<br />
for buoyancy. Several tons of iron pellets held by electromagnets<br />
acted as ballast. The bathyscaphe would sink slowly to the bottom<br />
of the ocean, <strong>and</strong> when its passengers wished to return, the ballast<br />
would be dumped. The craft would then slowly rise to the surface.<br />
On September 30, 1953, Piccard touched bottom off the coast of Italy,<br />
some 3,000 meters below sea level.<br />
See also Aqualung; Bathyscaphe; Sonar; Ultrasound.<br />
Further Reading<br />
Ballard, Robert D., <strong>and</strong> Will Hively. The Eternal Darkness: A Personal<br />
History of Deep-Sea Exploration. Princeton, N.J.: Princeton University<br />
Press, 2000.<br />
Forman, Will. The History of American Deep Submersible Operations,<br />
1775-1995. Flagstaff, Ariz.: Best, 1999.<br />
Welker, Robert Henry. Natural Man: The Life of William Beebe. Bloomington:<br />
Indiana University Press, 1975.
BINAC computer
The invention: The first stored-program electronic digital computer
built in the United States.
The people behind the invention:<br />
John Presper Eckert (1919-1995), an American electrical engineer<br />
John W. Mauchly (1907-1980), an American physicist<br />
John von Neumann (1903-1957), a Hungarian American<br />
mathematician<br />
Alan Mathison Turing (1912-1954), an English mathematician<br />
Computer Evolution<br />
In the 1820’s, there was a need for error-free mathematical <strong>and</strong><br />
astronomical tables for use in navigation, unreliable versions of<br />
which were being produced by human “computers.” The problem<br />
moved English mathematician <strong>and</strong> inventor Charles Babbage to design<br />
<strong>and</strong> partially construct some of the earliest prototypes of modern<br />
computers, with substantial but inadequate funding from the<br />
British government. In the 1880’s, the search by the U.S. Bureau of<br />
the Census for a more efficient method of compiling the 1890 census<br />
led American inventor Herman Hollerith to devise a punched-card<br />
calculator, a machine that reduced by several years the time required<br />
to process the data.<br />
The emergence of modern electronic computers began during World War II (1939-1945), when there was an urgent need in the American military for reliable and quickly produced mathematical tables that could be used to aim various types of artillery. The calculation of very complex tables had progressed somewhat since Babbage’s day, and the human computers were being assisted by mechanical calculators. Still, the growing demand for increased accuracy and efficiency was pushing the limits of these machines. Finally, in 1946, following three years of intense work at the University of Pennsylvania’s Moore School of Engineering, John Presper Eckert and John W. Mauchly presented their solution to the problems in the form of the Electronic Numerical Integrator and Calculator (ENIAC), the world’s first electronic general-purpose digital computer.
The ENIAC, built under a contract with the Army’s Ballistic Research Laboratory, became a great success for Eckert and Mauchly, but even before it was completed, they were setting their sights on loftier targets. The primary drawback of the ENIAC was the great difficulty involved in programming it. Whenever the operators needed to instruct the machine to shift from one type of calculation to another, they had to reset a vast array of dials and switches, unplug and replug numerous cables, and make various other adjustments to the multiple pieces of hardware involved. Such a mode of operation was deemed acceptable for the ENIAC because, in computing firing tables, it would need reprogramming only occasionally. Yet if instructions could be stored in a machine’s memory, along with the data, such a machine would be able to handle a wide range of calculations with ease and efficiency.
The Turing Concept
The idea of a stored-program computer had first appeared in a paper published by English mathematician Alan Mathison Turing in 1937. In this paper, Turing described a hypothetical machine of quite simple design that could be used to solve a wide range of logical and mathematical problems. One significant aspect of this imaginary Turing machine was that the tape that would run through it would contain both information to be processed and instructions on how to process it. The tape would thus be a type of memory device, storing both the data and the program as sets of symbols that the machine could “read” and understand. Turing never attempted to construct this machine, and it was not until 1946 that he developed a design for an electronic stored-program computer, a prototype of which was built in 1950.
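The tape-as-memory idea can be made concrete with a toy simulation. The sketch below is a modern illustration of a Turing-style machine, not Turing’s 1937 formulation; the rule table, which simply flips every bit on the tape, is invented for the example:

```python
# A minimal Turing-style machine: a finite rule table plus a tape that
# serves as the machine's entire memory. The rules here flip every bit
# on the tape and then halt -- a toy program, invented for illustration.

def run_turing_machine(tape, rules, state="start"):
    """Execute rules of the form (state, symbol) -> (new_symbol, move, new_state)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"  # "_" marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# The rule table: in state "start", flip the bit and move right; halt on blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", flip_rules))  # prints 1001
```

The rule table plays the role of the stored program: changing the rules, not the hardware, changes what the machine computes, which is the essence of the stored-program concept.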
In the meantime, John von Neumann, a Hungarian American mathematician acquainted with Turing’s ideas, joined Eckert and Mauchly in 1944 and contributed to the design of ENIAC’s successor, the Electronic Discrete Variable Automatic Computer (EDVAC), another project financed by the Army. The EDVAC was the first computer designed to incorporate the concept of the stored program.
In March of 1946, Eckert and Mauchly, frustrated by a controversy over patent rights for the ENIAC, resigned from the Moore School. Several months later, they formed the Philadelphia-based Electronic Control Company on the strength of a contract from the National Bureau of Standards and the Census Bureau to build a much grander computer, the Universal Automatic Computer (UNIVAC). They thus abandoned the EDVAC project, which was finally completed by the Moore School in 1952, but they incorporated the main features of the EDVAC into the design of the UNIVAC.
Building the UNIVAC, however, proved to be much more involved and expensive than anticipated, and the funds provided by the original contract were inadequate. Eckert and Mauchly, therefore, took on several other smaller projects in an effort to raise funds. On October 9, 1947, they signed a contract with the Northrop Corporation of Hawthorne, California, to produce a relatively small computer to be used in the guidance system of a top-secret missile called the Snark, which Northrop was building for the Air Force. This computer, the Binary Automatic Computer (BINAC), turned out to be Eckert and Mauchly’s first commercial sale and the first stored-program computer completed in the United States.
The BINAC was designed to be at least a preliminary version of a compact, airborne computer. It had two main processing units. These contained a total of fourteen hundred vacuum tubes, a drastic reduction from the eighteen thousand used in the ENIAC. There were also two memory units, as well as two power supplies, an input converter unit, and an input console, which used either a typewriter keyboard or an encoded magnetic tape (the first time such tape was used for computer input). Because of its dual processing, memory, and power units, the BINAC was actually two computers, each of which would continually check its results against those of the other in an effort to identify errors.
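The duplicate-and-compare scheme described above can be sketched in software. This is an illustrative modern analogue and assumes nothing about BINAC’s actual circuitry; in the real machine the two vacuum-tube units performed the comparison continuously in hardware:

```python
# Sketch of BINAC-style duplicate computation: two independent units run
# the same calculation and their results are compared at each step.
# Any disagreement signals a hardware error. The "units" here are plain
# Python functions standing in for the two processors.

def run_with_cross_check(unit_a, unit_b, inputs):
    results = []
    for x in inputs:
        a, b = unit_a(x), unit_b(x)
        if a != b:                       # the units disagree: flag the error
            raise RuntimeError(f"units disagree on input {x}: {a} != {b}")
        results.append(a)                # agreement: accept the result
    return results

square = lambda x: x * x
print(run_with_cross_check(square, square, [1, 2, 3]))  # prints [1, 4, 9]

# A faulty second unit is caught immediately:
faulty = lambda x: x * x + (1 if x == 2 else 0)
try:
    run_with_cross_check(square, faulty, [1, 2, 3])
except RuntimeError as err:
    print(err)                           # units disagree on input 2
```

The design trades hardware (everything is built twice) for confidence in the answer, the same trade-off made by later fault-tolerant and lockstep systems.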
The BINAC became operational in August, 1949. Public demonstrations of the computer were held in Philadelphia from August 18 through August 20.
Impact

The design embodied in the BINAC is the real source of its significance. It successfully demonstrated the benefits of the dual-processor design for minimizing errors, a feature adopted in many subsequent computers. It showed the suitability of magnetic tape as an input-output medium. Its most important new feature was its ability to store programs in its relatively spacious memory, the principle that Eckert, Mauchly, and von Neumann had originally designed into the EDVAC. In this respect, the BINAC was a direct descendant of the EDVAC.
In addition, the stored-program principle gave electronic computers new powers, quickness, and automatic control that, as they have continued to grow, have contributed immensely to the aura of intelligence often associated with their operation.
The BINAC successfully demonstrated some of these impressive new powers in August of 1949 to eager observers from a number of major American corporations. It helped to convince many influential leaders of the commercial segment of society of the promise of electronic computers. In doing so, the BINAC helped to ensure the further evolution of computers.
See also Apple II computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.
Further Reading
Macrae, Norman. John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More. New York: Pantheon Books, 1992.
Spencer, Donald D. Great Men and Women of Computing. 2d ed. Ormond Beach, Fla.: Camelot Publishing, 1999.
Zientara, Marguerite. The History of Computing: A Biographical Portrait of the Visionaries Who Shaped the Destiny of the Computer Industry. Framingham, Mass.: CW Communications, 1981.
Birth control pill
The invention: An orally administered drug that inhibits ovulation in women, thereby greatly reducing the chance of pregnancy.

The people behind the invention:
Gregory Pincus (1903-1967), an American biologist
Min-Chueh Chang (1908-1991), a Chinese-born reproductive biologist
John Rock (1890-1984), an American gynecologist
Celso-Ramon Garcia (1921- ), a physician
Edris Rice-Wray (1904- ), a physician
Katherine Dexter McCormick (1875-1967), an American millionaire
Margaret Sanger (1879-1966), an American activist
An Ardent Crusader

Margaret Sanger was an ardent crusader for birth control and family planning. Having decided that a foolproof contraceptive was necessary, Sanger met with her friend, the wealthy socialite Katherine Dexter McCormick. A 1904 graduate in biology from the Massachusetts Institute of Technology, McCormick had the knowledge and the vision to invest in biological research. Sanger arranged a meeting between McCormick and Gregory Pincus, head of the Worcester Foundation for Experimental Biology. After listening to Sanger’s pleas for an effective contraceptive and McCormick’s offer of financial backing, Pincus agreed to focus his energies on finding a pill that would prevent pregnancy.
Pincus organized a team to conduct research on both laboratory animals and humans. The laboratory studies were conducted under the direction of Min-Chueh Chang, a Chinese-born scientist who had been studying sperm biology, artificial insemination, and in vitro fertilization. The goal of his research was to see whether pregnancy might be prevented by manipulation of the hormones usually found in a woman.
It was already known that there was one time when a woman could not become pregnant—when she was already pregnant. In 1921, Ludwig Haberlandt, an Austrian physiologist, had transplanted the ovaries from a pregnant rabbit into a nonpregnant one. The latter failed to produce ripe eggs, showing that some substance from the ovaries of a pregnant female prevents ovulation. This substance was later identified as the hormone progesterone by George W. Corner, Jr., and Willard M. Allen in 1928.
If progesterone could inhibit ovulation during pregnancy, maybe progesterone treatment could prevent ovulation in nonpregnant females as well. In 1937, this was shown to be the case by scientists from the University of Pennsylvania, who prevented ovulation in rabbits with injections of progesterone. It was not until 1951, however, when Carl Djerassi and other chemists devised inexpensive ways of producing progesterone in the laboratory, that serious consideration was given to the medical use of progesterone. The synthetic version of progesterone was called “progestin.”
Testing the Pill
In the laboratory, Chang tried more than two hundred different progesterone and progestin compounds, searching for one that would inhibit ovulation in rabbits and rats. Finally, two compounds were chosen: progestins derived from the root of a wild Mexican yam. Pincus arranged for clinical tests to be carried out by Celso-Ramon Garcia, a physician, and John Rock, a gynecologist.
Rock had already been conducting experiments with progesterone as a treatment for infertility. The treatment was effective in some women but required that large doses of expensive progesterone be injected daily. Rock was hopeful that the synthetic progestin that Chang had found effective in animals would be helpful in infertile women as well. With Garcia and Pincus, Rock treated another group of fifty infertile women with the synthetic progestin. After treatment ended, seven of these previously infertile women became pregnant within half a year. Garcia, Pincus, and Rock also took several physiological measurements of the women while they were taking the progestin and were able to conclude that ovulation did not occur while the women were taking the progestin pill.
Margaret Sanger

Margaret Louise Higgins saw her mother die at the age of only fifty. The cause was tuberculosis, but Margaret, the sixth of eleven children, was convinced her mother’s string of pregnancies was what killed her. Her crusade to liberate women from the burden of unwanted, dangerous pregnancies lasted the rest of her life.

Born in Corning, New York, in 1879, she went to Claverack College and Hudson River Institute and joined a nursing program at White Plains Hospital, graduating in 1900. Two years later she married William Sanger, an architect and painter. They moved into New York City in 1910 and became part of Greenwich Village’s community of left-wing intellectuals, artists, and activists, such as John Reed, Upton Sinclair, and Emma Goldman. She used her free time to support liberal reform causes, participating in labor actions of the Industrial Workers of the World. Working as a visiting nurse, she witnessed the health problems among poor women caused by poor hygiene and frequent pregnancies.

In 1912 she began a newspaper column, “What Every Girl Should Know,” about reproductive health and education. The authorities tried to suppress some of the columns as obscene—for instance, one explaining venereal disease—but Sanger was undaunted. In 1914, she launched The Woman Rebel, a magazine promoting women’s liberation and birth control. From then on, although threatened with legal action and jail, she vigorously fought the political battles for birth control. She published books, lectured, took part in demonstrations, opened a birth control clinic in Brooklyn (the nation’s first), started the Birth Control Federation of America (later renamed Planned Parenthood Federation of America), and traveled overseas to promote birth control in order to improve the standard of living in Third World countries and to curb population growth.

Sanger was not an inventor, but she contributed ideas to the invention of various birth control devices and in the 1950’s found the money needed for the research and development of oral contraceptives at the Worcester Foundation for Experimental Biology, which produced the first birth control pill. She died in Tucson, Arizona, in 1966.
Having shown that the hormone could effectively prevent ovulation in both animals and humans, the investigators turned their attention back to birth control. They were faced with several problems: whether side effects might occur in women using progestins for a long time, and whether women would remember to take the pill day after day, for months or even years. To solve these problems, the birth control pill was tested on a large scale. Because of legal problems in the United States, Pincus decided to conduct the test in Puerto Rico.

The test started in April of 1956. Edris Rice-Wray, a physician, was responsible for the day-to-day management of the project. As director of the Puerto Rico Family Planning Association, she had seen firsthand the need for a cheap, reliable contraceptive. The women she recruited for the study were married women from a low-income population living in a housing development in Río Piedras, a suburb of San Juan. Word spread quickly, and soon women were volunteering to take the pill that would prevent pregnancy. In the first study, 221 women took a pill containing 10 milligrams of progestin and 0.15 milligrams of estrogen. (The estrogen was added to help control breakthrough bleeding.)
Results of the test were reported in 1957. Overall, the pill proved highly effective in preventing conception. None of the women who took the pill according to directions became pregnant, and most women who wanted to get pregnant after stopping the pill had no difficulty. Nevertheless, 17 percent of the women had some unpleasant reactions, such as nausea or dizziness. The scientists believed that these mild side effects, as well as one death from congestive heart failure, were unrelated to the use of the pill.
Even before the final results were announced, additional field tests were begun. In 1960, the U.S. Food and Drug Administration (FDA) approved the use of the pill developed by Pincus and his collaborators as an oral contraceptive.
Consequences
Within two years of approval by the FDA, more than a million women in the United States were using the birth control pill. New contraceptives were developed in the 1960’s and 1970’s, but the birth control pill remains the most widely used method of preventing pregnancy. More than 60 million women use the pill worldwide.

The greatest impact of the pill has been in the social and political world. Before Sanger began the push for the pill, birth control was often regarded as socially immoral and was often illegal as well. Women in those post-World War II years were expected to have a lifelong career as a mother to their many children.

With the advent of the pill, a radical change occurred in society’s attitude toward women’s work. Women had increased freedom to work and enter careers previously closed to them because of fears that they might get pregnant. Women could control more precisely when they would get pregnant and how many children they would have. The women’s movement of the 1960’s—with its change to more liberal social and sexual values—gained much of its strength from the success of the birth control pill.
See also Abortion pill; Amniocentesis; Artificial hormone; Genetically engineered insulin; Mammography; Syphilis test; Ultrasound.
Further Reading
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Tone, Andrea. Devices and Desires: A History of Contraceptives in America. New York: Hill and Wang, 2001.
Watkins, Elizabeth Siegel. On the Pill: A Social History of Oral Contraceptives, 1950-1970. Baltimore: Johns Hopkins University Press, 1998.
Blood transfusion
The invention: A technique that greatly enhanced surgery patients’ chances of survival by replenishing the blood they lose in surgery with a fresh supply.

The people behind the invention:
Charles Drew (1904-1950), American pioneer in blood transfusion techniques
George Washington Crile (1864-1943), an American surgeon, author, and brigadier general in the U.S. Army Medical Officers’ Reserve Corps
Alexis Carrel (1873-1944), a French surgeon
Samuel Jason Mixter (1855-1923), an American surgeon
Nourishing Blood Transfusions
It is impossible to say when and where the idea of blood transfusion first originated, although descriptions of this procedure are found in ancient Egyptian and Greek writings. The earliest documented case of a blood transfusion is that of Pope Innocent VIII. In April, 1492, the pope, who was gravely ill, was transfused with the blood of three young boys. As a result, all three boys died without bringing any relief to the pope.
In the centuries that followed, there were occasional descriptions of blood transfusions, but it was not until the middle of the seventeenth century that the technique gained popularity, following the English physician and anatomist William Harvey’s discovery of the circulation of the blood in 1628. In the medical thought of those times, blood transfusion was considered to have a nourishing effect on the recipient. In many of those experiments, the human recipient received animal blood, usually from a lamb or a calf. Blood transfusion was tried as a cure for many different diseases, mainly those that caused hemorrhages, as well as for other medical problems and even for marital problems.
Blood transfusions were a dangerous procedure, causing many deaths of both donor and recipient as a result of excessive blood loss, infection, passage of blood clots into the circulatory systems of the recipients, passage of air into the blood vessels (air embolism), and transfusion reaction as a result of incompatible blood types. In the mid-nineteenth century, blood transfusions from animals to humans stopped after it was discovered that the serum of one species agglutinates and dissolves the blood cells of other species. A sharp drop in the use of blood transfusion came with the introduction of physiologic salt solution in 1875. Infusion of salt solution was simple and was safer than blood transfusion.
Direct-Connection Blood Transfusions

In 1898, when George Washington Crile began his work on blood transfusions, the major obstacle he faced was solving the problem of blood clotting during transfusions. He realized that salt solutions were not helpful in severe cases of blood loss, when there is a need to restore the patient to consciousness, steady the heart action, and raise the blood pressure. At that time, he was experimenting with indirect blood transfusions by drawing the blood of the donor into a vessel, then transferring it into the recipient’s vein by tube, funnel, and cannula, the same technique used in the infusion of saline solution.
The solution to the problem of blood clotting came in 1902, when Alexis Carrel developed the technique of surgically joining blood vessels without exposing the blood to air or germs, either of which can lead to clotting. Crile learned this technique from Carrel and used it to join the peripheral artery in the donor to a peripheral vein of the recipient. Since the transfused blood remained sealed in the inner lining of the vessels, blood clotting did not occur.
The first human blood transfusion of this type was performed by Crile in December, 1905. The patient, a thirty-five-year-old woman, was transfused by her husband but died a few hours after the procedure. The second, but first successful, transfusion was performed on August 8, 1906. The patient, a twenty-three-year-old male, suffered from severe hemorrhaging following surgery to remove kidney stones. After all attempts to stop the bleeding were exhausted with no results, and the patient was dangerously weak, transfusion was
considered as a last resort. One of the patient’s brothers was the donor. Following the transfusion, the patient showed remarkable recovery and was strong enough to withstand surgery to remove the kidney and stop the bleeding. When his condition deteriorated a few days later, another transfusion was done. This time, too, he showed remarkable improvement, which continued until his complete recovery.

Charles Drew

While he was still in medical school, Charles Richard Drew saw a man’s life saved with a blood transfusion. He also saw patients die because suitable donors could not be found. Impressed by both the life-saving power of transfusions and the dire need for more of them, Drew devoted his career to improving the nation’s blood supply. His inventions saved untold thousands of lives, especially during World War II, before artificial blood was developed.

Born in 1904 in Washington, D.C., Drew was a star athlete in high school, at Amherst College (from which he graduated in 1926), and even in medical school at McGill University in Montreal from 1928 to 1933. He returned to the U.S. capital to become a resident in Freedmen’s Hospital of Howard University. While there he invented a method for separating plasma from whole blood and discovered that it was not necessary to recombine the plasma and red blood cells for transfusion. Plasma alone was sufficient, and by drying or freezing it, the plasma remained fresh enough over long periods to act as an emergency reserve. In 1938 Drew took a fellowship in blood research at Columbia Presbyterian Hospital in New York City. Employing his plasma preservation methods, he opened the first blood bank and wrote a dissertation on his techniques. He became the first African American to earn a Doctor of Science degree from Columbia University in 1940.

He organized another blood bank, this one in Great Britain, and in 1941 was appointed director of the American Red Cross blood donor project. However, Drew learned to his disgust that the Red Cross and U.S. government would not allow blood from African Americans and Caucasians to be mixed in the blood bank. There was no scientific reason for such segregation. Bias prevailed. Drew angrily denounced the policy at a press conference and resigned from the Red Cross.

He went back to Howard University as head of surgery and, later, director of Freedmen’s Hospital. Drew died in 1950 following an automobile accident.
For his first transfusions, Crile used the Carrel suture method, which required using very fine needles and thread. It was a very delicate and time-consuming procedure. At the suggestion of Samuel Jason Mixter, Crile developed a new method using a short tubal device with an attached handle to connect the blood vessels. By this method, 3 or 4 centimeters of the vessels to be connected were surgically exposed, clamped, and cut, just as under the previous method. Yet, instead of suturing the blood vessels, the recipient’s vein was passed through the tube and then cuffed back over the tube and tied to it. Then the donor’s artery was slipped over the cuff. The clamps were opened, and blood was allowed to flow from the donor to the recipient. In order to accommodate different-sized blood vessels, tubes of four different sizes were made, ranging in diameter from 1.5 to 3 millimeters.
Impact

Crile’s method was the preferred method of blood transfusion for a number of years. Following the publication of his book on transfusion, a number of modifications to the original method were published in medical journals. In 1913, Edward Lindeman developed a method of transfusing blood simply by inserting a needle through the patient’s skin and into a surface vein, making it for the first time a nonsurgical method. This method allowed one to measure the exact quantity of blood transfused. It also allowed the donor to serve in multiple transfusions. This development opened the field of transfusions to all physicians. Lindeman’s needle and syringe method also eliminated another major drawback of direct blood transfusion: the need to have both donor and recipient right next to each other.
See also Coronary artery bypass surgery; Electrocardiogram; Electroencephalogram; Heart-lung machine.
Further Reading
English, Peter C. Shock, Physiological Surgery, and George Washington Crile: Medical Innovation in the Progressive Era. Westport, Conn.: Greenwood Press, 1980.
Le Vay, David, and Roy Porter. Alexis Carrel: The Perfectibility of Man. Rockville, Md.: Kabel Publishers, 1996.
Malinin, Theodore I. Surgery and Life: The Extraordinary Career of Alexis Carrel. New York: Harcourt Brace Jovanovich, 1979.
May, Angelo M., and Alice G. May. The Two Lions of Lyons: The Tale of Two Surgeons, Alexis Carrel and René Leriche. Rockville, Md.: Kabel Publishers, 1992.
Breeder reactor
The invention: A plant that generates electricity from nuclear fission while creating new fuel.

The person behind the invention:
Walter Henry Zinn (1906-2000), the first director of the Argonne National Laboratory
Producing Electricity with More Fuel

The discovery of nuclear fission involved both the discovery that the nucleus of a uranium atom would split into two lighter elements when struck by a neutron and the observation that additional neutrons, along with a significant amount of energy, were released at the same time. These neutrons might strike other atoms and cause them to fission (split) also. That, in turn, would release more energy and more neutrons, triggering a chain reaction as the process continued to repeat itself, yielding a continuing supply of heat.
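The multiplication described above can be illustrated numerically. In reactor physics, each generation of fissions is scaled by an effective multiplication factor, conventionally written k; the sketch below uses invented numbers purely for illustration, not reactor data:

```python
# Toy model of a fission chain reaction: if each fission leads, on
# average, to k further fissions, the fission count per generation is
# multiplied by k. k > 1 grows (supercritical), k = 1 is self-sustaining
# (critical), and k < 1 dies out (subcritical). Numbers are illustrative.

def chain_reaction(k, generations, start=1.0):
    """Return the expected number of fissions in each generation."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(chain_reaction(2.0, 5))   # prints [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(chain_reaction(0.5, 5))   # dies out toward zero
```

With k above 1 the count grows geometrically, which is the runaway behavior exploited in a bomb; holding k at exactly 1 yields the steady, self-sustaining reaction a power reactor maintains.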
Besides the possibility that an explosive weapon could be constructed, early speculation about nuclear fission included its use in the generation of electricity. The occurrence of World War II (1939-1945) meant that the explosive weapon would be developed first. Both the weapons technology and the basic physics for the electrical reactor had their beginnings in Chicago with the world’s first nuclear chain reaction. The first self-sustaining nuclear chain reaction occurred in a laboratory at the University of Chicago on December 2, 1942.
It also became apparent at that time that there was more than one way to build a bomb. At this point, two paths were taken: One was to build an atomic bomb with enough fissionable uranium in it to explode when detonated, and another was to generate fissionable plutonium and build a bomb. Energy was released in both methods, but the second method also produced another fissionable substance. The observation that plutonium and energy could be produced together meant that it would be possible to design electric power systems that would produce fissionable plutonium in quantities as large as, or larger than, the amount of fissionable material consumed. This is the breeder concept, the idea that while using up fissionable uranium-235, another fissionable element can be made. The full development of this concept for electric power was delayed until the end of World War II.
Electricity from Atomic Energy<br />
On August 1, 1946, the Atomic Energy Commission (AEC) was<br />
established to control the development of nuclear energy and to<br />
explore its peaceful uses. The Argonne National Laboratory was assigned<br />
the major responsibilities for pioneering breeder reactor<br />
technologies. Walter Henry Zinn was the laboratory’s first director.<br />
He led a team that planned a modest facility (Experimental Breeder<br />
Reactor I, or EBR-I) for testing the validity of the breeding principle.<br />
Planning for this had begun in late 1944 and grew as a natural extension<br />
of the physics behind the plutonium atomic bomb.<br />
The conceptual design details for a breeder-electric reactor were<br />
reasonably complete by late 1945. On March 1, 1949, the AEC announced<br />
the selection of a site in Idaho for the National Reactor Station<br />
(later to be named the Idaho National Engineering Laboratory,<br />
or INEL). Construction at the INEL site in Arco, Idaho, began in October,<br />
1949. Critical mass was reached in August, 1951. (“Critical<br />
mass” is the amount <strong>and</strong> concentration of fissionable material required<br />
to produce a self-sustaining chain reaction.)<br />
The system was brought to full operating power, 1.1 megawatts<br />
of thermal power, on December 19, 1951. The next day, December<br />
20, at 11:00 a.m., steam was directed to a turbine generator. At 1:23<br />
p.m., the generator was connected to the electrical grid at the site,<br />
<strong>and</strong> “electricity flowed from atomic energy,” in the words of Zinn’s<br />
console log of that day. Approximately 200 kilowatts of electric<br />
power were generated most of the time that the reactor was run.<br />
This was enough to satisfy the needs of the EBR-I facilities. The reactor<br />
was shut down in 1964 after five years of use primarily as a test<br />
facility. It had also produced the first pure plutonium.<br />
With the first fuel loading, a conversion ratio of 1.01 was achieved,<br />
meaning that about 1 percent more new fuel was generated than was<br />
consumed. When later fuel loadings were made with plutonium,<br />
the conversion ratios were more favorable, reaching as high
as 1.27. EBR-I was the first reactor to generate its own fuel <strong>and</strong> the<br />
first power reactor to use plutonium for fuel.<br />
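The conversion ratio mentioned above can be made concrete with a small sketch. The ratio's definition here, fissile material produced per unit of fissile material consumed, is an assumption for illustration rather than something the text spells out; the figures 1.01 and 1.27 come from the EBR-I account.

```python
# Sketch of the breeding arithmetic (the ratio definition is assumed
# for illustration; the 1.01 and 1.27 figures are from the EBR-I text).
def conversion_ratio(fissile_produced: float, fissile_consumed: float) -> float:
    """Fissile material created per unit of fissile material consumed."""
    return fissile_produced / fissile_consumed

# First EBR-I loading: about 1 percent more fuel made than used.
print(conversion_ratio(1.01, 1.00))  # 1.01
# Later plutonium loadings: up to 27 percent more.
print(conversion_ratio(1.27, 1.00))  # 1.27
```

A ratio above 1.0 is the breeder concept in one number: the reactor makes more fuel than it burns.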
The use of EBR-I also included pioneering work on fuel recovery<br />
<strong>and</strong> reprocessing. During its five-year lifetime, EBR-I operated with<br />
four different fuel loadings, each designed to establish specific<br />
benchmarks of breeder technology. This reactor was seen as the first<br />
in a series of increasingly large reactors in a program designed to<br />
develop breeder technology. The reactor was replaced by EBR-II,<br />
which had been proposed in 1953 <strong>and</strong> was constructed from 1955 to<br />
1964. EBR-II was capable of producing 20 megawatts of electrical<br />
power. It was approximately fifty times more powerful than EBR-I<br />
but still small compared to light-water commercial reactors of 600 to<br />
1,100 megawatts in use toward the end of the twentieth century.<br />
Consequences<br />
The potential for peaceful uses of nuclear fission was dramatized<br />
with the start-up of EBR-I in 1951: it was the first reactor in the world<br />
to produce electricity, while also pioneering a breeder reactor<br />
program. The breeder program was not the only reactor program<br />
being developed, however, <strong>and</strong> it eventually gave way to the<br />
light-water reactor design for use in the United States. Still, if energy<br />
resources fall into short supply, it is likely that the technologies first<br />
developed with EBR-I will find new importance. In France <strong>and</strong> Japan,<br />
commercial reactors make use of breeder reactor technology;<br />
these reactors require extensive fuel reprocessing.<br />
Following the completion of tests with plutonium loading in 1964,<br />
EBR-I was shut down and placed in standby status. In 1966, it was declared<br />
a National Historic Landmark under the stewardship of the<br />
U.S. Department of the Interior. The facility was opened to the public<br />
in June, 1975.<br />
See also Atomic bomb; Geothermal power; Nuclear power<br />
plant; Nuclear reactor; Solar thermal engine; Tidal power plant.
Further Reading<br />
“Breeder Trouble.” Technology Review 91, no. 5 (July, 1988).<br />
Hippel, Frank von, <strong>and</strong> Suzanne Jones. “Birth of the Breeder.” Bulletin<br />
of the Atomic Scientists 53, no. 5 (September/October, 1997).<br />
Krieger, David. Splitting the Atom: A Chronology of the Nuclear Age.<br />
Santa Barbara, Calif.: Nuclear Age Peace Foundation, 1998.<br />
Broadcaster guitar<br />
The invention: The first commercially manufactured solid-body<br />
electric guitar, the Broadcaster revolutionized the guitar industry<br />
and changed the face of popular music.<br />
The people behind the invention:<br />
Leo Fender (1909-1991), designer of affordable and easily mass-produced<br />
solid-body electric guitars<br />
Les Paul (Lester William Polfuss, 1915- ), a legendary<br />
guitarist <strong>and</strong> designer of solid-body electric guitars<br />
Charlie Christian (1916-1942), an influential electric jazz<br />
guitarist of the 1930’s<br />
Early Electric Guitars<br />
It has been estimated that between 1931 <strong>and</strong> 1937, approximately<br />
twenty-seven hundred electric guitars <strong>and</strong> amplifiers were sold in<br />
the United States. The Electro String Instrument Company, run<br />
by Adolph Rickenbacker <strong>and</strong> his designer partners, George Beauchamp<br />
<strong>and</strong> Paul Barth, produced two of the first commercially manufactured<br />
electric guitars—the Rickenbacker A-22 <strong>and</strong> A-25—in<br />
1931. The Rickenbacker models were what are known as “lap steel”<br />
or Hawaiian guitars. A Hawaiian guitar is played with the instrument<br />
lying flat across a guitarist’s knees. By the mid-1930’s, the Gibson<br />
company had introduced an electric Spanish guitar, the ES-150.<br />
Legendary jazz guitarist Charlie Christian made this model famous<br />
while playing for Benny Goodman’s orchestra. Christian was the<br />
first electric guitarist to be heard by a large American audience.<br />
He became an inspiration for future electric guitarists, because he<br />
proved that the electric guitar could have its own unique solo<br />
sound. Along with Christian, the other electric guitar figures who<br />
put the instrument on the musical map were blues guitarist T-Bone<br />
Walker, guitarist <strong>and</strong> inventor Les Paul, <strong>and</strong> engineer <strong>and</strong> inventor<br />
Leo Fender.<br />
Early electric guitars were really no more than acoustic guitars,<br />
with the addition of one or more pickups, which convert string vibrations<br />
to electrical signals that can be played through a speaker.<br />
Amplification of a guitar made it a more assertive musical instrument.<br />
The electrification of the guitar ultimately would make it<br />
more flexible, giving it a more prominent role in popular music. Les<br />
Paul, always a compulsive inventor, began experimenting with<br />
ways of producing an electric solid-body guitar in the late 1930’s. In<br />
1929, at the age of thirteen, he had amplified his first acoustic guitar.<br />
Another influential inventor of the 1940’s was Paul Bigsby. He built<br />
a prototype solid-body guitar for country music star Merle Travis in<br />
1947. It was Leo Fender who revolutionized the electric guitar industry<br />
by producing the first commercially viable solid-body electric<br />
guitar, the Broadcaster, in 1948.<br />
Leo Fender<br />
Leo Fender was born in the Anaheim, California, area in 1909. As<br />
a teenager, he began to build <strong>and</strong> repair guitars. By the 1930’s,<br />
Fender was building <strong>and</strong> renting out public address systems for<br />
group gatherings. In 1937, after short tenures of employment with<br />
the Division of Highways <strong>and</strong> the U.S. Tire Company, he opened a<br />
radio repair company in Fullerton, California. Always looking to<br />
exp<strong>and</strong> <strong>and</strong> invent new <strong>and</strong> exciting electrical gadgets, Fender <strong>and</strong><br />
Clayton Orr “Doc” Kauffman started the K & F Company in 1944.<br />
Kauffman was a musician <strong>and</strong> a former employee of the Electro<br />
String Instrument Company. The K & F Company lasted until 1946<br />
<strong>and</strong> produced steel guitars <strong>and</strong> amplifiers. After that partnership<br />
ended, Fender founded the Fender Electric Instruments Company.<br />
With the help of George Fullerton, who joined the company in<br />
1948, Fender developed the Fender Broadcaster. The body of the<br />
Broadcaster was made of a solid plank of ash wood. The corners of<br />
the ash body were rounded. There was a cutaway located under the<br />
joint with the solid maple neck, making it easier for the guitarist to<br />
access the higher frets. The maple neck was bolted to the body of the<br />
guitar, which was unusual, since most guitar necks prior to the<br />
Broadcaster had been glued to the body. Frets were positioned directly<br />
into designed cuts made in the maple of the neck. The guitar<br />
had two pickups.<br />
The Fender Electric Instruments Company made fewer than one
thous<strong>and</strong> Broadcasters. In 1950, the name of the guitar was changed<br />
from the Broadcaster to the Telecaster, as the Gretsch company had<br />
already registered the name Broadcaster for some of its drums <strong>and</strong><br />
banjos. Fender decided not to fight in court over use of the name.<br />
Leo Fender has been called the Henry Ford of the solid-body<br />
electric guitar, <strong>and</strong> the Telecaster became known as the Model T of<br />
the industry. The early Telecasters sold for $189.50. Besides being inexpensive,<br />
the Telecaster was a very durable instrument. Basically,<br />
the Telecaster was a continuation of the Broadcaster. Fender did not<br />
file for a patent on its unique bridge pickup until January 13, 1950,<br />
<strong>and</strong> he did not file for a patent on the Telecaster’s unique body<br />
shape until April 3, 1951.<br />
In the music industry during the late 1940’s, it was important for<br />
a company to unveil new instruments at trade shows. At this time,<br />
there was only one important trade show, sponsored by the National<br />
Association of Music Merchants. The Broadcaster was first<br />
unveiled at the 1948 trade show in Chicago. The industry<br />
had never seen anything like this guitar before. This new guitar<br />
existed only to be amplified; it was not merely an acoustic guitar<br />
that had been converted.<br />
Impact<br />
The Telecaster, as it would be called after 1950, remained in continuous<br />
production for more years than any other guitar of its type<br />
<strong>and</strong> was one of the industry’s best sellers. From the beginning, it<br />
looked <strong>and</strong> sounded unique. The electrified acoustic guitars had a<br />
mellow woody tone, whereas the Telecaster had a clean twangy<br />
tone. This tone made it popular with country <strong>and</strong> blues guitarists.<br />
The Telecaster could also be played at higher volume than previous<br />
electric guitars.<br />
In introducing an electric solid-body guitar, Leo Fender attempted<br />
something revolutionary, and there was no guarantee that<br />
his business venture would succeed. Fender Electric Instruments<br />
Company had fifteen employees in 1947. At times, during the early<br />
years of the company, it looked as though Fender’s dreams would<br />
not come to fruition, but the company persevered and grew. Between<br />
1948 and 1955, with a growing workforce, the company<br />
was able to produce ten thousand Broadcaster/Telecaster guitars.<br />
Fender had taken a big risk, but it paid off enormously. Between<br />
1958 <strong>and</strong> the mid-1970’s, Fender produced more than 250,000 Telecasters.<br />
Other guitar manufacturers were placed in a position of<br />
having to catch up. Fender had succeeded in developing a process<br />
by which electric solid-body guitars could be manufactured profitably<br />
on a large scale.<br />
Early Guitar Pickups<br />
The first pickups used on a guitar can be traced back to the 1920’s<br />
and the efforts of Lloyd Loar, but there was little interest on the<br />
part of the American public in an amplified guitar. The public<br />
did not become intrigued until the 1930’s. Charlie Christian’s<br />
electric guitar performances with Benny Goodman woke up the<br />
public to the potential of this new <strong>and</strong> exciting sound. It was not until<br />
the 1950’s, though, that the electric guitar became firmly established.<br />
Leo Fender was the right man in the right place. He could not<br />
have known that his Fender guitars would help to usher in a whole<br />
new musical l<strong>and</strong>scape. Since the electric guitar was the newest<br />
member of the family of guitars, it took some time for musical audiences<br />
to fully appreciate what it could do. The electric solid-body<br />
guitar has been called a dangerous, uncivilized instrument. The<br />
youth culture of the 1950’s found in this new guitar a voice for their<br />
rebellion. Fender unleashed a revolution not only in the construction<br />
of a guitar but also in the way popular music would be approached<br />
henceforth.<br />
Because of the ever-increasing dem<strong>and</strong> for the Fender product,<br />
Fender Sales was established as a separate distribution company in<br />
1953 by Don R<strong>and</strong>all. Fender Electric Instruments Company had fifteen<br />
employees in 1947, but by 1955, the company employed fifty<br />
people. By 1960, the number of employees had risen to more than<br />
one hundred. Before Leo Fender sold the company to CBS on January<br />
4, 1965, for $13 million, the company occupied twenty-seven<br />
buildings <strong>and</strong> employed more than five hundred workers.<br />
Always interested in finding new ways of designing a more nearly<br />
perfect guitar, Leo Fender again came up with a remarkable guitar in<br />
1954, with the Stratocaster. There was talk in the guitar industry that
Charlie Christian<br />
Charlie Christian (1916-1942) did not invent the electric guitar,<br />
but he did pioneer its use. He was born to music, <strong>and</strong> for<br />
jazz aficionados he quickly developed into a legend, not only<br />
establishing a new solo instrument but also helping to invent a<br />
whole new type of jazz.<br />
Christian was born in Texas and grew up surrounded by a family of professional<br />
musicians. His parents <strong>and</strong> two brothers played trumpet,<br />
guitar, <strong>and</strong> piano, <strong>and</strong> sang, <strong>and</strong> Charlie was quick to imitate<br />
them. As a boy he made his own guitars out of cigar boxes<br />
<strong>and</strong>, according to a childhood friend, novelist Ralph Ellison,<br />
wowed his friends at school with his riffs. When he first heard<br />
an electric guitar in the mid-1930’s, he made that his own, too.<br />
The acoustic guitar had been only a backup instrument in<br />
jazz because it was too quiet to soar in solos. In 1935, Eddie Durham<br />
found that electric guitars could swing side by side with<br />
louder instruments. Charlie, already an experienced performer<br />
with acoustic guitar <strong>and</strong> bass, immediately recognized the power<br />
<strong>and</strong> range of subtle expression possible with the electrified instrument.<br />
He bought a Gibson ES-150 <strong>and</strong> began to make musical<br />
history with his improvisations.<br />
He impressed producer John Hammond, who introduced<br />
him to big-b<strong>and</strong> leader Benny Goodman in 1939. Notoriously<br />
hard to please, Goodman rejected Christian after an audition.<br />
However, Hammond later sneaked him on stage while the<br />
Goodman b<strong>and</strong> was performing. Outraged, Goodman segued<br />
into a tune he was sure Christian did not know, “Rose Room.”<br />
Christian was undaunted. He delivered an astonishingly inventive<br />
solo, <strong>and</strong> Goodman was won over despite himself. Christian’s<br />
ensuing tenure with Goodman’s b<strong>and</strong> brought electric<br />
guitar solos into the limelight.<br />
However, it was during after-hours jam sessions at the Hotel<br />
Cecil in New York that Christian left his stylistic imprint on<br />
jazz. Including such jazz greats as Joe Guy, Thelonious Monk,<br />
<strong>and</strong> Kenny Clarke, the groups played around with new sounds.<br />
Out of these sessions bebop was born, <strong>and</strong> Christian was a central<br />
figure. Sick with tuberculosis, he had to quit playing in 1941<br />
<strong>and</strong> died the following spring, only twenty-five years old.
Fender had gone too far with the introduction of the Stratocaster, but<br />
it became a huge success because of its versatility. It was the first commercial<br />
solid-body electric guitar to have three pickups <strong>and</strong> a vibrato<br />
bar. It was also easier to play than the Telecaster because of its double<br />
cutaway, contoured body, <strong>and</strong> scooped back. The Stratocaster sold<br />
for $249.50. Since its introduction, the Stratocaster has undergone<br />
some minor changes, but Fender <strong>and</strong> his staff basically got it right the<br />
first time.<br />
The Gibson company entered the solid-body market in 1952 with<br />
the unveiling of the “Les Paul” model. After the Telecaster, the Les<br />
Paul guitar was the next significant solid-body to be introduced. Les<br />
Paul was a legendary guitarist who also had been experimenting<br />
with electric guitar designs for many years. The Gibson designers<br />
came up with a striking model that produced a thick rounded tone.<br />
Over the years, the Les Paul model has won a loyal following.<br />
The Precision Bass<br />
In 1951, Leo Fender introduced another revolutionary guitar, the<br />
Precision bass. At a cost of $195.50, the first electric bass would go on<br />
to dominate the market. The Fender company has manufactured numerous<br />
guitar models over the years, but the three that st<strong>and</strong> above<br />
all others in the field are the Telecaster, the Precision bass, <strong>and</strong> the<br />
Stratocaster. The Telecaster is considered to be more of a workhorse,<br />
whereas the Stratocaster is thought of as the thoroughbred of electric<br />
guitars. The Precision bass was in its own right a revolutionary guitar.<br />
With a styling that had been copied from the Telecaster, the Precision<br />
freed musicians from bulky oversized acoustic basses, which<br />
were prone to feedback. The name Precision had meaning. Fender’s<br />
electric bass made it possible, with its frets, for the precise playing of<br />
notes; many acoustic basses were fretless. The original Precision bass<br />
model was manufactured from 1951 to 1954. The next version lasted<br />
from 1954 until June of 1957. The Precision bass that went into production<br />
in June, 1957, with its split humbucking pickup, continued to<br />
be the st<strong>and</strong>ard electric bass on the market into the 1990’s.<br />
By 1964, the Fender Electric Instruments Company had grown<br />
enormously. In addition to Leo Fender, a number of crucial people<br />
worked for the organization, including George Fullerton <strong>and</strong> Don
R<strong>and</strong>all. Fred Tavares joined the company’s research <strong>and</strong> development<br />
team in 1953. In May, 1954, Forrest White became Fender’s<br />
plant manager. All these individuals played vital roles in the success<br />
of Fender, but the driving force behind the scene was always<br />
Leo Fender. As Fender’s health deteriorated, R<strong>and</strong>all commenced<br />
negotiations with CBS to sell the Fender company. In January, 1965,<br />
CBS bought Fender for $13 million. Eventually, Leo Fender regained<br />
his health, <strong>and</strong> he was hired as a technical adviser by CBS/Fender.<br />
He continued in this capacity until 1970. He remained determined<br />
to create more guitar designs of note. Although he never again produced<br />
anything that could equal his previous success, he never<br />
stopped trying to attain a new perfection of guitar design.<br />
Fender died on March 21, 1991, in Fullerton, California. He had<br />
suffered for years from Parkinson’s disease, <strong>and</strong> he died of complications<br />
from the disease. He is remembered for his Broadcaster/<br />
Telecaster, Precision bass, <strong>and</strong> Stratocaster, which revolutionized<br />
popular music. Because the Fender company was able to mass produce<br />
these <strong>and</strong> other solid-body electric guitars, new styles of music<br />
that relied on the sound made by an electric guitar exploded onto<br />
the scene. The electric guitar manufacturing business grew rapidly<br />
after Fender introduced mass production. Besides American companies,<br />
guitar makers have flourished in Europe<br />
and Japan.<br />
The marriage between rock music <strong>and</strong> solid-body electric guitars<br />
was initiated by the Fender guitars. The Telecaster, Precision bass,<br />
and Stratocaster became synonymous with the explosive character<br />
of rock <strong>and</strong> roll music. The multi-billion-dollar music business can<br />
point to Fender as the pragmatic visionary who put the solid-body<br />
electric guitar into the forefront of the musical scene. His innovative<br />
guitars have been used by some of the most important guitarists of<br />
the rock era, including Jimi Hendrix, Eric Clapton, <strong>and</strong> Jeff Beck.<br />
More important, Fender guitars have remained bestsellers with<br />
the public worldwide. Amateur musicians purchased them by the<br />
thous<strong>and</strong>s for their own entertainment. Owning <strong>and</strong> playing a<br />
Fender guitar, or one of the other electric guitars that followed, allowed<br />
these amateurs to feel closer to their musician idols. A large<br />
market for sheet music from popular artists also developed.<br />
In 1992, Fender was inducted into the Rock <strong>and</strong> Roll Hall of
Fame. He is one of the few non-musicians ever to be inducted. The<br />
sound of an electric guitar is the sound of exuberance, <strong>and</strong> since the<br />
Broadcaster was first unveiled in 1948, that sound has grown to be<br />
pervasive <strong>and</strong> enormously profitable.<br />
See also Cassette recording; Dolby noise reduction; Electronic<br />
synthesizer.<br />
Further Reading<br />
Bacon, Tony, <strong>and</strong> Paul Day. The Fender Book. San Francisco: GPI<br />
Books, 1992.<br />
Brosnac, Donald, ed. Guitars Made by the Fender Company. Westport,<br />
Conn.: Bold Strummer, 1986.<br />
Freeth, Nick. The Electric Guitar. Philadelphia: Courage Books, 1999.<br />
Trynka, Paul. The Electric Guitar: An Illustrated History. San Francisco:<br />
Chronicle Books, 1995.<br />
Wheeler, Tom. American Guitars: An Illustrated History. New York:<br />
Harper & Row, 1982.<br />
_____. “Electric Guitars.” In The Guitar Book: A H<strong>and</strong>book for Electric<br />
<strong>and</strong> Acoustic Guitarists. New York: Harper & Row, 1974.
Brownie camera<br />
The invention: The first inexpensive <strong>and</strong> easy-to-use camera available<br />
to the general public, the Brownie revolutionized photography<br />
by making it possible for every person to become a photographer.<br />
The people behind the invention:<br />
George Eastman (1854-1932), founder of the Eastman Kodak<br />
Company<br />
Frank A. Brownell, a camera maker for the Kodak Company<br />
who designed the Brownie<br />
Henry M. Reichenbach, a chemist who worked with Eastman to<br />
develop flexible film<br />
William H. Walker, a Rochester camera manufacturer who<br />
collaborated with Eastman<br />
A New Way to Take Pictures<br />
In early February of 1900, the first shipments of a new small box<br />
camera called the Brownie reached Kodak dealers in the United<br />
States <strong>and</strong> Engl<strong>and</strong>. George Eastman, eager to put photography<br />
within the reach of everyone, had directed Frank Brownell to design<br />
a small camera that could be manufactured inexpensively but that<br />
would still take good photographs.<br />
Advertisements for the Brownie proclaimed that everyone—<br />
even children—could take good pictures with the camera. The<br />
Brownie was aimed directly at the children’s market, a fact indicated<br />
by its box, which was decorated with drawings of imaginary<br />
elves called “Brownies” created by the Canadian illustrator Palmer<br />
Cox. Moreover, the camera cost only one dollar.<br />
The Brownie was made of jute board <strong>and</strong> wood, with a hinged<br />
back fastened by a sliding catch. It had an inexpensive two-piece<br />
glass lens <strong>and</strong> a simple rotary shutter that allowed both timed <strong>and</strong><br />
instantaneous exposures to be made. With a lens aperture of approximately<br />
f/14 and a shutter speed of approximately 1/50 of a second,<br />
the Brownie was certainly capable of taking acceptable snapshots.<br />
It had no viewfinder; however, an optional clip-on reflecting<br />
viewfinder was available. The camera came loaded with a six-exposure<br />
roll of Kodak film that produced square negatives 2.5 inches on<br />
a side. This film could be developed, printed, <strong>and</strong> mounted for forty<br />
cents, <strong>and</strong> a new roll could be purchased for fifteen cents.<br />
George Eastman’s first career choice had been banking, but when<br />
he failed to receive a promotion he thought he deserved, he decided<br />
to devote himself to his hobby, photography. Having worked with a<br />
rigorous wet-plate process, he knew why there were few amateur<br />
photographers at the time—the whole process, from plate preparation<br />
to printing, was too expensive <strong>and</strong> too much trouble. Even so,<br />
he had already begun to think about the commercial possibilities of<br />
photography; after reading of British experiments with dry-plate<br />
technology, he set up a small chemical laboratory <strong>and</strong> came up with<br />
a process of his own. The Eastman Dry Plate Company became one<br />
of the most successful producers of gelatin dry plates.<br />
Dry-plate photography had attracted more amateurs, but it was<br />
still a complicated <strong>and</strong> expensive hobby. Eastman realized that the<br />
number of photographers would have to increase considerably if<br />
the market for cameras <strong>and</strong> supplies were to have any potential. In<br />
the early 1880’s, Eastman first formulated the policies that would<br />
make the Eastman Kodak Company so successful in years to come:<br />
mass production, low prices, foreign <strong>and</strong> domestic distribution, <strong>and</strong><br />
selling through extensive advertising <strong>and</strong> by demonstration.<br />
In his efforts to exp<strong>and</strong> the amateur market, Eastman first tackled<br />
the problem of the glass-plate negative, which was heavy, fragile,<br />
<strong>and</strong> expensive to make. By 1884, his experiments with paper<br />
negatives had been successful enough that he changed the name of<br />
his company to The Eastman Dry Plate <strong>and</strong> Film Company. Since<br />
flexible roll film needed some sort of device to hold it steady in the<br />
camera’s focal plane, Eastman collaborated with William Walker<br />
to develop the Eastman-Walker roll-holder. Eastman’s pioneering<br />
manufacture <strong>and</strong> use of roll films led to the appearance on the market<br />
in the 1880’s of a wide array of h<strong>and</strong> cameras from a number of<br />
different companies. Such cameras were called “detective cameras”<br />
because they were small <strong>and</strong> could be used surreptitiously. The<br />
most famous of these, introduced by Eastman in 1888, was named<br />
the “Kodak”—a word he coined to be terse, distinctive, <strong>and</strong> easily
pronounced in any language. This camera’s simplicity of operation<br />
was appealing to the general public <strong>and</strong> stimulated the growth of<br />
amateur photography.<br />
The Camera<br />
The Kodak was a box about seven inches long <strong>and</strong> four inches<br />
wide, with a one-speed shutter <strong>and</strong> a fixed-focus lens that produced<br />
reasonably sharp pictures. It came loaded with enough roll film to<br />
make one hundred exposures. The camera’s initial price of twenty-five<br />
dollars included the cost of processing the first roll of film; the<br />
camera also came with a leather case <strong>and</strong> strap. After the film was<br />
exposed, the camera was mailed, unopened, to the company’s plant<br />
in Rochester, New York, where the developing <strong>and</strong> printing were<br />
done. For an additional ten dollars, the camera was reloaded <strong>and</strong><br />
sent back to the customer.<br />
The Kodak was advertised in mass-market publications, rather<br />
than in specialized photographic journals, with the slogan: “You<br />
press the button, we do the rest.” With his introduction of a camera<br />
that was easy to use <strong>and</strong> a service that eliminated the need to know<br />
anything about processing negatives, Eastman revolutionized the<br />
photographic market. Thous<strong>and</strong>s of people no longer depended<br />
upon professional photographers for their portraits but instead<br />
learned to make their own. In 1892, the Eastman Dry Plate <strong>and</strong> Film<br />
Company became the Eastman Kodak Company, <strong>and</strong> by the mid-<br />
1890’s, one hundred thous<strong>and</strong> Kodak cameras had been manufactured<br />
<strong>and</strong> sold, half of them in Europe by Kodak Limited.<br />
Having popularized photography with the first Kodak, in 1900<br />
Eastman turned his attention to the children’s market with the introduction<br />
of the Brownie. The first five thous<strong>and</strong> cameras sent to<br />
dealers were sold immediately; by the end of the following year, almost<br />
a quarter of a million had been sold. The Kodak Company organized<br />
Brownie camera clubs <strong>and</strong> held competitions specifically<br />
for young photographers. The Brownie came with an instruction<br />
booklet that gave children simple directions for taking successful<br />
pictures, <strong>and</strong> “The Brownie Boy,” an appealing youngster who<br />
loved photography, became a st<strong>and</strong>ard feature of Kodak’s advertisements.
Impact

Eastman followed the success of the first Brownie by introducing several additional models between 1901 and 1917. Each was a more elaborate version of the original. These Brownie box cameras were on the market until the early 1930’s, and their success inspired other companies to manufacture box cameras of their own. In 1906, the Ansco company produced the Buster Brown camera in three sizes that corresponded to Kodak’s Brownie camera range; in 1910 and 1914, Ansco made three more versions. The Seneca company’s Scout box camera, in three sizes, appeared in 1913, and Sears Roebuck’s Kewpie cameras, in five sizes, were sold beginning in 1916. In England, the Houghtons company introduced its first Scout camera in 1901, followed by another series of four box cameras in 1910 sold under the Ensign trademark. Other English manufacturers of box cameras included the James Sinclair company, with its Traveller Una of 1909, and the Thornton-Pickard company, with a Filma camera marketed in four sizes in 1912.

After World War I ended, several series of box cameras were manufactured in Germany by companies that had formerly concentrated on more advanced and expensive cameras. The success of box cameras in other countries, led by Kodak’s Brownie, undoubtedly prompted this trend in the German photographic industry. The Ernemann Film K series of cameras in three sizes, introduced in 1919, and the all-metal Trapp Little Wonder of 1922 are examples of popular German box cameras.

In the early 1920’s, camera manufacturers began making box-camera bodies from metal rather than from wood and cardboard. Machine-formed metal was less expensive than the traditional hand-worked materials. In 1924, Kodak’s two most popular Brownie sizes appeared with aluminum bodies.

In 1928, Kodak Limited of England added two important new features to the Brownie—a built-in portrait lens, which could be brought in front of the taking lens by pressing a lever, and camera bodies in a range of seven different fashion colors. The Beau Brownie cameras, made in 1930, were the most popular of all the colored box cameras. The work of Walter Dorwin Teague, a leading American designer, these cameras had an Art Deco geometric pattern on the front panel, which was enameled in a color matching the leatherette covering of the camera body. Several other companies, including Ansco, again followed Kodak’s lead and introduced their own lines of colored cameras.
In the 1930’s, several new box cameras with interesting features appeared, many manufactured by leading film companies. In France, the Lumiere Company advertised a series of box cameras—the Luxbox, Scoutbox, and Lumibox—that ranged from a basic camera to one with an adjustable lens and shutter. In 1933, the German Agfa company restyled its entire range of box cameras, and in 1939, the Italian Ferrania company entered the market with box cameras in two sizes. In 1932, Kodak redesigned its Brownie series to take the new 620 roll film, which it had just introduced. This film and the new Six-20 Brownies inspired other companies to experiment with variations of their own; some box cameras, such as the Certo Double-box, the Coronet Every Distance, and the Ensign E-20 cameras, offered a choice of two picture formats.

Another new trend was a move toward smaller-format cameras using standard 127 roll film. In 1934, Kodak marketed the small Baby Brownie. Designed by Teague and made from molded black plastic, this little camera with a folding viewfinder sold for only one dollar—the price of the original Brownie in 1900.

The Baby Brownie, the first Kodak camera made of molded plastic, heralded the move to the use of plastic in camera manufacture. Soon many others, such as the Altissa series of box cameras and the Voigtlander Brilliant V/6 camera, were being made from this new material.
Later Trends

By the late 1930’s, flashbulbs had replaced flash powder for taking pictures in low light; again, the Eastman Kodak Company led the way in introducing this new technology as a feature on the inexpensive box camera. The Falcon Press-Flash, marketed in 1939, was the first mass-produced camera to have flash synchronization and was followed the next year by the Six-20 Flash Brownie, which had a detachable flash gun. In the early 1940’s, other companies, such as Agfa-Ansco, introduced this feature on their own box cameras.
George Eastman

Frugal, bold, practical, generous to those who were loyal, impatient with dissent, and possessing a steely determination, George Eastman (1854-1932) rose to become one of the richest people of his generation. He abhorred poverty and did his best to raise others from it as well.

At age fourteen, when his father died, Eastman had to drop out of school to support his mother and sister. The missed opportunity for an education and the struggle to earn a living shaped his outlook. He worked at an insurance agency and then at a bank, keeping careful record of the money he earned. By the time he was twenty-five he had saved three thousand dollars and found his job as a banker to be unrewarding.

As a teenager, he had taught himself photography. However, that was only a start. He taught himself the physics and chemistry of photography too—and enough French and German to read the latest foreign scientific journals. His purpose was practical: to make cameras cheap and easy to use so that average people could own them. This launched him on the career of invention and business that took him away from banking and made his fortune. At the same time he remembered his origins and family. Out of his first earnings, he bought photographs for his mother and a favorite teacher. He never stopped giving. At the company he founded, he gave substantial raises to employees, reduced their hours, and installed safety equipment, a medical department, and a lunch room. He gave millions to the Hampton Institute, Tuskegee Institute, Massachusetts Institute of Technology, and University of Rochester, while also establishing dental clinics for the poor.

In his old age he found he could no longer keep up with his younger scientific and business colleagues. In 1932, leaving behind a note that asked, simply, “My work is done, why wait?” he committed suicide. Even then he continued to give. His will left most of his vast fortune to charities.
In the years after World War II, the box camera evolved into an eye-level camera, making it more convenient to carry and use. Many amateur photographers, however, still had trouble handling paper-backed roll film and were taking their cameras back to dealers to be unloaded and reloaded. Kodak therefore developed a new system of film loading, using the Kodapak cartridge, which could be mass-produced with a high degree of accuracy by precision plastic-molding techniques. To load the camera, the user simply opened the camera back and inserted the cartridge. This new film was introduced in 1963, along with a series of Instamatic cameras designed for its use. Both were immediately successful.
The popularity of the film cartridge ended the long history of the simple and inexpensive roll film camera. The last English Brownie was made in 1967, and the series of Brownies made in the United States was discontinued in 1970. Eastman’s original marketing strategy of simplifying photography in order to increase the demand for cameras and film continued, however, with the public’s acceptance of cartridge-loading cameras such as the Instamatic.

From the beginning, Eastman had recognized that there were two kinds of photographers other than professionals. The first, he declared, were the true amateurs who devoted time enough to acquire skill in the complex processing procedures of the day. The second were those who merely wanted personal pictures or memorabilia of their everyday lives, families, and travels. The second class, he observed, outnumbered the first by almost ten to one. Thus, it was to this second kind of amateur photographer that Eastman had appealed, both with his first cameras and with his advertising slogan, “You press the button, we do the rest.” Eastman had done much more than simply invent cameras and films; he had invented a system and then developed the means for supporting that system. This is essentially what the Eastman Kodak Company continued to accomplish with the series of Instamatics and other descendants of the original Brownie. In the decade between 1963 and 1973, for example, approximately sixty million Instamatics were sold throughout the world.

The research, manufacturing, and marketing activities of the Eastman Kodak Company have been so complex and varied that no one would suggest that the company’s prosperity rests solely on the success of its line of inexpensive cameras and cartridge films, although these have continued to be important to the company. Like Kodak, however, most large companies in the photographic industry have expanded their research to satisfy the ever-growing demand from amateurs. The amateurism that George Eastman recognized and encouraged at the beginning of the twentieth century thus still flourished at its end.

See also Autochrome plate; Color film; Instant photography.
Further Reading

Brooke-Ball, Peter. George Eastman and Kodak. Watford, England: Exley, 1994.
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Freund, Gisele. Photography and Society. Boston: David R. Godine, 1980.
Wade, John. A Short History of the Camera. Watford, England: Fountain Press, 1979.
West, Nancy Martha. Kodak and the Lens of Nostalgia. Charlottesville: University Press of Virginia, 2000.
Bubble memory

The invention: An early nonvolatile medium for storing information on computers.

The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories scientist
Magnetic Technology

The fanfare over the commercial prospects of magnetic bubbles was begun on August 8, 1969, by a report appearing in both The New York Times and The Wall Street Journal. The early 1970’s would see the anticipation mount (at least in the computer world) with each prediction of the benefits of this revolution in information storage technology.

Although it was not disclosed to the public until August of 1969, magnetic bubble technology had held the interest of a small group of researchers around the world for many years. The organization that probably can claim the greatest research advances with respect to computer applications of magnetic bubbles is Bell Telephone Laboratories (later part of American Telephone and Telegraph). Basic research into the properties of certain ferrimagnetic materials started at Bell Laboratories shortly after the end of World War II (1939-1945).

Ferrimagnetic substances are typically magnetic iron oxides. Research into the properties of these and related compounds accelerated after the discovery of ferrimagnetic garnets in 1956 (these are a class of ferrimagnetic oxide materials that have the crystal structure of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon that accounts for the strong attraction of one magnetized body for another. The ferrimagnetic materials most suited for bubble memories contain, in addition to iron, the element yttrium or a metal from the rare earth series.

It was a fruitful collaboration between scientist and engineer, between pure and applied science, that produced this promising breakthrough in data storage technology. In 1966, Bell Laboratories scientist Andrew H. Bobeck and his coworkers were the first to realize the data storage potential offered by the strange behavior of thin slices of magnetic iron oxides under an applied magnetic field. The first U.S. patent for a memory device using magnetic bubbles was filed by Bobeck in the fall of 1966 and issued on August 5, 1969.
Bubbles Full of Memories

The three basic functional elements of a computer are the central processing unit, the input/output unit, and memory. Most implementations of semiconductor memory require a constant power source to retain the stored data. If the power is turned off, all stored data are lost. Memory with this characteristic is called “volatile.” Disks and tapes, which are typically used for secondary memory, are “nonvolatile.” Nonvolatile memory relies on the orientation of magnetic domains, rather than on electrical currents, to sustain its existence.

One can visualize by analogy how this works by taking a group of permanent bar magnets that are labeled with N for north at one end and S for south at the other. If an arrow is painted starting from the north end with the tip at the south end on each magnet, an orientation can then be assigned to a magnetic domain (here one whole bar magnet). Data are “stored” with these bar magnets by arranging them in rows, some pointing up, some pointing down. Different arrangements translate to different data. In the binary world of the computer, all information is represented by two states. A stored data item (known as a “bit,” or binary digit) is either on or off, up or down, true or false, depending on the physical representation. The “on” state is commonly labeled with the number 1 and the “off” state with the number 0. This is the principle behind magnetic disk and tape data storage.
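The up/down encoding just described can be sketched in a few lines of Python. This is an illustrative toy model of the bar-magnet analogy, not the interface of any actual bubble-memory device: an “up” domain stands for 1, a “down” domain for 0, and a row of eight domains encodes one byte.

```python
# Toy model: a row of magnetic domains, each pointing "up" or "down".
domains = ["up", "down", "down", "up", "up", "down", "up", "down"]

# Map each orientation to a binary digit: up -> 1, down -> 0.
bits = [1 if d == "up" else 0 for d in domains]

# Eight bits form one byte; read the row as a base-2 number.
value = int("".join(str(b) for b in bits), 2)

print(bits)   # [1, 0, 0, 1, 1, 0, 1, 0]
print(value)  # 154, i.e. the byte 0b10011010
```

Reordering the same eight magnets yields a different bit pattern and therefore different data, which is the sense in which “different arrangements translate to different data.”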
Now imagine a thin slice of a certain type of magnetic material in the shape of a 3-by-5-inch index card. Under a microscope, using a special source of light, one can see through this thin slice in many regions of the surface. Darker, snakelike regions can also be seen, representing domains of an opposite orientation (polarity) to the transparent regions. If a weak external magnetic field is then applied by placing a permanent magnet of the same shape as the card on the underside of the slice, a strange thing happens to the dark serpentine pattern—the long domains shrink and eventually contract into “bubbles,” tiny magnetized spots. Viewed from the side of the slice, the bubbles are cylindrically shaped domains having a polarity opposite to that of the material on which they rest. The presence or absence of a bubble indicates either a 0 or a 1 bit. Data bits are stored by moving the bubbles in the thin film. As long as the field is applied by the permanent magnet substrate, the data will be retained. The bubble is thus a nonvolatile medium for data storage.
Consequences

Magnetic bubble memory created quite a stir in 1969 with its splashy public introduction. Most of the manufacturers of computer chips immediately instituted bubble memory development projects. Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International Business Machines (IBM) joined the race with Bell Laboratories to mass-produce bubble memory chips. Texas Instruments became the first major chip manufacturer to mass-produce bubble memories in the mid-to-late 1970’s. By 1990, however, almost all the research into magnetic bubble technology had shifted to Japan. Hitachi and Fujitsu began to invest heavily in this area.

Mass production proved to be the most difficult task. Although the materials it uses are different, the process of producing magnetic bubble memory chips is similar to the process applied in producing semiconductor-based chips such as those used for random access memory (RAM). It is for this reason that major semiconductor manufacturers and computer companies initially invested in this technology. Lower fabrication yields and reliability issues plagued early production runs, however, and, although these problems have mostly been solved, gains in the performance characteristics of competing conventional memories have limited the impact that magnetic bubble technology has had on the marketplace. The materials used for magnetic bubble memories are costlier and possess more complicated structures than those used for semiconductor or disk memory.

Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic bubbles. Conventional semiconductor-based memory offers storage only. The capability of performing logic with magnetic bubbles puts bubble technology far ahead of other magnetic technologies with respect to functional versatility.

A small niche market for bubble memory developed in the 1980’s. Magnetic bubble memory can be found in intelligent terminals, desktop computers, embedded systems, test equipment, and similar microcomputer-based systems.

See also Computer chips; Floppy disk; Hard disk; Optical disk; Personal computer.
Further Reading

“Bubble Memory’s Ruggedness Revives Interest for Military Use.” Aviation Week and Space Technology 130, no. 3 (January 16, 1989).
Graff, Gordon. “Better Bubbles.” Popular Science 232, no. 2 (February, 1988).
McLeod, Jonah. “Will Bubble Memories Make a Comeback?” Electronics 61, no. 14 (August, 1988).
Nields, Megan. “Bubble Memory Bursts into Niche Markets.” Mini-Micro Systems 20, no. 5 (May, 1987).
Bullet train

The invention: An ultrafast passenger railroad system capable of moving passengers at speeds double or triple those of ordinary trains.

The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to 1964, who pushed for the expansion of public expenditures
Shinji Sogo (1884-1981), the president of the Japanese National Railways, the “father of the bullet train”
Building a Faster Train

By 1900, Japan had a world-class railway system, a logical result of the country’s dense population and the needs of its modernizing economy. After 1907, the government controlled the system through the Japanese National Railways (JNR). In 1938, JNR engineers first suggested the idea of a train that would travel 125 miles per hour from Tokyo to the southern city of Shimonoseki. Construction of a rapid train began in 1940 but was soon stopped because of World War II.

The 311-mile railway between Tokyo and Osaka, the Tokaido Line, has always been the major line in Japan. By 1957, a business express along the line operated at an average speed of 57 miles per hour, but the double-track line was rapidly reaching its transport capacity. The JNR established two investigative committees to explore alternative solutions. In 1958, the second committee recommended the construction of a high-speed railroad on a separate double track, to be completed in time for the Tokyo Olympics of 1964. The Railway Technical Institute of the JNR concluded that it was feasible to design a line that would operate at an average speed of about 130 miles per hour, cutting time for travel between Tokyo and Osaka from six hours to three hours.

By 1962, about 17 miles of the proposed line were completed for test purposes. During the next two years, prototype trains were tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction costs of more than $1 billion, double the original estimates.
The Speeding Bullet

Service on the Shinkansen, or New Trunk Line, began on October 1, 1964, ten days before the opening of the Olympic Games. Commonly called the “bullet train” because of its shape and speed, the Shinkansen was an instant success with the public, both in Japan and abroad. As promised, the time required to travel between Tokyo and Osaka was cut in half. Initially, the system provided daily services of sixty trains consisting of twelve cars each, but the number of scheduled trains was almost doubled by the end of the year.

The Shinkansen was able to operate at its unprecedented speed because it was designed and operated as an integrated system, making use of countless technological and scientific developments. Tracks followed the standard gauge of 56.5 inches, rather than the narrower gauge common in Japan. For extra strength, heavy welded rails were attached directly onto reinforced concrete slabs. The minimum radius of a curve was 8,200 feet, except where sharper curves were mandated by topography. In many ways similar to modern airplanes, the railway cars were made airtight in order to prevent ear discomfort caused by changes in pressure when trains enter tunnels.

Japanese bullet trains. (PhotoDisc)

The Shinkansen trains were powered by electric traction motors, with four 185-kilowatt motors on each car—one motor attached to each axle. This design had several advantages: It provided an even distribution of axle load for reducing strain on the tracks; it allowed the application of dynamic brakes (where the motor was used for braking) on all axles; and it prevented the failure of one or two units from interrupting operation of the entire train. The 25,000-volt electrical current was carried by trolley wire to the cars, where it was rectified into a pulsating current to drive the motors.
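The installed power these figures imply is easy to check with a little arithmetic. The sketch below is a back-of-the-envelope illustration only, using the four 185-kilowatt motors per car and the twelve-car trains mentioned above:

```python
# Figures from the text: four 185-kW traction motors per car, twelve cars.
MOTORS_PER_CAR = 4
MOTOR_KW = 185
CARS_PER_TRAIN = 12

car_kw = MOTORS_PER_CAR * MOTOR_KW        # power per car
train_kw = car_kw * CARS_PER_TRAIN        # power per twelve-car train

print(car_kw)    # 740 kW per car
print(train_kw)  # 8880 kW (about 8.9 MW) per train
```

Distributing nearly nine megawatts across forty-eight axles, rather than concentrating it in a locomotive, is what produces the even axle loading and redundancy the text describes.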
The Shinkansen system established a casualty-free record because of its maintenance policies combined with its computerized Centralized Traffic Control system. The control room at Tokyo Station was designed to maintain timely information about the location of all trains and the condition of all routes. Although train operators had some discretion in determining speed, automatic brakes also operated to ensure a safe distance between trains. At least once each month, cars were thoroughly inspected; every ten days, an inspection train examined the conditions of tracks, communication equipment, and electrical systems.
Impact

Public usage of the Tokyo-Osaka bullet train increased steadily because of the system’s high speed, comfort, punctuality, and superb safety record. Businesspeople were especially happy that the rapid service allowed them to make the round-trip without the necessity of an overnight stay, and continuing modernization soon allowed nonstop trains to make a one-way trip in two and one-half hours, requiring speeds of 160 miles per hour in some stretches. By the early 1970’s, the line was transporting a daily average of 339,000 passengers in 240 trains, meaning that a train departed from Tokyo about every ten minutes.
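These figures are internally consistent, as a quick check shows. The sketch below is illustrative arithmetic only, using the 311-mile Tokyo-Osaka distance given earlier for the Tokaido Line:

```python
# Figures from the text: 311-mile Tokyo-Osaka line, 2.5-hour nonstop trip.
distance_miles = 311
nonstop_hours = 2.5

avg_mph = distance_miles / nonstop_hours
print(round(avg_mph, 1))  # 124.4 mph average, start to stop
```

An end-to-end average of about 124 miles per hour fits comfortably with peak speeds of 160 miles per hour “in some stretches,” since the average also includes curves, gradients, and terminal approaches run at lower speed.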
The popularity of the Shinkansen system quickly resulted in demands for its extension into other densely populated regions. In 1972, a 100-mile stretch between Osaka and Okayama was opened for service. By 1975, the line was further extended to Hakata on the island of Kyushu, passing through the Kammon undersea tunnel. The cost of this 244-mile stretch was almost $2.5 billion. In 1982, lines were completed from Tokyo to Niigata and from Tokyo to Morioka. By 1993, the system had grown to 1,134 miles of track. Since high usage made the system extremely profitable, the sale of the JNR to private companies in 1987 did not appear to produce adverse consequences.

The economic success of the Shinkansen had a revolutionary effect on thinking about the possibilities of modern rail transportation, leading one authority to conclude that the line acted as “a savior of the declining railroad industry.” Several other industrial countries were stimulated to undertake large-scale railway projects; France, especially, followed Japan’s example by constructing high-speed electric railroads from Paris to Nice and to Lyon. By the mid-1980’s, there were experiments with high-speed trains based on magnetic levitation and other radical innovations, but it was not clear whether such designs would be able to compete with the Shinkansen model.

See also Airplane; Atomic-powered ship; Diesel locomotive; Supersonic passenger plane.
Further Reading

French, Howard W. “Japan’s New Bullet Train Draws Fire.” New York Times (September 24, 2000).
Frew, Tim. Locomotives: From the Steam Locomotive to the Bullet Train. New York: Mallard Press, 1990.
Holley, David. “Faster Than a Speeding Bullet: High-Speed Trains Are Japan’s Pride, Subject of Debate.” Los Angeles Times (April 10, 1994).
O’Neill, Bill. “Beating the Bullet Train.” New Scientist 140, no. 1893 (October 2, 1993).
Raoul, Jean-Claude. “How High-Speed Trains Make Tracks.” Scientific American 277 (October, 1997).
Buna rubber

The invention: The first practical synthetic rubber product developed, Buna inspired the creation of other synthetic substances that eventually replaced natural rubber in industrial applications.

The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist
A New Synthetic Rubber

The discovery of natural rubber is often credited to the French scientist Charles de la Condamine, who, in 1736, sent the French Academy of Science samples of an elastic material used by Peruvian Indians to make balls that bounced. The material was primarily a curiosity until 1770, when Joseph Priestley, an English chemist, discovered that it rubbed out pencil marks, after which he called it “rubber.” Natural rubber, made from the sap of the rubber tree (Hevea brasiliensis), became important after Charles Goodyear discovered in 1839 that heating rubber with sulfur (a process called “vulcanization”) made it more elastic and easier to use. Vulcanized natural rubber came to be used to make raincoats, rubber bands, and motor vehicle tires.

Natural rubber is difficult to obtain (making one tire requires the amount of rubber produced by one tree in two years), and wars have often cut off supplies of this material to various countries. Therefore, efforts to manufacture synthetic rubber began in the late nineteenth century. Those efforts followed the discovery by English chemist Charles Greville Williams and others in the 1860’s that natural rubber was composed of thousands of molecules of a chemical called isoprene that had been joined to form giant, necklace-like molecules. The first successful synthetic rubber, Buna, was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic rubbers, which are now used in place of natural rubber in many applications.
Charles Goodyear<br />
It was an accident that finally showed Charles Goodyear<br />
(1800-1860) how to make rubber into a durable, practical material.<br />
For years he had been experimenting at home looking for<br />
ways to improve natural rubber—and producing stenches that<br />
drove his family and neighbors to distraction—when in 1839 he<br />
dropped a piece of rubber mixed with sulfur onto a hot stove.<br />
When he examined the charred specimen, he discovered it was<br />
not sticky, as hot natural rubber always is, and when he took it<br />
outside into the cold, it did not become brittle.<br />
The son of an inventor, Goodyear invented much<br />
more than his vulcanizing process for rubber. He also<br />
patented a spring-lever faucet, pontoon boat, hay fork,<br />
and air pump, but he was never successful in making<br />
money from his inventions. Owner of a hardware<br />
store, he went broke during a financial panic in 1830<br />
and had to spend time in debtor’s prison. He was<br />
never financially stable afterwards, often having to<br />
borrow money and sell his family’s belongings to<br />
support his experiments. And he had a large family—twelve<br />
children, of whom only half lived beyond childhood.<br />
Even vulcanized rubber did not make Goodyear’s fortune.<br />
He delayed patenting it until Thomas Hancock, an Englishman,<br />
replicated Goodyear’s method of vulcanizing and began producing<br />
rubber in England. Goodyear sued and lost. Others stole<br />
his method, and although he won one large case, legal expenses<br />
took away most of the settlement. He borrowed more and more<br />
money to advertise his product, with some success. For example,<br />
Emperor Napoleon III awarded Goodyear the Cross of the<br />
Legion of Honor for his display at the 1855 Universal<br />
Exposition in Paris. Nevertheless, Goodyear died deeply in debt.<br />
Despite all the imitators, vulcanized rubber remained associated<br />
with Goodyear. Thirty-eight years after he died, the<br />
world’s largest rubber manufacturer took his name for the company’s<br />
title.<br />
148 / Buna rubber<br />
From Erasers to Gas Pumps<br />
Natural rubber belongs to the group of chemicals called “polymers.”<br />
A polymer is a giant molecule that is made up of many simpler<br />
chemical units (“monomers”) that are attached chemically to<br />
form long strings. In natural rubber, the monomer is isoprene<br />
(2-methyl-1,3-butadiene). The first efforts to make a synthetic rubber<br />
used the discovery that isoprene could be made and converted<br />
into an elastic polymer. The synthetic rubber that was created from<br />
isoprene was, however, inferior to natural rubber. The first Buna<br />
rubber, which was patented by I. G. Farben in 1926, was better, but it<br />
was still less than ideal. Buna rubber was made by polymerizing the<br />
monomer butadiene in the presence of sodium. The name Buna<br />
comes from the first two letters of the words “butadiene” and “natrium”<br />
(German for sodium). Natural and Buna rubbers are called<br />
homopolymers because they contain only one kind of monomer.<br />
The ability of chemists to make Buna rubber, along with its successful<br />
use, led to experimentation with the addition of other monomers<br />
to isoprene-like chemicals used to make synthetic rubber.<br />
Among the first great successes were materials that contained two<br />
alternating monomers; such materials are called “copolymers.” If<br />
the two monomers are designated A and B, part of a polymer molecule<br />
can be represented as (ABABABABABABABABAB). Numerous<br />
synthetic copolymers, which are often called “elastomers,” now<br />
replace natural rubber in applications where they have superior<br />
properties. All elastomers are rubbers, since objects made from<br />
them both stretch greatly when pulled and return quickly to their<br />
original shape when the tension is released.<br />
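The homopolymer and alternating-copolymer notation described above can be sketched in a few lines of code. This is only an illustration of the (ABAB...) naming scheme, not of any real chemistry software:

```python
def homopolymer(monomer: str, units: int) -> str:
    """A homopolymer repeats a single monomer, as isoprene does in natural rubber."""
    return monomer * units

def alternating_copolymer(a: str, b: str, units: int) -> str:
    """An alternating copolymer repeats an AB pair, as in the (ABAB...) notation."""
    return (a + b) * units

# Buna-S alternates butadiene (B) and styrene (S):
print(alternating_copolymer("B", "S", 8))  # BSBSBSBSBSBSBSBS
```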
Two other well-known rubbers developed by I. G. Farben are the<br />
copolymers called Buna-N and Buna-S. These materials combine butadiene<br />
and the monomers acrylonitrile and styrene, respectively.<br />
Many modern motor vehicle tires are made of synthetic rubber that<br />
differs little from Buna-S rubber. This rubber was developed after<br />
the United States was cut off in the 1940’s, during World War II,<br />
from its Asian source of natural rubber. The solution to this problem<br />
was the development of a synthetic rubber industry based on GR-S<br />
rubber (government rubber plus styrene), which was essentially<br />
Buna-S rubber. This rubber is still widely used.
Buna-S rubber is often made by mixing butadiene and styrene in<br />
huge tanks of soapy water, stirring vigorously, and heating the mixture.<br />
The polymer contains equal amounts of butadiene and styrene<br />
(BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer<br />
reach the desired size, the polymerization is stopped and the rubber<br />
is coagulated (solidified) chemically. Then, water and all the unused<br />
starting materials are removed, after which the rubber is dried and<br />
shipped to various plants for use in tires and other products. The<br />
major difference between Buna-S and GR-S rubber is that the method<br />
of making GR-S rubber involves the use of low temperatures.<br />
Buna-N rubber is made in a fashion similar to that used for Buna-<br />
S, using butadiene and acrylonitrile. Both Buna-N and the related<br />
neoprene rubber, invented by Du Pont, are very resistant to gasoline<br />
and other liquid vehicle fuels. For this reason, they can be used in<br />
gas-pump hoses. All synthetic rubbers are vulcanized before they<br />
are used in industry.<br />
Impact<br />
Buna rubber became the basis for the development of the other<br />
modern synthetic rubbers. These rubbers have special properties<br />
that make them suitable for specific applications. One developmental<br />
approach involved the use of chemically modified butadiene in<br />
homopolymers such as neoprene. Made of chloroprene (chlorobutadiene),<br />
neoprene is extremely resistant to sun, air, and chemicals.<br />
It is so widely used in machine parts, shoe soles, and hoses that<br />
more than 400 million pounds are produced annually.<br />
Another developmental approach involved copolymers that alternated<br />
butadiene with other monomers. For example, the successful<br />
Buna-N rubber (butadiene and acrylonitrile) has properties<br />
similar to those of neoprene. It differs sufficiently from neoprene,<br />
however, to be used to make items such as printing press rollers.<br />
About 200 million pounds of Buna-N are produced annually. Some<br />
4 billion pounds of the even more widely used polymer Buna-S/<br />
GR-S are produced annually, most of which is used to make tires.<br />
Several other synthetic rubbers have significant industrial applications,<br />
and efforts to make copolymers for still other purposes continue.
See also Neoprene; Nylon; Orlon; Plastic; Polyester; Polyethylene;<br />
Polystyrene; Silicones; Teflon; Velcro.<br />
Further Reading<br />
Herbert, Vernon. Synthetic Rubber: A Project That Had to Succeed.<br />
Westport, Conn.: Greenwood Press, 1985.<br />
Mossman, S. T. I., and Peter John Turnbull Morris. The Development of<br />
Plastics. Cambridge: Royal Society of Chemistry, 1994.<br />
Von Hagen, Victor Wolfgang. South America Called Them: Explorations<br />
of the Great Naturalists, La Condamine, Humboldt, Darwin,<br />
Spruce. New York: A. A. Knopf, 1945.
CAD/CAM<br />
The invention: Computer-Aided Design (CAD) and Computer-<br />
Aided Manufacturing (CAM) enhanced flexibility in engineering<br />
design, leading to higher quality and reduced time for manufacturing.<br />
The people behind the invention:<br />
Patrick Hanratty, a General Motors Research Laboratory<br />
worker who developed graphics programs<br />
Jack St. Clair Kilby (1923- ), a Texas Instruments employee<br />
who first conceived of the idea of the integrated circuit<br />
Robert Noyce (1927-1990), an Intel Corporation employee who<br />
developed an improved process of manufacturing<br />
integrated circuits on microchips<br />
Don Halliday, an early user of CAD/CAM who created the<br />
Made-in-America car in only four months by using CAD<br />
and project management software<br />
Fred Borsini, an early user of CAD/CAM who demonstrated<br />
its power<br />
Summary of Event<br />
Computer-Aided Design (CAD) is a technique whereby geometrical<br />
descriptions of two-dimensional (2-D) or three-dimensional (3-<br />
D) objects can be created and stored, in the form of mathematical<br />
models, in a computer system. Points, lines, and curves are represented<br />
as graphical coordinates. When a drawing is requested from<br />
the computer, transformations are performed on the stored data,<br />
and the geometry of a part or a full view from either a two- or a<br />
three-dimensional perspective is shown. CAD systems replace the<br />
tedious process of manual drafting, and because stored drawings<br />
can be retrieved and redrawn whenever they are needed, drafting<br />
efficiency has improved. A CAD system is a combination of computer<br />
hardware and software that facilitates the construction of geometric<br />
models and, in many cases, their analysis. It allows a wide variety of<br />
visual representations of those models to be displayed.
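The idea of storing geometry as coordinates and producing views by transformation can be illustrated with a minimal sketch. The code below is a hypothetical toy, not any actual CAD system:

```python
import math

# A part is stored once as a list of (x, y) graphical coordinates.
Point = tuple[float, float]

def rotate(points: list[Point], angle_rad: float) -> list[Point]:
    """Produce a rotated view of the stored geometry."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def scale(points: list[Point], factor: float) -> list[Point]:
    """Produce a zoomed view of the stored geometry."""
    return [(x * factor, y * factor) for x, y in points]

# A unit square, stored once and redrawn in any requested orientation.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
view = rotate(scale(square, 2.0), math.pi / 2)
```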
Computer-Aided Manufacturing (CAM) refers to the use of computers<br />
to control, wholly or partly, manufacturing processes. In<br />
practice, the term is most often applied to computer-based developments<br />
of numerical control technology; robots and flexible manufacturing<br />
systems (FMS) are included in the broader use of CAM<br />
systems. A CAD/CAM interface is envisioned as a computerized<br />
database that can be accessed and enriched by either design or manufacturing<br />
professionals during various stages of the product development<br />
and production cycle.<br />
In CAD systems of the early 1990’s, the ability to model solid objects<br />
became widely available. The use of graphic elements such as<br />
lines and arcs and the ability to create a model by adding and subtracting<br />
solids such as cubes and cylinders are the basic principles of<br />
CAD and of simulating objects within a computer. CAD systems enable<br />
computers to simulate both taking things apart (sectioning)<br />
and putting things together for assembly. In addition to being able<br />
to construct prototypes and store images of different models, CAD<br />
systems can be used for simulating the behavior of machines, parts,<br />
and components. These abilities enable CAD to construct models<br />
that can be subjected to nondestructive testing; that is, even before<br />
engineers build a physical prototype, the CAD model can be subjected<br />
to testing and the results can be analyzed. As another example,<br />
designers of printed circuit boards have the ability to test their<br />
circuits on a CAD system by simulating the electrical properties of<br />
components.<br />
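The adding and subtracting of solids described above (constructive solid geometry) can be sketched minimally by modeling a solid as a function that reports whether a point lies inside it. This is an illustration of the concept only, not real CAD code:

```python
# A solid is modeled as a membership test: solid(x, y, z) -> bool.
def cube(size):
    return lambda x, y, z: 0 <= x <= size and 0 <= y <= size and 0 <= z <= size

def cylinder(radius, height):
    # Axis along z, centered on the origin in x and y.
    return lambda x, y, z: x * x + y * y <= radius * radius and 0 <= z <= height

def add(a, b):
    """Union: material belonging to either solid."""
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def subtract(a, b):
    """Difference: material of a with b cut away."""
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A 2 x 2 x 2 block with a cylindrical hole bored through one corner.
part = subtract(cube(2), cylinder(0.5, 2))
print(part(1.5, 1.5, 1.0))  # True: solid material away from the hole
print(part(0.1, 0.1, 1.0))  # False: inside the bored hole
```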
During the 1950’s, the U.S. Air Force recognized the need for reducing<br />
the development time for special aircraft equipment. As a<br />
result, the Air Force commissioned the Massachusetts Institute of<br />
Technology to develop numerically controlled (NC) machines that<br />
were programmable. A workable demonstration of NC machines<br />
was made in 1952; this began a new era for manufacturing. As the<br />
speed of an aircraft increased, the cost of manufacturing also increased<br />
because of stricter technical requirements. This higher cost<br />
provided a stimulus for the further development of NC technology,<br />
which promised to reduce errors in design before the prototype<br />
stage.<br />
The early 1960’s saw the development of mainframe computers.<br />
Many industries valued computing technology for its speed and for<br />
its accuracy in lengthy and tedious numerical operations in design,<br />
manufacturing, and other business functional areas. Patrick<br />
Hanratty, working for General Motors Research Laboratory, saw<br />
other potential applications and developed graphics programs for<br />
use on mainframe computers. The use of graphics in software aided<br />
the development of CAD/CAM, allowing visual representations of<br />
models to be presented on computer screens and printers.<br />
The 1970’s saw an important development in computer hardware,<br />
namely the development and growth of personal computers<br />
(PCs). Personal computers became smaller as a result of the development<br />
of integrated circuits. Jack St. Clair Kilby, working for Texas<br />
Instruments, first conceived of the integrated circuit; later, Robert<br />
Noyce, working for Intel Corporation, developed an improved process<br />
of manufacturing integrated circuits on microchips. Personal<br />
computers using these microchips offered both speed and accuracy<br />
at costs much lower than those of mainframe computers.<br />
Five companies offered integrated commercial computer-aided<br />
design and computer-aided manufacturing systems by the first half<br />
of 1973. Integration meant that both design and manufacturing<br />
were contained in one system. Of these five companies—Applicon,<br />
Computervision, Gerber Scientific, Manufacturing and Consulting<br />
Services (MCS), and United Computing—four offered turnkey systems<br />
exclusively. Turnkey systems provide design, development,<br />
training, and implementation for each customer (company) based<br />
on the contractual agreement; they are meant to be used as delivered,<br />
with no need for the purchaser to make significant adjustments<br />
or perform programming.<br />
The 1980’s saw a proliferation of mini- and microcomputers with<br />
a variety of platforms (processors) with increased speed and better<br />
graphical resolution. This made the widespread development of<br />
computer-aided design and computer-aided manufacturing possible<br />
and practical. Major corporations spent large research and development<br />
budgets developing CAD/CAM systems that would<br />
automate manual drafting and machine tool movements. Don Halliday,<br />
working for Truesports Inc., provided an early example of the<br />
benefits of CAD/CAM. He created the Made-in-America car in only<br />
four months by using CAD and project management software. In<br />
the late 1980’s, Fred Borsini, the president of Leap Technologies in
Michigan, brought various products to market in record time through<br />
the use of CAD/CAM.<br />
In the early 1980’s, much of the CAD/CAM industry consisted of<br />
software companies. The cost for a relatively slow interactive system<br />
in 1980 was close to $100,000. The late 1980’s saw the demise of<br />
minicomputer-based systems in favor of Unix work stations and<br />
PCs based on 386 and 486 microchips produced by Intel. By the time<br />
of the International Manufacturing Technology show in September,<br />
1992, the industry could show numerous CAD/CAM innovations<br />
including tools, CAD/CAM models to evaluate manufacturability<br />
in early design phases, and systems that allowed use of the same<br />
data for a full range of manufacturing functions.<br />
Impact<br />
In 1990, CAD/CAM hardware sales by U.S. vendors reached<br />
$2.68 billion. In software alone, $1.42 billion worth of CAD/CAM<br />
products and systems were sold worldwide by U.S. vendors, according<br />
to International Data Corporation figures for 1990. CAD/<br />
CAM systems were in widespread use throughout the industrial<br />
world. Development lagged in advanced software applications,<br />
particularly in image processing, and in the communications software<br />
and hardware that ties processes together.<br />
A reevaluation of CAD/CAM systems was being driven by the<br />
industry trend toward increased functionality of computer-driven<br />
numerically controlled machines. Numerical control (NC) software<br />
enables users to graphically define the geometry of the parts in a<br />
product, develop paths that machine tools will follow, and exchange<br />
data among machines on the shop floor. In 1991, NC configuration<br />
software represented 86 percent of total CAM sales. In 1992,<br />
the market shares of the five largest companies in the CAD/CAM<br />
market were 29 percent for International Business Machines, 17 percent<br />
for Intergraph, 11 percent for Computervision, 9 percent for<br />
Hewlett-Packard, and 6 percent for Mentor Graphics.<br />
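The NC capability described above, deriving the paths a machine tool will follow from part geometry, can be sketched as follows. The G00/G01 move codes are standard NC conventions, but the function itself is a hypothetical illustration:

```python
def toolpath(outline):
    """Turn a closed part outline, given as (x, y) points, into NC-style moves."""
    x0, y0 = outline[0]
    moves = [f"G00 X{x0:.3f} Y{y0:.3f}"]          # rapid move to the start point
    for x, y in list(outline[1:]) + [outline[0]]:  # cut along each edge, then close
        moves.append(f"G01 X{x:.3f} Y{y:.3f}")
    return moves

# A 10 x 5 rectangular pocket outline:
for line in toolpath([(0, 0), (10, 0), (10, 5), (0, 5)]):
    print(line)
```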
General Motors formed a joint venture with Ford and Chrysler to<br />
develop a common computer language in order to make the next<br />
generation of CAD/CAM systems easier to use. The venture was<br />
aimed particularly at problems that posed barriers to speeding up
the design of new automobiles. The three car companies all had sophisticated<br />
computer systems that allowed engineers to design<br />
parts on computers and then electronically transmit specifications<br />
to tools that make parts or dies.<br />
CAD/CAM technology was expected to advance on many fronts.<br />
As of the early 1990’s, different CAD/CAM vendors had developed<br />
systems that were often incompatible with one another, making it<br />
difficult to transfer data from one system to another. Large corporations,<br />
such as the major automakers, developed their own interfaces<br />
and network capabilities to allow different systems to communicate.<br />
Major users of CAD/CAM saw consolidation in the industry<br />
through the establishment of standards as being in their interests.<br />
Resellers of CAD/CAM products also attempted to redefine<br />
their markets. These vendors provide technical support and service<br />
to users. The sale of CAD/CAM products and systems offered substantial<br />
opportunities, since demand remained strong. Resellers<br />
worked most effectively with small and medium-sized companies,<br />
which often were neglected by the primary sellers of CAD/CAM<br />
equipment because they did not generate a large volume of business.<br />
Some projections held that by 1995 half of all CAD/CAM systems<br />
would be sold through resellers, at a cost of $10,000 or less for<br />
each system. The CAD/CAM market thus was in the process of dividing<br />
into two markets: large customers (such as aerospace firms<br />
and automobile manufacturers) that would be served by primary<br />
vendors, and small and medium-sized customers that would be serviced<br />
by resellers.<br />
CAD will find future applications in marketing, the construction<br />
industry, production planning, and large-scale projects such as shipbuilding<br />
and aerospace. Other likely CAD markets include hospitals,<br />
the apparel industry, colleges and universities, food product<br />
manufacturers, and equipment manufacturers. As the linkage between<br />
CAD and CAM is enhanced, systems will become more productive.<br />
The geometrical data from CAD will be put to greater use<br />
by CAM systems.<br />
CAD/CAM already had proved that it could make a big difference<br />
in productivity and quality. Customer orders could be changed<br />
much faster and more accurately than in the past, when a change<br />
could require a manual redrafting of a design. Computers could do
automatically in minutes what once took hours manually. CAD/<br />
CAM saved time by reducing, and in some cases eliminating, human<br />
error. Many flexible manufacturing systems (FMS) had machining<br />
centers equipped with sensing probes to check the accuracy<br />
of the machining process. These self-checks can be made part of numerical<br />
control (NC) programs. With the technology of the early<br />
1990’s, some experts estimated that CAD/CAM systems were in<br />
many cases twice as productive as the systems they replaced; in the<br />
long run, productivity is likely to improve even more, perhaps up to<br />
three times that of older systems or even higher. As costs for CAD/<br />
CAM systems concurrently fall, the investment in a system will be<br />
recovered more quickly. Some analysts estimated that by the mid-<br />
1990’s, the recovery time for an average system would be about<br />
three years.<br />
Another frontier in the development of CAD/CAM systems is<br />
expert (or knowledge-based) systems, which combine data with a<br />
human expert’s knowledge, expressed in the form of rules that the<br />
computer follows. Such a system will analyze data in a manner<br />
mimicking intelligence. For example, a 3-D model might be created<br />
from standard 2-D drawings. Expert systems will likely play a<br />
pivotal role in CAM applications. For example, an expert system<br />
could determine the best sequence of machining operations to produce<br />
a component.<br />
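The machining example above can be sketched as a toy rule-based system, in which an expert's knowledge is expressed as rules the computer follows. The features and rules below are invented purely for illustration:

```python
# Each rule pairs a part feature with the operation an expert would schedule,
# listed in the expert's preferred order of execution.
RULES = [
    ("rough_stock", "face mill"),
    ("has_hole", "drill"),
    ("has_thread", "tap"),
    ("needs_finish", "grind"),
]

def machining_sequence(features):
    """Fire the rules in order, producing a sequence of machining operations."""
    return [operation for feature, operation in RULES if feature in features]

print(machining_sequence({"has_hole", "has_thread", "rough_stock"}))
```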
Continuing improvements in hardware, especially increased<br />
speed, will benefit CAD/CAM systems. Software developments,<br />
however, may produce greater benefits. Wider use of CAD/CAM<br />
systems will depend on the cost savings from improvements in<br />
hardware and software as well as on the productivity of the systems<br />
and the quality of their product. The construction, apparel,<br />
automobile, and aerospace industries have already experienced<br />
increases in productivity, quality, and profitability through the use<br />
of CAD/CAM. A case in point is Boeing, which used CAD from<br />
start to finish in the design of the 777.<br />
See also Differential analyzer; Mark I calculator; Personal computer;<br />
SAINT; Virtual machine; Virtual reality.
Further Reading<br />
Groover, Mikell P., and Emory W. Zimmers, Jr. CAD/CAM: Computer-Aided<br />
Design <strong>and</strong> Manufacturing. Englewood Cliffs, N.J.:<br />
Prentice-Hall, 1984.<br />
Jurgen, Ronald K. Computers and Manufacturing Productivity. New<br />
York: Institute of Electrical <strong>and</strong> Electronics Engineers, 1987.<br />
McMahon, Chris, and Jimmie Browne. CAD/CAM: From Principles to<br />
Practice. Reading, Mass.: Addison-Wesley, 1993.<br />
_____. CAD/CAM: Principles, Practice, and Manufacturing Management.<br />
2d ed. Harlow, Engl<strong>and</strong>: Addison-Wesley, 1998.<br />
Medland, A. J., and Piers Burnett. CAD/CAM in Practice. New York:<br />
John Wiley & Sons, 1986.
Carbon dating<br />
The invention: A technique that measures the radioactive decay of<br />
carbon 14 in organic substances to determine the ages of artifacts<br />
as old as ten thousand years.<br />
The people behind the invention:<br />
Willard Frank Libby (1908-1980), an American chemist who won<br />
the 1960 Nobel Prize in Chemistry<br />
Charles Wesley Ferguson (1922-1986), a scientist who<br />
demonstrated that carbon 14 dates before 1500 b.c.e. needed to<br />
be corrected<br />
One in a Trillion<br />
Carbon dioxide in the earth’s atmosphere contains a mixture of<br />
three carbon isotopes (isotopes are atoms of the same element that<br />
contain different numbers of neutrons), which occur in the following<br />
percentages: about 99 percent carbon 12, about 1 percent carbon<br />
13, and approximately one atom in a trillion of radioactive carbon<br />
14. Plants absorb carbon dioxide from the atmosphere during photosynthesis,<br />
and then animals eat the plants, so all living plants and<br />
animals contain a small amount of radioactive carbon.<br />
When a plant or animal dies, its radioactivity slowly decreases as<br />
the radioactive carbon 14 decays. The time it takes for half of any radioactive<br />
substance to decay is known as its “half-life.” The half-life<br />
for carbon 14 is known to be about fifty-seven hundred years. The<br />
carbon 14 activity will drop to one-half after one half-life, one-fourth<br />
after two half-lives, one-eighth after three half-lives, and so<br />
forth. After ten or twenty half-lives, the activity becomes too low to<br />
be measurable. Coal and oil, which were formed from organic matter<br />
millions of years ago, have long since lost any carbon 14 activity.<br />
Wood samples from an Egyptian tomb or charcoal from a prehistoric<br />
fireplace a few thousand years ago, however, can be dated with<br />
good reliability from the leftover radioactivity.<br />
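The half-life arithmetic above follows the exponential decay law, which can be written out as a short sketch. The code uses the text's approximate fifty-seven-hundred-year half-life:

```python
import math

HALF_LIFE = 5_700.0  # approximate carbon-14 half-life in years, as in the text

def fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 activity left after a given age."""
    return 0.5 ** (age_years / HALF_LIFE)

def age_from_fraction(fraction: float) -> float:
    """Invert the decay law: age = half_life * log2(1 / fraction)."""
    return HALF_LIFE * math.log2(1.0 / fraction)

print(fraction_remaining(HALF_LIFE))      # 0.5 after one half-life
print(fraction_remaining(2 * HALF_LIFE))  # 0.25 after two half-lives
print(round(age_from_fraction(0.125)))    # 17100: three half-lives have passed
```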
In the 1940’s, the properties of radioactive elements were still<br />
being discovered and were just beginning to be used to solve problems.<br />
Scientists still did not know the half-life of carbon 14, and ar-
chaeologists still depended mainly on historical evidence to determine<br />
the ages of ancient objects.<br />
In early 1947, Willard Frank Libby started a crucial experiment in<br />
testing for radioactive carbon. He decided to test samples of methane<br />
gas from two different sources. One group of samples came<br />
from the sewage disposal plant at Baltimore, Maryland, which was<br />
rich in fresh organic matter. The other sample of methane came from<br />
an oil refinery, which should have contained only ancient carbon<br />
from fossils whose radioactivity should have completely decayed.<br />
The experimental results confirmed Libby’s suspicions: The methane<br />
from fresh sewage was radioactive, but the methane from oil<br />
was not. Evidently, radioactive carbon was present in fresh organic<br />
material, but it decays away eventually.<br />
Tree-Ring Dating<br />
In order to establish the validity of radiocarbon dating, Libby analyzed<br />
known samples of varying ages. These included tree-ring<br />
samples from the years 575 and 1075 and one redwood from 979<br />
b.c.e., as well as artifacts from Egyptian tombs going back to about<br />
3000 b.c.e. In 1949, he published an article in the journal Science that<br />
contained a graph comparing the historical ages and the measured<br />
radiocarbon ages of eleven objects. The results were accurate within<br />
10 percent, which meant that the general method was sound.<br />
The first archaeological object analyzed by carbon dating, obtained<br />
from the Metropolitan Museum of Art in New York, was a<br />
piece of cypress wood from the tomb of King Djoser of Egypt. Based<br />
on historical evidence, the age of this piece of wood was about forty-six<br />
hundred years. A small sample of carbon obtained from this<br />
wood was deposited on the inside of Libby’s radiation counter, giving<br />
a count rate that was about 40 percent lower than that of modern<br />
organic carbon. The resulting age of the wood calculated from its residual<br />
radioactivity was about thirty-eight hundred years, a difference<br />
of eight hundred years. Considering that this was the first object<br />
to be analyzed, even such a rough agreement with the historic<br />
age was considered to be encouraging.<br />
The validity of radiocarbon dating depends on an important assumption—namely,<br />
that the abundance of carbon 14 in nature has
Willard Frank Libby<br />
Born in 1908, Willard Frank Libby came from a family of<br />
farmers in Grand View, Colorado. They moved to Sebastopol,<br />
California, where Libby went through public school. He entered<br />
the University of California, Berkeley, in 1927, earning a<br />
bachelor of science degree in 1931 and a doctorate in 1933. He<br />
stayed on at Berkeley as an instructor of chemistry until he won<br />
the first of his three Guggenheim Fellowships in 1941. He<br />
moved to Princeton University to study, but World War II cut<br />
short his fellowship. Instead, he joined the Manhattan Project,<br />
helping design the atomic bomb at Columbia University’s Division<br />
of War Research.<br />
After the war Libby became a professor of chemistry at the<br />
University of Chicago, where he conducted his research on carbon-14<br />
dating. A leading expert in radiochemistry, he also investigated<br />
isotope tracers <strong>and</strong> the effects of fallout. However,<br />
his career saw as much public service as research. In 1954 President<br />
Dwight Eisenhower appointed him to the Atomic Energy<br />
Commission as its first chemist, <strong>and</strong> Libby directed the administration’s<br />
international Atoms for Peace program. He resigned<br />
in 1959 to take an appointment at the University of California,<br />
Los Angeles, as professor of chemistry and then in 1962 as director<br />
of the Institute of Geophysics and Planetary Physics, a<br />
position he held until he died in 1980.<br />
Libby received the Nobel Prize in Chemistry in 1960 for developing<br />
carbon-14 dating. Among his many other honors were<br />
the American Chemical Society’s Willard Gibbs Award in 1958,<br />
the Albert Einstein Medal in 1959, and the Day Medal of the<br />
Geological Society of America in 1961. He was a member of the<br />
Advisory Board of the Guggenheim Memorial Foundation, the<br />
Office of Civil and Defense Mobilization, the National Science<br />
Foundation’s General Commission on Science, and the<br />
Academic Institution and also a director of Douglas Aircraft<br />
Company.<br />
been constant for many thousands of years. If carbon 14 was less<br />
abundant at some point in history, organic samples from that era<br />
would have started with less radioactivity. When analyzed today,<br />
their reduced activity would make them appear to be older than<br />
they really are.
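This appears-older effect follows directly from the decay law. The sketch below is a minimal illustration (the function and variable names are ours; 5,730 years is the accepted carbon 14 half-life): a sample that actually began with less carbon 14 than the method assumes yields an inflated apparent age.

```python
import math

HALF_LIFE = 5730.0  # carbon-14 half-life in years

def apparent_age(measured_activity, assumed_initial_activity):
    """Radiocarbon age computed on the assumption that the sample
    started with assumed_initial_activity units of carbon 14."""
    return (HALF_LIFE / math.log(2)) * math.log(assumed_initial_activity / measured_activity)

# A sample truly one half-life old that began at the assumed level:
print(round(apparent_age(7.5, 15.0)))  # 5730
# If the sample actually started with only 90 percent of that level
# (6.75 units remain after one true half-life), the method overshoots:
print(round(apparent_age(6.75, 15.0)))  # roughly 6600 years, not 5730
```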
Charles Wesley Ferguson from the Tree-Ring Research Laboratory<br />
at the University of Arizona tackled this problem. He measured<br />
the age of bristlecone pine trees both by counting the rings <strong>and</strong> by<br />
using carbon 14 methods. He found that carbon 14 dates before<br />
1500 b.c.e. needed to be corrected. The results show that radiocarbon<br />
dates are younger than tree-ring counting dates by as much as several<br />
hundred years for the oldest samples. He knew that the number<br />
of tree rings had given him the correct age of the pines, because trees<br />
accumulate one ring of growth for every year of life. Apparently, the<br />
carbon 14 content in the atmosphere has not been constant. Fortunately,<br />
tree-ring counting gives reliable dates that can be used to<br />
correct radiocarbon measurements back to about 6000 b.c.e.<br />
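A calibration of the kind Ferguson's data made possible can be sketched as a lookup between radiocarbon ages and tree-ring (calendar) ages. The table below is invented for illustration; its direction reflects the bristlecone result that uncorrected radiocarbon ages run young for the oldest samples.

```python
# (radiocarbon age, tree-ring age) pairs; real calibration curves are
# built from thousands of dated rings -- these numbers are invented.
CALIBRATION = [(0, 0), (2000, 2050), (4000, 4300), (6000, 6700)]

def calibrate(radiocarbon_age):
    """Linearly interpolate the tree-ring age (taken as the true
    calendar age) corresponding to a measured radiocarbon age."""
    for (r0, t0), (r1, t1) in zip(CALIBRATION, CALIBRATION[1:]):
        if r0 <= radiocarbon_age <= r1:
            frac = (radiocarbon_age - r0) / (r1 - r0)
            return t0 + frac * (t1 - t0)
    raise ValueError("age outside the calibration range")

print(calibrate(5000))  # 5500.0 with this toy table
```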
Impact<br />
Some interesting samples were dated by Libby’s group. The<br />
Dead Sea Scrolls had been found in a cave by an Arab shepherd in<br />
1947, but some Bible scholars at first questioned whether they were<br />
genuine. The linen wrapping from the Book of Isaiah was tested for<br />
carbon 14, giving a date of 100 b.c.e., which helped to establish its<br />
authenticity. Human hair from an Egyptian tomb was determined<br />
to be nearly five thous<strong>and</strong> years old. Well-preserved s<strong>and</strong>als from a<br />
cave in eastern Oregon were determined to be ninety-three hundred<br />
years old. A charcoal sample from a prehistoric site in western<br />
South Dakota was found to be about seven thous<strong>and</strong> years old.<br />
The Shroud of Turin, located in Turin, Italy, has been a controversial<br />
object for many years. It is a linen cloth, more than four meters<br />
long, which shows the image of a man’s body, both front <strong>and</strong> back.<br />
Some people think it may have been the burial shroud of Jesus<br />
Christ after his crucifixion. A team of scientists in 1978 was permitted<br />
to study the shroud, using infrared photography, analysis of<br />
possible blood stains, microscopic examination of the linen fibers,<br />
<strong>and</strong> other methods. The results were ambiguous. A carbon 14 test<br />
was not permitted because it would have required cutting a piece<br />
about the size of a h<strong>and</strong>kerchief from the shroud.<br />
A new method of measuring carbon 14 was developed in the late<br />
1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike<br />
Libby’s method, it does not count the radioactivity of carbon. Instead,<br />
a mass spectrometer directly measures the ratio of carbon 14<br />
to ordinary carbon. The main advantage of this method is that the<br />
sample size needed for analysis is about a thous<strong>and</strong> times smaller<br />
than before. The archbishop of Turin permitted three laboratories<br />
with the appropriate AMS apparatus to test the shroud material.<br />
The results agreed that the material was from the fourteenth century,<br />
not from the time of Christ. The figure on the shroud may be a<br />
watercolor painting on linen.<br />
Since Libby’s pioneering experiments in the late 1940’s, carbon<br />
14 dating has established itself as a reliable dating technique for archaeologists<br />
<strong>and</strong> cultural historians. Further improvements are expected<br />
to increase precision, to make it possible to use smaller samples,<br />
<strong>and</strong> to extend the effective time range of the method back to<br />
fifty thous<strong>and</strong> years or earlier.<br />
See also Atomic clock; Geiger counter; Richter scale.<br />
Further Reading<br />
Goldberg, Paul, Vance T. Holliday, <strong>and</strong> C. Reid Ferring. Earth Sciences<br />
<strong>and</strong> Archaeology. New York: Kluwer Academic Plenum, 2001.<br />
Libby, Willard Frank. “Radiocarbon Dating” [Nobel lecture]. In<br />
Chemistry, 1942-1962. River Edge, N.J.: World Scientific, 1999.<br />
Lowe, John J. Radiocarbon Dating: Recent Applications <strong>and</strong> Future Potential.<br />
New York: John Wiley <strong>and</strong> Sons, 1996.
Cassette recording<br />
The invention: Self-contained system making it possible to record<br />
<strong>and</strong> repeatedly play back sound without having to thread tape<br />
through a machine.<br />
The person behind the invention:<br />
Fritz Pfleumer, a German engineer whose work on audiotapes<br />
paved the way for audiocassette production<br />
Smaller Is Better<br />
The introduction of magnetic audio recording tape in 1929 was<br />
met with great enthusiasm, particularly in the entertainment industry,<br />
<strong>and</strong> specifically among radio broadcasters. Although somewhat<br />
practical methods for recording <strong>and</strong> storing sound for later playback<br />
had been around for some time, audiotape was much easier to<br />
use, store, <strong>and</strong> edit, <strong>and</strong> much less expensive to produce.<br />
It was Fritz Pfleumer, a German engineer, who in 1929 filed the<br />
first audiotape patent. His detailed specifications indicated that<br />
tape could be made by bonding a thin coating of oxide to strips of either<br />
paper or film. Pfleumer also suggested that audiotape could be<br />
attached to filmstrips to provide higher-quality sound than was<br />
available with the film sound technologies in use at that time. In<br />
1935, the German electronics firm AEG produced a reliable prototype<br />
of a record-playback machine based on Pfleumer’s idea. By<br />
1947, the American company 3M had refined the concept to the<br />
point where it was able to produce a high-quality tape using a plastic-based<br />
backing <strong>and</strong> red oxide. The tape recorded <strong>and</strong> reproduced<br />
sound with a high degree of clarity <strong>and</strong> dynamic range <strong>and</strong> would<br />
soon become the st<strong>and</strong>ard in the industry.<br />
Still, the tape was sold <strong>and</strong> used in a somewhat inconvenient<br />
open-reel format. The user had to thread it through a machine <strong>and</strong><br />
onto a take-up reel. This process was somewhat cumbersome <strong>and</strong><br />
complicated for the layperson. For many years, sound-recording<br />
technology remained a tool mostly for professionals.<br />
In 1963, the first audiocassette was introduced by the Netherlands-based<br />
Philips NV company. This device could be inserted into<br />
a machine without threading. Rewind <strong>and</strong> fast-forward were faster,<br />
<strong>and</strong> it made no difference where the tape was stopped prior to the<br />
ejection of the cassette. By contrast, open-reel audiotape required<br />
that the tape be wound fully onto one or the other of the two reels<br />
before it could be taken off the machine.<br />
Technical advances allowed the cassette tape to be much narrower<br />
than the tape used in open reels <strong>and</strong> also allowed the tape<br />
speed to be reduced without sacrificing sound quality. Thus, the<br />
cassette was easier to carry around, <strong>and</strong> more sound could be recorded<br />
on a cassette tape. In addition, the enclosed cassette decreased<br />
wear <strong>and</strong> tear on the tape <strong>and</strong> protected it from contamination.<br />
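The speed-versus-playing-time trade-off described above is simple arithmetic. In the sketch below, 4.76 cm/s is the published compact-cassette standard speed and 19.05 cm/s a common consumer open-reel speed; the tape length is roughly one side of a C90 cassette (these specific figures are standard specifications, not numbers from this article).

```python
def playing_time_minutes(tape_length_m, speed_cm_per_s):
    """Playing time for one pass of the tape at a given transport speed."""
    return tape_length_m * 100 / speed_cm_per_s / 60

CASSETTE_SPEED = 4.76    # cm/s, the compact-cassette standard (1-7/8 ips)
OPEN_REEL_SPEED = 19.05  # cm/s, a common consumer open-reel speed (7.5 ips)

tape = 129.0  # metres, roughly one side of a C90 cassette
print(round(playing_time_minutes(tape, CASSETTE_SPEED)))   # 45 minutes
print(round(playing_time_minutes(tape, OPEN_REEL_SPEED)))  # 11 minutes
```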
Creating a Market<br />
One of the most popular uses for audiocassettes was to record<br />
music from radios <strong>and</strong> other audio sources for later playback. During<br />
the 1970’s, many radio stations developed “all music” formats<br />
in which entire albums were often played without interruption.<br />
That gave listeners an opportunity to record the music for later<br />
playback. At first, the music recording industry complained about<br />
this practice, charging that unauthorized recording of music from<br />
the radio was a violation of copyright laws. Eventually, the issue<br />
died down as the same companies began to recognize this new, untapped<br />
market for recorded music on cassette.<br />
Audiocassettes, all based on the original Philips design, were being<br />
manufactured by more than sixty companies within only a few<br />
years of their introduction. In addition, spin-offs of that design were<br />
being used in many specialized applications, including dictation,<br />
storage of computer information, <strong>and</strong> surveillance. The emergence<br />
of videotape resulted in a number of formats for recording <strong>and</strong><br />
playing back video based on the same principle. Although each is<br />
characterized by different widths of tape, each uses the same technique<br />
for tape storage <strong>and</strong> transport.<br />
The cassette has remained a popular means of storing <strong>and</strong> retrieving<br />
information on magnetic tape for more than a quarter of a<br />
century. During the early 1990’s, digital technologies such as audio<br />
CDs (compact discs) <strong>and</strong> the more advanced CD-ROM (compact
discs that reproduce sound, text, <strong>and</strong> images via computer) were beginning<br />
to store information in revolutionary new ways. With the<br />
development of this increasingly sophisticated technology, need for<br />
the audiocassette, once the most versatile, reliable, portable, <strong>and</strong><br />
economical means of recording, storing, <strong>and</strong> playing back sound,<br />
became more limited.<br />
Consequences<br />
The cassette represented a new level of convenience for the audiophile,<br />
resulting in a significant increase in the use of recording<br />
technology in all walks of life. Even small children could operate<br />
cassette recorders <strong>and</strong> players, which led to their use in schools for a<br />
variety of instructional tasks <strong>and</strong> in the home for entertainment. The<br />
recording industry realized that audiotape cassettes would allow<br />
consumers to listen to recorded music in places where record players<br />
were impractical: in automobiles, at the beach, even while camping.<br />
The industry also saw the need for widespread availability of<br />
music <strong>and</strong> information on cassette tape. It soon began distributing<br />
albums on audiocassette in addition to the long-play vinyl discs,<br />
<strong>and</strong> recording sales increased substantially. This new technology<br />
put recorded music into automobiles for the first time, again resulting<br />
in a surge in sales for recorded music. Eventually, information,<br />
including language instruction <strong>and</strong> books-on-tape, became popular<br />
commuter fare.<br />
With the invention of the microchip, audiotape players became<br />
available in smaller <strong>and</strong> smaller sizes, making them truly portable.<br />
Audiocassettes underwent another explosion in popularity during<br />
the early 1980’s, when the Sony Corporation introduced the<br />
Walkman, an extremely compact, almost weightless cassette player<br />
that could be attached to clothing <strong>and</strong> used with lightweight earphones<br />
virtually anywhere. At the same time, cassettes were suddenly<br />
being used with microcomputers for backing up magnetic<br />
data files.<br />
Home video soon exploded onto the scene, bringing with it new<br />
applications for cassettes. As had happened with audiotape, video<br />
camera-recorder units, called “camcorders,” were miniaturized to<br />
the point where 8-millimeter videocassettes capable of recording up
to 90 minutes of live action <strong>and</strong> sound were widely available. These<br />
cassettes closely resembled the audiocassette first introduced in<br />
1963.<br />
See also Compact disc; Dolby noise reduction; Electronic synthesizer;<br />
FM radio; Transistor radio; Walkman cassette player.<br />
Further Reading<br />
Miller, Christopher. “The One Hundred Greatest <strong>Inventions</strong>: Audio<br />
<strong>and</strong> Video.” Popular Science 254, no. 4 (April, 1999).<br />
Praag, Phil van. Evolution of the Audio Recorder. Waukesha, Wis.: EC<br />
Designs, 1997.<br />
Stark, Craig. “Thirty Five Years of Tape Recording.” Stereo Review 58<br />
(September, 1993).
CAT scanner<br />
The invention: A technique that collects X-ray data from solid,<br />
opaque masses such as human bodies <strong>and</strong> uses a computer to<br />
construct a three-dimensional image.<br />
The people behind the invention:<br />
Godfrey Newbold Hounsfield (1919- ), an English<br />
electronics engineer who shared the 1979 Nobel Prize in<br />
Physiology or Medicine<br />
Allan M. Cormack (1924-1998), a South African-born American<br />
physicist who shared the 1979 Nobel Prize in Physiology or<br />
Medicine<br />
James Ambrose, an English radiologist<br />
A Significant Merger<br />
Computerized axial tomography (CAT) is a technique that collects<br />
X-ray data from an opaque, solid mass such as a human body<br />
<strong>and</strong> uses a sophisticated computer to assemble those data into a<br />
three-dimensional image. This sophisticated merger of separate<br />
technologies led to another name for CAT, computer-assisted tomography<br />
(it came to be called computed tomography, or CT). CAT<br />
is a technique of medical radiology, an area of medicine that began<br />
after the German physicist Wilhelm Conrad Röntgen’s 1895 discovery<br />
of the high-energy electromagnetic radiations he named “X<br />
rays.” Röntgen <strong>and</strong> others soon produced X-ray images of parts of<br />
the human body, <strong>and</strong> physicians were quick to learn that these images<br />
were valuable diagnostic aids.<br />
In the late 1950’s <strong>and</strong> early 1960’s, Allan M. Cormack, a physicist<br />
at Tufts University in Massachusetts, pioneered a mathematical<br />
method for obtaining detailed X-ray absorption patterns in opaque<br />
samples meant to model biological samples. His studies used narrow<br />
X-ray beams that were passed through samples at many different angles.<br />
Because the technique probed test samples from many different<br />
points of reference, it became possible—by using the proper mathematics—to<br />
reconstruct the interior structure of a thin slice of the object<br />
being studied.
Cormack published his data but received almost no recognition<br />
because computers that could analyze the data in an effective fashion<br />
had not yet been developed. Nevertheless, X-ray tomography—<br />
the process of using X-rays to produce detailed images of thin<br />
sections of solid objects—had been born. It remained for Godfrey<br />
Newbold Hounsfield of Engl<strong>and</strong>’s Electrical <strong>and</strong> Musical Instruments<br />
(EMI) Limited (independently, <strong>and</strong> reportedly with no<br />
knowledge of Cormack’s work) to design the first practical CAT<br />
scanner.<br />
A Series of Thin Slices<br />
Hounsfield, like Cormack, realized that X-ray tomography was<br />
the most practical approach to developing a medical body imager. It<br />
could be used to divide any three-dimensional object into a series of<br />
thin slices that could be reconstructed into images by using appropriate<br />
computers. Hounsfield developed another mathematical approach<br />
to the method. He estimated that the technique would make<br />
possible the very accurate reconstruction of images of thin body sections<br />
with a sensitivity well above that of the X-ray methodology<br />
then in use. Moreover, he proposed that his method would enable<br />
Medical technicians studying CAT scan results. (PhotoDisc)
Godfrey Newbold Hounsfield<br />
On his family farm outside Newark, Nottinghamshire, Engl<strong>and</strong>,<br />
Godfrey Newbold Hounsfield (born 1919), the youngest<br />
of five children, was usually left to his own devices. The farm,<br />
he later wrote, offered an infinite variety of diversions, <strong>and</strong> his<br />
favorites were the many mechanical <strong>and</strong> electrical gadgets. By<br />
his teen years, he was making his own gadgets, such as an electrical<br />
recording machine, <strong>and</strong> experimenting with homemade<br />
gliders <strong>and</strong> water-propelled rockets. All these childhood projects<br />
taught him the fundamentals of practical reasoning.<br />
During World War II he joined the Royal Air Force, where<br />
his talent with gadgets got him a position as an instructor at the<br />
school for radio mechanics. There, on his own, he built his own oscilloscope<br />
<strong>and</strong> demonstration equipment. This initiative caught<br />
the eye of a high-ranking officer, who after the war arranged a<br />
scholarship so that Hounsfield could attend the Faraday Electrical<br />
Engineering College in London. Upon graduating in 1951,<br />
he took a research position with Electrical <strong>and</strong> Musical Instruments,<br />
Limited (EMI). His first assignments involved radar <strong>and</strong><br />
guided weapons, but he also developed an interest in computers<br />
<strong>and</strong> in 1958 led the design team that put together Engl<strong>and</strong>’s<br />
first all-transistor computer, the EMIDEC 1100. This experience,<br />
in turn, prepared him to follow through on his idea for computed<br />
tomography, which came to him in 1967.<br />
EMI released its first CT scanner in 1971, <strong>and</strong> it so impressed<br />
the medical world that in 1979 Hounsfield <strong>and</strong> Allan M. Cormack<br />
shared the Nobel Prize in Physiology or Medicine for the<br />
invention. Hounsfield, who continued to work on improved<br />
computed tomography <strong>and</strong> other diagnostic imaging techniques,<br />
was knighted in 1981.<br />
researchers <strong>and</strong> physicians to distinguish between normal <strong>and</strong> diseased<br />
tissue. Hounsfield was correct about that.<br />
The prototype instrument that Hounsfield developed was quite<br />
slow, requiring nine days to scan an object. Soon, he modified the<br />
scanner so that its use took only nine hours, <strong>and</strong> he obtained successful<br />
tomograms of preserved human brains <strong>and</strong> the fresh brains<br />
of cattle. The further development of the CAT scanner then proceeded<br />
quickly, yielding an instrument that required four <strong>and</strong> one-half<br />
minutes to gather tomographic data <strong>and</strong> twenty minutes to<br />
produce the tomographic image.<br />
In late 1971, the first clinical CAT scanner was installed at Atkinson<br />
Morley’s Hospital in Wimbledon, Engl<strong>and</strong>. By early 1972,<br />
the first patient, a woman with a suspected brain tumor, had been<br />
examined, <strong>and</strong> the resultant tomogram identified a dark, circular<br />
cyst in her brain. Additional data collection from other patients<br />
soon validated the technique. Hounsfield <strong>and</strong> EMI patented the<br />
CAT scanner in 1972, <strong>and</strong> the findings were reported at that year’s<br />
annual meeting of the British Institute of Radiology.<br />
Hounsfield published a detailed description of the instrument in<br />
1973. Hounsfield’s clinical collaborator, James Ambrose, published<br />
on the clinical aspects of the technique. Neurologists all around the<br />
world were ecstatic about the new tool that allowed them to locate<br />
tissue abnormalities with great precision.<br />
The CAT scanner consisted of an X-ray generator, a scanner unit<br />
composed of an X-ray tube <strong>and</strong> a detector in a circular chamber<br />
about which they could be rotated, a computer that could process<br />
all the data obtained, <strong>and</strong> a cathode-ray tube on which tomograms<br />
were viewed. To produce tomograms, the patient was placed on a<br />
couch, head inside the scanner chamber, <strong>and</strong> the emitter-detector<br />
was rotated 1 degree at a time. At each position, 160 readings were<br />
taken, converted to electrical signals, <strong>and</strong> fed into the computer. In<br />
the 180 degrees traversed, 28,800 readings were taken <strong>and</strong> processed.<br />
The computer then converted the data into a tomogram (a<br />
cross-sectional representation of the brain that shows the differences<br />
in tissue density). A Polaroid picture of the tomogram was<br />
then taken <strong>and</strong> interpreted by the physician in charge.<br />
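The data-collection figures in this description multiply out as stated; a minimal tally (variable names are ours):

```python
# One stop per degree across the 180-degree traverse; at each stop the
# detector takes 160 readings that are fed to the computer.
readings_per_position = 160
positions = list(range(180))  # 0, 1, ..., 179 degrees

total_readings = readings_per_position * len(positions)
print(total_readings)  # 28800, the figure reported for the first scanner
```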
Consequences<br />
Many neurologists agree that CAT is the most important method<br />
developed in the twentieth century to facilitate diagnosis of disorders<br />
of the brain. Even the first scanners could distinguish between<br />
brain tumors <strong>and</strong> blood clots <strong>and</strong> help physicians to diagnose a variety<br />
of brain-related birth defects. In addition, the scanners are believed<br />
to have saved many lives by allowing physicians to avoid
the dangerous exploratory brain surgery once required in many<br />
cases <strong>and</strong> by replacing more dangerous techniques, such as pneumoencephalography,<br />
which required a physician to puncture the<br />
head for diagnostic purposes.<br />
By 1975, improvements, including quicker reaction time <strong>and</strong><br />
more complex emitter-detector systems, made it possible for EMI to<br />
introduce full-body CAT scanners to the world market. Then it became<br />
possible to examine other parts of the body—including the<br />
lungs, the heart, <strong>and</strong> the abdominal organs—for cardiovascular<br />
problems, tumors, <strong>and</strong> other structural health disorders. The technique<br />
became so ubiquitous that many departments of radiology<br />
changed their names to departments of medical imaging.<br />
The use of CAT scanners has not been problem-free. Part of<br />
the reason for this is the high cost of the devices—ranging from<br />
about $300,000 for early models to $1 million for modern instruments—<strong>and</strong><br />
resultant claims by consumer advocacy groups that<br />
the scanners are unnecessarily expensive toys for physicians.<br />
Still, CAT scanners have become important everyday diagnostic<br />
tools in many areas of medicine. Furthermore, continuation of the<br />
efforts of Hounsfield <strong>and</strong> others has led to more improvements of<br />
CAT scanners <strong>and</strong> to the use of nonradiologic nuclear magnetic resonance<br />
imaging in such diagnoses.<br />
See also Amniocentesis; Electrocardiogram; Electroencephalogram;<br />
Mammography; Nuclear magnetic resonance; Pap test; Ultrasound;<br />
X-ray image intensifier.<br />
Further Reading<br />
Gambarelli, J. Computerized Axial Tomography: An Anatomic Atlas of<br />
Serial Sections of the Human Body: Anatomy—Radiology—Scanner.<br />
New York: Springer Verlag, 1977.<br />
Raju, Tonse N. K. “The Nobel Chronicles.” Lancet 354, no. 9190 (November<br />
6, 1999).<br />
Thomas, Robert McG., Jr. “Allan Cormack, Seventy Four, Nobelist<br />
Who Helped Invent CAT Scan.” New York Times (May 9, 1998).
Cell phone<br />
The invention: Mobile telephone system controlled by computers<br />
to use a region’s radio frequencies, or channels, repeatedly,<br />
thereby accommodating large numbers of users.<br />
The people behind the invention:<br />
William Oliver Baker (1915- ), the president of Bell<br />
Laboratories<br />
Richard H. Frenkiel, the head of the mobile systems<br />
engineering department at Bell<br />
The First Radio Telephones<br />
The first recorded attempt to use radio technology to provide direct<br />
access to a telephone system took place in 1920. It was not until<br />
1946, however, that Bell Telephone established the first such commercial<br />
system in St. Louis. The system had a number of disadvantages;<br />
users had to contact an operator who did the dialing <strong>and</strong> the<br />
connecting, <strong>and</strong> the use of a single radio frequency prevented simultaneous<br />
talking <strong>and</strong> listening. In 1949, a system was developed<br />
that used two radio frequencies (a “duplex pair”), permitting both<br />
the mobile unit <strong>and</strong> the base station to transmit <strong>and</strong> receive simultaneously<br />
<strong>and</strong> making a more normal sort of telephone conversation<br />
possible. This type of service, known as Mobile Telephone Service<br />
(MTS), was the norm in the field for many years.<br />
The history of MTS is one of continuously increasing business usage.<br />
The development of the transistor made possible the design <strong>and</strong><br />
manufacture of reasonably light, compact, <strong>and</strong> reliable equipment,<br />
but the expansion of MTS was slowed by the limited number of radio<br />
frequencies; there is nowhere near enough space on the radio spectrum<br />
for each user to have a separate frequency. In New York City, for<br />
example, New York Telephone Company was limited to just twelve<br />
channels for its more than seven hundred mobile subscribers, meaning<br />
that only twelve conversations could be carried on at once. In addition,<br />
because of possible interference, none of those channels could<br />
be reused in nearby cities; only fifty-four channels were available<br />
nationwide. By the late 1970’s, most of the systems in major cities were<br />
considered full, <strong>and</strong> new subscribers were placed on a waiting list;<br />
some people had been waiting for as long as ten years to become<br />
subscribers. Mobile phone users commonly experienced long delays in<br />
getting poor-quality channels.<br />
A dominant trend in cell phone design is smaller <strong>and</strong> lighter units. (PhotoDisc)<br />
The Cellular Breakthrough<br />
In 1968, the Federal Communications Commission (FCC) requested<br />
proposals for the creation of high-capacity, spectrum-efficient mobile<br />
systems. Bell Telephone had already been lobbying for the creation<br />
of such a system for some years. In the early 1970’s, both Motorola <strong>and</strong><br />
Bell Telephone proposed the use of cellular technology to solve the<br />
problems posed by mobile telephone service. Cellular systems involve<br />
the use of a computer to make it possible to use an area’s frequencies,<br />
or channels, repeatedly, allowing such systems to accommodate many<br />
more users.<br />
A two-thous<strong>and</strong>-customer, 2100-square-mile cellular telephone<br />
system called the Advanced Mobile Phone Service, built by the<br />
AMPS Corporation, an AT&T subsidiary, became operational in<br />
Chicago in 1978. The Illinois Bell Telephone Company was allowed<br />
to make a limited commercial offering <strong>and</strong> obtained about fourteen<br />
hundred subscribers. American Radio Telephone Service was allowed<br />
to conduct a similar test in the Baltimore/Washington area.<br />
These first systems showed the technological feasibility <strong>and</strong> affordability<br />
of cellular service.<br />
In 1979, Bell Labs of Murray Hill, New Jersey, received a patent
William Oliver Baker<br />
For great discoveries <strong>and</strong> inventions to be possible in the<br />
world of high technology, inventors need great facilities—laboratories<br />
<strong>and</strong> workshops—with brilliant colleagues. These must<br />
be managed by imaginative administrators.<br />
One of the best was William Oliver Baker (b. 1915), who rose<br />
to become president of the legendary Bell Labs. Baker started out<br />
as one of the most promising scientists of his generation. After<br />
earning a Ph.D. in chemistry at Princeton University, he joined<br />
the research section at Bell Telephone Laboratories in 1939. He<br />
studied the physics <strong>and</strong> chemistry of polymers, especially for use<br />
in electronics <strong>and</strong> telecommunications. During his research career<br />
he helped develop synthetic rubber <strong>and</strong> radar, found uses<br />
for polymers in communications <strong>and</strong> power cables, <strong>and</strong> participated<br />
in the discovery of microgels. In 1954 he ranked among the<br />
top-ten scientists in American industry <strong>and</strong> was asked to chair a National<br />
Research Council committee studying heat shields for<br />
missiles <strong>and</strong> satellites.<br />
Administration suited him. The following year he took over<br />
as leader of research at Bell Labs <strong>and</strong> served as president from<br />
1973 until 1979. Under his direction, basic discoveries <strong>and</strong> inventions<br />
poured out of the lab that later transformed the way<br />
people live <strong>and</strong> work: satellite communications, principles for<br />
programming high-speed computers, the technology for modern<br />
electronic communications, the superconducting solenoid,<br />
the maser, <strong>and</strong> the laser. His scientists won Nobel Prizes <strong>and</strong> legions<br />
of other honors, as did Baker himself, who received dozens<br />
of medals, awards, <strong>and</strong> honorary degrees. Moreover, he<br />
was an original member of the President’s Science Advisory<br />
Board, became the first chair of the National Science Information<br />
Council, <strong>and</strong> served on the National Science Board. His<br />
influence on American science <strong>and</strong> technology was deep <strong>and</strong><br />
lasting.<br />
for such a system. The inventor was Richard H. Frenkiel, head of<br />
the mobile systems engineering department under the leadership<br />
of Labs president William Baker. The patented method divides a<br />
city into small coverage areas called “cells,” each served by lowpower<br />
transmitter-receivers. When a vehicle leaves the coverage
of one cell, calls are switched to the antenna <strong>and</strong> channels of an adjacent<br />
cell; a conversation underway is automatically transferred<br />
<strong>and</strong> continues without interruption. A channel used in one cell can<br />
be reused a few cells away for a different conversation. In this way,<br />
a few hundred channels can serve hundreds of thous<strong>and</strong>s of users.<br />
Computers control the call-transfer process, effectively reducing<br />
the amount of radio spectrum required. Cellular systems thus actually<br />
use radio frequencies to transmit conversations, but because<br />
the equipment is so telephone-like, “cellular telephone” (or “cell<br />
phone”) became the accepted term for the new technology.<br />
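The capacity arithmetic behind frequency reuse can be sketched with the standard cluster model. The figures below (400 channels, a seven-cell reuse cluster, a 100-cell city) are illustrative assumptions, not numbers from this article.

```python
def simultaneous_calls(total_channels, cluster_size, num_cells):
    """Each cluster of cells splits the channel set among its members;
    because every cluster repeats the same assignment, capacity grows
    with the number of cells rather than staying fixed citywide."""
    channels_per_cell = total_channels // cluster_size
    return channels_per_cell * num_cells

# Without reuse, 400 channels would mean 400 conversations citywide.
# With a 7-cell reuse pattern repeated over 100 cells:
print(simultaneous_calls(400, 7, 100))  # 5700 simultaneous conversations
```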
Each AMPS cell station is connected by wire to a central switching<br />
office, which determines when a mobile phone should be transferred<br />
to another cell as the transmitter moves out of range during a<br />
conversation. It does this by monitoring the strength of signals received<br />
from the mobile unit by adjacent cells, “h<strong>and</strong>ing off” the call<br />
when a new cell receives a stronger signal; this change is imperceptible<br />
to the user.<br />
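The hand-off decision described above reduces to serving each call from whichever cell currently hears the mobile best. A minimal sketch (the cell names and signal levels are invented for illustration):

```python
def handoff(current_cell, signal_strengths):
    """Return the cell that should carry the call: switch only when an
    adjacent cell receives a stronger signal than the serving cell."""
    best = max(signal_strengths, key=signal_strengths.get)
    if signal_strengths[best] > signal_strengths[current_cell]:
        return best
    return current_cell

# Signal strengths (dBm) reported by the serving cell and its neighbors
# as the vehicle moves; less negative means a stronger signal:
readings = {"cell_a": -95, "cell_b": -78, "cell_c": -102}
print(handoff("cell_a", readings))  # cell_b now hears the mobile best
```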
Impact<br />
In 1982, the FCC began accepting applications for cellular system<br />
licenses in the thirty largest U.S. cities. By the end of 1984, there<br />
were about forty thous<strong>and</strong> cellular customers in nearly two dozen<br />
cities. Cellular telephone ownership boomed to 9 million by 1992.<br />
As cellular telephones became more common, they also became cheaper and more convenient to buy and to use. New systems developed in the 1990’s continued to make smaller, lighter, and cheaper cellular phones even more accessible. Since the cellular telephone was made possible by the marriage of communications and computers, advances in both these fields have continued to change the industry at a rapid rate.

Cellular phones have proven ideal for many people who need or want to keep in touch with others at all times. They also provide convenient emergency communication devices for travelers and field-workers. On the other hand, ownership of a cellular phone can also have its drawbacks; many users have found that they can never be out of touch—even when they would rather be.
See also Internet; Long-distance telephone; Rotary dial telephone;<br />
Telephone switching; Touch-tone telephone.<br />
Further Reading<br />
Carlo, George Louis, and Martin Schram. Cell Phones: Invisible Hazards in the Wireless Age. New York: Carroll and Graf, 2001.
“The Cellular Phone.” Newsweek 130, 24A (Winter 1997/1998).<br />
Oliphant, Malcolm W. “How Mobile Telephony Got Going.” IEEE<br />
Spectrum 36, no. 8 (August, 1999).<br />
Young, Peter. Person to Person: The International Impact of the Telephone.<br />
Cambridge: Granta Editions, 1991.
Cloning
The invention: Experimental technique for creating exact genetic duplicates of living organisms by transferring their DNA into unfertilized eggs.
The people behind the invention:<br />
Ian Wilmut, an embryologist with the Roslin Institute<br />
Keith H. S. Campbell, an experiment supervisor with the Roslin<br />
Institute<br />
J. McWhir, a researcher with the Roslin Institute<br />
W. A. Ritchie, a researcher with the Roslin Institute<br />
Making Copies<br />
On February 22, 1997, officials of the Roslin Institute, a biological research institution near Edinburgh, Scotland, held a press conference to announce startling news: They had succeeded in creating a clone—a biologically identical copy—from cells taken from an adult sheep. Although cloning had been performed previously with simpler organisms, the Roslin Institute experiment marked the first time that a large, complex mammal had been successfully cloned.

Cloning, or the production of genetically identical individuals, has long been a staple of science fiction and other popular literature. Clones do exist naturally, as in the example of identical twins. Scientists have long understood the process by which identical twins are created, and agricultural researchers have often dreamed of a method by which cheap identical copies of superior livestock could be created.

The discovery of the double helix structure of deoxyribonucleic acid (DNA), or the genetic code, by James Watson and Francis Crick in the 1950’s led to extensive research into cloning and genetic engineering. Using the discoveries of Watson and Crick, scientists were soon able to develop techniques to clone laboratory mice; however, the cloning of complex, valuable animals such as livestock proved to be hard going.
Early versions of livestock cloning were technical attempts at duplicating the natural process of fertilized egg splitting that leads to the birth of identical twins. Artificially inseminated eggs were removed, split, and then reinserted into surrogate mothers. This method proved to be overly costly for commercial purposes, a situation aggravated by a low success rate.

Ian Wilmut

Ian Wilmut was born in Hampton Lucy, not far from Warwick in central England, in 1944. He found his life’s calling in embryology—and especially animal genetic engineering—while he was studying at the University of Nottingham, where his mentor was G. Eric Lamming, a leading expert on reproduction. After receiving his undergraduate degree, he attended Darwin College, Cambridge University. He completed his doctorate in 1973 upon submitting a thesis about freezing boar sperm. This came after he produced a viable calf, named Frosty, from a frozen embryo, the first time anyone had done so.

Soon afterward he joined the Animal Breeding Research Station, which later became the Roslin Institute in Roslin, Scotland. He immersed himself in research, seldom working fewer than nine hours a day. During the 1980’s he experimented with the insertion of genes into sheep embryos but concluded that cloning would be less time-consuming and less prone to failure. Joined by Keith Campbell in 1990, he cloned two Welsh mountain sheep from differentiated embryo cells, a feat similar to those of other reproductive experimenters. However, Dolly, who was cloned from adult cells, shook the world when her birth was announced in 1997. That same year Wilmut and Campbell produced another cloned sheep, Polly. Cloned from fetal skin cells, she was genetically altered to carry a human gene.

Wilmut’s technique for cloning from adult cells, which the laboratory patented, was a fundamentally new method of reproduction, but he had a loftier purpose in mind than simply establishing a first. He believed that animals genetically engineered to include human genes could produce proteins needed by people who, because of genetic diseases, cannot make the proteins themselves. The production of new treatments for old diseases, he told an astonished public after the revelation of Dolly, was his goal.
Nuclear Transfer<br />
Model of a double helix. (PhotoDisc)<br />
Researchers at the Roslin Institute found these earlier attempts to be fundamentally flawed. Even if the success rate could be improved, the number of clones created (of sheep, in this case) would still be limited. The Scots, led by embryologist Ian Wilmut and experiment supervisor Keith Campbell, decided to take an entirely different approach. The result was the first live birth of a mammal produced through a process known as “nuclear transfer.”

Nuclear transfer involves the replacement of the nucleus of an immature egg with a nucleus taken from another cell. Previous attempts at nuclear transfer had divided the cells of a single embryo and implanted them into eggs. Because a sheep embryo has only about forty usable cells, this method also proved limiting.

The Roslin team therefore decided to grow their own cells in a laboratory culture. They took more mature embryonic cells than those previously used, and they experimented with the use of a nutrient mixture. One of their breakthroughs occurred when they discovered that these “cell lines” grew much more quickly when certain nutrients were absent.
Using this technique, the Scots were able to produce a theoretically unlimited number of genetically identical cell lines. The next step was to transfer nuclei from these cell lines into unfertilized sheep eggs.
First, 277 nuclei with a full set of chromosomes were transferred<br />
to the unfertilized eggs. An electric shock was then used to cause the<br />
eggs to begin development, the shock performing the duty of fertilization.<br />
Of these eggs, twenty-nine developed enough to be inserted<br />
into surrogate mothers.<br />
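The attrition in the experiment can be checked with simple arithmetic, using only the figures given above:

```python
# The experiment's attrition, using the figures given in the text:
# 277 reconstructed eggs -> 29 implantable embryos -> 1 live lamb (Dolly).

eggs, embryos, births = 277, 29, 1

print(f"eggs to embryos: {embryos / eggs:.1%}")    # 10.5%
print(f"embryos to birth: {births / embryos:.1%}")  # 3.4%
print(f"overall success: {births / eggs:.2%}")      # 0.36%
```

The overall yield, well under one percent, is the “extremely high failure rate” that scientists later cited when discussing how far the technique was from routine use.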
All the embryos died before birth except one: a ewe the scientists<br />
named “Dolly.” Her birth on July 5, 1996, was witnessed by only a<br />
veterinarian and a few researchers. Not until the clone had survived
the critical earliest stages of life was the success of the experiment<br />
disclosed; Dolly was more than seven months old by the time her<br />
birth was announced to a startled world.<br />
Impact<br />
The news that the cloning of sophisticated organisms had left the<br />
realm of science fiction <strong>and</strong> become a matter of accomplished scientific<br />
fact set off an immediate uproar. Ethicists <strong>and</strong> media commentators<br />
quickly began to debate the moral consequences of the use—and potential misuse—of the technology. Politicians in numerous
countries responded to the news by calling for legal restrictions on<br />
cloning research. Scientists, meanwhile, speculated about the possible<br />
benefits <strong>and</strong> practical limitations of the process.<br />
The issue that stirred the imagination of the broader public and sparked the most spirited debate was the possibility that similar experiments
might soon be performed using human embryos. Although<br />
most commentators seemed to agree that such efforts would<br />
be profoundly immoral, many experts observed that they would be<br />
virtually impossible to prevent. “Could someone do this tomorrow<br />
morning on a human embryo?” Arthur L. Caplan, the director of the<br />
University of Pennsylvania’s bioethics center, asked reporters. “Yes.<br />
It would not even take too much science. The embryos are out<br />
there.”<br />
Such observations conjured visions of a future that seemed marvelous<br />
to some, nightmarish to others. Optimists suggested that the greatest and brightest of humanity could be forever perpetuated, creating an endless supply of Albert Einsteins and Wolfgang Amadeus Mozarts. Pessimists warned of a world overrun by clones of self-serving narcissists and petty despots, or of the creation of a secondary class of humans to serve as organ donors for their progenitors.
The Roslin Institute’s researchers steadfastly proclaimed their<br />
own opposition to human experimentation. Moreover, most scientists<br />
were quick to point out that such scenarios were far from realization,<br />
noting the extremely high failure rate involved in the creation<br />
of even a single sheep. In addition, most experts emphasized<br />
more practical possible uses of the technology: improving agricultural<br />
stock by cloning productive and disease-resistant animals, for
example, or regenerating endangered or even extinct species. Even<br />
such apparently benign schemes had their detractors, however, as<br />
other observers remarked on the potential dangers of thus narrowing<br />
a species’ genetic pool.<br />
Even prior to the Roslin Institute’s announcement, most European<br />
nations had adopted a bioethics code that flatly prohibited genetic<br />
experiments on human subjects. Ten days after the announcement,<br />
U.S. president Bill Clinton issued an executive order that<br />
banned the use of federal money for human cloning research, and
he called on researchers in the private sector to refrain from such experiments<br />
voluntarily. Nevertheless, few observers doubted that<br />
Dolly’s birth marked only the beginning of an intriguing—and possibly
frightening—new chapter in the history of science.<br />
See also Amniocentesis; Artificial chromosome; Artificial insemination;<br />
Genetic “fingerprinting”; In vitro plant culture; Rice and
wheat strains.<br />
Further Reading<br />
Facklam, Margery, Howard Facklam, and Paul Facklam. From Cell to Clone: The Story of Genetic Engineering. New York: Harcourt Brace Jovanovich, 1979.
Gillis, Justin. “Cloned Cows Are Fetching Big Bucks: Dozens of Genetic<br />
Duplicates Ready to Take Up Residence on U.S. Farms.”<br />
Washington Post (March 25, 2001).
Kolata, Gina Bari. Clone: The Road to Dolly, and the Path Ahead. New York: William Morrow, 1998.
Regalado, Antonio. “Clues Are Sought for Cloning’s Fail Rate: Researchers<br />
Want to Know Exactly How an Egg Reprograms Adult<br />
DNA.” Wall Street Journal (November 24, 2000).<br />
Winslow, Ron. “Scientists Clone Pigs, Lifting Prospects of Replacement<br />
Organs for Humans.” Wall Street Journal (August 17, 2000).
Cloud seeding
The invention: Technique for inducing rainfall by distributing dry ice or silver iodide into reluctant rainclouds.
The people behind the invention:
Vincent Joseph Schaefer (1906-1993), an American chemist and meteorologist
Irving Langmuir (1881-1957), an American physicist and chemist who won the 1932 Nobel Prize in Chemistry
Bernard Vonnegut (1914-1997), an American physical chemist and meteorologist
Praying for Rain<br />
Beginning in 1943, an intense interest in the study of clouds developed into the practice of weather “modification.” Working for the General Electric Research Laboratory, Nobel laureate Irving Langmuir and his assistant researcher and technician, Vincent Joseph Schaefer, began an intensive study of precipitation and its causes.

Past research and study had indicated two possible ways that clouds produce rain. The first possibility is called “coalescing,” a process by which tiny droplets of water vapor in a cloud merge after bumping into one another and become heavier and fatter until they drop to earth. The second possibility is the “Bergeron process” of droplet growth, named after the Swedish meteorologist Tor Bergeron. Bergeron’s process relates to supercooled clouds, or clouds that are at or below freezing temperatures and yet still contain both ice crystals and liquid water droplets. The size of the water droplets allows them to remain liquid despite freezing temperatures: large droplets can remain liquid only down to about −4 degrees Celsius, while smaller droplets may not freeze until reaching −15 degrees Celsius. Precipitation occurs when the ice crystals become heavy enough to fall. If the temperature at some point below the cloud is warm enough, it will melt the ice crystals before they reach the earth, producing rain. If the temperature remains at the freezing
point, the ice crystals retain their form and fall as snow.
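The two outcomes of the Bergeron process just described can be restated as a toy decision function; the thresholds simply encode the text above, not a meteorological model:

```python
# Toy classifier for the Bergeron-process outcome: ice crystals that grow
# heavy enough fall, and the temperature of the air below the cloud decides
# whether they arrive as rain or snow.

def precipitation_type(crystal_heavy_enough: bool, below_cloud_temp_c: float) -> str:
    if not crystal_heavy_enough:
        return "none"           # crystals stay suspended in the cloud
    if below_cloud_temp_c > 0:  # a warm layer melts the crystals on the way down
        return "rain"
    return "snow"

print(precipitation_type(True, 8.0))   # rain
print(precipitation_type(True, -2.0))  # snow
```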
Schaefer used a deep-freezing unit in order to observe water droplets in pure cloud form. In order to observe the droplets better, Schaefer lined the chest with black velvet and concentrated a beam of light inside. The first agent he introduced inside the supercooled freezer was his own breath. When that failed to form the desired ice crystals, he proceeded to try other agents. His hope was to form ice crystals that would then cause the moisture in the surrounding air to condense into more ice crystals, which would produce a miniature snowfall.

He eventually achieved success when he tossed a handful of dry ice inside and was rewarded with the long-awaited snow. The freezer was set at the freezing point of water, 0 degrees Celsius, but not all the particles inside were ice crystals; when the dry ice was introduced, the stray supercooled water droplets froze instantly, producing ice crystals, or snowflakes.
Planting the First Seeds<br />
On November 13, 1946, Schaefer took to the air over Mount<br />
Greylock with several pounds of dry ice in order to repeat the experiment<br />
in nature. After he had finished sprinkling, or seeding, a<br />
supercooled cloud, he instructed the pilot to fly underneath the<br />
cloud he had just seeded. Schaefer was greeted by the sight of snow.<br />
By the time it reached the ground, it had melted into the first-ever<br />
human-made rainfall.<br />
Independently of Schaefer and Langmuir, another General Electric scientist, Bernard Vonnegut, was also seeking a way to cause rain. He found that silver iodide crystals, which have the same size and shape as ice crystals, could “fool” water droplets into condensing on them. When a certain chemical mixture containing silver iodide is heated on a special burner called a “generator,” silver iodide crystals appear in the smoke of the mixture. Vonnegut’s discovery allowed seeding to occur in a way very different from seeding with dry ice, but with the same result. Using Vonnegut’s process, the seeding is done from the ground. The generators are placed outside and the chemicals are mixed. As the smoke wafts upward, it carries the newly formed silver iodide crystals with it into the clouds.
The results of the scientific experiments by Langmuir, Vonnegut, and Schaefer were alternately hailed and rejected as legitimate. Critics argued that the process of seeding is too complex and would require more than just the addition of dry ice or silver iodide in order to produce rain. One of the major problems surrounding the question of weather modification by cloud seeding is the scarcity of knowledge about the earth’s atmosphere. A journey begun about fifty years ago is still a long way from being completed.
Impact<br />
Although the actual statistical and other proofs needed to support cloud seeding are lacking, the discovery in 1946 by the General Electric employees set off a wave of interest and demand for information that far surpassed the interest generated by the discovery of nuclear fission shortly before. The possibility of ending drought and, in the process, hunger excited many people. The discovery also prompted both legitimate and false “rainmakers” who used the information gathered by Schaefer, Langmuir, and Vonnegut to set up cloud-seeding businesses. Weather modification, in its current stage of development, cannot be used to end worldwide drought. It does, however, have beneficial results in some cases on the crops of smaller farms that have been affected by drought.

In order to understand the advances made in weather modification, new instruments are needed to record accurately the results of further experimentation. The storm of interest—both favorable and unfavorable—generated by the discoveries of Schaefer, Langmuir, and Vonnegut has had and will continue to have far-reaching effects on many aspects of society.
See also Airplane; Artificial insemination; In vitro plant culture;<br />
Weather satellite.<br />
Further Reading<br />
Cole, Stephen. “Mexico Results Spur New Look at Rainmaking.” Washington Post (January 22, 2001).
Havens, Barrington S., James E. Jiusto, and Bernard Vonnegut. Early History of Cloud Seeding. Socorro, N.Mex.: Langmuir Laboratory, New Mexico Institute of Mining and Technology, 1978.
“Science and Technology: Cloudbusting.” The Economist (August 21, 1999).
Villiers, Marq de. Water: The Fate of Our Most Precious Resource. Boston:<br />
Houghton Mifflin, 2000.
COBOL computer language
The invention: The first user-friendly computer programming language, COBOL was designed for business data processing.
The people behind the invention:<br />
Grace Murray Hopper (1906-1992), an American<br />
mathematician<br />
Howard Hathaway Aiken (1900-1973), an American<br />
mathematician<br />
Plain Speaking<br />
Grace Murray Hopper, a mathematician, was a faculty member at Vassar College when World War II (1939-1945) began. She enlisted in the Navy and in 1943 was assigned to the Bureau of Ordnance Computation Project, where she worked on ballistics problems. In 1944, the Navy began using one of the first electronic computers, the Automatic Sequence Controlled Calculator (ASCC), designed by an International Business Machines (IBM) Corporation team of engineers headed by Howard Hathaway Aiken, to solve ballistics problems. Hopper became the third programmer of the ASCC.

Hopper’s interest in computer programming continued after the war ended. By the early 1950’s, Hopper’s work with programming languages had led to her development of FLOW-MATIC, the first English-language data processing compiler. Hopper’s work on FLOW-MATIC paved the way for her later work with COBOL (Common Business Oriented Language).

Until Hopper developed FLOW-MATIC, digital computer programming was all machine-specific and was written in machine code. A program designed for one computer could not be used on another. Every program was both machine-specific and problem-specific in that the programmer would be told what problem the machine was going to be asked to solve and then would write a completely new program for that specific problem in the machine code.
Grace Murray Hopper

Grace Brewster Murray was born in New York City in 1906. As a child she revered her great-grandfather, a U.S. Navy admiral, and her grandfather, an engineer. Her career melded their professions.

She studied mathematics and physics at Vassar College, earning a bachelor’s degree in 1928 and a master’s degree in 1930, when she married Vincent Foster Hopper. She accepted a teaching post at Vassar but continued her studies, completing a doctorate at Yale University in 1934. In 1943 she left academia for the Navy and was assigned to the Bureau of Ordnance Computation Project at Harvard University. She worked on the nation’s first modern computer, the Mark I, and contributed to the development of major new models afterward, including the ENIAC and Sperry Corporation’s UNIVAC. While still with the Navy project at Harvard, Hopper participated in a minor incident that forever marked computer slang. One day a moth became caught in a switch, causing the computer to malfunction. She and other technicians found it and ever after referred to correcting mechanical glitches as “debugging.”

Hopper joined Sperry Corporation after the war and carried out her seminal work with the FLOW-MATIC and COBOL computer languages. Meanwhile, she retained her commission in the Naval Reserves, helping the service incorporate computers and COBOL into its armaments and administration systems. She retired from the Navy in 1966 and from Sperry in 1971, but the Navy soon had her out of retirement on temporary active duty to help with its computer systems. After her second retirement, the Navy, grateful for her tireless service, promoted her to rear admiral in 1985. She was also awarded the Defense Distinguished Service Medal, the National Medal of Technology, and the Legion of Merit. She became an inductee into the Engineering and Science Hall of Fame in 1991. Hopper, nicknamed Amazing Grace, died a year later.
Machine code was based on the programmer’s knowledge of the physical characteristics of the computer as well as the requirements of the problem to be solved; that is, the programmer had to know what was happening within the machine as it worked through a series of calculations, which relays tripped when and in what order, and what mathematical operations were necessary to solve the problem. Programming was therefore a highly specialized skill requiring a unique combination of linguistic, reasoning, engineering, and mathematical abilities that not even all the mathematicians and electrical engineers who designed and built the early computers possessed.
While every computer still operates in response to the programming,<br />
or instructions, built into it, which are formatted in machine<br />
code, modern computers can accept programs written in nonmachine<br />
code—that is, in various automatic programming languages. They<br />
are able to accept nonmachine code programs because specialized<br />
programs now exist to translate those programs into the appropriate<br />
machine code. These translating programs are known as “compilers,”<br />
or “assemblers,” <strong>and</strong> FLOW-MATIC was the first such program.<br />
Hopper developed FLOW-MATIC after realizing that it would be necessary to eliminate unnecessary steps in programming to make computers more efficient. FLOW-MATIC was based, in part, on Hopper’s recognition that certain elements, or commands, were common to many different programming applications. Hopper theorized that it would not be necessary to write a lengthy series of instructions in machine code to instruct a computer to begin a series of operations; instead, she believed that it would be possible to develop commands in an assembly language in such a way that a programmer could write one command, such as the word add, that would translate into a sequence of several commands in machine code. Hopper’s successful development of a compiler to translate programming languages into machine code thus meant that programming became faster and easier. From assembly languages such as FLOW-MATIC, it was a logical progression to the development of high-level computer languages, such as FORTRAN (Formula Translation) and COBOL.
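Hopper’s insight, that a single command such as add can stand for a whole fixed sequence of machine-level instructions, can be sketched as a toy translation table. The “machine code” mnemonics below are invented for the illustration, not taken from any real instruction set:

```python
# Hypothetical illustration of a compiler's expansion step: each one-word
# high-level command maps to the fixed sequence of machine-level steps it
# stands for. The mnemonics are invented for this sketch.

COMMAND_TABLE = {
    "add": ["LOAD A", "LOAD B", "ALU_ADD", "STORE RESULT"],
    "subtract": ["LOAD A", "LOAD B", "ALU_SUB", "STORE RESULT"],
}

def compile_program(source: list[str]) -> list[str]:
    """Translate each one-word command into its machine-code sequence."""
    machine_code = []
    for command in source:
        machine_code.extend(COMMAND_TABLE[command.lower()])
    return machine_code

print(compile_program(["add", "subtract"]))
# Two words of source text become eight machine instructions.
```

A real compiler does far more (parsing, operand handling, optimization), but the table-driven expansion captures why a compiled language frees the programmer from writing machine code by hand.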
The Language of Business<br />
Between 1955 (when FLOW-MATIC was introduced) and 1959, a number of attempts at developing a specific business-oriented language were made. IBM and Remington Rand believed that the only way to market computers to the business community was through the development of a language that business people would be comfortable using. Remington Rand officials were especially committed to providing a language that resembled English. None of the attempts to develop a business-oriented language succeeded, however, and by 1959 Hopper and other members of the U.S. Department of Defense had persuaded representatives of various companies of the need to cooperate.
On May 28 and 29, 1959, a conference sponsored by the Department of Defense was held at the Pentagon to discuss the problem of establishing a common language for the adaptation of electronic computers for data processing. As a result, the first distribution of COBOL was accomplished on December 17, 1959. Although many people were involved in the development of COBOL, Hopper played a particularly important role. She not only found solutions to technical problems but also succeeded in selling the concept of a common language from an administrative and managerial point of view. Hopper recognized that while the companies involved in the commercial development of computers were in competition with one another, the use of a common, business-oriented language would contribute to the growth of the computer industry as a whole, as well as simplify the training of computer programmers and operators.
Consequences<br />
COBOL was the first computer language developed for business data processing operations. Its development simplified the training required for computer users in business applications and demonstrated that computers could be practical tools in government and industry as well as in science. Prior to the development of COBOL, electronic computers had been characterized as expensive, oversized adding machines that were adequate for performing time-consuming mathematics but lacked the flexibility that business people required.

In addition, the development of COBOL freed programmers not only from the need to know machine code but also from the need to understand the physical functioning of the computers they were using. Programs could be written that were both machine-independent and almost universally convertible from one computer to another.

Finally, because Hopper and the other committee members worked under the auspices of the Department of Defense, the software was not copyrighted, and in a short period of time COBOL became widely available to anyone who wanted to use it. It diffused rapidly throughout the industry and contributed to the widespread adaptation of computers for use in countless settings.
See also BASIC programming language; Colossus computer;<br />
ENIAC computer; FORTRAN programming language; SAINT.<br />
Further Reading<br />
Cohen, Bernard I., Gregory W. Welch, and Robert V. D. Campbell. Makin’ Numbers: Howard Aiken and the Computer. Cambridge, Mass.: MIT Press, 1999.
Cohen, Bernard I. Howard Aiken: Portrait of a Computer Pioneer. Cambridge, Mass.: MIT Press, 1999.
Ferguson, David E. “The Roots of COBOL.” Systems 3X World and AS World 17, no. 7 (July, 1989).
Yount, Lisa. A to Z of Women in Science and Math. New York: Facts on File, 1999.
Color film
The invention: A photographic medium used to take full-color pictures.
The people behind the invention:<br />
Rudolf Fischer (1881-1957), a German chemist<br />
H. Siegrist (1885-1959), a German chemist <strong>and</strong> Fischer’s<br />
collaborator<br />
Benno Homolka (1877-1949), a German chemist<br />
The Process Begins

Around the turn of the twentieth century, Arthur-Louis Ducos du Hauron, a French chemist and physicist, proposed a tripack (three-layer) process of film development in which three color negatives would be taken by means of superimposed films. This was a subtractive process. (In the "additive method" of making color pictures, the three colors are added in projection—that is, the colors are formed by the mixture of colored light of the three primary hues. In the "subtractive method," the colors are produced by the superposition of prints.) In Ducos du Hauron's process, the blue-light negative would be taken on the top film of the pack; a yellow filter below it would transmit the yellow light, which would reach a green-sensitive film and then fall upon the bottom of the pack, which would be sensitive to red light. Tripacks of this type were unsatisfactory, however, because the light became diffused in passing through the emulsion layers, so the green and red negatives were not sharp.
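The difference between the two methods can be sketched numerically. The following is an illustrative sketch, not part of the original text: colors are modeled as red-green-blue triples in the range 0 to 1, and the helper names `additive_mix` and `subtractive_mix` are invented for this example.

```python
# Illustrative sketch: additive vs. subtractive color on RGB triples (0.0-1.0).

def additive_mix(*lights):
    """Additive method: colored lights sum channel by channel (clamped at 1)."""
    return tuple(min(1.0, sum(light[ch] for light in lights)) for ch in range(3))

def subtractive_mix(illuminant, *dyes):
    """Subtractive method: each superposed dye layer transmits only a
    fraction of each channel of the light passing through it."""
    result = list(illuminant)
    for dye in dyes:
        result = [channel * transmittance for channel, transmittance in zip(result, dye)]
    return tuple(result)

red_light, green_light = (1, 0, 0), (0, 1, 0)
print(additive_mix(red_light, green_light))  # red + green light mix to yellow

white = (1.0, 1.0, 1.0)
yellow_dye = (1.0, 1.0, 0.0)   # absorbs blue
magenta_dye = (1.0, 0.0, 1.0)  # absorbs green
print(subtractive_mix(white, yellow_dye, magenta_dye))  # superposed dyes leave red
```

The sketch shows why the two methods differ: additive mixing starts from darkness and builds color by adding light, while subtractive mixing starts from white light and builds color by removing parts of it.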
To obtain the real advantage of a tripack, the three layers must be coated one over the other so that the distance between the blue-sensitive and red-sensitive layers is a small fraction of a thousandth of an inch. Tripacks of this type were suggested by the early pioneers of color photography, who had the idea that the packs would be separated into three layers for development and printing. The manipulation of such systems proved to be very difficult in practice. It was also suggested, however, that it might be possible to develop such tripacks as a unit and then, by chemical treatment, convert the silver images into dye images.
Fischer's Theory

One of the earliest subtractive tripack methods that seemed to hold great promise was that suggested by Rudolf Fischer in 1912. He proposed a tripack that would be made by coating three emulsions on top of one another; the lowest one would be red-sensitive, the middle one would be green-sensitive, and the top one would be blue-sensitive. Chemical substances called "couplers," which would produce dyes in the development process, would be incorporated into the layers. In this method, the molecules of the developing agent, after becoming oxidized by developing the silver image, would react with the unoxidized form (the coupler) to produce the dye image.
The two types of developing agent described by Fischer are p-aminophenol and p-phenylenediamine (or their derivatives). The five types of dye that Fischer discovered are formed when silver images are developed by these two developing agents in the presence of suitable couplers. The five classes of dye he used (indophenols, indoanilines, indamines, indothiophenols, and azomethines) were already known when Fischer did his work, but it was he who discovered that the photographic latent image could be used to promote their formation from "coupler" and "developing agent." The indoaniline and azomethine types have been found to possess the necessary properties, but the other three suffer from serious defects. Because only p-phenylenediamine and its derivatives can be used to form the indoaniline and azomethine dyes, it has become the most widely used color developing agent.
Impact
In the early 1920's, Leopold Mannes and Leopold Godowsky made a great advance beyond the Fischer process. Working on a new process of color photography, they adopted coupler development, but instead of putting couplers into the emulsion as Fischer had, they introduced them during processing. Finally, in 1935, the film was placed on the market under the name "Kodachrome," a name that had been used for an early two-color process.

The first use of the new Kodachrome process in 1935 was for 16-millimeter film. Color motion pictures could be made by the Kodachrome process as easily as black-and-white pictures, because the complex work involved (the color development of the film) was done under precise technical control. The definition (quality of the image) given by the process was soon sufficient to make it practical for 8-millimeter pictures, and in 1936, Kodachrome film was introduced in a 35-millimeter size for use in popular miniature cameras.
Soon thereafter, color processes were developed on a larger scale and new color materials were rapidly introduced. In 1940, the Kodak Research Laboratories worked out a modification of the Fischer process in which the couplers were put into the emulsion layers. These couplers are not dissolved in the gelatin layer itself, as the Fischer couplers are, but are carried in small particles of an oily material that dissolves the couplers, protects them from the gelatin, and protects the silver bromide from any interaction with the couplers. When development takes place, the oxidation product of the developing agent penetrates the organic particles and reacts with the couplers so that the dyes are formed in small particles dispersed throughout the layers. In one form of this material, Ektachrome (originally intended for use in aerial photography), the film is reversed to produce a color positive. It is first developed with a black-and-white developer, then reexposed and developed with a color developer that combines with the couplers in each layer to produce the appropriate dyes, all three of which are produced simultaneously in one development.
In summary, although Fischer did not succeed in putting his theory into practice, his work still forms the basis of most modern color photographic systems. Not only did he demonstrate the general principle of dye-coupling development, but the art is still mainly confined to one of the two types of developing agent, and two of the five types of dye, described by him.
See also Autochrome plate; Brownie camera; Infrared photography; Instant photography.
Further Reading

Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Glendinning, Peter. Color Photography: History, Theory, and Darkroom Technique. Englewood Cliffs, N.J.: Prentice-Hall, 1985.
Wood, John. The Art of the Autochrome: The Birth of Color Photography. Iowa City: University of Iowa Press, 1993.
Color television
The invention: System for broadcasting full-color images over the airwaves.

The people behind the invention:
Peter Carl Goldmark (1906-1977), the head of the CBS research and development laboratory
William S. Paley (1901-1990), the businessman who took over CBS
David Sarnoff (1891-1971), the head of RCA
The Race for Standardization

Although by 1928 color television had already been demonstrated in Scotland, two events in 1940 mark that year as the beginning of color television. First, on February 12, 1940, the Radio Corporation of America (RCA) demonstrated its color television system privately to a group that included members of the Federal Communications Commission (FCC), an administrative body that had the authority to set standards for an electronic color system. The demonstration did not go well; indeed, David Sarnoff, the head of RCA, canceled a planned public demonstration and returned his engineers to the Princeton, New Jersey, headquarters of RCA's laboratories.
Next, on September 1, 1940, the Columbia Broadcasting System (CBS) took the first step to develop a color system that would become the standard for the United States. On that day, CBS demonstrated color television to the public, based on the research of an engineer, Peter Carl Goldmark. Goldmark placed a set of spinning filters in front of the black-and-white television images, breaking them down into three primary colors and producing color television. The audience saw what was called "additive color."

Although Goldmark had been a researcher at CBS since January, 1936, he did not attempt to develop a color television system until March, 1940, after watching the Technicolor motion picture Gone with the Wind (1939). Inspired, Goldmark began to tinker in his tiny CBS laboratory in the headquarters building in New York City.
If a decision had been made in 1940, the CBS color standard would have been accepted as the national standard. The FCC was, at that time, more concerned with trying to establish a black-and-white standard for television. Color television seemed decades away. In 1941, the FCC decided to adopt standards for black-and-white television only, leaving the issue of color unresolved—and the doors to the future of color broadcasting wide open. Control of a potentially lucrative market as well as personal rivalry threw William S. Paley, the head of CBS, and Sarnoff into a race for the control of color television. Both companies would pay dearly in terms of money and time, but it would take until the 1960's before the United States would become a nation of color television watchers.
RCA was at the time the acknowledged leader in the development of black-and-white television. CBS engineers soon discovered, however, that their company's color system would not work when combined with RCA black-and-white televisions. In other words, customers would need one set for black-and-white and one for color. Moreover, since the color system of CBS needed more broadcast frequency space than the black-and-white system in use, CBS was forced to ask the FCC to allocate new channel space in the ultrahigh frequency (UHF) band, which was then not being used. In contrast, RCA scientists labored to make a compatible color system that required no additional frequency space.
No Time to Wait
Following the end of World War II in 1945, the suburbanites who populated the new communities growing up around America's cities wanted television sets right away; they did not want to wait for the government to decide on a color standard and then wait again while manufacturers redesigned assembly lines to make color sets. Rich with savings accumulated during the prosperity of the war years, Americans wanted to spend their money. After the war, the FCC saw no reason to open up proceedings regarding color systems. Black-and-white was operational; customers were waiting in line for the new electronic marvel. To give its engineers time to create a compatible color system, RCA skillfully lobbied the members of the FCC to take no action.
There were other problems with the CBS mechanical color television. It was noisy and large, and its color balance was hard to maintain. CBS claimed that further engineering work would improve the sets. Yet RCA was able to convince other manufacturers to support it in preference to CBS, principally because of its proven manufacturing track record.

In 1946, RCA demonstrated a new electronic color receiver with three picture tubes, one for each of the primary colors. Color reproduction was fairly true, and there was little flicker, although any movement on the screen caused color blurring. The receiver worked, however, and thus ended the invention phase of color television begun in 1940. The race for standardization would require seven more years of corporate struggle before the RCA system would finally win adoption as the national standard in 1953.
Impact

Through the 1950's, black-and-white television remained the order of the day. Through the later years of the decade, only the National Broadcasting Company (NBC) television network was regularly airing programs in color. Full production and presentation of shows in color during prime time did not come until the mid-1960's; most industry observers date 1972 as the true arrival of color television. By 1972, color sets were found in more than half the homes in the United States. At that point, since color was so widespread, TV Guide stopped tagging color program listings with a special symbol and instead tagged only black-and-white shows, as it does to this day. Gradually, only cheap, portable sets were made for black-and-white viewing, while color sets came in all varieties, from tiny handheld pocket televisions to mammoth projection televisions.
See also Autochrome plate; Community antenna television; Communications satellite; Fiber-optics; FM radio; Radio; Television; Transistor; Videocassette recorder.
Further Reading

Burns, R. W. Television: An International History of the Formative Years. London: Institution of Electrical Engineers in association with the Science Museum, 1998.
Fisher, David E., and Marshall Fisher. Tube: The Invention of Television. Washington, D.C.: Counterpoint, 1996.
Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: HarperPerennial, 1993.
Lyons, Eugene. David Sarnoff: A Biography. New York: Harper and Row, 1967.
Colossus computer
The invention: The first all-electronic calculating device, the Colossus computer was built to decipher German military codes during World War II.

The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications Research Establishment
An Undercover Operation

In 1939, during World War II (1939-1945), a team of scientists, mathematicians, and engineers met at Bletchley Park, outside London, to discuss the development of machines that would break the secret code used in Nazi military communications. The Germans were using a machine called "Enigma" to communicate in code between headquarters and field units. Polish scientists, however, had been able to examine a German Enigma, and between 1928 and 1938 they were able to break the codes by using electromechanical codebreaking machines called "bombas." In 1938, the Germans made the Enigma more complicated, and the Poles were no longer able to break the codes. In 1939, the Polish machines and codebreaking knowledge passed to the British.
Alan Mathison Turing was one of the mathematicians gathered at Bletchley Park to work on codebreaking machines. Turing was one of the first people to conceive of the universality of digital computers. He first mentioned the "Turing machine" in 1936 in an article published in the Proceedings of the London Mathematical Society. The Turing machine, a hypothetical device that can solve any problem that involves mathematical computation, is not restricted to only one task—hence the universality feature.
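A Turing machine can be sketched in a few lines of modern code. This is an illustrative sketch, not from the original text: the machine below is driven by a finite rule table mapping (state, symbol) to (new state, written symbol, head move), and the example rule table (a bit-flipper) is invented for the demonstration, not one of Turing's own.

```python
# Minimal Turing-machine sketch: a rule table, a tape, and a read/write head.

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, written, move = rules[(state, symbol)]  # look up the transition
        cells[head] = written                          # write the new symbol
        head += 1 if move == "R" else -1               # move the head
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical example table: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flip, "1011"))  # -> "0100"
```

Universality lies in the rule table: the same machinery runs any computation simply by being handed a different table, which is exactly the sense in which the device is not restricted to one task.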
Turing suggested an improvement to the Bletchley codebreaking machine, the "Bombe," which had been modeled on the Polish bomba. This improvement increased the computing power of the machine. The new codebreaking machine replaced the tedious method of decoding by hand, which, in addition to being slow, was ineffective in dealing with complicated encryptions that were changed daily.
Building a Better Mousetrap
The Bombe was very useful. In 1942, when the Germans started using a more sophisticated cipher machine known as the "Fish," Max H. A. Newman, who was in charge of one subunit at Bletchley Park, believed that an automated device could be designed to break the codes produced by the Fish. Thomas H. Flowers, who was in charge of a switching group at the Post Office Research Station at Dollis Hill, had been approached to build a special-purpose electromechanical device for Bletchley Park in 1941. The device was not useful, and Flowers was assigned to other problems.

Flowers began to work closely with Turing, Newman, and C. E. Wynn-Williams of the Telecommunications Research Establishment (TRE) to develop a machine that could break the Fish codes. The Dollis Hill team worked on the tape driving and reading problems, and Wynn-Williams's team at TRE worked on electronic counters and the necessary circuitry. Their efforts produced the "Heath Robinson," which could read two thousand characters per second. The Heath Robinson used vacuum tubes, an uncommon component in the early 1940's. The vacuum tubes performed more reliably and rapidly than the relays that had been used for counters. The Heath Robinson and its companion machines proved that high-speed electronic devices could successfully do cryptanalytic work (solve decoding problems).
Entirely automatic in operation once started, the Heath Robinson was put together at Bletchley Park in the spring of 1943. The Heath Robinson became obsolete for codebreaking shortly after it was put into use, so work began on a bigger, faster, and more powerful machine: the Colossus.
Flowers led the team that designed and built the Colossus in eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger, faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen hundred vacuum tubes, which was the largest number that had ever been used at that time. Although Turing and Wynn-Williams were not directly involved with the design of the Colossus, their previous work on the Heath Robinson was crucial to the project, since the first Colossus was based on the Heath Robinson.
Colossus became operational at Bletchley Park in December, 1943, and Flowers made arrangements for the manufacture of its components in case other machines were required. The request for additional machines came in March, 1944. The second Colossus, the Mark II, was extensively redesigned and was able to read twenty-five thousand characters per second because it was capable of performing parallel operations (carrying out several different operations at once, instead of one at a time); it also had a short-term memory. The Mark II went into operation on June 1, 1944. More machines were made, each with further modifications, until there were ten. The Colossus machines were special-purpose, program-controlled electronic digital computers, the only known electronic programmable computers in existence in 1944. The use of electronics allowed for a tremendous increase in the internal speed of the machine.
Impact

The Colossus machines gave Britain the best codebreaking machines of World War II and provided information that was crucial for the Allied victory. The information decoded by Colossus, the actual messages, and their influence on military decisions would remain classified for decades after the war.

The later work of several of the people involved with the Bletchley Park projects was important in British computer development after the war. Newman's and Turing's postwar careers were closely tied to emerging computer advances. Newman, who was interested in the impact of computers on mathematics, received a grant from the Royal Society in 1946 to establish a calculating machine laboratory at Manchester University. He was also involved with postwar computer growth in Britain.
Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948. Before going to Manchester University, however, Turing joined Britain's National Physical Laboratory (NPL). At NPL, Turing worked on an advanced computer known as the Pilot Automatic Computing Engine (Pilot ACE). While at NPL, Turing proposed the concept of a stored program, which was a controversial but extremely important idea in computing. A "stored" program is one that remains in residence inside the computer, making it possible for a particular program and data to be fed through an input device simultaneously. (The Heath Robinson and Colossus machines were limited by utilizing separate input tapes, one for the program and one for the data to be analyzed.) Turing was among the first to explain the stored-program concept in print. He was also among the first to imagine how subroutines could be included in a program. (A subroutine allows separate tasks within a large program to be done in distinct modules; in effect, it is a detour within a program. After the completion of the subroutine, the main program takes control again.)
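The "detour" idea described above survives directly in modern programming languages as the function call. The following sketch is illustrative only; the names and the checksum task are invented for the example and are not from the original text.

```python
# A subroutine as a "detour": the main program transfers control to a
# self-contained module, which hands control back when its task is done.

def checksum(digits):
    """Subroutine: one separate task, packaged as a distinct module."""
    return sum(int(d) for d in digits) % 10

def main():
    """Main program: takes a detour into the subroutine, then resumes."""
    message = "8675309"
    total = checksum(message)    # control detours into checksum()...
    return f"{message}:{total}"  # ...and returns here when it completes

print(main())  # -> "8675309:8"
```

Because the subroutine is a distinct module, it can be reused from many points in a large program, which is precisely the economy Turing envisioned.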
See also Apple II computer; Differential analyzer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.
Further Reading

Carter, Frank. Codebreaking with the Colossus Computer: Finding the K-Wheel Patterns—An Account of Some of the Techniques Used. Milton Keynes, England: Bletchley Park Trust, 1997.
Gray, Paul. "Computer Scientist: Alan Turing." Time 153, no. 12 (March 29, 1999).
Hodges, Andrew. Alan Turing: The Enigma. New York: Walker, 2000.
Sale, Tony. The Colossus Computer, 1943-1996: And How It Helped to Break the German Lorenz Cipher in World War II. Cleobury Mortimer, England: M&M Baldwin, 1998.
Communications satellite
The invention: Telstar 1, the world's first commercial communications satellite, opened the age of live, worldwide television by connecting the United States and Europe.

The people behind the invention:
Arthur C. Clarke (1917- ), a British science-fiction writer who in 1945 first proposed the idea of using satellites as communications relays
John R. Pierce (1910- ), an American engineer who worked on the Echo and Telstar satellite communications projects
Science Fiction?

In 1945, Arthur C. Clarke suggested that a satellite orbiting high above the earth could relay television signals between different stations on the ground, making for a much wider range of transmission than that of the usual ground-based systems. Writing in the February, 1945, issue of Wireless World, Clarke said that satellites "could give television and microwave coverage to the entire planet."

In 1956, John R. Pierce at the Bell Telephone Laboratories of the American Telephone & Telegraph Company (AT&T) began to urge the development of communications satellites. He saw these satellites as a replacement for the ocean-bottom cables then being used to carry transatlantic telephone calls. In 1950, about one and a half million transatlantic calls were made, and that number was expected to grow to three million by 1960, straining the capacity of the existing cables; in 1970, twenty-one million calls were made.
Communications satellites offered a good, cost-effective alternative to building more transatlantic telephone cables. On January 19, 1961, the Federal Communications Commission (FCC) gave permission for AT&T to begin Project Telstar, the first commercial communications satellite bridging the Atlantic Ocean. AT&T reached an agreement with the National Aeronautics and Space Administration (NASA) in July, 1961, in which AT&T would pay $3 million for each Telstar launch. The Telstar project involved about four hundred scientists, engineers, and technicians at the Bell Telephone Laboratories, twenty more technical personnel at AT&T headquarters, and the efforts of more than eight hundred other companies that provided equipment or services.
Telstar 1 was shaped like a faceted sphere, was 88 centimeters in diameter, and weighed 80 kilograms. Most of its exterior surface (sixty of the seventy-four facets) was covered by 3,600 solar cells to convert sunlight into 15 watts of electricity to power the satellite. Each solar cell was covered with artificial sapphire to reduce the damage caused by radiation. The main instrument was a two-way radio able to handle six hundred telephone calls at a time or one television channel.

The signal that the radio would send back to Earth was very weak—less than one-thirtieth the energy used by a household light bulb. Large ground antennas were needed to receive Telstar's faint signal. The main ground station was built by AT&T in Andover, Maine, on a hilltop informally called "Space Hill." A horn-shaped antenna, weighing 380 tons, with a length of 54 meters and an open end with an area of 1,097 square meters, was mounted so that it could rotate to track Telstar across the sky. To protect it from wind and weather, the antenna was built inside an inflated dome, 64 meters in diameter and 49 meters tall. It was, at the time, the largest inflatable structure ever built. A second, smaller horn antenna in Holmdel, New Jersey, was also used.
International Cooperation
In February, 1961, the governments of the United States and England agreed to let the British Post Office and NASA work together to test experimental communications satellites. The British Post Office built a 26-meter-diameter steerable dish antenna of its own design at Goonhilly Downs, in Cornwall, England. Under a similar agreement, the French National Center for Telecommunications Studies constructed a ground station, almost identical to the Andover station, at Pleumeur-Bodou, Brittany, France.
After testing, Telstar 1 was moved to Cape Canaveral, Florida, and attached to the Thor-Delta launch vehicle built by the Douglas Aircraft Company. The Thor-Delta was launched at 3:35 a.m. eastern standard time (EST) on July 10, 1962. Once in orbit, Telstar 1 took 157.8 minutes to circle the globe. The satellite came within range of the Andover station on its sixth orbit, and a television test pattern was transmitted to the satellite at 6:26 p.m. EST. At 6:30 p.m. EST, a tape-recorded black-and-white image of the American flag with the Andover station in the background, transmitted from Andover to Holmdel, opened the first television show ever broadcast by satellite. Live pictures of U.S. vice president Lyndon B. Johnson and other officials gathered at the Carnegie Institution in Washington, D.C., followed on the AT&T program carried live on all three American networks.
Up to the moment of launch, it was uncertain whether the French station would be completed in time to participate in the initial test. At 6:47 p.m. EST, however, Telstar's signal was picked up by the station in Pleumeur-Bodou, and Johnson's image became the first television transmission to cross the Atlantic. Pictures received at the French station were reported to be so clear that they looked as though they had been sent from only forty kilometers away. Because of technical difficulties, the English station was unable to receive a clear signal.
The first formal exchange of programming between the United States and Europe occurred on July 23, 1962. This special eighteen-minute program, produced by the European Broadcasting Union, consisted of live scenes from major cities throughout Europe and was transmitted from Goonhilly Downs, where the technical difficulties had been corrected, to Andover via Telstar.

On the previous orbit, a program entitled "America, July 23, 1962," showing scenes from fifty television cameras around the United States, was beamed from Andover to Pleumeur-Bodou and seen by an estimated one hundred million viewers throughout Europe.
Consequences

Telstar 1 and the communications satellites that followed it revolutionized the television news and sports industries. Previously, television networks had to ship film across the oceans, meaning delays of hours or days between the time an event occurred and the broadcast of pictures of that event on another continent. Now, news of major significance, as well as sporting events, can be viewed live around the world. The impact on international relations was also significant, as world opinion became able to influence the actions of governments and individuals, since those actions could be seen around the world while the events were still in progress.
More powerful launch vehicles allowed new satellites to be placed<br />
in geosynchronous orbits, circling the earth at a speed the same as<br />
the earth’s rotation rate. When viewed from the ground, these satellites<br />
appeared to remain stationary in the sky. This allowed continuous<br />
communications <strong>and</strong> greatly simplified the ground antenna<br />
system. By the late 1970’s, private individuals had built small antennas<br />
in their backyards to receive television signals directly from the<br />
satellites.<br />
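The stationary appearance of a geosynchronous satellite follows directly from Kepler’s third law: there is exactly one orbital radius at which a satellite’s period matches the earth’s rotation. A quick sketch of the arithmetic (the constants are standard physical values, not figures from this essay):

```python
import math

# Kepler's third law: T^2 = 4*pi^2 * r^3 / (G*M).
# Solve for the orbital radius r whose period T is one sidereal day.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the earth, kg
T_SIDEREAL = 86164.1   # one sidereal day, in seconds
R_EARTH = 6.371e6      # mean radius of the earth, m

r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"Geosynchronous altitude: {altitude_km:,.0f} km")
```

A satellite parked at roughly this altitude (about 36,000 kilometers) above the equator keeps pace with the ground below, which is why a fixed backyard antenna can stay aimed at it.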
See also Artificial satellite; Cruise missile; Rocket; Weather satellite.<br />
Further Reading<br />
McAleer, Neil. Odyssey: The Authorised Biography of Arthur C. Clarke.<br />
London: Victor Gollancz, 1992.<br />
Pierce, John Robinson. The Beginnings of Satellite Communications.<br />
San Francisco: San Francisco Press, 1968.<br />
_____. Science, Art, and Communication. New York: C. N. Potter, 1968.
Community antenna television<br />
The invention: A system for connecting households in isolated areas<br />
to common antennas to improve television reception, community<br />
antenna television was a forerunner of modern cable television<br />
systems.<br />
The people behind the invention:<br />
Robert J. Tarlton, the founder of CATV in eastern Pennsylvania<br />
Ed Parsons, the founder of CATV in Oregon<br />
Ted Turner (1938- ), founder of the first cable superstation,<br />
WTBS<br />
Growing Demand for Television<br />
Television broadcasting in the United States began in the late<br />
1930’s. After delays resulting from World War II, it exploded into<br />
the American public’s consciousness. The new medium relied primarily<br />
on existing broadcasting stations that quickly converted<br />
from radio to television formats. Consequently, the reception of television<br />
signals was centralized in large cities. The demand for television<br />
quickly swept across the country. Ownership of television receivers<br />
increased dramatically, and those who could not afford their<br />
own flocked to businesses, usually taverns, or to the homes of<br />
friends with sets. People in urban areas had more opportunities to<br />
view the new medium and had the advantage of more broadcasts<br />
within the range of reception. Those in outlying regions were not so<br />
fortunate, as they struggled to see fuzzy pictures and were, in some<br />
cases, unable to receive a signal at all.<br />
The situation for outlying areas worsened in 1948, when the Federal<br />
Communications Commission (FCC) implemented a ban on all<br />
new television stations while it considered how to expand the television<br />
market and how to deal with a controversy over color reception.<br />
This left areas without nearby stations in limbo, while people<br />
in areas with established stations reaped the benefits of new programming.<br />
The ban would remain in effect until 1952, when new<br />
stations came under construction across the country.<br />
Poor reception in some areas and the FCC ban on new station<br />
construction together set the stage for the development of Community<br />
Antenna Television (CATV). CATV did not have a glamorous<br />
beginning. Late in 1949, two different men, frustrated by the slow<br />
movement of television to outlying areas, set up what would become<br />
the foundation of the multimillion-dollar cable industry.<br />
Robert J. Tarlton was a radio salesman in Lansford, Pennsylvania,<br />
about sixty-five miles from Philadelphia. He wanted to move<br />
into television sales but lived in an area with poor reception. Together<br />
with friends, he founded Panther Valley Television and set<br />
up a master antenna in a mountain range that blocked the reception<br />
of Philadelphia-based broadcasting. For an installation fee of $125<br />
and a fee of $3 per month, Panther Valley Television offered residents<br />
clear reception of the three Philadelphia stations via a coaxial<br />
cable wired to their homes. At the same time, Ed Parsons, of KAST<br />
radio in Astoria, Oregon, linked homes via coaxial cables to a master<br />
antenna set up to receive remote broadcasts. Both systems offered<br />
three channels, the major network affiliates, to subscribers. By<br />
1952, when the FCC ban was lifted, some seventy CATV systems<br />
provided small and rural communities with the wonders of television.<br />
That same year, the National Cable Television Association was<br />
formed to represent the interests of the young industry.<br />
Early systems could carry only one to three channels. In 1953,<br />
CATV began to use microwave relays, which could import distant<br />
signals to add more variety and pushed system capability to twelve<br />
channels. A system of towers began sprouting up across the country.<br />
These towers could relay a television signal from a powerful<br />
originating station to each cable system’s main antenna. This further<br />
opened the reception available to subscribers.<br />
Pay Television<br />
The notion of pay television also began at this time. In 1951, the<br />
FCC authorized a test of Zenith Radio Corporation’s Phonevision in<br />
Chicago. Scrambled images could be sent as electronic impulses<br />
over telephone lines, then unscrambled by devices placed in subscribers’<br />
homes. Subscribers could order a film over the telephone<br />
for a minimal cost, usually $1. Advertisers for the system promoted
the idea of films for the “sick, aged, and sitterless.” This early test<br />
was a forerunner of the premium, or pay, channels of later decades.<br />
Network opposition to CATV came in the late 1950’s. RCA chairman<br />
David Sarnoff warned against a pay television system that<br />
could soon fall under government regulation, as in the case of utilities.<br />
In April, 1959, the FCC found no basis for asserting jurisdiction<br />
or authority over CATV. This left the industry open to tremendous<br />
growth.<br />
By 1960, the industry included 640 systems with 700,000 subscribers.<br />
Ten years later, 2,490 systems were in operation, serving<br />
more than 4.5 million households. This accelerated growth came at<br />
a price. In April, 1965, the FCC reversed itself and asserted authority<br />
over microwave-fed CATV. A year later, the entire cable system<br />
came under FCC control. The FCC quickly restricted the use of distant<br />
signals in the largest hundred markets.<br />
The FCC movement to control cable systems stemmed from the<br />
agency’s desire to balance the television market. From the onset of<br />
television broadcasting, the FCC strove to maintain a balanced programming<br />
schedule. The goal was to create local markets in which<br />
local affiliate stations prospered from advertising and other community<br />
support and would not be unduly harmed by competition<br />
from larger metropolitan stations. In addition, growth of the industry<br />
ideally was to be uniform, with large and small cities receiving<br />
equal consideration. Cable systems, particularly those that could receive<br />
distant signals via microwave relay, upset the balance. For example,<br />
a small Ohio town could receive New York channels as well<br />
as Chicago channels via cable, as opposed to receiving only the<br />
channels from one city.<br />
The balance was further upset with the creation of the Communications<br />
Satellite Corporation (COMSAT) in 1963. Satellite technology allowed a<br />
signal to be sent to a satellite, retransmitted back to Earth, and<br />
then picked up by a receiving station. This further increased the<br />
range of cable offerings and moved the transmission of television<br />
signals to a national scale, as microwave-relayed transmissions<br />
worked best in a regional scope. These two factors led the FCC to<br />
freeze the cable industry from new development and construction<br />
in December, 1968. The greatest impact of CATV would be felt after<br />
1972, when the cable freeze was lifted.<br />
Ted Turner<br />
“The whole idea of grand things always turned me on,” Ted<br />
Turner said in a 1978 Playboy magazine interview. Irrepressible,<br />
tenacious, and flamboyant, Turner was groomed from childhood<br />
for grandness.<br />
Born Robert Edward Turner III in 1938 in Cincinnati, Ohio,<br />
he was raised by a harsh, demanding father who sent him to<br />
military preparatory schools and insisted he study business at<br />
Brown University instead of attending the U.S. Naval Academy,<br />
as the son wanted. Known as “Terrible Ted” in<br />
school for his high-energy, maverick ways, he became<br />
a champion debater, expert sailor, and natural<br />
leader. When the Turner Advertising Company failed<br />
in 1960 and his father committed suicide, young<br />
Turner took it over and parlayed it into an empire, acquiring<br />
or creating television stations and revolutionizing<br />
how they were broadcast to Americans.<br />
From then on he acquired, innovated, and, often,<br />
shocked. He bought the Atlanta Braves baseball team<br />
and the Hawks basketball team, often angering sports<br />
executives with his recruiting methods and earning the<br />
nicknames “Mouth of the South” and “Captain Outrageous”<br />
for his assertiveness. He won the prestigious America’s Cup in<br />
1977 at the helm of the yacht Courageous. He bought Metro-<br />
Goldwyn-Mayer/United Artists and incensed movie purists by<br />
having black-and-white classics “colorized.” In 1995 he concluded<br />
a $7.5 billion merger of Turner Broadcasting and Time<br />
Warner and set about an insult-slinging business war with another<br />
media tycoon, Rupert Murdoch. Meanwhile, he went<br />
through three marriages, the last to movie star Jane Fonda, and<br />
became the largest private landholder in the nation, with luxury<br />
homes in six states.<br />
However, Turner’s life was not all acquisition. He started a<br />
charitable foundation and sponsored the Olympics-like Goodwill<br />
Games between the United States and the Soviet Union to<br />
improve relations, and Time magazine named him its Man of<br />
the Year in 1991. Turner’s grandest shocker came<br />
in 1997 when he promised to donate $1 billion—$100 million<br />
each year for a decade—to the United Nations to help in feeding<br />
the poor, resettling refugees, and eradicating land mines.<br />
And he publicly challenged other super-rich people to use their<br />
vast wealth similarly.<br />
(George Bennett)
Impact<br />
The founding of cable television had a two-tier effect on the<br />
American public. The immediate impact of CATV was the opening<br />
of television to areas cut off from network broadcasting as a result of<br />
distance or topographical obstructions. Cable brought television to<br />
those who would otherwise have missed the early years of the medium.<br />
As technology furthered the capabilities of the industry, a second<br />
impact emerged. Along with the 1972 lifting of the ban on cable expansion,<br />
the FCC established strict guidelines for the advancement<br />
of the industry. Issuing a 500-page blueprint for the expansion of cable,<br />
the FCC included limits on the use of imported distant signals,<br />
required the blacking out of some specific programs (films and serials,<br />
for example), and limited pay cable to films that were more than<br />
two years old and to sports.<br />
Another component of the guidelines required all systems that<br />
went into operation after March, 1972 (and all systems by March,<br />
1977), to provide public access channels for education and local<br />
government. In addition, channels were to be made available for<br />
lease. These access channels opened information to subscribers that<br />
would not normally be available. Local governments and school<br />
boards began to broadcast meetings, and even high school athletics<br />
soon appeared via public access channels. These channels also<br />
provided space to local educational institutions for home-based<br />
courses in a variety of disciplines.<br />
Cable Communications Policy Act<br />
Further FCC involvement came in the 1984 Cable Communications<br />
Policy Act, which deregulated the industry and opened the<br />
door for more expansion. This act removed local control over cable<br />
service rates and virtually made monopolies out of local providers<br />
by limiting competition. The late 1980’s brought a new technology,<br />
fiber optics, which promised to further advance the industry by increasing<br />
the quality of cable services and channel availability.<br />
One area of the cable industry, pay television, took off in the<br />
1970’s and early 1980’s. The first major pay channel was developed<br />
by the media giant Time-Life. It inaugurated Home Box Office<br />
(HBO) in 1975 as the first national satellite-interconnected network.<br />
Early HBO programming primarily featured films but included no<br />
films less than two years old (meeting the 1972 FCC guidelines), no<br />
serials, and no advertisements. Other premium movie channels followed,<br />
including Showtime, Cinemax, and The Movie Channel. By<br />
the late 1970’s, cable systems offered multiple premium channels to<br />
their subscribers.<br />
Superstations were another component of the cable industry that<br />
boomed in the 1970’s and 1980’s. The first, WTBS, was owned and<br />
operated by Ted Turner and broadcast from Atlanta, Georgia. It emphasized<br />
films and reruns of old television series. Cable systems<br />
that broadcast WTBS were asked to allocate the signal to channel 17,<br />
thus creating uniformity across the country for the superstation.<br />
Chicago’s WGN and New York City’s WOR soon followed, gaining<br />
access to homes across the nation via cable. Both these superstations<br />
emphasized sporting events in the early years and expanded to include<br />
films and other entertainment in the 1980’s.<br />
Both pay channels and superstations transmitted via satellites<br />
(WTBS leased space from RCA, for example) and were picked up by<br />
cable systems across the country. Other stations with broadcasts intended<br />
solely for the cable industry opened in the 1980’s. Ted Turner<br />
started the Cable News Network in 1980 and followed with the all-news<br />
network Headline News. He added another channel,<br />
Turner Network Television (TNT), in 1988. Other 1980’s additions<br />
included The Disney Channel, ESPN, The Entertainment Channel,<br />
The Discovery Channel, and Lifetime. The Cable-Satellite Public Affairs<br />
Network (C-SPAN) enhanced the cable industry’s presence in<br />
Washington, D.C., by broadcasting sessions of the House of Representatives.<br />
Specialized networks for particular audiences also developed.<br />
Music Television (MTV), featuring songs played along with video<br />
sequences, premiered in 1981. Nickelodeon, a children’s channel,<br />
and VH-1, a music channel aimed at baby boomers rather than<br />
MTV’s teenage audience, reflected the movement toward specialization.<br />
Other specialized channels, such as the Sci-Fi Channel and the<br />
Comedy Channel, went even further in targeting specific audiences.<br />
Cable and the Public<br />
The impact on the American public was tremendous. Information<br />
and entertainment became available around the clock. Cable<br />
provided a new level of service, information, and entertainment unavailable<br />
to nonsubscribers. One phenomenon that exploded in the<br />
late 1980’s was home shopping. Via The Home Shopping Club and<br />
QVC, two shopping channels offered through cable television, the<br />
American public could order a full range of products. Everything<br />
from jewelry to tools and home cleaning supplies to clothing and<br />
electronics was available to anyone with a credit card. Americans<br />
could now go shopping from home.<br />
The cable industry was not without its competitors and critics. In<br />
the 1980’s, the videocassette recorder (VCR) opened the viewing<br />
market. Prerecorded cassettes of recent film releases as well as classics<br />
were made available for purchase or for a small rental fee. National<br />
chains of video rental outlets, such as Blockbuster Video and<br />
Video Towne, offered thousands of titles for rent. Libraries also began<br />
to stock films. This created competition for the cable industry, in<br />
particular the premium movie channels. To combat this competition,<br />
channels began to offer original productions unavailable on<br />
videocassette. The combined effect of the cable industry and the<br />
videocassette market was devastating to the motion picture industry.<br />
The wide variety of programming available at home encouraged<br />
the American public, especially baby boomers with children,<br />
to stay home and watch cable or rented films instead of going to theaters.<br />
Critics of the cable industry seized on the violence, sexual content,<br />
and graphic language found in some of cable’s offerings. One<br />
parent responded by developing a lockout device that could make<br />
certain channels unavailable to children. Some premium channels<br />
developed an after-hours schedule that aired adult-theme<br />
programming only late at night. Another criticism stemmed<br />
from the repetition common on pay channels. As a result of the limited<br />
supply of and large demand for films, pay channels were forced<br />
to repeat programs several times within a month and to rebroadcast<br />
films that were several years old. This led consumers to question the<br />
value of the additional monthly fee paid for such channels. To combat<br />
the problem, premium channels increased efforts aimed at original<br />
production and added more films that had not been box-office hits.<br />
By the early 1990’s, as some eleven thousand cable systems were<br />
serving 56.2 million subscribers, a new cry for regulation began. Debates<br />
over services and increasingly high rates led the FCC and<br />
Congress to investigate the industry, opening the door for new<br />
guidelines. The non-cable networks—American<br />
Broadcasting Company (ABC), Columbia Broadcasting System<br />
(CBS), National Broadcasting Company (NBC), and newcomer<br />
Fox—stressed their concerns about the cable industry. These networks<br />
provided free programming, and cable systems profited<br />
from inclusion of network programming. Television industry representatives<br />
expressed the opinion that cable providers should pay<br />
for the privilege of retransmitting network broadcasts.<br />
The impact on cable’s subscribers, especially concerning monthly<br />
cable rates, came under heavy debate in public and government forums.<br />
The administration in Washington, D.C., expressed concern<br />
that cable rates had risen too quickly and for no obvious reason other<br />
than profit-seeking by what were essentially monopolistic local cable<br />
systems. What was clear was that the cable industry had transformed<br />
the television experience and was going to remain a powerful force<br />
within the medium. Regulators and television industry leaders were<br />
left to determine how to maintain an equitable coexistence within the<br />
medium.<br />
See also Color television; Communications satellite; Fiber-optics;<br />
Telephone switching; Television.<br />
Further Reading<br />
Baldwin, Thomas F., and D. Stevens McVoy. Cable Communication.<br />
Englewood Cliffs, N.J.: Prentice-Hall, 1983.<br />
Brenner, Daniel L., and Monroe E. Price. Cable Television and Other<br />
Nonbroadcast Video: Law <strong>and</strong> Policy. New York: Clark Boardman,<br />
1986.<br />
Burns, R. W. Television: An International History of the Formative Years.<br />
London: Institution of Electrical Engineers in Association with<br />
the Science Museum, 1998.
Coleman, Wim. The Age of Broadcasting: Television. Carlisle, Mass.:<br />
Discovery Enterprises, 1997.<br />
Negrine, Ralph M., ed. Cable Television and the Future of Broadcasting.<br />
New York: St. Martin’s Press, 1985.<br />
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to<br />
Television. Durham, N.C.: Duke University Press, 2000.<br />
Whittemore, Hank. CNN: The Inside Story. Boston: Little, Brown,<br />
1990.
Compact disc<br />
The invention: A plastic disc on which digitized music or computer<br />
data is stored.<br />
The people behind the invention:<br />
Akio Morita (1921- ), a Japanese physicist and engineer<br />
who was a cofounder of Sony<br />
Wisse Dekker (1924- ), a Dutch businessman who led the<br />
Philips company<br />
W. R. Bennett (1904-1983), an American engineer who was a<br />
pioneer in digital communications and who played an<br />
important part in the Bell Laboratories research program<br />
Digital Recording<br />
The digital system of sound recording, like the analog methods<br />
that preceded it, was developed by the telephone companies to improve<br />
the quality and speed of telephone transmissions. The system<br />
of electrical recording introduced by Bell Laboratories in the 1920’s<br />
was part of this effort. Even Edison’s famous invention of the phonograph<br />
in 1877 was originally conceived as an accompaniment to<br />
the telephone. Although developed within the framework of telephone<br />
communications, these innovations found wide applications<br />
in the entertainment industry.<br />
The basis of the digital recording system was a technique for sampling<br />
the electrical waveforms of sound called PCM, or pulse code<br />
modulation. PCM measures the characteristics of these waves and<br />
converts them into numbers. This technique was developed at Bell<br />
Laboratories in the 1930’s to transmit speech. At the end of World<br />
War II, engineers of the Bell System began to adapt PCM technology<br />
for ordinary telephone communications.<br />
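The sampling idea behind PCM can be sketched in a few lines of Python. The 8,000-samples-per-second rate and 8-bit word size below are illustrative values typical of digital telephony, not figures taken from this essay:

```python
import math

# PCM in miniature: measure the wave at fixed instants, then round
# (quantize) each measurement to the nearest of a fixed set of levels.
SAMPLE_RATE = 8000   # samples per second (telephone-grade)
BITS = 8             # bits per sample
LEVELS = 2 ** BITS   # 256 discrete amplitude levels

def pcm_encode(wave, duration):
    """Turn a continuous wave (a function of time) into integer codes."""
    codes = []
    for n in range(int(SAMPLE_RATE * duration)):
        t = n / SAMPLE_RATE               # the sampling instant
        sample = wave(t)                  # amplitude in [-1.0, 1.0]
        codes.append(round((sample + 1) / 2 * (LEVELS - 1)))
    return codes

# A 440 Hz tone sampled for one hundredth of a second -> 80 integers.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
codes = pcm_encode(tone, 0.01)
print(len(codes), min(codes), max(codes))
```

Each integer can then be written out as a string of binary digits, which is all a telephone line, a tape, or a disc ultimately has to carry.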
The problem of turning sound waves into numbers was that of<br />
finding a method that could quickly and reliably manipulate millions<br />
of them. The answer to this problem was found in electronic computers,<br />
which used binary code to handle millions of computations in a<br />
few seconds. The rapid advance of computer technology and the<br />
semiconductor circuits that gave computers the power to handle<br />
complex calculations provided the means to bring digital sound technology<br />
into commercial use. In the 1960’s, digital transmission and<br />
switching systems were introduced to the telephone network.<br />
Pulse code modulation of audio signals into digital code achieved<br />
standards of reproduction that exceeded even the best analog system,<br />
creating an enormous dynamic range of sounds with no distortion<br />
or background noise. The importance of digital recording went<br />
beyond the transmission of sound because it could be applied to all<br />
types of magnetic recording in which the source signal is transformed<br />
into an electric current. There were numerous commercial<br />
applications for such a system, and several companies began to explore<br />
the possibilities of digital recording in the 1970’s.<br />
Researchers at the Sony, Matsushita, and Mitsubishi electronics<br />
companies in Japan produced experimental digital recording systems.<br />
Each developed its own PCM processor, an integrated circuit<br />
that changes audio signals into digital code. It does not continuously<br />
transform sound but instead samples it by analyzing thousands<br />
of minute slices of it per second. Sony’s PCM-F1 was the first<br />
analog-to-digital conversion chip to be produced. This gave Sony a<br />
lead in the research into and development of digital recording.<br />
All three companies had strong interests in both audio and video<br />
electronics equipment and saw digital recording as a key technology<br />
because it could deal with both types of information simultaneously.<br />
They devised recorders for use in their manufacturing operations.<br />
After using PCM techniques to turn sound into digital code, they recorded<br />
this information onto tape, using not magnetic audio tape but<br />
the more advanced video tape, which could handle much more information.<br />
The experiments with digital recording occurred simultaneously<br />
with the accelerated development of video recording technology<br />
and owed much to the enhanced capabilities of video recorders.<br />
At this time, videocassette recorders were being developed in<br />
several corporate laboratories in Japan and Europe. The Sony Corporation<br />
was one of the companies developing video recorders at this<br />
time. Its U-matic machines were successfully used to record digitally.<br />
In 1972, the Nippon Columbia Company began to make its master recordings<br />
digitally on an Ampex video recording machine.<br />
Links Among New Technologies<br />
There were powerful links between the new sound recording<br />
systems and the emerging technologies of storing and retrieving<br />
video images. The television had proved to be the most widely used<br />
and profitable electronic product of the 1950’s, but with the market<br />
for color television saturated by the end of the 1960’s, manufacturers<br />
had to look for a replacement product. A machine to save and replay<br />
television images was seen as the ideal companion to the family<br />
TV set. The great consumer electronics companies—General<br />
Electric and RCA in the United States, Philips and Telefunken in Europe,<br />
and Sony and Matsushita in Japan—began experimental programs<br />
to find a way to save video images.<br />
RCA’s experimental teams took the lead in developing a<br />
videodisc system, called SelectaVision, that used an electronic stylus<br />
to read changes in capacitance on the disc. The greatest challenge to<br />
them came from the Philips company of Holland. Its optical videodisc<br />
used a laser beam to read information on a revolving disc, in<br />
which a layer of plastic contained coded information. With the aid<br />
of the engineering department of the Deutsche Grammophon record<br />
company, Philips had an experimental laser disc in hand by<br />
1964.<br />
The Philips Laservision videodisc was not a commercial success,<br />
but it carried forward an important idea. The research and engineering<br />
work carried out in the laboratories at Eindhoven in Holland<br />
proved that the laser reader could do the job. More important,<br />
Philips engineers had found that this fragile device could be mass-produced<br />
as a cheap and reliable component of a commercial product.<br />
The laser optical decoder was applied to reading the binary<br />
codes of digital sound. By the end of the 1970’s, Philips engineers<br />
had produced a working system.<br />
Ten years of experimental work on the Laservision system proved<br />
to be a valuable investment for the Philips corporation. Around<br />
1979, it started to work on a digital audio disc (DAD) playback system.<br />
This involved more than the basic idea of converting the output<br />
of the PCM conversion chip onto a disc. The lines of pits on the<br />
compact disc carry a great amount of information: the left- and<br />
right-hand tracks of the stereo system are identified, and a sequence<br />
of pits also controls the motor speed and corrects any error in the laser<br />
reading of the binary codes.<br />
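The error correction actually used on compact discs is an interleaved Reed-Solomon scheme far more powerful than anything that fits in a few lines, but the underlying principle can be illustrated with a single even-parity bit, which lets a reader at least detect a one-bit misread:

```python
def add_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the word still has even parity (no single-bit error)."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1, 0, 1, 0, 1])
assert parity_ok(word)            # read back cleanly
corrupted = word.copy()
corrupted[3] ^= 1                 # flip one bit, as a scratch might
assert not parity_ok(corrupted)   # the damage is detected
```

Real CD codes go further: by spreading the check symbols along the spiral of pits, they can reconstruct the data even when a scratch wipes out a long run of bits.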
This research was carried out jointly with the Sony Corporation<br />
of Japan, which had produced a superior method of encoding digital<br />
sound with its PCM chips. The binary codes that carried the information<br />
were manipulated by Sony’s sixteen-bit microprocessor.<br />
Its PCM chip for analog-to-digital conversion was also employed.<br />
Together, Philips and Sony produced a commercial digital playback<br />
record that they named the compact disc. The name is significant, as<br />
it does more than indicate the size of the disc—it indicates family<br />
ties with the highly successful compact cassette. Philips and Sony<br />
had already worked to establish this standard in the magnetic tape<br />
format and aimed to make their compact disc the standard for digital<br />
sound reproduction.<br />
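The sixteen-bit word length mentioned above is what gives the compact disc its wide dynamic range. A standard rule of signal processing (the figure is not from this essay) puts the ratio between the loudest representable signal and one quantization step at roughly six decibels per bit:

```python
import math

def dynamic_range_db(bits):
    """Ratio of full scale to one quantization step, in decibels:
    20 * log10(2**bits), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"{dynamic_range_db(16):.1f} dB for 16-bit samples")
```

Sixteen bits yield roughly 96 dB, comfortably more than the best analog tape or vinyl of the day could deliver.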
Philips and Sony began to demonstrate their compact digital disc (CD) system to representatives of the audio industry in 1981. They were not alone in digital recording. The Japanese Victor Company, a subsidiary of Matsushita, had developed a version of digital recording from its VHD video disc design, called the audio high density disc (AHD). Instead of the small CD, the AHD system used a ten-inch vinyl disc. Each digital recording system used a different PCM chip with a different rate of sampling the audio signal. The recording and electronics industries' decision to standardize on the Philips/Sony CD system was therefore a major victory for these companies and an important event in the digital era of sound recording.
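Pulse-code modulation (PCM) itself is simple to sketch: the analog signal is measured at a fixed sampling rate, and each measurement is rounded to the nearest step of a fixed binary scale. The CD standard settled on 44,100 sixteen-bit samples per second; the code below is only an illustration of the principle, not any manufacturer's chip design.

```python
import math

def pcm_encode(signal, sample_rate, duration, bits):
    """Sample an analog signal and quantize each sample to a signed integer."""
    levels = 2 ** (bits - 1) - 1          # e.g. 32767 for 16-bit audio
    n = int(sample_rate * duration)
    return [round(signal(t / sample_rate) * levels) for t in range(n)]

# A 1 kHz sine tone, sampled at the CD rate of 44,100 Hz for one millisecond.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = pcm_encode(tone, sample_rate=44100, duration=0.001, bits=16)

print(len(samples))   # 44 samples in one millisecond
```

A system sampling at a different rate, as AHD did, produces an incompatible stream of numbers from the same music, which is why a single industry-wide standard mattered so much.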
Sony had found out the hard way that the technical performance of an innovation is irrelevant when compared with the politics of turning it into an industrywide standard. Although the pioneer in videocassette recorders, Sony had been beaten by its rival, Matsushita, in establishing the video recording standard. This mistake was not repeated in the digital standards negotiations, and many companies were persuaded to license the new technology. In 1982, the technology was announced to the public. The following year, the compact disc was on the market.

Although not much larger than a 3.5-inch floppy disk, a compact disc can store more than five hundred times as much data. (PhotoDisc)
The Apex of Sound Technology

The compact disc represented the apex of recorded sound technology. Simply put, here at last was a system of recording in which there was no extraneous noise (no surface noise of scratches and pops, no tape hiss, no background hum) and in which no damage was done to the recording as it was played. In principle, a digital recording will last forever, and each play will sound as pure as the first. The compact disc could also play much longer than the vinyl record or long-playing cassette tape.
Despite these obvious technical advantages, the commercial success of digital recording was not ensured. There had been several other advanced systems that had not fared well in the marketplace, and the conspicuous failure of quadraphonic sound in the 1970's had not been forgotten within the industry of recorded sound. Historically, there were two key factors in the rapid acceptance of a new system of sound recording and reproduction: a library of prerecorded music to tempt the listener into adopting the system and a continual decrease in the price of the playing units to bring them within the budgets of more buyers.

By 1984, there were about a thousand titles available on compact disc in the United States; that number had doubled by 1985. Although many of these selections were classical music (it was naturally assumed that audiophiles would be the first to buy digital equipment), popular music was well represented. The first CD available for purchase was an album by popular entertainer Billy Joel.
The first CD-playing units cost more than $1,000, but Akio Morita of Sony was determined that the company should reduce the price of players even if it meant selling them below cost. Sony's audio engineering department improved the performance of the players while reducing size and cost. By 1984, Sony had a small CD unit on the market for $300. Several of Sony's competitors, including Matsushita, had followed its lead into digital reproduction. There were several compact disc players available in 1985 that cost less than $500. Sony quickly applied digital technology to the popular personal stereo and to automobile sound systems. Sales of CD units increased roughly tenfold from 1983 to 1985.

Akio Morita

Akio Morita was born in Nagoya, Japan, in 1921 into a family owning one of the country's oldest and most prosperous sake breweries. As the eldest son, Morita was expected to take over its management from his father. However, business did not interest him as a child. Electronics did, especially radios. He made his own radio and phonograph and resolved to be a scientist. He succeeded, but in an ironic twist, he also became one of the twentieth century's most successful businessmen.

After taking a degree in physics from Osaka Imperial University in 1944, he worked at the Naval Research Center. There he met Masaru Ibuka. Although Ibuka was twelve years older and much more reserved in temperament, the two became fast friends. After World War II, they borrowed the equivalent of about $500 from Morita's father and opened the Tokyo Telecommunications Company, making voltmeters and, later, tape recorders.

To help along sluggish sales, Morita visited local schools to demonstrate the tape recorder's usefulness in teaching. He was so successful that a third of Japan's elementary schools bought them. From then on, Morita, as vice president of the company, was the lead man in marketing and sales strategy. He bought rights from Western Electric Company to manufacture transistors in 1954, and soon the company was turning out transistor radios. Sales soared. The partners changed the company's name to Sony (based on the Latin word for sound, sonus) because it was more memorable. Despite an American bias against Japanese products, which many Americans regarded as shoddy imitations, Morita launched Sony America in 1960. In 1963 Sony became the first Japanese company to sell its stock in America and in 1970 the first to be listed on the New York Stock Exchange, opening an American factory two years later. Morita became president of Sony Corporation in 1971 and board chairman in 1976.

In 1984 Sony's earnings exceeded $5 billion, a ten-million-fold increase in worth in less than forty years. As important for Japanese industry and national honor, Morita and Sony moved Japanese electronics into the leading edge of technical sophistication and craftsmanship.
Impact on Vinyl Recording

When the compact disc was announced in 1982, the vinyl record was the leading form of recorded sound, with 273 million units sold annually compared to 125 million prerecorded cassette tapes. The compact disc sold slowly, beginning with 800,000 units shipped in 1983 and rising to 53 million in 1986. By that time, the cassette tape had taken the lead, with slightly fewer than 350 million units. The vinyl record was in decline, with only about 110 million units shipped. Compact discs first outsold vinyl records in 1988. In the ten years from 1979 to 1988, the sales of vinyl records dropped nearly 80 percent. In 1989, CDs accounted for more than 286 million sales, but cassettes still led the field with total sales of 446 million. The compact disc finally passed the cassette in total sales in 1992, when more than 300 million CDs were shipped, an increase of 22 percent over the figure for 1991.
The introduction of digital recording had an invigorating effect on the industry of recorded sound, which had been unable to recover fully from the slump of the late 1970's. Sales of recorded music had stagnated in the early 1980's, and an industry accustomed to steady increases in output became eager to find a new product or style of music to boost its sales. The compact disc was the product to revitalize the market for both recordings and players. During the 1980's, worldwide sales of recorded music jumped from $12 billion to $22 billion, with about half of the sales volume accounted for by digital recordings by the end of the decade.

The success of digital recording served in the long run to undermine the commercial viability of the compact disc. This was a play-only technology, like the vinyl record before it. Once users had become accustomed to the pristine digital sound, they clamored for digital recording capability. The alliance of Sony and Philips broke down in the search for a digital tape technology for home use. Sony produced a digital tape system called DAT, while Philips responded with a digital version of its compact audio tape called DCC. Sony answered the challenge of DCC with its MiniDisc (MD) product, which can record and replay digitally.
The versatility of digital recording has opened up a wide range of consumer products. Compact disc technology has been incorporated into the computer, in which CD-ROM readers convert the digital code of the disc into sound and images. Many home computers have the capability to record and replay sound digitally. Digital recording is the basis for interactive audio/video computer programs in which the user can interact with recorded sound and images. Philips has established a strong foothold in interactive digital technology with its CD-I (compact disc interactive) system, which was introduced in 1990. This acts as a multimedia entertainer, providing sound, moving images, games, and interactive sound-and-image publications such as encyclopedias. The future of digital recording will be broad-based systems that can record and replay a wide variety of sounds and images and that can be manipulated by users of home computers.

See also Cassette recording; Dolby noise reduction; Electronic synthesizer; FM radio; Laser-diode recording process; Optical disk; Transistor; Videocassette recorder; Walkman cassette player.
Further Reading

Copeland, Peter. Sound Recordings. London: British Library, 1991.
Heerding, A. A Company of Many Parts. Cambridge: Cambridge University Press, 1998.
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura. Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.
Nathan, John. Sony: The Private Life. Boston, Mass.: Houghton Mifflin, 1999.
Schlender, Brenton R. "How Sony Keeps the Magic Going." Fortune 125 (February 24, 1992).
Compressed-air-accumulating power plant

The invention: Plants that can be used to store energy in the form of compressed air when electric power demand is low and use it to produce energy when power demand is high.

The organization behind the invention:
Nordwestdeutsche Kraftwerke, a German company

Power, Energy Storage, and Compressed Air
Energy, which can be defined as the capacity to do work, is essential to all aspects of modern life. One familiar kind of energy, which is produced in huge amounts by power companies, is electrical energy, or electricity. Most electricity is produced in a process that consists of two steps. First, a fossil fuel such as coal is burned and the resulting heat is used to make steam. Then, the steam is used to operate a turbine system that produces electricity. Electricity has myriad applications, including the operation of heaters, home appliances of many kinds, industrial machinery, computers, and artificial illumination systems.

An essential feature of electricity manufacture is the production of the particular amount of electricity that is needed at a given time. If moment-to-moment energy requirements are not met, the city or locality involved will experience a "blackout," the most obvious feature of which is the loss of electrical lighting. To prevent blackouts, it is essential to store extra electricity at times when power production exceeds power demand. Then, when power demand exceeds the capacity to make energy by normal means, stored energy can be used to make up the difference.

One successful modern procedure for such storage is the compressed-air-accumulation process, pioneered by the Nordwestdeutsche Kraftwerke company's compressed-air-accumulating power plant, which opened in December, 1978. The plant, which is located in Huntorf, Germany (at the time, West Germany), makes compressed air during periods of low electricity demand, stores the air in an underground cavern, and uses it to produce extra electricity during periods of high demand.
Plant Operation and Components

The German 300-megawatt compressed-air-accumulating power plant in Huntorf produces extra electricity from stored compressed air, enough to cover up to four hours per day of local peak electricity needs. The energy-storage process, which is vital to meeting very high peak electric power demands, is viable for electric power plants whose usual total electric outputs range from 25 megawatts to the 300 megawatts produced at Huntorf. It has been suggested, however, that the process is most suitable for 25- to 50-megawatt plants.

The energy-storage procedure used at Huntorf is quite simple. All the surplus electricity that is made in nonpeak-demand periods is used to drive an air compressor. The compressor pumps air from the surrounding atmosphere into an airtight underground storage cavern. When extra electricity is required, the stored compressed air is released and passed through a heating unit to be warmed, after which it is used to run gas-turbine systems that produce electricity. This sequence of events is the same as that used in any gas-turbine generating system; the only difference is that the compressed air can be stored for any desired period of time rather than having to be used immediately.
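That charge-when-slack, discharge-at-peak logic can be sketched as a simple dispatch loop. The numbers and the round-trip efficiency below are illustrative assumptions, not Huntorf's actual operating parameters.

```python
def dispatch(demand_mw, capacity_mw, store_mwh, store_max_mwh, efficiency=0.7):
    """One time step of a compressed-air storage plant (illustrative sketch).

    When generating capacity exceeds demand, the surplus drives the compressor
    and fills the cavern; when demand exceeds capacity, stored air is released
    to run the gas turbines. Returns (power_served_mw, new_store_mwh).
    """
    if demand_mw <= capacity_mw:
        surplus = capacity_mw - demand_mw
        # Compression losses mean only part of the surplus is banked.
        store_mwh = min(store_max_mwh, store_mwh + surplus * efficiency)
        return demand_mw, store_mwh
    shortfall = demand_mw - capacity_mw
    discharge = min(shortfall, store_mwh)
    return capacity_mw + discharge, store_mwh - discharge

# Low overnight demand charges the cavern; the evening peak draws it down.
served, store = dispatch(demand_mw=150, capacity_mw=250, store_mwh=0, store_max_mwh=580)
served, store = dispatch(demand_mw=320, capacity_mw=250, store_mwh=store, store_max_mwh=580)
print(served)   # 320.0 -- the peak is fully met from storage
```

The efficiency factor is the point of comparison with hydro-storage later in the article: each technology loses some energy per charge/discharge cycle, and the terrain and cost constraints decide which is practical at a given site.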
One requirement of any compressed-air-accumulating power plant is an underground storage chamber. The Huntorf plant utilizes a cavern that was hollowed out some 450 meters below the surface of the earth. The cavern was created by drilling a hole into an underground salt deposit and pumping in water. The water dissolved the salt, and the resultant saltwater solution (brine) was pumped out of the deposit. The process of pumping in water and removing brine was continued until the cavern reached the desired size. This type of storage cavern is virtually leak-free. The preparation of such underground salt-dome caverns has been practiced roughly since the middle of the twentieth century. Until the Huntorf endeavor, such caverns were used to stockpile petroleum and natural gas for later use. It is also possible to use mined, hard-rock caverns for compressed-air accumulation when it is necessary to compress air to pressures higher than those that can be maintained effectively in a salt-dome cavern.
The essential machinery that must be added to a conventional power plant to turn it into a compressed-air-accumulating power plant consists of motor-driven air compressors and gas-turbine generating systems. This equipment must be connected appropriately so that in the storage mode, the overall system will compress air for storage in the underground cavern, and in the power-production mode, the system will produce electricity from the stored compressed air.

Large compressed-air-accumulating power plants require specially constructed machinery. For example, the compressors that are used at Huntorf were developed specifically for that plant by Sulzer, a Swiss company. When the capacity of such a plant is no higher than 50 megawatts, however, standard, readily available components can be used. This means that relatively small compressed-air-accumulating power plants can be constructed for a reasonable cost.
Consequences

Schematic of a compressed-air-accumulating power plant.
The development of compressed-air-accumulating power plants has had a significant impact on the electric power industry, adding to its capacity to store energy. The main storage methods available prior to the development of compressed-air-accumulation methodology were batteries and water that was pumped uphill (hydro-storage). Battery technology is expensive, and its capacity is insufficient for major, long-term power storage. Hydro-storage is a more viable technology.

Compressed-air energy-storage systems have several advantages over hydro-storage. First, they can be used in areas where flat terrain makes it impossible to use hydro-storage. Second, compressed-air storage is more efficient than hydro-storage. Finally, the fact that standard plant components can be used, along with several other factors, means that 25- to 50-megawatt compressed-air storage plants can be constructed much more quickly and cheaply than comparable hydro-storage plants.

The attractiveness of compressed-air-accumulating power plants has motivated efforts to develop hard-rock cavern construction techniques that cut costs and make it possible to use high-pressure air storage. In addition, aquifers (underground strata of porous rock that normally hold groundwater) have been used successfully for compressed-air storage. It is expected that compressed-air-accumulating power plants will be widely used in the future, which will help to decrease pollution and cut the use of fossil fuels.
See also Alkaline storage battery; Breeder reactor; Fuel cell; Geothermal power; Heat pump; Nuclear power plant; Tidal power plant.

Further Reading

"Compressed Air Stores Electricity." Popular Science 242, no. 5 (May, 1993).
Lee, Daehee. "Power to Spare: Compressed Air Energy Storage." Mechanical Engineering 113, no. 7 (July, 1991).
Shepard, Sam, and Septimus van der Linden. "Compressed Air Energy Storage Adapts Proven Technology to Address Market Opportunities." Power Engineering 105, no. 4 (April, 2001).
Zink, John C. "Who Says You Can't Store Electricity?" Power Engineering 101, no. 3 (March, 1997).
Computer chips

The invention: Also known as a microprocessor, a computer chip combines the basic logic circuits of a computer on a single silicon chip.

The people behind the invention:
Robert Norton Noyce (1927-1990), an American physicist
William Shockley (1910-1989), an American coinventor of the transistor who was a cowinner of the 1956 Nobel Prize in Physics
Marcian Edward Hoff, Jr. (1937- ), an American engineer
Jack St. Clair Kilby (1923- ), an American researcher and assistant vice president of Texas Instruments

The Shockley Eight
The microelectronics industry began shortly after World War II with the invention of the transistor. While radar was being developed during the war, it was discovered that certain crystalline substances, such as germanium and silicon, possess unique electrical properties that make them excellent signal detectors. This class of materials became known as "semiconductors," because they are neither good conductors nor good insulators of electricity.

Immediately after the war, scientists at Bell Telephone Laboratories began to conduct research on semiconductors in the hope that they might yield some benefits for communications. The Bell physicists learned to control the electrical properties of semiconductor crystals by "doping" (treating) them with minute impurities. When two thin wires carrying current were attached to this material, a crude device was obtained that could amplify the voice. The transistor, as this device was called, was developed late in 1947. The transistor duplicated many functions of vacuum tubes; it was also smaller, required less power, and generated less heat. The three Bell Laboratories scientists who guided its development, William Shockley, Walter H. Brattain, and John Bardeen, won the 1956 Nobel Prize in Physics for their work.
Shockley left Bell Laboratories and went to Palo Alto, California, where he formed his own company, Shockley Semiconductor Laboratories, a subsidiary of Beckman Instruments. Palo Alto is the home of Stanford University, which, in 1954, set aside 655 acres of land for a high-technology industrial area known as Stanford Research Park. One of the first small companies to lease a site there was Hewlett-Packard. Many others followed, and the surrounding area of Santa Clara County gave rise in the 1960's and 1970's to a booming community of electronics firms that became known as "Silicon Valley." On the strength of his prestige, Shockley recruited eight young scientists from the eastern United States to work for him. One was Robert Norton Noyce, an Iowa-bred physicist with a doctorate from the Massachusetts Institute of Technology. Noyce came to Shockley's company in 1956.

The "Shockley Eight," as they became known in the industry, soon found themselves at odds with their boss over issues of research and development. Seven of the dissenting scientists negotiated with industrialist Sherman Fairchild, and they convinced the remaining holdout, Noyce, to join them as their leader. The Shockley Eight defected in 1957 to form a new company, Fairchild Semiconductor, in nearby Mountain View, California. Shockley's company, which never recovered from the loss of these scientists, soon went out of business.

Despite their tiny size, individual computer chips contain the basic logic circuits of entire computers. (PhotoDisc)

Jack St. Clair Kilby

Maybe the original, deepest inspiration for the integrated circuit chip was topographical: as a boy, Jack Kilby (b. 1923) often accompanied his father, an electrical engineer, on trips over the circuit of roads through his flat home state, Kansas.

In any case, he learned to love things electrical, and radios especially, from his father. Young Kilby had just started studying at the University of Illinois on his way to a degree in electrical engineering when World War II started. He joined the Office of Strategic Services (OSS), which sent him into Japanese-occupied territory to train local freedom fighters. He found the radios given to him to be heavy and unreliable, so he got hold of components on his own and built better, smaller radios.

The "better, smaller" theme stayed with him. His first job out of college was with Centralab in Milwaukee, Wisconsin, where he designed ever smaller circuits. However, the bulky, hot vacuum tubes then in use limited miniaturization. In 1952, Centralab and Kilby eagerly incorporated the newly invented transistors into their designs. Kilby found, however, that all the electrical connections needed to hook up transistors and wires in a complex circuit also limited miniaturization.

He moved to Texas Instruments in 1958. The company was working on a modular approach to miniaturization with snap-together standardized parts. Kilby had a better idea: place everything for a specific circuit on a chip of silicon. Along with many other inventors, Kilby was soon looking for ways to put this new integrated circuit to work. He experimented with its use in computers and in generating solar power. He helped to develop the first hand-held calculator. Soon integrated circuits were in practically every electronic gadget, so that by the year 2000 his invention supported an electronic equipment industry that earned more than a trillion dollars a year.

Among his many awards, Kilby shared the 2000 Nobel Prize in Physics with Zhores I. Alferov and Herbert Kroemer, both of whom also miniaturized electronics.
Integrating Circuits

Research efforts at Fairchild Semiconductor and Texas Instruments, in Dallas, Texas, focused on putting several transistors on one piece, or "chip," of silicon. The first step involved making miniaturized electrical circuits. Jack St. Clair Kilby, a researcher at Texas Instruments, succeeded in making a circuit on a chip that consisted of tiny resistors, transistors, and capacitors, all of which were connected with gold wires. He and his company filed for a patent on this "integrated circuit" in February, 1959. Noyce and his associates at Fairchild Semiconductor followed in July of that year with an integrated circuit manufactured by means of a "planar process," which involved laying down several layers of semiconductor that were isolated by layers of insulating material. Although Kilby and Noyce are generally recognized as coinventors of the integrated circuit, Kilby alone received a membership in the National Inventors Hall of Fame for his efforts.
Consequences

By 1968, Fairchild Semiconductor had grown to a point at which many of its key Silicon Valley managers had major philosophical differences with the East Coast management of their parent company. This led to a major exodus of top-level management and engineers, many of whom started their own companies. Noyce, Gordon E. Moore, and Andrew Grove left Fairchild to form a new company in Santa Clara called Intel, with $2 million that had been provided by venture capitalist Arthur Rock. Intel's main business was the manufacture of integrated-circuit computer memory chips. By 1970, Intel was able to develop and bring to market a random-access memory (RAM) chip that was subsequently purchased in large quantities by several major computer manufacturers, providing large profits for Intel.
In 1969, Marcian Edward Hoff, Jr., an Intel research and development engineer, met with engineers from Busicom, a Japanese firm. These engineers wanted Intel to design a set of integrated circuits for Busicom's desktop calculators, but Hoff told them their specifications were too complex. Nevertheless, Hoff began to think about the possibility of incorporating all the logic circuits of a computer's central processing unit (CPU) into one chip. He began to design such a chip, called a "microprocessor," which, when combined with a chip that would hold a program and one that would hold data, would become a small, general-purpose computer. Noyce encouraged Hoff and his associates to continue their work on the microprocessor, and Busicom contracted with Intel to produce the chip. Federico Faggin, who was hired from Fairchild, did the chip layout and circuit drawings.

Circuitry of a typical computer chip. (PhotoDisc)
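The "logic circuits" being packed onto one chip are ultimately networks of simple gates. As a toy illustration (not Intel's actual design), a one-bit half adder, a basic building block of a CPU's arithmetic unit, can be written with the same AND/XOR logic that a chip realizes in silicon:

```python
def half_adder(a, b):
    """Add two one-bit inputs, returning (sum_bit, carry_bit)."""
    return a ^ b, a & b   # XOR gives the sum bit, AND gives the carry bit

def full_adder(a, b, carry_in):
    """Chain two half adders to add three bits, as a ripple-carry adder does."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 1 + 1 + 1 equals binary 11: sum bit 1, carry bit 1.
print(full_adder(1, 1, 1))   # (1, 1)
```

Chaining thousands of such gates yields a full arithmetic-logic unit; integrating them all on one die, rather than wiring discrete parts, was precisely Hoff's insight.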
In January, 1971, the Intel team finished its first working microprocessor, the 4004. The following year, Intel made a higher-capacity microprocessor, the 8008, for Computer Terminal Corporation. That company then contracted with Texas Instruments to produce a chip with the same specifications as the 8008, which was produced in June, 1972. Other manufacturers soon produced their own microprocessors.

The Intel microprocessor became the most widely used computer chip in the budding personal computer industry and may take significant credit for the PC "revolution" that soon followed. Microprocessors have become so common that people use them every day without realizing it. In addition to being used in computers, the microprocessor has found its way into automobiles, microwave ovens, wristwatches, telephones, and many other ordinary items.
See also Bubble memory; Floppy disk; Hard disk; Optical disk; Personal computer; Virtual machine.

Further Reading

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.
Reid, T. R. The Chip: How Two Americans Invented the Microchip and Launched a Revolution. New York: Random House, 2001.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Contact lenses<br />
Contact lenses<br />
The invention: Small plastic devices that fit under the eyelids, contact<br />
lenses, or “contacts,” frequently replace the more familiar<br />
eyeglasses that many people wear to correct vision problems.<br />
The people behind the invention:<br />
Leonardo da Vinci (1452-1519), an Italian artist <strong>and</strong> scientist<br />
Adolf Eugen Fick (1829-1901), a German glassblower<br />
Kevin Tuohy, an American optician<br />
Otto Wichterle (1913- ), a Czech chemist<br />
William Feinbloom (1904-1985), an American optometrist<br />
An Old Idea<br />
There are two main types of contact lenses: hard <strong>and</strong> soft. Both<br />
types are made of synthetic polymers (plastics). The basic concept of<br />
the contact lens was conceived by Leonardo da Vinci in 1508. He<br />
proposed that vision could be improved if small glass ampules<br />
filled with water were placed in front of each eye. Nothing came of<br />
the idea until glass scleral lenses were invented by the German<br />
ophthalmologist Adolf Fick. Fick’s large, heavy lenses covered the pupil<br />
of the eye, its colored iris, <strong>and</strong> part of the sclera (the white of the<br />
eye). Fick’s lenses were not useful, since they were painful to wear.<br />
In the mid-1930’s, however, plastic scleral lenses were developed<br />
by various organizations <strong>and</strong> people, including the German company<br />
I. G. Farben <strong>and</strong> the American optometrist William Feinbloom.<br />
These lenses were light <strong>and</strong> relatively comfortable; they<br />
could be worn for several hours at a time.<br />
In 1945, the American optician Kevin Tuohy developed corneal<br />
lenses, which covered only the cornea of the eye. Reportedly,<br />
Tuohy’s invention was inspired by the fact that his nearsighted wife<br />
could not bear scleral lenses but hated to wear eyeglasses. Tuohy’s<br />
lenses were hard contact lenses made of rigid plastic, but they were<br />
much more comfortable than scleral lenses <strong>and</strong> could be worn for<br />
longer periods of time. Soon after, other people developed soft contact<br />
lenses, which cover both the cornea <strong>and</strong> the iris. At present,
many kinds of contact lenses are available. Both hard <strong>and</strong> soft contact<br />
lenses have advantages for particular uses.<br />
Eyes, Tears, <strong>and</strong> Contact Lenses<br />
The camera-like human eye automatically focuses itself <strong>and</strong> adjusts<br />
to the prevailing light intensity. In addition, it never runs out of<br />
“film” <strong>and</strong> makes a continuous series of visual images. In the process<br />
of seeing, light enters the eye <strong>and</strong> passes through the clear,<br />
dome-shaped cornea, through the hole (the pupil) in the colored<br />
iris, <strong>and</strong> through the clear eye lens, which can change shape by<br />
means of muscle contraction. The lens focuses the light, which next<br />
passes across the jellylike “vitreous humor” <strong>and</strong> hits the retina.<br />
There, light-sensitive retinal cells send visual images to the optic<br />
nerve, which transmits them to the brain for interpretation.<br />
Many people have 20/20 (normal) vision, which means that they<br />
can clearly see letters on a designated line of a st<strong>and</strong>ard eye chart<br />
placed 20 feet away. Nearsighted (myopic) people have vision of<br />
20/40 or worse. This means that, 20 feet from the eye chart, they see<br />
clearly what people with 20/20 vision can see clearly at a greater<br />
distance.<br />
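The chart notation just described is a simple ratio of distances, which a few lines of code can make concrete (the function name is illustrative, not from the source):

```python
def snellen_decimal(test_distance_ft: float, normal_distance_ft: float) -> float:
    """Convert a Snellen fraction such as 20/40 to decimal acuity.

    The numerator is the distance at which the person is tested
    (20 feet on a standard chart); the denominator is the distance
    at which a person with 20/20 vision could read the same line.
    """
    return test_distance_ft / normal_distance_ft

# 20/20 vision is the baseline
print(snellen_decimal(20, 20))  # 1.0
# A 20/40 viewer resolves at 20 feet what a normal eye resolves at 40 feet
print(snellen_decimal(20, 40))  # 0.5
```

A decimal acuity below 1.0 means the viewer must stand closer than a normal observer to resolve the same letters.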
Myopia (nearsightedness) is one of the four most common visual<br />
defects. The others are hyperopia, astigmatism, <strong>and</strong> presbyopia. All<br />
are called “refractive errors” <strong>and</strong> are corrected with appropriate<br />
eyeglasses or contact lenses. Myopia, which occurs in 30 percent of<br />
humans, arises when the eyeball is too long for the lens’s focusing<br />
ability, so that images of distant objects come to a focus before they reach<br />
the retina, causing blurry vision. Hyperopia, or farsightedness, occurs<br />
when the eyeballs are too short. In hyperopia, the eye’s lenses cannot<br />
focus images of nearby objects by the time those images reach<br />
the retina, resulting in blurry vision. A more common condition is<br />
astigmatism, in which incorrectly shaped corneas make all objects<br />
appear blurred. Finally, presbyopia, part of the aging process,<br />
causes the lens of the eye to lose its elasticity. It causes progressive<br />
difficulty in seeing nearby objects. In myopic, hyperopic, or astigmatic<br />
people, bifocal (two-lens) systems are used to correct presbyopia,<br />
whereas monofocal systems are used to correct presbyopia in<br />
people whose vision is otherwise normal.
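For myopia in particular, the strength of the needed correction follows from simple lens optics: a diverging lens whose focal length equals the eye's far-point distance makes distant objects appear to lie at the far point, where the myopic eye can focus them. A simplified sketch that ignores the small lens-to-eye (vertex) distance:

```python
def myopia_correction_diopters(far_point_m: float) -> float:
    """Lens power (in diopters, i.e., 1/meters) that moves a
    myopic eye's far point out to infinity.

    A diverging lens with focal length equal to the far-point
    distance forms a virtual image of distant objects at the far
    point, so the required power is the negative reciprocal of
    that distance.
    """
    return -1.0 / far_point_m

# An eye that sees clearly only out to half a meter needs about a -2.00 D lens
print(myopia_correction_diopters(0.5))  # -2.0
```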
William Feinbloom<br />
William Feinbloom started his career in eye care when he<br />
was only three, helping his father, an optometrist, in his practice.<br />
Born in Brooklyn, New York, in 1904, Feinbloom studied at<br />
the Columbia School of Optometry <strong>and</strong> graduated at nineteen.<br />
He later earned degrees in physics, mathematics, biophysics,<br />
and psychology, all to help him treat people who suffered<br />
visual impairments. His many achievements on behalf of<br />
the partially sighted won him professional accolades as the “father<br />
of low vision.”<br />
In 1932, while working in a clinic, Feinbloom produced the<br />
first of his special vision-enhancing inventions. He ground<br />
three-power lenses, imitating the primary lens of a refracting<br />
telescope, <strong>and</strong> fit them in a frame for an elderly patient whose<br />
vision could not otherwise be treated. The patient was again able to see,<br />
and when news of this miracle later reached Pope Pius XI, he<br />
sent a special blessing to Feinbloom. Feinbloom soon opened his own<br />
practice and during the next fifty years invented a series of new<br />
lenses for people with macular degeneration <strong>and</strong> other vision<br />
diseases, as well as making the first set of contact lenses in<br />
America.<br />
In 1978 Feinbloom bequeathed his practice to the Pennsylvania<br />
College of Optometry, which named it the William Feinbloom<br />
Vision Rehabilitation Center. Every year the William<br />
Feinbloom Award honors a vision-care specialist who has improved<br />
the delivery <strong>and</strong> quality of optometric service. Feinbloom<br />
died in 1985.<br />
Modern contact lenses, which many people prefer to eyeglasses,<br />
are used to correct all common eye defects as well as many others<br />
not mentioned here. The lenses float on the layer of tears that is<br />
made continuously to nourish the eye <strong>and</strong> keep it moist. They fit under<br />
the eyelids <strong>and</strong> either over the cornea or over both the cornea<br />
<strong>and</strong> the iris, <strong>and</strong> they correct visual errors by altering the eye’s focal<br />
length enough to produce 20/20 vision. In addition to being more attractive<br />
than eyeglasses, contact lenses correct visual defects more effectively<br />
than eyeglasses can. Some soft contact lenses (all are made<br />
of flexible plastics) can be worn almost continuously. Hard lenses are
made of more rigid plastic <strong>and</strong> last longer, though they can usually be<br />
worn only for six to nine hours at a time. The choice of hard or soft<br />
lenses must be made on an individual basis.<br />
The disadvantages of contact lenses include the fact that they must<br />
be cleaned frequently to prevent eye irritation. Furthermore, people<br />
who do not produce adequate amounts of tears (a condition called<br />
“dry eyes”) cannot wear them. Also, arthritis, many allergies, <strong>and</strong><br />
poor manual dexterity caused by old age or physical problems make<br />
many people poor c<strong>and</strong>idates for contact lenses.<br />
Impact<br />
The invention of Plexiglas hard scleral contact lenses set the stage<br />
for the development of the widely used corneal hard lenses by Tuohy.<br />
The development of soft contact lenses available to the general public<br />
began in Czechoslovakia in the 1960’s. It led to the sale, starting in the<br />
1970’s, of the popular soft contact lenses pioneered by Otto Wichterle.<br />
The Wichterle lenses, which cover both the cornea and the iris, are<br />
made of a plastic called HEMA (short for hydroxyethyl methacrylate).<br />
These very thin lenses have disadvantages that include the requirement<br />
of disinfection between uses, incomplete astigmatism correction, low<br />
durability, and the possibility of chemical combination with some<br />
medications, which can damage the eyes. Therefore, much research is<br />
being carried out to improve them. For this reason, and because of the<br />
continued popularity of hard lenses, new kinds of soft and hard lenses<br />
are continually coming on the market.<br />
Contact lenses are placed directly on the surface of the eye. (Digital Stock)<br />
See also Artificial heart; Disposable razor; Hearing aid; Laser eye<br />
surgery; Pacemaker.<br />
Further Reading<br />
“The Contact Lens.” Newsweek 130 (Winter, 1997/1998).<br />
Hemphill, Clara. “A Quest for Better Vision: Spectacles over the<br />
Centuries.” New York Times (August 8, 2000).<br />
Koetting, Robert A. History of the Contact Lens. Irvine, Calif.:<br />
Allergan, 1978.<br />
Lubick, Naomi. “The Hard <strong>and</strong> the Soft.” Scientific American 283, no.<br />
4 (October, 2000).
Coronary artery bypass surgery<br />
The invention: The most widely used procedure of its type, coronary<br />
bypass surgery uses veins taken from the patient’s leg to improve<br />
circulation to the heart.<br />
The people behind the invention:<br />
Rene Favaloro (1923-2000), a heart surgeon<br />
Donald B. Effler (1915- ), a member of the surgical team<br />
that performed the first coronary artery bypass operation<br />
F. Mason Sones (1918- ), a physician who developed an<br />
improved technique of X-raying the heart’s arteries<br />
Fighting Heart Disease<br />
In the mid-1960’s, the leading cause of death in the United States<br />
was coronary artery disease, which claimed nearly 250 lives per 100,000<br />
people. Because this number was so alarming, much research was<br />
being conducted on the heart. Most of the public’s attention was focused<br />
on heart transplants performed separately by the famous surgeons<br />
Christiaan Barnard <strong>and</strong> Michael DeBakey. Yet other, less dramatic<br />
procedures were being developed <strong>and</strong> studied.<br />
A major problem with coronary artery disease, besides the threat<br />
of death, is chest pain, or angina. Individuals whose arteries are<br />
clogged with fat <strong>and</strong> cholesterol are frequently unable to deliver<br />
enough oxygen to their heart muscles. This may result in angina,<br />
which causes enough pain to limit their physical activities. Some of<br />
the heart research in the mid-1960’s was an attempt to find a surgical<br />
procedure that would eliminate angina in heart patients. The<br />
procedures that were tried met with varying degrees of success.<br />
In the late 1950’s <strong>and</strong> early 1960’s, a team of physicians in Clevel<strong>and</strong><br />
was studying surgical procedures that would eliminate angina.<br />
The team was composed of Rene Favaloro, Donald B. Effler, F.<br />
Mason Sones, <strong>and</strong> Laurence Groves. They were working on the concept,<br />
proposed by Dr. Arthur M. Vineberg from McGill University<br />
in Montreal, of implanting a healthy artery from the chest into the<br />
heart. This bypass procedure would provide the heart with another
source of blood, resulting in enough oxygen to overcome the angina.<br />
Yet Vineberg’s surgery was often ineffective because it was hard to<br />
determine exactly where to implant the new artery.<br />
Before bypass surgery (left) the blockage in the artery threatens to cut<br />
off blood flow; after surgery to graft a piece of vein (right), the blood<br />
can flow around the blockage.<br />
New Techniques<br />
In order to make Vineberg’s<br />
proposed operation<br />
successful, better diagnostic<br />
tools were needed. This was<br />
accomplished by the work<br />
of Sones. He developed a diagnostic procedure, called “arteriography,”<br />
whereby a catheter was inserted into an artery in the arm,<br />
which he ran all the way into the heart. He then injected a dye into the<br />
coronary arteries and photographed them with a high-speed motion-picture<br />
camera. This provided an image of the heart, which made it<br />
easy to determine where the blockages were in the coronary arteries.<br />
Using this tool, the team tried several new techniques. First, the<br />
surgeons tried to ream out the deposits found in the narrow portion<br />
of the artery. They found, however, that this actually reduced<br />
blood flow. Second, they tried slitting the length of the blocked<br />
area of the artery <strong>and</strong> suturing a strip of tissue that would increase<br />
the diameter of the opening. This was also ineffective because it often<br />
resulted in turbulent blood flow. Finally, the team attempted to<br />
reroute the flow of blood around the blockage by suturing in other<br />
tissue, such as a portion of a vein from the upper leg. This bypass<br />
procedure carried blood around the clogged part of the artery<br />
through a clear vessel, thereby restoring blood flow through<br />
the artery. This new method was introduced by Favaloro in 1967.<br />
In order for Favaloro <strong>and</strong> other heart surgeons to perform coronary<br />
artery surgery successfully, several other medical techniques<br />
had to be developed. These included extracorporeal circulation <strong>and</strong><br />
microsurgical techniques.
Extracorporeal circulation is the process of diverting the patient’s<br />
blood flow from the heart <strong>and</strong> into a heart-lung machine.<br />
This procedure was developed in 1953 by U.S. surgeon John H.<br />
Gibbon, Jr. Since the blood does not flow through the heart, the<br />
heart can be temporarily stopped so that the surgeons can isolate<br />
the artery <strong>and</strong> perform the surgery on motionless tissue.<br />
Microsurgery is necessary because some of the coronary arteries<br />
are less than 1.5 millimeters in diameter. Since these arteries<br />
had to be sutured, optical magnification <strong>and</strong> very delicate <strong>and</strong> sophisticated<br />
surgical tools were required. After this surgery had been performed<br />
on numerous patients, follow-up studies were able to determine<br />
its effectiveness. Only then was coronary artery<br />
bypass surgery recognized as an effective procedure for reducing angina<br />
in heart patients.<br />
Consequences<br />
According to the American Heart Association, approximately<br />
332,000 bypass surgeries were performed in the United States in<br />
1987, an increase of 48,000 from 1986. These figures show that the<br />
work by Favaloro <strong>and</strong> others has had a major impact on the<br />
health of United States citizens. The future outlook is also positive.<br />
It has been estimated that five million people had coronary<br />
artery disease in 1987. Of this group, an estimated 1.5 million had<br />
heart attacks <strong>and</strong> 500,000 died. Of those living, many experienced<br />
angina. Research has developed new surgical procedures <strong>and</strong><br />
new drugs to help fight coronary artery disease. Yet coronary artery<br />
bypass surgery is still a major form of treatment.<br />
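The statistics quoted above can be cross-checked with simple arithmetic (all figures are the ones given in the text):

```python
# American Heart Association figures cited above
bypasses_1987 = 332_000
increase_over_1986 = 48_000
bypasses_1986 = bypasses_1987 - increase_over_1986
print(bypasses_1986)  # 284000 operations in 1986

# Of an estimated 5 million people with coronary artery disease in
# 1987, 1.5 million had heart attacks and 500,000 died
disease_1987 = 5_000_000
deaths = 500_000
print(deaths / disease_1987)  # 0.1, i.e., one death per ten patients
```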
See also Artificial blood; Artificial heart; Blood transfusion;<br />
Electrocardiogram; Heart-lung machine; Pacemaker.<br />
Further Reading<br />
Bing, Richard J. Cardiology: The Evolution of the Science <strong>and</strong> the<br />
Art. 2d ed. New Brunswick, N.J.: Rutgers University Press,<br />
1999.
Faiola, Anthony. “Doctor’s Suicide Strikes at Heart of Argentina’s<br />
Health Care Crisis: Famed Cardiac Surgeon Championed<br />
the Poor.” Washington Post (August 25, 2000).<br />
Favaloro, René G. The Challenging Dream of Heart Surgery: From the<br />
Pampas to Clevel<strong>and</strong>. Boston: Little, Brown, 1994.
Cruise missile<br />
The invention: A pilotless, jet-propelled guided weapon that makes it<br />
possible to attack both land and sea targets with extreme accuracy<br />
without endangering the lives of pilots.<br />
The person behind the invention:<br />
Rear Admiral Walter M. Locke (1930- ), U.S. Navy project<br />
manager<br />
From the Buzz Bombs of World War II<br />
During World War II, Germany developed <strong>and</strong> used two different<br />
types of missiles: ballistic missiles <strong>and</strong> cruise missiles. A ballistic<br />
missile is one that does not use aerodynamic lift in order to fly. It is<br />
fired into the air by powerful rocket engines and reaches a high altitude;<br />
when its engines are out of fuel, it descends on its flight path toward<br />
its target. The German V-2 was the first ballistic missile. The United<br />
States <strong>and</strong> other countries subsequently developed a variety of<br />
highly sophisticated <strong>and</strong> accurate ballistic missiles.<br />
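The distinction drawn here, a ballistic missile coasting on a purely gravitational arc after engine cutoff versus a cruise missile flying on wings, can be illustrated with the textbook drag-free projectile equation (a physics sketch, not a model of any actual missile):

```python
import math

def dragfree_range_m(speed_mps: float, launch_angle_deg: float,
                     g: float = 9.81) -> float:
    """Horizontal range of an unpowered, drag-free projectile
    launched from level ground: R = v^2 * sin(2*theta) / g.

    Once its engines cut off, a ballistic missile coasts on this
    kind of purely gravitational arc, which is why it cannot
    chase a moving target the way a winged, continuously powered
    cruise missile can.
    """
    theta = math.radians(launch_angle_deg)
    return speed_mps ** 2 * math.sin(2.0 * theta) / g
```

At a 45-degree launch angle the drag-free range reduces to v²/g, the maximum for a given cutoff speed.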
The other missile used by Germany was a cruise missile called<br />
the V-1, which was also called the flying bomb or the buzz bomb.<br />
The V-1 used aerodynamic lift in order to fly, just as airplanes do. It<br />
flew relatively low <strong>and</strong> was slow; by the end of the war, the British,<br />
against whom it was used, had developed techniques for countering<br />
it, primarily by shooting it down.<br />
After World War II, both the United States <strong>and</strong> the Soviet Union<br />
carried on the Germans’ development of both ballistic <strong>and</strong> cruise<br />
missiles. The United States discontinued serious work on cruise<br />
missile technology during the 1950’s: The development of ballistic<br />
missiles of great destructive capability had been very successful.<br />
Ballistic missiles armed with nuclear warheads had become the basis<br />
for the U.S. strategy of attempting to deter enemy attacks with<br />
the threat of a massive missile counterattack. In addition, aircraft<br />
carriers provided an air-attack capability similar to that of cruise<br />
missiles. Finally, cruise missiles were believed to be too vulnerable<br />
to being shot down by enemy aircraft or surface-to-air missiles.
While ballistic missiles are excellent for attacking large, fixed targets,<br />
they are not suitable for attacking moving targets. They can be<br />
very accurately aimed, but since they are not very maneuverable<br />
during their final descent, they are limited in their ability to change<br />
course to hit a moving target, such as a ship.<br />
During the 1967 war, the Egyptians used a Soviet-built cruise<br />
missile to sink the Israeli ship Elath. The U.S. military, primarily the<br />
Navy <strong>and</strong> the Air Force, took note of the Egyptian success <strong>and</strong><br />
within a few years initiated cruise missile development programs.<br />
The Development of Cruise Missiles<br />
The United States probably could have developed cruise missiles<br />
similar to 1990’s models as early as the 1960’s, but it would have required<br />
a huge effort. The goal was to develop missiles that could be<br />
launched from ships <strong>and</strong> planes using existing launching equipment,<br />
could fly long distances at low altitudes at fairly high speeds,<br />
<strong>and</strong> could reach their targets with a very high degree of accuracy. If<br />
the missiles flew too slowly, they would be fairly easy to shoot<br />
down, like the German V-1’s. If they flew at too high an altitude,<br />
they would be vulnerable to the same type of surface-based missiles<br />
that shot down Gary Powers, the pilot of the U.S. U-2 spy plane, in<br />
1960. If they were inaccurate, they would be of little use.<br />
The early Soviet cruise missiles were designed to meet their performance<br />
goals without too much concern about how they would<br />
be launched. They were fairly large, <strong>and</strong> the ships that launched<br />
them required major modifications. The U.S. goal of being able to<br />
launch using existing equipment, without making major modifications<br />
to the ships <strong>and</strong> planes that would launch them, played a major<br />
part in their torpedo-like shape: Sea-Launched Cruise Missiles<br />
(SLCMs) had to fit in the submarine’s torpedo tubes, <strong>and</strong> Air-<br />
Launched Cruise Missiles (ALCMs) were constrained to fit in rotary<br />
launchers. The size limitation also meant that small, efficient jet engines<br />
would be required that could fly the long distances involved<br />
without needing too great a fuel load. Small, smart computers were<br />
needed to provide the required accuracy. The engine <strong>and</strong> computer<br />
technologies began to be available in the 1970’s, <strong>and</strong> they blossomed<br />
in the 1980’s.
The U.S. Navy initiated cruise missile development efforts in<br />
1972; the Air Force followed in 1973. In 1977, the Joint Cruise Missile<br />
Project was established, with the Navy taking the lead. Rear<br />
Admiral Walter M. Locke was named project manager. The goal<br />
was to develop air-, sea-, <strong>and</strong> ground-launched cruise missiles.<br />
By coordinating activities, encouraging competition, <strong>and</strong><br />
requiring the use of common components wherever possible, the<br />
cruise missile development program became a model for future<br />
weapon-system development efforts. The primary contractors<br />
included Boeing Aerospace Company, General Dynamics, <strong>and</strong><br />
McDonnell Douglas.<br />
In 1978, SLCMs were first launched from submarines. Over the<br />
next few years, increasingly dem<strong>and</strong>ing tests were passed by several<br />
versions of cruise missiles. By the mid-1980’s, both antiship <strong>and</strong><br />
antil<strong>and</strong> missiles were available. An antil<strong>and</strong> version could be guided<br />
to its target with extreme accuracy by comparing a map programmed<br />
into its computer to the picture taken by an on-board video camera.<br />
The typical cruise missile is between 18 <strong>and</strong> 21 feet long, about 21<br />
inches in diameter, <strong>and</strong> has a wingspan of between 8 <strong>and</strong> 12 feet.<br />
Cruise missiles travel slightly below the speed of sound <strong>and</strong> have a<br />
range of around 1,350 miles (antil<strong>and</strong>) or 250 miles (antiship). Both<br />
conventionally armed <strong>and</strong> nuclear versions have been fielded.<br />
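The map-matching guidance mentioned above, comparing a stored map against the picture from an on-board camera, can be sketched as template correlation: slide the stored patch over the camera image and keep the offset that matches best. This is an illustrative toy (the array shapes and scoring function are assumptions, not the actual guidance software):

```python
import numpy as np

def best_match_offset(image: np.ndarray, template: np.ndarray) -> tuple:
    """Return the (row, col) offset where the template best matches
    the image, scored by zero-mean normalized correlation."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_offset, best_score = (0, 0), float("-inf")
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom == 0.0:
                continue  # flat, featureless patch carries no information
            score = float((p * t).sum()) / denom
            if score > best_score:
                best_offset, best_score = (r, c), score
    return best_offset

# A toy "terrain" image and a patch cut from it: the matcher should
# recover exactly where the patch came from.
rng = np.random.default_rng(0)
terrain = rng.random((30, 30))
patch = terrain[10:18, 5:13]
print(best_match_offset(terrain, patch))  # (10, 5)
```

Real systems refine this idea considerably, but the core operation, locating a stored reference pattern inside a live image, is the same.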
Consequences<br />
Cruise missiles have become an important part of the U.S. arsenal.<br />
They provide a means of attacking targets on l<strong>and</strong> <strong>and</strong> water<br />
without having to put an aircraft pilot’s life in danger. Their value<br />
was demonstrated in 1991 during the Persian Gulf War. One of their<br />
uses was to “soften up” defenses prior to sending in aircraft, thus reducing<br />
the risk to pilots. Overall estimates are that about 85 percent<br />
of cruise missiles used in the Persian Gulf War arrived on target,<br />
which is an outst<strong>and</strong>ing record. It is believed that their extreme accuracy<br />
also helped to minimize noncombatant casualties.<br />
See also Airplane; Atomic bomb; Hydrogen bomb; Rocket;<br />
Stealth aircraft; V-2 rocket.
Further Reading<br />
Collyer, David G. Buzz Bomb. Deal, Kent, Engl<strong>and</strong>: Kent Aviation<br />
Historical Research Society, 1994.<br />
McDaid, Hugh, <strong>and</strong> David Oliver. Robot Warriors: The Top Secret History<br />
of the Pilotless Plane. London: Orion Media, 1997.<br />
Macknight, Nigel. Tomahawk Cruise Missile. Osceola, Wis.: Motorbooks<br />
International, 1995.<br />
Werrell, Kenneth P. The Evolution of the Cruise Missile. Maxwell Air<br />
Force Base, Ala.: Air University Press, 1997.
Cyclamate<br />
The invention: An artificial sweetener introduced to the American<br />
market in 1950 under the tradename Sucaryl.<br />
The person behind the invention:<br />
Michael Sveda (1912-1999), an American chemist<br />
A Foolhardy Experiment<br />
The first synthetic sugar substitute, saccharin, was developed in<br />
1879. It became commercially available in 1907 but was banned for<br />
safety reasons in 1912. Sugar shortages during World War I (1914-<br />
1918) resulted in its reintroduction. Two other artificial sweeteners,<br />
Dulcin <strong>and</strong> P-4000, were introduced later but were banned in 1950<br />
for causing cancer in laboratory animals.<br />
In 1937, Michael Sveda was a young chemist working on his<br />
Ph.D. at the University of Illinois. A flood in the Ohio valley had ruined<br />
the local pipe-tobacco crop, <strong>and</strong> Sveda, a smoker, had been<br />
forced to purchase cigarettes. One day while in the laboratory,<br />
Sveda happened to brush some loose tobacco from his lips <strong>and</strong> noticed<br />
that his fingers tasted sweet. Having a curious, if rather foolhardy,<br />
nature, Sveda tasted the chemicals on his bench to find which<br />
one was responsible for the taste. The culprit was the forerunner of<br />
cyclohexylsulfamate, the material that came to be known as “cyclamate.”<br />
Later, on reviewing his career, Sveda explained the serendipitous<br />
discovery with the comment: “God looks after . . . fools, children,<br />
and chemists.”<br />
Sveda joined E. I. Du Pont de Nemours <strong>and</strong> Company in 1939<br />
<strong>and</strong> assigned the patent for cyclamate to his employer. In June of<br />
1950, after a decade of testing on animals <strong>and</strong> humans, Abbott Laboratories<br />
announced that it was launching Sveda’s artificial sweetener<br />
under the trade name Sucaryl. Du Pont followed with its<br />
sweetener product, Cyclan. A Time magazine article in 1950 announced<br />
the new product <strong>and</strong> noted that Abbott had warned that<br />
because the product was a sodium salt, individuals with kidney<br />
problems should consult their doctors before adding it to their food.
Cyclamate had no calories, but it was thirty to forty times sweeter<br />
than sugar. Unlike saccharin, cyclamate left no unpleasant aftertaste.<br />
The additive was also found to improve the flavor of some<br />
foods, such as meat, <strong>and</strong> was used extensively to preserve various<br />
foods. By 1969, about 250 food products contained cyclamates, including<br />
cakes, puddings, canned fruit, ice cream, salad dressings,<br />
<strong>and</strong> its most important use, carbonated beverages.<br />
It was originally thought that cyclamates were harmless to the<br />
human body. In 1959, the chemical was added to the GRAS (generally<br />
recognized as safe) list. Materials on this list, such as sugar, salt,<br />
pepper, <strong>and</strong> vinegar, did not have to be rigorously tested before being<br />
added to food. In 1964, however, a report cited evidence that cyclamates<br />
<strong>and</strong> saccharin, taken together, were a health hazard. Its<br />
publication alarmed the scientific community. Numerous investigations<br />
followed.<br />
Shooting Themselves in the Foot<br />
Initially, the claims against cyclamate had been that it caused diarrhea<br />
or prevented drugs from doing their work in the body.<br />
By 1969, these claims had begun to include the threat of cancer.<br />
Ironically, the evidence that sealed the fate of the artificial sweetener<br />
was provided by Abbott itself.<br />
A private Long Isl<strong>and</strong> company had been hired by Abbott to conduct<br />
an extensive toxicity study to determine the effects of long-term<br />
exposure to the cyclamate-saccharin mixtures often found in<br />
commercial products. The team of scientists fed rats daily doses of<br />
the mixture to study the effect on reproduction, unborn fetuses, <strong>and</strong><br />
fertility. In each case, the rats were declared to be normal. When the<br />
rats were killed at the end of the study, however, those that had been<br />
exposed to the higher doses showed evidence of bladder tumors.<br />
Abbott shared the report with investigators from the National Cancer<br />
Institute <strong>and</strong> then with the U.S. Food <strong>and</strong> Drug Administration<br />
(FDA).<br />
The doses required to produce the tumors were equivalent to an<br />
individual drinking 350 bottles of diet cola a day, more than one<br />
hundred times the intake of even the heaviest cyclamate consumers.<br />
A six-person<br />
panel of scientists met to review the data <strong>and</strong> urged the ban of all cyclamates<br />
from foodstuffs. In October, 1969, amid enormous media<br />
coverage, the federal government announced that cyclamates were<br />
to be withdrawn from the market by the beginning of 1970.<br />
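The dose comparison above implies a rough ceiling on real-world intake, which simple arithmetic makes explicit (both figures come from the text; the division is only an order-of-magnitude illustration):

```python
# Tumor-producing doses were equivalent to 350 bottles of diet cola
# a day, more than one hundred times a heavy consumer's intake
tumor_equivalent_bottles = 350
stated_safety_multiple = 100

# Implied upper bound on a heavy consumer's daily intake, in bottles
implied_heavy_intake = tumor_equivalent_bottles / stated_safety_multiple
print(implied_heavy_intake)  # 3.5
```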
In the years following the ban, the controversy continued. Doubt<br />
was cast on the results of the independent study linking sweetener<br />
use to tumors in rats, because the study was designed not to evaluate<br />
cancer risks but to examine the effects of cyclamate use over<br />
many years. Bladder parasites, known as “nematodes,” found in the<br />
rats may have affected the outcome of the tests. In addition, an impurity<br />
found in some of the saccharin used in the study may have<br />
led to the problems observed. Extensive investigations such as the<br />
three-year project conducted at the National Cancer Research Center<br />
in Heidelberg, Germany, found no basis for the widespread ban.<br />
In 1972, however, rats fed high doses of saccharin alone were<br />
found to have developed bladder tumors. At that time, the sweetener<br />
was removed from the GRAS list. An outright ban was averted<br />
by the m<strong>and</strong>atory use of labels alerting consumers that certain<br />
products contained saccharin.<br />
Impact<br />
The introduction of cyclamate heralded the start of a new industry.<br />
For individuals who had to restrict their sugar intake for health<br />
reasons, or for those who wished to lose weight, there was now an<br />
alternative to giving up sweet food.<br />
The Pepsi-Cola company put a new diet drink formulation on<br />
the market almost as soon as the ban was instituted. In fact, it ran<br />
advertisements the day after the ban was announced showing the<br />
Diet Pepsi product boldly proclaiming “Sugar added—No Cyclamates.”<br />
Sveda, the discoverer of cyclamates, was not impressed with the<br />
FDA’s decision on the sweetener <strong>and</strong> its h<strong>and</strong>ling of subsequent investigations.<br />
He accused the FDA of “a massive cover-up of elemental<br />
blunders” <strong>and</strong> claimed that the original ban was based on sugar<br />
politics <strong>and</strong> bad science.<br />
For the manufacturers of cyclamate, meanwhile, the problem lay<br />
with the wording of the Delaney amendment, the legislation that<br />
regulates new food additives. The amendment states that the manufacturer<br />
must prove that its product is safe, rather than the FDA having<br />
to prove that it is unsafe. The onus was on Abbott Laboratories<br />
to deflect concerns about the safety of the product, and it remained<br />
unable to do so.<br />
See also Aspartame; Genetically engineered insulin.
Further Reading

Kaufman, Leslie. "Michael Sveda, the Inventor of Cyclamates, Dies at Eighty-Seven." New York Times (August 21, 1999).
Lawler, Philip F. Sweet Talk: Media Coverage of Artificial Sweeteners. Washington, D.C.: Media Institute, 1986.
Remington, Dennis W. The Bitter Truth About Artificial Sweeteners. Provo, Utah: Vitality House, 1987.
Whelan, Elizabeth M. "The Bitter Truth About a Sweetener Scare." Wall Street Journal (August 26, 1999).
Cyclotron
The invention: The first successful magnetic resonance accelerator for protons, the cyclotron gave rise to the modern era of particle accelerators, which are used by physicists to study the structure of atoms.

The people behind the invention:
Ernest Orlando Lawrence (1901-1958), an American nuclear physicist who was awarded the 1939 Nobel Prize in Physics
M. Stanley Livingston (1905-1986), an American nuclear physicist
Niels Edlefsen (1893-1971), an American physicist
David Sloan (1905- ), an American physicist and electrical engineer
The Beginning of an Era

The invention of the cyclotron by Ernest Orlando Lawrence marks the beginning of the modern era of high-energy physics. Although the energies of newer accelerators have increased steadily, the principles incorporated in the cyclotron have been fundamental to succeeding generations of accelerators, many of which were also developed in Lawrence's laboratory. The care and support for such machines have also given rise to "big science": the massing of scientists, money, and machines in support of experiments to discover the nature of the atom and its constituents.

At the University of California, Lawrence took an interest in the new physics of the atomic nucleus, which had been developed by the British physicist Ernest Rutherford and his followers in England, and which was attracting more attention as the development of quantum mechanics seemed to offer solutions to problems that had long preoccupied physicists. In order to explore the nucleus of the atom, however, suitable probes were required; an artificial means of accelerating ions to high energies was needed.
During the late 1920's, various means of accelerating alpha particles, protons (hydrogen ions), and electrons had been tried, but none had been successful in causing a nuclear transformation when Lawrence entered the field. The high voltages required exceeded the resources available to physicists. It was believed that more than a million volts would be required to accelerate an ion to sufficient energies to penetrate even the lightest atomic nuclei. At such voltages, insulators broke down, releasing sparks across great distances. European researchers even attempted to harness lightning to accomplish the task, with fatal results.

Early in April, 1929, Lawrence discovered an article by a German electrical engineer that described a linear accelerator of ions that worked by passing an ion through two sets of electrodes, each of which carried the same voltage and increased the energy of the ions correspondingly. By spacing the electrodes appropriately and using an alternating electrical field, this "resonance acceleration" of ions could speed subatomic particles to many times the energy applied in each step, overcoming the problems presented when one tried to apply a single charge to an ion all at once. Unfortunately, the spacing of the electrodes would have to be increased as the ions were accelerated, since they would travel farther between each alternation of the phase of the accelerating charge, making an accelerator impractically long in those days of small-scale physics.
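The impracticality Lawrence saw can be made concrete with a little arithmetic. In a resonance linear accelerator each drift tube must carry the ion for exactly half an oscillator cycle, so tube length grows with the square root of the number of gaps crossed. The sketch below uses assumed, illustrative values (protons, 25-kilovolt gaps, a 10-megahertz oscillator); none of these figures come from the article.

```python
import math

# Illustrative values only (none are from the article): protons, 25 kV
# per accelerating gap, and a 10 MHz oscillator.
q = 1.602e-19        # elementary charge, coulombs
m = 1.67e-27         # proton mass, kilograms
V = 25e3             # accelerating voltage per gap, volts
f = 10e6             # oscillator frequency, hertz

# After crossing n gaps the ion has energy n*q*V and speed
# v_n = sqrt(2*n*q*V/m); the next drift tube must cover half an
# oscillator period of travel: L_n = v_n / (2*f).  Lengths grow as
# sqrt(n), so a million-volt machine becomes very long very fast.
lengths = []
for n in range(1, 6):
    v_n = math.sqrt(2 * n * q * V / m)
    lengths.append(v_n / (2 * f))
    print(f"gap {n}: {n * 25} keV, next tube {lengths[-1] * 100:.1f} cm")
```

Already the first tube is on the order of a decimeter, and each later tube is longer still; the cyclotron's trick, described next, is to reuse one pair of electrodes instead.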
Fast-Moving Streams of Ions
Lawrence knew that a magnetic field would cause the ions to be deflected and form a curved path. If the electrodes were placed across the diameter of the circle formed by the ions' path, the ions should spiral out as they were accelerated, staying in phase with the accelerating charge until they reached the periphery of the magnetic field. This, it seemed to him, afforded a means of producing indefinitely high energies without using high voltages, by recycling the accelerated ions through the same electrodes. Many scientists doubted that such a method would be effective. No mechanism was known that would keep the circulating ions in sufficiently tight orbits to avoid collisions with the walls of the accelerating chamber. Others tried unsuccessfully to use resonance acceleration.
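Lawrence's insight rests on a happy cancellation: in a uniform magnetic field a non-relativistic ion's orbital period does not depend on its speed, because faster ions travel proportionally larger circles. A minimal numerical sketch, with an assumed 1-tesla field (the field value is illustrative, not from the article):

```python
import math

# The resonance condition: in field B an ion of charge q and mass m
# circles at f = q*B/(2*pi*m), independent of its speed or radius.
q = 1.602e-19        # proton charge, coulombs
m_p = 1.673e-27      # proton mass, kilograms
B = 1.0              # magnetic field, tesla (assumed value)

f = q * B / (2 * math.pi * m_p)
print(f"cyclotron frequency: {f / 1e6:.1f} MHz")     # ~15.2 MHz per tesla

# Faster ions ride larger circles, so the orbital period is constant:
periods = []
for v in (1e6, 2e6, 4e6):                # speeds in m/s, non-relativistic
    r = m_p * v / (q * B)                # orbit radius grows with speed...
    periods.append(2 * math.pi * r / v)  # ...but the period does not
    print(f"v = {v:.0e} m/s: r = {r * 100:.2f} cm, T = {periods[-1] * 1e9:.1f} ns")
```

Because every half-turn takes the same time regardless of radius, a single fixed-frequency alternating voltage stays in step with the ions on every pass.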
Ernest Orlando Lawrence

A man of great energy and gusty temper, Ernest Orlando Lawrence danced for joy when one of his cyclotrons accelerated a particle to more than one million electron volts. That amount of energy was important, according to contemporary theorists, because it was enough to penetrate the nucleus of a target atom. For giving physicists a tool with which to examine the subatomic realm, Lawrence received the 1939 Nobel Prize in Physics, among many other honors.

The grandson of Norwegian immigrants, Lawrence was born in Canton, South Dakota, in 1901. After high school, he went to St. Olaf's College, the University of South Dakota, the University of Minnesota, and Yale University, where he completed a doctorate in physics in 1925. After postgraduate fellowships at Yale, he became a professor at the University of California, Berkeley, the youngest on campus. In 1936 the university made him director of its radiation laboratory. Now named the Lawrence Berkeley National Laboratory, it has stayed at the forefront of physics and high-technology research ever since.

Before World War II Lawrence and his brother, Dr. John Lawrence, also at the university, worked together to find practical biological and medical applications for the radioisotopes made in Lawrence's particle accelerators. During the war Lawrence participated in the Manhattan Project, which made the atomic bomb. He was a passionate anticommunist and after the war argued before Congress for funds to develop death rays and radiation bombs from research with his cyclotrons; however, he was also an American delegate to the Geneva Conference in 1958, which sought a ban on atomic bomb tests.

Lawrence helped solve the mystery of cosmic particles, invented a method for measuring ultra-small time intervals, and calculated with high precision the ratio of the charge of an electron to its mass, a fundamental constant of nature. Lawrence died in 1958 in Palo Alto, California.

A graduate student, M. Stanley Livingston, continued Lawrence's work. For his dissertation project, he used a brass cylinder 10 centimeters in diameter sealed with wax to hold a vacuum, a half-pillbox of copper mounted on an insulated stem to serve as the electrode, and a Hartley radio frequency oscillator producing 10 watts. The hydrogen molecular ions were produced by a thermionic cathode (mounted near the center of the apparatus) from hydrogen gas admitted through an aperture in the side of the cylinder after a vacuum had been produced by a pump. Once the ions were formed, the oscillating electrical field drew them out and accelerated them as they passed through the cylinder. The accelerated ions spiraled outward in a magnetic field produced by a 10-centimeter electromagnet to a collector. By November, 1930, Livingston had observed peaks in the collector current as he tuned the magnetic field through the value calculated to produce acceleration.

Borrowing a stronger magnet and tuning his radio frequency oscillator appropriately, Livingston produced 80,000-electronvolt ions at his collector on January 2, 1931, thus demonstrating the principle of magnetic resonance acceleration.
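Livingston's numbers are roughly consistent with the resonance formula. At extraction radius r in a field B, an ion of charge q and mass m carries kinetic energy E = (qBr)^2 / 2m, no matter how many turns it took to get there. Working backward from the 80,000 electronvolts and the 10-centimeter chamber given above (treating the particles as H2+ molecular ions and solving for the field strength is an inference for illustration, not a figure from the article):

```python
import math

# Extraction energy of a resonating ion: E = (q*B*r)**2 / (2*m).
# The 10 cm chamber and 80 keV come from the text; the particle species
# (H2+) and the solved-for field strength are illustrative assumptions.
q = 1.602e-19            # single charge on the H2+ molecular ion, coulombs
m = 2 * 1.673e-27        # H2+ mass, roughly two proton masses, kilograms
r = 0.05                 # extraction radius: half the 10 cm diameter, meters

def energy_eV(B):
    """Kinetic energy (in eV) reached at radius r in field B."""
    return (q * B * r) ** 2 / (2 * m) / 1.602e-19

# Field required to reach 80,000 eV at the chamber wall:
B = math.sqrt(2 * m * 80e3 * 1.602e-19) / (q * r)
print(f"required field: {B:.2f} T; check: {energy_eV(B):,.0f} eV")
```

The answer comes out near one tesla, a field a strong laboratory electromagnet of the period could plausibly supply.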
Impact
Demonstration of the principle led to the construction of a succession of larger cyclotrons, beginning with a 25-centimeter cyclotron developed in the spring and summer of 1931 that produced one-million-electronvolt protons. With the support of the Research Corporation, Lawrence secured a large electromagnet that had been developed for radio transmission and an unused laboratory to house it: the Radiation Laboratory.

The 69-centimeter cyclotron built with the magnet was used to explore nuclear physics. It accelerated deuterons, ions of deuterium (heavy hydrogen) that contain, in addition to a proton, a neutron, the particle discovered by Sir James Chadwick in 1932. The accelerated deuteron, which injected neutrons into target atoms, was used to produce a wide variety of artificial radioisotopes. Many of these, such as technetium and carbon 14, were discovered with the cyclotron and were later used in medicine.

By 1939, Lawrence had built a 152-centimeter cyclotron for medical applications, including therapy with neutron beams. In that year, he won the Nobel Prize in Physics for the invention of the cyclotron and the production of radioisotopes. During World War II, Lawrence and the members of his Radiation Laboratory developed electromagnetic separation of uranium ions to produce the uranium 235 required for the atomic bomb. After the war, the 467-centimeter
cyclotron was completed as a synchrocyclotron, which modulated the frequency of the accelerating fields to compensate for the increasing mass of ions as they approached the speed of light. The principle of synchronous acceleration, invented by Lawrence's associate, the American physicist Edwin Mattison McMillan, became fundamental to proton and electron synchrotrons.

The cyclotron and the Radiation Laboratory were the center of accelerator physics throughout the 1930's and well into the postwar era. The invention of the cyclotron not only provided a new tool for probing the nucleus but also gave rise to new forms of organizing scientific work and to applications in nuclear medicine and nuclear chemistry. Cyclotrons were built in many laboratories in the United States, Europe, and Japan, and they became a standard tool of nuclear physics.

See also Atomic bomb; Electron microscope; Field ion microscope; Geiger counter; Hydrogen bomb; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator.
Further Reading

Childs, Herbert. An American Genius: The Life of Ernest Orlando Lawrence. New York: Dutton, 1968.
Close, F. E., Michael Marten, and Christine Sutton. The Particle Explosion. New York: Oxford University Press, 1994.
Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. New York: Clarendon Press, 1988.
Wilson, Elizabeth K. "Fifty Years of Heavy Chemistry." Chemical and Engineering News 78, no. 13 (March 27, 2000).
Diesel locomotive
The invention: An internal combustion engine in which ignition is achieved by the use of high-temperature compressed air, rather than a spark plug.

The people behind the invention:
Rudolf Diesel (1858-1913), a German engineer and inventor
Sir Dugald Clerk (1854-1932), a British engineer
Gottlieb Daimler (1834-1900), a German engineer
Henry Ford (1863-1947), an American automobile magnate
Nikolaus Otto (1832-1891), a German engineer and Daimler's teacher
A Beginning in Winterthur
By the beginning of the twentieth century, new means of providing society with power were needed. The steam engines that were used to run factories and railways were no longer sufficient, since they were too heavy and inefficient. At that time, Rudolf Diesel, a German mechanical engineer, invented a new engine. His diesel engine was much more efficient than previous power sources. It also appeared that it would be able to run on a wide variety of fuels, ranging from oil to coal dust. Diesel first showed that his engine was practical by building a diesel-driven locomotive that was tested in 1912.

In the 1912 test runs, the first diesel-powered locomotive was operated on the track of the Winterthur-Romanshorn rail line in Switzerland. The locomotive was built by a German company, Gesellschaft für Thermo-Lokomotiven, which was owned by Diesel and his colleagues. Immediately after the test runs at Winterthur proved its efficiency, the locomotive—which had been designed to pull express trains on Germany's Berlin-Magdeburg rail line—was moved to Berlin and put into service. It worked so well that many additional diesel locomotives were built. In time, diesel engines were also widely used to power many other machines, including those that ran factories, motor vehicles, and ships.
Rudolf Diesel

Unbending, suspicious of others, but also exceptionally intelligent, Rudolf Christian Karl Diesel led a troubled life and came to a mysterious end. His parents, expatriate Germans, were living in Paris when he was born in 1858, and he spent his early childhood there. In 1870, just as he was starting his formal education, his family fled to England at the outbreak of the Franco-Prussian War, which turned the French against Germans. In England, Diesel spent much of his spare time in museums, educating himself. His father, a leather craftsman, was unable to support his family, so as a teenager Diesel was packed off to Augsburg, Germany, where he was largely on his own. Although these experiences made him fluent in English, French, and German, his was not a stable or happy childhood.

He threw himself into his studies, finishing his high school education three years ahead of schedule, and entered the Technical College of Munich, where he was the star student. Once, during his school years, he saw a demonstration of a Chinese firestick. The firestick was a tube with a plunger. When a small piece of flammable material was put in one end and the plunger pushed down rapidly toward it, the heat of the compressed air in the tube ignited the material. The demonstration later inspired Diesel to adapt the principle to an engine.

His was the first engine to run successfully with compressed-air fuel ignition, but it was not the first design. So although he received the patent for the diesel engine, he had to fight challenges in court from other inventors over licensing rights. He always won, but the strain of litigation worsened his tendency to stubborn self-reliance, and this led him into difficulties. The first compression engines were unreliable and unwieldy, but Diesel rebuffed all suggestions for modifications, requiring that builders follow his original design. His attitude led to delays in development of the engine and lost him financial support.

In 1913, while crossing the English Channel aboard a ship, Diesel disappeared. His body was never found, and although the authorities concluded that Diesel had committed suicide, no one knows what happened.
Diesels, Diesels Everywhere
In the 1890's, the best engines available were steam engines that were able to convert only 5 to 10 percent of input heat energy to useful work. The burgeoning industrial society and a widespread network of railroads needed better, more efficient engines to help businesses make profits and to speed up the rate of transportation available for moving both goods and people, since the maximum speed was only about 48 kilometers per hour. In 1894, Rudolf Diesel, then thirty-five years old, appeared in Augsburg, Germany, with a new engine that he believed would demonstrate great efficiency.

The diesel engine demonstrated at Augsburg ran for only a short time. It was, however, more efficient than other existing engines. In addition, Diesel predicted that his engines would move trains faster than could be done by existing engines and that they would run on a wide variety of fuels. Experimentation proved the truth of his claims; even the first working motive diesel engine (the one used in the Winterthur test) was capable of pulling heavy freight and passenger trains at maximum speeds of up to 160 kilometers per hour.

By 1912, Diesel, a millionaire, saw the wide use of diesel locomotives in Europe and the United States and the conversion of hundreds of ships to diesel power. Rudolf Diesel's role in the story ends here, a result of his mysterious death in 1913—believed to be a suicide by the authorities—while crossing the English Channel on the steamer Dresden. Others involved in the continuing saga of diesel engines were the Briton Sir Dugald Clerk, who improved diesel design, and the American Adolphus Busch (of beer-brewing fame), who bought the North American rights to the diesel engine.

The diesel engine is related to automobile engines invented by Nikolaus Otto and Gottlieb Daimler. The standard Otto-Daimler (or Otto) engine was first widely commercialized by American auto magnate Henry Ford. The diesel and Otto engines are internal-combustion engines. This means that they do work when a fuel is burned and causes a piston to move in a tight-fitting cylinder. In diesel engines, unlike Otto engines, the fuel is not ignited by a spark from a spark plug. Instead, ignition is accomplished by the use of high-temperature compressed air.
In common "two-stroke" diesel engines, pioneered by Sir Dugald Clerk, a starter causes the engine to make its first stroke. This draws in air and compresses the air sufficiently to raise its temperature to 900 to 1,000 degrees Fahrenheit. At this point, fuel (usually oil) is sprayed into the cylinder, ignites, and causes the piston to make its second, power-producing stroke. At the end of that stroke, more air enters as waste gases leave the cylinder; air compression occurs again; and the power-producing stroke repeats itself. This process then occurs continuously, without restarting.
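The ignition temperatures quoted above follow from the physics of rapid compression. A back-of-the-envelope adiabatic estimate, assuming an ideal gas and a representative compression ratio of 16 (a typical diesel value, not a figure given in the text):

```python
# Adiabatic compression estimate: T2 = T1 * (V1/V2)**(gamma - 1).
# The compression ratio of 16 is a typical diesel value assumed here,
# not a figure from the text; heat losses to the walls are ignored.
gamma = 1.4          # heat-capacity ratio of air
T1_K = 300.0         # intake air temperature, kelvin (about 80 F)
ratio = 16           # compression ratio V1/V2 (assumption)

T2_K = T1_K * ratio ** (gamma - 1)
T2_F = (T2_K - 273.15) * 9 / 5 + 32
print(f"compressed air: {T2_K:.0f} K ({T2_F:.0f} F)")   # ~909 K
```

The idealized result lands comfortably above the autoignition point of diesel oil, which is why the sprayed fuel ignites with no spark plug at all.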
[Figure: The four strokes of a diesel engine: intake, compression, power, and exhaust. (Robert Bosch Corporation)]

Impact
Proof of the functionality of the first diesel locomotive set the stage for the use of diesel engines to power many machines. Although Rudolf Diesel did not live to see it, diesel engines were widely used within fifteen years after his death. At first, their main applications were in locomotives and ships. Then, because diesel engines are more efficient and more powerful than Otto engines, they were modified for use in cars, trucks, and buses.

At present, motor vehicle diesel engines are most often used in buses and long-haul trucks. In contrast, diesel engines are not as popular in automobiles as Otto engines, although European automakers make much wider use of diesel engines than American automakers do. Many enthusiasts, however, view diesel automobiles as the wave of the future. This optimism is based on the durability of the engine, its great power, and the wide range and economical nature of the fuels that can be used to run it. The drawbacks of diesels include the unpleasant odor and high pollutant content of their emissions.
Modern diesel engines are widely used in farm and earth-moving equipment, including balers, threshers, harvesters, bulldozers, rock crushers, and road graders. Construction of the Alaskan oil pipeline relied heavily on equipment driven by diesel engines. Diesel engines are also commonly used in sawmills, breweries, coal mines, and electric power plants.

Diesel's brainchild has become a widely used power source, just as he predicted. It is likely that the use of diesel engines will continue and will expand, as the demands of energy conservation require more efficient engines and as moves toward fuel diversification require engines that can be used with various fuels.
See also Bullet train; Gas-electric car; Internal combustion engine.
Further Reading

Cummins, C. Lyle. Diesel's Engine. Wilsonville, Oreg.: Carnot Press, 1993.
Diesel, Eugen. From Engines to Autos: Five Pioneers in Engine Development and Their Contributions to the Automotive Industry. Chicago: H. Regnery, 1960.
Nitske, Robert W., and Charles Morrow Wilson. Rudolf Diesel: Pioneer of the Age of Power. Norman: University of Oklahoma Press, 1965.
Differential analyzer
The invention: An electromechanical device capable of solving differential equations.

The people behind the invention:
Vannevar Bush (1890-1974), an American electrical engineer
Harold L. Hazen (1901-1980), an American electrical engineer
Electrical Engineering Problems Become More Complex

After World War I, electrical engineers encountered increasingly difficult differential equations as they worked on vacuum-tube circuitry, telephone lines, and, particularly, long-distance power transmission lines. These calculations were lengthy and tedious. Two of the many steps required to solve them were to draw a graph manually and then to determine the area under the curve (essentially, accomplishing the mathematical procedure called integration).

In 1925, Vannevar Bush, a faculty member in the Electrical Engineering Department at the Massachusetts Institute of Technology (MIT), suggested that one of his graduate students devise a machine to determine the area under the curve. They first considered a mechanical device but later decided to seek an electrical solution. Realizing that a watt-hour meter such as that used to measure electricity in most homes was very similar to the device they needed, Bush and his student refined the meter and linked it to a pen that automatically recorded the curve.

They called this machine the Product Integraph, and MIT students began using it immediately. In 1927, Harold L. Hazen, another MIT faculty member, modified it in order to solve the more complex second-order differential equations (it originally solved only first-order equations).
The Differential Analyzer
The original Product Integraph had solved problems electrically, and Hazen's modification had added a mechanical integrator. Although the revised Product Integraph was useful in solving the types of problems mentioned above, Bush thought the machine could be improved by making it an entirely mechanical integrator, rather than a hybrid electrical and mechanical device.
In late 1928, Bush received funding from MIT to develop an entirely mechanical integrator, and he completed the resulting Differential Analyzer in 1930. This machine consisted of numerous interconnected shafts on a long, tablelike framework, with drawing boards flanking one side and six wheel-and-disk integrators on the other. Some of the drawing boards were configured to allow an operator to trace a curve with a pen that was linked to the Analyzer, thus providing input to the machine. The other drawing boards were configured to receive output from the Analyzer via a pen that drew a curve on paper fastened to the drawing board.

The wheel-and-disk integrator, which Hazen had first used in the revised Product Integraph, was the key to the operation of the Differential Analyzer. The rotational speed of the horizontal disk was the input to the integrator, and it represented one of the variables in the equation. The smaller wheel rolled on the top surface of the disk, and its speed, which was different from that of the disk, represented the integrator's output. The distance from the wheel to the center of the disk could be changed to accommodate the equation being solved, and the resulting geometry caused the two shafts to turn so that the output was the integral of the input. The integrators were linked mechanically to other devices that could add, subtract, multiply, and divide. Thus, the Differential Analyzer could solve complex equations involving many different mathematical operations. Because all the linkages and calculating devices were mechanical, the Differential Analyzer actually acted out each calculation. Computers of this type, which create an analogy to the physical world, are called analog computers.
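The wheel-and-disk mechanism is a physical embodiment of numerical integration: each small turn of the disk advances the independent variable, and the wheel, riding at a distance from the disk's center set by the current value of the integrand, turns in proportion to that value. A software analogy of that behavior (not a model of the actual machine's geometry):

```python
import math

# Software analogy of the wheel-and-disk integrator.  Each small turn
# of the disk advances x by dx; the wheel, positioned at distance y(x)
# from the disk's center, turns in proportion to y, so its accumulated
# rotation is the integral of y dx.
def integrate(y, x0, x1, steps=100_000):
    dx = (x1 - x0) / steps
    total = 0.0
    for i in range(steps):
        x = x0 + (i + 0.5) * dx    # disk position at mid-step
        total += y(x) * dx         # wheel rotation added this step
    return total

result = integrate(math.cos, 0.0, math.pi / 2)
print(result)                      # close to sin(pi/2) = 1
```

Chaining such integrators together, with their outputs driving one another's inputs through adders and multipliers, is what let the Analyzer act out entire differential equations.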
The Differential Analyzer fulfilled Bush's expectations, and students and researchers found it very useful. Although each different problem required Bush's team to set up a new series of mechanical linkages, the researchers using the calculations viewed this as a minor inconvenience. Students at MIT used the Differential Analyzer in research for doctoral dissertations, master's theses, and bachelor's theses. Other researchers worked on a wide range of problems with the Differential Analyzer, mostly in electrical engineering, but also in atomic physics, astrophysics, and seismology. An English researcher, Douglas Hartree, visited Bush's laboratory in 1933 to learn about the Differential Analyzer and to use it in his own work on the atomic field of mercury. When he returned to England, he built several analyzers based on his knowledge of MIT's machine. The U.S. Army also built a copy in order to carry out the complex calculations required to create artillery firing tables (which specified the proper barrel angle to achieve the desired range). Other analyzers were built by industry and universities around the world.

Vannevar Bush

One of the most politically powerful scientists of the twentieth century, Vannevar Bush was born in 1890 in Everett, Massachusetts. He studied at Tufts College near Boston, not only earning two degrees in engineering but also registering his first patent while still an undergraduate. He worked for the General Electric Company briefly after college and then conducted research on submarine detection for the U.S. Navy during World War I.

After the war he became a professor of electrical power transmission (and later dean of the engineering school) at the Massachusetts Institute of Technology (MIT). He also acted as a consultant for industry and started companies of his own, including (with two others) the Raytheon Corporation. While at MIT he developed the Product Integraph and Differential Analyzer to aid in solving problems related to electrical power transmission.

Starting in 1939, Bush became a key science administrator. He was president of the Carnegie Institution of Washington from 1939 until 1955, chaired the National Advisory Committee for Aeronautics from 1939 until 1941, in 1940 was appointed chairman of the President's National Defense Research Committee, and from 1941 until 1946 was director of the Office of Scientific Research and Development. This meant he was President Franklin Roosevelt's science adviser during World War II and oversaw wartime military research, including involvement in the Manhattan Project that built the first atomic bombs. After the war he worked for peaceful applications of atomic power and was instrumental in inaugurating the National Science Foundation, which he directed, in 1950. Between 1957 and 1959 he served as chairman of the MIT Corporation, retaining an honorary chairmanship thereafter.

All these political and administrative roles meant that he exercised enormous influence in deciding which scientific projects were supported financially. Having received many honorary degrees and awards, including the National Medal of Science (1964), Bush died in 1974.
Impact<br />
As successful as the Differential Analyzer had been, Bush wanted<br />
to make another, better analyzer that would be more precise, more<br />
convenient to use, <strong>and</strong> more mathematically flexible. In 1932, Bush<br />
began seeking money for his new machine, but because of the Depression<br />
it was not until 1936 that he received adequate funding for<br />
the Rockefeller Analyzer, as it came to be known. Bush left MIT in<br />
1938, but work on the Rockefeller Analyzer continued. It was first<br />
demonstrated in 1941, <strong>and</strong> by 1942, it was being used in the war effort<br />
to calculate firing tables <strong>and</strong> design radar antenna profiles. At<br />
the end of the war, it was the most important computer in existence.<br />
All the analyzers, which were mechanical computers, faced serious<br />
limitations in speed because of the momentum of the machinery,<br />
<strong>and</strong> in precision because of slippage <strong>and</strong> wear. The digital computers<br />
that were being developed after World War II (even at MIT)<br />
were faster, more precise, <strong>and</strong> capable of executing more powerful<br />
operations because they were electronic computers. As a result, during<br />
the 1950’s, they eclipsed differential analyzers such as those<br />
built by Bush. Descendants of the Differential Analyzer remained in<br />
use as late as the 1990’s, but they played only a minor role.<br />
See also Colossus computer; ENIAC computer; Mark I calculator;<br />
Personal computer; SAINT; UNIVAC computer.
Further Reading<br />
Bush, Vannevar. Pieces of the Action. New York: Morrow, 1970.<br />
Marcus, Alan I., <strong>and</strong> Howard P. Segal. Technology in America. Fort<br />
Worth, Tex.: Harcourt Brace College, 1999.<br />
Spencer, Donald D. Great Men <strong>and</strong> Women of Computing. Ormond<br />
Beach, Fla.: Camelot Publishing, 1999.<br />
Zachary, G. Pascal. Endless Frontier: Vannevar Bush, Engineer of the<br />
American Century. Cambridge, Mass.: MIT Press, 1999.
Dirigible<br />
The invention: A rigid lighter-than-air aircraft that played a major<br />
role in World War I <strong>and</strong> in international air traffic until a disastrous<br />
accident destroyed the industry.<br />
The people behind the invention:<br />
Ferdinand von Zeppelin (1838-1917), a retired German general<br />
Theodor Kober (1865-1930), Zeppelin’s private engineer<br />
Early Competition<br />
When the Montgolfier brothers launched the first hot-air balloon<br />
in 1783, engineers—especially those in France—began working on<br />
ways to use machines to control the speed <strong>and</strong> direction of balloons.<br />
They thought of everything: rowing through the air with silk-covered<br />
oars; building movable wings; using a rotating fan, an airscrew, or a<br />
propeller powered by a steam engine (1852) or an electric motor<br />
(1882). At the end of the nineteenth century, the internal combustion<br />
engine was invented. It promised higher speeds <strong>and</strong> more power.<br />
Up to this point, however, the balloons were not rigid.<br />
A rigid airship could be much larger than a balloon <strong>and</strong> could fly<br />
farther. In 1890, a rigid airship designed by David Schwarz of<br />
Dalmatia was tested in St. Petersburg, Russia. The test failed because<br />
there were problems with inflating the dirigible. A second<br />
test, in Berlin in 1897, was only slightly more successful, since the<br />
hull leaked <strong>and</strong> the flight ended in a crash.<br />
Schwarz’s airship was made of an entirely rigid aluminum cylinder.<br />
Ferdinand von Zeppelin had a different idea: His design was<br />
based on a rigid frame. Zeppelin knew about balloons from having<br />
fought in two wars in which they were used: the American Civil<br />
War of 1861-1865 <strong>and</strong> the Franco-Prussian War of 1870-1871. He<br />
wrote down his first “thoughts about an airship” in his diary on<br />
March 25, 1874, inspired by an article about flying <strong>and</strong> international<br />
mail. Zeppelin soon lost interest in this idea of civilian uses for an<br />
airship <strong>and</strong> concentrated instead on the idea that dirigible balloons<br />
might become an important part of modern warfare. He asked the
German government to fund his research, pointing out that France<br />
had a better military air force than Germany did. Zeppelin’s patriotism<br />
was what kept him trying, in spite of money problems <strong>and</strong><br />
technical difficulties.<br />
In 1893, in order to get more money, Zeppelin tried to persuade<br />
the German military <strong>and</strong> engineering experts that his invention was<br />
practical. Even though a government committee decided that his<br />
work was worth a small amount of funding, the army was not sure<br />
that Zeppelin’s dirigible was worth the cost. Finally, the committee<br />
chose Schwarz’s design. In 1896, however, Zeppelin won the support<br />
of the powerful Union of German Engineers, which in May,<br />
1898, gave him 800,000 marks to form a stock company called the<br />
Association for the Promotion of Airship Flights. In 1899, Zeppelin<br />
began building his dirigible in Manzell at Lake Constance. In July,<br />
1900, the airship was finished <strong>and</strong> ready for its first test flight.<br />
Several Attempts<br />
Zeppelin, together with his engineer, Theodor Kober, had worked<br />
on the design since May, 1892, shortly after Zeppelin’s retirement<br />
from the army. They had finished the rough draft by 1894, <strong>and</strong><br />
though they made some changes later, this was the basic design of<br />
the Zeppelin. An improved version was patented in December,<br />
1897.<br />
In the final prototype, called the LZ 1, the engineers tried to make<br />
the airship as light as possible. They used a light internal combustion<br />
engine <strong>and</strong> designed a frame made of the light metal aluminum.<br />
The airship was 128 meters long <strong>and</strong> had a diameter of 11.7<br />
meters when inflated. Twenty-four zinc-aluminum girders ran the<br />
length of the ship, drawn together at each end. Sixteen rings<br />
held the body together. The engineers stretched an envelope of<br />
smooth cotton over the framework to reduce wind resistance <strong>and</strong> to<br />
protect the gas bags from the sun’s rays. Seventeen gas bags made of<br />
rubberized cloth were placed inside the framework. Together they<br />
held more than 11,000 cubic meters of hydrogen gas, which would<br />
lift 11,090 kilograms. Two motor gondolas were attached to the<br />
sides, each with a 16-horsepower gasoline engine, spinning four<br />
propellers.
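The lift quoted above follows from simple buoyancy arithmetic: an airship's gross lift is its gas volume times the difference between the density of the displaced air and that of the lifting gas. A rough check in Python, assuming sea-level densities and a gas volume of about 11,300 cubic meters (the figure usually given for the LZ 1):

```python
# Assumed sea-level densities, in kilograms per cubic meter.
RHO_AIR = 1.225
RHO_HYDROGEN = 0.090

def gross_lift_kg(volume_m3: float) -> float:
    """Gross buoyant lift: mass of displaced air minus mass of lifting gas."""
    return volume_m3 * (RHO_AIR - RHO_HYDROGEN)

print(f"{gross_lift_kg(11_300):.0f} kg")  # roughly 12,800 kg for pure hydrogen
```

Impure hydrogen and operating allowances bring the practical figure down toward the roughly 11,000 kilograms of lift cited for the ship.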
Count Ferdinand von Zeppelin<br />
The Zeppelin, the first lighter-than-air craft that was powered<br />
<strong>and</strong> steerable, began as a retirement project.<br />
Count Ferdinand von Zeppelin was born near Lake Constance<br />
in southern Germany in 1838 <strong>and</strong> grew up in a family<br />
long used to aristocratic privilege <strong>and</strong> government service. After<br />
studying engineering at the University of Tübingen, he was<br />
commissioned as a lieutenant of engineers. In 1863 he traveled<br />
to the United States <strong>and</strong>, armed with a letter of introduction<br />
from President Abraham Lincoln, toured the Union emplacements.<br />
The observation balloons then used to see behind enemy<br />
lines impressed him. He learned all he could about them <strong>and</strong><br />
even ascended to seven hundred feet in one.<br />
His enthusiasm for airships stayed with him throughout his<br />
career, but he was not really able to apply himself to the problem<br />
until he retired (as a brigadier general) in 1890. Then he<br />
concentrated on the struggle to line up financing <strong>and</strong> attract talented<br />
help. He found investors for 90 percent of the money he<br />
needed <strong>and</strong> got the rest from his wife’s inheritance. The first<br />
LZ’s (Luftschiff Zeppelin) had troubles, but setbacks did not stop<br />
him. He was a stubborn, determined man. By the time he died<br />
in 1917 near Berlin he had seen ninety-two airships built. And<br />
because his design was so thoroughly associated with lighter-than-air<br />
vessels in the mind of the German public, they have<br />
ever after been known as zeppelins. However, he had already<br />
recognized their vulnerability as military aircraft, his main interest,<br />
<strong>and</strong> so he had turned his attention to designs for large<br />
airplanes as bombers.<br />
The test flight did not go well. The two main questions—whether<br />
the craft was strong enough <strong>and</strong> fast enough—could not be answered<br />
because little things kept going wrong; for example, a crankshaft<br />
broke <strong>and</strong> a rudder jammed. The first flight lasted no more<br />
than eighteen minutes, with a maximum speed of 13.7 kilometers<br />
per hour. During all three test flights, the airship was in the air for a<br />
total of only two hours, going no faster than 28.2 kilometers per<br />
hour.<br />
Zeppelin had to drop the project for some years because he ran<br />
out of money, <strong>and</strong> his company was dissolved. The LZ 1 was
wrecked in the spring of 1901. A second airship was tested in November,<br />
1905, <strong>and</strong> January, 1906. Both tests were unsuccessful, <strong>and</strong><br />
in the end the ship was destroyed during a storm.<br />
By 1906, however, the German government was convinced of the<br />
military usefulness of the airship, though it would not give money<br />
to Zeppelin unless he agreed to design one that could stay in the air<br />
for at least twenty-four hours. The third Zeppelin failed this test in<br />
the autumn of 1907. Finally, in the summer of 1908, the LZ 4 not only<br />
proved itself to the military but also attracted great publicity. It flew<br />
for more than twenty-four hours <strong>and</strong> reached a speed of more than<br />
60 kilometers per hour. Caught in a storm at the end of this flight,<br />
the airship was forced to land <strong>and</strong> exploded, but money came from<br />
all over Germany to build another.<br />
Impact<br />
Most rigid airships were designed <strong>and</strong> flown in Germany. Of the<br />
161 that were built between 1900 <strong>and</strong> 1938, 139 were made in Germany,<br />
<strong>and</strong> 119 were based on the Zeppelin design.<br />
More than 80 percent of the airships were built for the military.<br />
The Germans used more than one hundred for gathering information<br />
<strong>and</strong> for bombing during World War I (1914-1918). Starting in<br />
May, 1915, airships bombed Warsaw, Poland; Bucharest, Romania;<br />
Salonika, Greece; <strong>and</strong> London, England. This was mostly a fear tactic,<br />
since the attacks did not cause great damage, <strong>and</strong> the English antiaircraft<br />
defense improved quickly. By 1916, the German army had<br />
lost so many airships that it stopped using them, though the navy<br />
continued.<br />
Airships were first used for passenger flights in 1910. By 1914,<br />
the Delag (German Aeronautic Stock Company) used seven passenger<br />
airships for sightseeing trips around German cities. There were<br />
still problems with engine power <strong>and</strong> weather forecasting, <strong>and</strong> it<br />
was difficult to move the airships on the ground. After World War I,<br />
the Zeppelins that were left were given to the Allies as payment,<br />
<strong>and</strong> the Germans were not allowed to build airships for their own<br />
use until 1925.<br />
In the 1920’s <strong>and</strong> 1930’s, it became cheaper to use airplanes for
short flights, so airships were useful mostly for long-distance flight.<br />
A British airship made the first transatlantic flight in 1919. The British<br />
hoped to connect their empire by means of airships starting in<br />
1924, but the 1930 crash of the R-101, in which most of the leading<br />
English aeronauts were killed, brought that hope to an end.<br />
The United States Navy built the Akron (1931) <strong>and</strong> the Macon<br />
(1933) for long-range naval reconnaissance, but both airships crashed.<br />
Only the Germans continued to use airships on a regular basis. In<br />
1929, the world tour of the Graf Zeppelin was a success. Regular<br />
flights between Germany <strong>and</strong> South America started in 1932, <strong>and</strong> in<br />
1936, German airships bearing Nazi swastikas flew to Lakehurst,<br />
New Jersey. The tragic explosion of the hydrogen-filled Hindenburg<br />
in 1937, however, brought the era of the rigid airship to a close. The<br />
U.S. secretary of the interior vetoed the sale of nonflammable helium,<br />
fearing that the Nazis would use it for military purposes, <strong>and</strong><br />
the German government had to stop transatlantic flights for safety<br />
reasons. In 1940, the last two remaining rigid airships were destroyed.<br />
See also Airplane; Gyrocompass; Stealth aircraft; Supersonic<br />
passenger plane; Turbojet.<br />
Further Reading<br />
Brooks, Peter. Zeppelin: Rigid Airships, 1893-1940. London: Putnam,<br />
1992.<br />
Chant, Christopher. The Zeppelin: The History of German Airships from<br />
1900-1937. New York: Barnes <strong>and</strong> Noble Books, 2000.<br />
Griehl, Manfred, <strong>and</strong> Joachim Dressel. Zeppelin! The German Airship<br />
Story. New York: Sterling Publishing, 1990.<br />
Syon, Guillaume de. Zeppelin!: Germany <strong>and</strong> the Airship, 1900-1939.<br />
Baltimore: Johns Hopkins University Press, 2001.
Disposable razor<br />
The invention: An inexpensive shaving blade that replaced the traditional<br />
straight-edged razor <strong>and</strong> transformed shaving razors<br />
into a frequent household purchase item.<br />
The people behind the invention:<br />
King Camp Gillette (1855-1932), inventor of the disposable razor<br />
Steven Porter, the machinist who created the first three<br />
disposable razors for King Camp Gillette<br />
William Emery Nickerson (1853-1930), an expert machine<br />
inventor who created the machines necessary for mass<br />
production<br />
Jacob Heilborn, an industrial promoter who helped Gillette start<br />
his company <strong>and</strong> became a partner<br />
Edward J. Stewart, a friend <strong>and</strong> financial backer of Gillette<br />
Henry Sachs, an investor in the Gillette Safety Razor Company<br />
John Joyce, an investor in the Gillette Safety Razor Company<br />
William Painter (1838-1906), an inventor who inspired Gillette<br />
George Gillette, an inventor, King Camp Gillette’s father<br />
A Neater Way to Shave<br />
In 1895, King Camp Gillette thought of the idea of a disposable razor<br />
blade. Gillette spent years drawing different models, <strong>and</strong> finally<br />
Steven Porter, a machinist <strong>and</strong> Gillette’s associate, created from those<br />
drawings the first three disposable razors that worked. Gillette soon<br />
founded the Gillette Safety Razor Company, which became the leading<br />
seller of disposable razor blades in the United States.<br />
George Gillette, King Camp Gillette’s father, had been a newspaper<br />
editor, a patent agent, <strong>and</strong> an inventor. He never invented a very<br />
successful product, but he loved to experiment. He encouraged all<br />
of his sons to figure out how things work <strong>and</strong> how to improve on<br />
them. King was always inventing something new <strong>and</strong> had many<br />
patents, but he was unsuccessful in turning them into profitable<br />
businesses.<br />
Gillette worked as a traveling salesperson for Crown Cork <strong>and</strong>
Seal Company. William Painter, one of Gillette’s friends <strong>and</strong> the inventor<br />
of the crown cork, presented Gillette with a formula for making<br />
a fortune: Invent something that would constantly need to be replaced.<br />
Painter’s crown cork was used to cap beer <strong>and</strong> soda bottles.<br />
It was a tin cap covered with cork, used to form a tight seal over a<br />
bottle. Soda <strong>and</strong> beer companies could use a crown cork only once<br />
<strong>and</strong> needed a steady supply.<br />
King took Painter’s advice <strong>and</strong> began thinking of everyday items<br />
that needed to be replaced often. After owning a Star safety razor<br />
for some time, King realized that the razor blade had not been improved<br />
for a long time. He studied all the razors on the market <strong>and</strong><br />
found that both the common straight razor <strong>and</strong> the safety razor featured<br />
a heavy V-shaped piece of steel, sharpened on one side. King<br />
reasoned that a thin piece of steel sharpened on both sides would<br />
create a better shave <strong>and</strong> could be thrown away once it became dull.<br />
The idea of the disposable razor had been born.<br />
Gillette made several drawings of disposable razors. He then<br />
made a wooden model of the razor to better explain his idea.<br />
Gillette’s first attempt to construct a working model was unsuccessful,<br />
as the steel was too flimsy. Steven Porter, a Boston machinist, decided<br />
to try to make Gillette’s razor from his drawings. He produced<br />
three razors, <strong>and</strong> in the summer of 1899 King was the first<br />
man to shave with a disposable razor.<br />
Changing Consumer Opinion<br />
In the early 1900’s, most people considered a razor to be a once-in-a-lifetime<br />
purchase. Many fathers handed down their razors to<br />
their sons. Straight razors needed constant <strong>and</strong> careful attention to<br />
keep them sharp. The thought of throwing a razor in the garbage after<br />
several uses was contrary to the general public’s idea of a razor.<br />
If Gillette’s razor had not provided a much less painful <strong>and</strong> faster<br />
shave, it is unlikely that the disposable would have been a success.<br />
Even with its advantages, public opinion against the product was<br />
still difficult to overcome.<br />
Financing a company to produce the razor proved to be a major<br />
obstacle. King did not have the money himself, <strong>and</strong> potential investors<br />
were skeptical. Skepticism arose both because of public perceptions of<br />
the product <strong>and</strong> because of its manufacturing process. Mass<br />
production appeared to be impossible, but the disposable razor<br />
would never be profitable if produced using the methods used to<br />
manufacture its predecessor.<br />
William Emery Nickerson, an expert machine inventor, had looked<br />
at Gillette’s razor <strong>and</strong> said it was impossible to create a machine to<br />
produce it. He was convinced to reexamine the idea <strong>and</strong> finally created<br />
a machine that would create a workable blade. In the process,<br />
Nickerson changed Gillette’s original model. He improved the handle<br />
<strong>and</strong> frame so that it would better support the thin steel blade.<br />
In the meantime, Gillette was busy getting his patent assigned to<br />
the newly formed American Safety Razor Company, owned by<br />
Gillette, Jacob Heilborn, Edward J. Stewart, <strong>and</strong> Nickerson. Gillette<br />
owned considerably more shares than anyone else. Henry Sachs<br />
provided additional capital, buying shares from Gillette.<br />
The stockholders decided to rename the company the Gillette<br />
Safety Razor Company. It soon spent most of its money on machinery<br />
<strong>and</strong> lacked the capital it needed to produce <strong>and</strong> advertise its<br />
product. The only offer the company had received was from a group<br />
of New York investors who were willing to give $125,000 in exchange<br />
for 51 percent of the company. None of the directors wanted<br />
to lose control of the company, so they rejected the offer.<br />
John Joyce, a friend of Gillette, rescued the financially insecure<br />
new company. He agreed to buy $100,000 worth of bonds from the<br />
company for sixty cents on the dollar, purchasing the bonds gradually<br />
as the company needed money. He also received an equivalent<br />
amount of company stock. After an investment of $30,000, Joyce<br />
had the option of backing out. This deal enabled the company to<br />
start manufacturing <strong>and</strong> advertising.<br />
Impact<br />
The company used $18,000 to perfect the machinery to produce<br />
the disposable razor blades <strong>and</strong> razors. Originally the directors<br />
wanted to sell each razor with twenty blades for three dollars. Joyce<br />
insisted on a price of five dollars. In 1903, five dollars was about<br />
one-third of the average American’s weekly salary, <strong>and</strong> a high-quality<br />
straight razor could be purchased for about half that price.
The other directors were skeptical, but Joyce threatened to buy up<br />
all the razors for three dollars <strong>and</strong> sell them himself for five dollars.<br />
Joyce had the financial backing to make this promise good, so the directors<br />
agreed to the higher price.<br />
The Gillette Safety Razor Company contracted with Townsend &<br />
Hunt for exclusive sales. The contract stated that Townsend & Hunt<br />
would buy 50,000 razors with twenty blades each during a period of<br />
slightly more than a year <strong>and</strong> would purchase 100,000 sets per year<br />
for the following four years. The first advertisement for the product<br />
appeared in System Magazine in early fall of 1903, offering the razors<br />
by mail order. By the end of 1903, only fifty-one razors had been<br />
sold.<br />
Since Gillette <strong>and</strong> most of the directors of the company were not<br />
salaried, Gillette had needed to keep his job as salesman with<br />
Crown Cork <strong>and</strong> Seal. At the end of 1903, he received a promotion<br />
that meant relocation from Boston to London. Gillette did not want<br />
to go <strong>and</strong> pleaded with the other directors, but they insisted that the<br />
company could not afford to put him on salary. The company decided<br />
to reduce the number of blades in a set from twenty to twelve<br />
in an effort to increase profits without noticeably raising the cost of a<br />
set. Gillette resigned the title of company president <strong>and</strong> left for England.<br />
Shortly thereafter, Townsend & Hunt changed its name to the<br />
Gillette Sales Company, <strong>and</strong> three years later the sales company<br />
sold out to the parent company for $300,000. Sales of the new type<br />
of razor were increasing rapidly in the United States, <strong>and</strong> Joyce<br />
wanted to sell patent rights to European companies for a small percentage<br />
of sales. Gillette thought that that would be a horrible mistake<br />
<strong>and</strong> quickly traveled back to Boston. He had two goals: to stop<br />
the sale of patent rights, based on his conviction that the foreign<br />
market would eventually be very lucrative, <strong>and</strong> to become salaried<br />
by the company. Gillette accomplished both these goals <strong>and</strong> soon<br />
moved back to Boston.<br />
Despite the fact that Joyce <strong>and</strong> Gillette had been good friends for<br />
a long time, their business views often differed. Gillette set up a<br />
holding company in an effort to gain back controlling interest in the<br />
Gillette Safety Razor Company. He borrowed money <strong>and</strong> convinced<br />
his allies in the company to invest in the holding company, eventually<br />
regaining control. He was reinstated as president of the company.<br />
One clear disagreement was that Gillette wanted to relocate the company<br />
to Newark, New Jersey, <strong>and</strong> Joyce thought that that would be a<br />
waste of money. Gillette authorized company funds to be invested in<br />
a Newark site. The idea was later dropped, costing the company a<br />
large amount of capital. Gillette was not a very wise businessman<br />
King Camp Gillette<br />
At age sixteen, King Camp Gillette (1855-1932) saw all of his<br />
family’s belongings consumed in the Great Chicago Fire. He<br />
had to drop out of school because of it <strong>and</strong> earn his own living.<br />
The catastrophe <strong>and</strong> the sudden loss of security that followed<br />
shaped his ambitions. He was not about to risk destitution ever<br />
again.<br />
He made himself a successful traveling salesman but still<br />
felt he was earning too little. So he turned his mind to inventions,<br />
hoping to get rich quick. The disposable razor was his<br />
only venture, but it was enough. After its long preparation for<br />
marketing Gillette’s invention <strong>and</strong> some subsequent turmoil<br />
among its board of directors, the Gillette Safety Razor Company<br />
was a phenomenal success <strong>and</strong> a bonanza for Gillette. He<br />
became wealthy. He retired in 1913, just ten years after the company<br />
opened, his security assured.<br />
His mother had written cookbooks, one of which was a bestseller.<br />
As an adult, Gillette got the writing bug himself <strong>and</strong><br />
wrote four books, but his theme was far loftier than cooking—<br />
social theory <strong>and</strong> security for the masses. Like Karl Marx he argued<br />
that economic competition squanders human resources<br />
<strong>and</strong> leads to deprivation, which in turn leads to crime. So, he<br />
reasoned, getting rid of economic competition will end misery<br />
<strong>and</strong> crime. He recommended that a centralized agency plan<br />
production <strong>and</strong> oversee distribution, a recommendation that<br />
America resoundingly ignored. However, other ideas of his<br />
eventually found acceptance, such as air conditioning for workers<br />
<strong>and</strong> government assistance for the unemployed.<br />
In 1922 Gillette moved to Los Angeles, California, <strong>and</strong> devoted<br />
himself to raising oranges <strong>and</strong> collecting his share of the<br />
company profits. However, he seldom felt free enough with his<br />
money to donate it to charity or finance social reform.
<strong>and</strong> made many costly mistakes. Joyce even accused him of deliberately<br />
trying to keep the stock price low so that Gillette could purchase<br />
more stock. Joyce eventually bought out Gillette, who retained<br />
his title as president but had little say about company<br />
business.<br />
With Gillette out of a management position, the company became<br />
more stable <strong>and</strong> more profitable. The biggest problem the<br />
company faced was that it would soon lose its patent rights. After<br />
the patent expired, the company would have competition. The company<br />
decided that it could either cut prices (<strong>and</strong> therefore profits) to<br />
compete with the lower-priced disposables that would inevitably<br />
enter the market, or it could create a new line of even better razors.<br />
The company opted for the latter strategy. Weeks before the patent<br />
expired, the Gillette Safety Razor Company introduced a new line<br />
of razors.<br />
Both World War I <strong>and</strong> World War II were big boosts to the company,<br />
which contracted with the government to supply razors to almost<br />
all the troops. This transaction created a huge increase in sales<br />
<strong>and</strong> introduced thous<strong>and</strong>s of young men to the Gillette razor. Many<br />
of them continued to use Gillettes after returning from the war.<br />
Aside from the shaky start of the company, its worst financial difficulties<br />
were during the Great Depression. Most Americans simply<br />
could not afford Gillette blades, <strong>and</strong> many used a blade for an extended<br />
time <strong>and</strong> then resharpened it rather than throwing it away. If<br />
it had not been for the company’s foreign markets, the company<br />
would not have shown a profit during the Great Depression.<br />
Gillette’s obstinate refusal to sell patent rights to foreign investors<br />
proved to be an excellent decision.<br />
The company advertised through sponsoring sporting events,<br />
including the World Series. Gillette had many celebrity endorsements<br />
from well-known baseball players. Before it became too expensive<br />
for one company to sponsor an entire event, Gillette had<br />
exclusive advertising during the World Series, various boxing<br />
matches, the Kentucky Derby, <strong>and</strong> football bowl games. Sponsoring<br />
these events was costly, but sports spectators were the typical<br />
Gillette customers.<br />
The Gillette Company created many products that complemented<br />
razors <strong>and</strong> blades, including shaving cream, women’s razors,<br />
<strong>and</strong> electric razors. The company expanded into new products<br />
including women’s cosmetics, writing utensils, deodorant, <strong>and</strong><br />
wigs. One of the main reasons for obtaining a more diverse product<br />
line was that a one-product company is less stable, especially in a<br />
volatile market. The Gillette Company had learned that lesson in<br />
the Great Depression. Gillette continued to thrive by following the<br />
principles the company had used from the start. The majority of<br />
Gillette’s profits came from foreign markets, <strong>and</strong> its employees<br />
looked to improve products <strong>and</strong> find opportunities in other departments<br />
as well as their own.<br />
See also Contact lenses; Memory metal; Steelmaking process.<br />
Further Reading<br />
Adams, Russell B., Jr. King C. Gillette: The Man <strong>and</strong> His Wonderful<br />
Shaving Device. Boston: Little, Brown, 1978.<br />
Dowling, Tim. Inventor of the Disposable Culture: King Camp Gillette,<br />
1855-1932. London: Short, 2001.<br />
“Gillette: Blade-runner.” The Economist 327 (April 10, 1993).<br />
Killgren, Lucy. “Nicking Gillette.” Marketing Week 22 (June 17, 1999).<br />
McKibben, Gordon. Cutting Edge: Gillette’s Journey to Global Leadership.<br />
Boston, Mass.: Harvard Business School Press, 1998.<br />
Thomas, Robert J. New Product Success Stories: Lessons from Leading<br />
Innovators. New York: John Wiley, 1995.<br />
Zeien, Alfred M. The Gillette Company. New York: Newcomen Society<br />
of the United States, 1999.
Dolby noise reduction<br />
The invention: An electronic device that raises the signal-to-noise<br />
ratio of sound recordings <strong>and</strong> greatly improves the sound quality<br />
of recorded music.<br />
The people behind the invention:<br />
Emil Berliner (1851-1929), a German inventor<br />
Ray Milton Dolby (1933- ), an American inventor<br />
Thomas Alva Edison (1847-1931), an American inventor<br />
Phonographs, Tapes, and Noise Reduction
The main use of record, tape, and compact disc players is to listen to music, although they are also used to listen to recorded speeches, messages, and various forms of instruction. Thomas Alva Edison invented the first sound-reproducing machine, which he called the “phonograph,” and patented it in 1877. Ten years later, a practical phonograph (the “gramophone”) was marketed by a German, Emil Berliner. Phonographs recorded sound by using diaphragms that vibrated in response to sound waves and controlled needles that cut grooves representing those vibrations into the first phonograph records, which in Edison’s machine were metal cylinders and in Berliner’s were flat discs. The recordings were then played by reversing the recording process: Placing a needle in the groove in the recorded cylinder or disc caused the diaphragm to vibrate, re-creating the original sound that had been recorded.
In the 1920’s, electrical recording methods were developed that produced higher-quality recordings, and then, in the 1930’s, stereophonic recording was developed by various companies, including the British company Electrical and Musical Industries (EMI). Almost simultaneously, the technology of tape recording was developed. By the 1940’s, long-playing stereo records and tapes were widely available. As recording techniques improved further, tapes became very popular, and by the 1960’s, they had evolved into both studio master recording tapes and the audio cassettes used by consumers.
Hisses and other noises associated with sound recording and its environment greatly diminished the quality of recorded music. In 1967, Ray Dolby invented a noise reducer, later named “Dolby A,” that could be used by recording studios to improve tape signal-to-noise ratios. Several years later, his “Dolby B” system, designed for home use, became standard equipment in all types of playback machines. Later, Dolby and others designed improved noise-suppression systems.
Recording and Tape Noise
Sound is made up of vibrations of varying frequencies—sound waves—that sound recorders can convert into grooves on plastic records, varying magnetic arrangements on plastic tapes covered with iron particles, or tiny pits on compact discs. The following discussion will focus on tape recordings, for which the original Dolby noise reducers were designed.

Tape recordings are made by a process that converts sound waves into electrical impulses that cause the iron particles in a tape to reorganize themselves into particular magnetic arrangements. The process is reversed when the tape is played back. In this process, the particle arrangements are translated first into electrical impulses and then into sound that is produced by loudspeakers. Erasing a tape causes the iron particles to move back into their original spatial arrangement.

Whenever a recording is made, undesired sounds such as hisses, hums, pops, and clicks can mask the nuances of recorded sound, annoying and fatiguing listeners. The first attempts to do away with undesired sounds (noise) involved making tapes, recording devices, and recording studios quieter. Such efforts did not, however, remove all undesired sounds.

Furthermore, advances in recording technology increased the problem of noise by producing better instruments that “heard” and transmitted to recordings increased levels of noise. Such noise is often caused by the components of the recording system; tape hiss is an example of such noise. This type of noise is most discernible in quiet passages of recordings, because loud recorded sounds often mask it.
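Why hiss stands out in quiet passages can be made concrete with the usual decibel measure of signal-to-noise ratio. The formula below is standard audio engineering practice, not something taken from this article:

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels for RMS amplitudes."""
    return 20 * math.log10(signal_rms / noise_rms)

# A constant tape hiss (RMS amplitude 0.001) behind a loud passage
# versus behind a quiet one (the amplitudes are illustrative values):
loud_passage = snr_db(1.0, 0.001)    # 60 dB: the hiss is masked
quiet_passage = snr_db(0.01, 0.001)  # 20 dB: the hiss is clearly audible
```

Because the noise floor stays fixed while the music level falls, the ratio collapses in quiet passages, which is exactly where listeners notice tape hiss.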
Ray Dolby

Ray Dolby, born in Portland, Oregon, in 1933, became an electronics engineer while still in high school in 1952. That is when he began working part time for Ampex Corporation, helping develop the first videotape recorder. He was responsible for the electronics in the Ampex VTR, which was marketed in 1956. The next year he finished a bachelor of science degree at Stanford University, won a Marshall Scholarship and National Science Foundation grant, and went to Cambridge University in England for graduate studies. He received a Ph.D. in 1961 and a fellowship to Pembroke College, during which he also consulted for the United Kingdom Atomic Energy Authority.

After two years in India as a United Nations adviser, he set up Dolby Laboratories in London. It was there that he produced the sound-suppression equipment that made him famous to audiophiles and moviegoers, particularly in the 1970’s for the Dolby stereo (“surround sound”) that enlivened such blockbusters as Star Wars. In 1976 he moved to San Francisco and opened new offices for his company. The holder of more than fifty patents, Dolby published monographs on videotape recording, long-wavelength X-ray analysis, and noise reduction.

He is among the most honored scientists in the recording industry. Among many other awards, he received an Oscar, an Emmy, the Samuel L. Warner Memorial Award, gold and silver medals from the Audio Engineering Society, and the National Medal of Technology. England made him an honorary Officer of the Most Excellent Order of the British Empire, and Cambridge University and York University awarded him honorary doctorates.
Because of the problem of noise in quiet passages of recorded sound, one early attempt at noise suppression involved the reduction of noise levels by using “dynaural” noise suppressors. These devices did not alter the loud portions of a recording; instead, they reduced the very high and very low frequencies in the quiet passages in which noise became most audible. The problem with such devices was, however, that removing the high and low frequencies could also affect the desirable portions of the recorded sound. These suppressors could not distinguish desirable from undesirable sounds. As recording techniques improved, dynaural noise suppressors caused more and more problems, and their use was finally discontinued.
Another approach to noise suppression is sound compression during the recording process. This compression is based on the fact that most noise remains at a constant level throughout a recording, regardless of the sound level of a desired signal (such as music). To carry out sound compression, the lowest-level signals in a recording are electronically elevated above the sound level of all noise. Musical nuances can be lost when the process is carried too far, because the maximum sound level is not increased by devices that use sound compression. To return the music or other recorded sound to its normal sound range for listening, devices that “expand” the recorded music on playback are used. Two potential problems associated with the use of sound compression and expansion are the difficulty of matching the two processes and the introduction into the recording of noise created by the compression devices themselves.
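The compression-expansion scheme (“companding”) can be sketched as a pair of complementary power-law curves. This is a deliberately simplified, hypothetical model (real Dolby processors act only on low-level signals in selected frequency bands), but it shows why the recording and playback stages must match exactly:

```python
def compress(samples, ratio=0.5):
    """Boost low-level samples: |y| = |x| ** ratio.
    Hypothetical power-law compressor for illustration only."""
    return [abs(s) ** ratio * (1 if s >= 0 else -1) for s in samples]

def expand(samples, ratio=0.5):
    """Exact inverse of compress: restores the original dynamic range."""
    return [abs(s) ** (1 / ratio) * (1 if s >= 0 else -1) for s in samples]

original = [0.9, 0.01, -0.2, 0.0]
restored = expand(compress(original))
# restored matches original; any constant tape noise added between the
# two stages is pushed down when the quiet material is expanded back
```

If the expansion curve does not mirror the compression curve exactly, the restored signal is distorted, which is the matching problem described above.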
In 1967, Ray Dolby developed Dolby A to solve these problems as they related to tape noise (but not to microphone signals) in the recording and playing back of studio master tapes. The system operated by carrying out ten-decibel compression during recording and then restoring (noiselessly) the range of the music on playback. This was accomplished by expanding the sound exactly to its original range. Dolby A was very expensive and was thus limited to use in recording studios. In the early 1970’s, however, Dolby invented the less expensive Dolby B system, which was intended for consumers.
Consequences

The development of Dolby A and Dolby B noise-reduction systems is one of the most important contributions to the high-quality recording and reproduction of sound. For this reason, Dolby A quickly became standard in the recording industry. In similar fashion, Dolby B was soon incorporated into virtually every high-fidelity stereo cassette deck to be manufactured.
Dolby’s discoveries spurred advances in the field of noise reduction. For example, the German company Telefunken and the Japanese companies Sanyo and Toshiba, among others, developed their own noise-reduction systems. Dolby Laboratories countered by producing an improved system: Dolby C. The competition in the area of noise reduction continues, and it will continue as long as changes in recording technology produce new, more sensitive recording equipment.

See also Cassette recording; Compact disc; Electronic synthesizer; FM radio; Radio; Transistor; Transistor radio; Walkman cassette player.
Further Reading
Alkin, E. G. M. Sound Recording and Reproduction. 3d ed. Boston: Focal Press, 1996.
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Wile, Frederic William. Emile Berliner, Maker of the Microphone. New York: Arno Press, 1974.
Electric clock

The invention: Electrically powered time-keeping device with a quartz resonator that has led to the development of extremely accurate, relatively inexpensive electric clocks that are used in computers and microprocessors.

The person behind the invention:
Warren Alvin Marrison (1896-1980), an American scientist
From Complex Mechanisms to Quartz Crystals

Warren Alvin Marrison’s fabrication of the electric clock began a new era in time-keeping. Electric clocks are more accurate and more reliable than mechanical clocks, since they have fewer moving parts and are less likely to malfunction.
An electric clock is a device that generates a string of electric pulses. The most frequently used electric clocks are called “free running” and “periodic,” which means that they generate a continuous sequence of electric pulses that are equally spaced. There are various kinds of electronic “oscillators” (materials that vibrate) that can be used to manufacture electric clocks.

The material most commonly used as an oscillator in electric clocks is crystalline quartz. Because quartz (silicon dioxide) is a completely oxidized compound (which means that it does not deteriorate readily) and is virtually insoluble in water, it is chemically stable and resists chemical processes that would break down other materials. Quartz is a “piezoelectric” material, which means that it is capable of generating electricity when it is subjected to pressure or stress of some kind. In addition, quartz has the advantage of generating electricity at a very stable frequency, with little variation. For these reasons, quartz is an ideal material to use as an oscillator.
The Quartz Clock

Early electric clock. (PhotoDisc)

A quartz clock is an electric clock that makes use of the piezoelectric properties of a quartz crystal. When a quartz crystal vibrates, a difference of electric potential is produced between two of its faces. The crystal has a natural frequency (rate) of vibration that is determined by its size and shape. If the crystal is placed in an oscillating electric circuit that has a frequency that is nearly the same as that of the crystal, it will vibrate at its natural frequency and will cause the frequency of the entire circuit to match its own frequency.
Piezoelectricity is electricity, or “electric polarity,” that is caused by the application of mechanical pressure on a “dielectric” material (one that does not conduct electricity), such as a quartz crystal. The process also works in reverse; if an electric charge is applied to the dielectric material, the material will experience a mechanical distortion. This reciprocal relationship is called “the piezoelectric effect.” The phenomenon of electricity being generated by the application of mechanical pressure is called the direct piezoelectric effect, and the phenomenon of mechanical stress being produced as a result of the application of electricity is called the converse piezoelectric effect.
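In the simplified one-dimensional notation common in physics textbooks (the symbols are standard conventions, not drawn from this article), the two effects can be written as:

```latex
% Direct effect: applied stress T produces electric displacement D
D = d\,T
% Converse effect: applied electric field E produces strain S
S = d\,E
```

The same piezoelectric coefficient $d$ appears in both relations, which is the reciprocity described above.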
Warren Alvin Marrison

Born in Inverary, Canada, in 1896, Warren Alvin Marrison completed high school at Kingston Collegiate Institute in Ontario and attended Queen’s University in Kingston, where he studied science. World War I interrupted his studies, and while serving in the Royal Flying Corps as an electronics researcher, he began his life-long interest in radio. He graduated from university with a degree in engineering physics in 1920, transferred to Harvard University in 1921, and earned a master’s degree.

After his studies, he worked for the Western Electric Company in New York, helping to develop a method to record sound on film. He moved to the company’s Bell Laboratory in 1925 and studied how to produce frequency standards for radio transmissions. This research led him to use quartz crystals as oscillators, and he was able to step down the frequency enough that it could power a motor. Because the motor revolved in step with the crystal’s vibrations, he could determine the number of vibrations per time unit of the crystal and set a frequency standard. However, because the vibrations were constant over time, the crystal also measured time, and a new type of clock was born.

For his work, Marrison received the British Horological Institute’s Gold Medal in 1947 and the Clockmakers’ Company’s Tompion Medal in 1955. He died in California in 1980.

When a quartz crystal is used to create an oscillator, the natural frequency of the crystal can be used to produce other frequencies that can power clocks. The natural frequency of a quartz crystal is nearly constant if precautions are taken when it is cut and polished and if it is maintained at a nearly constant temperature and pressure. After a quartz crystal has been used for some time, its frequency usually varies slowly as a result of physical changes. If allowances are made for such changes, quartz-crystal clocks such as those used in laboratories can be manufactured that will accumulate errors of only a few thousandths of a second per month. The quartz crystals that are typically used in watches, however, may accumulate errors of tens of seconds per year.
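Producing a usable tick rate from the crystal’s fixed natural frequency is a matter of repeated division. The sketch below assumes the 32,768 Hz (2 to the 15th power) crystal commonly used in wristwatches, a typical figure rather than one given in this article:

```python
CRYSTAL_HZ = 32_768  # a typical watch-crystal frequency (2**15 Hz);
                     # this value is an assumption, not from the article

# Each stage of a binary divider halves the frequency, so fifteen
# cascaded stages step 32,768 Hz down to the 1 Hz pulse that
# advances the clock display once per second.
freq = CRYSTAL_HZ
for stage in range(15):
    freq //= 2

print(freq)  # 1
```

Choosing a power-of-two crystal frequency is what makes the divider chain a simple cascade of identical halving stages.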
There are other materials that can be used to manufacture accurate electric clocks. For example, clocks that use the element rubidium typically would accumulate errors no larger than a few ten-thousandths of a second per year, and those that use the element cesium would experience errors of only a few millionths of a second per year. Quartz is much less expensive than rarer materials such as rubidium and cesium, and it is easy to use in such common applications as computers. Thus, despite their relative inaccuracy, electric quartz clocks are extremely useful and popular, particularly for applications that require accurate timekeeping over a relatively short period of time. In such applications, quartz clocks may be adjusted periodically to correct for accumulated errors.
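These error figures can be restated as fractional accuracies with simple arithmetic. The numbers below plug in rough values consistent with the article’s descriptions (for example, 20 seconds per year for a watch crystal); the exact figures are illustrative:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

def fractional_error(error_seconds: float, interval_seconds: float) -> float:
    """Accumulated timing error as a fraction of the elapsed interval."""
    return error_seconds / interval_seconds

# Illustrative per-year error budgets:
watch_quartz = fractional_error(20, SECONDS_PER_YEAR)    # "tens of seconds"
rubidium = fractional_error(2e-4, SECONDS_PER_YEAR)      # "a few ten-thousandths"
cesium = fractional_error(3e-6, SECONDS_PER_YEAR)        # "a few millionths"
# watch_quartz is about 6e-7, i.e. under one part per million;
# the rubidium and cesium figures are smaller by several orders of magnitude
```

Even the “inaccurate” watch crystal holds better than one part per million, which explains why periodic adjustment is enough for everyday use.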
Impact

The electric quartz clock has contributed significantly to the development of computers and microprocessors. The computer’s control unit controls and synchronizes all data transfers and transformations in the computer system and is the key subsystem in the computer itself. Every action that the computer performs is implemented by the control unit.

The computer’s control unit uses inputs from a quartz clock to derive timing and control signals that regulate the actions in the system that are associated with each computer instruction. The control unit also accepts, as input, control signals generated by other devices in the computer system.
The other primary impact of the quartz clock is in making the construction of multiphase clocks a simple task. A multiphase clock is a clock that has several outputs that oscillate at the same frequency. These outputs may generate electric waveforms of different shapes or of the same shape, which makes them useful for various applications. It is common for a computer to incorporate a single-phase quartz clock that is used to generate a two-phase clock.
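A two-phase clock derived from a single-phase source can be modeled minimally as one output that follows the input and one that mirrors it. This is an illustrative sketch only; real hardware designs also insert guard time so the two phases never overlap:

```python
def two_phase(single_phase):
    """Derive two same-frequency outputs from one clock waveform,
    given as a list of 0/1 logic levels. phi1 follows the input and
    phi2 is its complement (a simplified model; hardware adds
    non-overlap guard time between the phases)."""
    phi1 = list(single_phase)
    phi2 = [1 - level for level in single_phase]
    return phi1, phi2

clk = [0, 1, 0, 1, 0, 1]       # single-phase input
phi1, phi2 = two_phase(clk)
# phi1 == [0, 1, 0, 1, 0, 1] and phi2 == [1, 0, 1, 0, 1, 0]:
# same frequency, opposite shape, as the text describes
```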
See also Atomic clock; Carbon dating; Electric refrigerator; Fluorescent lighting; Microwave cooking; Television; Vacuum cleaner; Washing machine.
Further Reading
Barnett, Jo Ellen. Time’s Pendulum: From Sundials to Atomic Clocks, the Fascinating History of Time Keeping and How Our Discoveries Changed the World. San Diego: Harcourt Brace, 1999.
Dennis, Maggie, and Carlene Stephens. “Engineering Time: Inventing the Electronic Wristwatch.” British Journal for the History of Science 33, no. 119 (December, 2000).
Ganeri, Anita. From Candle to Quartz Clock: The Story of Time and Timekeeping. London: Evans Brothers, 1996.
Thurber, Karl. “All the Time in the World.” Popular Electronics 14, no. 10 (October, 1997).
Electric refrigerator

The invention: An electrically powered and hermetically sealed food-storage appliance that replaced iceboxes, improved production, and lowered food-storage costs.

The people behind the invention:
Marcel Audiffren, a French monk
Christian Steenstrup (1873-1955), an American engineer
Fred Wolf, an American engineer

Ice Preserves America’s Food
Before the development of refrigeration in the United States, a relatively warm climate made it difficult to preserve food. Meat spoiled within a day and milk could spoil within an hour after milking. In early America, ice was stored below ground in icehouses that had roofs at ground level. George Washington had a large icehouse at his Mount Vernon estate. By 1876, America was consuming more than 2 million tons of ice each year, which required 4,000 horses and 10,000 men to deliver.
Several related inventions were needed before mechanical refrigeration was developed. James Watt invented the condenser, an important refrigeration system component, in 1769. In 1805, Oliver Evans presented the idea of continuous circulation of a refrigerant in a closed cycle. In this closed cooling cycle, a liquid refrigerant evaporates to a gas at low temperature, absorbing heat from its environment and thereby producing “cold,” which is circulated around an enclosed cabinet. To maintain this cooling cycle, the refrigerant gas must be returned to liquid form through condensation by compression. The first closed-cycle vapor-compression refrigerator, which was patented by Jacob Perkins in 1834, used ether as a refrigerant.
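The effectiveness of such a closed cycle is conventionally measured by its coefficient of performance (COP): the heat moved out of the cold compartment per unit of work supplied to the compressor. The Carnot bound used below is standard thermodynamics rather than a figure stated in this article, and the temperatures are illustrative:

```python
def carnot_cop(t_cold_k: float, t_hot_k: float) -> float:
    """Ideal (Carnot) coefficient of performance for a refrigeration
    cycle moving heat from t_cold_k to t_hot_k, both in kelvin.
    Real vapor-compression machines achieve only a fraction of this."""
    return t_cold_k / (t_hot_k - t_cold_k)

# A food compartment at 275 K (about 2 degrees Celsius) rejecting
# heat to a 300 K (about 27 degrees Celsius) kitchen:
cop = carnot_cop(275.0, 300.0)
# cop == 11.0: at best, 11 joules of heat moved per joule of work
```

The bound shrinks as the temperature gap widens, which is one reason freezer compartments cost more energy per unit of heat removed than the main food compartment.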
Iceboxes were used in homes before refrigerators were developed. Ice was cut from lakes and rivers in the northern United States or produced by ice machines in the southern United States. An ice machine using air was patented by John Gorrie at New Orleans in 1851. Ferdinand Carre introduced the first successful commercial ice machine, which used ammonia as a refrigerant, in 1862, but it was too large for home use and produced only a pound of ice per hour. Ice machinery became very dependable after 1890 but was plagued by low efficiency. Very warm summers in 1890 and 1891 cut natural ice production dramatically and increased demand for mechanical ice production. Ice consumption continued to increase after 1890; by 1914, 21 million tons of ice were used annually. The high prices charged for ice and the extremely low efficiency of home iceboxes gradually led the public to demand a substitute for ice refrigeration.
Refrigeration for the Home

Domestic refrigeration required a compact unit with a built-in electric motor that did not require supervision or maintenance. Marcel Audiffren, a French monk, conceived the idea of an electric refrigerator for home use around 1910. The first electric refrigerator, which was invented by Fred Wolf in 1913, was called the Domelre, which stood for domestic electric refrigerator. This machine used condensation equipment that was housed in the home’s basement. In 1915, Alfred Mellowes built the first refrigerator to contain all of its components; this machine was known as Guardian’s Frigerator. General Motors acquired Guardian in 1918 and began to mass-produce refrigerators. Guardian was renamed Frigidaire in 1919. In 1918, the Kelvinator Company, run by Edmund Copeland, built the first refrigerator with automatic controls, the most important of which was the thermostatic switch. Despite these advances, by 1920 only a few thousand homes had refrigerators, which cost about $1,000 each.
The General Electric Company (GE) purchased the rights to the General Motors refrigerator, which was based on an improved design submitted by one of its engineers, Christian Steenstrup. Steenstrup’s innovative design included a motor and reciprocating compressor that were hermetically sealed with the refrigerant. This unit, known as the GE Monitor Top, was first produced in 1927. A patent on this machine was filed for in 1926 and granted to Steenstrup in 1930. Steenstrup became chief engineer of GE’s electric refrigeration department and accumulated thirty-nine additional patents in refrigeration over the following years. By 1936, he had more than one hundred patents to his credit in refrigeration and other areas.
Further refinement of the refrigerator evolved with the development of Freon, a nonexplosive, nontoxic, and noncorrosive refrigerant discovered by Thomas Midgley, Jr., in 1928. Freon used lower pressures than ammonia did, which meant that lighter materials and lower temperatures could be used in refrigeration.
During the years following the introduction of the Monitor Top, the cost of refrigerators dropped from $1,000 in 1918 to $400 in 1926, and then to $170 in 1935. Sales of units increased from 200,000 in 1926 to 1.5 million in 1935.

Initially, refrigerators were sold separately from their cabinets, which commonly were used wooden iceboxes. Frigidaire began making its own cabinets in 1923, and by 1930, refrigerators that combined machinery and cabinet were sold.

Throughout the 1930’s, refrigerators were well-insulated, hermetically sealed steel units that used evaporator coils to cool the food compartment. The refrigeration system was transferred from on top of to below the food storage area, which made it possible to raise the food storage area to a more convenient level. Special light bulbs that produced radiation to kill taste- and odor-bearing bacteria were used in refrigerators. Other developments included sliding shelves, shelves in doors, rounded and styled cabinet corners, ice cube trays, and even a built-in radio.
The freezing capacity of early refrigerators was inadequate. Only a package or two of food could be kept cool at a time, ice cubes melted, and only a minimal amount of food could be kept frozen. The two-temperature refrigerator, consisting of one compartment providing normal cooling and a separate compartment for freezing, was developed by GE in 1939. Evaporator coils for cooling were placed within the refrigerator walls, providing more cooling capacity and more space for food storage. Frigidaire introduced a Cold Wall compartment, while White-Westinghouse introduced a Colder Cold system. After World War II, GE introduced the refrigerator-freezer combination.

Impact
Audiffren, Wolf, Steenstrup, and others combined the earlier inventions of Watt, Perkins, and Carre with the development of electric motors to produce the electric refrigerator. The development of domestic electric refrigeration had a tremendous effect on the quality of home life. Reliable, affordable refrigeration allowed consumers a wider selection of food and increased flexibility in their daily consumption. The domestic refrigerator with increased freezer capacity spawned the growth of the frozen food industry. Without the electric refrigerator, households would still depend on unreliable supplies of ice.

See also Fluorescent lighting; Food freezing; Freeze-drying; Microwave cooking; Refrigerant gas; Robot (household); Tupperware; Vacuum cleaner; Washing machine.
Further Reading
Anderson, Oscar Edward. Refrigeration in America: A History of a New Technology and Its Impact. Princeton: Princeton University Press, 1953.
Donaldson, Barry, Bernard Nagengast, and Gershon Meckler. Heat and Cold: Mastering the Great Indoors: A Selective History of Heating, Ventilation, Air-Conditioning and Refrigeration from the Ancients to the 1930’s. Atlanta, Ga.: American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1994.
Woolrich, Willis Raymond. The Men Who Created Cold: A History of Refrigeration. New York: Exposition Press, 1967.
Electrocardiogram

The invention: Device for analyzing the electrical currents of the human heart.

The people behind the invention:
Willem Einthoven (1860-1927), a Dutch physiologist and winner of the 1924 Nobel Prize in Physiology or Medicine
Augustus D. Waller (1856-1922), a British physician and researcher
Sir Thomas Lewis (1881-1945), an English physiologist

Horse Vibrations
In the late 1800’s, there was substantial research interest in the electrical activity that took place in the human body. Researchers studied many organs and systems in the body, including the nerves, eyes, lungs, muscles, and heart. Because of a lack of available technology, this research was tedious and frequently inaccurate. Therefore, the development of the appropriate instrumentation was as important as the research itself.
The initial work on the electrical activity of the heart (detected from the surface of the body) was conducted by Augustus D. Waller and published in 1887. Many credit him with the development of the first electrocardiogram. Waller used a Lippmann’s capillary electrometer (named for its inventor, the French physicist Gabriel-Jonas Lippmann) to determine the electrical charges in the heart and called his recording a “cardiograph.” The recording was made by placing a series of small tubes on the surface of the body. The tubes contained mercury and sulfuric acid. As an electrical current passed through the tubes, the mercury would expand and contract. The resulting images were projected onto photographic paper to produce the first cardiograph. Yet Waller had only limited success with the device and eventually abandoned it.
In the early 1890’s, Willem Einthoven, who became a good friend of Waller, began using the same type of capillary tube to study the electrical currents of the heart. Einthoven also had a difficult time working with the instrument. His laboratory was located in an old wooden building near a cobblestone street. Teams of horses pulling heavy wagons would pass by and cause his laboratory to vibrate. This vibration affected the capillary tube, causing the cardiograph to be unclear. In his frustration, Einthoven began to modify his laboratory. He removed the floorboards and dug a hole some ten to fifteen feet deep. He lined the walls with large rocks to stabilize his instrument. When this failed to solve the problem, Einthoven, too, abandoned the Lippmann’s capillary tube. Yet Einthoven did not abandon the idea, and he began to experiment with other instruments.
Electrocardiographs over the Phone

In order to continue his research on the electrical currents of the heart, Einthoven began to work with a new device, the d’Arsonval galvanometer (named for its inventor, the French biophysicist Arsène d’Arsonval). This instrument had a heavy coil of wire suspended between the poles of a horseshoe magnet. Changes in electrical activity would cause the coil to move; however, Einthoven found that the coil was too heavy to record the small electrical changes found in the heart. Therefore, he modified the instrument by replacing the coil with a silver-coated quartz thread (string). The movements could be recorded by transmitting the deflections through a microscope and projecting them on photographic film. Einthoven called the new instrument the “string galvanometer.”
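The principle just described can be sketched numerically. A current through the thread deflects it by an amount roughly proportional to that current, and the microscope optics magnify the tiny deflection before it reaches the moving film. All the constants below are illustrative assumptions, not Einthoven’s actual instrument parameters:

```python
# Illustrative sketch of a string galvanometer (made-up constants):
# deflection of the thread is taken as proportional to the current,
# and an optical magnification factor scales it up for the film.

def thread_deflection_um(current_a, sensitivity_um_per_ua=1.0):
    """Thread deflection in micrometers for a given current in amps.

    sensitivity_um_per_ua is a hypothetical figure of merit:
    micrometers of deflection per microamp of current.
    """
    return current_a * 1e6 * sensitivity_um_per_ua

def projected_trace_mm(current_a, optical_magnification=600):
    """Deflection as it appears on the film, in millimeters."""
    return thread_deflection_um(current_a) * optical_magnification / 1000

# A signal on the order of tens of microamps becomes a readable trace:
print(projected_trace_mm(20e-6))  # roughly 12 mm with these numbers
```

The point of the sketch is the role of the optics: a deflection far too small to see directly becomes, after magnification, a trace large enough to read and measure on photographic film.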
In developing his string galvanometer, Einthoven was influenced by the work of one of his teachers, Johannes Bosscha. In the 1850’s, Bosscha had published a study describing the technical complexities of measuring very small amounts of electricity. He proposed the idea that a galvanometer modified with a needle hanging from a silk thread would be more sensitive in measuring the tiny electric currents of the heart.
By 1905, Einthoven had improved the string galvanometer to the point that he could begin using it for clinical studies. In 1906, he had his laboratory connected to the hospital in Leiden by a telephone wire. With this arrangement, Einthoven was able to study in his laboratory electrocardiograms derived from patients in the
hospital, which was located a mile away. With this source of subjects, Einthoven was able to use his galvanometer to study many heart problems. As a result of these studies, Einthoven identified the following heart problems: blocks in the electrical conduction system of the heart; premature beats of the heart, including two premature beats in a row; and enlargements of the various chambers of the heart. He was also able to study how the heart behaved during the administration of cardiac drugs.

Willem Einthoven

Willem Einthoven was born in 1860 on the island of Java, now part of Indonesia. His father was a Dutch army medical officer, and his mother was the daughter of the Finance Director for the Dutch East Indies. When his father died in 1870, his mother moved with her six children to Utrecht, Holland. Einthoven entered the University of Utrecht in 1878 intending to become a physician like his father, but physics and physiology attracted him more. During his education, two research projects that he conducted brought him recognition. The first involved the articulation of the elbow, which he undertook after a sports injury to his own elbow. (He remained an avid participant in sports his whole life.) The second, which earned him his doctorate in 1885, examined stereoscopy and color variation. Because of the keen investigative abilities these studies displayed, he was at once appointed professor of physiology at the University of Leiden. He took up the position the next year, after qualifying as a general practitioner.

Einthoven conducted research into asthma and the optics and electrical activity of vision before turning his attention to the heart. He developed the electrocardiogram in order to measure the heart’s electrical activity accurately and tested its applications and capacities with many students and visiting scientists, helping thereby to widen interest in it as a diagnostic tool. For this work he received the 1924 Nobel Prize in Physiology or Medicine.

In his later years, Einthoven studied problems in acoustics and the electrical activity of the sympathetic nervous system. He died in Leiden in 1927.
A major researcher who communicated with Einthoven about the electrocardiogram was Sir Thomas Lewis, who is credited with developing the electrocardiogram into a useful clinical tool. One of Lewis’s important accomplishments was his identification of atrial fibrillation, the overactive state of the upper chambers of the heart. During World War I, Lewis studied soldiers’ hearts. He designed a series of graded exercises, which he used to test the soldiers’ ability to perform work. From this study, Lewis was able to use similar tests to diagnose heart disease and to screen recruits who had heart problems.
Impact

As Einthoven published additional studies on the string galvanometer in 1903, 1906, and 1908, greater interest in his instrument was generated around the world. In 1910, the instrument, now called the “electrocardiograph,” was installed in the United States. It was the foundation of a new laboratory for the study of heart disease at Johns Hopkins University.

As time passed, the use of the electrocardiogram—or “EKG,” as it is familiarly known—increased substantially. The major advantage of the EKG is that it can be used to diagnose problems in the heart without incisions or the use of needles. It is relatively painless for the patient; in comparison with other diagnostic techniques, moreover, it is relatively inexpensive.
Recent developments in the use of the EKG have been in the area of stress testing. Since many heart problems are more evident during exercise, when the heart is working harder, EKGs are often given to patients as they exercise, generally on a treadmill. The clinician gradually increases the intensity of work the patient is doing while monitoring the patient’s heart. The use of stress testing has helped to make the EKG an even more valuable diagnostic tool.
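The graded-exercise idea above can be expressed as a simple loop. All numbers here are illustrative placeholders, not a clinical protocol: the workload rises in stages, and the test ends when a target heart rate is reached.

```python
# Schematic graded-exercise loop (illustrative numbers only, not a
# real clinical protocol): raise the treadmill workload in stages and
# stop once the patient reaches a target heart rate.

def run_stress_test(heart_rate_at, target_hr=150, stages=(3, 5, 7, 9)):
    """heart_rate_at(speed) -> observed heart rate at that speed.

    Returns the (speed, heart_rate) pairs recorded up to and including
    the stage at which the target heart rate was reached.
    """
    recorded = []
    for speed in stages:
        hr = heart_rate_at(speed)
        recorded.append((speed, hr))
        if hr >= target_hr:
            break  # target reached: end the test at this stage
    return recorded

# A toy patient whose heart rate climbs linearly with workload:
print(run_stress_test(lambda s: 80 + 10 * s))
```

The recorded pairs correspond to the EKG traces a clinician would examine at each stage; abnormalities that appear only at the higher workloads are exactly what the test is designed to reveal.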
See also Amniocentesis; Artificial heart; Blood transfusion; CAT scanner; Coronary artery bypass surgery; Electroencephalogram; Heart-lung machine; Mammography; Nuclear magnetic resonance; Pacemaker; Ultrasound; X-ray image intensifier.
Further Reading

Cline, Barbara Lovett. Men Who Made a New Physics: Physicists and the Quantum Theory. Chicago: University of Chicago Press, 1987.

Hollman, Arthur. Sir Thomas Lewis: Pioneer Cardiologist and Clinical Scientist. New York: Springer, 1997.

Lewis, Thomas. Collected Works on Heart Disease. 1912. Reprint. New York: Classics of Cardiology Library, 1991.

Snellen, H. A. Two Pioneers of Electrocardiography: The Correspondence Between Einthoven and Lewis from 1908-1926. Rotterdam: Donker Academic Publications, 1983.

_____. Willem Einthoven, 1860-1927, Father of Electrocardiography: Life and Work, Ancestors and Contemporaries. Boston: Kluwer Academic Publishers, 1995.
Electroencephalogram

The invention: A system of electrodes that measures brain wave patterns in humans, making possible a new era of neurophysiology.

The people behind the invention:

Hans Berger (1873-1941), a German psychiatrist and research scientist

Richard Caton (1842-1926), an English physiologist and surgeon
The Electrical Activity of the Brain

Hans Berger’s search for the human electroencephalogram (English physiologist Richard Caton had described the electroencephalogram, or “brain wave,” in rabbits and monkeys in 1875) was motivated by his desire to find a physiological method that might be applied successfully to the study of the long-standing problem of the relationship between the mind and the brain. His scientific career, therefore, was directed toward revealing the psychophysical relationship in terms of principles that would be rooted firmly in the natural sciences and would not have to rely upon vague philosophical or mystical ideas.

During his early career, Berger attempted to study psychophysical relationships by making plethysmographic measurements of changes in the brain circulation of patients with skull defects. In plethysmography, an instrument is used to indicate and record by tracings the variations in size of an organ or part of the body. Later, Berger investigated temperature changes occurring in the human brain during mental activity and the action of psychoactive drugs. He became disillusioned, however, by the lack of psychophysical understanding generated by these investigations.

Next, Berger turned to the study of the electrical activity of the brain, and in the 1920’s he set out to search for the human electroencephalogram. He believed that the electroencephalogram would finally provide him with a physiological method capable of furnishing insight into mental functions and their disturbances.
Berger made his first unsuccessful attempt at recording the electrical activity of the brain in 1920, using the scalp of a bald medical student. He then attempted to stimulate the cortex of patients with skull defects by using a set of electrodes to apply an electrical current to the skin covering the defect. The main purpose of these stimulation experiments was to elicit subjective sensations. Berger hoped that eliciting these sensations might give him some clue about the nature of the relationship between the physiochemical events produced by the electrical stimulus and the mental processes revealed by the patients’ subjective experience. The availability of many patients with skull defects—in whom the pulsating surface of the brain was separated from the stimulating electrodes by only a few millimeters of tissue—reactivated Berger’s interest in recording the brain’s electrical activity.
Hans Berger

Hans Berger, the father of electroencephalography, was born in Neuses bei Coburg, Germany, in 1873. He entered the University of Jena in 1892 as a medical student and became an assistant in the psychiatric clinic in 1897. In 1912 he was appointed the clinic’s chief doctor and then its director and a university professor of psychiatry. In 1919 he was chosen as rector of the university.

Berger hoped to settle the long-standing philosophical question about the brain and the mind by finding observable physical processes that correlated with thoughts and feelings. He started off by studying the blood circulation in the head and brain temperature. Even though this work founded psychophysiology, he failed to find objective evidence of subjective states until he started examining fluctuations in the electrical potential of the brain in 1924. His 1929 paper describing the electroencephalograph later provided medicine with a basic diagnostic tool, but the instrument proved to be a very confusing probe of the human psyche for him. His colleagues in psychiatry and medicine did not accept the relationships he drew between physical phenomena and mental states.

Berger retired as professor emeritus in 1938 and died three years later in Jena.
Small, Tremulous Movements

Berger used several different instruments in trying to detect brain waves, but all of them used a similar method of recording. Electrical oscillations deflected a mirror upon which a light beam was projected. The deflections of the light beam were proportional to the magnitude of the electrical signals. The movement of the spot of the light beam was recorded on photographic paper moving at a speed no greater than 3 centimeters per second.
In July, 1924, Berger observed small, tremulous movements of the instrument while recording from the skin overlying a bone defect in a seventeen-year-old patient. In his first paper on the electroencephalogram, Berger described this case briefly as his first successful recording of an electroencephalogram. At the time of these early studies, Berger had already used the term “electroencephalogram” in his diary. Yet for several years he had doubts about the origin of the electrical signals he recorded. As late as 1928, he almost abandoned his electrical recording studies.
The publication of Berger’s first paper on the human encephalogram in 1929 had little impact on the scientific world. It was either ignored or regarded with open disbelief. At this time, even though Berger himself was not completely free of doubts about the validity of his findings, he managed to continue his work. He published additional contributions to the study of the electroencephalogram in a series of fourteen papers. As his research progressed, Berger became increasingly confident and convinced of the significance of his discovery.
Impact

The long-range impact of Berger’s work is incontestable. When Berger published his last paper on the human encephalogram in 1938, the new approach to the study of brain function that he inaugurated in 1929 had gathered momentum in many centers, both in Europe and in the United States. As a result of his pioneering work, a new diagnostic method had been introduced into medicine. Physiology had acquired a new investigative tool. Clinical neurophysiology had been liberated from its dependence upon the functional anatomical approach, and electrophysiological exploration of complex functions of the central nervous system had begun in earnest. Berger’s work had finally received its well-deserved recognition.
Many of those who undertook the study of the electroencephalogram were able to bring a far greater technical knowledge of neurophysiology to bear upon the problems of the electrical activity of the brain. Yet the community of neurological scientists has not ceased to look with respect upon the founder of electroencephalography, who, despite overwhelming odds and isolation, opened a new area of neurophysiology.
See also Amniocentesis; CAT scanner; Electrocardiogram; Mammography; Nuclear magnetic resonance; Ultrasound; X-ray image intensifier.

Further Reading

Barlow, John S. The Electroencephalogram: Its Patterns and Origins. Cambridge, Mass.: MIT Press, 1993.

Berger, Hans. Hans Berger on the Electroencephalogram of Man. New York: Elsevier, 1969.
Electron microscope

The invention: A device for viewing extremely small objects that uses electron beams and “electron lenses” instead of the light rays and optical lenses used by ordinary microscopes.

The people behind the invention:

Ernst Ruska (1906-1988), a German engineer, researcher, and inventor who shared the 1986 Nobel Prize in Physics

Hans Busch (1884-1973), a German physicist

Max Knoll (1897-1969), a German engineer and professor

Louis de Broglie (1892-1987), a French physicist who won the 1929 Nobel Prize in Physics
Reaching the Limit

The first electron microscope was constructed by Ernst Ruska and Max Knoll in 1931. Scientists who look into the microscopic world always demand microscopes of higher and higher resolution (resolution is the ability of an optical instrument to distinguish closely spaced objects). As early as 1834, George Airy, the eminent British astronomer, theorized that there should be a natural limit to the resolution of optical microscopes. In 1873, two Germans, Ernst Abbe, cofounder of the Carl Zeiss Optical Works at Jena, and Hermann von Helmholtz, the famous physicist and philosopher, independently published papers on this issue. Both arrived at the same conclusion as Airy: Light is limited by the size of its wavelength. Specifically, light cannot resolve objects smaller than one-half of its wavelength.
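The half-wavelength bound is easy to make concrete. Taking 550 nanometers as a representative wavelength for green light (an assumed value, roughly the middle of the visible spectrum):

```python
# The Airy/Abbe bound discussed above: an instrument using waves of a
# given wavelength cannot resolve features much smaller than half that
# wavelength.

def resolution_limit_nm(wavelength_nm):
    """Smallest resolvable separation, in nanometers (half-wavelength bound)."""
    return wavelength_nm / 2

# Green light, roughly mid-visible-spectrum:
print(resolution_limit_nm(550))  # -> 275.0 nm
# Ultraviolet light improves on this only modestly:
print(resolution_limit_nm(250))  # -> 125.0 nm
```

Since atoms and many cell structures are far smaller than a few hundred nanometers, the numbers show why a fundamentally shorter-wavelength medium was needed.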
One solution to this limitation was to experiment with light, or electromagnetic radiation in general, of shorter and shorter wavelengths. At the beginning of the twentieth century, Joseph Edwin Barnard experimented on microscopes using ultraviolet light. Such instruments, however, only modestly improved the resolution. In 1912, German physicist Max von Laue considered using X rays. At the time, however, it was hard to turn “X-ray microscopy” into a physical reality. The wavelengths of X rays are exceedingly short, but for the most part they are used to penetrate matter, not to illuminate objects. It appeared that microscopes had reached their limit.
Matter Waves

In a new microscopy, then, light—even electromagnetic radiation in general—as the medium that traditionally carried image information, had to be replaced by a new medium. In 1924, French theoretical physicist Louis de Broglie advanced a startling hypothesis: Matter on the scale of subatomic particles possesses wave characteristics. De Broglie also concluded that the speed of low-mass subatomic particles, such as electrons, is related to wavelength. Specifically, higher speeds correspond to shorter wavelengths.

When Knoll and Ruska built the first electron microscope in 1931, they had never heard of de Broglie’s “matter waves.” Ruska recollected that when, in 1932, he and Knoll first learned about de Broglie’s idea, he realized that those matter waves would have to be many times shorter in wavelength than light waves.
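Ruska’s realization can be checked with de Broglie’s own relation. For an electron accelerated through a potential of V volts, the nonrelativistic wavelength is Planck’s constant divided by the electron’s momentum. The 60-kilovolt figure below is an assumed, merely typical accelerating voltage, not one quoted in the text:

```python
import math

# de Broglie wavelength of an electron accelerated through V volts,
# using the nonrelativistic relation lambda = h / sqrt(2 * m * e * V).
# (At tens of kilovolts this ignores relativistic corrections of a few
# percent, which is fine for an order-of-magnitude comparison.)

H = 6.626e-34  # Planck's constant, J*s
M = 9.109e-31  # electron rest mass, kg
E = 1.602e-19  # elementary charge, C

def electron_wavelength_m(volts):
    return H / math.sqrt(2 * M * E * volts)

lam = electron_wavelength_m(60_000)  # an assumed typical voltage
print(lam)            # on the order of 5e-12 m, i.e. ~0.005 nm
print(550e-9 / lam)   # roughly 100,000 times shorter than green light
```

The ratio in the last line is the whole argument in one number: electron waves at microscope voltages are about five orders of magnitude shorter than visible light, so the half-wavelength resolution bound moves correspondingly far down.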
The core component of the new instrument was the electron beam, or “cathode ray,” as it was usually called then. The cathode-ray tube was invented in 1857 and was the source of a number of discoveries, including X rays. In 1896, Kristian Olaf Birkeland, a Norwegian scientist, after experimenting with the effect of parallel magnetic fields on the electron beam of the cathode-ray tube, concluded that cathode rays that are concentrated on a focal point by means of a magnet are as effective as parallel light rays that are concentrated by means of a lens.

From around 1910, German physicist Hans Busch was the leading researcher in the field. In 1926, he published his theory on the trajectories of electrons in magnetic fields. His conclusions confirmed and expanded upon those of Birkeland. As a result, Busch has been recognized as the founder of a new field later known as “electron optics.” His theoretical study showed, among other things, that the analogy between light and lenses, on the one hand, and electron beams and electromagnetic lenses, on the other, was accurate.
Ernst Ruska

Ernst August Friedrich Ruska was born in 1906 in Heidelberg to Professor Julius Ruska and his wife, Elisabeth. In 1925 he left home for the Technical College of Munich, moving two years later to the Technical College of Berlin and gaining practical training at nearby Siemens & Halske Limited. During his university days he became interested in vacuum tube technology and worked at the Institute of High Voltage, participating in the development of a high-performance cathode-ray oscilloscope.

His interests also lay with the theory and application of electron optics. In 1929, as part of his graduate work, Ruska published a proof of Hans Busch’s theory explaining possible lenslike effects of a magnetic field on an electron stream, which led to the invention of the Polschuh (pole-piece) lens. It formed the core of the electron microscope that Ruska built with his mentor, Max Knoll, in 1931.

Ruska completed his doctoral studies in 1934, but he had already found work in industry, believing that further technical development of electron microscopes was beyond the means of university laboratories. He worked for Fernseh Limited from 1933 to 1937 and for Siemens from 1937 to 1955. Following World War II he helped set up the Institute of Electron Optics and worked in the Faculty of Medicine and Biology of the German Academy of Sciences. He joined the Fritz Haber Institute of the Max Planck Society in Berlin in 1949 and took over as director of its Institute for Electron Microscopy in 1955, keeping the position until he retired in 1974.

His lifelong work with electron microscopy earned Ruska half of the 1986 Nobel Prize in Physics. He died two years later. To honor his memory, European manufacturers of electron microscopes instituted the Ernst Ruska Prizes, one for researchers in materials and optics and one for biomedical researchers.
Beginning in 1928, Ruska, as a graduate student at the Berlin Institute of Technology, worked on refining Busch’s work. He found that the energy of the electrons in the beam was not uniform. This nonuniformity meant that the images of microscopic objects would ultimately be fuzzy. Knoll and Ruska were able to work from the recognition of this problem to the design and realization of a concentrated electron “writing spot” and to the actual construction of the electron microscope. By April, 1931, they had established a technological landmark with the “first constructional realization of an electron microscope.”
Impact

The world’s first electron microscope, which took its first photographic record on April 7, 1931, was rudimentary. Its two stages together magnified the sample only sixteen times. Since Ruska and Knoll’s creation, however, progress in electron microscopy has been spectacular. Such an achievement is one of the prominent examples that illustrate the historically unprecedented pace of science and technology in the twentieth century.
In 1935, for the first time, the electron microscope surpassed the optical microscope in resolution. The problem of damaging the specimen through the heating effects of the electron beam proved more difficult to resolve. In 1937, a team at the University of Toronto constructed the first generally usable electron microscope. In 1942, a group headed by James Hillier at the Radio Corporation of America produced commercial transmission electron microscopes. In 1939 and 1940, research papers on electron microscopes began to appear in Sweden, Canada, the United States, and Japan; from 1944 to 1947, papers appeared in Switzerland, France, the Soviet Union, The Netherlands, and England. Following research work in laboratories, commercial transmission electron microscopes using magnetic lenses with short focal lengths also appeared in these countries.

See also Cyclotron; Field ion microscope; Geiger counter; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.
Further Reading

Cline, Barbara Lovett. Men Who Made a New Physics: Physicists and the Quantum Theory. Chicago: University of Chicago Press, 1987.

Hawkes, P. W. The Beginnings of Electron Microscopy. Orlando: Academic Press, 1985.

Marton, Ladislaus. Early History of the Electron Microscope. 2d ed. San Francisco: San Francisco Press, 1994.

Rasmussen, Nicolas. Picture Control: The Electron Microscope and the Transformation of Biology in America, 1940-1960. Stanford, Calif.: Stanford University Press, 1997.
Electronic synthesizer

The invention: Portable electronic device that both simulates the sounds of acoustic instruments and creates entirely new sounds.

The person behind the invention:

Robert A. Moog (1934- ), an American physicist, engineer, and inventor

From Harmonium to Synthesizer
The harmonium, or acoustic reed organ, is commonly viewed as having evolved into the modern electronic synthesizer, which can be used to create many kinds of musical sounds, from the sounds of single or combined acoustic musical instruments to entirely original sounds. The first instrument to be called a synthesizer was patented by the Frenchman J. A. Dereux in 1949. Dereux’s synthesizer, which amplified the acoustic properties of harmoniums, led to the development of the recording organ.

Next, several European and American inventors altered and augmented the properties of such synthesizers. This stage of the process was followed by the invention of electronic synthesizers, which initially used electronically generated sounds to imitate acoustic instruments. It was not long, however, before such synthesizers were used to create sounds that could not be produced by any other instrument. Among the early electronic synthesizers were those made in Germany by Herbert Eimert and Robert Beyer in 1953 and the American Olson-Belar synthesizers, which were developed in 1954. Continual research produced better and better versions of these large, complex electronic devices.
Portable synthesizers, which are often called “keyboards,” were then developed for concert and home use. These instruments became extremely popular, especially in rock music. In 1964, Robert A. Moog, an electronics professor, created what are thought by many to be the first portable synthesizers to be made available to the public. Several other well-known portable synthesizers, such as ARP and Buchla synthesizers, were also introduced at about the same time. Currently, many companies manufacture studio-quality synthesizers of various types.
Synthesizer Components and Operation

Modern synthesizers make music electronically by building up musical phrases via numerous electronic circuits and combining those phrases to create musical compositions. In addition to duplicating the sounds of many instruments, such synthesizers also enable their users to create virtually any imaginable sound. Many sounds have been created on synthesizers that could not have been created in any other way.

Synthesizers use sound-processing and sound-control equipment that controls “white noise” audio generators and oscillator circuits. This equipment can be manipulated to produce a huge variety of sound frequencies and frequency mixtures, in the same way that a beam of white light can be manipulated to produce a particular color or mixture of colors.
Once the desired products of a synthesizer’s noise generator and oscillators are produced, percussive sounds that contain all or many audio frequencies are mixed with many chosen individual sounds and altered by using various electronic processing components. The better the quality of the synthesizer, the more processing components it will possess. Among these components are sound amplifiers, sound mixers, sound filters, reverberators, and sound-combination devices.

Sound amplifiers are voltage-controlled devices that change the dynamic characteristics of any given sound made by a synthesizer. Sound mixers make it possible to combine and blend two or more manufactured sounds while controlling their relative volumes. Sound filters affect the frequency content of sound mixtures by increasing or decreasing the amplitude of the sound frequencies within particular frequency ranges, which are called “bands.” Sound filters can be either band-pass filters or band-reject filters. They operate by increasing or decreasing the amplitudes of sound
frequencies within given ranges (such as treble or bass). Reverberators<br />
(or “reverb” units) produce artificial echoes that can have significant<br />
musical effects. There are also many other varieties of sound-
Robert Moog<br />
Electronic synthesizer / 309<br />
Robert Moog, born in 1934, grew up in the Queens borough<br />
of New York City, a tough area for a brainy kid. To avoid the<br />
bullies who picked on him because he was a nerd, Moog spent a<br />
lot of time helping his father with his hobby, electronics. At<br />
fourteen, he built his own theremin, an eerie-sounding forerunner<br />
of electric instruments.<br />
Moog’s mother, meanwhile, force-fed him piano lessons. He<br />
liked science better <strong>and</strong> majored in physics at Queens College<br />
<strong>and</strong> then Cornell University, but he did not forget the music.<br />
While in college, he designed a kit for making theremins <strong>and</strong><br />
advertised it, selling enough of them to run up a sizable bankroll.<br />
Also while in college, Moog, acting on a suggestion from a<br />
composer, put together the first easy-to-play electronic synthesizer.<br />
Other music synthesizers already existed, but they were<br />
large, complex, <strong>and</strong> expensive—suitable only for recording studios.<br />
When Moog unveiled his synthesizer in 1965, it was portable,<br />
sold for one-tenth the price, <strong>and</strong> gave musicians virtually<br />
an orchestra at their fingertips. It became a stage instrument.<br />
Walter Carlos used a Moog synthesizer in 1969 for his album<br />
Switched-on Bach, electronic renditions of Johann Sebastian Bach’s<br />
concertos. It was a hit <strong>and</strong> won a Grammy award. The album<br />
made Moog <strong>and</strong> his new instrument famous. Its reputation<br />
grew when the Beatles used it for “Because” on Abbey Road <strong>and</strong><br />
Carlos recorded the score for Stanley Kubrick’s classic movie A<br />
Clockwork Orange on a Moog. With the introduction of the even<br />
more portable Minimoog, the popularity of synthesizers soared,<br />
especially among rock musicians but also in jazz <strong>and</strong> other<br />
styles.<br />
Moog sold his company <strong>and</strong> moved to North Carolina in<br />
1978. There he started another company, Big Briar, devoted to<br />
designing special instruments, such as a keyboard that can be<br />
played with as much expressive subtlety as a violin <strong>and</strong> an interactive<br />
piano.<br />
processing elements, among them sound-envelope generators,<br />
spatial locators, <strong>and</strong> frequency shifters. Ultimately, the soundcombination<br />
devices put together the results of the various groups<br />
of audio generating <strong>and</strong> processing elements, shaping the sound<br />
that has been created into its final form.
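The processing chain described above can be sketched in ordinary code. The sketch below is a minimal illustration, not any real synthesizer's design: the function names, the sine oscillator, the linear fade-in envelope, and the sample rate are all invented for the example.

```python
import math

SAMPLE_RATE = 8000  # samples per second (illustrative value)

def oscillator(freq, n_samples, rate=SAMPLE_RATE):
    """A basic sound source: a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n_samples)]

def amplify(samples, gain):
    """Sound-amplifier analogue: scale the dynamic level of a signal."""
    return [gain * s for s in samples]

def mix(a, b, level_a=0.5, level_b=0.5):
    """Sound mixer: blend two signals while controlling relative volumes."""
    return [level_a * x + level_b * y for x, y in zip(a, b)]

def envelope(samples, attack_fraction=0.1):
    """Crude sound-envelope generator: a linear fade-in, then sustain."""
    n = len(samples)
    rise = max(1, int(n * attack_fraction))
    return [s * min(1.0, i / rise) for i, s in enumerate(samples)]

# Two oscillators are mixed, shaped by an envelope, and amplified
# into the final combined sound.
tone = mix(oscillator(440, 800), oscillator(660, 800), 0.7, 0.3)
final = amplify(envelope(tone), gain=0.8)
```

In a real synthesizer each of these stages is a voltage-controlled hardware module; the point here is only that the final sound is the composition of independent generating and processing elements.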
A variety of control elements are used to integrate the operation of synthesizers. Most common is the keyboard, which provides the name most often used for portable electronic synthesizers. Portable synthesizer keyboards are most often pressure-sensitive devices (meaning that the harder one presses a key, the louder the resulting sound will be) that resemble the black-and-white keyboards of more conventional musical instruments such as the piano and the organ. These synthesizer keyboards produce two simultaneous outputs: control voltages that govern the pitches of oscillators, and timing pulses that sustain synthesizer responses for as long as a particular key is depressed.
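The keyboard's two simultaneous outputs can be modeled directly. The sketch below assumes the common analog convention of one volt of control voltage per octave (twelve keys); the exact scaling varies by instrument, and the function itself is purely illustrative.

```python
def key_outputs(key_number, key_down, base_volt=0.0):
    """Model the keyboard's two simultaneous outputs: a pitch control
    voltage and a timing (gate) pulse.

    Assumes the common analog convention of one volt per octave,
    with twelve keys per octave; real instruments vary.
    """
    control_voltage = base_volt + key_number / 12.0  # governs oscillator pitch
    gate = 1 if key_down else 0  # sustains the response while the key is held
    return control_voltage, gate

# Playing a key one octave higher raises the control voltage by one volt,
# and the gate stays high for as long as the key is down.
cv_low, gate_low = key_outputs(0, key_down=True)
cv_high, gate_high = key_outputs(12, key_down=True)
```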
Unseen but present are the integrated voltage controls that govern overall signal generation and processing. In addition to voltage controls and keyboards, synthesizers contain buttons and other switches that can transpose their sound ranges and alter other qualities. Using the appropriate buttons or switches makes it possible for a single synthesizer to imitate different instruments, or groups of instruments, at different times. Other synthesizer control elements include sample-and-hold devices, which make it possible to sustain particular musical effects, and random voltage sources, which add various effects to the music being played.
Electronic synthesizers are complex and flexible instruments. The various types and models of synthesizers make it possible to produce many different kinds of music, and many musicians use a variety of keyboards to give them great flexibility in performing and recording.
Impact

The development and wide dissemination of studio and portable synthesizers has led to their frequent use to combine the sound properties of various musical instruments; a single musician can thus produce, inexpensively and with a single instrument, sound combinations that previously could have been produced only by a large number of musicians playing various instruments. (Understandably, many players of acoustic instruments have been upset by this development, since it means that they are hired to play less often than they were before synthesizers were developed.) Another consequence of synthesizer use has been the development of entirely original varieties of sound, although this area has been less thoroughly explored, for commercial reasons. The development of synthesizers has also led to the design of other new electronic music-making techniques and to the development of new electronic musical instruments.
Opinions about synthesizers vary from person to person and, in the case of certain illustrious musicians, from time to time. One well-known musician initially proposed that electronic synthesizers would replace many or all conventional instruments, particularly pianos. Two decades later, though, this same musician noted that not even the best modern synthesizers could match the quality of sound produced by pianos made by manufacturers such as Steinway and Baldwin.
See also Broadcaster guitar; Cassette recording; Compact disc; Dolby noise reduction; Transistor.
Further Reading

Hopkin, Bart. Gravikords, Whirlies and Pyrophones: Experimental Musical Instruments. Roslyn, N.Y.: Ellipsis Arts, 1996.
Koerner, Brendan I. "Back to Music's Future." U.S. News & World Report 122, no. 8 (March 3, 1997).
Nunziata, Susan. "Moog Keyboard Offers Human Touch." Billboard 104, no. 7 (February 15, 1992).
Shapiro, Peter. Modulations: A History of Electronic Music: Throbbing Words on Sound. New York: Caipirinha Productions, 2000.
ENIAC computer
The invention: The first general-purpose electronic digital computer.

The people behind the invention:
John Presper Eckert (1919-1995), an electrical engineer
John William Mauchly (1907-1980), a physicist, engineer, and professor
John von Neumann (1903-1957), a Hungarian American mathematician, physicist, and logician
Herman Heine Goldstine (1913- ), an army mathematician
Arthur Walter Burks (1915- ), a philosopher, engineer, and professor
John Vincent Atanasoff (1903-1995), a mathematician and physicist
A Technological Revolution

The Electronic Numerical Integrator and Calculator (ENIAC) was the first general-purpose electronic digital computer. By demonstrating the feasibility and value of electronic digital computation, it initiated the computer revolution. The ENIAC was developed during World War II (1939-1945) at the Moore School of Electrical Engineering by a team headed by John William Mauchly and John Presper Eckert, who were working on behalf of the U.S. Ordnance Ballistic Research Laboratory (BRL) at the Aberdeen Proving Ground in Maryland. Early in the war, the BRL's need to generate ballistic firing tables already far outstripped the combined abilities of the available differential analyzers and teams of human computers.
In 1941, Mauchly had seen the special-purpose electronic computer developed by John Vincent Atanasoff to solve sets of linear equations. Atanasoff's computer was severely limited in scope and was never fully completed. The functioning prototype, however, helped convince Mauchly of the feasibility of electronic digital computation and so led to Mauchly's formal proposal in April, 1943, to develop the general-purpose ENIAC. The BRL, in desperate need of computational help, agreed to fund the project, with Lieutenant Herman Heine Goldstine overseeing it for the U.S. Army.
This first substantial electronic computer was designed, built, and debugged within two and one-half years. Even given the highly talented team, it could be done only by taking as few design risks as possible. The ENIAC ended up as an electronic version of prior computers: Its functional organization was similar to that of the differential analyzer, while it was programmed via a plugboard (something like a telephone switchboard), much like the earlier electromechanical calculators made by the International Business Machines (IBM) Corporation. Another consequence was that the internal representation of numbers was decimal rather than the now-standard binary, since the familiar electromechanical computers used decimal digits.
Although the ENIAC was completed only after the end of the war, it was used primarily for military purposes. In fact, the first production run on the system was a two-month calculation needed for the design of the hydrogen bomb. John von Neumann, working as a consultant to both the Los Alamos Scientific Laboratory and the ENIAC project, arranged for the production run immediately prior to ENIAC's formal dedication in 1946.
A Very Fast Machine
The ENIAC was an impressive machine: It contained 18,000 vacuum tubes, weighed 27 metric tons, and occupied a large room. The final cost to the U.S. Army was about $486,000. For this price, the army received a machine that computed up to a thousand times faster than its electromechanical precursors; for example, addition and subtraction required only 200 microseconds (200 millionths of a second). At its dedication ceremony, the ENIAC was fast enough to calculate a fired shell's trajectory faster than the shell itself took to reach its target.
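Firing-table work of the kind ENIAC performed amounts to step-by-step numerical integration of a shell's motion. The sketch below is a drag-free toy version with invented parameters; ENIAC's actual ballistics calculations modeled air resistance and were far more elaborate.

```python
import math

def trajectory_range(speed, angle_deg, dt=0.01, g=9.81):
    """Step-by-step (Euler) integration of a drag-free shell trajectory.

    Returns the horizontal distance traveled when the shell lands.
    Parameters are illustrative, not from any real firing table.
    """
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    while True:
        x += vx * dt       # advance position one small time step
        y += vy * dt
        vy -= g * dt       # gravity reduces vertical speed each step
        if y <= 0.0:       # the shell has returned to the ground
            return x

# With no drag, a 45-degree launch travels farther than a 30-degree one.
r45 = trajectory_range(100, 45)
r30 = trajectory_range(100, 30)
```

A human computer tabulated thousands of such step-by-step trajectories by hand for each combination of gun, shell, and charge; this repetitive arithmetic is exactly what made electronic speed so valuable.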
The machine also was much more complex than any predecessor and employed a risky new technology in vacuum tubes; this caused much concern about its potential reliability. In response to this concern, Eckert, the lead engineer, imposed strict safety factors on all components, requiring the design to use components at a level well below the manufacturers' specified limits. The result was a machine that ran for as long as three days without a hardware malfunction.
Programming the ENIAC was effected by setting switches and physically connecting accumulators, function tables (a kind of manually set read-only memory), and control units. Connections were made via cables running between plugboards. This was a laborious and error-prone process, often requiring a full day of setup.
The team recognized this problem, and in early 1945, Eckert, Mauchly, and von Neumann worked on the design of a new machine. Their basic idea was to treat both program and data in the same way, and in particular to store them in the same high-speed memory; in other words, they planned to produce a stored-program computer. Von Neumann described and explained this design in his "First Draft of a Report on the EDVAC" (EDVAC is an acronym for Electronic Discrete Variable Automatic Computer). In his report, von Neumann contributed new design techniques and provided the first general, comprehensive description of the stored-program architecture.
After the delivery of the ENIAC, von Neumann suggested that it could be wired up so that a set of instructions would be permanently available and could be selected by entries in the function tables. Engineers implemented the idea, providing sixty instructions that could be invoked from programs stored in the function tables. Despite slowing down the computer's calculations, this technique was so superior to plugboard programming that it was used exclusively thereafter. In this way, the ENIAC was converted into a kind of primitive stored-program computer.
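The stored-program idea, instructions held in the same memory as data, can be illustrated with a toy interpreter. The three-instruction machine below is an invented example and bears no relation to ENIAC's actual sixty instructions.

```python
def run(memory):
    """Toy stored-program machine: instructions and data share one memory.

    Each instruction is a (opcode, operand) pair; the accumulator
    starts at zero. The instruction set is invented for illustration.
    """
    acc = 0
    pc = 0  # program counter indexes into the same memory as the data
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":      # acc <- constant
            acc = arg
        elif op == "ADD":     # acc <- acc + value stored at address arg
            acc += memory[arg]
        elif op == "HALT":
            return acc

# The program occupies cells 0-2; cell 4 holds data. Computes 2 + 40.
memory = [("LOAD", 2), ("ADD", 4), ("HALT", None), None, 40]
result = run(memory)
```

Because the program is ordinary memory contents, it can be replaced in microseconds instead of rewired over a day of plugboard work, which is the practical force of the stored-program design.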
Impact

The ENIAC's electronic speed and the stored-program design of the EDVAC posed a serious engineering challenge: to produce a computer memory that would be large, inexpensive, and fast. Without such fast memories, the electronic control logic would spend most of its time idling. Vacuum tubes themselves (used in the control) were not an effective answer because of their large power requirements and heat generation.
The EDVAC design draft proposed using mercury delay lines, which had been used earlier in radars. These delay lines converted an electronic signal into a slower acoustic signal in a mercury solution; for continuous storage, the signal picked up at the other end was regenerated and sent back into the mercury. Maurice Vincent Wilkes at the University of Cambridge was the first to complete such a system, in May, 1949. One month earlier, Frederick Calland Williams and Tom Kilburn at Manchester University had brought their prototype computer into operation; it used cathode-ray tubes (CRTs) for its main storage. Thus, England took an early lead in developing computing systems, largely because of a more immediately practical design approach.
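A delay-line store can be modeled as a circulating buffer in which the oldest pulse continually emerges, is regenerated, and is fed back in. The class below is a conceptual sketch with an arbitrary word size, not a model of any specific machine.

```python
from collections import deque

class DelayLineMemory:
    """Model of a delay-line store: bits circulate continuously and are
    regenerated (re-written into the line) each time they emerge."""

    def __init__(self, bits):
        self.line = deque(bits)  # pulses in transit through the mercury

    def tick(self):
        """One transit step: the oldest pulse emerges from the far end,
        is regenerated, and is sent back into the line."""
        pulse = self.line.popleft()
        self.line.append(pulse)
        return pulse

    def read_all(self):
        """Reading the whole word takes one full circulation."""
        return [self.tick() for _ in range(len(self.line))]

mem = DelayLineMemory([1, 0, 1, 1])
word = mem.read_all()
```

The model makes the key trade-off visible: the data survive indefinitely through regeneration, but a bit is available only when it happens to emerge, so access time depends on the length of the line.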
In the meantime, Eckert and Mauchly formed the Electronic Control Company (later the Eckert-Mauchly Computer Corporation). They produced the Binary Automatic Computer (BINAC) in 1949 and the Universal Automatic Computer (UNIVAC) I in 1951; both machines used mercury storage.
The memory problem that the ENIAC introduced was finally resolved with the invention of the magnetic core in the early 1950's. Core memory was installed on the ENIAC and soon on all new machines. The ENIAC continued in operation until October, 1955, when parts of it were retired to the Smithsonian Institution. The ENIAC proved the viability of digital electronics and led directly to the development of stored-program computers. Its impact can be seen in every modern digital computer.
See also Apple II computer; BINAC computer; Colossus computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.
Further Reading

Burks, Alice R., and Arthur W. Burks. The First Electronic Computer: The Atanasoff Story. Ann Arbor: University of Michigan Press, 1990.
McCartney, Scott. ENIAC: The Triumphs and Tragedies of the World's First Computer. New York: Berkley Books, 2001.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1989.
Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.
Fax machine
The invention: Originally known as the "facsimile machine," a machine that converts written and printed images into electrical signals that can be sent via telephone, computer, or radio.

The person behind the invention:
Alexander Bain (1818-1903), a Scottish inventor
Sending Images

The invention of the telegraph and telephone during the latter half of the nineteenth century gave people the ability to send information quickly over long distances. With the invention of radio and television technologies, voices could be heard and moving pictures seen around the world as well. Oddly, however, the facsimile process, which involves the transmission of pictures, documents, or other physical data over distance, predates all these modern devices: a simple facsimile apparatus (usually called a fax machine) was patented in 1843 by Alexander Bain. This early device used a pendulum to synchronize the transmitting and receiving units; it did not convert the image into an electrical format, however, and it was quite crude and impractical. Nevertheless, it reflected the desire to send images over long distances, which remained a technological goal for more than a century.
Facsimile machines developed in the period around 1930 enabled news services to provide newspapers around the world with pictures for publication. It was not until the 1970's, however, that technological advances made small fax machines available for everyday office use.
Scanning Images

Both the fax machines of the 1930's and those of today operate on the basis of the same principle: scanning. In early machines, an image (a document or a picture) was attached to a roller, placed in the fax machine, and rotated at a slow and fixed speed (which had to be the same at each end of the link) in a bright light. Light was reflected from the document in varying degrees, since dark areas reflect less light than lighter areas do. A lens moved across the page one line at a time, concentrating and directing the reflected light to a photoelectric tube. This tube would respond to the change in light level by varying its electric output, thus converting the image into an output signal whose intensity varied with the changing light and dark spots of the image. Much like the signal from a microphone or television camera, this modulated (varying) wave could then be broadcast by radio or sent over telephone lines to a receiver that performed a reverse function. At the receiving end, a light bulb was made to vary its intensity to match the varying intensity of the incoming signal. The output of the light bulb was concentrated through a lens onto photographically sensitive paper, thus re-creating the original image as the paper was rotated.
Early fax machines were bulky and often difficult to operate. Advances in semiconductor and computer technology in the 1970's, however, made the goal of creating an easy-to-use and inexpensive fax machine realistic. Instead of a photoelectric tube that consumes a relatively large amount of electrical power, a row of small photodiode semiconductors is used to measure light intensity. Instead of a power-consuming light source, low-power light-emitting diodes (LEDs) are used. Some 1,728 light-sensitive diodes are placed in a row, and the image to be scanned is passed over them one line at a time. Each diode registers either a dark or a light portion of the image. As each diode is checked in sequence, it produces a signal for one picture element, also known as a "pixel" or "pel." Because many diodes are used, there is no need for a focusing lens; the diode bar is as wide as the page being scanned, and each pixel represents a portion of a line on that page.
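The photodiode row's job, turning reflected-light readings into black-or-white pixels, reduces to a threshold test. The function below is an illustrative sketch; the 0.5 threshold and the brightness scale are invented for the example.

```python
def scan_line(brightness, threshold=0.5):
    """Convert one line of reflected-light readings into pixels, as the
    row of photodiodes does: 1 for a dark (ink) spot, 0 for a light one.

    Brightness values are assumed to lie between 0.0 (black) and 1.0
    (white); the 0.5 threshold is an arbitrary illustrative choice.
    """
    return [1 if level < threshold else 0 for level in brightness]

# A mostly white line with two dark marks on it.
readings = [0.9, 0.9, 0.1, 0.8, 0.2, 0.95]
pixels = scan_line(readings)
```

A real Group 3 fax produces 1,728 such pixels per scanned line; the page image is simply these lines stacked in order.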
Since most fax transmissions take place over public telephone system lines, the signal from the photodiodes is transmitted by means of a built-in computer modem in much the same format that computers use to transmit data over telephone lines. The receiving fax uses its modem to convert the audible signal into a sequence that varies in intensity in proportion to the original signal. This varying signal is then sent in proper sequence to a row of 1,728 small wires over which a chemically treated paper is passed. As each wire receives a signal that represents a black portion of the scanned image, the wire heats and, in contact with the paper, produces a black dot that corresponds to the transmitted pixel. As the page is passed over these wires one line at a time, the original image is re-created.
Consequences

The fax machine has long been in use in many commercial and scientific fields. Weather data in the form of pictures are transmitted from orbiting satellites to ground stations; newspapers receive photographs from international news sources via fax; and, using a very expensive but very high-quality fax device, newspapers and magazines are able to transmit full-size proof copies of each edition to printers thousands of miles away, so that a publication edited in one country can reach newsstands around the world quickly.
With the technological advances that have been made in recent years, however, fax transmission has become a part of everyday life, particularly in business and research environments. The ability to send a copy of a letter, document, or report quickly over thousands of miles means that information can be shared in a matter of minutes rather than days. In fields such as advertising and architecture, it is often necessary to send pictures or drawings to remote sites. Indeed, the fax machine has played an important role in providing information to distant observers of political unrest when other sources of information (such as radio, television, and newspapers) are shut down.
In fact, there has been a natural coupling of computers, modems, and fax devices. Since modern faxes are sent as computer data over phone lines, specialized and inexpensive modems (which allow two computers to share data) have been developed that allow any computer user to send and receive faxes without bulky machines. For example, a document, including drawings, pictures, or other graphics, can be created in a computer and transmitted directly to another fax machine. That computer can also receive a fax transmission and either display it on the computer's screen or print it on the local printer. Since fax technology is now within the reach of almost anyone who is interested in using it, there is little doubt that it will continue to grow in popularity.
See also Communications satellite; Instant photography; Internet; Personal computer; Xerography.
Further Reading

Bain, Alexander, and Leslie William Davidson. Autobiography. New York: Longmans, Green, 1973.
Cullen, Scott. "Telecommunications in the Office." Office Systems 16, no. 12 (December, 1999).
Holtzmann, Gerald J. "Just the Fax." Inc. 20, no. 13 (September 15, 1998).
Hunkin, Tim. "Just Give Me the Fax." New Scientist 137, no. 1860 (February 13, 1993).
Fiber-optics
The invention: The application of glass fibers to electronic communications and other fields to carry large volumes of information quickly, smoothly, and cheaply over great distances.

The people behind the invention:
Samuel F. B. Morse (1791-1872), the American artist and inventor who developed the electromagnetic telegraph system
Alexander Graham Bell (1847-1922), the Scottish American inventor and educator who invented the telephone and the photophone
Theodore H. Maiman (1927- ), the American physicist and engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and mathematician
The Singing Sun

In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous message, "What hath God wrought?" by electrical impulses traveling at the speed of light over a 66-kilometer telegraph wire strung between Washington, D.C., and Baltimore. Ever since that day, scientists have worked to find faster, less expensive, and more efficient ways to convey information over great distances.
At first, the telegraph was used to report stock-market prices and the results of political elections. The telegraph was quite important in the American Civil War (1861-1865). The first transcontinental telegraph message was sent by Stephen J. Field, chief justice of the California Supreme Court, to U.S. president Abraham Lincoln on October 24, 1861. The message declared that California would remain loyal to the Union. By 1866, telegraph lines had reached all across the North American continent, and a telegraph cable had been laid beneath the Atlantic Ocean to link the Old World with the New World.
Zhores I. Alferov

To create a telephone system that transmitted with light, perfecting fiber-optic cables was only half the solution. There also had to be a small, reliable, energy-efficient light source. In the 1960's, engineers realized that lasers were the best candidate. However, early gas lasers were bulky, and semiconductor lasers, while small, were temperamental and had to be cooled in liquid nitrogen. Nevertheless, the race was on to devise a semiconductor laser that produced a continuous beam and did not need to be cooled. The race was between a Bell Labs team in the United States and a Russian team led by Zhores I. Alferov, neither of which knew much about the other.

Alferov was born in 1930 in Vitebsk, Byelorussia, then part of the Soviet Union. He earned a degree in electronics from the V. I. Ulyanov (Lenin) Electrotechnical Institute in Leningrad (now St. Petersburg). As part of his graduate studies, he became a researcher at the A. F. Ioffe Physico-Technical Institute in the same city, receiving a doctorate in physics and mathematics in 1970. By then he was one of the world's leading experts in semiconductor lasers.

Alferov found that he could improve the laser's performance by sandwiching a very thin layer of gallium arsenide between layers of aluminum gallium arsenide in such a way that electrons flowed only along a 0.03-millimeter strip, producing light in the process. This double-heterojunction narrow-stripe laser was the answer, producing a steady beam at room temperature. Alferov published his results a month before the American team came up with almost precisely the same solution.

The question of who was first was not settled until much later, during which time both Bell Labs and Alferov's institute went on to further refinements of the technology. Alferov rose to become a dean at the St. Petersburg Technical University and vice-president of the Russian Academy of Sciences. In 2000 he shared the Nobel Prize in Physics.
Another American inventor made the leap from the telegraph to the telephone. Alexander Graham Bell, a teacher of the deaf, was interested in the physical way speech works. In 1875, he started experimenting with ways to transmit sound vibrations electrically. He realized that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876. On July 9, 1877, he founded the Bell Telephone Company.
In 1880, Bell invented a device called the "photophone." He used it to demonstrate that speech could be transmitted on a beam of light. Light is a form of electromagnetic energy. It travels in a vibrating wave. When the amplitude (height) of the wave is adjusted, a light beam can be made to carry messages. Bell's invention included a thin mirrored disk that converted sound waves directly into a beam of light. At the receiving end, a selenium resistor connected to a headphone converted the light back into sound. "I have heard a ray of sun laugh and cough and sing," Bell wrote of his invention.

Although Bell proved that he could transmit speech over distances of several hundred meters with the photophone, the device was awkward and unreliable, and it never became popular as the telephone did. Not until one hundred years later did researchers find important practical uses for Bell's idea of talking on a beam of light.
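Carrying a message by adjusting a wave's amplitude, as the photophone did with a beam of light, is amplitude modulation. The sketch below is a generic illustration with invented parameter values, not a model of Bell's apparatus.

```python
import math

def amplitude_modulate(message, carrier_freq, rate=8000, depth=0.5):
    """Vary the amplitude of a carrier wave in step with a message
    signal, as the photophone's mirror varied a light beam's intensity.

    Message samples are assumed to lie in [-1, 1]; the sample rate,
    carrier frequency, and modulation depth are illustrative values.
    """
    return [(1.0 + depth * m) * math.sin(2 * math.pi * carrier_freq * i / rate)
            for i, m in enumerate(message)]

# A louder (larger) message sample produces a proportionally larger
# swing in the carrier, which is what the receiver detects.
quiet = amplitude_modulate([0.0] * 4, 1000)
loud = amplitude_modulate([1.0] * 4, 1000)
```

At the receiving end, a detector (the photophone's selenium resistor) needs only to track the carrier's varying strength to recover the message.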
Two other major discoveries needed to be made first: the development of the laser and of high-purity glass. Theodore H. Maiman, an American physicist and electrical engineer at Hughes Research Laboratories in Malibu, California, built the first laser in 1960. The laser produces an intense, narrowly focused beam of light that can be adjusted to carry huge amounts of information. The word itself is an acronym for light amplification by the stimulated emission of radiation.
It soon became clear, though, that even bright laser light can be broken up and absorbed by smog, fog, rain, and snow. So in 1966, Charles K. Kao, an electrical engineer at the Standard Telecommunications Laboratories in England, suggested that glass fibers could be used to transmit message-carrying beams of laser light without disruption from weather.
Fiber Optics Are Tested

Optical glass fiber is made from common materials, mostly silica, soda, and lime. The inside of a delicate silica glass tube is coated with a hundred or more layers of extremely thin glass. The tube is then heated to 2,000 degrees Celsius and collapsed into a thin glass rod, or preform. The preform is then pulled into thin strands of fiber. The fibers are coated with plastic to protect them from being nicked or scratched, and then they are covered in flexible cable.
Fiber optic strands. (PhotoDisc)

The earliest glass fibers contained many impurities and defects, so they did not carry light well. Signal repeaters were needed every few meters to energize (amplify) the fading pulses of light. In 1970, however, researchers at the Corning Glass Works in New York developed a fiber pure enough to carry light at least one kilometer without amplification.
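The Corning milestone can be put in rough numbers. Fiber loss is quoted in decibels per kilometer, and the fraction of light surviving a run follows from the standard relation P_out/P_in = 10^(−loss × length / 10). The sketch below assumes the commonly cited figure of about 20 dB/km for Corning’s 1970 fiber, a value the text itself does not state.

```python
# Hedged sketch: fraction of optical power surviving a fiber run, given the
# loss in dB/km. The 20 dB/km figure for Corning's 1970 fiber is an assumed,
# commonly cited value, not taken from the text.
def surviving_fraction(loss_db_per_km: float, length_km: float) -> float:
    return 10 ** (-loss_db_per_km * length_km / 10)

# After 1 km at 20 dB/km only about 1 percent of the light remains -- yet that
# was already enough to space repeaters a kilometer apart instead of meters.
print(f"{surviving_fraction(20, 1.0):.3f}")  # 0.010
print(f"{surviving_fraction(20, 0.1):.3f}")  # 0.631
```

Modern fibers reach losses near 0.2 dB/km, which is why repeater spacing later grew from one kilometer to many tens of kilometers.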
The telephone industry quickly became involved in the new fiber-optics technology. Researchers believed that a bundle of optical fibers as thin as a pencil could carry several hundred telephone calls at the same time. Optical fibers were first tested by telephone companies in big cities, where the great volume of calls often overloaded standard underground phone lines.
On May 11, 1977, American Telephone & Telegraph Company (AT&T), along with Illinois Bell Telephone, Western Electric, and Bell Telephone Laboratories, began the first commercial test of fiber-optics telecommunications in downtown Chicago. The system consisted of a 2.4-kilometer cable laid beneath city streets. The cable, only 1.3 centimeters in diameter, linked an office building in the downtown business district with two telephone exchange centers. Voice and video signals were coded into pulses of laser light and transmitted through the hair-thin glass fibers. The tests showed that a single pair of fibers could carry nearly six hundred telephone conversations at once, very reliably and at a reasonable cost.
Six years later, in October, 1983, Bell Laboratories succeeded in transmitting the equivalent of six thousand telephone signals through an optical fiber cable that was 161 kilometers long. Since that time, countries all over the world, from England to Indonesia, have developed optical communications systems.
Consequences

Fiber optics has had a great impact on telecommunications. A single fiber can now carry thousands of conversations with no electrical interference. These fibers are less expensive, weigh less, and take up much less space than copper wire. As a result, people can carry on conversations over long distances without static and at a low cost.
One of the first uses of fiber optics, and perhaps its best-known application, is the fiberscope, a medical instrument that permits internal examination of the human body without surgery or X-ray techniques. The fiberscope, or endoscope, consists of two fiber bundles. One of the fiber bundles transmits bright light into the patient, while the other conveys a color image back to the eye of the physician. The fiberscope has been used to look for ulcers, cancer, and polyps in the stomach, intestine, and esophagus of humans. Medical instruments, such as forceps, can be attached to the fiberscope, allowing the physician to perform a range of medical procedures, such as clearing a blocked windpipe or cutting precancerous polyps from the colon.
See also Cell phone; Community antenna television; Communications satellite; FM radio; Laser; Long-distance radiotelephony; Long-distance telephone; Telephone switching.
Further Reading

Carey, John, and Neil Gross. “The Light Fantastic: Optoelectronics May Revolutionize Computers—and a Lot More.” Business Week (May 10, 1993).
Free, John. “Fiber Optics Head for Home.” Popular Science 238 (March, 1991).
Hecht, Jeff. City of Light: The Story of Fiber Optics. Oxford: Oxford University Press, 1999.
Paul, Noel C. “Laying Down the Line with Huge Projects to Circle the Globe in Fiber Optic Cable.” Christian Science Monitor (March 29, 2001).
Shinal, John G., with Timothy J. Mullaney. “At the Speed of Light.” Business Week (October 9, 2000).
Field ion microscope

The invention: A microscope that uses ions formed in high-voltage electric fields to view atoms on metal surfaces.

The people behind the invention:
Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and research professor
J. Robert Oppenheimer (1904-1967), an American physicist

To See Beneath the Surface
In the early twentieth century, developments in physics, especially quantum mechanics, paved the way for the application of new theoretical and experimental knowledge to the problem of viewing the atomic structure of metal surfaces. Of primary importance were the physicist George Gamow’s 1928 theoretical explanation of the field emission of electrons by quantum mechanical means and J. Robert Oppenheimer’s 1928 prediction of the quantum mechanical ionization of hydrogen in a strong electric field.
In 1936, Erwin Wilhelm Müller developed his field emission microscope, the first in a series of instruments that would exploit these developments. It was to be the first instrument to view atomic structures—although not the individual atoms themselves—directly. Müller’s subsequent field ion microscope utilized the same basic concepts used in the field emission microscope yet proved to be a much more powerful and versatile instrument. By 1956, Müller’s invention allowed him to view the crystal lattice structure of metals in atomic detail; it actually showed the constituent atoms.

The field emission and field ion microscopes make it possible to view the atomic surface structures of metals on fluorescent screens. The field ion microscope is the direct descendant of the field emission microscope. In the case of the field emission microscope, the images are projected by electrons emitted directly from the tip of a metal needle, which constitutes the specimen under investigation.
These electrons produce an image of the atomic lattice structure of the needle’s surface. The needle serves as the electron-donating electrode in a vacuum tube, also known as the “cathode.” A fluorescent screen that serves as the electron-receiving electrode, or “anode,” is placed opposite the needle. When sufficient electrical voltage is applied across the cathode and anode, the needle tip emits electrons, which strike the screen. The image produced on the screen is a projection of the electron source—the needle surface’s atomic lattice structure.
Müller studied the effect of needle shape on the performance of the microscope throughout much of 1937. When the needles had been properly shaped, Müller was able to realize magnifications of up to 1 million times. This magnification allowed Müller to view what he called “maps” of the atomic crystal structure of metals, since the needles were so small that they were often composed of only one simple crystal of the material. While the magnification may have been great, however, the resolution of the instrument was severely limited by the physics of emitted electrons, which caused the images Müller obtained to be blurred.
Improving the View

In 1943, while working in Berlin, Müller realized that the resolution of the field emission microscope was limited by two factors. The electron velocity, a particle property, was extremely high and uncontrollably random, causing the micrographic images to be blurred. In addition, the electrons had an unsatisfactorily high wavelength. When Müller combined these two factors, he was able to determine that the field emission microscope could never depict single atoms; it was a physical impossibility for it to distinguish one atom from another.

By 1951, this limitation led him to develop the technology behind the field ion microscope. In 1952, Müller moved to the United States and founded the Pennsylvania State University Field Emission Laboratory. He perfected the field ion microscope between 1952 and 1956.
The field ion microscope utilized positive ions instead of electrons to create the atomic surface images on the fluorescent screen.
Erwin Müller

Erwin Müller’s scientific goal was to see an individual atom, and to that purpose he invented ever more powerful microscopes. He was born in Berlin, Germany, in 1911 and attended the city’s Technische Hochschule, earning a diploma in engineering in 1935 and a doctorate in physics in 1936.

Following his studies he worked as an industrial researcher. Still a neophyte scientist, he discovered the principle of the field emission microscope and was able to produce an image of a structure only two nanometers in diameter on the surface of a cathode. In 1941 Müller discovered field desorption by reversing the polarity of the electron emitter at very low temperatures so that surface atoms evaporated in the electric field. In 1947 he left industry and began an academic career, teaching physical chemistry at the Altenburg Engineering School. The following year he was appointed a department head at the Fritz Haber Institute. While there, he found that by having a cathode absorb gas ions and then re-emit them he could produce greater magnification.

In 1952 Müller became a professor at Pennsylvania State University. Applying the new field-ion emission principle, he was able to achieve his goal, images of individual atoms, in 1956. Almost immediately chemists and physicists adopted the field-ion microscope to conduct basic research concerning the underlying behavior of field ionization and interactions among adsorbed atoms. He further aided such research by coupling a field-ion microscope and mass spectrometer, calling the combination an atom-probe field-ion microscope; it could both magnify and chemically analyze atoms.

Müller died in 1977. He received the National Medal of Science posthumously, one of many honors for his contributions to microscopy.
When an easily ionized gas—at first hydrogen, but usually helium, neon, or argon—was introduced into the evacuated tube, the emitted electrons ionized the gas atoms, creating a stream of positively charged particles, much as Oppenheimer had predicted in 1928. Müller’s use of positive ions circumvented one of the resolution problems inherent in the use of imaging electrons. Like the electrons, however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically cooling the needle tip with a supercooled liquefied gas such as nitrogen or hydrogen.
By 1956, Müller had perfected the means of supplying imaging positive ions by filling the vacuum tube with an extremely small quantity of an inert gas such as helium, neon, or argon. By using such a gas, Müller was assured that no chemical reaction would occur between the needle tip and the gas; any such reaction would alter the surface atomic structure of the needle and thus alter the resulting microscopic image. The imaging ions allowed the field ion microscope to image the emitter surface to a resolution of between two and three angstroms, making it ten times more accurate than its close relative, the field emission microscope.
Consequences

The immediate impact of the field ion microscope was its influence on the study of metallic surfaces. It is a well-known fact of materials science that the physical properties of metals are influenced by the imperfections in their constituent lattice structures. It was not possible to view the atomic structure of the lattice, and thus the finest detail of any imperfection, until the field ion microscope was developed. The field ion microscope is the only instrument powerful enough to view the structural flaws of metal specimens in atomic detail.
Although the instrument may be extremely powerful, the extremely large electrical fields required in the imaging process preclude the instrument’s application to all but the hardiest of metallic specimens. The field strength of 500 million volts per centimeter exerts an average stress on metal specimens in the range of almost 1 ton per square millimeter. Metals such as iron and platinum can withstand this strain because of the shape of the needles into which they are formed. Yet this limitation of the instrument makes it extremely difficult to examine biological materials, which cannot withstand the amount of stress that metals can. A practical by-product in the study of field ionization—field evaporation—eventually permitted scientists to view large biological molecules.
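The stress figure quoted above can be checked with the standard electrostatic (Maxwell) stress expression, ε₀E²/2. This back-of-the-envelope sketch is illustrative and not part of the original text.

```python
# Sketch: electrostatic stress pulling on the emitter tip, sigma = eps0 * E**2 / 2.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 500e6 * 100    # the text's 500 million volts per centimeter, expressed in V/m

stress_pa = EPS0 * E ** 2 / 2            # ~1.1e10 Pa
# Express in metric tons-force per square millimeter (1 tf = 9806.65 N).
tons_per_mm2 = stress_pa * 1e-6 / 9806.65
print(f"{stress_pa:.2e} Pa ~ {tons_per_mm2:.2f} t/mm^2")
```

The result, roughly 1.1 tons-force per square millimeter, agrees with the text’s figure of almost 1 ton per square millimeter.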
Field evaporation also allowed surface scientists to view the atomic structures of biological molecules. By embedding molecules such as phthalocyanine within the metal needle, scientists have been able to view the atomic structures of large biological molecules by field evaporating much of the surrounding metal until only the biological material remains at the needle’s surface.
See also Cyclotron; Electron microscope; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Sonar; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.
Further Reading

Gibson, J. M. “Tools for Probing ‘Atomic’ Action.” IEEE Spectrum 22, no. 12 (December, 1985).
Kunetka, James W. Oppenheimer: The Years of Risk. Englewood Cliffs, N.J.: Prentice-Hall, 1982.
Schweber, Silvan S. In the Shadow of the Bomb: Bethe, Oppenheimer, and the Moral Responsibility of the Scientist. Princeton, N.J.: Princeton University Press, 2000.
Tsong, Tien Tzou. Atom-Probe Field Ion Microscopy: Field Ion Emission and Surfaces and Interfaces at Atomic Resolution. New York: Cambridge University Press, 1990.
Floppy disk

The invention: Inexpensive magnetic medium for storing and moving computer data.

The people behind the invention:
Andrew D. Booth (1918- ), an English inventor who developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM’s research facility who oversaw development of magnetic disk storage devices
Alan Shugart (1930- ), an engineer at IBM’s research laboratory who first developed the floppy disk as a means of mass storage for mainframe computers
First Tries

When the International Business Machines (IBM) Corporation decided to concentrate on the development of computers for business use in the 1950’s, it faced a problem that had troubled the earliest computer designers: how to store data reliably and inexpensively. In the early days of computers (the early 1940’s), a number of ideas were tried. The English inventor Andrew D. Booth produced spinning paper disks on which he stored data by means of punched holes, only to abandon the idea because of the insurmountable engineering problems he foresaw.
The next step was “punched” cards, an idea first used when the French inventor Joseph-Marie Jacquard invented an automatic weaving loom for which patterns were stored in pasteboard cards. The idea was refined by the English mathematician and inventor Charles Babbage for use in his “analytical engine,” an attempt to build a kind of computing machine. Although the punched card was simple and reliable, it was not fast enough, nor did it store enough data, to be truly practical.
The Ampex Corporation demonstrated its first magnetic audiotape recorder after World War II (1939-1945). Shortly after that, the Binary Automatic Computer (BINAC) was introduced with a storage device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC), used metal tape instead of plastic (plastic was easily stretched or even broken). Unfortunately, metal tape was considerably heavier, and its edges were razor-sharp and thus dangerous. Improvements in plastic tape eventually produced sturdy media, and magnetic tape became (and remains) a practical medium for storage of computer data.
Still later designs combined Booth’s spinning paper disks with magnetic technology to produce rapidly rotating “drums.” Whereas a tape might have to be fast-forwarded nearly to its end to locate a specific piece of data, a drum rotating at speeds up to 12,500 revolutions per minute (rpm) could retrieve data very quickly and could store more than 1 million bits (or approximately 125 kilobytes) of data.
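The speed advantage of rotating storage over tape comes down to access latency: on average, the desired data lies half a revolution away. A small illustrative sketch using the rotation speeds given in the text:

```python
# Average rotational latency is half a revolution: 60 / rpm / 2 seconds.
def avg_latency_ms(rpm: float) -> float:
    return 60.0 / rpm / 2 * 1000.0

print(f"{avg_latency_ms(12_500):.1f} ms")  # drum at 12,500 rpm: 2.4 ms
print(f"{avg_latency_ms(1_200):.0f} ms")   # hard disk platters at 1,200 rpm: 25 ms
```

A tape, by contrast, might need many seconds of fast-forwarding to reach the same record, which is why drums and disks won out for random access.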
In May, 1955, these drums evolved, under the direction of Reynold B. Johnson, into IBM’s hard disk unit. The hard disk unit consisted of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both sides of each disk could be used to store information. When the operator wished to access the disk, at his or her command a read/write head was moved to the right disk and to the side of the disk that held the desired data. The operator could then read data from or record data onto the disk. To speed things even more, the next version of the device, similar in design, employed one hundred read/write heads—one for each side of its fifty double-sided disks. The only remaining disadvantage was its size, which earned IBM’s first commercial unit the nickname “jukebox.”
The First Floppy

The floppy disk drive developed directly from hard disk technology. It did not take shape until the late 1960’s under the direction of Alan Shugart (it was announced by IBM as a ready product in 1970). First created to help restart the operating systems of mainframe computers that had gone dead, the floppy seemed in some ways to be a step back, for it operated more slowly than a hard disk drive and did not store as much data. Initially, it consisted of a single thin plastic disk eight inches in diameter and was developed without the protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage over the hard disk: portability with reliability.
Another advantage soon became apparent: The floppy is resilient to damage. In a hard disk drive, the read/write heads must hover thousandths of a centimeter over the disk surface in order to attain maximum performance. Should even a small particle of dust get in the way, or should the drive unit be bumped too hard, the head may “crash” into the surface of the disk and ruin its magnetic coating; the result is a permanent loss of data. Because the floppy operates with the read/write head in contact with the flexible plastic disk surface, individual particles of dust or other contaminants are not nearly as likely to cause disaster.
As a result of its advantages, the floppy disk was the logical choice for mass storage in personal computers (PCs), which were developed a few years after the floppy disk’s introduction. The floppy is still an important storage device even though hard disk drives for PCs have become less expensive. Moreover, manufacturers continually are developing new floppy formats and new floppy disks that can hold more data.

Three-and-one-half-inch disks improved on the design of earlier floppies by protecting their magnetic media within hard plastic shells and using sliding metal flanges to protect the surfaces on which recording heads make contact. (PhotoDisc)
Consequences

Personal computing would have developed very differently were it not for the availability of inexpensive floppy disk drives. When IBM introduced its PC in 1981, the machine provided as standard equipment a connection for a cassette tape recorder as a storage device; a floppy disk was only an option (though an option few did not take). The awkwardness of tape drives—their slow speed and sequential nature of storing data—presented clear obstacles to the acceptance of the personal computer as a basic information tool. By contrast, the floppy drive gives computer users relatively fast storage at low cost.

Floppy disks provided more than merely economical data storage. Since they are built to be removable (unlike hard drives), they represented a basic means of transferring data between machines. Indeed, prior to the popularization of local area networks (LANs), the floppy was known as a “sneaker” network: One merely carried the disk by foot to another computer.
Floppy disks were long the primary means of distributing new software to users. Even the very flexible floppy showed itself to be quite resilient to the wear and tear of postal delivery. Later, the 3.5-inch disk improved upon the design of the original 8-inch and 5.25-inch floppies by protecting the disk medium within a hard plastic shell and by using a sliding metal door to protect the area where the read/write heads contact the disk.
By the late 1990’s, floppy disks were giving way to new data-storage media, particularly CD-ROMs—durable laser-encoded disks that hold more than 700 megabytes of data. As the price of blank CDs dropped dramatically, floppy disks tended to be used mainly for short-term storage of small amounts of data. Floppy disks were also being used less and less for data distribution and transfer, as computer users turned increasingly to sending files via e-mail on the Internet, and software providers made their products available for downloading on Web sites.
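The scale of that shift is easy to quantify. The sketch below assumes the standard 1.44-megabyte capacity of a high-density 3.5-inch floppy, a figure not given in the text; the CD capacity comes from the text itself.

```python
# Rough comparison of media capacities. FLOPPY_MB = 1.44 is an assumed
# standard value for a high-density 3.5-inch disk; CD_MB is from the text.
FLOPPY_MB = 1.44
CD_MB = 700

print(round(CD_MB / FLOPPY_MB))  # 486 -- one CD holds roughly 486 floppies
```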
See also Bubble memory; Compact disc; Computer chips; Hard disk; Optical disk; Personal computer.
Further Reading

Brandel, Mary. “IBM Fashions the Floppy.” Computerworld 33, no. 23 (June 7, 1999).
Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.
Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. New York: McGraw-Hill, 2000.
Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.
Fluorescent lighting

The invention: A form of electrical lighting in which an electric discharge inside a phosphor-coated glass tube gives off a cool bluish light and emits ultraviolet radiation, which the phosphor converts to visible light.

The people behind the invention:
Vincenzo Cascariolo (1571-1624), an Italian alchemist and shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical engineer
Celebrating the “Twelve Greatest Inventors”

On the night of November 23, 1936, more than one thousand industrialists, patent attorneys, and scientists assembled in the main ballroom of the Mayflower Hotel in Washington, D.C., to celebrate the one hundredth anniversary of the U.S. Patent Office. A transport liner over the city radioed the names chosen by the Patent Office as America’s “Twelve Greatest Inventors,” and, as the distinguished group strained to hear those names, “the room was flooded for a moment by the most brilliant light yet used to illuminate a space that size.”
Thus did The New York Times summarize the commercial introduction of the fluorescent lamp. The twelve inventors named were Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler, Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli Whitney. There was, however, no name to bear the honor for inventing fluorescent lighting. That honor is shared by many who participated in a very long series of discoveries.
The fluorescent lamp operates as a low-pressure electric discharge inside a glass tube that contains a droplet of mercury and a gas, commonly argon. The inside of the glass tube is coated with fine particles of phosphor. When electricity is applied to the gas, the mercury gives off a bluish light and emits ultraviolet radiation. When bathed in the strong ultraviolet radiation emitted by the mercury, the phosphor fluoresces (emits light).
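The energy bookkeeping behind this two-step process can be sketched with the photon-energy relation E = hc/λ. The 253.7-nanometer mercury line used below is a standard figure assumed for illustration; the text says only “ultraviolet.”

```python
# Sketch: photon energies in a fluorescent lamp. Constants are rounded
# standard values; the 253.7 nm mercury line is assumed, not from the text.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

uv = photon_ev(253.7)    # ~4.9 eV ultraviolet photon from the mercury discharge
vis = photon_ev(550.0)   # ~2.3 eV mid-visible photon re-emitted by the phosphor
print(f"UV {uv:.2f} eV -> visible {vis:.2f} eV")
```

The difference between the absorbed ultraviolet photon and the lower-energy visible photon the phosphor re-emits is dissipated as heat in the coating.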
The setting for the introduction of the fluorescent lamp began at the beginning of the 1600’s, when Vincenzo Cascariolo, an Italian shoemaker and alchemist, discovered a substance that gave off a bluish glow in the dark after exposure to strong sunlight. The fluorescent substance was apparently barium sulfide and was so unusual for that time and so valuable that its formulation was kept secret for a long time. Gradually, however, scholars became aware of the preparation secrets of the substance and studied it and other luminescent materials.
Further studies in fluorescent lighting were made by the German physicist Johann Wilhelm Ritter. He observed the luminescence of phosphors that were exposed to various “exciting” lights. In 1801, he noted that some phosphors shone brightly when illuminated by light that the eye could not see (ultraviolet light). Ritter thus discovered the ultraviolet region of the light spectrum. The use of phosphors to transform ultraviolet light into visible light was an important step in the continuing development of the fluorescent lamp.
The British mathematician and physicist Sir George Gabriel Stokes studied the phenomenon as well. It was he who, in 1852, termed the afterglow “fluorescence.”
Geissler Tubes

While these advances were being made, other workers were trying to produce a practical form of electric light. In 1706, the English physicist Francis Hauksbee devised an electrostatic generator, which is used to accelerate charged particles to very high levels of electrical energy. He then connected the device to a glass “jar,” used a vacuum pump to evacuate the jar to a low pressure, and tested his generator. In so doing, Hauksbee obtained the first human-made electrical glow discharge by “capturing lightning” in a jar.

In 1854, Heinrich Geissler, a glassblower and apparatus maker, opened his shop in Bonn, Germany, to make scientific instruments; in 1855, he produced a vacuum pump that used liquid mercury as an evacuation fluid. That same year, Geissler made the first gaseous conduction lamps while working in collaboration with the German scientist Julius Plücker. Plücker referred to these lamps as “Geissler tubes.” Geissler was able to create red light with neon gas filling a lamp and light of nearly all colors by using certain types of gas within each of the lamps. Thus, both the neon sign business and the science of spectroscopy were born.
Geissler tubes were studied extensively by a variety of workers. At the beginning of the twentieth century, the practical American engineer Peter Cooper Hewitt put these studies to use by marketing the first low-pressure mercury vapor lamps. The lamps were quite successful, although they required high voltage for operation, emitted an eerie blue-green light, and shone dimly by comparison with their eventual successor, the fluorescent lamp. At about the same time, systematic studies of phosphors had finally begun.

By the 1920’s, a number of investigators had discovered that the low-pressure mercury vapor discharge marketed by Hewitt was an extremely efficient method for producing ultraviolet light, if the mercury and rare gas pressures were properly adjusted. With a phosphor to convert the ultraviolet light back to visible light, the Hewitt lamp made an excellent light source.
Impact

The introduction of fluorescent lighting in 1936 presented the public with a completely new form of lighting that had the enormous advantages of high efficiency, long life, and relatively low cost. By 1938, production of fluorescent lamps was well under way. By April, 1938, four sizes of fluorescent lamps in various colors had been offered to the public, and more than two hundred thousand lamps had been sold.
During 1939 and 1940, two great expositions—the New York World’s Fair and the San Francisco International Exposition—helped popularize fluorescent lighting. Thousands of tubular fluorescent lamps formed a great spiral in the “motor display salon,” the car showroom of the General Motors exhibit at the New York World’s Fair. Fluorescent lamps lit the Polish Restaurant and hung in vertical clusters on the flagpoles along the Avenue of the Flags at the fair, while two-meter-long, upright fluorescent tubes illuminated buildings at the San Francisco International Exposition.
When the United States entered World War II (1939-1945), the demand for efficient factory lighting soared. In 1941, more than twenty-one million fluorescent lamps were sold. Technical advances continued to improve the fluorescent lamp. By the 1990’s, this type of lamp supplied most of the world’s artificial lighting.
See also Electric clock; Electric refrigerator; Microwave cooking; Television; Tungsten filament; Vacuum cleaner; Washing machine.
Further Reading

Bowers, B. “New Lamps for Old: The Story of Electric Lighting.” IEE Review 41, no. 6 (November 16, 1995).

Dake, Henry Carl, and Jack De Ment. Fluorescent Light and Its Applications, Including Location and Properties of Fluorescent Materials. Brooklyn, N.Y.: Chemical Publishing, 1941.

“EPA Sees the Light on Fluorescent Bulbs.” Environmental Health Perspectives 107, no. 12 (December, 1999).

Harris, J. B. “Electric Lamps, Past and Present.” Engineering Science and Education Journal 2, no. 4 (August, 1993).

“How Fluorescent Lighting Became Smaller.” Consulting-Specifying Engineer 23, no. 2 (February, 1998).
FM radio
The invention: A method of broadcasting radio signals by modulating the frequency, rather than the amplitude, of radio waves. FM radio greatly improved the quality of sound transmission.

The people behind the invention:
Edwin H. Armstrong (1890-1954), the inventor of FM radio broadcasting
David Sarnoff (1891-1971), the founder of RCA
An Entirely New System
Because early radio broadcasts used amplitude modulation (AM) to transmit their sounds, they were subject to a sizable amount of interference and static. Since good AM reception relies on the amount of energy transmitted, energy sources in the atmosphere between the station and the receiver can distort or weaken the original signal. This is particularly irritating for the transmission of music.
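The contrast between the two schemes can be sketched numerically. The following is a minimal illustration only, not Armstrong’s circuitry; the carrier frequency, deviation, and test tone are arbitrary values chosen for the example:

```python
import math

def am_sample(t, carrier_hz, message):
    # AM: the message varies the carrier's amplitude, so any noise
    # energy added along the path changes the received loudness directly
    return (1.0 + message(t)) * math.cos(2 * math.pi * carrier_hz * t)

def fm_sample(t, carrier_hz, deviation_hz, message, dt=1e-6):
    # FM: the message varies the carrier's instantaneous frequency;
    # the phase is the running integral of that frequency, so the
    # amplitude of the received wave carries no information
    phase = 0.0
    for k in range(int(round(t / dt))):
        inst_freq = carrier_hz + deviation_hz * message(k * dt)
        phase += 2 * math.pi * inst_freq * dt
    return math.cos(phase)

# a 1 kHz test tone as the message signal
tone = lambda t: math.cos(2 * math.pi * 1000 * t)
```

Because the FM receiver tracks only frequency, amplitude disturbances of the kind described above can simply be clipped off, which is the root of FM’s static immunity.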
Edwin H. Armstrong provided a solution to this technological constraint. A graduate of Columbia University, Armstrong made a significant contribution to the development of radio with his basic inventions of circuits for AM receivers. (Indeed, the monies Armstrong received from his earlier inventions financed the development of the frequency modulation, or FM, system.) Armstrong was one among many contributors to AM radio. For FM broadcasting, however, Armstrong must be ranked as the most important inventor.

During the 1920’s, Armstrong established his own research laboratory in Alpine, New Jersey, across the Hudson River from New York City. With a small staff of dedicated assistants, he carried out research on radio circuitry and systems for nearly three decades. At that time, Armstrong also began to teach electrical engineering at Columbia University.
From 1928 to 1933, Armstrong worked diligently at his private laboratory at Columbia University to construct a working model of an FM radio broadcasting system. Because of the primitive state of vacuum tube technology at the time, a number of Armstrong’s experimental circuits required as many as one hundred tubes. Between July, 1930, and January, 1933, Armstrong filed four basic FM patent applications. All were granted simultaneously on December 26, 1933.

Armstrong sought to perfect FM radio broadcasting, not to offer radio listeners better musical reception but to create an entirely new radio broadcasting system. On November 5, 1935, Armstrong made his first public demonstration of FM broadcasting in New York City to an audience of radio engineers. An amateur station based in suburban Yonkers, New York, transmitted these first signals. The scientific world began to consider the advantages and disadvantages of Armstrong’s system; other laboratories began to craft their own FM systems.
Corporate Conniving

Because Armstrong had no desire to become a manufacturer or broadcaster, he approached David Sarnoff, head of the Radio Corporation of America (RCA). As the owner of the top manufacturer of radio sets and the top radio broadcasting network, Sarnoff was interested in all advances of radio technology. Armstrong first demonstrated FM radio broadcasting for Sarnoff in December, 1933. This was followed by visits from RCA engineers, who were sufficiently impressed to recommend to Sarnoff that the company conduct field tests of the Armstrong system.

In 1934, Armstrong, with the cooperation of RCA, set up a test transmitter at the top of the Empire State Building, sharing facilities with the experimental RCA television transmitter. From 1934 through 1935, tests were conducted using the Empire State facility, to mixed reactions from RCA’s best engineers. AM radio broadcasting already had a performance record of nearly two decades. The engineers wondered if this new technology could replace something that had worked so well.

This less-than-enthusiastic evaluation fueled the skepticism of RCA lawyers and salespeople. RCA had too much invested in the AM system, both as a leading manufacturer and as the dominant owner of the major radio network of the time, the National Broadcasting Company (NBC). Sarnoff was in no rush to adopt FM. To change systems would risk the millions of dollars RCA was making as America emerged from the Great Depression.
In 1935, Sarnoff advised Armstrong that RCA would cease any further research and development activity in FM radio broadcasting. (Still, engineers at RCA laboratories continued to work on FM to protect the corporate patent position.) Sarnoff declared to the press that his company would push the frontiers of broadcasting by concentrating on research and development of radio with pictures, that is, television. As a tangible sign, Sarnoff ordered that Armstrong’s FM radio broadcasting tower be removed from the top of the Empire State Building.

Armstrong was outraged. By the mid-1930’s, the development of FM radio broadcasting had become a mission for Armstrong. For the remainder of his life, Armstrong devoted his considerable talents to the promotion of FM radio broadcasting.
Impact
After the break with Sarnoff, Armstrong proceeded with plans to develop his own FM operation. Allied with two of RCA’s biggest manufacturing competitors, Zenith and General Electric, Armstrong pressed ahead. In June of 1936, at a Federal Communications Commission (FCC) hearing, Armstrong proclaimed that FM broadcasting was the only static-free, noise-free, and uniform system—both day and night—available. He argued, correctly, that AM radio broadcasting had none of these qualities.

During World War II (1939-1945), Armstrong gave the military permission to use FM with no compensation. That patriotic gesture cost Armstrong millions of dollars when the military soon became all FM. It did, however, expand interest in FM radio broadcasting. World War II had provided a field test of equipment and use.

By the 1970’s, FM radio broadcasting had grown tremendously. By 1972, one in three radio listeners tuned into an FM station some time during the day. Advertisers began to use FM radio stations to reach the young and affluent audiences that were turning to FM stations in greater numbers.
By the late 1970’s, FM radio stations outnumbered AM stations. By 1980, nearly half of radio listeners tuned into FM stations on a regular basis. A decade later, FM radio listening accounted for more than two-thirds of audience time. Armstrong’s predictions that listeners would prefer the clear, static-free sounds offered by FM radio broadcasting had come to pass by the mid-1980’s, nearly fifty years after Armstrong had commenced his struggle to make FM radio broadcasting a part of commercial radio.
See also Community antenna television; Communications satellite; Dolby noise reduction; Fiber-optics; Radio; Radio crystal sets; Television; Transistor radio.
Further Reading

Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: HarperPerennial, 1993.

Sobel, Robert. RCA. New York: Stein and Day, 1986.

Streissguth, Thomas. Communications: Sending the Message. Minneapolis, Minn.: Oliver Press, 1997.
Food freezing
The invention: It was long known that low temperatures helped to protect food against spoiling; the invention that made frozen food practical was a method of freezing items quickly. Clarence Birdseye’s quick-freezing technique made possible a revolution in food preparation, storage, and distribution.

The people behind the invention:
Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell University
Amanda Theodosia Jones (1835-1914), a food-preservation pioneer
Feeding the Family
In 1917, Clarence Birdseye developed a means of quick-freezing meat, fish, vegetables, and fruit without substantially changing their original taste. His system of freezing was called by Fortune magazine “one of the most exciting and revolutionary ideas in the history of food.” Birdseye went on to refine and perfect his method and to promote the frozen foods industry until it became a commercial success nationwide.

It was during a trip to Labrador, where he worked as a fur trader, that Birdseye was inspired by this idea. Birdseye’s new wife and five-week-old baby had accompanied him there. In order to keep his family well fed, he placed barrels of fresh cabbages in salt water and then exposed the vegetables to freezing winds. Successful at preserving vegetables, he went on to freeze a winter’s supply of ducks, caribou, and rabbit meat.

In the following years, Birdseye experimented with many freezing techniques. His equipment was crude: an electric fan, ice, and salt water. His earliest experiments were on fish and rabbits, which he froze and packed in old candy boxes. By 1924, he had borrowed money against his life insurance and was lucky enough to find three partners willing to invest in his new General Seafoods Company (later renamed General Foods), located in Gloucester, Massachusetts.
Although it was Birdseye’s genius that put the principles of quick-freezing to work, he did not actually invent quick-freezing. The scientific principles involved had been known for some time. As early as 1842, a patent for freezing fish had been issued in England. Nevertheless, the commercial exploitation of the freezing process could not have happened until the end of the 1800’s, when mechanical refrigeration was invented. Even then, Birdseye had to overcome major obstacles.
Finding a Niche

By the 1920’s, there still were few mechanical refrigerators in American homes. It would take years before adequate facilities for food freezing and retail distribution would be established across the United States. By the late 1930’s, frozen foods had, indeed, found their role in commerce but still could not compete with canned or fresh foods. Birdseye had to work tirelessly to promote the industry, writing and delivering numerous lectures and articles to advance its popularity. His efforts were helped by scientific research conducted at Cornell University by Donald K. Tressler and by C. R. Fellers of what was then Massachusetts State College. Also, during World War II (1939-1945), more Americans began to accept the idea: Rationing, combined with a shortage of canned foods, contributed to the demand for frozen foods. The armed forces made large purchases of these items as well.

General Foods was the first to use a system of extremely rapid freezing of perishable foods in packages. Under the Birdseye system, fresh foods, such as berries or lobster, were packaged snugly in convenient square containers. Then, the packages were pressed between refrigerated metal plates under pressure at 50 degrees below zero. Two types of freezing machines were used. The “double belt” freezer consisted of two metal belts that moved through a 15-meter freezing tunnel, while a special salt solution was sprayed on the surfaces of the belts. This double-belt freezer was used only in permanent installations and was soon replaced by the “multiplate” freezer, which was portable and required only 11.5 square meters of floor space compared to the double belt’s 152 square meters.
Amanda Theodosia Jones
Amanda Theodosia Jones (1835-1914) was close to her brother. When he suddenly died while they were at school and she was left to contact relatives and make the necessary arrangements for his remains, she was devastated. She had a nervous breakdown at seventeen and could not believe he was entirely gone. She was sure that he remained an active presence in her life, and she became a spiritualist and medium so that they could talk during séances.

Jones always claimed she did not come up with the idea for the vacuum packing method for preserving food, an important technique before freezing foods became practicable. It was her brother who gave it to her. She did the actual experimental work herself, however, and with the aid of Leroy C. Cooley got the first of their seven patents for food processing. In 1873 she launched The Women’s Canning and Preserving Company, and it was more than just a company. It was a mission. All the officers, stockholders, and employees were women. “This is a woman’s industry,” she insisted, and ran the company so that it was a training school for working women.

In the 1880’s, the spirit of invention moved Jones again. Concerned about the high rate of accidents among oil drillers, she examined the problem. Simply add a safety valve to pipes to control the release of the crude oil, she told drillers in Pennsylvania. The idea had not occurred to them, but they tried it, and it so improved safety that Jones won wide praise.
The multiplate freezer also made it possible to apply the technique of quick-freezing to seasonal crops. People were able to transport these freezers easily from one harvesting field to another, where they were used to freeze crops such as peas fresh off the vine. The handy multiplate freezer consisted of an insulated cabinet equipped with refrigerated metal plates. Stacked one above the other, these plates were capable of being opened and closed to receive food products and to compress them with evenly distributed pressure. Each aluminum plate had internal passages through which ammonia flowed and expanded at a temperature of −3.8 degrees Celsius, thus causing the foods to freeze.
A major benefit of the new frozen foods was that their taste and vitamin content were not lost. Ordinarily, when food is frozen slowly, ice crystals form, which slowly rupture food cells, thus altering the taste of the food. With quick-freezing, however, the food looks, tastes, and smells like fresh food. Quick-freezing also cuts down on bacteria.
Impact

During the months between one food harvest and the next, humankind requires trillions of pounds of food to survive. In many parts of the world, an adequate supply of food is available; elsewhere, much food goes to waste and many go hungry. Methods of food preservation such as those developed by Birdseye have done much to help those who cannot obtain proper fresh foods. Preserving perishable foods also means that they will be available in greater quantity and variety all year round. In all parts of the world, both tropical and arctic delicacies can be eaten in any season of the year.
With the rise in popularity of frozen “fast” foods, nutritionists began to study their effect on the human body. Research has shown that fresh food is the most beneficial. In an industrial nation with many people, however, the distribution of fresh commodities is difficult. It may be many decades before scientists know the long-term effects on generations raised primarily on frozen foods.
See also Electric refrigerator; Freeze-drying; Microwave cooking; Polystyrene; Refrigerant gas; Tupperware.
Further Reading

Altman, Linda Jacobs. Women Inventors. New York: Facts on File, 1997.

Tressler, Donald K. The Memoirs of Donald K. Tressler. Westport, Conn.: Avi Publishing, 1976.

_____, and Clifford F. Evers. The Freezing Preservation of Foods. New York: Avi Publishing, 1943.
FORTRAN programming language
The invention: The first major computer programming language, FORTRAN supported programming in a mathematical language that was natural to scientists and engineers and achieved unsurpassed success in scientific computation.

The people behind the invention:
John Backus (1924- ), an American software engineer and manager
John W. Mauchly (1907-1980), an American physicist and engineer
Herman Heine Goldstine (1913- ), a mathematician and computer scientist
John von Neumann (1903-1957), a Hungarian American mathematician and physicist
Talking to Machines
Formula Translation, or FORTRAN—the first widely accepted high-level computer language—was completed by John Backus and his coworkers at the International Business Machines (IBM) Corporation in April, 1957. Designed to support programming in a mathematical language that was natural to scientists and engineers, FORTRAN achieved unsurpassed success in scientific computation.
Computer languages are means of specifying the instructions that a computer should execute and the order of those instructions. Computer languages can be divided into categories of progressively higher degrees of abstraction. At the lowest level is binary code, or machine code: Binary digits, or “bits,” specify in complete detail every instruction that the machine will execute. This was the only language available in the early days of computers, when such machines as the ENIAC (Electronic Numerical Integrator and Calculator) required hand-operated switches and plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the higher language into binary machine language (also called “object code”). High-level languages (also called “programming languages”) are largely or entirely independent of the underlying machine structure. FORTRAN was the first language of this type to win widespread acceptance.
The emergence of machine-independent programming languages was a gradual process that spanned the first decade of electronic computation. One of the earliest developments was the invention of “flowcharts,” or “flow diagrams,” by Herman Heine Goldstine and John von Neumann in 1947. Flowcharting became the most influential software methodology during the first twenty years of computing.
Short Code was the first language to be implemented that contained some high-level features, such as the ability to use mathematical equations. The idea came from John W. Mauchly, and it was implemented on the BINAC (Binary Automatic Computer) in 1949 with an “interpreter”; later, it was carried over to the UNIVAC (Universal Automatic Computer) I. Interpreters are programs that do not translate commands into a series of object-code instructions; instead, they directly execute (interpret) those commands. Every time the interpreter encounters a command, that command must be interpreted again. “Compilers,” however, convert the entire command into object code before it is executed.
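The distinction can be sketched in a modern language with a toy command list (a hypothetical three-address mini-language invented here for illustration; it is not Short Code itself):

```python
def interpret(commands, env):
    # an interpreter re-examines each command on every run
    for dest, left, right in commands:
        env[dest] = env[left] + env[right]
    return env

def compile_commands(commands):
    # a "compiler" translates the whole program once; repeated runs
    # execute the translated form and skip the re-analysis step
    source = "\n".join(
        f"env['{dest}'] = env['{left}'] + env['{right}']"
        for dest, left, right in commands
    )
    translated = compile(source, "<toy>", "exec")

    def run(env):
        exec(translated, {"env": env})
        return env

    return run
```

Both routes produce the same result for, say, `[("c", "a", "b"), ("d", "c", "c")]` with `{"a": 1, "b": 2}`; the difference is only where the translation cost is paid, which is why interpreted scientific code of the era ran comparatively slowly.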
Much early effort went into creating ways to handle commonly encountered problems—particularly scientific mathematical calculations. A number of interpretive languages arose to support these features. As long as such complex operations had to be performed by software (computer programs), however, scientific computation would be relatively slow. Therefore, Backus lobbied successfully for a direct hardware implementation of these operations on IBM’s new scientific computer, the 704. Backus then started the Programming Research Group at IBM in order to develop a compiler that would allow programs to be written in a mathematically oriented language rather than a machine-oriented language. In November of 1954, the group defined an initial version of FORTRAN.
A More Accessible Language

Before FORTRAN was developed, a computer had to perform a whole series of tasks to make certain types of mathematical calculations. FORTRAN made it possible for the same calculations to be performed much more easily. In general, FORTRAN supported constructs with which scientists were already acquainted, such as functions and multidimensional arrays. In defining a powerful notation that was accessible to scientists and engineers, FORTRAN opened up programming to a much wider community.
Backus’s success in getting the IBM 704’s hardware to support scientific computation directly, however, posed a major challenge: Because such computation would be much faster, the object code produced by FORTRAN would also have to be much faster. The lower-level compilers preceding FORTRAN produced programs that were usually five to ten times slower than their hand-coded counterparts; therefore, efficiency became the primary design objective for Backus. The highly publicized claims for FORTRAN met with widespread skepticism among programmers. Much of the team’s efforts, therefore, went into discovering ways to produce the most efficient object code.
The efficiency of the compiler produced by Backus, combined with its clarity and ease of use, guaranteed the system’s success. By 1959, many IBM 704 users programmed exclusively in FORTRAN. By 1963, virtually every computer manufacturer either had delivered or had promised a version of FORTRAN.

Incompatibilities among manufacturers were minimized by the popularity of IBM’s version of FORTRAN; every company wanted to be able to support IBM programs on its own equipment. Nevertheless, there was sufficient interest in obtaining a standard for FORTRAN that the American National Standards Institute adopted a formal standard for it in 1966. A revised standard was adopted in 1978, yielding FORTRAN 77.
Consequences
In demonstrating the feasibility of efficient high-level languages, FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar or better high-level programming constructs oriented toward a different, nonscientific programming environment. COBOL, for example, stands for “Common Business Oriented Language.”

FORTRAN, while remaining the dominant language for scientific programming, has not found general acceptance among nonscientists. An IBM project established in 1963 to extend FORTRAN found the task too unwieldy and instead ended up producing an entirely different language, PL/I, which was delivered in 1966. In the beginning, Backus and his coworkers believed that their revolutionary language would virtually eliminate the burdens of coding and debugging. Instead, FORTRAN launched software as a field of study and an industry in its own right.

In addition to stimulating the introduction of new languages, FORTRAN encouraged the development of operating systems. Programming languages had already grown into simple operating systems called “monitors.” Operating systems since then have been greatly improved so that they support, for example, simultaneously active programs (multiprogramming) and the networking (combining) of multiple computers.
See also BASIC programming language; COBOL computer language; SAINT.
Further Reading

Goff, Leslie. “Born of Frustration.” Computerworld 33, no. 6 (February 8, 1999).

Moreau, René. The Computer Comes of Age: The People, the Hardware, and the Software. Cambridge, Mass.: MIT Press, 1984.

Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.

Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.
Freeze-drying
The invention: A method for preserving foods and other organic matter by freezing them and using a vacuum to remove their water content without damaging their solid matter.

The people behind the invention:
Earl W. Flosdorf (1904- ), an American physician
Ronald I. N. Greaves (1908- ), an English pathologist
Jacques Arsène d’Arsonval (1851-1940), a French physicist
Freeze-Drying for Preservation
Drying, or desiccation, is known to preserve biomaterials, including foods. In freeze-drying, water is evaporated in a frozen state in a vacuum, by means of sublimation (the process of changing a solid to a vapor without first changing it to a liquid).
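The role of the vacuum follows from the phase behavior of water: below the triple-point pressure of about 611 Pa, ice passes directly to vapor rather than melting. A rough Clausius-Clapeyron estimate sketches the vapor pressure over ice at a given temperature (a back-of-the-envelope approximation, not a process specification; the sublimation enthalpy of roughly 51 kJ/mol is treated as constant, which is only approximately true):

```python
import math

R = 8.314           # gas constant, J/(mol*K)
H_SUB = 51000.0     # approximate enthalpy of sublimation of ice, J/mol
T_TRIPLE = 273.16   # triple point of water, K
P_TRIPLE = 611.657  # triple-point pressure of water, Pa

def sublimation_pressure(temp_k):
    # Clausius-Clapeyron relation integrated with constant enthalpy,
    # anchored at the triple point
    return P_TRIPLE * math.exp(-(H_SUB / R) * (1.0 / temp_k - 1.0 / T_TRIPLE))
```

For ice at −20 degrees Celsius this gives a vapor pressure on the order of 100 Pa, which is why a freeze-drying chamber must be pumped far below atmospheric pressure (about 101,000 Pa) before frozen water will sublime away.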
In 1811, John Leslie had first caused freezing by means of the evaporation and sublimation of ice. In 1813, William Wollaston demonstrated this process to the Royal Society of London. It does not seem to have occurred to either Leslie or Wollaston to use sublimation for drying. That distinction goes to Richard Altmann, a German histologist, who dried pieces of frozen tissue in 1890. Later, in 1903, Vansteenberghe freeze-dried the rabies virus. In 1906, Jacques Arsène d’Arsonval removed water at a low temperature for distillation.

Since water removal is the essence of drying, d’Arsonval is often credited with the discovery of freeze-drying, but the first clearly recorded use of sublimation for preservation was by Leon Shackell in 1909. His work was widely recognized, and he freeze-dried a variety of biological materials. The first patent for freeze-drying was issued to Henri Tival, a French inventor, in 1927. In 1934, William Elser received patents for a modern freeze-drying apparatus that supplied heat for sublimation.

In 1933, Earl W. Flosdorf had freeze-dried human blood serum and plasma for clinical use. The subsequent efforts of Flosdorf led to commercial freeze-drying applications in the United States.
Freeze-Drying of Foods
With the freeze-drying technique fairly well established for biological products, it was a natural extension for Flosdorf to apply the technique to the drying of foods. As early as 1935, Flosdorf experimented with the freeze-drying of fruit juices and milk. An early British patent was issued to Franklin Kidd, a British inventor, in 1941 for the freeze-drying of foods. An experimental program on the freeze-drying of food was also initiated at the Low Temperature Research Station at Cambridge University in England, but until World War II, freeze-drying was only an occasionally used scientific tool.

It was the desiccation of blood plasma from the frozen state, performed by the American Red Cross for the U.S. armed forces, that provided the first spectacular, extensive use of freeze-drying. This work demonstrated the vast potential of freeze-drying for commercial applications. In 1949, Flosdorf published the first book on freeze-drying, which laid the foundation for freeze-drying of foods and remains one of the most important contributions to large-scale operations in the field. In the book, Flosdorf described the freeze-drying of fruit juices, milk, meats, oysters, clams, fish fillets, coffee and tea extracts, fruits, vegetables, and other products. Flosdorf also devoted an entire chapter to describing the equipment used for both batch and continuous processing, and he discussed cost analysis. The holder of more than fifteen patents covering various aspects of freeze-drying, Flosdorf dominated the move toward commercialization in the United States.
Simultaneously, researchers in England were developing freeze-drying applications under the leadership of Ronald I. N. Greaves. The food crisis during World War II had led to the recognition that dried foods cut the costs of transporting, storing, and packaging foods in times of emergency. Thus, in 1951, the British Ministry of Food Research was established at Aberdeen, Scotland. Scientists at Aberdeen developed a vacuum contact plate freeze-dryer that improved product quality and reduced the time required for rehydration (replacement of the water removed in the freeze-drying process so that the food can be used).
In 1954, trials of initial freeze-drying, followed by the ordinary process of vacuum drying, were carried out. The abundance of membranes within plant and animal tissues was a major obstacle to the movement of water vapor, thus limiting the drying rate. In 1956, two Canadian scientists developed a new method of improving the freeze-drying rate for steaks by impaling the steaks on spiked heater plates. This idea was adapted in 1957 by interposing sheets of expanded metal, instead of spikes, between the drying surfaces of the frozen food and the heating platens. Because of the substantially higher freeze-drying rates that it achieved, the process was called “accelerated freeze-drying.”
In 1960, Greaves described an ingenious method of freeze-drying liquids. It involved continuously scraping the dry layer during its formation. This led to a continuous process for freeze-drying liquids. During the remainder of the 1960’s, freeze-drying applications proliferated with the advent of several techniques for controlling and improving the effectiveness of the freeze-drying process.
Impact
Flosdorf’s vision and ingenuity in applying freeze-drying to foods have revolutionized food preservation. He was also responsible for turning a laboratory technique into a tremendous commercial success.

Freeze-drying is important because it stops the growth of microorganisms, inhibits deleterious chemical reactions, and facilitates distribution and storage. Freeze-dried foods are easily prepared for consumption by adding water (rehydration). When freeze-dried properly, most foods, either raw or cooked, can be rehydrated quickly to yield products that are equal in quality to their frozen counterparts. Freeze-dried products retain most of their nutritive qualities and have a long storage life, even at room temperature.
Freeze-drying is not, however, without disadvantages. The major disadvantage is the high cost of processing. Thus, to this day, the great potential of freeze-drying has not been fully realized. The drying of cell-free materials, such as coffee and tea extracts, has been extremely successful, but the obstacles imposed by the cell membranes in foods such as fruits, vegetables, and meats have limited the application to expensive specialty items such as freeze-dried soups and to foods for armies, campers, and astronauts. Future economic changes may create a situation in which the high cost of freeze-drying is more than offset by savings in transportation and storage.
See also Electric refrigerator; Food freezing; Polystyrene; Tupperware.
Further Reading

Comello, Vic. “Improvements in Freeze Drying Expand Application Base.” Research and Development 42, no. 5 (May, 2000).

Flosdorf, Earl William. Freeze-Drying: Drying by Sublimation. New York: Reinhold, 1949.

Noyes, Robert. Freeze Drying of Foods and Biologicals. Park Ridge, N.J.: Noyes Development Corporation, 1968.
Fuel cell
The invention: An electrochemical cell that directly converts energy from reactions between oxidants and fuels, such as liquid hydrogen, into electrical energy.

The people behind the invention:
Francis Thomas Bacon (1904-1992), an English engineer
Sir William Robert Grove (1811-1896), an English inventor
Georges Leclanché (1839-1882), a French engineer
Alessandro Volta (1745-1827), an Italian physicist
The Earth’s Resources

Because of the earth’s rapidly increasing population and the dwindling of fossil fuels (natural gas, coal, and petroleum), there is a need to design and develop new ways to obtain energy and to encourage its intelligent use. The burning of fossil fuels to create energy causes a slow buildup of carbon dioxide in the atmosphere, creating pollution that poses many problems for all forms of life on this planet. Chemical and electrical studies can be combined to create electrochemical processes that yield clean energy.

Because of their very high rate of efficiency and their nonpolluting nature, fuel cells may provide the solution to the problem of finding sufficient energy sources for humans. The simple reaction of hydrogen and oxygen to form water in such a cell can provide an enormous amount of clean (nonpolluting) energy. Moreover, hydrogen and oxygen are readily available.

Studies by Alessandro Volta, Georges Leclanché, and William Grove preceded the work of Bacon in the development of the fuel cell. Bacon became interested in the idea of a hydrogen-oxygen fuel cell in about 1932. His original intent was to develop a fuel cell that could be used in commercial applications.
The Fuel Cell Emerges
In 1800, the Italian physicist Alessandro Volta experimented with solutions of chemicals and metals that were able to conduct electricity. He found that two pieces of metal and such a solution could be arranged in such a way as to produce an electric current. His creation was the first electrochemical battery, a device that produced energy from a chemical reaction. Studies in this area were continued by various people, and in the late nineteenth century, Georges Leclanché invented the cell that became the basis of the dry cell battery now commonly used.
The work of William Grove actually preceded that of Leclanché. His first significant contribution was the Grove cell, an improved form of the cells described above, which became very popular. Grove experimented with various forms of batteries and eventually invented the “gas battery,” the earliest fuel cell. It is worth noting that his design incorporated separate test tubes of hydrogen and oxygen, which he placed over strips of platinum.

After studying the design of Grove’s fuel cell, Bacon decided that, for practical purposes, the use of platinum and other precious metals should be avoided. By 1939, he had constructed a cell in which nickel replaced the platinum.
The theory behind the fuel cell can be described in the following way. If a mixture of hydrogen and oxygen is ignited, energy is released in the form of a violent explosion. In a fuel cell, however, the reaction takes place in a controlled manner. Electrons lost by the hydrogen gas flow out of the fuel cell and return to be taken up by the oxygen in the cell. The electron flow provides electricity to any device that is connected to the fuel cell, and the water that the fuel cell produces can be purified and used for drinking.
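The controlled reaction described above can be put in rough numbers. As an illustrative sketch (the thermodynamic constants are standard textbook values, not figures from this article), the ideal voltage of a hydrogen-oxygen cell follows from the energy released in forming water:

```python
# Illustrative sketch: the ideal (reversible) voltage of a hydrogen-oxygen
# fuel cell, from standard thermodynamic values (not from the original text).
# Overall reaction: 2 H2 + O2 -> 2 H2O; each H2 molecule gives up two electrons.

FARADAY = 96485.0      # coulombs per mole of electrons
DELTA_G = -237_130.0   # J/mol: Gibbs free energy of forming liquid water

N_ELECTRONS = 2        # electrons transferred per molecule of hydrogen

# Ideal cell voltage: E = -dG / (n * F)
ideal_voltage = -DELTA_G / (N_ELECTRONS * FARADAY)
print(f"Ideal cell voltage: {ideal_voltage:.2f} V")  # about 1.23 V
```

Real cells deliver somewhat less than this ideal figure because of internal losses, which is why practical designs such as Bacon's focused on better electrodes.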
Bacon’s studies were interrupted by World War II. After the war was over, however, Bacon continued his work. Sir Eric Keightley Rideal of Cambridge University in England supported Bacon’s studies; later, others followed suit. In January, 1954, Bacon wrote an article entitled “Research into the Properties of the Hydrogen/Oxygen Fuel Cell” for a British journal. He was surprised at the speed with which news of the article spread throughout the scientific world, particularly in the United States.
After a series of setbacks, Bacon demonstrated a forty-cell unit that had increased power. This advance showed that the fuel cell was not merely an interesting toy; it had the capacity to do useful work. At this point, the General Electric Company (GE), an American corporation, sent a representative to England to offer employment in the United States to senior members of Bacon’s staff. Three scientists accepted the offer.

A high point in Bacon’s career was the announcement that the American Pratt and Whitney Aircraft company had obtained an order to build fuel cells for the Apollo project, which ultimately put two men on the Moon in 1969. Toward the end of his career, in 1978, Bacon hoped that commercial applications for his fuel cells would be found.
Impact
[Figure: Parts of a basic fuel cell, showing hydrogen (H2) and oxygen (O2) fed to porous anode and cathode electrodes separated by an electrolyte]
Because they are lighter and more efficient than batteries, fuel cells have proved to be useful in the space program. Beginning with the Gemini 5 spacecraft, alkaline fuel cells (in which a water solution of potassium hydroxide, a basic, or alkaline, chemical, serves as the electrolyte) have been used for more than ten thousand hours in space. The fuel cells used aboard the space shuttle deliver the same amount of power as batteries weighing ten times as much. On a typical seven-day mission, the shuttle’s fuel cells consume 680 kilograms (1,500 pounds) of hydrogen and generate 719 liters (190 gallons) of water that can be used for drinking.
Major technical and economic problems must be overcome in order to design fuel cells for practical applications, but some important advancements have been made. A few test vehicles that use fuel cells as a source of power have been constructed. Fuel cells using hydrogen as a fuel and oxygen to burn the fuel have been used in a van built by General Motors Corporation. Thirty-two fuel cells are installed below the floorboards, and tanks of liquid oxygen are carried in the back of the van. A power plant built in New York City contains stacks of hydrogen-oxygen fuel cells, which can be put on line quickly in response to power needs. The Sanyo Electric Company has developed an electric car that is partially powered by a fuel cell.

Francis Bacon

Born in Billericay, England, in 1904, Francis Thomas Bacon completed secondary school at the prestigious Eton College and then attended Trinity College, Cambridge University. In 1932 he started his long search for a practical fuel cell based upon the oxygen-hydrogen (Hydrox) reaction with an alkaline electrolyte and inexpensive nickel electrodes. In 1940 the British Admiralty set him up in full-time experimental work at King’s College, London, and then moved him to the Anti-Submarine Experimental Establishment because the Royal Navy wanted fuel cells for its submarines.

After World War II Cambridge University appointed him to the faculty of the Department of Chemical Engineering, and he worked intensively on his fuel cell research. In 1959 he proved the worth of his work by producing a fuel cell capable of powering a small truck. It was not until the 1990’s, however, that fuel cells were taken seriously as the main power source for automobiles. In 1998, for instance, Iceland enlisted the help of DaimlerChrysler, Shell Oil, and Norsk Hydro to convert all its transportation vehicles, including its fishing boats, to fuel cell power, part of its long-range plans for a completely “hydrogen economy.” Meanwhile, Bacon had the satisfaction of seeing his invention become a power source for American space vehicles and stations. He died in 1992 in Cambridge.
These tremendous technical advances are the result of the single-minded dedication of Francis Thomas Bacon, who struggled all of his life with an experiment he was convinced would be successful.
See also Alkaline storage battery; Breeder reactor; Compressed-air-accumulating power plant; Fluorescent lighting; Geothermal power; Heat pump; Photoelectric cell; Photovoltaic cell; Solar thermal engine; Tidal power plant.
Further Reading
Eisenberg, Anne. “Fuel Cell May Be the Future ‘Battery.’” New York Times (October 21, 1999).

Hoverstein, Paul. “Century-Old Invention Finding a Niche Today.” USA Today (June 3, 1994).

Kufahl, Pam. “Electric: Lighting Up the Twentieth Century.” Unity Business 3, no. 7 (June, 2000).

Stobart, Richard. Fuel Cell Technology for Vehicles. Warrendale, Pa.: Society of Automotive Engineers, 2001.
Gas-electric car
The invention: A hybrid automobile with both an internal combustion engine and an electric motor.

The people behind the invention:
Victor Wouk (1919- ), an American engineer
Tom Elliott, executive vice president of American Honda Motor Company
Hiroyuki Yoshino, president and chief executive officer of Honda Motor Company
Fujio Cho, president of Toyota Motor Corporation
Announcing Hybrid Vehicles

At the 2000 North American International Auto Show in Detroit, not only did the Honda Motor Company show off its new Insight model, it also announced expanded use of its new technology. Hiroyuki Yoshino, president and chief executive officer, said that Honda’s integrated motor assist (IMA) system would be expanded to other mass-market models. The system basically fits a small electric motor directly on a one-liter, three-cylinder internal combustion engine. The two share the workload of powering the car, but the gasoline engine does not start up until it is needed. The electric motor is powered by a nickel-metal hydride (Ni-MH) battery pack, with the IMA system automatically recharging the energy pack during braking.
Tom Elliott, Honda’s executive vice president, said the vehicle was a continuation of the company’s philosophy of making the latest environmental technology accessible to consumers. The $18,000 Insight was a two-seat sporty car that used many innovations to reduce its weight and improve its performance.
Fujio Cho, president of Toyota, also spoke at the Detroit show, where his company showed off its new $20,000 hybrid Prius. The Toyota Prius relied more on the electric motor and had more energy-storage capacity than the Insight, but was a four-door, five-seat model. The Toyota Hybrid System divided the power from its 1.5-liter gasoline engine and directed it to drive the wheels and a generator. The generator alternately powered the motor and recharged the batteries. The electric motor was coupled with the gasoline engine to power the wheels under normal driving. The gasoline engine supplied average power needs, with the electric motor helping with the peaks; at low speeds, it was all electric. A variable transmission seamlessly switched back and forth between the gasoline engine and electric motor or applied both of them.
Variations on an Idea
Automobiles generally use gasoline or diesel engines for driving, electric motors that start the main engines, and a means of recharging the batteries that power the starter motors and other devices. In solely electric cars, gasoline engines are eliminated entirely, and the batteries that power the vehicles are recharged from stationary sources. In hybrid cars, the relationship between gasoline engines and electric motors is changed so that the electric motors handle some or all of the driving, at the expense of an increased number of batteries or other energy-storage devices.
Possible in many combinations, “hybrids” couple the low-end torque and regenerative braking potential of electric motors with the range and efficient packaging of gasoline, natural gas, or even hydrogen fuel power plants. The return is greater energy efficiency and reduced pollution.
With sufficient energy-storage capacity, an electric motor can actually propel a car from a standing start to a moving speed. In hybrid vehicles, the gasoline engines, which are more energy-efficient at higher speeds, then kick in. However, the gasoline engines in these vehicles are smaller, lighter, and more efficient than ordinary gas engines. Designed for average, not peak, driving conditions, they reduce air pollution and considerably improve fuel economy.
Batteries in hybrid vehicles are recharged partly by the gas engines and partly by regenerative braking; about a third of the energy from slowing the car is turned into electricity. What has finally made hybrids feasible at reasonable cost are the new developments in computer technology, allowing sophisticated controls to coordinate electrical and mechanical power.
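The regenerative-braking figure above can be put in perspective with a back-of-envelope sketch. The vehicle mass and speed here are assumptions chosen for illustration, not figures from the text:

```python
# Back-of-envelope sketch: energy recoverable per stop by regenerative
# braking, using the rough rule above that about a third of the braking
# energy becomes electricity. Mass and speed are assumed values.

def recaptured_energy(mass_kg: float, speed_m_s: float,
                      recovery_fraction: float = 1 / 3) -> float:
    """Kinetic energy shed in braking to a stop, times the fraction recovered."""
    kinetic_energy = 0.5 * mass_kg * speed_m_s ** 2
    return kinetic_energy * recovery_fraction

# A roughly 1,250 kg car braking from 50 km/h (about 13.9 m/s):
energy_j = recaptured_energy(1250, 50 / 3.6)
print(f"Recovered per stop: {energy_j / 1000:.0f} kJ")  # roughly 40 kJ
```

Tens of kilojoules per stop is small next to a tank of gasoline, which is why the recovered energy is used for short bursts of motor assist rather than sustained driving.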
Victor Wouk

H. Piper, an American engineer, filed the first patent for a hybrid gas-electric powered car in 1905, and from then until 1915 such cars were popular, although not common, because they could accelerate faster than plain gas-powered cars. Then the gas-only models became as swift, and their hybrid cousins fell by the wayside.

Interest in hybrids revived with the unheard-of gasoline prices during the 1973 oil crisis. The champion of their comeback, the father of the modern hybrid electric vehicle (HEV), was Victor Wouk. Born in 1919 in New York City, Wouk earned a math and physics degree from Columbia University in 1939 and a doctorate in electrical engineering from the California Institute of Technology in 1942. In 1946 he founded Beta Electric Corporation, which he led until 1959, when he founded and became president of another company, Electronic Energy Conversion Corporation. After 1970, he became an independent consultant, hoping to build an HEV that people would prefer to gas-guzzlers.

With his partner, Charles Rosen, Wouk gutted the engine compartment of a Buick Skylark and installed batteries designed for police cars, a 20-kilowatt direct-current electric motor, and an RX-2 Mazda rotary engine. Only a test vehicle, it still got better gas mileage (thirty miles per gallon) than the original Skylark and met the requirements for emissions control set by the Clean Air Act of 1970, unlike all American automobiles of the era. Moreover, Wouk designed an HEV that would get fifty miles per gallon and pollute one-eighth as much as gas-powered automobiles. However, the oil crisis ended, gas prices went down, and consumers and the government lost interest. Wouk continued to publish, lecture, and design; still, it was not until the 1990’s that high gas prices and concerns over pollution made HEVs attractive yet again.

Wouk holds twelve patents, mostly for speed and braking controls in electric vehicles but also for air conditioning, high-voltage direct-current power sources, and life extenders for incandescent lamps.
One way to describe hybrids is to separate them into two types: parallel, in which either of the two power plants can propel the vehicle, and series, in which the auxiliary power plant is used to charge the battery rather than propel the vehicle.

Honda’s Insight is a simplified parallel hybrid that uses a small but efficient gasoline engine. The electric motor assists the engine, providing extra power for acceleration or hill climbing; helps provide regenerative braking; and starts the engine. However, it cannot run the car by itself.

Toyota’s Prius is a parallel hybrid whose power train allows some series features. Its engine runs only at an efficient speed and load and is combined with a unique power-splitting device, which allows the car to operate like a parallel hybrid on motor alone, engine alone, or both. It can also act as a series hybrid, with the engine charging the batteries rather than powering the vehicle. It also provides a continuously variable transmission using a planetary gear set that allows interaction among the engine, the motor, and the differential that drives the wheels.
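The parallel/series distinction above can be sketched in a few lines of code. The function name and power figures are hypothetical illustrations, not any manufacturer's control logic:

```python
# Minimal sketch of the two hybrid layouts described above. In a parallel
# hybrid, both power plants can reach the wheels; in a series hybrid, the
# engine only charges the battery, so the motor alone drives the wheels.

def wheel_power_kw(layout: str, engine_kw: float, motor_kw: float) -> float:
    if layout == "parallel":
        return engine_kw + motor_kw   # both plants propel the vehicle
    if layout == "series":
        return motor_kw               # engine output goes to the generator
    raise ValueError(f"unknown layout: {layout!r}")

print(wheel_power_kw("parallel", engine_kw=50, motor_kw=30))  # 80
print(wheel_power_kw("series", engine_kw=50, motor_kw=30))    # 30
```

A design like the Prius power split sits between the two cases, routing engine output partly to the wheels and partly to the generator depending on driving conditions.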
Impact
In 2001 Honda and Toyota marketed gas-electric hybrids that offered better than 60-mile-per-gallon fuel economy and met California’s stringent standards for “super ultra-low emissions” vehicles. Both companies achieved these standards without the inconveniences of fully electric cars, which could go only about a hundred miles on a single battery charge and required such gimmicks as kerosene-powered heaters. As a result, other manufacturers were beginning to follow suit. Ford, for example, promised a hybrid sport utility vehicle (SUV) by 2003. Other automakers, including General Motors and DaimlerChrysler, have also announced development of alternative-fuel and low-emission vehicles. An example is the ESX3 concept car, which uses a 1.5-liter, direct-injection diesel combined with an electric motor and a lithium-ion battery.
While American automakers were planning to offer some “full hybrids,” cars capable of running on battery power alone at low speeds, they were focusing more enthusiastically on electrically assisted gasoline engines called “mild hybrids.” Full hybrids typically increase gas mileage by up to 60 percent; mild hybrids, by only 10 or 20 percent. The “mild hybrid” approach uses regenerative braking with electrical systems of a much lower voltage and storage capacity than those of full hybrids, a much cheaper approach. But there still is enough energy available to allow the gasoline engine to turn off automatically when a vehicle stops and turn on instantly when the accelerator is touched. Because the “mild hybrid” approach adds only $1,000 to $1,500 to a vehicle’s price, it is likely to be used in many models. Full hybrids cost much more but achieve more benefits.
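The mileage percentages above translate into concrete numbers like this (the 30 mpg baseline is an assumed conventional-car figure, not from the text):

```python
# Quick arithmetic on the figures above: a full hybrid's up-to-60-percent
# mileage gain versus a mild hybrid's 10-20 percent improvement.

def improved_mpg(base_mpg: float, improvement_pct: float) -> float:
    """Fuel economy after a percentage improvement over a baseline."""
    return base_mpg * (1 + improvement_pct / 100)

base = 30.0  # assumed baseline for a comparable conventional car
print(improved_mpg(base, 60))  # full hybrid: 48.0 mpg
print(improved_mpg(base, 15))  # mild hybrid: 34.5 mpg
```

The gap between those two outcomes is what buyers weigh against the much larger price premium of a full hybrid.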
See also Airplane; Diesel locomotive; Hovercraft; Internal combustion engine; Supersonic passenger plane; Turbojet.
Further Reading
Morton, Ian. “Honda Insight Hybrid Makes Heavy Use of Light Metal.” Automotive News 74, no. 5853 (December 20, 1999).

Peters, Eric. “Hybrid Cars: The Hope, Hype, and Future.” Consumers’ Research Magazine 83, no. 6 (June, 2000).

Reynolds, Kim. “Burt Rutan Ponders the Hybrid Car.” Road and Track 51, no. 11 (July, 2000).

Swoboda, Frank. “‘Hybrid’ Cars Draw Waiting List of Buyers.” Washington Post (May 3, 2001).

Yamaguchi, Jack. “Toyota Prius IC/Electric Hybrid Update.” Automotive Engineering International 108, no. 12 (December, 2000).
Geiger counter
The invention: The first electronic device able to detect and measure radioactivity in atomic particles.

The people behind the invention:
Hans Geiger (1882-1945), a German physicist
Ernest Rutherford (1871-1937), a British physicist
Sir John Sealy Edward Townsend (1868-1957), an Irish physicist
Sir William Crookes (1832-1919), an English physicist
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Antoine-Henri Becquerel (1852-1908), a French physicist
Discovering Natural Radiation
When radioactivity was discovered and first studied, the work was done with rather simple devices. In the 1870’s, Sir William Crookes learned how to create a very good vacuum in a glass tube. He placed electrodes in each end of the tube and studied the passage of electricity through the tube. This simple device became known as the “Crookes tube.” In 1895, Wilhelm Conrad Röntgen was experimenting with a Crookes tube. It was known that when electricity went through a Crookes tube, one end of the glass tube might glow. Certain mineral salts placed near the tube would also glow. In order to observe the glowing salts carefully, Röntgen had darkened the room and covered most of the Crookes tube with dark paper. Suddenly, a flash of light caught his eye. It came from a mineral sample placed some distance from the tube and shielded by the dark paper; yet when the tube was switched off, the mineral sample went dark. Experimenting further, Röntgen became convinced that some ray from the Crookes tube had penetrated the mineral and caused it to glow. Since light rays were blocked by the black paper, he called the mystery ray an “X ray,” with “X” standing for unknown.
Antoine-Henri Becquerel heard of the discovery of X rays and, in February, 1896, set out to discover whether glowing minerals themselves emitted X rays. Some minerals, called “phosphorescent,” begin to glow when activated by sunlight. Becquerel’s experiment involved wrapping photographic film in black paper, setting various phosphorescent minerals on top, and leaving them in the sun. He soon learned that phosphorescent minerals containing uranium would expose the film.
A series of cloudy days, however, brought a great surprise. Anxious to continue his experiments, Becquerel decided to develop film that had not been exposed to sunlight. He was astonished to discover that the film was deeply exposed. Some emanations must be coming from the uranium, he realized, and they had nothing to do with sunlight. Thus, natural radioactivity was discovered by accident with a simple piece of photographic film.
Rutherford and Geiger
Ernest Rutherford joined the world of international physics at about the same time that radioactivity was discovered. Studying the “Becquerel rays” emitted by uranium, Rutherford eventually distinguished three different types of radiation, which he named “alpha,” “beta,” and “gamma” after the first three letters of the Greek alphabet. He showed that alpha particles, the least penetrating of the three, are the nuclei of helium atoms (two protons and two neutrons tightly bound together). It was later shown that beta particles are electrons. Gamma rays, which are far more penetrating than either alpha or beta particles, were shown to be similar to X rays, but with higher energies.
Rutherford became director of the associated research laboratory at Manchester University in 1907, and Hans Geiger became his assistant. At this time, Rutherford was trying to prove that alpha particles carry a double positive charge. The best way to do this was to measure the electric charge that a stream of alpha particles would bring to a target. By dividing that charge by the total number of alpha particles that fell on the target, one could calculate the charge of a single alpha particle. The problem lay in counting the particles and in proving that every particle had been counted.
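The arithmetic behind this measurement is simple to sketch. The particle count and total charge below are invented round numbers chosen so the division comes out cleanly; the point is the method, not the data:

```python
# Sketch of the charge-counting arithmetic described above: divide the
# total charge a stream of alpha particles delivers to a target by the
# number of particles counted. All numbers are invented for illustration.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def charge_per_particle(total_charge_c: float, particle_count: int) -> float:
    """Average charge carried by one particle in the counted stream."""
    return total_charge_c / particle_count

# Suppose 1,000,000 counted particles delivered 3.204e-13 C in total:
q = charge_per_particle(3.204e-13, 1_000_000)
print(f"{q / ELEMENTARY_CHARGE:.1f} elementary charges per particle")  # 2.0
```

A result of two elementary charges per particle is exactly what Rutherford needed to show, which is why a reliable particle counter mattered so much.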
Basing their design upon work done by Sir John Sealy Edward Townsend, a former colleague of Rutherford, Geiger and Rutherford constructed an electronic counter. It consisted of a long brass tube sealed at both ends from which most of the air had been
Hans Geiger
Atomic radiation was the first physical phenomenon that<br />
humans discovered that they could not detect with any of their<br />
five natural senses. Hans Geiger found a way to make radiation<br />
observable.<br />
Born into a family with an academic tradition in 1882, Geiger became an academician himself. His father was a professor of linguistics at the University of Erlangen, where Geiger completed his own doctorate in physics in 1906. One of the world’s centers for experimental physics at the time was England, and there Geiger went in 1907. He became an assistant to Ernest Rutherford at the University of Manchester and thereby began the first of a series of successful collaborations during his career, all devoted to detecting or explaining types of radiation.
Rutherford had distinguished three types of radiation. In 1908, he and Geiger built a device to count individual alpha particles. It gave them evidence for Rutherford’s conjecture that the atom was structured like a miniature solar system. Geiger also worked closely with Ernest Marsden, James Chadwick, and Walther Bothe on aspects of radiation physics.
Geiger’s stay in England ended with the outbreak of World War I in 1914. He returned to Germany and served as an artillery officer. Immediately after the war he took up university posts again, first in Berlin, then in Kiel and Tübingen, and finally back in Berlin. With Walther Müller he perfected a compact version of the radiation detector, the Geiger-Müller counter, in 1928. It became the standard radiation sensor for scientists thereafter and, during the rush to locate uranium deposits during the 1950’s, for prospectors.
Geiger used it to prove the existence of the Compton effect, which concerned the scattering of X rays, and his experiments further proved beyond doubt that light can take the form of quanta. He also discovered cosmic-ray showers with his detector.
Geiger remained in Germany during World War II, although he vigorously opposed the Nazi Party’s treatment of scientists. He died in Potsdam in 1945, after losing his home and possessions during the Allied occupation of Berlin.
pumped. A thin wire, insulated from the brass, was suspended down the middle of the tube. This wire was connected to batteries producing about thirteen hundred volts and to an electrometer, a device that could measure the voltage of the wire. This voltage could be increased until a spark jumped between the wire and the tube. If the voltage was turned down a little, the tube was ready to operate. An alpha particle entering the tube would ionize (knock some electrons away from) at least a few atoms. These electrons would be accelerated by the high voltage and, in turn, would ionize more atoms, freeing more electrons. This process would continue until an avalanche of electrons struck the central wire and the electrometer registered the voltage change. Since the tube was nearly ready to arc because of the high voltage, every alpha particle, even if it had very little energy, would initiate a discharge. The most complex of the early radiation detection devices, the forerunner of the Geiger counter, had just been developed. The two physicists reported their findings in February, 1908.
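The avalanche process described above multiplies a handful of freed electrons into a measurable pulse. The following rough numerical sketch illustrates the exponential growth; the seed count, gain per stage, and number of stages are illustrative assumptions, not measured values from the 1908 apparatus:

```python
# Rough sketch of avalanche multiplication in a gas tube.
# An alpha particle frees a few "seed" electrons; each electron,
# accelerated by the high voltage, frees more atoms' electrons in
# successive stages. All numbers below are illustrative assumptions.

def avalanche_size(seed_electrons, gain_per_stage, stages):
    """Total electrons reaching the wire after exponential multiplication."""
    return seed_electrons * gain_per_stage ** stages

seeds = 5     # electrons freed directly by the alpha particle
gain = 2      # new electrons freed per electron per stage
stages = 20   # ionization generations before the swarm reaches the wire

electrons = avalanche_size(seeds, gain, stages)
print(electrons)  # 5242880: millions of electrons, easily registered
```

The exponential gain is the point: even a low-energy alpha particle that frees only a few electrons produces a full-sized discharge.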
Impact

Their first measurements showed that one gram of radium emitted 34 thousand million alpha particles per second. Soon, the number was refined to 32.8 thousand million per second. Next, Geiger and Rutherford measured the amount of charge emitted by radium each second. Dividing this number by the previous number gave them the charge on a single alpha particle. Just as Rutherford had anticipated, the charge was double that of a hydrogen ion (a proton). This proved to be the most accurate determination of the fundamental charge until the American physicist Robert Andrews Millikan conducted his classic oil-drop experiment in 1911.
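The arithmetic behind that determination is simple division. The sketch below uses the refined particle count from the text together with a back-calculated illustrative current (not Rutherford and Geiger's raw figure) and the modern value of the fundamental charge:

```python
# Charge of one alpha particle = (charge collected per second)
# divided by (alpha particles counted per second).
# The current value here is back-calculated for illustration.

alphas_per_second = 3.28e10   # refined count for 1 gram of radium
measured_current = 1.051e-8   # coulombs per second, illustrative

charge_per_alpha = measured_current / alphas_per_second
proton_charge = 1.602e-19     # modern value of the fundamental charge, C

print(charge_per_alpha / proton_charge)  # about 2: double a hydrogen ion's charge
```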
Another fundamental result came from a careful measurement of the volume of helium emitted by radium each second. Using that value, other properties of gases, and the number of helium nuclei emitted each second, they were able to calculate Avogadro’s number more directly and accurately than had previously been possible. (Avogadro’s number enables one to calculate the number of atoms in a given amount of material.)
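The logic of that calculation can be sketched as follows. Every counted alpha particle becomes one helium atom, so the particle rate and the helium volume rate together fix the number of atoms per mole; the helium production rate below is an illustrative figure chosen to be self-consistent, not the historical measurement:

```python
# Avogadro's number from alpha counting: every alpha particle becomes
# one helium atom, so
#   N_A = particle_rate * molar_volume / volume_rate.
# The volume rate below is illustrative, chosen for consistency.

alphas_per_second = 3.4e10    # helium nuclei from 1 g of radium per second
molar_volume = 22414.0        # cm^3 of gas per mole at standard conditions
helium_volume_rate = 1.27e-9  # cm^3 of helium produced per second (illustrative)

avogadro = alphas_per_second * molar_volume / helium_volume_rate
print(avogadro)               # roughly 6.0e23 atoms per mole
```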
The true Geiger counter evolved when Geiger replaced the central wire of the tube with a needle whose point lay just inside a thin entrance window. This counter was much more sensitive to alpha and beta particles and also to gamma rays. By 1928, with the assistance of Walther Müller, Geiger made his counter much more efficient, responsive, durable, and portable. There are probably few radiation facilities in the world that do not have at least one Geiger counter or one of its compact modern relatives.
See also Carbon dating; Gyrocompass; Radar; Richter scale; Sonar.
Further Reading
Campbell, John. Rutherford: Scientist Supreme. Christchurch, New Zealand: AAS Publications, 1999.
Halacy, D. S. They Gave Their Names to Science. New York: Putnam, 1967.
Krebs, A. T. “Hans Geiger: Fiftieth Anniversary of the Publication of His Doctoral Thesis, 23 July 1906.” Science 124 (1956).
Weir, Fred. “Muscovites Check Radishes for Radiation; a $50 Personal Geiger Counter Gives Russians a Sense of Confidence at the Market.” Christian Science Monitor (November 4, 1999).
Genetic “fingerprinting”
The invention: A technique for using the unique characteristics of each human being’s DNA to identify individuals, establish connections among relatives, and identify criminals.

The people behind the invention:
Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist
Microscopic Fingerprints
In 1985, Alec Jeffreys, a geneticist at the University of Leicester in England, developed a method of deoxyribonucleic acid (DNA) analysis that provides a visual representation of the human genetic structure. Jeffreys’s discovery had an immediate, revolutionary impact on problems of human identification, especially the identification of criminals. Whereas earlier techniques, such as conventional blood typing, provide evidence that is merely exclusionary (indicating only whether a suspect could or could not be the perpetrator of a crime), DNA fingerprinting provides positive identification.
For example, under favorable conditions, the technique can establish with virtual certainty whether a given individual is a murderer or rapist. The applications are not limited to forensic science; DNA fingerprinting can also establish definitive proof of parenthood (paternity or maternity), and it is invaluable in providing markers for mapping disease-causing genes on chromosomes. In addition, the technique is utilized by animal geneticists to establish paternity and to detect genetic relatedness between social groups.
DNA fingerprinting (also referred to as “genetic fingerprinting”) is a sophisticated technique that must be executed carefully to produce valid results. The technical difficulties arise partly from the complex nature of DNA. DNA, the genetic material responsible for heredity in all higher forms of life, is an enormously long, double-stranded molecule composed of four different units called “bases.” The bases on one strand of DNA pair with complementary bases on
the other strand. A human being contains twenty-three pairs of chromosomes; one member of each chromosome pair is inherited from the mother, the other from the father. The order, or sequence, of bases forms the genetic message, which is called the “genome.” Scientists did not know the sequence of bases in any sizable stretch of DNA prior to the 1970’s because they lacked the molecular tools to split DNA into fragments that could be analyzed. This situation changed with the advent of biotechnology in the mid-1970’s.
The door to DNA analysis was opened with the discovery of bacterial enzymes called “DNA restriction enzymes.” A restriction enzyme binds to DNA whenever it finds a specific short sequence of base pairs (analogous to a code word), and it splits the DNA at a defined site within that sequence. A single enzyme finds millions of cutting sites in human DNA, and the resulting fragments range in size from tens of base pairs to hundreds or thousands. The fragments are exposed to a radioactive DNA probe, which can bind to specific complementary DNA sequences in the fragments. X-ray film detects the radioactive pattern. The developed film, called an “autoradiograph,” shows a pattern of DNA fragments, which is similar to a bar code and can be compared with patterns from known subjects.
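The fragment pattern can be mimicked with a toy computation: cut a DNA string wherever a recognition sequence occurs and record the fragment lengths. The sequence below is invented for illustration; a real enzyme such as EcoRI recognizes GAATTC and cuts at a fixed offset within the site, which this sketch ignores:

```python
# Toy model of restriction digestion: split a DNA string at every
# occurrence of an enzyme's recognition sequence and report fragment
# lengths. The input sequence is invented for illustration.

def digest(dna, site):
    """Return fragment lengths after cutting just before each occurrence of site."""
    fragments = dna.split(site)
    # Reattach the site to the start of each downstream fragment,
    # approximating a cut at the site boundary.
    return [len(fragments[0])] + [len(site + f) for f in fragments[1:]]

dna = "ATTCGGAATTCAAATGCGAATTCTTACG"
print(digest(dna, "GAATTC"))  # [5, 12, 11]: a crude "bar code" of lengths
```

Comparing two such lists of lengths is, in miniature, what comparing two autoradiograph band patterns does.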
The Presence of Minisatellites
The uniqueness of a DNA fingerprint depends on the fact that, with the exception of identical twins, no two human beings have identical DNA sequences. Of the three billion base pairs in human DNA, many will differ from one person to another.
In 1985, Jeffreys and his coworkers, Victoria Wilson at the University of Leicester and Swee Lay Thein at the John Radcliffe Hospital in Oxford, discovered a way to produce a DNA fingerprint. Jeffreys had found previously that human DNA contains many repeated minisequences called “minisatellites.” Minisatellites consist of sequences of base pairs repeated in tandem, and the number of repeated units varies widely from one individual to another. Every person, with the exception of identical twins, has a different number of tandem repeats and, hence, different lengths of minisatellite DNA. By using two labeled DNA probes to detect two different
minisatellite sequences, Jeffreys obtained a unique fragment band pattern that was completely specific for an individual.

The power of the technique derives from the law of chance, which indicates that the probability (chance) that two or more unrelated events will occur simultaneously is calculated as the multiplication product of their separate probabilities. As Jeffreys discovered, the likelihood of two unrelated people having completely identical DNA fingerprints is extremely small: less than one in ten trillion. Given the population of the world, it is clear that the technique can distinguish any one person from everyone else. Jeffreys called his band patterns “DNA fingerprints” because of their ability to individualize. As he stated in his landmark research paper, published in the English scientific journal Nature in 1985, probes to minisatellite regions of human DNA produce “DNA ‘fingerprints’ which are completely specific to an individual (or to his or her identical twin) and can be applied directly to problems of human identification, including parenthood testing.”
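The multiplication rule behind that figure is easy to demonstrate: if each probed region matches by chance with some small probability, the chance that every region matches is the product of those probabilities. The per-region probability and region count below are invented for illustration, not Jeffreys's actual loci:

```python
# Multiplication rule for independent events: the chance that two
# unrelated people coincidentally match at every probed minisatellite
# region is the product of the per-region match probabilities.
# The values are illustrative.

from functools import reduce
from operator import mul

per_region_match = [0.25] * 22  # chance of a coincidental match at each region

overall = reduce(mul, per_region_match, 1.0)
print(overall)  # 0.25**22, about 5.7e-14: less than one in ten trillion
```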
Consequences
In addition to being used in human identification, DNA fingerprinting has found applications in medical genetics. In the search for a cause of, a diagnostic test for, and ultimately a treatment of an inherited disease, it is necessary to locate the defective gene on a human chromosome. Gene location is accomplished by a technique called “linkage analysis,” in which geneticists use marker sections of DNA as reference points to pinpoint the position of a defective gene on a chromosome. The minisatellite DNA probes developed by Jeffreys provide a potent and valuable set of markers for locating disease-causing genes. Soon after its discovery, DNA fingerprinting was used to locate the defective genes responsible for several diseases, including fetal hemoglobin abnormality and Huntington’s disease.
Genetic fingerprinting also has had a major impact on genetic studies of higher animals. Because DNA sequences are conserved in evolution, humans and other vertebrates have many sequences in common. This commonality enabled Jeffreys to use his probes to human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and fish; this made it possible for him to produce DNA fingerprints of these vertebrates. In addition, the technique has been used to discern the mating behavior of birds, to determine paternity in zoo primates, and to detect inbreeding in imperiled wildlife. DNA fingerprinting can also be applied to animal breeding problems, such as the identification of stolen animals, the verification of semen samples for artificial insemination, and the determination of pedigree.
The technique is not foolproof, however, and results may be far from ideal. Especially in the area of forensic science, there was a rush to use the tremendous power of DNA fingerprinting to identify a purported murderer or rapist, and the need for scientific standards was often neglected. Some problems arose because forensic DNA fingerprinting in the United States is generally conducted in private, unregulated laboratories. In the absence of rigorous scientific controls, the DNA fingerprint bands of two completely unknown samples cannot be matched precisely, and the results may be unreliable.
See also Amniocentesis; Artificial chromosome; Cloning; In vitro plant culture; Rice and wheat strains; Synthetic amino acid; Synthetic DNA; Synthetic RNA.
Further Reading
Bodmer, Walter, and Robin McKie. “Probing the Present.” In The Book of Man: The Human Genome Project. New York: Scribner, 1995.
Caetano-Anolles, Gustavo, and Peter M. Gresshoff. DNA Markers: Protocols, Applications, and Overviews. New York: Wiley-VCH, 1997.
Krawczak, Michael, and Jörg Schmidtke. DNA Fingerprinting. 2d ed. New York: Springer-Verlag, 1998.
Schacter, Bernice Zeldin. Issues and Dilemmas of Biotechnology: A Reference Guide. Westport, Conn.: Greenwood Press, 1999.
Genetically engineered insulin
The invention: Artificially manufactured human insulin (Humulin) as a medication for people suffering from diabetes.

The people behind the invention:
Irving S. Johnson (1925- ), an American zoologist who was vice president of research at Eli Lilly Research Laboratories
Ronald E. Chance (1934- ), an American biochemist at Eli Lilly Research Laboratories
What Is Diabetes?
Carbohydrates (sugars and related chemicals) are the main food and energy source for humans. In wealthy countries such as the United States, more than 50 percent of the food people eat is made up of carbohydrates, while in poorer countries the carbohydrate content of diets is higher, from 70 to 90 percent.

Normally, most carbohydrates that a person eats are used (or metabolized) quickly to produce energy. Carbohydrates not needed for energy are either converted to fat or stored as a glucose polymer called “glycogen.” Most adult humans carry about a pound of body glycogen; this substance is broken down to produce energy when it is needed.
Certain diseases prevent the proper metabolism and storage of carbohydrates. The most common of these diseases is diabetes mellitus, usually called simply “diabetes.” It is found in more than seventy million people worldwide. Diabetic people cannot produce or use enough insulin, a hormone secreted by the pancreas. When their condition is not treated, the eyes may deteriorate to the point of blindness, the kidneys may stop working properly, blood vessels may be damaged, and the person may fall into a coma and die. In fact, diabetes is the third most common killer in the United States. Most of the problems surrounding diabetes are caused by high levels of glucose in the blood. Cataracts often form in diabetics, as excess glucose is deposited in the lens of the eye.
Important symptoms of diabetes include constant thirst, excessive urination, and large amounts of sugar in the blood and in the urine. The glucose tolerance test (GTT) is the best way to find out whether a person is suffering from diabetes. People given a GTT are first told to fast overnight. In the morning their blood glucose level is measured; then they are asked to drink about a fourth of a pound of glucose dissolved in water. During the next four to six hours, the blood glucose level is measured repeatedly. In nondiabetics, glucose levels do not rise above a certain amount during a GTT, and the level drops quickly as the glucose is assimilated by the body. In diabetics, the blood glucose levels rise much higher and do not drop as quickly. The extra glucose then shows up in the urine.
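The GTT interpretation described above amounts to a threshold test on a series of glucose readings. A minimal sketch follows; the thresholds and sample readings are invented for illustration only, and real diagnostic cutoffs come from clinical guidelines:

```python
# Sketch of glucose tolerance test interpretation: flag a curve that
# rises too high or fails to come back down toward the fasting level.
# Thresholds and readings are invented for illustration only.

def gtt_suggests_diabetes(readings_mg_dl, peak_limit=200, final_limit=140):
    """True if the curve peaks too high or ends too high."""
    return max(readings_mg_dl) > peak_limit or readings_mg_dl[-1] > final_limit

nondiabetic = [85, 160, 120, 95, 88]   # mg/dL at hourly intervals
diabetic = [130, 250, 230, 210, 190]   # rises higher and stays elevated

print(gtt_suggests_diabetes(nondiabetic))  # False
print(gtt_suggests_diabetes(diabetic))     # True
```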
Treating Diabetes
Until the 1920’s, diabetes could be controlled only through a diet very low in carbohydrates, and this treatment was not always successful. Then Sir Frederick G. Banting and Charles H. Best found a way to prepare purified insulin from animal pancreases and gave it to patients. This gave diabetics their first chance to live a fairly normal life. Banting and his coworkers won the 1923 Nobel Prize in Physiology or Medicine for their work.
The usual treatment for diabetics became regular shots of insulin. Drug companies took the insulin from the pancreases of cattle and pigs slaughtered by the meat-packing industry. Unfortunately, animal insulin has two disadvantages. First, about 5 percent of diabetics are allergic to it and can have severe reactions. Second, the world supply of animal pancreases goes up and down depending on how much meat is being bought. Between 1970 and 1975, the supply of insulin fell sharply as people began to eat less red meat, yet the number of diabetics continued to increase. So researchers began to look for a better way to supply insulin.
Studying pancreases of people who had donated their bodies to science, researchers found that human insulin did not cause allergic reactions. Scientists realized that it would be best to find a chemical or biological way to prepare human insulin, and pharmaceutical companies worked hard toward this goal. Eli Lilly and Company was the first to succeed, and on May 14, 1982, it filed a new drug application with the Food and Drug Administration (FDA) for the human insulin preparation it named “Humulin.”
Humulin is made by genetic engineering. Irving S. Johnson, who worked on the development of Humulin, described Eli Lilly’s method for producing it. The common bacterium Escherichia coli is used. Two strains of the bacterium are produced by genetic engineering: The first strain is used to make a protein called an “A chain,” and the second strain is used to make a “B chain.” After the bacteria are harvested, the A and B chains are removed and purified separately. Then the two chains are combined chemically. When they are purified once more, the result is Humulin, which has been proved by Ronald E. Chance and his Eli Lilly coworkers to be chemically, biologically, and physically identical to human insulin.
Consequences
The FDA and other regulatory agencies around the world approved genetically engineered human insulin in 1982. Humulin does not trigger allergic reactions, and its supply does not fluctuate. It has brought an end to the fear that there would be a worldwide shortage of insulin.

Humulin is important as well in being the first genetically engineered industrial chemical. It began an era in which such advanced technology could be a source for medical drugs, chemicals used in farming, and other important industrial products. Researchers hope that genetic engineering will help in the understanding of cancer and other diseases, and that it will lead to ways to grow enough food for a world whose population continues to rise.
See also Artificial chromosome; Artificial insemination; Cloning; Genetic “fingerprinting”; Synthetic amino acid; Synthetic DNA; Synthetic RNA.
Further Reading
Berger, Abi. “Gut Cells Engineered to Produce Insulin.” British Medical Journal 321, no. 7275 (December 16, 2000).
“Genetically Engineered Duckweed to Produce Insulin.” Resource 6, no. 3 (March, 1999).
“Lilly Gets FDA Approval for New Insulin Formula.” Wall Street Journal (October 3, 1985).
Williams, Linda. “UC Regents Sue Lilly in Dispute Over Biotech Patent for Insulin.” Los Angeles Times (February 8, 1990).
Geothermal power
The invention: Energy generated from the earth’s natural hot springs.

The people behind the invention:
Prince Piero Ginori Conti (1865-1939), an Italian nobleman and industrialist
Sir Charles Parsons (1854-1931), an English engineer
B. C. McCabe, an American businessman
Developing a Practical System
The first successful use of geothermal energy was at Larderello in northern Italy. The Larderello geothermal field, located near the city of Pisa about 240 kilometers northwest of Rome, contains many hot springs and fumaroles (steam vents). In 1777, these springs were found to be rich in boron, and in 1818, Francesco de Larderel began extracting the useful mineral borax from them. Shortly after 1900, Prince Piero Ginori Conti, director of the Larderello borax works, conceived the idea of using the steam for power production. An experimental electrical power plant was constructed at Larderello in 1904 to provide electric power to the borax plant. After this initial experiment proved successful, a 250-kilowatt generating station was installed in 1913 and commercial power production began.
As the Larderello field grew, additional geothermal sites throughout the region were prospected and tapped for power. Power production grew steadily until the 1940’s, when production reached 130 megawatts; however, the Larderello power plants were destroyed late in World War II (1939-1945). After the war, the generating plants were rebuilt, and they were producing more than 400 megawatts by 1980.
The Larderello power plants encountered many of the technical problems that were later to concern other geothermal facilities. For example, hydrogen sulfide in the steam was highly corrosive to copper, so the Larderello power plant used aluminum for electrical connections much more than did conventional power plants of the time. Also, the low pressure of the steam in early wells at Larderello presented problems. The first generators simply used steam to drive a generator and vented the spent steam into the atmosphere. A system of this sort, called a “noncondensing system,” is useful for small generators but not efficient enough to produce large amounts of power.
Most steam engines derive power not only from the pressure of the steam but also from the vacuum created when the steam is condensed back to water. Geothermal systems that generate power from condensation, as well as direct steam pressure, are called “condensing systems.” Most large geothermal generators are of this type. Condensation of geothermal steam presents special problems not present in ordinary steam engines: There are other gases present that do not condense. Instead of a vacuum, condensation of steam contaminated with other gases would result in only a limited drop in pressure and, consequently, very low efficiency.
Initially, the operators of Larderello tried to use the steam to heat boilers that would, in turn, generate pure steam. Eventually, a device was developed that removed most of the contaminating gases from the steam. Although later wells at Larderello and other geothermal fields produced steam at greater pressure, these engineering innovations improved the efficiency of any geothermal power plant.
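The advantage of a condensing system can be seen from the ideal (Carnot) limit on efficiency, which depends on the temperature at which heat is rejected: condensing the exhaust lowers the effective rejection temperature. The temperatures below are illustrative assumptions, not Larderello data:

```python
# Carnot limit on heat-engine efficiency: 1 - T_cold / T_hot,
# with temperatures in kelvins. Condensing the exhaust steam lowers
# the rejection temperature and raises the efficiency ceiling.
# All temperatures here are illustrative.

def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

t_steam = 460.0        # roughly 187 deg C geothermal steam (illustrative)
t_atmosphere = 373.0   # venting at water's boiling point (noncondensing)
t_condenser = 320.0    # roughly 47 deg C condenser (condensing system)

print(carnot_efficiency(t_steam, t_atmosphere))  # about 0.19
print(carnot_efficiency(t_steam, t_condenser))   # about 0.30
```

Real plants fall well below these ideal limits, but the comparison shows why condensation, and hence removal of noncondensing gases, matters.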
Expanding the Idea
In 1913, the English engineer Sir Charles Parsons proposed drilling an extremely deep (12-kilometer) hole to tap the earth’s deep heat. Power from such a deep hole would not come from natural steam, as at Larderello, but would be generated by pumping fluid into the hole and generating steam (as hot as 500 degrees Celsius) at the bottom. In modern terms, Parsons proposed tapping “hot dry-rock” geothermal energy. (No such plant has been commercially operated yet, but research is being actively pursued in several countries.)
The first use of geothermal energy in the United States was for direct heating. In 1890, the municipal water company of Boise, Idaho, began supplying hot water from a geothermal well. Water was piped from the well to homes and businesses along appropriately named Warm Springs Avenue. At its peak, the system served more than four hundred customers, but as cheap natural gas became available, the number declined.
Although Larderello was the first successful geothermal electric power plant, the modern era of geothermal electric power began with the opening of the Geysers Geothermal Field in California. Early attempts began in the 1920’s, but it was not until 1955 that B. C. McCabe, a Los Angeles businessman, leased 14.6 square kilometers in the Geysers area and founded the Magma Power Company. The first 12.5-megawatt generator was installed at the Geysers in 1960, and production increased steadily from then on. The Geysers surpassed Larderello as the largest producing geothermal field in the 1970’s, and more than 1,000 megawatts were being generated by 1980. By the end of 1980, geothermal plants had been installed in thirteen countries, with a total capacity of almost 2,600 megawatts, and projects with a total capacity of more than 15,000 megawatts were being planned in more than twenty countries.
Impact
Geothermal power has many attractive features. Because the steam is naturally heated and under pressure, generating equipment can be simple, inexpensive, and quickly installed. Equipment and installation costs are offset by savings in fuel. It is economically practical to install small generators, a fact that makes geothermal plants attractive in remote or underdeveloped areas. Most important to a world faced with a variety of technical and environmental problems connected with fossil fuels, geothermal power does not deplete fossil fuel reserves, produces little pollution, and contributes little to the greenhouse effect.
Despite its attractive features, geothermal power has some limitations. Geologic settings suitable for easy geothermal power production are rare; there must be a hot rock or magma body close to the surface. Although it is technically possible to pump water from an external source into a geothermal well to generate steam, most geothermal sites require a plentiful supply of natural underground water that can be tapped as a source of steam. In contrast, fossil-fuel generating plants can be built at any convenient location.
See also Breeder reactor; Compressed-air-accumulating power plant; Fuel cell; Heat pump; Nuclear power plant; Solar thermal engine; Thermal cracking process; Tidal power plant.
Further Reading
Appleyard, Rollo. Charles Parsons: His Life and Work. London: Constable, 1933.
Boyle, Godfrey. Renewable Energy: Power for a Sustainable Future. Oxford: Oxford University Press, 1998.
Cassedy, Edward S. Prospects for Sustainable Energy: A Critical Assessment. New York: Cambridge University Press, 2000.
Parsons, Robert Hodson. The Steam Turbine and Other Inventions of Sir Charles Parsons, O.M. New York: Longmans Green, 1946.
Gyrocompass
The invention: The first practical navigational device that enabled ships and submarines to stay on course without relying on the earth’s unreliable magnetic poles.

The people behind the invention:
Hermann Anschütz-Kaempfe (1872-1931), a German inventor and manufacturer
Jean-Bernard-Léon Foucault (1819-1868), a French experimental physicist and inventor
Elmer Ambrose Sperry (1860-1930), an American engineer and inventor
From Toys to Tools

A gyroscope consists of a rapidly spinning wheel mounted in a frame that enables the wheel to tilt freely in any direction. The wheel's angular momentum allows it to maintain its "attitude" even when the whole device is turned or rotated.

These devices have been used to solve problems arising in such areas as sailing and navigation. For example, a gyroscope aboard a ship maintains its orientation even while the ship is rolling. Among other things, this allows the extent of the roll to be measured accurately. Moreover, the spin axis of a free gyroscope can be adjusted to point toward true north. It will (with some exceptions) stay that way despite changes in the direction of a vehicle in which it is mounted. Gyroscopic effects were employed in the design of various objects long before the theory behind them was formally known. A classic example is a child's top, which balances, seemingly in defiance of gravity, as long as it continues to spin. Boomerangs and flying disks derive stability and accuracy from the spin imparted by the thrower. Likewise, the accuracy of rifles improved when barrels were manufactured with internal spiral grooves that caused the emerging bullet to spin.
In 1852, the French inventor Jean-Bernard-Léon Foucault built the first gyroscope, a measuring device consisting of a rapidly spinning wheel mounted within concentric rings that allowed the wheel to move freely about two axes. This device, like the Foucault pendulum, was used to demonstrate the rotation of the earth around its axis, since the spinning wheel, which is not fixed, retains its orientation in space while the earth turns under it. The gyroscope had a related interesting property: As it continued to spin, the force of the earth's rotation caused its axis to rotate gradually until it was oriented parallel to the earth's axis, that is, in a north-south direction. It is this property that enables the gyroscope to be used as a compass.
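The north-seeking behavior described above can be stated compactly with a standard result that the article does not derive (an added sketch, not the book's own mathematics): what a gyrocompass actually seeks is the horizontal component of the earth's rotation, which shrinks with latitude.

```latex
% At latitude \varphi, the earth's rotation vector resolves into a
% horizontal (north-pointing) part and a vertical part:
\Omega_{\mathrm{N}} = \Omega\cos\varphi, \qquad
\Omega_{\mathrm{up}} = \Omega\sin\varphi,
% where the rotation rate is one turn per sidereal day:
\Omega = \frac{2\pi}{86\,164\ \mathrm{s}} \approx 7.29\times10^{-5}\ \mathrm{rad/s}.
```

Gravity-referenced damping torques precess the spin axis toward the direction of Omega_N, which is why a gyrocompass indicates true rather than magnetic north and becomes unreliable only very near the geographic poles, where cos(phi) approaches zero.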
When Magnets Fail
In 1904, Hermann Anschütz-Kaempfe, a German manufacturer working in the Kiel shipyards, became interested in the navigation problems of submarines used in exploration under the polar ice cap. By 1905, efficient working submarines were a reality, and it was evident to all major naval powers that submarines would play an increasingly important role in naval strategy.

Submarine navigation posed problems, however, that could not be solved by instruments designed for surface vessels. A submarine needs to orient itself under water in three dimensions; it has no automatic horizon with respect to which it can level itself. Navigation by means of stars or landmarks is impossible when the submarine is submerged. Furthermore, in an enclosed metal hull containing machinery run by electricity, a magnetic compass is worthless. To a lesser extent, increasing use of metal, massive moving parts, and electrical equipment had also rendered the magnetic compass unreliable in conventional surface battleships.
It made sense for Anschütz-Kaempfe to use the gyroscopic effect to design an instrument that would enable a ship to maintain its course while under water. Yet producing such a device would not be easy. First, it needed to be suspended in such a way that it was free to turn in any direction with as little mechanical resistance as possible. At the same time, it had to be able to resist the inevitable pitching and rolling of a vessel at sea. Finally, a continuous power supply was required to keep the gyroscopic wheels spinning at high speed.
The original Anschütz-Kaempfe gyrocompass consisted of a pair of spinning wheels driven by an electric motor. The device was connected to a compass card visible to the ship's navigator. Motor, gyroscope, and suspension system were mounted in a frame that allowed the apparatus to remain stable despite the pitch and roll of the ship.

Elmer Sperry

Although Elmer Ambrose Sperry, born in 1860, had only a grade school education as a child in rural New York, the equipment used on local farms piqued his interest in machinery, and he learned about technology on his own. He attended a local teachers' college and, graduating in 1880, was determined to become an inventor.

He was especially interested in the application of electricity. He designed his own arc lighting system and opened the Sperry Electric Light, Motor, and Car Brake Company to sell it, changing its name to Sperry Electric Company in 1887. He made such progress in devising electric mining equipment, electric brakes for automobiles and streetcars, and his own electric car that General Electric bought him out.

In 1900 Sperry opened a laboratory in Washington, D.C., and continued research on a gyroscope that he had begun in 1896. After more than a decade he patented his device, and after successful trials aboard the USS Worden, he established the Sperry Gyroscope Company in 1910, later supplying the American, British, and Russian navies as well as commercial ships. In 1914 he successfully demonstrated a gyrostabilizer for aircraft and expanded his company to manufacture aeronautical technology. Before he sold the company in 1926 he had registered more than four hundred patents. Sperry died in Brooklyn in 1930.
In 1906, the German navy installed a prototype of the Anschütz-Kaempfe gyrocompass on the battleship Undine and subjected it to exhaustive tests under simulated battle conditions, sailing the ship under forced draft and suddenly reversing the engines, changing the position of heavy turrets and other mechanisms, and firing heavy guns. In conditions under which a magnetic compass would have been worthless, the gyrocompass proved a satisfactory navigational tool, and the results were impressive enough to convince the German navy to undertake installation of gyrocompasses in submarines and heavy battleships, including the battleship Deutschland.

Elmer Ambrose Sperry, a New York inventor intimately associated with pioneer electrical development, was independently working on a design for a gyroscopic compass at about the same time. In 1907, he patented a gyrocompass consisting of a single rotor mounted within two concentric shells, suspended by fine piano wire from a frame mounted on gimbals. The rotor of the Sperry compass operated in a vacuum, which enabled it to rotate more rapidly. The Sperry gyrocompass was in use on larger American battleships and submarines on the eve of World War I (1914-1918).
Impact

The ability to navigate submerged submarines was of critical strategic importance in World War I. Initially, the German navy had an advantage both in the number of submarines at its disposal and in their design and maneuverability. The German U-boat fleet declared all-out war on Allied shipping, and, although its efforts to blockade England and France were ultimately unsuccessful, the tremendous toll it inflicted helped maintain the German position and prolong the war. To a submarine fleet operating throughout the Atlantic and in the Caribbean, as well as in near-shore European waters, effective long-distance navigation was critical.

Gyrocompasses were standard equipment on submarines and battleships and, increasingly, on larger commercial vessels during World War I, World War II (1939-1945), and the period between the wars. The devices also found their way into aircraft, rockets, and guided missiles. Although the compasses were made more accurate and easier to use, the fundamental design differed little from that invented by Anschütz-Kaempfe.
See also Atomic-powered ship; Dirigible; Hovercraft; Radar; Sonar.
Further Reading

Hughes, Thomas Parke. Elmer Sperry: Inventor and Engineer. Baltimore: Johns Hopkins University Press, 1993.

_____. Science and the Instrument-Maker: Michelson, Sperry, and the Speed of Light. Washington: Smithsonian Institution Press, 1976.

Sorg, H. W. "From Serson to Draper: Two Centuries of Gyroscopic Development." Journal of the Institute of Navigation 23, no. 4 (Winter, 1976-1977).
Hard disk
The invention: A large-capacity, permanent magnetic storage device built into most personal computers.

The people behind the invention:
Alan Shugart (1930- ), an engineer who first developed the floppy disk
Philip D. Estridge (1938?-1985), the director of IBM's product development facility
Thomas J. Watson, Jr. (1914-1993), the chief executive officer of IBM
The Personal Oddity

When the International Business Machines (IBM) Corporation introduced its first microcomputer, called simply the IBM PC (for "personal computer"), the occasion was less a dramatic invention than the confirmation of a trend begun some years before. A number of companies had introduced microcomputers before IBM; one of the best known at that time was Apple Computer's Apple II, for which software for business and scientific use was quickly developed. Nevertheless, the microcomputer was quite expensive and was often looked upon as an oddity, not as a useful tool.

Under the leadership of Thomas J. Watson, Jr., IBM, which had previously focused on giant mainframe computers, decided to develop the PC. A design team headed by Philip D. Estridge was assembled in Boca Raton, Florida, and it quickly developed its first, pacesetting product. It is an irony of history that IBM anticipated selling only one hundred thousand or so of these machines, mostly to scientists and technically inclined hobbyists. Instead, IBM's product sold exceedingly well, and its design parameters, as well as its operating system, became standards.

The earliest microcomputers used a cassette recorder as a means of mass storage; a floppy disk drive capable of storing approximately 160 kilobytes of data was initially offered only as an option. While home hobbyists were accustomed to using a cassette recorder for storage purposes, such a system was far too slow and awkward for use in business and science. As a result, virtually every IBM PC sold was equipped with at least one 5.25-inch floppy disk drive.
Memory Requirements
All computers require memory of two sorts in order to carry out their tasks. One type of memory is main memory, or random access memory (RAM), which is used by the computer's central processor to store data it is using while operating. The memory used for this function is typically built of silicon-based integrated circuits, which have the advantage of speed (allowing the processor to fetch or store data quickly) but the disadvantage of losing or "forgetting" data when the electric current is turned off. Further, such memory generally is relatively expensive.

To reduce costs, another type of memory—long-term storage memory, known also as "mass storage"—was developed. Mass storage devices include magnetic media (tape or disk drives) and optical media (such as the compact disc read-only memory, or CD-ROM). While the speed with which data may be retrieved from or stored in such devices is rather slow compared to the central processor's speed, a disk drive—the most common form of mass storage used in PCs—can store relatively large amounts of data quite inexpensively.
Early floppy disk drives (so called because the magnetically treated material on which data are recorded is made of a very flexible plastic) held 160 kilobytes of data using only one side of the magnetically coated disk (about eighty pages of normal, double-spaced, typewritten information). Later developments increased storage capacities to 360 kilobytes by using both sides of the disk and later, with increasing technological ability, to 1.44 megabytes (millions of bytes). In contrast, mainframe computers, which are typically connected to large and expensive tape drive storage systems, could store gigabytes (thousands of megabytes) of information.

While such capacities seem large, the needs of business and scientific users soon outstripped available space. Since even the mailing list of a small business or a scientist's mathematical model of a chemical reaction easily could require greater storage potential than early PCs allowed, the need arose for a mass storage device that could accommodate very large files of data.
The answer was the hard disk drive, also known as a "fixed disk drive," reflecting the fact that the disk itself is not only rigid but also permanently installed inside the machine. In 1955, IBM had envisioned a fixed, hard magnetic disk as a means of storing computer data, and, under the direction of Alan Shugart in the 1960's, the floppy disk was developed as well.

As the engineers at IBM's facility in Boca Raton refined the idea of the original PC to design the new IBM PC XT, it became clear that chief among the needs of users was the availability of large-capacity storage devices. The decision was made to add a 10-megabyte hard disk drive to the PC. On March 8, 1983, less than two years after the introduction of its first PC, IBM introduced the PC XT. Like the original, it was an evolutionary design, not a revolutionary one. The inclusion of a hard disk drive, however, signaled that mass storage devices in personal computers had arrived.
Consequences

Above all else, any computer provides a means for storing, ordering, analyzing, and presenting information. If the personal computer is to become the information appliance some have suggested it will be, the ability to manipulate very large amounts of data will be of paramount concern. Hard disk technology was greeted enthusiastically in the marketplace, and demand for hard drives has seen their numbers increase as their quality has improved and their prices have dropped.
It is easy to understand one reason for such eager acceptance: convenience. Floppy-bound computer users find themselves frequently changing (or "swapping") their disks in order to allow programs to find the data they need. Moreover, there is a limit to how much data a single floppy disk can hold. The advantage of a hard drive is that it allows users to keep seemingly unlimited amounts of data and programs stored in their machines and readily available.

Also, hard disk drives are capable of finding files and transferring their contents to the processor much more quickly than a floppy drive. A user may thus create exceedingly large files, keep them on hand at all times, and manipulate data more quickly than with a floppy. Finally, while a hard drive is a slow substitute for main memory, it allows users to enjoy the benefits of larger memories at significantly lower cost.
The introduction of the PC XT with its 10-megabyte hard drive was a milestone in the development of the PC. Over the next two decades, the size of computer hard drives increased dramatically. By 2001, few personal computers were sold with hard drives of less than three gigabytes of storage capacity, and hard drives with more than thirty gigabytes were becoming the standard. Indeed, for less money than a PC XT cost in the mid-1980's, one could buy a fully equipped computer with a hard drive holding sixty gigabytes—a storage capacity equivalent to six thousand 10-megabyte hard drives.
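The capacity comparisons in this article reduce to unit arithmetic, which can be checked in a few lines. A quick sketch (the characters-per-page figure is an assumption chosen to mirror the article's "eighty pages per 160 kilobytes" estimate):

```python
KILOBYTE = 1024              # bytes
MEGABYTE = 1024 * KILOBYTE
GIGABYTE = 1024 * MEGABYTE

# Roughly 2,000 characters per double-spaced typewritten page
# (assumed, to match the article's estimate).
PAGE = 2_000

# Pages of text on an early single-sided 160-kilobyte floppy:
floppy_pages = 160 * KILOBYTE // PAGE
print(floppy_pages)                          # 81 -- "about eighty pages"

# How many 10-megabyte PC XT drives fit in a 60-gigabyte drive?
xt_equivalents = (60 * GIGABYTE) // (10 * MEGABYTE)
print(xt_equivalents)                        # 6144 -- "six thousand"
```

The binary prefixes (1,024 rather than 1,000) make the figures slightly larger than the round numbers in the text, but the article's estimates hold.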
See also Bubble memory; Compact disc; Computer chips; Floppy disk; Optical disk; Personal computer.
Further Reading

Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.

Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. 2d ed. New York: McGraw-Hill, 2000.

Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.

Watson, Thomas J., and Peter Petre. Father, Son and Co.: My Life at IBM and Beyond. New York: Bantam Books, 2000.
Hearing aid
The invention: A miniaturized electronic amplifier worn inside the ears of hearing-impaired persons.

The organization behind the invention:
Bell Labs, the research and development arm of the American Telephone and Telegraph Company
Trapped in Silence

Until the middle of the twentieth century, people who experienced hearing loss had little hope of being able to hear sounds without the use of large, awkward, heavy appliances. For many years, the only hearing aids available were devices known as ear trumpets. The ear trumpet tried to compensate for hearing loss by increasing the number of sound waves funneled into the ear canal. A wide, bell-like mouth similar to the bell of a musical trumpet narrowed to a tube that the user placed in his or her ear. Ear trumpets helped a little, but they could not truly increase the volume of the sounds heard.

Beginning in the nineteenth century, inventors tried to develop electrical devices that would serve as hearing aids. The telephone was actually a by-product of Alexander Graham Bell's efforts to make a hearing aid. Following the invention of the telephone, electrical engineers designed hearing aids that employed telephone technology, but those hearing aids were only a slight improvement over the old ear trumpets. They required large, heavy battery packs and used a carbon microphone similar to the receiver in a telephone. More sensitive than purely physical devices such as the ear trumpet, they could transmit a wider range of sounds but could not amplify them as effectively as electronic hearing aids now do.
Transistors Make Miniaturization Possible

Two types of hearing aids exist: body-worn and head-worn. Body-worn hearing aids permit the widest range of sounds to be heard, but because of the devices' larger size, many hearing-impaired persons do not like to wear them. Head-worn hearing aids, especially those worn completely in the ear, are much less conspicuous. In addition to in-ear aids, the category of head-worn hearing aids includes both hearing aids mounted in eyeglass frames and those worn behind the ear.

All hearing aids, whether head-worn or body-worn, consist of four parts: a microphone to pick up sounds, an amplifier, a receiver, and a power source. The microphone gathers sound waves and converts them to electrical signals; the amplifier boosts, or increases, those signals; and the receiver then converts the signals back into sound waves. In effect, the hearing aid is a miniature radio. After the receiver converts the signals back to sound waves, those waves are directed into the ear canal through an earpiece or ear mold. The ear mold generally is made of plastic and is custom fitted from an impression taken from the prospective user's ear.
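The microphone-amplifier-receiver chain just described can be sketched as a simple digital gain stage. This is a hypothetical illustration of the amplify-and-limit idea, not the circuitry of any actual hearing aid; the gain figure and the clipping level are assumptions:

```python
def amplify(samples, gain_db, limit=1.0):
    """Boost microphone samples by gain_db decibels, clipping at the
    receiver's maximum output level (samples are normalized to +/-1)."""
    gain = 10 ** (gain_db / 20)   # decibels -> linear amplitude gain
    return [max(-limit, min(limit, s * gain)) for s in samples]

# A quiet input boosted by 20 dB (a factor of 10 in amplitude):
print(amplify([0.01, -0.02, 0.05], 20))   # roughly [0.1, -0.2, 0.5]

# A loud input at the same gain is clipped by the output limit:
print(amplify([0.5], 20))                 # [1.0]
```

The clipping step stands in for the physical limit of the receiver, which is one reason real hearing aids limit amplification rather than boosting without bound.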
Effective head-worn hearing aids could not be built until the electronic circuit was developed in the early 1950's. The same invention—the transistor—that led to small portable radios and tape players allowed engineers to create miniaturized, inconspicuous hearing aids. Depending on the degree of amplification required, the amplifier in a hearing aid contains three or more transistors. Transistors first replaced vacuum tubes in devices such as radios and phonographs, and then engineers realized that they could be used in devices for the hearing-impaired.

The research at Bell Labs that led to the invention of the transistor arose out of military research during World War II. The vacuum tubes used, for example, in radar installations to amplify the strength of electronic signals were big, were fragile because they were made of blown glass, and gave off high levels of heat when they were used. Transistors, however, made it possible to build solid-state, integrated circuits. These are made from crystals of materials such as germanium or arsenic alloys and therefore are much less fragile than glass. They are also extremely small (in fact, some integrated circuits are barely visible to the naked eye) and give off very little heat during use.
The number of transistors in a hearing aid varies depending upon the amount of amplification required. The first transistor is the most important for the listener in terms of the quality of sound heard. If the frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise. Theoretically, there is no limit on the amount of amplification that a hearing aid can be designed to provide, but there are practical limits. The higher the amplification, the more power is required to operate the hearing aid. This is why body-worn hearing aids can convey a wider range of sounds than head-worn devices can. It is the power source—not the electronic components—that is the limiting factor. A body-worn hearing aid includes a larger battery pack than can be used with a head-worn device. Indeed, despite advances in battery technology, the power requirements of a head-worn hearing aid are such that a 1.4-volt battery that could power a wristwatch for several years will last only a few days in a hearing aid.
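The battery comparison above is back-of-the-envelope arithmetic. A sketch with illustrative figures (the capacity and current-draw numbers are assumptions chosen to match the scale of the claim, not data from the text):

```python
def runtime_days(capacity_mah, draw_ma):
    """Days of operation from battery capacity (milliamp-hours)
    and average current draw (milliamps)."""
    return (capacity_mah / draw_ma) / 24

# Assumed figures: a small 1.4-volt button cell of about 90 mAh.
# A quartz watch draws on the order of 0.002 mA; a hearing aid's
# amplifier draws on the order of 1 mA.
print(round(runtime_days(90, 0.002)))  # 1875 days -- several years
print(runtime_days(90, 1.0))           # 3.75 days -- "only a few days"
```

A draw a few hundred times larger cuts the same cell's life from years to days, which is the trade-off the passage describes.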
Consequences

The invention of the electronic hearing aid made it possible for many hearing-impaired persons to participate in a hearing world. Prior to the invention of the hearing aid, hearing-impaired children often were unable to participate in routine school activities or function effectively in mainstream society. Instead of being able to live at home with their families and enjoy the same experiences that were available to other children their age, often they were forced to attend special schools operated by the state or by charities.

Hearing-impaired people were singled out as being different and were limited in their choice of occupations. Although not every hearing-impaired person can be helped to hear with a hearing aid—particularly in cases of total hearing loss—the electronic hearing aid has ended restrictions for many hearing-impaired people. Hearing-impaired children are now included in public school classes, and hearing-impaired adults can now pursue occupations from which they were once excluded.

Today, many deaf and hearing-impaired persons have chosen to live without the help of a hearing aid. They believe that they are not disabled but simply different, and they point out that their "disability" often allows them to appreciate and participate in life in unique and positive ways. For them, the use of hearing aids is a choice, not a necessity. For those who choose, hearing aids make it possible to participate in the hearing world.
See also Artificial heart; Artificial kidney; Cell phone; Contact lenses; Heart-lung machine; Pacemaker.
Further Reading

Alexander, Howard. "Hearing Aids: Smaller and Smarter." New York Times (November 26, 1998).

Fong, Petti. "Guess What's the New Buzz in Hearing Aids." Business Week, no. 3730 (April 30, 2001).

Levitt, Harry. "Noise Reduction in Hearing Aids: A Review." Journal of Rehabilitation Research and Development 38, no. 1 (January/February, 2001).
Heart-lung machine
The invention: The first artificial device to oxygenate and circulate blood during surgery, the heart-lung machine began the era of open-heart surgery.

The people behind the invention:
John H. Gibbon, Jr. (1903-1974), a cardiovascular surgeon
Mary Hopkinson Gibbon (1905- ), a research technician
Thomas J. Watson (1874-1956), chairman of the board of IBM
T. L. Stokes and J. B. Flick, researchers in Gibbon's laboratory
Bernard J. Miller (1918- ), a cardiovascular surgeon and research associate
Cecelia Bavolek, the first human to undergo open-heart surgery successfully using the heart-lung machine
A Young Woman's Death

In the first half of the twentieth century, cardiovascular medicine had many triumphs. Effective anesthesia, antiseptic conditions, and antibiotics made surgery safer. Blood-typing, anti-clotting agents, and blood preservatives made blood transfusion practical. Cardiac catheterization (feeding a tube into the heart), electrocardiography, and fluoroscopy (visualizing living tissues with an X-ray machine) made the nonsurgical diagnosis of cardiovascular problems possible.

As of 1950, however, there was no safe way to treat damage or defects within the heart. To make such a correction, this vital organ's function had to be interrupted. The problem was to keep the body's tissues alive while working on the heart. While some surgeons practiced so-called blind surgery, in which they inserted a finger into the heart through a small incision without observing what they were attempting to correct, others tried to reduce the body's need for circulation by slowly chilling the patient until the heart stopped. Still other surgeons used "cross-circulation," in which the patient's circulation was connected to a donor's circulation. All these approaches carried profound risks of hemorrhage, tissue damage, and death.

In February of 1931, Gibbon witnessed the death of a young woman whose lung circulation was blocked by a blood clot. Because her blood could not pass through her lungs, she slowly lost consciousness from lack of oxygen. As he monitored her pulse and breathing, Gibbon thought about ways to circumvent the obstructed lungs and straining heart and provide the oxygen required. Because surgery to remove such a blood clot was often fatal, the woman's surgeons operated only as a last resort. Though the surgery took only six and one-half minutes, she never regained consciousness. This experience prompted Gibbon to pursue what few people then considered a practical line of research: a way to circulate and oxygenate blood outside the body.
A Woman's Life Restored

Gibbon began the project in earnest in 1934, when he returned to the laboratory of Edward D. Churchill at Massachusetts General Hospital for his second surgical research fellowship. He was assisted by Mary Hopkinson Gibbon. Together, they developed, using cats, a surgical technique for removing blood from a vein, supplying the blood with oxygen, and returning it to an artery using tubes inserted into the blood vessels. Their objective was to create a device that would keep the blood moving, spread it over a very thin layer to pick up oxygen efficiently and remove carbon dioxide, and avoid both clotting and damaging blood cells. In 1939, they reported that prolonged survival after heart-lung bypass was possible in experimental animals.

World War II (1939-1945) interrupted the progress of this work; it was resumed by Gibbon at Jefferson Medical College in 1944. Shortly thereafter, he attracted the interest of Thomas J. Watson, chairman of the board of the International Business Machines (IBM) Corporation, who provided the services of IBM's experimental physics laboratory and model machine shop as well as the assistance of staff engineers. IBM constructed and modified two experimental machines over the next seven years, and IBM engineers contributed significantly to the evolution of a machine that would be practical in humans.
Gibbon’s first attempt to use the pump-oxygenator in a human<br />
being was in a fifteen-month-old baby. This attempt failed, not be-
396 / Heart-lung machine<br />
cause of a malfunction or a surgical mistake but because of a misdiagnosis.<br />
The child died following surgery because the real problem<br />
had not been corrected by the surgery.<br />
On May 6, 1953, the heart-lung machine was first used successfully<br />
on Cecelia Bavolek. In the six months before surgery, Bavolek<br />
had been hospitalized three times for symptoms of heart failure<br />
when she tried to engage in normal activity. While her circulation<br />
was connected to the heart-lung machine for forty-five minutes, the<br />
surgical team headed by Gibbon was able to close an opening between<br />
her atria <strong>and</strong> establish normal heart function. Two months<br />
later, an examination of the defect revealed that it was fully closed;<br />
Bavolek resumed a normal life. The age of open-heart surgery had<br />
begun.<br />
Consequences<br />
The heart-lung bypass technique alone could not make open-heart<br />
surgery truly practical. When it was possible to keep tissues<br />
alive by diverting blood around the heart <strong>and</strong> oxygenating it, other<br />
questions already under investigation became even more critical:<br />
how to prolong the survival of bloodless organs, how to measure<br />
oxygen <strong>and</strong> carbon dioxide levels in the blood, <strong>and</strong> how to prolong<br />
anesthesia during complicated surgery. Thus, following the first<br />
successful use of the heart-lung machine, surgeons continued to refine<br />
the methods of open-heart surgery.<br />
The heart-lung apparatus set the stage for the advent of “replacement<br />
parts” for many types of cardiovascular problems. Cardiac<br />
valve replacement was first successfully accomplished in 1960 by<br />
placing an artificial ball valve between the left atrium <strong>and</strong> ventricle.<br />
In 1957, doctors performed the first coronary bypass surgery, grafting<br />
sections of a leg vein into the heart’s circulation system to divert<br />
blood around clogged coronary arteries. Likewise, the first successful<br />
heart transplant (1967) <strong>and</strong> the controversial Jarvik-7 artificial<br />
heart implantation (1982) required the ability to stop the heart <strong>and</strong><br />
keep the body’s tissues alive during time-consuming <strong>and</strong> delicate<br />
surgical procedures. Gibbon’s heart-lung machine paved the way<br />
for all these developments.
See also Artificial heart; Blood transfusion; CAT scanner; Coronary<br />
artery bypass surgery; Electrocardiogram; Iron lung; Mammography;<br />
Nuclear magnetic resonance; Pacemaker; X-ray image<br />
intensifier.<br />
Further Reading<br />
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World<br />
History. San Mateo, Calif.: Bluewood Books, 1998.<br />
Romaine-Davis, Ada. John Gibbon <strong>and</strong> His Heart-Lung Machine. Philadelphia:<br />
University of Pennsylvania Press, 1991.<br />
Shumacker, Harris B. A Dream of the Heart: The Life of John H. Gibbon,<br />
Jr., Father of the Heart-Lung Machine. Santa Barbara, Calif.: Fithian<br />
Press, 1999.<br />
Watson, Thomas J., <strong>and</strong> Peter Petre. Father, Son <strong>and</strong> Co.: My Life at<br />
IBM <strong>and</strong> Beyond. New York: Bantam Books, 2000.
Heat pump<br />
The invention: A device that warms <strong>and</strong> cools buildings efficiently<br />
<strong>and</strong> cheaply by moving heat from one area to another.<br />
The people behind the invention:<br />
T. G. N. Haldane, a British engineer<br />
Lord Kelvin (William Thomson, 1824-1907), a British<br />
mathematician, scientist, <strong>and</strong> engineer<br />
Sadi Carnot (1796-1832), a French physicist <strong>and</strong><br />
thermodynamicist<br />
The Heat Pump<br />
A heat pump is a device that takes in heat at one temperature <strong>and</strong><br />
releases it at a higher temperature. When operated to provide heat (for<br />
example, for space heating), the heat pump is said to operate in the<br />
heating mode; when operated to remove heat (for example, for air conditioning),<br />
it is said to operate in the cooling mode. Some type of work<br />
must be done to drive the pump, no matter which mode is being used.<br />
There are two general types of heat pumps: vapor compression<br />
pumps <strong>and</strong> absorption pumps. The basic principle of vapor compression<br />
cycle heat pumps is derived from the work of Sadi Carnot<br />
in the early nineteenth century. Carnot’s work was published in<br />
1824. It was William Thomson (later to become known as Lord Kelvin),<br />
however, who first proposed a practical heat pump system, or<br />
“heat multiplier,” as it was known then, <strong>and</strong> he also indicated that a<br />
refrigerating machine could be used for heating.<br />
Thomson’s heat pump used air as its working fluid. Thomson<br />
claimed that his heat pump was able to produce heat by using only<br />
3 percent of the energy that would be required for direct heating.<br />
Absorption cycle machines have an even longer history. Refrigerators<br />
based on the use of sulfuric acid <strong>and</strong> water date back to 1777.<br />
Systems using this fluid combination, improved <strong>and</strong> modified by<br />
Edmond Carré, were used extensively in Paris cafés in the late<br />
1800’s. In 1859, a patent was filed by Ferdin<strong>and</strong> Carré for the working-fluid<br />
pair of ammonia <strong>and</strong> water in absorption cycle machines.
Refrigerator or Heater<br />
In the early twentieth century, many people (including some<br />
electrical engineers) believed that electrical energy could never be<br />
used economically to produce large quantities of heat under ordinary<br />
conditions. A few researchers, however, believed that it was<br />
possible to produce heat by using electrical energy if that energy<br />
was first converted to mechanical energy <strong>and</strong> if the Carnot principle<br />
was then used to pump heat from a lower to a higher temperature.<br />
In 1927, T. G. N. Haldane carried out detailed experiments showing<br />
that the heat pump can be made to operate in either the heating<br />
mode or the cooling mode. A heat pump in the cooling mode works<br />
like a refrigerator; a heat pump in the heating mode supplies heat<br />
for heating. Haldane demonstrated that a refrigerator could be<br />
modified to work as a heating unit. He used a vapor compression<br />
cycle refrigerator for his demonstration.<br />
In the design of a refrigerating device, the primary objective is<br />
the production of cold rather than heat, but the two operations are<br />
complementary. The process of producing cold is simply that of<br />
pumping heat from a relatively cold to a relatively hot source, but in<br />
the refrigeration process particular attention is paid to the prevention<br />
of the leakage of heat into the cold source, whereas no attempt<br />
is made to prevent the escape of heat from the hot source. If a refrigerating<br />
device were treated as a heat pump in which the primary<br />
product is the heat rejected to the hot source, the order of importance<br />
would be reversed, <strong>and</strong> every opportunity would be taken to<br />
allow heat to leak into the cold source <strong>and</strong> every precaution would<br />
be taken against allowing heat to leak out of the hot source.<br />
The components of a heat pump that operates on the principle of<br />
vapor compression include an electric motor, a compressor, an evaporator,<br />
<strong>and</strong> a condenser. The compressor sucks in gas from the evaporator<br />
<strong>and</strong> compresses it to a pressure that corresponds to a saturation<br />
temperature that is slightly higher than that of the required heat. From<br />
the compressor, the compressed gas passes to the condenser, where it is<br />
cooled <strong>and</strong> condensed, thereby giving up a large quantity of heat to the<br />
water or other substance that it is intended to heat. The condensed gas<br />
then passes through the expansion valve, where a sudden reduction of<br />
pressure takes place. This reduction of pressure lowers the boiling
point of the liquid, which therefore vaporizes <strong>and</strong> takes in heat from<br />
the medium surrounding the evaporator. After evaporation, the gas<br />
passes on to the compressor, <strong>and</strong> the cycle is complete.<br />
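The appeal of this cycle can be put in rough numbers. The short Python sketch below is an added illustration, not part of the original article; the temperatures are assumed, illustrative values. It computes the ideal (Carnot-limit) coefficient of performance, the ratio of heat delivered to work supplied, for a pump moving heat from cold outdoor air to a warmer condenser.

```python
def carnot_heating_cop(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal (Carnot-limit) heating COP: heat delivered per unit of work.

    Temperatures are given in degrees Celsius and converted to kelvins.
    The limit follows from Carnot's analysis: COP = T_hot / (T_hot - T_cold).
    """
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    if t_hot <= t_cold:
        raise ValueError("t_hot must exceed t_cold")
    return t_hot / (t_hot - t_cold)

# Pumping heat from 0 C outdoor air to a 35 C condenser: the ideal limit
# is several units of heat per unit of work. Real machines achieve only a
# fraction of this, but still well above 1 -- which is why a heat pump can
# beat direct electric heating, as Thomson claimed.
print(round(carnot_heating_cop(0.0, 35.0), 1))
```

The smaller the temperature lift, the higher the limit, which is why heat pumps perform best in mild climates.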
Haldane was the first person in the United Kingdom to install a<br />
heat pump. He was also the first person to install a domestic heat<br />
pump to provide hot water <strong>and</strong> space heating.<br />
[Figure: Components of a heat pump — the compressor, driven by electric power, draws low-pressure vapor from the evaporator (heat in) <strong>and</strong> delivers high-pressure vapor to the condenser (heat out); the condensed high-pressure liquid returns through the expansion valve as low-pressure liquid.]<br />
Impact<br />
Since Haldane’s demonstration of the use of the heat pump, the<br />
device has been highly successful in people’s homes, especially in<br />
those regions where both heating <strong>and</strong> cooling are required for single-<br />
<strong>and</strong> multifamily residences (for example, Australia, Japan, <strong>and</strong><br />
the United States). This is the case because the heat pump can provide<br />
both heating <strong>and</strong> cooling; therefore, the cost of a heat pump<br />
system can be spread over both heating <strong>and</strong> cooling seasons. Total<br />
annual sales of heat pumps worldwide have risen to the millions,<br />
with most sales being made in Japan <strong>and</strong> the United States.<br />
The use of heat pumps can save energy. In addition, because they<br />
are electric, they can save significant quantities of oil, especially in<br />
the residential retrofit <strong>and</strong> replacement markets <strong>and</strong> when used as<br />
add-on devices for existing heating systems. Some heat pumps are<br />
now available that may compete cost-effectively with other heating<br />
systems in meeting the heating dem<strong>and</strong>s of cooler regions.
Technological developments by heat pump manufacturers are<br />
continually improving the performance <strong>and</strong> cost-effectiveness of<br />
heat pumps. The electric heat pump will continue to dominate the<br />
residential market, although engine-driven systems are likely to<br />
have a greater impact on the multifamily market.<br />
See also Breeder reactor; Compressed-air-accumulating power<br />
plant; Fuel cell; Geothermal power; Nuclear power plant; Solar<br />
thermal engine; Tidal power plant.<br />
Further Reading<br />
Kavanaugh, Stephen P., <strong>and</strong> Kevin D. Rafferty. Ground-Source Heat<br />
Pumps: Design of Geothermal Systems for Commercial <strong>and</strong> Institutional<br />
Buildings. Atlanta: American Society of Heating, Refrigerating<br />
<strong>and</strong> Air-Conditioning Engineers, 1997.<br />
Nisson, Ned. “Efficient <strong>and</strong> Affordable.” Popular Science 247, no. 2<br />
(August, 1995).<br />
Using the Earth to Heat <strong>and</strong> Cool Homes. Washington, D.C.: U.S. Department<br />
of Energy, 1983.
Holography<br />
The invention: A lensless system of three-dimensional photography<br />
that was one of the most important developments in twentieth<br />
century optical science.<br />
The people behind the invention:<br />
Dennis Gabor (1900-1979), a Hungarian-born inventor <strong>and</strong><br />
physicist who was awarded the 1971 Nobel Prize in Physics<br />
Emmett Leith (1927- ), a radar researcher who, with Juris<br />
Upatnieks, produced the first laser holograms<br />
Juris Upatnieks (1936- ), a radar researcher who, with<br />
Emmett Leith, produced the first laser holograms<br />
Easter Inspiration<br />
The development of photography in the early 1900’s made possible<br />
the recording of events <strong>and</strong> information in ways unknown before<br />
the twentieth century: the photographing of star clusters, the<br />
recording of the emission spectra of heated elements, the storing of<br />
data in the form of small recorded images (for example, microfilm),<br />
<strong>and</strong> the photographing of microscopic specimens, among other<br />
things. Because of its vast importance to the scientist, the science of<br />
photography has developed steadily.<br />
An underst<strong>and</strong>ing of the photographic <strong>and</strong> holographic processes<br />
requires some knowledge of the wave behavior of light. Light is an<br />
electromagnetic wave that, like a water wave, has an amplitude <strong>and</strong> a<br />
phase. The amplitude corresponds to the wave height, while the<br />
phase indicates which part of the wave is passing a given point at a<br />
given time. A cork floating in a pond bobs up <strong>and</strong> down as waves<br />
pass under it. The position of the cork at any time depends on both<br />
amplitude <strong>and</strong> phase: The phase determines on which part of the<br />
wave the cork is floating at any given time, <strong>and</strong> the amplitude determines<br />
how high or low the cork can be moved. Waves from more<br />
than one source arriving at the cork combine in ways that depend on<br />
their relative phases. If the waves meet in the same phase, they add<br />
<strong>and</strong> produce a large amplitude; if they arrive out of phase, they sub-
tract <strong>and</strong> produce a small amplitude. The total amplitude, <strong>and</strong> thus the<br />
intensity, depends on the phases of the combining waves.<br />
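The cork analogy can be made concrete with complex amplitudes. The following Python sketch is an illustration added here, not part of the original text: it represents two coherent waves of equal amplitude as complex numbers and shows that the combined intensity is four times that of a single wave when they arrive in phase, and essentially zero when they arrive half a wavelength apart. It is exactly this phase-dependent intensity that a hologram records.

```python
import cmath

def combined_intensity(amplitude: float, phase_difference: float) -> float:
    """Intensity of two equal-amplitude coherent waves meeting with a
    given phase difference (in radians). Intensity is |sum|^2."""
    wave1 = amplitude                                     # reference wave, phase 0
    wave2 = amplitude * cmath.exp(1j * phase_difference)  # object wave
    return abs(wave1 + wave2) ** 2

print(combined_intensity(1.0, 0.0))       # in phase: four times a single wave
print(combined_intensity(1.0, cmath.pi))  # half a wave out of phase: cancellation
```

A single wave of amplitude 1 has intensity 1, so interference redistributes energy between bright and dark regions rather than creating or destroying it.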
Dennis Gabor, the inventor of holography, was intrigued by the<br />
way in which the photographic image of an object was stored by a<br />
photographic plate but was unable to devote any consistent research<br />
effort to the question until the 1940’s. At that time, Gabor was involved<br />
in the development of the electron microscope. On Easter<br />
morning in 1947, as Gabor was pondering the problem of how to<br />
improve the electron microscope, the solution came to him. He<br />
would attempt to take a poor electron picture <strong>and</strong> then correct it optically.<br />
The process would require coherent electron beams—that is,<br />
electron waves with a definite phase.<br />
This two-stage method was inspired by the work of Lawrence<br />
Bragg. Bragg had formed the image of a crystal lattice by diffracting<br />
the photographic X-ray diffraction pattern of the original lattice.<br />
This double diffraction process is the basis of the holographic process.<br />
Bragg’s method was limited because of his inability to record<br />
the phase information of the X-ray photograph. Therefore, he could<br />
study only those crystals for which the phase relationship of the reflected<br />
waves could be predicted.<br />
Waiting for the Laser<br />
Gabor devised a way of capturing the phase information after he<br />
realized that adding coherent background to the wave reflected from<br />
an object would make it possible to produce an interference pattern<br />
on the photographic plate. When the phases of the two waves are<br />
identical, a maximum intensity will be recorded; when they are out of<br />
phase, a minimum intensity is recorded. Therefore, what is recorded<br />
in a hologram is not an image of the object but rather the interference<br />
pattern of the two coherent waves. This pattern looks like a collection<br />
of swirls <strong>and</strong> blank spots. The hologram (or photograph) is then illuminated<br />
by the reference beam, <strong>and</strong> part of the transmitted light is a<br />
replica of the original object wave. When viewing this object wave,<br />
one sees an exact replica of the original object.<br />
The major impediment at the time in making holograms using<br />
any form of radiation was a lack of coherent sources. For example,<br />
the coherence of the mercury lamp used by Gabor <strong>and</strong> his assistant<br />
Ivor Williams was so short that they were able to make holograms of<br />
only about a centimeter in diameter. The early results were rather<br />
poor in terms of image quality <strong>and</strong> also had a double image. For this<br />
reason, there was little interest in holography, <strong>and</strong> the subject lay almost<br />
untouched for more than ten years.<br />
Dennis Gabor<br />
The eldest son of a mine director, Dennis Gabor was born in<br />
1900 in Budapest, Hungary. At fifteen, suddenly developing<br />
an intense interest in optics <strong>and</strong> photography, Gabor <strong>and</strong> his<br />
brother set up their own home laboratory <strong>and</strong> experimented<br />
in those fields as well as with X rays <strong>and</strong> radioactivity. The love<br />
of physics never left him.<br />
Gabor graduated from the Berlin Technische Hochschule in<br />
1924 <strong>and</strong> earned a doctorate of engineering in 1927 after developing<br />
a high-speed cathode ray oscillograph <strong>and</strong> a new kind of<br />
magnetic lens for controlling electrons. After graduate school<br />
he joined Siemens <strong>and</strong> Halske Limited <strong>and</strong> invented a high-pressure<br />
mercury lamp, which was later used widely in street<br />
lamps. In 1933, Gabor left Germany because of the rise of Nazism<br />
<strong>and</strong> moved to Engl<strong>and</strong>. He worked in industrial research<br />
until 1948, improving gas-discharge tubes <strong>and</strong> stereoscopic cinematography,<br />
but he also published scientific papers on his<br />
own, including the first of many on communications theory. At<br />
the beginning of 1949, Gabor became a faculty member of the<br />
Imperial College of Science <strong>and</strong> Technology in London, first as a<br />
reader in electronics <strong>and</strong> later as a professor of applied physics.<br />
During his academic years came more inventions, including<br />
the hologram, an electron-velocity spectroscope, an analog computer,<br />
a flat color television tube, <strong>and</strong> a new type of thermionic<br />
converter. He also built a cloud chamber for detecting subatomic<br />
particles <strong>and</strong> used it to study electron interactions. As<br />
interested in theory as he was in applied physics, Gabor<br />
published papers on theoretical aspects of communications,<br />
plasma, magnetrons, <strong>and</strong> fusion. In his later years he worried<br />
deeply about the modern tendency for technology to advance<br />
out of step with social institutions <strong>and</strong> wrote popular books<br />
outlining his belief that social reform should be given priority.<br />
Gabor became a member of Britain’s Royal Society in 1956<br />
<strong>and</strong> was awarded its Rumford Medal in 1968. In 1971 he received<br />
the Nobel Prize in Physics for inventing holography. He<br />
died in London in 1979.<br />
Interest in the field was rekindled after the laser (light amplification<br />
by stimulated emission of radiation) was developed in 1960.<br />
Emmett Leith <strong>and</strong> Juris Upatnieks, who were conducting radar research<br />
at the University of Michigan, published the first laser holograms<br />
in 1963. The laser was an intense light source with a very<br />
long coherence length. Its monochromatic nature improved the resolution<br />
of the images greatly. Also, there was no longer any restriction<br />
on the size of the object to be photographed.<br />
The availability of the laser allowed Leith <strong>and</strong> Upatnieks to propose<br />
another improvement in holographic technique. Before 1964,<br />
holograms were made of only thin transparent objects. A small region<br />
of the hologram bore a one-to-one correspondence to a region<br />
of the object. Only a small portion of the image could be viewed at<br />
one time without the aid of additional optical components. Illuminating<br />
the transparency diffusely allowed the whole image to be<br />
seen at one time. This development also made it possible to record<br />
holograms of diffusely reflected three-dimensional objects. Gabor<br />
had seen from the beginning that this should make it possible to create<br />
three-dimensional images.<br />
After the early 1960’s, the field of holography developed very<br />
quickly. Because holography is different from conventional photography,<br />
the two techniques often complement each other. Gabor saw<br />
his idea blossom into a very important technique in optical science.<br />
Impact<br />
The development of the laser <strong>and</strong> the publication of the first laser<br />
holograms in 1963 caused a blossoming of the new technique in<br />
many fields. Soon, techniques were developed that allowed holograms<br />
to be viewed with white light. It also became possible for holograms<br />
to reconstruct multicolored images. Holographic methods<br />
have been used to map terrain with radar waves <strong>and</strong> to conduct surveillance<br />
in the fields of forestry, agriculture, <strong>and</strong> meteorology.
By the 1990’s, holography had become a multimillion-dollar industry,<br />
finding applications in advertising, as an art form, <strong>and</strong> in security<br />
devices on credit cards, as well as in scientific fields. An alternate<br />
form of holography, also suggested by Gabor, uses sound<br />
waves. Acoustical imaging is useful whenever the medium around<br />
the object to be viewed is opaque to light rays—for example, in<br />
medical diagnosis. Holography has affected many areas of science,<br />
technology, <strong>and</strong> culture.<br />
See also Color film; Electron microscope; Infrared photography;<br />
Laser; Mammography; Mass spectrograph; X-ray crystallography.<br />
Further Reading<br />
Greguss, Pál, Tung H. Jeong, <strong>and</strong> Dennis Gabor. Holography: Commemorating<br />
the Ninetieth Anniversary of the Birth of Dennis Gabor.<br />
Bellingham, Wash.: SPIE Optical Engineering Press, 1991.<br />
Kasper, Joseph Emil, <strong>and</strong> Steven A. Feller. The Complete Book of Holograms:<br />
How They Work <strong>and</strong> How to Make Them. Mineola, N.Y.: Dover,<br />
2001.<br />
McNair, Don. How to Make Holograms. Blue Ridge Summit, Pa.: Tab<br />
Books, 1983.<br />
Saxby, Graham. Holograms: How to Make <strong>and</strong> Display Them. New<br />
York: Focal Press, 1980.
Hovercraft<br />
The invention: A vehicle requiring no surface contact for traction<br />
that moves freely over a variety of surfaces—particularly<br />
water—while supported on a self-generated cushion of air.<br />
The people behind the invention:<br />
Christopher Sydney Cockerell (1910- ), a British engineer<br />
who built the first hovercraft<br />
Ronald A. Shaw (1910- ), an early pioneer in aerodynamics<br />
who experimented with hovercraft<br />
Sir John Isaac Thornycroft (1843-1928), a Royal Navy architect<br />
who was the first to experiment with air-cushion theory<br />
Air-Cushion Travel<br />
The air-cushion vehicle was first conceived by Sir John Isaac<br />
Thornycroft of Great Britain in the 1870’s. He theorized that if a<br />
ship had a plenum chamber (a box open at the bottom) for a hull<br />
<strong>and</strong> it were pumped full of air, the ship would rise out of the water<br />
<strong>and</strong> move faster, because there would be less drag. The main problem<br />
was keeping the air from escaping from under the craft.<br />
In the early 1950’s, Christopher Sydney Cockerell was experimenting<br />
with ways to reduce both the wave-making <strong>and</strong> frictional<br />
resistance that craft had to water. In 1953, he constructed a punt<br />
with a fan that supplied air to the bottom of the craft, which could<br />
thus glide over the surface with very little friction. The air was contained<br />
under the craft by specially constructed side walls. In 1955,<br />
the first true “hovercraft,” as Cockerell called it, was constructed of<br />
balsa wood. It weighed only 127 grams <strong>and</strong> traveled over water at a<br />
speed of 13 kilometers per hour.<br />
On November 16, 1956, Cockerell successfully demonstrated<br />
his model hovercraft at the patent agent’s office in London. It was<br />
immediately placed on the “secret” list, <strong>and</strong> Saunders-Roe Ltd.<br />
was given the first contract to build hovercraft in 1957. The first experimental<br />
piloted hovercraft, the SR.N1, which had a weight of<br />
3,400 kilograms <strong>and</strong> could carry three people at the speed of 25
knots, was completed on May 28, 1959, <strong>and</strong> publicly demonstrated<br />
on June 11, 1959.<br />
Ground Effect Phenomenon<br />
In a hovercraft, a jet airstream is directed downward through a<br />
hole in a metal disk, which forces the disk to rise. The jet of air has a<br />
reverse effect of its own that forces the disk away from the surface.<br />
Some of the air hitting the ground bounces back against the disk to<br />
add further lift. This is called the “ground effect.” The ground effect<br />
is such that the greater the under-surface area of the hovercraft, the<br />
greater the reverse thrust of the air that bounces back. This makes<br />
the hovercraft a mechanically efficient machine because it provides<br />
three functions.<br />
First, the ground effect reduces friction between the craft <strong>and</strong> the<br />
earth’s surface. Second, it acts as a spring suspension to reduce<br />
some of the vertical acceleration effects that arise from travel over<br />
an uneven surface. Third, it provides a safe <strong>and</strong> comfortable ride at<br />
high speed, whatever the operating environment. The air cushion<br />
can distribute the weight of the hovercraft over almost its entire area<br />
so that the cushion pressure is low.<br />
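How low that cushion pressure is can be estimated with simple division. The Python sketch below is an added illustration, not from the article; the SR.N1’s weight of 3,400 kilograms comes from the text above, but the cushion area of 60 square meters is an assumed, round figure used only for the estimate.

```python
def cushion_pressure(mass_kg: float, area_m2: float) -> float:
    """Average cushion pressure (pascals) needed to support a hovercraft:
    the craft's weight spread over the cushion area."""
    g = 9.81  # gravitational acceleration, m/s^2
    return mass_kg * g / area_m2

# SR.N1 weight from the article; 60 m^2 is an assumed cushion area.
pressure = cushion_pressure(3400.0, 60.0)
print(f"{pressure:.0f} Pa, about {pressure / 101_325:.1%} of atmospheric pressure")
```

Even with a generous margin of error on the area, the result is well under one percent of atmospheric pressure, which is why a hovercraft can pass over soft ground or water without damaging it.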
The basic elements of the air-cushion vehicle are a hull, a propulsion<br />
system, <strong>and</strong> a lift system. The hull, which accommodates the<br />
crew, passengers, <strong>and</strong> freight, contains both the propulsion <strong>and</strong> lift<br />
systems. The propulsion <strong>and</strong> lift systems can be driven by the same<br />
power plant or by separate power plants. Early designs used only<br />
one unit, but this proved to be a problem when adequate power was<br />
not achieved for movement <strong>and</strong> lift. Better results are achieved<br />
when two units are used, since far more power is used to lift the vehicle<br />
than to propel it.<br />
For lift, high-speed centrifugal fans are used to drive the air<br />
through jets that are located under the craft. A redesigned aircraft<br />
propeller is used for propulsion. Rudderlike fins <strong>and</strong> an air fan that<br />
can be swiveled to provide direction are placed at the rear of the<br />
craft.<br />
Several different air systems can be used, depending on whether<br />
a skirt system is used in the lift process. The plenum chamber system,<br />
the peripheral jet system, <strong>and</strong> several types of recirculating air
systems have all been successfully tried without skirting. A variety<br />
of rigid <strong>and</strong> flexible skirts have also proved to be satisfactory, depending<br />
on the use of the vehicle.<br />
Skirts are used to hold the air for lift.<br />
Sir John Isaac Thornycroft<br />
To be truly ahead of one’s time as an inventor, one must simply<br />
know everything there is to know about a specialty <strong>and</strong><br />
then imagine something useful that contemporary technology<br />
is not quite ready for.<br />
John Isaac Thornycroft was such an inventor. Born in 1843 in<br />
what were then the Papal States (Rome, Italy), he trained as an<br />
engineer <strong>and</strong> became a naval architect. He opened a boatbuilding<br />
<strong>and</strong> engineering company at Chiswick in London in<br />
1866 <strong>and</strong> began looking for ways to improve the performance of<br />
small seacraft. In 1877 he delivered the HMS Lightning, Engl<strong>and</strong>’s<br />
first torpedo boat, to the Royal Navy. He continued to<br />
make torpedo boats for coastal waters, nicknamed “scooters,”<br />
<strong>and</strong> made himself a leading expert on boat design. He introduced<br />
stabilizers <strong>and</strong> modified hull <strong>and</strong> propeller shapes in order<br />
to reduce drag from the hull’s contact with water <strong>and</strong><br />
thereby increase a boat’s speed.<br />
One of his best ideas was to have the boat ride on a cushion<br />
of air, so that air acted as a lubricant between the hull <strong>and</strong> water.<br />
He even filed patents for the concept <strong>and</strong> built models, but the<br />
power-source technology of the day was simply too inefficient.<br />
Engines were too heavy for the amount of power they put out.<br />
None could lift a full-size boat off the water <strong>and</strong> keep it on an air<br />
cushion. So the hovercraft had to wait until the 1950’s <strong>and</strong> the incorporation<br />
of sophisticated internal combustion engines into<br />
the design.<br />
Meanwhile, Thornycroft <strong>and</strong> the company named after him<br />
continued to make innovative transports <strong>and</strong> engines: a steam-powered<br />
van in 1896, a gas engine in 1902, <strong>and</strong> heavy trucks in<br />
1912 that the British government used during World War I. By<br />
the time Thornycroft died in 1928, on the Isle of Wight, he had<br />
been knighted by a grateful government, which would benefit<br />
from his company’s products <strong>and</strong> his advanced ideas for the<br />
rest of the twentieth century.<br />
Skirts were once hung like curtains around hovercraft. Instead of simple curtains to contain the air,<br />
there are now complicated designs that contain the cushion, duct the<br />
air, <strong>and</strong> even provide a secondary suspension. The materials used in<br />
the skirting have also changed from a rubberized fabric to pure rubber<br />
<strong>and</strong> nylon <strong>and</strong>, finally, to neoprene, a lamination of nylon <strong>and</strong> plastic.<br />
The three basic types of hovercraft are the amphibious, nonamphibious,<br />
<strong>and</strong> semiamphibious models. The amphibious type can<br />
travel over water <strong>and</strong> l<strong>and</strong>, whereas the nonamphibious type is restricted<br />
to water travel. The semiamphibious model is also restricted<br />
to water travel but may terminate travel by nosing up on a prepared<br />
ramp or beach. All hovercraft contain built-in buoyancy tanks in the<br />
side skirting as a safety measure in the event that a hovercraft must<br />
settle on the water. Most hovercraft are equipped with gas turbines<br />
<strong>and</strong> use either propellers or water-jet propulsion.<br />
Impact<br />
Hovercraft are used primarily for short passenger ferry services.<br />
Great Britain was the only nation to produce a large number of hovercraft.<br />
The British built larger <strong>and</strong> faster craft <strong>and</strong> pioneered their<br />
successful use as ferries across the English Channel, where they<br />
could reach speeds of 111 kilometers per hour (60 knots) <strong>and</strong> carry<br />
more than four hundred passengers <strong>and</strong> almost one hundred vehicles.<br />
France <strong>and</strong> the former Soviet Union have also effectively demonstrated<br />
hovercraft river travel, <strong>and</strong> the Soviets have experimented<br />
with military applications as well.<br />
The military adaptations of hovercraft have been more diversified.<br />
Beach l<strong>and</strong>ings have been performed effectively, <strong>and</strong> the United<br />
States used hovercraft for river patrols during the Vietnam War.<br />
Other uses also exist for hovercraft. They can be used as harbor pilot<br />
vessels <strong>and</strong> for patrolling shores in a variety of police-<strong>and</strong> customs-related<br />
duties. Hovercraft can also serve as flood-rescue craft<br />
<strong>and</strong> fire-fighting vehicles. Even a hoverfreighter is being considered.<br />
The air-cushion theory in transport systems is rapidly developing.<br />
It has spread to trains <strong>and</strong> smaller people movers in many<br />
countries. Their smooth, rapid, clean, <strong>and</strong> efficient operation makes<br />
hovercraft attractive to transportation designers around the world.
See also Airplane; Atomic-powered ship; Bullet train; Gyrocompass.
Further Reading

Amyot, Joseph R. Hovercraft Technology, Economics, and Applications. Amsterdam: Elsevier, 1989.

Croome, Angela. Hover Craft. 4th ed. London: Hodder and Stoughton, 1984.

Gromer, Cliff. "Flying Low." Popular Mechanics 176, no. 9 (September, 1999).

McLeavy, Roy. Hovercraft and Hydrofoils. London: Jane's Publishing, 1980.

Pengelley, Rupert. "Hovercraft Cushion the Blow of Amphibious Operations." Jane's Navy International 104, no. 8 (October 1, 1999).

Robertson, Don. A Restless Spirit. New Port, Isle of Wight: Cross Publishing, 1994.
Hydrogen bomb
The invention: Popularly known as the "H-bomb," the hydrogen bomb differs from the original atomic bomb in using fusion, rather than fission, to create a thermonuclear explosion almost a thousand times more powerful.

The people behind the invention:
Edward Teller (1908- ), a Hungarian-born theoretical physicist
Stanislaw Ulam (1909-1984), a Polish-born mathematician
Crash Development

A few months before the 1942 creation of the Manhattan Project, the United States-led effort to build the atomic (fission) bomb, physicist Enrico Fermi suggested to Edward Teller that such a bomb could release more energy by heating a mass of the hydrogen isotope deuterium and igniting the fusion of hydrogen into helium. Fusion is the process whereby two atoms come together to form a larger atom, and this process usually occurs only in stars, such as the Sun. Physicists Hans Bethe, George Gamow, and Teller had been studying fusion since 1934 and knew of the tremendous energy that could be released by this process, even more energy than the fission (atom-splitting) process that would create the atomic bomb. Initially, Teller dismissed Fermi's idea, but later in 1942, in collaboration with Emil Konopinski, he concluded that a hydrogen bomb, or superbomb, could be made.

For practical considerations, it was decided that the design of the superbomb would have to wait until after the war. In 1946, a secret conference on the superbomb was held in Los Alamos, New Mexico, that was attended by, among other Manhattan Project veterans, Stanislaw Ulam and Klaus Emil Julius Fuchs. Supporting the investigation of Teller's concept, the conferees requested a more complete mathematical analysis of his own admittedly crude calculations on the dynamics of the fusion reaction. In 1947, Teller believed that these calculations might take years. Two years later, however, the Soviet explosion of an atomic bomb convinced Teller that America's Cold War adversary was hard at work on its own superbomb. Even when new calculations cast further doubt on his designs, Teller began a vigorous campaign for crash development of the hydrogen bomb, or H-bomb.
The Superbomb
Scientists knew that fusion reactions could be induced by the explosion of an atomic bomb. The basic problem was simple and formidable: How could fusion fuel be heated and compressed long enough to achieve significant thermonuclear burning before the atomic fission explosion blew the assembly apart? A major part of the solution came from Ulam in 1951. He proposed using the energy from an exploding atomic bomb to induce significant thermonuclear reactions in adjacent fusion fuel components.

This arrangement, in which the A-bomb (the primary) is physically separated from the H-bomb's (the secondary's) fusion fuel, became known as the "Teller-Ulam configuration." All H-bombs are cylindrical, with an atomic device at one end and the other components filling the remaining space. Energy from the exploding primary could be transported by X rays and would therefore affect the fusion fuel at near light speed, before the arrival of the explosion. Frederick de Hoffman's work verified and enriched the new concept.

In the revised method, moderated X rays from the primary irradiate a reactive plastic medium surrounding concentric and generally cylindrical layers of fusion and fission fuel in the secondary. Instantly, the plastic becomes a hot plasma that compresses and heats the inner layer of fusion fuel, which in turn compresses a central core of fissile plutonium to supercriticality. Thus compressed, and bombarded by fusion-produced, high-energy neutrons, the fission element expands rapidly in a chain reaction from the inside out, further compressing and heating the surrounding fusion fuel, releasing more energy and more neutrons that induce fission in a fuel casing-tamper made of normally stable uranium 238.
Edward Teller

To call Edward Teller "controversial" is equivalent to saying that the hydrogen bomb is "destructive": an enormous understatement. His forceful support for nuclear arms prompted some to label him a war criminal, while others consider him to be one of the most thoughtful statesmen among scientists.

Teller was born into a Jewish family in Budapest, Hungary, in 1908. He left his homeland to flee the anti-Semitic fascist government of the late 1920's and attended the University of Leipzig in Germany. In 1930 he completed his doctorate and hoped to settle into an academic career there, but he fled Germany when Adolf Hitler came to power. Teller migrated to the United States in 1935 and taught at George Washington University, where with George Gamow he studied aspects of quantum mechanics and nuclear physics. He became a U.S. citizen in 1941.

Teller was among the first physicists to realize the possibility of an atomic (fission) bomb, and he became a central figure in the Manhattan Project that built it during World War II. However, he was already exploring the idea of a "superbomb" that explodes because of a fusion reaction. He helped persuade President Harry Truman to finance a project to build it and continued to influence the politics of nuclear weapons and power afterward. Teller developed the theoretical basis for the hydrogen bomb and its rough design, and so is known as its father. However, controversy later erupted over credit. Mathematician Stanislaw Ulam claimed he contributed key insights and calculations, a claim Teller vehemently denied. Teller, however, did credit a young physicist, Richard L. Garwin, with creating the successful working design for the first bomb.

Fiercely anticommunist, Teller argued for a strong nuclear arsenal to make the Soviet Union afraid of attacking the United States and supported space-based missile defense systems. He served as director of the Lawrence Livermore National Laboratory, professor at the University of California at Berkeley, and senior fellow at the nearby Hoover Institution. In his nineties he outraged environmentalists by suggesting that the atmosphere could be manipulated with technology to offset the effects of global warming.

With its equipment to refrigerate the hydrogen isotopes, the device created to test Teller's new concept weighed more than sixty tons. During Operation Ivy, it was tested at Elugelab in the Marshall Islands on November 1, 1952. Exceeding the expectations of all concerned and vaporizing the island, the explosion equaled 10.4 million tons of trinitrotoluene (TNT), which meant that it was about seven hundred times more powerful than the atomic bomb dropped on Hiroshima, Japan, in 1945. A version of this device weighing about 20 tons was prepared for delivery by specially modified Air Force B-36 bombers in the event of an emergency during wartime.
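The "seven hundred times" comparison can be checked with simple arithmetic. A minimal Python sketch, assuming the commonly cited figure of roughly 15 kilotons for the Hiroshima bomb (a value not given in this article):

```python
# Compare the Ivy Mike yield (from the article) against the commonly
# cited ~15-kiloton estimate for the Hiroshima bomb (an outside
# assumption, not a figure from the article).

IVY_MIKE_YIELD_TONS = 10.4e6   # 10.4 million tons of TNT
HIROSHIMA_YIELD_TONS = 15e3    # ~15 kilotons, a commonly cited estimate

ratio = IVY_MIKE_YIELD_TONS / HIROSHIMA_YIELD_TONS
print(f"Ivy Mike was roughly {ratio:.0f} times the Hiroshima yield")
```

The quotient comes out near 700, matching the article's "about seven hundred times" figure.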
In development at Los Alamos before the 1952 test was a device weighing only about 4 tons, a "dry bomb" that did not require refrigeration equipment or liquid fusion fuel; when sufficiently compressed and heated in its molded-powder form, the new fusion fuel component, lithium-6 deuteride, instantly produced tritium, an isotope of hydrogen. This concept was tested during Operation Castle at Bikini Atoll in 1954 and produced a yield of 15 million tons of TNT, the largest-ever nuclear explosion created by the United States.
Consequences
Teller was not alone in believing that the world could produce thermonuclear devices capable of causing great destruction. Months before Fermi suggested to Teller the possibility of explosive thermonuclear reactions on Earth, Japanese physicist Tokutaro Hagiwara had proposed that a uranium 235 bomb could ignite significant fusion reactions in hydrogen. The Soviet Union successfully tested an H-bomb dropped from an airplane in 1955, one year before the United States did so.

Teller became the scientific adviser on nuclear affairs of many presidents, from Dwight D. Eisenhower to Ronald Reagan. The widespread blast and fallout effects of H-bombs assured the mutual destruction of the users of such weapons. During the Cold War (from about 1947 to 1991), both the United States and the Soviet Union possessed H-bombs. "Testing" these bombs made each side aware of how powerful the other side was. Everyone wanted to avoid nuclear war. It was thought that no one would try to start a war that would end in the world's destruction. This theory was called deterrence: The United States wanted to let the Soviet Union know that it had as many bombs as, or more than, the Soviets did, so that the leaders of the Soviet Union would be deterred from starting a war.
Teller knew that the availability of H-bombs on both sides was not enough to guarantee that such weapons would never be used. It was also necessary to make the Soviet Union aware of the existence of the bombs through testing. He consistently advised against U.S. participation with the Soviet Union in a moratorium (period of waiting) on nuclear weapons testing. Largely based on Teller's urging that underground testing be continued, the United States rejected a total moratorium in favor of the 1963 Atmospheric Test Ban Treaty.

During the 1980's, Teller, among others, convinced President Reagan to embrace the Strategic Defense Initiative (SDI). Teller argued that SDI components, such as the space-based "Excalibur," a nuclear bomb-powered X-ray laser weapon proposed by the Lawrence Livermore National Laboratory, would make thermonuclear war not unimaginable, but theoretically impossible.
See also Airplane; Atomic bomb; Cruise missile; Rocket; Stealth aircraft; V-2 rocket.
Further Reading

Blumberg, Stanley A., and Louis G. Panos. Edward Teller, Giant of the Golden Age of Physics: A Biography. New York: Scribner's, 1990.

Clash, James M. "Teller Tells It." Forbes (May 17, 1999).

Teller, Edward, Wendy Teller, and Wilson Talley. Conversations on the Dark Secrets of Physics. New York: Plenum Press, 1991.

York, Herbert E. The Advisors: Oppenheimer, Teller, and the Superbomb. Stanford, Calif.: Stanford University Press, 1989.
IBM Model 1401 computer
The invention: A relatively small, simple, and inexpensive computer that is often credited with having launched the personal computer age.

The people behind the invention:
Howard H. Aiken (1900-1973), an American mathematician
Charles Babbage (1792-1871), an English mathematician and inventor
Herman Hollerith (1860-1929), an American inventor
Computers: From the Beginning
Computers evolved into their modern form over a period of thousands of years as a result of humanity's efforts to simplify the process of counting. Two counting devices that are considered to be very simple, early computers are the abacus and the slide rule. These calculating devices are representative of digital and analog computers, respectively, because an abacus counts numbers of things, while the slide rule calculates length measurements.

The first modern computer, which was planned by Charles Babbage in 1833, was never built. It was intended to perform complex calculations with a data processing/memory unit that was controlled by punched cards. In 1944, Harvard University's Howard H. Aiken and the International Business Machines (IBM) Corporation built such a computer: the huge, punched-tape-controlled Automatic Sequence Controlled Calculator, or Mark I ASCC, which could perform complex mathematical operations in seconds. During the next fifteen years, computer advances produced digital computers that used binary arithmetic for calculation, incorporated simplified components that decreased the sizes of computers, had much faster calculating speeds, and were transistorized.
Although practical computers had become much faster than they had been only a few years earlier, they were still huge and extremely expensive. In 1959, however, IBM introduced the Model 1401 computer. Smaller, simpler, and much cheaper than the multimillion-dollar computers that were available, the IBM Model 1401 computer was also relatively easy to program and use. Its low cost, simplicity of operation, and very wide use have led many experts to view the IBM Model 1401 computer as beginning the age of the personal computer.
Computer Operation and IBM's Model 1401

Modern computers are essentially very fast calculating machines that are capable of sorting, comparing, analyzing, and outputting information, as well as storing it for future use. Many sources credit Aiken's Mark I ASCC as being the first modern computer to be built. This huge, five-ton machine used thousands of relays to perform complex mathematical calculations in seconds. Soon after its introduction, other companies produced computers that were faster and more versatile than the Mark I. The computer development race was on.

All these early computers utilized the decimal system for calculations until it was found that binary arithmetic, whose numbers are combinations of the binary digits 1 and 0, was much more suitable for the purpose. The advantage of the binary system is that the electronic switches that make up a computer (tubes, transistors, or chips) can be either on or off; in the binary system, the on state can be represented by the digit 1, the off state by the digit 0. Strung together correctly, binary numbers, or digits, can be inputted rapidly and used for high-speed computations. In fact, the computer term "bit" is a contraction of the phrase "binary digit."
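The on-off switch idea described above maps directly onto binary notation. A minimal Python sketch (the variable names are illustrative, not drawn from any historical machine):

```python
# Each electronic switch holds one bit: on = 1, off = 0. A row of
# switches, read in order, forms a binary number that can be
# interpreted as an ordinary decimal value.

switches = [True, False, True, True]          # on, off, on, on
bits = "".join("1" if s else "0" for s in switches)
value = int(bits, 2)                          # interpret the string as base 2
print(bits, "=", value)                       # 1011 = 11 in decimal
```

Four switches can represent sixteen values (0 through 15); each added switch doubles the range, which is why long strings of bits suffice for high-speed computation.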
A computer consists of input and output devices, a storage device (memory), arithmetic and logic units, and a control unit. In most cases, a central processing unit (CPU) combines the logic, arithmetic, memory, and control aspects. Instructions are loaded into the memory via an input device, processed, and stored. Then, the CPU issues commands to the other parts of the system to carry out computations or other functions and output the data as needed. Most output is printed as hard copy or displayed on cathode-ray tube monitors, or screens.
The early modern computers, such as the Mark I ASCC, were huge because their information circuits were large relays or tubes. Computers became smaller and smaller as the tubes were replaced: first with transistors, then with simple integrated circuits, and then with silicon chips. Each technological changeover also produced more powerful, more cost-effective computers.

In the 1950's, with reliable transistors available, IBM began the development of two types of computers that were completed by about 1959. The larger version was the Stretch computer, which was advertised as the most powerful computer of its day. Customized for each individual purchaser (for example, the Atomic Energy Commission), a Stretch computer cost $10 million or more. Some innovations in Stretch computers included semiconductor circuits, new switching systems that quickly converted various kinds of data into one language that was understood by the CPU, rapid data readers, and devices that seemed to anticipate future operations.
Consequences
The IBM Model 1401 was the first computer sold in very large numbers. It led IBM and other companies to seek to develop less expensive, more versatile, smaller computers that would be sold to small businesses and to individuals. Six years after the development of the Model 1401, other IBM models, and those made by other companies, became available that were more compact and had larger memories. The search for compactness and versatility continued. A major development was the invention of integrated circuits by Jack S. Kilby of Texas Instruments; these integrated circuits became available by the mid-1960's. They were followed by even smaller "microprocessors" (computer chips) that became available in the 1970's. Computers continued to become smaller and more powerful.
Input and storage devices also decreased rapidly in size. At first, the punched cards invented by Herman Hollerith, founder of the Tabulating Machine Company (which later became IBM), were read by bulky readers. In time, less bulky magnetic tapes and more compact readers were developed, after which magnetic disks and compact disc drives were introduced.

Many other advances have been made. Modern computers can talk, create art and graphics, compose music, play games, and operate robots. Further advancement is expected as societal needs change. Many experts believe that it was the sale of large numbers of IBM Model 1401 computers that began the trend.
See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; Personal computer; Supercomputer; UNIVAC computer.
Further Reading

Carroll, Paul. Big Blues: The Unmaking of IBM. New York: Crown, 1993.

Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.

Manes, Stephen, and Paul Andrews. Gates: How Microsoft's Mogul Reinvented an Industry. New York: Doubleday, 1993.
In vitro plant culture
The invention: Method for propagating plants in artificial media that has revolutionized agriculture.

The people behind the invention:
Georges Michel Morel (1916-1973), a French physiologist
Philip Cleaver White (1913- ), an American chemist
Plant Tissue Grows "In Glass"

In the mid-1800's, biologists began pondering whether a cell isolated from a multicellular organism could live separately if it were provided with the proper environment. In 1902, with this question in mind, the German plant physiologist Gottlieb Haberlandt attempted to culture (grow) isolated plant cells under sterile conditions on an artificial growth medium. Although his cultured cells never underwent cell division under these "in vitro" (in glass) conditions, Haberlandt is credited with originating the concept of cell culture.

Subsequently, scientists attempted to culture plant tissues and organs rather than individual cells and tried to determine the medium components necessary for the growth of plant tissue in vitro. In 1934, Philip White grew the first organ culture, using tomato roots. The discovery of plant hormones, which are compounds that regulate growth and development, was crucial to the successful culture of plant tissues; in 1939, Roger Gautheret, P. Nobécourt, and White independently reported the successful culture of plant callus tissue. "Callus" is an irregular mass of dividing cells that often results from the wounding of plant tissue. Plant scientists were fascinated by the perpetual growth of such tissue in culture and spent years establishing optimal growth conditions and exploring the nutritional and hormonal requirements of plant tissue.
Plants by the Millions
A lull in botanical research occurred during World War II, but immediately afterward there was a resurgence of interest in applying tissue culture techniques to plant research. Georges Morel, a plant physiologist at the National Institute for Agronomic Research in France, was one of many scientists during this time who had become interested in the formation of tumors in plants as well as in studying various pathogens such as fungi and viruses that cause plant disease.

To further these studies, Morel adapted existing techniques in order to grow tissue from a wider variety of plant types in culture, and he continued to try to identify factors that affected the normal growth and development of plants. Morel was successful in culturing tissue from ferns and was the first to culture monocot plants. Monocots have certain features that distinguish them from the other classes of seed-bearing plants, especially with respect to seed structure. More important, the monocots include the economically important species of grasses (the major plants of range and pasture) and cereals.

For these cultures, Morel utilized a small piece of the growing tip of a plant shoot (the shoot apex) as the starting tissue material. This tissue was placed in a glass tube, supplied with a medium containing specific nutrients, vitamins, and plant hormones, and allowed to grow in the light. Under these conditions, the apex tissue grew roots and buds and eventually developed into a complete plant. Morel was able to generate whole plants from pieces of the shoot apex that were only 100 to 250 micrometers in length.
Morel also investigated the growth of parasites such as fungi and viruses in dual culture with host-plant tissue. Using results from these studies and culture techniques that he had mastered, Morel and his colleague Claude Martin regenerated virus-free plants from tissue that had been taken from virally infected plants. Tissues from certain tropical species, dahlias, and potato plants were used for the original experiments, but after Morel adapted the methods for the generation of virus-free orchids, plants that had previously been difficult to propagate by any means, the true significance of his work was recognized.

Morel was the first to recognize the potential of the in vitro culture methods for the mass propagation of plants. He estimated that several million plants could be obtained in one year from a single small piece of shoot-apex tissue. Plants generated in this manner were clonal (genetically identical organisms prepared from a single plant). With other methods of plant propagation, there is often great variation in the traits of the plants produced, but as a result of Morel's ideas, breeders could select for some desirable trait in a particular plant and then produce multiple clonal plants, all of which expressed the desired trait. The methodology also allowed for the production of virus-free plant material, which minimized both the spread of potential pathogens during shipping and losses caused by disease.
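Morel's estimate of several million plants per year follows from the compounding of repeated subcultures. A brief Python sketch, assuming an illustrative fourfold multiplication per monthly subculture cycle (actual rates vary by species and are not given in this article):

```python
# Mass propagation is exponential: each culture is divided into new
# cultures every cycle. The fourfold rate and monthly cycle used here
# are illustrative assumptions, not figures from the article.

multiplication_per_cycle = 4    # plantlets obtained from each culture per cycle
cycles_per_year = 12            # roughly one subculture per month

plants = multiplication_per_cycle ** cycles_per_year
print(f"One shoot-apex explant could yield about {plants:,} plantlets in a year")
```

Even at this modest rate the count exceeds sixteen million after twelve cycles, consistent with Morel's "several million plants" claim.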
Consequences
[Photo caption: In vitro plant culture has been especially useful for species such as palm trees that cannot be propagated by other methods, such as by sowing seeds or grafting. (PhotoDisc)]
Variations on Morel's methods are used to propagate plants used for human food consumption; plants that are sources of fiber, oil, and livestock feed; forest trees; and plants used in landscaping and in the floral industry. In vitro stocks are preserved under deep-freeze conditions, and disease-free plants can be proliferated quickly at any time of the year after shipping or storage.

The in vitro multiplication of plants has been especially useful for species such as coconut and certain palms that cannot be propagated by other methods, such as by sowing seeds or grafting, and has also become important in the preservation and propagation of rare plant species that might otherwise have become extinct. Many of these plants are sources of pharmaceuticals, oils, fragrances, and other valuable products.

The capability of regenerating plants from tissue culture has also been crucial in basic scientific research. Plant cells grown in culture can be studied more easily than can intact plants, and scientists have gained an in-depth understanding of plant physiology and biochemistry by using this method. This information and the methods of Morel and others have made possible the genetic engineering and propagation of crop plants that are resistant to disease or disastrous environmental conditions such as drought and freezing. In vitro techniques have truly revolutionized agriculture.
See also Artificial insemination; Cloning; Genetically engineered insulin; Rice and wheat strains.
Further Reading

Arbury, Jim, Richard Bird, Mike Honour, Clive Innes, and Mike Salmon. The Complete Book of Plant Propagation. Newtown, Conn.: Taunton Press, 1997.

Clarke, Graham. The Complete Book of Plant Propagation. London: Seven Dials, 2001.

Hartmann, Hudson T. Plant Propagation: Principles and Practices. 6th ed. London: Prentice-Hall, 1997.

Heuser, Charles. The Complete Book of Plant Propagation. Newtown, Conn.: Taunton Press, 1997.
Infrared photography
The invention: The first application of color to infrared photography, which performs tasks not possible for ordinary photography.

The person behind the invention:
Sir William Herschel (1738-1822), a pioneering English astronomer
Invisible Light
Photography developed rapidly in the nineteenth century when it became possible to record the colors and shades of visible light on sensitive materials. Visible light is a form of radiation that consists of electromagnetic waves, which also make up other forms of radiation such as X rays and radio waves. Visible light occupies the range of wavelengths from about 400 nanometers (1 nanometer is 1 billionth of a meter) to about 700 nanometers in the electromagnetic spectrum.

Infrared radiation occupies the range from about 700 nanometers to about 1,350 nanometers in the electromagnetic spectrum. Infrared rays cannot be seen by the human eye, but they behave in the same way that rays of visible light behave; they can be reflected, diffracted (broken), and refracted (bent).
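The wavelength bands described above can be expressed as a simple classifier. A minimal Python sketch using the article's boundary values (the function name is illustrative):

```python
# Classify a wavelength (in nanometers) into the bands the article
# describes: visible light (~400-700 nm) and the infrared range to
# which photographic film responds (~700-1,350 nm).

def classify(wavelength_nm: float) -> str:
    if 400 <= wavelength_nm < 700:
        return "visible light"
    if 700 <= wavelength_nm <= 1350:
        return "infrared (film-sensitive range)"
    return "outside the ranges discussed"

print(classify(550))   # green light, in the visible band
print(classify(900))   # invisible to the eye, recordable on infrared film
```

The 700-nanometer boundary is why infrared film must be handled with filters: it remains sensitive to part of the visible band as well.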
Sir William Herschel, a British astronomer, discovered infrared rays in 1800 by measuring the temperature of the heat that they produced. The term "infrared," which was probably first used in 1800, was used to indicate rays that had wavelengths that were longer than those at the red end (the long-wavelength end) of the spectrum of visible light but shorter than those of the microwaves, which lie still farther along the electromagnetic spectrum. Infrared film is therefore sensitive to the infrared radiation that the human eye cannot see or record. Dyes that were sensitive to infrared radiation were discovered early in the twentieth century, but they were not widely used until the 1930's. Because these dyes produced only black-and-white images, their usefulness to artists and researchers was limited. After 1930, however, a tidal wave of infrared photographic applications appeared.
The Development of Color-Sensitive Infrared Film
In the early 1940’s, military intelligence used infrared viewers for
night operations and for gathering information about the enemy. One
device that was commonly used for such purposes was called a
“snooper scope.” Aerial photography with black-and-white infrared
film was used to locate enemy hiding places and equipment. The images
that were produced, however, often lacked clear definition.
The development in 1942 of the first color-sensitive infrared film,
Ektachrome Aero Film, became possible when researchers at the
Eastman Kodak Company’s laboratories solved some complex chemical
and physical problems that had hampered the development of
color infrared film up to that point. Regular color film is sensitive to
all visible colors of the spectrum; infrared color film is sensitive to
green and red light as well as to infrared radiation. Typical
color film has three layers of emulsion, which are sensitized to blue,
green, and red. Infrared color film, however, has its three emulsion
layers sensitized to green, red, and infrared. Infrared wavelengths
are recorded as reds of varying densities, depending on the intensity
of the infrared radiation: the more infrared radiation there is,
the stronger the red that is recorded.
In infrared photography, a filter is placed over the camera lens to
block the unwanted rays of visible light. The filter blocks visible and
ultraviolet rays but allows infrared radiation to pass. All three layers
of infrared film are sensitive to blue, so a yellow filter is used. All
blue radiation is absorbed by this filter.
In regular photography, color film consists of three basic layers:
the top layer is sensitive to blue light, the middle layer is sensitive to
green, and the third layer is sensitive to red. Exposing the film to
light causes a latent image to be formed in the silver halide crystals
that make up each of the three layers. In infrared photography, color
film consists of a top layer that is sensitive to infrared radiation, a
middle layer sensitive to green, and a bottom layer sensitive to red.
“Reversal processing” produces blue in the infrared-sensitive layer,
yellow in the green-sensitive layer, and magenta in the red-sensitive
layer. The blue, yellow, and magenta layers of the film produce the
“false colors” that accentuate the various levels of infrared radiation
shown as red in a color transparency, slide, or print.
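The layer-shifting described above amounts to a channel remapping. The sketch below is hypothetical (the article describes film chemistry, not software), but it shows the same false-color idea in digital form: each recorded band is displayed one step down the spectrum, so infrared appears as red, red as green, and green as blue.

```python
# Illustrative sketch of a false-color infrared composite (my example,
# not the article's): stack the bands so infrared drives the red channel.
import numpy as np

def false_color_composite(green, red, infrared):
    """Stack green/red/infrared exposures (2-D arrays, values 0-1)
    into an RGB image: infrared -> red, red -> green, green -> blue."""
    return np.stack([infrared, red, green], axis=-1)

# Toy 1x2 "scene": healthy foliage reflects infrared strongly, so its
# pixel is dominated by the red channel in the composite.
green = np.array([[0.3, 0.1]])
red = np.array([[0.1, 0.1]])
infrared = np.array([[0.9, 0.1]])
img = false_color_composite(green, red, infrared)
print(img[0, 0])  # the foliage pixel, red channel first
```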
Sir William Herschel
During his long career Sir William Herschel passed from human
music to the music of the spheres, and in doing so revealed
the invisible unlike any astronomer before him.
He was born Friedrich Wilhelm Herschel in Hannover, Germany,
in 1738. Like his brothers, he trained to be a musician in a
local regimental band. In 1757 he had to flee to England
because his regiment was on the losing side of a war.
Settling in the town of Bath, he supported himself
with music, eventually becoming the organist for
the city’s celebrated Octagon Chapel. He studied music
theory in Robert Smith’s book on harmonics
and, discovering another book by Smith about optics
and astronomy, read that too. He was immediately
hooked. By 1773 he was assembling his own telescopes,
and within ten years he had built the most powerful instruments
in the land. He interested King George III in astronomy
and was rewarded with a royal pension that gave him the
leisure to survey the heavens.
Herschel looked deeper into space than anyone before him.
He discovered thousands of double stars and nebulae that had
been invisible to astronomers with less powerful telescopes
than his. He was the first person in recorded history to discover
a planet—Uranus. While trying to learn the construction of the
sun, he conducted hundreds of experiments with light. He
found, unexpectedly, that he could feel heat from the sun even
when visible light was filtered out, and concluded that some solar
radiation—in this case infrared—was invisible to human
eyes.
Late in his career Herschel addressed the grandest of all invisible
aspects of nature: the structure of the universe. His
investigations led him to conclude that the nebulae he had so
often observed were in themselves vast clouds of stars, very far
away—they were galaxies. It was a key conceptual step in the
development of modern cosmology.
By the time Herschel died in 1822, he had trained his sister
Caroline and his son John to carry on his work. Both became celebrated
astronomers in their own right.
The color of the dye that is formed in a particular layer bears no
direct relationship to the color of light to which the layer is sensitive;
because the pairings are not complementary, the resulting colors are
“false.” This means that objects whose colors appear to be similar to
the human eye will not necessarily be recorded as similar colors on
infrared film. A red rose with healthy green leaves will appear on infrared
color film as being yellow with red leaves, because the chlorophyll
contained in the plant leaf reflects infrared radiation and
causes the green leaves to be recorded as red. Infrared radiation
from about 700 nanometers to about 900 nanometers on the electromagnetic
spectrum can be recorded by infrared color film. Above
900 nanometers, infrared radiation exists as heat patterns that must
be recorded by nonphotographic means.
Impact
Infrared photography has proved to be valuable in many of the
sciences and the arts. It has been used to create artistic images that
are often unexpected visual explosions of everyday views. Because
infrared radiation penetrates haze easily, infrared films are often
used in mapping areas or determining vegetation types. Many
cloud-covered tropical areas would be impossible to map without
infrared photography. False-color infrared film can differentiate between
healthy and unhealthy plants, so it is widely used to study insect
and disease problems in plants. Medical research uses infrared
photography to trace blood flow, detect and monitor tumor growth,
and study many other physiological functions that are invisible
to the human eye.
Some forms of cancer can be detected by infrared analysis before
any other tests are able to perceive them. Infrared film is used in
criminology to photograph illegal activities in the dark and to study
evidence at crime scenes. Powder burns around a bullet hole, which
are often invisible to the eye, show clearly on infrared film. In addition,
forgeries in documents and works of art can often be seen
clearly when photographed on infrared film. Archaeologists have
used infrared film to locate ancient sites that are invisible in daylight.
Wildlife biologists also document the behavior of animals at
night with infrared equipment.
See also Autochrome plate; Color film; Fax machine; Instant
photography.
Further Reading
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Cummins, Richard. “Infrared Revisited.” Petersen’s Photographic Magazine 23 (February, 1995).
Paduano, Joseph. The Art of Infrared Photography. 4th ed. Buffalo, N.Y.: Amherst Media, 1998.
Richards, Dan. “The Strange Otherworld of Infrared.” Popular Photography 62, no. 6 (June, 1998).
White, Laurie. Infrared Photography Handbook. Amherst, N.Y.: Amherst Media, 1995.
Instant photography
The invention: Popularly known by its Polaroid trade name, a camera
capable of producing finished photographs immediately after
its film was exposed.
The people behind the invention:
Edwin Herbert Land (1909-1991), an American physicist and
chemist
Howard G. Rogers (1915- ), a senior researcher at Polaroid
and Land’s collaborator
William J. McCune (1915- ), an engineer and head of the
Polaroid team
Ansel Adams (1902-1984), an American photographer and
Land’s technical consultant
The Daughter of Invention
Because he was a chemist and physicist interested primarily in
research relating to light and vision, and to the materials that affect
them, it was inevitable that Edwin Herbert Land should be drawn
into the field of photography. Land founded the Polaroid Corporation
in 1937. During the summer of 1943, while Land and his wife
were vacationing in Santa Fe, New Mexico, with their three-year-old
daughter, Land stopped to take a picture of the child. After the
picture was taken, his daughter asked to see it. When she was told
she could not see the picture immediately, she asked how long it
would be. Within an hour after his daughter’s question, Land had
conceived a preliminary plan for designing the camera, the film,
and the physical chemistry of what would become the instant camera.
Such a device would, he hoped, produce a picture immediately
after exposure.
Within six months, Land had solved most of the essential problems
of the instant photography system. He and a small group of associates
at Polaroid secretly worked on the project. Howard G. Rogers
was Land’s collaborator in the laboratory. Land conferred the
responsibility for the engineering and mechanical phase of the project
on William J. McCune, who led the team that eventually designed
the original camera and the machinery that produced both
the camera and Land’s new film.
The first Polaroid Land camera—the Model 95—produced photographs
measuring 8.25 by 10.8 centimeters; there were eight pictures
to a roll. Rather than being black-and-white, the original Polaroid
prints were sepia-toned (producing a warm, reddish-brown color).
The reasons for the sepia coloration were chemical rather than aesthetic;
as soon as Land’s researchers could devise a workable formula
for sharp black-and-white prints (about ten months after the camera
was introduced commercially), they replaced the sepia film.
A Sophisticated Chemical Reaction
Although the mechanical process involved in the first demonstration
camera was relatively simple, this process was merely
the means by which a highly sophisticated chemical reaction—
the diffusion transfer process—was produced.
In the basic diffusion transfer process, when an exposed negative
image is developed, the undeveloped portion corresponds
to the opposite aspect of the image, the positive. Almost all self-processing
instant photography materials operate according to
three phases—negative development, diffusion transfer, and
positive development. These occur simultaneously, so that positive
image formation begins instantly. With black-and-white materials,
the positive was originally completed in about sixty seconds; with
color materials (introduced later), the process took somewhat longer.
The basic phenomenon of silver in solution diffusing from one
emulsion to another was first observed in the 1850’s, but no practical
use of this action was made until 1939. The photographic use of
diffusion transfer for producing normal continuous-tone images
was investigated actively from the early 1940’s by Land and his associates.
The instant camera using this method was demonstrated
in 1947 and marketed in 1948.
The fundamentals of photographic diffusion transfer are simplest
in a black-and-white peel-apart film. The negative sheet is exposed
in the camera in the normal way. It is then pulled out of the
camera, or film pack holder, by a paper tab. Next, it passes through a
set of rollers, which press it face-to-face with a sheet of receiving material
included in the film pack. Simultaneously, the rollers rupture
a pod of reagent chemicals, which they spread evenly
between the two layers. The reagent contains a strong alkali and a
silver halide solvent, both of which diffuse into the negative emulsion.
There the alkali activates the developing agent, which immediately
reduces the exposed halides to a negative image. At the
same time, the solvent dissolves the unexposed halides. The silver
in the dissolved halides forms the positive image.
Edwin H. Land
Born in Bridgeport, Connecticut, in 1909, Edwin Herbert
Land developed an obsession with color vision. As a boy, he
slept with a copy of an optics textbook under his pillow. When
he went to Harvard to study physics, he found the instruction
too elementary and spent much of the time educating himself at
the New York Public Library. While there, he thought of the first
of his many sight-related inventions.
He realized that by lining up tiny crystals and embedding
them in clear plastic he could make a large, inexpensive light polarizer.
He patented the idea for this “Polaroid” lens in 1929 (the
first of more than five hundred patents) and in 1932 set up a commercial
laboratory with his Harvard physics professor, George
Wheelwright III. Five years later he opened the Polaroid Corporation
in Boston to exploit the commercial potential of the lenses.
They were to be used most famously as sunglasses, camera filters,
eyeglasses for producing three-dimensional effects in movies,
and glare-reduction screens for visual display terminals.
In 1937, with Joseph Mallory, Land invented the vectograph—
a device that superimposed two photographs in order to create
a three-dimensional image. The invention dramatically improved
aerial photography during World War II and the Cold War.
In fact, Land had a hand in designing both the camera carried
aboard Lockheed’s U-2 spy plane and the plane itself.
While not busy running the Polaroid Corporation and overseeing
development of its cameras, Land pursued his passion for
experimenting with color and developed a widely respected theory
of color vision. When he retired in 1982, he launched the
Rowland Institute for Science in Boston, once described as a cross
between a private laboratory and a private art gallery. (Land had
a deep interest in modern art.) He and other scientists there conducted
research on artificial intelligence, genetics, microscopy,
holography, protein dynamics, and color vision. Land died in
1991 in Cambridge, Massachusetts, but the institute carries forward
his legacy of scientific curiosity and practical application.
Impact
The Polaroid Land camera had a tremendous impact on the photographic
industry as well as on the amateur and professional photographer.
Ansel Adams, who was known for his monumental,
ultrasharp black-and-white panoramas of the American West, suggested
to Land ways in which the tonal value of Polaroid film could
be enhanced, as well as new applications for Polaroid photographic
technology.
Soon after it was introduced, Polaroid photography became part
of the American way of life and changed the face of amateur photography
forever. By the 1950’s, Americans had become accustomed
to the world of recorded visual information through films, magazines,
and newspapers; they also had become enthusiastic picture-takers
as a result of the growing trend for simpler and more convenient
cameras. By allowing these photographers not only to record
their perceptions but also to see the results almost immediately, Polaroid
brought people closer to the creative process.
See also Autochrome plate; Brownie camera; Color film; Fax machine;
Xerography.
Further Reading
Adams, Ansel. Polaroid Land Photography Manual. New York: Morgan & Morgan, 1963.
Innovation/Imagination: Fifty Years of Polaroid Photography. New York: H. N. Abrams in association with the Friends of Photography, 1999.
McElheny, Victor K. Insisting on the Impossible: The Life of Edwin Land. Cambridge, Mass.: Perseus Books, 1998.
Olshaker, Mark. The Instant Image. New York: Stein & Day, 1978.
Wensberg, Peter C. Land’s Polaroid. Boston: Houghton Mifflin, 1987.
Interchangeable parts
The invention: A key idea in the late Industrial Revolution, the
interchangeability of parts made possible mass production of
identical products.
The people behind the invention:
Henry M. Leland (1843-1932), president of Cadillac Motor Car
Company in 1908, known as a master of precision
Frederick Bennett, the British agent for Cadillac Motor Car
Company who convinced the Royal Automobile Club to run
the standardization test at Brooklands, England
Henry Ford (1863-1947), founder of Ford Motor Company, who
introduced the moving assembly line into the automobile
industry in 1913
An American Idea
Mass production is a twentieth century methodology that for the
most part is a result of nineteenth century ideas. Although its origins
were mostly American, it is a phenomenon that has since changed the
entire world. The use of interchangeable parts, the feasibility
of which was demonstrated by the Cadillac Motor Car Company
in 1908, was instrumental in making mass production possible.
The British phase of the Industrial Revolution saw the application
of division of labor, the first principle of industrialization, to capitalist-directed
manufacturing processes. Centralized power sources were
connected through shafts, pulleys, and belts to machines housed in
factories. Even after these dramatic changes, the British preferred to
produce unique, handcrafted products formed one step at a time using
general-purpose machine tools. Seldom did they make separate components
to be assembled into standardized products.
Stories about American products that were assembled from fully
interchangeable parts began to reach Great Britain. In 1851, the British
public saw a few of these products on display at an exhibition in
London’s Crystal Palace. In 1854, they were informed by one of their
own investigative commissions that American manufacturers were
building military weapons and a number of consumer products
with separately made parts that could be easily assembled, with little
filing and fitting, by semiskilled workers.
English industrialists had probably heard as much as they ever
wanted to about this so-called “American system of manufacturing”
by the first decade of the twentieth century, when word came
that American companies were building automobiles with parts
manufactured so precisely that they were interchangeable.
The Cadillac
During the fall of 1907, Frederick Bennett, an Englishman who
served as the British agent for the Cadillac Motor Car Company, paid
a visit to the company’s Detroit, Michigan, factory and was amazed
at what he saw. He later described the assembling of the relatively inexpensive
Cadillac vehicles as a demonstration of the beauty and
practicality of precision. He was convinced that if his countrymen
could see what he had seen, they would also be impressed.
Most automobile builders at the time claimed that their vehicles
were built with handcrafted quality, yet at the same time they advertised
that they could supply repair parts that would fit perfectly.
In actuality, machining and filing were almost always required
when parts were replaced, and only shops with proper equipment
could do the job.
Upon his return to London, Bennett convinced the Royal Automobile
Club to sponsor a test of the precision of automobile parts. A
standardization test was set to begin on February 29, 1908, and all of
the companies then selling automobiles were invited to participate.
Only the company that Bennett represented, Cadillac, was willing
to enter the contest.
Three one-cylinder Cadillacs, each painted a different color, were
taken from stock at the company’s warehouse in London to a garage
near the Brooklands race track. The cars were first driven around
the track ten times to prove that they were operable. British mechanics
then dismantled the vehicles, placing their parts in piles in the
center of the garage, making sure that there was no way of identifying
from which car each internal piece came. Then, as a further test,
eighty-nine randomly selected parts were removed from the piles
and replaced with new ones straight from Cadillac’s storeroom in
London. The mechanics then proceeded to reassemble the automobiles,
using only screwdrivers and wrenches.
After the reconstruction, which took two weeks, the cars were
driven from the garage. They were a motley-looking trio, with fenders,
doors, hoods, and wheels of mixed colors. All three were then
driven five hundred miles around the Brooklands track. The British
were amazed. Cadillac was awarded the club’s prestigious Dewar
Trophy, considered in the young automobile industry to be almost
the equivalent of a Nobel Prize. A number of European and American
automobile manufacturers began to consider the promise of interchangeable
parts and the assembly line system.
Henry M. Lel<strong>and</strong><br />
Cadillac’s precision-built automobiles were the result of a lifetime<br />
of experience of Henry M. Lel<strong>and</strong>, an American engineer.<br />
Known in Detroit at the turn of the century as a master of precision,<br />
Lel<strong>and</strong> became the primary connection between a series of nineteenth<br />
century attempts to make interchangeable parts <strong>and</strong> the<br />
large-scale use of precision parts in mass production manufacturing<br />
during the twentieth century.<br />
The first American use of truly interchangeable parts had occurred<br />
in the military, nearly three-quarters of a century before the<br />
test at Brookl<strong>and</strong>s. Thomas Jefferson had written from France about<br />
a demonstration of uniform parts for musket locks in 1785. A few<br />
years later, Eli Whitney attempted to make muskets for the American<br />
military by producing separate parts for assembly using specialized<br />
machines. He was never able to produce the precision necessary<br />
for truly interchangeable parts, but he promoted the idea<br />
intensely. It was in 1822 at the Harpers Ferry Armory in Virginia,<br />
<strong>and</strong> then a few years later at the Springfield Armory in Massachusetts,<br />
that the necessary accuracy in machining was finally achieved<br />
on a relatively large scale.<br />
Lel<strong>and</strong> began his career at the Springfield Armory in 1863, at the<br />
age of nineteen. He worked as a tool builder during the Civil War<br />
years <strong>and</strong> soon became an advocate of precision manufacturing. In<br />
1890, Lel<strong>and</strong> moved to Detroit, where he began a firm, Lel<strong>and</strong> &
Henry Martyn Lel<strong>and</strong><br />
Interchangeable parts / 437<br />
Henry Martyn Lel<strong>and</strong> (1843-1932) is the unsung giant of<br />
early automobile manufacturers, launching two of the bestknown<br />
American car companies, Cadillac <strong>and</strong> Lincoln, <strong>and</strong> influenced<br />
the success of General Motors, as well as introducing<br />
the use of interchangeable parts. Had he allowed a model to be<br />
named after him, as did Henry Ford <strong>and</strong> Ransom Olds, he<br />
might have become a household name too, but he refused any<br />
such suggestion.<br />
Lel<strong>and</strong> worked in factories during his youth. During the<br />
Civil War he honed his skills as a machinist at the U.S. Armory<br />
in Springfield, Massachusetts, helping build rifles with interchangeable<br />
parts. After the war, he learned how to machine<br />
parts to within one-thous<strong>and</strong>th of an inch, fabricated the first<br />
mechanical barber’s clippers, <strong>and</strong> refined the workings of air<br />
brakes for locomotives.<br />
This was all warm-up. In 1890 he moved to Detroit <strong>and</strong><br />
opened his own business, Lel<strong>and</strong> <strong>and</strong> Faulconer Manufacturing<br />
Company, specializing in automobile engines. The 10.25-horsepower<br />
engine he built for Olds in 1901 was rejected, but the single-cylinder<br />
(“one-lunger”) design that powered the first Cadillacs<br />
set him on the high road in the automotive industry. More<br />
innovations followed. He developed the electric starter, electric<br />
lights, <strong>and</strong> dimmable headlights. During World War I he built<br />
airplane engines for the U.S. government, <strong>and</strong> afterward converted<br />
the design for use in his new creation, the Lincoln.<br />
Throughout, he dem<strong>and</strong>ed precision from himself <strong>and</strong> those<br />
working for him. Once, for example, he complained to Alfred P.<br />
Sloan that a lot of ball bearings that Sloan had sold him varied<br />
from the required engineering tolerances <strong>and</strong> showed Sloan a<br />
few misshapen bearings to prove the claim. “Even though you<br />
make thous<strong>and</strong>s,” Lel<strong>and</strong> admonished Sloan, “the first <strong>and</strong> last<br />
should be precisely the same.” Sloan took the lesson very seriously.<br />
When he later led General Motors to the top of the industry,<br />
he credited Lel<strong>and</strong> with teaching him what mass production<br />
was all about.<br />
Faulconer, that would become internationally known for precision<br />
machining. His company did well supplying parts to the bicycle industry<br />
<strong>and</strong> internal combustion engines <strong>and</strong> transmissions to early
438 / Interchangeable parts<br />
automobile makers. In 1899, Lel<strong>and</strong> & Faulconer became the primary<br />
supplier of engines to the first of the major automobile producers,<br />
the Olds Motor Works.<br />
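Leland’s insistence that among thousands of parts “the first and last should be precisely the same” is, in modern terms, a tolerance test applied to every unit in a lot. The sketch below is purely illustrative; the nominal size, the one-thousandth-inch tolerance (the figure the text attributes to Leland’s machining), and the measurements are invented for the example.

```python
# Hypothetical tolerance check in the spirit of Leland's rule: a lot of
# parts is interchangeable only if every unit falls within tolerance.
NOMINAL_IN = 0.2500      # invented nominal bearing diameter, inches
TOLERANCE_IN = 0.001     # one-thousandth of an inch, per the text

def within_tolerance(measured: float) -> bool:
    """True if a measured diameter is close enough to nominal to interchange."""
    return abs(measured - NOMINAL_IN) <= TOLERANCE_IN

batch = [0.2504, 0.2498, 0.2512, 0.2500]   # invented measurements
rejects = [d for d in batch if not within_tolerance(d)]
print(rejects)  # any out-of-tolerance unit disqualifies the lot
```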
In 1902, the directors of another Detroit firm, the Henry Ford
Company, found themselves in a desperate situation. Henry Ford,
the company founder and chief engineer, had resigned after a disagreement
with the firm’s key owner, William Murphy. Leland was
asked to take over the reorganization of the company. Because it
could no longer use Ford’s name, the business was renamed in
memory of the French explorer who had founded Detroit two hundred
years earlier, Antoine de la Mothe Cadillac.
Leland was appointed president of the Cadillac Motor Car Company.
The company, under his influence, soon became known for its
precision manufacturing. He disciplined its suppliers, rejecting anything
that did not meet his specifications, and insisted on precision
machining for all parts. By 1906, Cadillac was outselling all of its
competitors, including Oldsmobile and Ford’s new venture, the
Ford Motor Company. After the Brooklands demonstration in 1908,
Cadillac became recognized worldwide for quality and interchangeability
at a reasonable price.
Impact<br />
The Brookl<strong>and</strong>s demonstration went a long way in proving that<br />
mass-produced goods could be durable <strong>and</strong> of relatively high quality.<br />
It showed that st<strong>and</strong>ardized products, although often less costly<br />
to make, were not necessarily cheap substitutes for h<strong>and</strong>crafted <strong>and</strong><br />
painstakingly fitted products. It also demonstrated that, through<br />
the use of interchangeable parts, the job of repairing such complex<br />
machines as automobiles could be made comparatively simple,<br />
moving maintenance <strong>and</strong> repair work from the well-equipped machine<br />
shop to the neighborhood garage or even to the home.<br />
Because of the international publicity Cadillac received, Lel<strong>and</strong>’s<br />
methods began to be emulated by others in the automobile industry.<br />
His precision manufacturing, as his daughter-in-law would later<br />
write in his biography, “laid the foundation for the future American<br />
[automobile] industry.” The successes of automobile manufacturers<br />
quickly led to the introduction of mass production methods, and of
strategies designed to promote their necessary corollary, mass consumption,<br />
in many other American businesses.<br />
In 1909, Cadillac was acquired by William Crapo Durant as the<br />
flagship company of his new holding company, which he named<br />
General Motors. Lel<strong>and</strong> continued to improve his production methods,<br />
while also influencing his colleagues in the other General Motors<br />
companies to implement many of his techniques. By the mid-<br />
1920’s, General Motors had become the world’s largest manufacturer<br />
of automobiles. Much of its success resulted from extensions<br />
of Leland’s ideas. The company began offering a number of<br />
brand-name vehicles in a variety of price ranges for marketing purposes,<br />
while still keeping the costs of production down by including in<br />
each design a large number of commonly used, highly st<strong>and</strong>ardized<br />
components.<br />
Henry Lel<strong>and</strong> resigned from Cadillac during World War I after<br />
trying to convince Durant that General Motors should play an important<br />
part in the war effort by contracting to build Liberty aircraft<br />
engines for the military. He formed his own firm, named after his favorite<br />
president, Abraham Lincoln, <strong>and</strong> went on to build about four<br />
thous<strong>and</strong> aircraft engines in 1917 <strong>and</strong> 1918. In 1919, ready to make<br />
automobiles again, Lel<strong>and</strong> converted the Lincoln Motor Company<br />
into a car manufacturer. Again he influenced the industry by setting<br />
high st<strong>and</strong>ards for precision, but in 1921 an economic recession<br />
forced his new venture into receivership. Ironically, Lincoln was<br />
purchased at auction by Henry Ford. Lel<strong>and</strong> retired, his name overshadowed<br />
by those of individuals to whom he had taught the importance<br />
of precision <strong>and</strong> interchangeable parts. Ford, as one example,<br />
went on to become one of America’s industrial legends by<br />
applying the st<strong>and</strong>ardized parts concept.<br />
Ford <strong>and</strong> the Assembly Line<br />
In 1913, Henry Ford, relying on the ease of fit made possible<br />
through the use of machined <strong>and</strong> stamped interchangeable parts,<br />
introduced the moving assembly line to the automobile industry.<br />
He had begun production of the Model T in 1908 using stationary<br />
assembly methods, bringing parts to assemblers. After having learned<br />
how to increase component production significantly through experiments<br />
with interchangeable parts and moving assembly methods in<br />
the magneto department, he began to apply this same concept to final<br />
assembly. In the spring of 1913, Ford workers began dragging car<br />
frames past stockpiles of parts for assembly. Soon a power source<br />
was attached to the cars through a chain drive, <strong>and</strong> the vehicles<br />
were pulled past the stockpiles at a constant rate.<br />
From this time on, the pace of tasks performed by assemblers<br />
would be controlled by the rhythm of the moving line. As dem<strong>and</strong><br />
for the Model T increased, the number of employees along the line<br />
was increased <strong>and</strong> the jobs were broken into smaller <strong>and</strong> simpler<br />
tasks. With stationary assembly methods, the time required to assemble<br />
a Model T had averaged twelve <strong>and</strong> one-half person-hours.<br />
Dragging the chassis to the parts cut the time to six hours per vehicle,<br />
<strong>and</strong> the power-driven, constant-rate line produced a Model T<br />
with only ninety-three minutes of labor time. Because of these<br />
amazing increases in productivity, Ford was able to lower the selling<br />
price of the basic model from $900 in 1910 to $260 in 1925. He<br />
had revolutionized automobile manufacturing: The average family<br />
could now afford an automobile.<br />
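The productivity gains described above are easy to verify with simple arithmetic. The following sketch, which uses only the figures quoted in the text, computes each method's speed-up over stationary assembly and the size of the price cut.

```python
# Labor time per Model T under each assembly method, in minutes,
# using the figures quoted in the text.
labor_minutes = {
    "stationary assembly": 12.5 * 60,  # twelve and one-half person-hours
    "dragged chassis": 6 * 60,         # six hours per vehicle
    "powered moving line": 93,         # ninety-three minutes
}

baseline = labor_minutes["stationary assembly"]
for method, minutes in labor_minutes.items():
    speedup = baseline / minutes
    print(f"{method}: {minutes:.0f} min per car "
          f"({speedup:.1f}x stationary assembly)")

# The selling price fell from $900 in 1910 to $260 in 1925.
price_drop = (900 - 260) / 900
print(f"price reduction: {price_drop:.0%}")  # 71%
```

The roughly eightfold labor saving of the powered line, combined with a price cut of about 71 percent, is what put the Model T within reach of the average family.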
Soon the average family would also be able to afford many of the<br />
other new products they had seen in magazines <strong>and</strong> newspapers.<br />
At the turn of the century, there were many new household appliances,<br />
farm machines, ready-made fashions, <strong>and</strong> prepackaged food<br />
products on the market, but only the wealthier class could afford<br />
most of these items. Major consumer goods retailers such as Sears,<br />
Roebuck <strong>and</strong> Company, Montgomery Ward, <strong>and</strong> the Great Atlantic<br />
<strong>and</strong> Pacific Tea Company were anxious to find lower-priced versions<br />
of these products to sell to a growing middle-class constituency.<br />
The methods of mass production that Henry Ford had popularized<br />
seemed to carry promise for these products as well. During<br />
the 1920’s, by working with such key manufacturers as Whirlpool,<br />
Hoover, General Electric, <strong>and</strong> Westinghouse, these large distributors<br />
helped introduce mass production methods into a large number<br />
of consumer product industries. They changed class markets<br />
into mass markets.<br />
The movement toward precision also led to the birth of a separate<br />
industry based on the manufacture of machine tools. A<br />
general-purpose lathe, milling machine, or grinder could be used for<br />
a number of operations, but mass production industries called for<br />
narrow-purpose machines designed for high-speed use in performing one<br />
specialized step in the production process. Many more machines<br />
were now required, one at each step in the production process. Each<br />
machine had to be simpler to operate, with more automatic features,<br />
because of an increased dependence on unskilled workers. The machine<br />
tool industry became the foundation of modern production.<br />
The miracle of mass production that followed, in products as<br />
diverse as airplanes, communication systems, <strong>and</strong> hamburgers,<br />
would not have been possible without the precision insisted upon<br />
by Henry Lel<strong>and</strong> in the first decade of the twentieth century. It<br />
would not have come about without the lessons learned by Henry<br />
Ford in the use of specialized machines <strong>and</strong> assembly methods, <strong>and</strong><br />
it would not have occurred without the growth of the machine tool<br />
industry. Cadillac’s demonstration at Brookl<strong>and</strong>s in 1908 proved<br />
the practicality of precision manufacturing <strong>and</strong> interchangeable<br />
parts to the world. It inspired American manufacturers to continue<br />
to develop these ideas; it convinced Europeans that such production<br />
was possible; <strong>and</strong>, for better or for worse, it played a major part<br />
in changing the world.<br />
See also Assembly line; CAD/CAM; Internal combustion engine.<br />
Further Reading<br />
Hill, Frank Ernest. The Automobile: How It Came, Grew, <strong>and</strong> Has<br />
Changed Our Lives. New York: Dodd, Mead, 1967.<br />
Hounshell, David A. From the American System to Mass Production,<br />
1800-1932. Baltimore: Johns Hopkins University Press, 1984.<br />
Lel<strong>and</strong>, Ottilie M., <strong>and</strong> Minnie Dubbs Millbrook. Master of Precision:<br />
Henry M. Lel<strong>and</strong>. 1966. Reprint. Detroit: Wayne State University<br />
Press, 1996.<br />
Marcus, Alan I., <strong>and</strong> Howard P. Segal. Technology in America: A Brief<br />
History. Fort Worth, Texas: Harcourt Brace College, 1999.<br />
Nevins, Allan, <strong>and</strong> Frank Ernest Hill. The Times, the Man, the Company.<br />
Vol. 1 in Ford. New York: Charles Scribner’s Sons, 1954.
Internal combustion engine<br />
The invention: The most common type of engine in automobiles<br />
and many other vehicles, the internal combustion engine is<br />
characterized by the fact that it burns its fuel internally—in<br />
contrast to engines, such as the steam engine, that burn fuel in external<br />
furnaces.<br />
The people behind the invention:<br />
Sir Harry Ralph Ricardo (1885-1974), an English engineer<br />
Oliver Thornycroft (1885-1956), an engineer <strong>and</strong> works manager<br />
Sir David R<strong>and</strong>all Pye (1886-1960), an engineer <strong>and</strong><br />
administrator<br />
Sir Robert Waley Cohen (1877-1952), a scientist <strong>and</strong> industrialist<br />
The Internal Combustion Engine: 1900-1916<br />
By the beginning of the twentieth century, internal combustion<br />
engines were almost everywhere. City streets in Berlin, London,<br />
<strong>and</strong> New York were filled with automobile <strong>and</strong> truck traffic; gasoline-<br />
<strong>and</strong> diesel-powered boat engines were replacing sails; stationary<br />
steam engines for electrical generation were being edged out by<br />
internal combustion engines. Even aircraft use was at h<strong>and</strong>: To<br />
progress from the Wright brothers’ first manned flight in 1903 to the<br />
fighting planes of World War I took only a little more than a decade.<br />
The internal combustion engines of the time, however, were<br />
primitive in design. They were heavy (10 to 15 pounds per output<br />
horsepower, as opposed to 1 to 2 pounds today), slow (typically<br />
1,000 revolutions per minute or less, as opposed to 2,000 to<br />
5,000 today), <strong>and</strong> extremely inefficient in extracting the energy content<br />
of their fuel. These were not major drawbacks for stationary applications,<br />
or even for road traffic that rarely went faster than 30 or<br />
40 miles per hour, but the advent of military aircraft <strong>and</strong> tanks dem<strong>and</strong>ed<br />
that engines be made more efficient.
Engine <strong>and</strong> Fuel Design<br />
Harry Ricardo, son of an architect <strong>and</strong> gr<strong>and</strong>son (on his mother’s<br />
side) of an engineer, was a central figure in the necessary redesign of<br />
internal combustion engines. As a schoolboy, he built a coal-fired<br />
steam engine for his bicycle, <strong>and</strong> at Cambridge University he produced<br />
a single-cylinder gasoline motorcycle, incorporating many of<br />
his own ideas, which won a fuel-economy competition when it traveled<br />
almost 40 miles on a quart of gasoline. He also began development<br />
of a two-cycle engine called the “Dolphin,” which later was<br />
produced for use in fishing boats <strong>and</strong> automobiles. In fact, in 1911,<br />
Ricardo took his new bride on their honeymoon trip in a<br />
Dolphin-powered car.<br />
The impetus that led to major engine research came in 1916<br />
when Ricardo was an engineer in his family’s firm. The British<br />
government asked for newly designed tank engines, which had to<br />
operate in the dirt <strong>and</strong> mud of battle, at a tilt of up to 35 degrees,<br />
<strong>and</strong> could not give off telltale clouds of blue oil smoke. Ricardo<br />
solved the problem with a special piston design <strong>and</strong> with air circulation<br />
around the carburetor <strong>and</strong> within the engine to keep the oil<br />
cool.<br />
Design work on the tank engines turned Ricardo into a<br />
full-fledged research engineer. In 1917, he founded his own company,<br />
<strong>and</strong> a remarkable series of discoveries quickly followed. He investigated<br />
the problem of detonation of the fuel-air mixture in the internal<br />
combustion cylinder. The mixture is supposed to be ignited<br />
by the spark plug at the top of the compression stroke, with a controlled<br />
flame front spreading at a rate about equal to the speed of<br />
the piston head as it moves downward in the power stroke. Some<br />
fuels, however, detonated (ignited spontaneously throughout the<br />
entire fuel-air mixture) as a result of the compression itself, causing<br />
loss of fuel efficiency <strong>and</strong> damage to the engine.<br />
With the cooperation of Robert Waley Cohen of Shell Petroleum,<br />
Ricardo evaluated chemical mixtures of fuels <strong>and</strong> found that paraffins<br />
(such as n-heptane, the current low-octane st<strong>and</strong>ard) detonated<br />
readily, but aromatics such as toluene were nearly immune to detonation.<br />
He established a “toluene number” rating to describe the<br />
tendency of various fuels to detonate; this number was replaced in
the 1920’s by the “octane number” devised by Thomas Midgley at<br />
the Delco laboratories in Dayton, Ohio.<br />
[Figure: Standard Four-Stroke Internal Combustion Engine]<br />
The four cycles of a standard internal combustion engine (left to right): (1) intake, when air<br />
enters the cylinder and mixes with gasoline vapor; (2) compression, when the cylinder is<br />
sealed and the piston moves up to compress the air-fuel mixture; (3) power, when the spark<br />
plug ignites the mixture, creating more pressure that propels the piston downward; and (4)<br />
exhaust, when the burned gases exit the cylinder through the exhaust port.<br />
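The four strokes described above repeat in a fixed order, one stroke per half-revolution of the crankshaft. This toy Python sketch, purely illustrative, models that cycle.

```python
# The four strokes of the cycle, in order; each stroke occupies one
# half-revolution of the crankshaft, so a full cycle takes two
# crankshaft revolutions.
STROKES = ["intake", "compression", "power", "exhaust"]

def stroke_at(half_revolution: int) -> str:
    """Which stroke the cylinder is in after a given number of
    crankshaft half-revolutions."""
    return STROKES[half_revolution % 4]

cycle = [stroke_at(n) for n in range(4)]
print(cycle)        # ['intake', 'compression', 'power', 'exhaust']
print(stroke_at(4)) # intake (the cycle repeats)
```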
The fuel work was carried out in an experimental engine designed<br />
by Ricardo that allowed direct observation of the flame front<br />
as it spread <strong>and</strong> permitted changes in compression ratio while the<br />
engine was running. Three principles emerged from the investigation:<br />
the fuel-air mixture should be admitted with as much turbulence<br />
as possible, for thorough mixing <strong>and</strong> efficient combustion; the<br />
spark plug should be centrally located to prevent distant pockets of<br />
the mixture from detonating before the flame front reaches them;<br />
<strong>and</strong> the mixture should be kept as cool as possible to prevent detonation.<br />
These principles were then applied in the first truly efficient side-valve<br />
(“L-head”) engine—that is, an engine with the valves in a<br />
chamber at the side of the cylinder, in the engine block, rather than<br />
overhead, in the engine head. Ricardo patented this design, <strong>and</strong> after<br />
winning a patent dispute in court in 1932, he received royalties<br />
or consulting fees for it from engine manufacturers all over the<br />
world.
Impact<br />
The side-valve engine was the workhorse design for automobile<br />
<strong>and</strong> marine engines until after World War II. With its valves actuated<br />
directly by a camshaft in the crankcase, it is simple, rugged,<br />
<strong>and</strong> easy to manufacture. Overhead valves with overhead camshafts<br />
are the standard in automobile engines today, but the side-valve<br />
engine is still found in marine applications and in small engines<br />
for lawn mowers, home generator systems, <strong>and</strong> the like. In its<br />
widespread use <strong>and</strong> its decades of employment, the side-valve engine<br />
represents a scientific <strong>and</strong> technological breakthrough in the<br />
twentieth century.<br />
Ricardo <strong>and</strong> his colleagues, Oliver Thornycroft <strong>and</strong> D. R. Pye,<br />
went on to create other engine designs—notably, the sleeve-valve<br />
aircraft engine that was the basic pattern for most of the great British<br />
planes of World War II <strong>and</strong> early versions of the aircraft jet engine.<br />
For his technical advances <strong>and</strong> service to the government, Ricardo<br />
was elected a Fellow of the Royal Society in 1929, <strong>and</strong> he was<br />
knighted in 1948.<br />
See also Alkaline storage battery; Assembly line; Diesel locomotive;<br />
Dirigible; Gas-electric car; Interchangeable parts; Thermal cracking<br />
process.<br />
Further Reading<br />
A History of the Automotive Internal Combustion Engine. Warrendale,<br />
Pa.: Society of Automotive Engineers, 1976.<br />
Mowery, David C., <strong>and</strong> Nathan Rosenberg. Paths of Innovation: Technological<br />
Change in Twentieth Century America. New York: Cambridge<br />
University Press, 1999.<br />
Ricardo, Harry R. Memories <strong>and</strong> Machines: The Pattern of My Life. London:<br />
Constable, 1968.
The Internet<br />
The invention: A worldwide network of interlocking computer<br />
systems, developed out of a U.S. government project to improve<br />
military preparedness.<br />
The people behind the invention:<br />
Paul Baran, a researcher for the RAND Corporation<br />
Vinton G. Cerf (1943- ), an American computer scientist<br />
regarded as the “father of the Internet”<br />
Cold War Computer Systems<br />
In 1957, the world was stunned by the launching of the satellite<br />
Sputnik I by the Soviet Union. The international image of the United<br />
States as the world’s technology superpower <strong>and</strong> its perceived edge<br />
in the Cold War were instantly brought into question. As part of the<br />
U.S. response, the Defense Department quickly created the Advanced<br />
Research Projects Agency (ARPA) to conduct research into<br />
“comm<strong>and</strong>, control, <strong>and</strong> communications” systems. Military planners<br />
in the Pentagon ordered ARPA to develop a communications<br />
network that would remain usable in the wake of a nuclear attack.<br />
The solution, proposed by Paul Baran, a scientist at the RAND Corporation,<br />
was the creation of a network of linked computers that<br />
could route communications around damage to any part of the system.<br />
Because the centralized control of data flow by major “hub”<br />
computers would make such a system vulnerable, the system could<br />
not have any central comm<strong>and</strong>, <strong>and</strong> all surviving points had to be<br />
able to reestablish contact following an attack on any single point.<br />
This redundancy of connectivity (later known as “packet switching”)<br />
would not monopolize a single circuit for communications, as<br />
telephones do, but would automatically break up computer messages<br />
into smaller packets, each of which could reach a destination<br />
by rerouting along different paths.<br />
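The packet idea sketched above can be illustrated in a few lines of Python. All names and details here are invented for illustration; this is not the actual ARPAnet protocol.

```python
# A message is broken into small numbered packets, each carrying its
# destination address, so the packets can travel by different routes,
# arrive out of order, and still be reassembled at the destination.

def to_packets(message, destination, size=8):
    """Split a message into fixed-size packets with sequence numbers."""
    return [
        {"seq": i, "dest": destination, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    """Restore the message; sequence numbers undo out-of-order arrival."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("ATTACK SURVIVABLE NETWORK", "ucla")
packets.reverse()  # simulate packets arriving by different routes
print(reassemble(packets))  # ATTACK SURVIVABLE NETWORK
```

Because no single circuit is monopolized and no packet depends on any particular path, the message survives the loss of intermediate links, which was exactly Baran's design goal.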
ARPA then began attempting to link university computers over<br />
telephone lines. The historic connecting of four sites conducting<br />
ARPA research was accomplished in 1969 at a computer laboratory
at the University of California at Los Angeles (UCLA), which was<br />
connected to computers at the University of California at Santa<br />
Barbara, the Stanford Research Institute, <strong>and</strong> the University of Utah.<br />
UCLA graduate student Vinton Cerf played a major role in establishing<br />
the connection, which was first known as “ARPAnet.” By<br />
1971, more than twenty sites had been connected to the network, including<br />
supercomputers at the Massachusetts Institute of Technology<br />
<strong>and</strong> Harvard University; by 1981, there were more than two<br />
hundred computers on the system.<br />
The Development of the Internet<br />
Because factors such as equipment failure, overtaxed telecommunications<br />
lines, <strong>and</strong> power outages can quickly reduce or abort<br />
(“crash”) computer network performance, the ARPAnet managers<br />
<strong>and</strong> others quickly sought to build still larger “internetting” projects.<br />
In the late 1980’s, the National Science Foundation built its<br />
own network of five supercomputer centers to give academic researchers<br />
access to high-power computers that had previously been<br />
available only to military contractors. The “NSFnet” connected university<br />
networks by linking them to the closest regional center; its<br />
development put ARPAnet out of commission in 1990. The economic<br />
savings that could be gained from the use of electronic mail<br />
(“e-mail”), which reduced postage <strong>and</strong> telephone costs, were motivation<br />
enough for many businesses <strong>and</strong> institutions to invest in<br />
hardware <strong>and</strong> network connections.<br />
The evolution of ARPAnet <strong>and</strong> NSFnet eventually led to the creation<br />
of the “Internet,” an international web of interconnected government,<br />
education, <strong>and</strong> business computer networks that has been<br />
called “the largest machine ever constructed.” Using appropriate<br />
software, a computer terminal or personal computer can send <strong>and</strong><br />
receive data via an “Internet Protocol” packet (an electronic envelope<br />
with an address). Communications programs on the intervening<br />
networks “read” the addresses on packets moving through the<br />
Internet <strong>and</strong> forward the packets toward their destinations. From<br />
approximately one thous<strong>and</strong> networks in the mid-1980’s, the Internet<br />
grew to an estimated thirty thous<strong>and</strong> connected networks by<br />
1994, with an estimated 25 million users accessing it regularly. The
Vinton Cerf<br />
Although Vinton Cerf is widely hailed as the “father of the<br />
Internet,” he himself disavows that honor. He has repeatedly<br />
emphasized that the Internet was built on the work of countless<br />
others, <strong>and</strong> that he <strong>and</strong> his partner merely happened to make a<br />
crucial contribution at a turning point in Internet development.<br />
The path leading Cerf to the Internet began early. He was<br />
born in New Haven, Connecticut, in 1943. He read widely, devouring<br />
L. Frank Baum’s Oz books <strong>and</strong> science fiction novels—<br />
especially those dealing with real-science themes. When he was<br />
ten, a book called The Boy Scientist fired his interest in science.<br />
After starting high school in Los Angeles in 1958, he got his first<br />
glimpse of computers, which were very different devices in<br />
those days. During a visit to a Santa Monica lab, he inspected a<br />
computer filling three rooms with wires <strong>and</strong> vacuum tubes that<br />
analyzed data from a Canadian radar system built to detect<br />
sneak missile attacks from the Soviet Union. Two years later he<br />
<strong>and</strong> a friend began programming a paper-tape computer at<br />
UCLA while they were still in high school.<br />
After graduating from Stanford University in 1965 with a<br />
degree in computer science, Cerf worked for IBM for two years,<br />
then entered graduate school at UCLA. His work on multiprocessing<br />
computer systems got sidetracked when a Defense<br />
Department request came in asking for help on a packet-switching<br />
project. This new project drew him into the br<strong>and</strong>-new field<br />
of computer networking on a system that became known as the<br />
ARPAnet. In 1972 Cerf returned to Stanford as an assistant professor.<br />
There he <strong>and</strong> a colleague, Robert Kahn, developed the<br />
concepts <strong>and</strong> protocols that became the basis of the modern Internet—a<br />
term they coined in a paper they delivered in 1974.<br />
Afterward Cerf made development of the Internet the focus<br />
of his distinguished career, <strong>and</strong> he later moved back into the<br />
business world. In 1994 he returned to MCI as senior vice president<br />
of Internet architecture. Meanwhile, he founded the Internet<br />
Society in 1992 <strong>and</strong> the Internet Societal Task Force in 1999.<br />
majority of Internet users live in the United States <strong>and</strong> Europe, but<br />
the Internet has continued to exp<strong>and</strong> internationally as telecommunications<br />
lines are improved in other countries.
Impact<br />
Most individual users access the Internet through modems attached<br />
to their home personal computers, subscribing to Internet service<br />
providers for their connections. These services make available such<br />
information sources as on-line encyclopedias and magazines and host<br />
electronic discussion groups and bulletin boards on nearly every<br />
specialized interest area imaginable. Many universities converted large libraries to electronic<br />
form for Internet distribution, with an ambitious example being<br />
Cornell University’s conversion to electronic form of more than<br />
100,000 books on the development of America’s infrastructure.<br />
Numerous corporations <strong>and</strong> small businesses soon began to<br />
market their products <strong>and</strong> services over the Internet. Problems soon<br />
became apparent with the commercial use of the new medium,<br />
however, as the protection of copyrighted material proved to be difficult;<br />
data <strong>and</strong> other text available on the system can be “downloaded,”<br />
or electronically copied. To protect their resources from<br />
unauthorized use via the Internet, therefore, most companies set up<br />
a “firewall” computer to screen incoming communications.<br />
The economic policies of the Bill Clinton administration highlighted<br />
the development of the “information superhighway” for<br />
improving the delivery of social services <strong>and</strong> encouraging new<br />
businesses; however, many governmental agencies <strong>and</strong> offices, including<br />
the U.S. Senate and House of Representatives, have been<br />
slow to install high-speed fiber-optic network links. Nevertheless,<br />
the Internet soon came to contain numerous information sites to improve<br />
public access to the institutions of government.<br />
See also Cell phone; Communications satellite; Fax machine;<br />
Personal computer.<br />
Further Reading<br />
Abbate, Janet. Inventing the Internet. Cambridge, Mass.: MIT Press,<br />
2000.<br />
Brody, Herb. “Net Cerfing.” Technology Review (Cambridge, Mass.)<br />
101, no. 3 (May-June, 1998).<br />
Bryant, Stephen. The Story of the Internet. London: Pearson Education,<br />
2000.
Rodriguez, Karen. “Plenty Deserve Credit as ‘Father’ of the Internet.”<br />
Business Journal 17, no. 27 (October 22, 1999).<br />
Stefik, Mark J., <strong>and</strong> Vinton Cerf. Internet Dreams: Archetypes, Myths,<br />
<strong>and</strong> Metaphors. Cambridge, Mass.: MIT Press, 1997.<br />
“Vint Cerf.” Forbes 160, no. 7 (October 6, 1997).<br />
Wollinsky, Art. The History of the Internet <strong>and</strong> the World Wide Web.<br />
Berkeley Heights, N.J.: Enslow, 1999.
Iron lung<br />
The invention: A mechanical respirator that saved the lives of victims<br />
of poliomyelitis.<br />
The people behind the invention:<br />
Philip Drinker (1894-1972), an engineer who made many<br />
contributions to medicine<br />
Louis Shaw (1886-1940), a respiratory physiologist who<br />
assisted Drinker<br />
Charles F. McKhann III (1898-1988), a pediatrician <strong>and</strong><br />
founding member of the American Board of Pediatrics<br />
A Terrifying Disease<br />
Poliomyelitis (polio, or infantile paralysis) is an infectious viral<br />
disease that damages the central nervous system, causing paralysis<br />
in many cases. Its effect results from the destruction of neurons<br />
(nerve cells) in the spinal cord. In many cases, the disease produces<br />
crippled limbs <strong>and</strong> the wasting away of muscles. In others, polio results<br />
in the fatal paralysis of the respiratory muscles. It is fortunate<br />
that use of the Salk <strong>and</strong> Sabin vaccines beginning in the 1950’s has<br />
virtually eradicated the disease.<br />
In the 1920’s, poliomyelitis was a terrifying disease. Paralysis of<br />
the respiratory muscles caused rapid death by suffocation, often<br />
within only a few hours after the first signs of respiratory distress<br />
had appeared. In 1929, Philip Drinker <strong>and</strong> Louis Shaw, both of Harvard<br />
University, reported the development of a mechanical respirator<br />
that would keep those afflicted with the disease alive for indefinite<br />
periods of time. This device, soon nicknamed the “iron lung,”<br />
helped thous<strong>and</strong>s of people who suffered from respiratory paralysis<br />
as a result of poliomyelitis or other diseases.<br />
Development of the iron lung arose after Drinker, then an assistant<br />
professor in Harvard’s Department of Industrial Hygiene, was<br />
appointed to a Rockefeller Institute commission formed to improve<br />
methods for resuscitating victims of electric shock. The best-known<br />
use of the iron lung—treatment of poliomyelitis—was a result of<br />
numerous epidemics of the disease that occurred from 1898 until
the 1920’s, each leaving thous<strong>and</strong>s of Americans paralyzed.<br />
The concept of the iron lung reportedly arose from Drinker’s observation<br />
of physiological experiments carried out by Shaw <strong>and</strong><br />
Drinker’s brother, Cecil. The experiments involved the placement<br />
of a cat inside an airtight box—a body plethysmograph—with the<br />
cat’s head protruding from an airtight collar. Shaw <strong>and</strong> Cecil Drinker<br />
then measured the volume changes in the plethysmograph to identify<br />
normal breathing patterns. Philip Drinker then placed cats paralyzed<br />
by curare inside plethysmographs and showed that they<br />
could be kept breathing artificially by use of air from a hypodermic<br />
syringe connected to the device.<br />
Next, they proceeded to build a human-sized, plethysmograph-like<br />
machine, with a five-hundred-dollar grant from the New York<br />
Consolidated Gas Company. This was done by a tinsmith <strong>and</strong> the<br />
Harvard Medical School machine shop.<br />
Breath for Paralyzed Lungs<br />
The first machine was tested on Drinker and Shaw, and after several modifications were made, a workable iron lung was made available for clinical use. This machine consisted of a metal cylinder large enough to hold a human being. One end of the cylinder, which contained a rubber collar, slid out on casters along with a stretcher on which the patient was placed. Once the patient was in position and the collar was fitted around the patient's neck, the stretcher was pushed back into the cylinder and the iron lung was made airtight. The iron lung then "breathed" for the patient by using an electric blower to alternately remove and replace air inside the machine.
In the human chest, inhalation occurs when the diaphragm contracts and powerful muscles (which are paralyzed in poliomyelitis sufferers) expand the rib cage. This lowers the air pressure in the lungs and allows inhalation to occur. In exhalation, the diaphragm and chest muscles relax, and air is expelled as the chest cavity returns to its normal size. In cases of respiratory paralysis treated with an iron lung, the air entering or leaving the machine alternately compressed the patient's chest, producing artificial exhalation, and then allowed it to expand so that the chest could fill with air. In this way, iron lungs "breathed" for the patients using them.
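The negative-pressure cycle just described can be sketched with a toy linear model. Everything in the following Python fragment is an illustrative assumption—the compliance value and the pressure waveform are invented for the example, not Drinker's specifications:

```python
# Toy model of negative-pressure ventilation (iron lung).
# All numbers are assumptions chosen for illustration, not clinical values.

COMPLIANCE = 0.1  # liters of chest expansion per cm H2O of pressure drop (assumed)

def chest_volume_change(chamber_pressure_cmh2o):
    """Negative chamber pressure (relative to the atmosphere) expands the
    chest, producing inhalation; positive pressure compresses it."""
    return -COMPLIANCE * chamber_pressure_cmh2o

# One breathing cycle: the blower lowers, then raises, chamber pressure.
cycle = [-15, 0, 5, 0]  # cm H2O, an assumed pressure waveform
for p in cycle:
    dv = chest_volume_change(p)
    phase = "inhale" if dv > 0 else ("exhale" if dv < 0 else "rest")
    print(f"chamber {p:+3d} cm H2O -> chest volume change {dv:+.2f} L ({phase})")
```

The essential point the model captures is the sign convention: the machine never pushes air into the lungs directly; it changes the pressure around the chest and lets the atmosphere do the rest.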
Careful examination of each patient was required to allow technicians to adjust the rate of operation of the machine. A cooling system and ports for drainage lines, intravenous lines, and the other apparatus needed to maintain a wide variety of patients were included in the machine.
The first person treated in an iron lung was an eight-year-old girl afflicted with respiratory paralysis resulting from poliomyelitis. The iron lung kept her alive for five days. Unfortunately, she died from heart failure as a result of pneumonia. The next iron lung patient, a Harvard University student, was confined to the machine for several weeks and later recovered enough to resume a normal life.
Impact
The Drinker respirator, or iron lung, came into use in 1929 and soon was considered indispensable, saving the lives of poliomyelitis victims until the development of the Salk vaccine in the 1950's. Although the iron lung is no longer used, it played a critical role in the development of modern respiratory care, proving that large numbers of patients could be kept alive with mechanical support. The iron lung and polio treatment began an entirely new era in the treatment of respiratory conditions.
In addition to receiving a number of awards and honorary degrees for his work, Drinker was elected president of the American Industrial Hygiene Association in 1942 and became chairman of Harvard's Department of Industrial Hygiene.
See also Electrocardiogram; Electroencephalogram; Heart-lung machine; Pacemaker; Polio vaccine (Sabin); Polio vaccine (Salk).
Further Reading
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Hawkins, Leonard C. The Man in the Iron Lung: The Frederick B. Snite, Jr., Story. Garden City, N.Y.: Doubleday, 1956.
Rudulph, Mimi. Inside the Iron Lung. Buckinghamshire: Kensal Press, 1984.
Laminated glass
The invention: Two sheets of glass bonded by a thin layer of plastic sandwiched between them.
The people behind the invention:
Edouard Benedictus (1879-1930), a French artist
Katharine Burr Blodgett (1898-1979), an American physicist
The Quest for Unbreakable Glass
People have been fascinated for centuries by the delicate transparency of glass and the glitter of crystals. They have also been frustrated by the brittleness and fragility of glass. When glass breaks, it forms sharp pieces that can cut people severely. During the 1800's and early 1900's, a number of people demonstrated ways to make "unbreakable" glass. In 1855 in England, the first "unbreakable" glass panes were made by embedding thin wires in the glass. The embedded wire grid held the glass together when it was struck or subjected to the intense heat of a fire. Wire glass is still used in windows that must be fire resistant. The concept of embedding wire within a glass sheet so that the glass would not shatter was a predecessor of the concept of laminated glass.
A series of inventors in Europe and the United States worked on the idea of using a durable, transparent inner layer of plastic between two sheets of glass to prevent the glass from shattering when it was dropped or struck. In 1899, Charles E. Wade of Scranton, Pennsylvania, obtained a patent for a kind of glass that had a sheet or netting of mica fused within it to bind it. In 1902, Earnest E. G. Street of Paris, France, proposed coating glass battery jars with pyroxylin plastic (celluloid) so that they would hold together if they cracked. In Swindon, England, in 1905, John Crewe Wood applied for a patent for a material that would prevent automobile windshields from shattering and injuring people when they broke. He proposed cementing a sheet of material such as celluloid between two sheets of glass. When the window was broken, the inner material would hold the glass splinters together so that they would not cut anyone.
Katharine Burr Blodgett

Remembering a Fortuitous Fall
Besides the danger of shattering, glass poses another problem. It reflects light—as much as 10 percent of the rays hitting it—and that is bad for many precision instruments. Katharine Burr Blodgett cleared away that problem.
Blodgett was born in 1898 in Schenectady, New York, just months after her father died. Her widowed mother, intent upon giving her and her brother the best upbringing possible, devoted herself to their education and took them abroad to live for extended periods. She succeeded. Blodgett attended Bryn Mawr and then earned a master's degree in physics from the University of Chicago. With the help of a family friend, Irving Langmuir, who later won a Nobel Prize in Chemistry, she was promised a job at the General Electric (GE) research laboratory. However, Langmuir first wanted her to study more physics. Blodgett went to Cambridge University and, under the guidance of Ernest Rutherford, became the first woman to receive a doctorate in physics there. Then she went to work at GE.
Collaborating with Langmuir, Blodgett found that she could coat glass with a film one layer of molecules at a time, a feat never accomplished before. Moreover, the color of light reflected differed with the number of layers of film. She discovered that by adjusting the number of layers she could cancel out the light reflected by the glass beneath, so that as much as 99 percent of natural light would pass through the glass. Producing almost no reflection, this treated glass was "invisible." It was perfect for lenses, such as those in cameras and microscopes. Blodgett also devised a way to measure the thickness of films based on the wavelengths of light they reflect—a color gauge—that became a standard laboratory technique.

Blodgett died in the town of her birth in 1979.
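The cancellation Blodgett exploited can be illustrated with the textbook single-layer result: a film a quarter-wavelength thick reflects nothing at the design wavelength when its refractive index is the geometric mean of air's and the glass's. The Python sketch below uses that simplified model—Blodgett's actual films were built up layer by layer and behaved differently in detail, and magnesium fluoride is cited here only as a familiar modern coating material, not one from her work:

```python
import math

def fresnel_reflectance(n1, n2):
    """Fraction of light reflected at normal incidence at an n1 -> n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def quarter_wave_reflectance(n_air, n_film, n_glass):
    """Reflectance of a glass surface under a quarter-wavelength-thick film,
    the standard thin-film interference result at the design wavelength."""
    return ((n_film**2 - n_air * n_glass) / (n_film**2 + n_air * n_glass)) ** 2

n_air, n_glass = 1.0, 1.5
print(f"bare glass surface: {fresnel_reflectance(n_air, n_glass):.1%}")  # 4.0%
# Perfect cancellation needs n_film = sqrt(n_air * n_glass), about 1.22:
print(f"ideal film index: {math.sqrt(n_air * n_glass):.3f}")
# Magnesium fluoride (n ~ 1.38) is a common practical compromise:
print(f"with MgF2 coating: {quarter_wave_reflectance(n_air, 1.38, n_glass):.1%}")
```

With these numbers, each uncoated surface reflects about 4 percent of the light, and even a non-ideal single coating cuts that to roughly 1.4 percent; stacking layers, as Blodgett did, pushes the transmission toward the 99 percent the text describes.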
In his patent application, Edouard Benedictus described himself as an artist and painter. He was also a poet, musician, and philosopher who was descended from the philosopher Baruch Benedictus Spinoza; he seemed an unlikely contributor to the progress of glass manufacture. In 1903, Benedictus was cleaning his laboratory when he dropped a glass bottle that held a nitrocellulose solution. The solvents, which had evaporated during the years that the bottle had sat on a shelf, had left a strong celluloid coating on the glass. When Benedictus picked up the bottle, he was surprised to see that it had not shattered: It was starred, but all the glass fragments had been held together by the internal celluloid coating. He looked at the bottle closely, labeled it with the date (November, 1903) and the height from which it had fallen, and put it back on the shelf.
One day some years later (the date is uncertain), Benedictus became aware of vehicular collisions in which two young women received serious lacerations from broken glass. He wrote a poetic account of a daydream he had while he was thinking intently about the two women. He described a vision in which the faintly illuminated bottle that had fallen some years before but had not shattered appeared to float down to him from the shelf. He got up, went into his laboratory, and began to work on an idea that originated with his thoughts of the bottle that would not splinter.
Benedictus found the old bottle and devised a series of experiments that he carried out until the next evening. By the time he had finished, he had made the first sheet of Triplex glass, for which he applied for a patent in 1909. He also founded the Société du Verre Triplex (The Triplex Glass Society) in that year. In 1912, the Triplex Safety Glass Company was established in England. The company sold its products for military equipment in World War I, which began two years later.
Triplex glass was the predecessor of laminated glass. Laminated glass is composed of two or more sheets of glass with a thin layer of plastic (usually polyvinyl butyral, although Benedictus used pyroxylin) laminated between the glass sheets using pressure and heat. The plastic layer will yield rather than rupture when subjected to loads and stresses. This prevents the glass from shattering into sharp pieces. Because of this property, laminated glass is also known as "safety glass."
Impact
Even after the protective value of laminated glass was known, the product was not widely used for some years. There were a number of technical difficulties that had to be solved, such as the discoloring of the plastic layer when it was exposed to sunlight; the relatively high cost; and the cloudiness of the plastic layer, which obscured vision—especially at night. Nevertheless, the expanding automobile industry and the corresponding increase in the number of accidents provided the impetus for improving the qualities and manufacturing processes of laminated glass. In the early part of the century, almost two-thirds of all injuries suffered in automobile accidents involved broken glass.
Laminated glass is used in many applications in which safety is important. It is typically used in all windows in cars, trucks, ships, and aircraft. Thick sheets of bullet-resistant laminated glass are used in banks, jewelry displays, and military installations. Thinner sheets of laminated glass are used as security glass in museums, libraries, and other areas where resistance to break-in attempts is needed. Many buildings have large ceiling skylights that are made of laminated glass; if the glass is damaged, it will not shatter, fall, and hurt people below. Laminated glass is used in airports, hotels, and apartments in noisy areas and in recording studios to reduce the amount of noise that is transmitted. It is also used in safety goggles and in viewing ports at industrial plants and test chambers. Edouard Benedictus's recollection of the bottle that fell but did not shatter has thus helped make many situations in which glass is used safer for everyone.
See also Buna rubber; Contact lenses; Neoprene; Plastic; Pyrex glass; Silicones.
Further Reading
Eastman, Joel W. Styling vs. Safety: The American Automobile Industry and the Development of Automotive Safety, 1900-1966. Lanham, Md.: University Press of America, 1984.
Fariss, Robert H. "Fifty Years of Safer Windshields." CHEMTECH 23, no. 9 (September, 1993).
Miel, Rhoda. "New Process Promises Safer Glass." Automotive News 74, no. 5863 (February 28, 2000).
Polak, James L. "Eighty Years Plus of Automotive Glass Development: Windshields Were Once an Option, Today They Are an Integral Part of the Automobile." Automotive Engineering 98, no. 6 (June, 1990).
Laser
The invention: Taking its name from the acronym for light amplification by the stimulated emission of radiation, a laser is a beam of electromagnetic radiation that is monochromatic, highly directional, and coherent. Lasers have found multiple applications in electronics, medicine, and other fields.
The people behind the invention:
Theodore Harold Maiman (1927- ), an American physicist
Charles Hard Townes (1915- ), an American physicist who was a cowinner of the 1964 Nobel Prize in Physics
Arthur L. Schawlow (1921-1999), an American physicist, cowinner of the 1981 Nobel Prize in Physics
Mary Spaeth (1938- ), the American inventor of the tunable laser
Coherent Light
Laser beams differ from other forms of electromagnetic radiation in consisting of a single wavelength, being highly directional, and having waves whose crests and troughs are aligned. A laser beam launched from Earth has produced a spot a few kilometers wide on the Moon, nearly 400,000 kilometers away. Ordinary light would have spread much more and produced a spot several times wider than the Moon. Laser light can also be concentrated so as to yield an enormous intensity of energy, more than that of the surface of the Sun, an impossibility with ordinary light.
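The Moon-spot comparison follows from diffraction: a beam launched through an aperture of diameter D at wavelength λ cannot diverge by much less than about 1.22 λ/D radians. A rough Python check, with an assumed 10-centimeter launch aperture (the article does not specify one), reproduces spots of roughly the sizes described:

```python
import math

# Diffraction-limited divergence: theta = 1.22 * wavelength / aperture.
# The 10 cm launch aperture is an assumed figure for illustration.
wavelength = 694e-9       # meters (ruby-laser red light)
aperture = 0.10           # meters, assumed beam diameter at launch
moon_distance = 3.84e8    # meters

theta = 1.22 * wavelength / aperture       # divergence half-angle, radians
laser_spot = 2 * theta * moon_distance     # spot diameter on the Moon, meters
print(f"laser spot on the Moon: {laser_spot / 1000:.1f} km")

# An ordinary beam spreading by even 1 degree fares far worse:
flood = 2 * math.tan(math.radians(1.0)) * moon_distance
print(f"1-degree beam: {flood / 1000:.0f} km across "
      f"({flood / 3.48e6:.1f} times the Moon's diameter)")
```

Under these assumptions the laser spot comes out a few kilometers wide, while the one-degree beam spans several lunar diameters, matching the comparison in the text.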
In order to appreciate the difference between laser light and ordinary light, one must examine how light of any kind is produced. An ordinary light bulb contains atoms of gas. For the bulb to light up, these atoms must be excited to a state of energy higher than their normal, or ground, state. This is accomplished by sending a current of electricity through the bulb; the current jolts the atoms into the higher-energy state. This excited state is unstable, however, and the atoms will spontaneously return to their ground state by ridding themselves of excess energy.
Scanner device using a laser beam to read shelf labels. (PhotoDisc)
As these atoms emit energy, light is produced. The light emitted by a lamp full of atoms is disorganized and emitted in all directions randomly. This type of light, common to all ordinary sources, from fluorescent lamps to the Sun, is called "incoherent light."
Laser light is different. The excited atoms in a laser emit their excess energy in a unified, controlled manner. The atoms remain in the excited state until there are a great many excited atoms. Then, they are stimulated to emit energy, not independently, but in an organized fashion, with all their light waves traveling in the same direction, crests and troughs perfectly aligned. This type of light is called "coherent light."
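The payoff of that alignment can be shown numerically by treating each atom's wave as a unit phasor: when all phases agree, amplitudes add and the intensity grows as the square of the number of emitters, whereas random phases largely cancel and leave an average intensity equal only to the number of emitters. A short NumPy sketch (the atom and trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000        # number of emitting atoms (arbitrary)
TRIALS = 2_000   # independent snapshots of the random-phase case

# Coherent emission: identical phases -> amplitudes add, intensity = N**2.
coherent_intensity = abs(np.sum(np.exp(1j * np.zeros(N)))) ** 2

# Incoherent emission: random phases -> intensity averages to only N.
phases = rng.uniform(0.0, 2 * np.pi, size=(TRIALS, N))
incoherent_intensity = np.mean(np.abs(np.sum(np.exp(1j * phases), axis=1)) ** 2)

print(f"coherent:   {coherent_intensity:.0f}")    # 1,000,000 (= N**2)
print(f"incoherent: {incoherent_intensity:.0f}")  # ~1,000 (= N, on average)
```

A thousand atoms emitting in step are thus a thousand times brighter in the beam direction than the same atoms emitting at random, which is why lasers concentrate energy so effectively.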
Theory to Reality
In 1958, Charles Hard Townes of Columbia University, together with Arthur L. Schawlow, explored the requirements of the laser in a theoretical paper. In the Soviet Union, F. A. Butayeva and V. A. Fabrikant had amplified light in 1957 using mercury; however, their work was not published for two years, and then not in a scientific journal. The work of the Soviet scientists, therefore, received virtually no attention in the Western world.

Mary Spaeth

Born in 1938, Mary Dietrich Spaeth, inventor of the tunable laser, learned to put things together early. When she was just three years old, her father began giving her tools to play with. She learned to use them well and got interested in science along the way. She studied mathematics and physics at Valparaiso University, graduating in 1960, and earned a master's degree in nuclear physics from Wayne State University in 1962.

The same year she joined Hughes Aircraft Company as a researcher. While waiting for supplies for her regular research in 1966, she examined the lasers in her laboratory. She wondered if, by adding dyes, she could cause the beams to change colors. Cobbling together two lasers—one to boost the power of the test laser—with Duco cement, she added dyes and succeeded at once. She found that she could produce light in a wide range of colors with different dyes. The tunable dye laser afterward was used to separate isotopes in nuclear reactor fuel, to purify plutonium for weapons, and to boost the power of ground-based astronomical telescopes. She also invented a resonant reflector for ruby range finders and performed basic research on passive Q switches used in lasers.

Because Spaeth considered Hughes's promotion policies to discriminate against women scientists, she moved to the Lawrence Livermore National Laboratory in 1974. In 1986 she became the deputy associate director of its Laser Isotope Separation program.
In 1960, Theodore Harold Maiman constructed the first laser in the United States using a single crystal of synthetic pink ruby, shaped into a cylindrical rod about 4 centimeters long and 0.5 centimeter across. The ends, polished flat and made parallel to within about a millionth of a centimeter, were coated with silver to make them mirrors.
It is a property of stimulated emission that stimulated light waves will be aligned exactly (crest to crest, trough to trough, and with respect to direction) with the radiation that does the stimulating. From the group of excited atoms, one atom returns to its ground state, emitting light. That light hits one of the other excited atoms and stimulates it to fall to its ground state and emit light. The two light waves are exactly in step. The light from these two atoms hits other excited atoms, which respond in the same way, "amplifying" the total sum of light.
If the first atom emits light in a direction parallel to the length of the crystal cylinder, the mirrors at both ends bounce the light waves back and forth, stimulating more light and steadily building up an increasing intensity of light. The mirror at one end of the cylinder is constructed to let through a fraction of the light, enabling the light to emerge as a straight, intense, narrow beam.
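A toy numerical model of this feedback loop—with assumed gain and mirror values, not measurements of Maiman's device—shows how the intensity snowballs only when the round-trip amplification outweighs the mirror losses:

```python
# Toy model of light building up between a laser's mirrors.
# GAIN_PER_PASS and the mirror reflectivities are illustrative assumptions.
GAIN_PER_PASS = 1.10   # amplification from stimulated emission on one transit
R_BACK = 1.00          # fully reflective rear mirror
R_FRONT = 0.90         # front mirror leaks 10% of the light out as the beam

# Light grows only if round-trip gain beats the mirror loss:
round_trip_factor = GAIN_PER_PASS**2 * R_BACK * R_FRONT
assert round_trip_factor > 1.0

intensity = 1.0  # one spontaneous emission's worth, arbitrary units
for _ in range(50):
    intensity *= round_trip_factor  # two transits plus both mirror bounces
print(f"after 50 round trips: x{intensity:.0f}")
```

With these numbers each round trip multiplies the circulating light by about 1.09, so fifty trips amplify the initial flash roughly seventyfold; in a real laser the growth continues until the supply of excited atoms is depleted.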
Consequences
When the laser was introduced, it was an immediate sensation. In the eighteen months following Maiman's announcement that he had succeeded in producing a working laser, about four hundred companies and several government agencies embarked on work involving lasers. Activity centered on improving lasers, as well as on exploring their applications. At the same time, there was equal activity in publicizing the near-miraculous promise of the device, in applications covering the spectrum from "death" rays to sight-saving operations. A popular film in the James Bond series, Goldfinger (1964), showed the hero under threat of being sliced in half by a laser beam—an impossibility at the time the film was made because of the low power output of the early lasers.
In the first decade after Maiman's laser, there was some disappointment. Successful use of lasers was limited to certain areas of medicine, such as repairing detached retinas, and to scientific applications, particularly in connection with standards: The speed of light was measured with great accuracy, as was the distance to the Moon. By 1990, partly because of advances in other fields, essentially all the laser's promise had been fulfilled, including the death ray and James Bond's slicer. Yet the laser continued to find its place in technologies not envisioned at the time of the first laser. For example, lasers are now used in computer printers, in compact disc players, and even in arterial surgery.
See also Atomic clock; Compact disc; Fiber-optics; Holography; Laser-diode recording process; Laser vaporization; Optical disk.
Further Reading
Townes, Charles H. How the Laser Happened: Adventures of a Scientist. New York: Oxford University Press, 1999.
Weber, Robert L. Pioneers of Science: Nobel Prize Winners in Physics. 2d ed. Philadelphia: A. Hilger, 1988.
Yen, W. M., Marc D. Levenson, and Arthur L. Schawlow. Lasers, Spectroscopy, and New Ideas: A Tribute to Arthur L. Schawlow. New York: Springer-Verlag, 1987.
Laser-diode recording process
The invention: Video and audio playback system that uses a low-power laser to decode information digitally stored on reflective disks.
The organization behind the invention:
The Philips Corporation, a Dutch electronics firm
The Development of Digital Systems
Since the advent of the computer age, it has been the goal of many equipment manufacturers to provide reliable digital systems for the storage and retrieval of video and audio programs. A need for such devices was perceived for several reasons. Existing storage media (movie film and 12-inch vinyl long-playing records) were relatively large and cumbersome to manipulate and were prone to degradation, breakage, and unwanted noise. Thus, during the late 1960's, two different methods for storing video programs on disc were invented. A mechanical system was demonstrated by the Telefunken Company, while the Radio Corporation of America (RCA) introduced an electrostatic device (a device that used static electricity). The first commercially successful system, however, was developed during the mid-1970's by the Philips Corporation.
Philips devoted considerable resources to creating a digital video system, read by light beams, which could reproduce an entire feature-length film from one 12-inch videodisc. An integral part of this innovation was the fabrication of a device small enough and fast enough to read the vast amounts of greatly compacted data stored on the 12-inch disc without introducing unwanted noise. Although Philips was aware of the other formats, the company opted to use an optical scanner with a small "semiconductor laser diode" to retrieve the digital information. The laser diode is only a fraction of a millimeter in size, operates quite efficiently with high amplitude and relatively low power (0.1 watt), and can be used continuously. Because this configuration operates at a high frequency, its information-carrying capacity is quite large.
Although the digital videodisc system (called "laservision") worked well, the low level of noise and the clear images offered by this system were masked by the low quality of the conventional television monitors on which they were viewed. Furthermore, the high price of the playback systems and the discs made them noncompetitive with the videocassette recorders (VCRs) that were then capturing the market for home systems. VCRs had the additional advantage that programs could be recorded or copied easily. The Philips Corporation turned its attention to utilizing this technology in an area where low noise levels and high quality would be more readily apparent—audio disc systems. By 1979, the company had perfected the basic compact disc (CD) system, which soon revolutionized the world of stereophonic home systems.
Reading Digital Discs with Laser Light
Digital signals (signals composed of numbers) are stored on discs as "pits" impressed into the plastic disc and then coated with a thin reflective layer of aluminum. A laser beam, manipulated by delicate, fast-moving mirrors, tracks and reads the digital information as changes in light intensity. These data are then converted to a varying electrical signal that contains the video or audio information. The data are recovered by means of a sophisticated pickup that consists of the semiconductor laser diode, a polarizing beam splitter, an objective lens, a collective lens system, and a photodiode receiver. The beam from the laser diode is focused by a collimator lens (a lens that collects and focuses light) and then passes through the polarizing beam splitter (PBS). This device acts like a one-way mirror mounted at 45 degrees to the light path. Light from the laser passes through the PBS as if it were a window, but the light emerges in a polarized state (which means that the vibration of the light takes place in only one plane). For the beam reflected from the CD surface, however, the PBS acts like a mirror, since the reflected beam has an opposite polarization. The light is thus deflected toward the photodiode detector. The objective lens is needed to focus the light onto the disc surface. On the outer surface of the transparent disc, the main spot of light has a diameter of 0.8 millimeter, which narrows to only 0.0017 millimeter at the reflective surface. At the surface, the spot is about three times the size of the microscopic pits (0.0005 millimeter).
The data encoded on the disc determine the relative intensity of the reflected light, on the basis of the presence or absence of pits. When the reflected laser beam enters the photodiode, a modulated light beam is changed into a digital signal that becomes an analog (continuous) audio signal after several stages of signal processing and error correction.
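As a conceptual sketch of that read-out step, the fragment below maps a made-up pit/land pattern to intensity levels and then to bits. It is deliberately simplified: real compact discs encode data with eight-to-fourteen modulation (EFM) and layered error correction, which this toy example ignores; it only illustrates the convention that an edge between pit and land reads as a 1 and a steady stretch reads as 0:

```python
# Greatly simplified CD read-out sketch (not the real EFM channel code).
surface = "LLPPPLLPLLLL"  # L = land (bright), P = pit (dim); made-up pattern

# The photodiode sees high reflected intensity over land, low over a pit.
intensity = [1 if c == "L" else 0 for c in surface]

# Decode: a change in intensity between adjacent samples -> 1, otherwise 0.
bits = [1 if a != b else 0 for a, b in zip(intensity, intensity[1:])]
print("".join(map(str, bits)))  # -> 01001011000
```

The key idea the sketch preserves is that information lives in the pit edges, not in the pits themselves, which is what lets the pickup tolerate variations in absolute brightness.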
Consequences
The development of the semiconductor laser diode and associated circuitry for reading stored information has made CD audio systems practical and affordable. These systems can offer the quality of a live musical performance with a clarity that is undisturbed by noise and distortion. Digital systems also offer several other significant advantages over analog devices. The dynamic range (the difference between the softest and the loudest signals that can be stored and reproduced) is considerably greater in digital systems. In addition, digital systems can be copied precisely; the signal is not degraded by copying, as is the case with analog systems. Finally, error-correcting codes can be used to detect and correct errors in transmitted or reproduced digital signals, allowing greater precision and a higher-quality output sound.
Besides laser video systems, there are many other applications for laser-read CDs. Compact disc read-only memory (CD-ROM) is used to store computer text. One standard CD can store 500 megabytes of information, which is about twenty times the storage of a hard-disk drive on a typical home computer. Compact disc systems can also be integrated with conventional televisions (called CD-V) to present twenty minutes of sound and five minutes of sound with picture. Finally, CD systems connected with a computer (CD-I) mix audio, video, and computer programming. These devices allow the user to stop at any point in the program, request more information, and receive that information as sound with graphics, film clips, or as text on the screen.
See also Compact disc; Laser; Videocassette recorder; Walkman cassette player.
Further Reading
Atkinson, Terry. "Picture This: CD's with Video, by Christmas '87." Los Angeles Times (February 20, 1987).
Botez, Dan, and Luis Figueroa. Laser-Diode Technology and Applications II: 16-19 January 1990, Los Angeles, California. Bellingham, Wash.: SPIE, 1990.
Clemens, Jon K. "Video Disks: Three Choices." IEEE Spectrum 19, no. 3 (March, 1982).
"Self-Pulsating Laser for DVD." Electronics Now 67, no. 5 (May, 1996).
Laser eye surgery
The invention: The first significant clinical ophthalmic application of any laser system was the treatment of retinal tears with a pulsed ruby laser.
The people behind the invention:
Charles J. Campbell (1926- ), an ophthalmologist
H. Christian Zweng (1925- ), an ophthalmologist
Milton M. Zaret (1927- ), an ophthalmologist
Theodore Harold Maiman (1927- ), the physicist who developed the first laser
Monkeys and Rabbits
The term "laser" is an acronym for light amplification by the stimulated emission of radiation. The development of the laser for ophthalmic (eye) surgery arose from the initial concentration of conventional light by magnifying lenses.
Within a laser, atoms are highly energized. When one of these atoms loses its energy in the form of light, it stimulates other atoms to emit light of the same frequency and in the same direction. A cascade of these identical light waves is soon produced, which then oscillate back and forth between the mirrors in the laser cavity. One mirror is only partially reflective, allowing some of the laser light to pass through. This light can be concentrated further into a small burst of high intensity.
On July 7, 1960, Theodore Harold Maiman made public his discovery<br />
of the first laser—a ruby laser. Shortly thereafter, ophthalmologists<br />
began using ruby lasers for medical purposes.<br />
The first significant medical uses of the ruby laser occurred in<br />
1961, with experiments on animals conducted by Charles J. Campbell<br />
in New York, H. Christian Zweng, and Milton M. Zaret. Zaret and his<br />
colleagues produced photocoagulation (a thickening or drawing together<br />
of substances by use of light) of the eyes of rabbits by flashes<br />
from a ruby laser. Sufficient energy was delivered to cause immediate<br />
thermal injury to the retina and iris of the rabbit. The beam also was<br />
directed to the interior of the rabbit eye, resulting in retinal coagulations.<br />
The team examined the retinal lesions and pointed out both<br />
the possible advantages of the laser as a tool for therapeutic photocoagulation<br />
and its potential applications in medical research.<br />
In 1962, Zweng, along with several of his associates, began experimenting<br />
with laser photocoagulation on the eyes of monkeys<br />
and rabbits in order to establish parameters for the use of lasers on<br />
the human eye.<br />
Reflected by Blood<br />
The vitreous humor, a transparent jelly that usually fills the vitreous<br />
cavity of the eyes of younger individuals, commonly shrinks with age,<br />
with myopia, or with certain pathologic conditions. As these conditions<br />
occur, the vitreous humor begins to separate from the adjacent<br />
retina. In some patients, the separating vitreous humor produces a<br />
traction (pulling), causing a retinal tear to form. Through this opening in<br />
the retina, liquefied vitreous humor can pass to a site underneath the<br />
retina, producing retinal detachment <strong>and</strong> loss of vision.<br />
A laser can be used to cause photocoagulation of a retinal tear. As a<br />
result, an adhesive scar forms between the retina surrounding the<br />
tear and the underlying layers so that, despite traction, the retina<br />
does not detach. If more than a small area of retina has detached, the<br />
laser often is ineffective <strong>and</strong> major retinal detachment surgery must<br />
be performed. Thus, in the experiments of Campbell and Zweng, the<br />
ruby laser was used to prevent, rather than treat, retinal detachment.<br />
In subsequent experiments with humans, all patients were treated<br />
with the experimental laser photocoagulator without anesthesia.<br />
Although usually no attempt was made to seal holes or tears, the<br />
diseased portions of the retina were walled off satisfactorily so that<br />
no detachments occurred. One problem that arose involved microaneurysms.<br />
A “microaneurysm” is a tiny aneurysm, or blood-filled<br />
bubble extending from the wall of a blood vessel. When attempts to<br />
obliterate microaneurysms were unsuccessful, the researchers postulated<br />
that the color of the ruby pulse so resembled the red of blood<br />
that the light was reflected rather than absorbed. They believed that<br />
another lasing material emitting light in another part of the spectrum<br />
might have performed more successfully.
Previously, xenon-arc lamp photocoagulators had been used to<br />
treat retinal tears. The long exposure time required of these systems,<br />
combined with their broad spectral range emission (versus<br />
the single wavelength output of a laser), however, made the retinal<br />
spot on which the xenon-arc could be focused too large for many<br />
applications. Focused laser spots on the retina could be as small as<br />
50 microns.<br />
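The advantage of a small focused spot can be made concrete with a back-of-the-envelope power-density calculation. In the sketch below, the one-watt power level and the 1-millimeter xenon-arc spot are illustrative assumptions, not figures from the original experiments; only the 50-micron laser spot size comes from the text.<br />

```python
import math

# Power density (irradiance) of a focused beam: I = P / (pi * r**2).
# The 1 W power and the 1 mm xenon-arc spot are illustrative assumptions;
# the 50-micron laser spot size is the figure given in the text.
def irradiance_w_per_cm2(power_w, spot_diameter_m):
    radius_m = spot_diameter_m / 2
    area_m2 = math.pi * radius_m ** 2
    return power_w / area_m2 / 1e4  # convert W/m^2 to W/cm^2

laser = irradiance_w_per_cm2(1.0, 50e-6)  # 50-micron laser spot
xenon = irradiance_w_per_cm2(1.0, 1e-3)   # 1-mm xenon-arc spot
print(f"laser: {laser:.0f} W/cm^2, xenon: {xenon:.0f} W/cm^2")
# A spot 20 times smaller in diameter concentrates the same power
# 400 times more densely.
```

Because intensity scales with the inverse square of spot diameter, the laser's small, single-wavelength focus delivers far more heating per unit area than the broad xenon-arc spot.<br />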
Consequences<br />
The first laser in ophthalmic use by Campbell, Zweng, and Zaret,<br />
among others, was a solid laser—Maiman’s ruby laser. While the results<br />
they achieved with this laser were more impressive than with<br />
the previously used xenon-arc, in the decades following these experiments,<br />
argon gas replaced ruby as the most frequently used material<br />
in treating retinal tears.<br />
Argon laser energy is delivered to the area around the retinal tear<br />
through a slit lamp or by using an intraocular probe introduced directly<br />
into the eye. The argon wavelength is transmitted through the<br />
clear structures of the eye, such as the cornea, lens, and vitreous.<br />
This beam is composed of blue-green light that can be effectively<br />
aimed at the desired portion of the eye. Nevertheless, the beam can<br />
be absorbed by cataracts and by vitreous or retinal blood, decreasing<br />
its effectiveness.<br />
Moreover, while the ruby laser was found to be highly effective<br />
in producing an adhesive scar, it was not useful in the treatment of<br />
vascular diseases of the eye. A series of laser sources, each with different<br />
characteristics, was considered, investigated, and used clinically<br />
for various durations during the period that followed Campbell<br />
and Zweng’s experiments.<br />
Other laser types that are being adapted for use in ophthalmology<br />
are carbon dioxide lasers for scleral surgery (surgery on the<br />
tough, white, fibrous membrane covering the entire eyeball except<br />
the area covered by the cornea) and eye wall resection, dye lasers to<br />
kill or slow the growth of tumors, excimer lasers for their ability to<br />
break down corneal tissue without heating, and pulsed erbium lasers<br />
used to cut intraocular membranes.
See also Contact lenses; Coronary artery bypass surgery; Laser;<br />
Laser vaporization.<br />
Further Reading<br />
Constable, Ian J., and Arthur Siew Ming Lin. Laser: Its Clinical Uses<br />
in Eye Diseases. Edinburgh: Churchill Livingstone, 1981.<br />
Guyer, David R. Retina, Vitreous, Macula. Philadelphia: Saunders,<br />
1999.<br />
Hecht, Jeff. Laser Pioneers. Rev. ed. Boston: Academic Press, 1992.<br />
Smiddy, William E., Lawrence P. Chong, and Donald A. Frambach.<br />
Retinal Surgery <strong>and</strong> Ocular Trauma. Philadelphia: Lippincott, 1995.
Laser vaporization<br />
The invention: Technique using laser light beams to vaporize the<br />
plaque that clogs arteries.<br />
The people behind the invention:<br />
Albert Einstein (1879-1955), a theoretical physicist<br />
Theodore Harold Maiman (1927- ), inventor of the laser<br />
Light, Lasers, <strong>and</strong> Coronary Arteries<br />
Visible light, a type of electromagnetic radiation, is actually a<br />
form of energy. The fact that the light beams produced by a light<br />
bulb can warm an object demonstrates that this is the case. Light<br />
beams are radiated in all directions by a light bulb. In contrast, the<br />
device called the “laser” produces light that travels in the form of a<br />
“coherent” unidirectional beam. Coherent light beams can be focused<br />
on very small areas, generating sufficient heat to melt steel.<br />
The term “laser” was coined in 1957 by R. Gordon Gould of<br />
Columbia University. It stands for light amplification by stimulated<br />
emission of radiation, the means by which laser light beams are<br />
made. Many different materials—including solid ruby gemstones,<br />
liquid dye solutions, and mixtures of gases—can produce such<br />
beams in a process called “lasing.” The different types of lasers yield<br />
light beams of different colors that have many uses in science, industry,<br />
and medicine. For example, ruby lasers, which were developed<br />
in 1960, are widely used in eye surgery. In 1983, a group of<br />
physicians in Toulouse, France, used a laser for cardiovascular treatment.<br />
They used the laser to vaporize the “atheroma” material that<br />
clogs the arteries in the condition called “atherosclerosis.” The technique<br />
that they used is known as “laser vaporization surgery.”<br />
Laser Operation, Welding, and Surgery<br />
Lasers are electronic devices that emit intense beams of light<br />
when a process called “stimulated emission” occurs. The principles<br />
of laser operation, including stimulated emission, were established<br />
by Albert Einstein and other scientists in the first third of the twentieth<br />
century. In 1960, Theodore H. Maiman of the Hughes Research<br />
Center in Malibu, California, built the first laser, using a ruby crystal<br />
to produce a laser beam composed of red light.<br />
All lasers are made up of three main components. The first of<br />
these, the laser’s “active medium,” is a solid (like Maiman’s ruby<br />
crystal), a liquid, or a gas that can be made to lase. The second component<br />
is a flash lamp or some other light energy source that puts<br />
light into the active medium. The third component is a pair of mirrors<br />
that are situated on both sides of the active medium and are designed<br />
in such a way that one mirror transmits part of the energy<br />
that strikes it, yielding the light beam that leaves the laser.<br />
Lasers can produce energy because light is one of many forms of<br />
energy that are called, collectively, electromagnetic radiation (among<br />
the other forms of electromagnetic radiation are X rays and radio<br />
waves). These forms of electromagnetic radiation have different wavelengths;<br />
the shorter the wavelength, the higher the energy. That<br />
energy comes in discrete packets called “quanta.” The emission of<br />
light quanta from atoms that are said to be in the “excited state” releases<br />
energy, and the absorption of quanta by unexcited atoms—<br />
atoms said to be in the “ground state”—excites those atoms.<br />
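The wavelength-energy rule can be checked with a short calculation. This is a sketch for illustration; the 694.3 nm ruby and 514.5 nm argon wavelengths are standard textbook values, not figures given in this article.<br />

```python
# Photon energy E = h * c / wavelength: shorter wavelengths carry more
# energy per quantum. Wavelength values are standard textbook figures.
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, meters per second

def photon_energy_ev(wavelength_nm):
    """Return the energy of one photon, in electron volts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules to electron volts

ruby_ev = photon_energy_ev(694.3)   # red light from a ruby laser
argon_ev = photon_energy_ev(514.5)  # blue-green light from an argon laser
print(f"ruby: {ruby_ev:.2f} eV, argon: {argon_ev:.2f} eV")
```

The blue-green argon quantum carries roughly a third more energy than the red ruby quantum, consistent with the rule stated above.<br />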
The familiar light bulb spontaneously and haphazardly emits<br />
light of many wavelengths from excited atoms. This emission occurs<br />
in all directions and at widely varying times. In contrast, the<br />
light reflection between the mirrors at the ends of a laser causes all<br />
of the many excited atoms present in the active medium simultaneously<br />
to emit light waves of the same wavelength. This process is<br />
called “stimulated emission.”<br />
Stimulated emission ultimately causes a laser to yield a beam of<br />
coherent light, which means that the wavelength, emission time,<br />
and direction of all the waves in the laser beam are the same. The<br />
use of focusing devices makes it possible to convert an emitted laser<br />
beam into a point source that can be as small as a few thousandths of<br />
an inch in diameter. Such focused beams are very hot, and they can<br />
be used for such diverse functions as cutting or welding metal objects<br />
and performing delicate surgery. The nature of the active medium<br />
used in a laser determines the wavelength of its emitted light<br />
beam; this in turn dictates both the energy of the emitted quanta and<br />
the appropriate uses for the laser.<br />
A blocked artery (top) can be threaded with a flexible optical fiber or bundle of fibers until<br />
it reaches the blockage; the fiber then emits laser light, vaporizing the plaque (bottom) and<br />
restoring circulation.<br />
Maiman’s ruby laser, for example, has been used since the 1960’s<br />
in eye surgery to reattach detached retinas. This is done by focusing<br />
the laser on the tiny retinal tear that causes a retina to become detached.<br />
The very hot, high-intensity light beam then “welds” the<br />
retina back into place, bloodlessly, by burning it to produce scar tissue.<br />
The burning process has no effect on nearby tissues. Other<br />
types of lasers have been used in surgeries on the digestive tract and<br />
the uterus since the 1970’s.<br />
In 1983, a group of physicians began using lasers to treat cardiovascular<br />
disease. The original work, which was carried out by a<br />
number of physicians in Toulouse, France, involved the vaporization<br />
of atheroma deposits (atherosclerotic plaque) in a human ar-
tery. This very exciting event added a new method to medical science’s<br />
arsenal of life-saving techniques.<br />
Consequences<br />
Since their discovery, lasers have been used for many purposes<br />
in science <strong>and</strong> industry. Such uses include the study of the laws of<br />
chemistry and physics, photography, communications, and surveying.<br />
Lasers have been utilized in surgery since the mid-1960’s, and<br />
their use has had a tremendous impact on medicine. The first type<br />
of laser surgery to be conducted was the repair of detached retinas<br />
via ruby lasers. This technique has become the method of choice for<br />
such eye surgery because it takes only minutes to perform rather<br />
than the hours required for conventional surgical methods. It is also<br />
beneficial because the lasing of the surgical site cauterizes that site,<br />
preventing bleeding.<br />
In the late 1970’s, the use of other lasers for abdominal cancer<br />
surgery and uterine surgery began and flourished. In these<br />
forms of surgery, more powerful lasers are used. In the 1980’s,<br />
laser vaporization surgery (LVS) began to be used to clear atherosclerotic<br />
plaque (atheromas) from clogged arteries. This methodology<br />
gives cardiologists a useful new tool. Before LVS was<br />
available, surgeons dislodged atheromas by means of “transluminal<br />
angioplasty,” which involved pushing small, fluoroscope-guided<br />
inflatable balloons through clogged arteries.<br />
See also Blood transfusion; CAT scanner; Coronary artery bypass<br />
surgery; Electrocardiogram; Laser; Laser eye surgery; Ultrasound.<br />
Further Reading<br />
Fackelmann, Kathleen. “Internal Laser Blast Might Ease Heart<br />
Pain.” USA Today (March 8, 1999).<br />
Hecht, Jeff. Laser Pioneers. Rev. ed. Boston: Academic Press, 1992.<br />
“Is Cervical Laser Therapy Painful?” Lancet no. 8629 (January 14,<br />
1989).
Lothian, Cheri L. “Laser Angioplasty: Vaporizing Coronary Artery<br />
Plaque.” Nursing 22, no. 1 (January, 1992).<br />
“New Cool Laser Procedure Has Promise for Treating Blocked Coronary<br />
Arteries.” Wall Street Journal (May 15, 1989).<br />
Rundle, Rhonda L. “FDA Approves Laser Systems for Angioplasty.”<br />
Wall Street Journal (February 3, 1992).<br />
Sutton, C. J. G., and Michael P. Diamond. Endoscopic Surgery for Gynecologists.<br />
Philadelphia: W. B. Saunders, 1993.
Long-distance radiotelephony<br />
The invention: The first radio transmissions from the United States<br />
to Europe opened a new era in telecommunications.<br />
The people behind the invention:<br />
Guglielmo Marconi (1874-1937), Italian inventor of transatlantic<br />
telegraphy<br />
Reginald Aubrey Fessenden (1866-1932), an American radio<br />
engineer<br />
Lee de Forest (1873-1961), an American inventor<br />
Harold D. Arnold (1883-1933), an American physicist<br />
John J. Carty (1861-1932), an American electrical engineer<br />
An Accidental Broadcast<br />
The idea of commercial transatlantic communication was first<br />
conceived by Italian physicist and inventor Guglielmo Marconi, the<br />
pioneer of wireless telegraphy. Marconi used a spark transmitter to<br />
generate radio waves that were interrupted, or modulated, to form<br />
the dots and dashes of Morse code. The rapid generation of sparks<br />
created an electromagnetic disturbance that sent radio waves of different<br />
frequencies into the air—a broad, noisy transmission that was<br />
difficult to tune <strong>and</strong> detect.<br />
The inventor Reginald Aubrey Fessenden produced an alternative<br />
method that became the basis of radio technology in the twentieth<br />
century. His continuous radio waves kept to one frequency,<br />
making them much easier to detect at long distances. Furthermore,<br />
the continuous waves could be modulated by an audio signal, making<br />
it possible to transmit the sound of speech.<br />
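Fessenden's scheme, a continuous carrier wave whose amplitude is varied by an audio signal, is what later became known as amplitude modulation. Below is a minimal numerical sketch; the frequencies and modulation depth are illustrative assumptions, not values from Fessenden's equipment.<br />

```python
import math

# Amplitude modulation: an audio signal varies the amplitude of a
# fixed-frequency carrier. All values here are illustrative.
CARRIER_HZ = 100_000  # continuous-wave carrier
AUDIO_HZ = 1_000      # a single voice-band tone
MOD_INDEX = 0.5       # modulation depth

def am_sample(t):
    """One sample of the modulated wave at time t (seconds)."""
    envelope = 1.0 + MOD_INDEX * math.sin(2 * math.pi * AUDIO_HZ * t)
    return envelope * math.sin(2 * math.pi * CARRIER_HZ * t)

# One millisecond of signal sampled at 1 MHz: one full audio cycle.
samples = [am_sample(n / 1_000_000) for n in range(1000)]
peak = max(abs(s) for s in samples)
print(f"peak amplitude: {peak:.2f}")  # envelope swings between 0.5 and 1.5
```

A receiver tuned to the single carrier frequency can recover the audio by tracking the envelope, which is why continuous waves were so much easier to detect than the broad, noisy spark transmissions.<br />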
Fessenden used an alternator to generate electromagnetic waves<br />
at the high frequencies required in radio transmission. It was specially<br />
constructed at the laboratories of the General Electric Company.<br />
The machine was shipped to Brant Rock, Massachusetts, in<br />
1906 for testing. Radio messages were sent to a boat cruising offshore,<br />
and the feasibility of radiotelephony was thus demonstrated.<br />
Fessenden followed this success with a broadcast of messages and<br />
music between Brant Rock and a receiving station constructed at<br />
Plymouth, Massachusetts.<br />
The equipment installed at Brant Rock had a range of about 160<br />
kilometers. The transmission distance was determined by the strength<br />
of the electric power delivered by the alternator, which was measured<br />
in watts. Fessenden’s alternator was rated at 500 watts, but it<br />
usually delivered much less power.<br />
Yet this was sufficient to send a radio message across the Atlantic.<br />
Fessenden had built a receiving station at Machrihanish, Scotland,<br />
to test the operation of a large rotary spark transmitter that he<br />
had constructed. An operator at this station picked up the voice of<br />
an engineer at Brant Rock who was sending instructions to Plymouth.<br />
Thus, the first radiotelephone message had been sent across<br />
the Atlantic by accident. Fessenden, however, decided not to make<br />
this startling development public. The station at Machrihanish was<br />
destroyed in a storm, making it impossible to carry out further tests.<br />
The successful transmission undoubtedly had been the result of exceptionally<br />
clear atmospheric conditions that might never again favor<br />
the inventor.<br />
One of the parties following the development of the experiments<br />
in radio telephony was the American Telephone and Telegraph<br />
(AT&T) Company. Fessenden entered into negotiations to sell his<br />
system to the telephone company, but, because of the financial panic<br />
of 1907, the sale was never made.<br />
Virginia to Paris <strong>and</strong> Hawaii<br />
The English physicist John Ambrose Fleming had invented a two-element<br />
(diode) vacuum tube in 1904 that could be used to generate<br />
and detect radio waves. Two years later, the American inventor Lee<br />
de Forest added a third element to the diode to produce his “audion”<br />
(triode), which was a more sensitive detector. John J. Carty, head of a<br />
research and development effort at AT&T, examined these new devices<br />
carefully. He became convinced that an electronic amplifier, incorporating<br />
the triode into its design, could be used to increase the<br />
strength of telephone signals and to transmit them over long distances.<br />
On Carty’s advice, AT&T purchased the rights to de Forest’s<br />
audion. A team of about twenty-five researchers, under the leader-
Reginald Aubrey Fessenden<br />
Reginald Aubrey Fessenden was born in Canada in 1866 to<br />
a small-town minister and his wife. After graduating from<br />
Bishop’s College in Lennoxville, Quebec, he took a job as head<br />
of Whitney Institute in Bermuda. However, he was brilliant and<br />
volatile and had greater ambitions. After two years, he landed a<br />
job as a tester for his idol, Thomas Edison. Soon he was working<br />
as an engineer and chemist.<br />
Fessenden became a professor of electrical engineering at<br />
Purdue University in 1892 and then a year later at the University<br />
of Pittsburgh. His ideas were often advanced, so far advanced<br />
that some were not developed until much later, and by<br />
others. His first patented invention, an electrolyte detector in<br />
1900, was far more sensitive than others in use and made it possible<br />
to pick up radio signals carrying complex sound. To transmit<br />
such signals, he pioneered the use of carrier waves. During<br />
his career he registered more than three hundred patents.<br />
Suspicious and feisty, he also spent a lot of time in disputes,<br />
and frequently in court, over his inventions. He sued his backers<br />
at the National Electric Signaling Company over rights to<br />
operate a connection to Great Britain, and won a $406,000 settlement,<br />
which bankrupted the company. He sued Radio Corporation<br />
of America (RCA) claiming it prevented him from exploiting<br />
his own patents commercially. RCA settled out of court but<br />
was enriched by Fessenden’s invention.<br />
Having returned to Bermuda, Fessenden died in 1932. He<br />
never succeeded in winning the fame and wealth from radio<br />
that he felt were due to him.<br />
ship of physicist Harold D. Arnold, were assigned the job of perfecting<br />
the triode <strong>and</strong> turning it into a reliable amplifier. The improved<br />
triode was responsible for the success of transcontinental cable telephone<br />
service, which was introduced in January, 1915. The triode<br />
was also the basis of AT&T’s foray into radio telephony.<br />
Carty’s research plan called for a system with three components:<br />
an oscillator to generate the radio waves, a modulator to add the<br />
audio signals to the waves, and an amplifier to transmit the radio<br />
waves. The total power output of the system was 7,500 watts,<br />
enough to send the radio waves over thous<strong>and</strong>s of kilometers.
The apparatus was installed in the U.S. Navy’s radio tower in<br />
Arlington, Virginia, in 1915. Radio messages from Arlington were<br />
picked up at a receiving station in California, a distance of 4,000 kilometers,<br />
then at a station in Pearl Harbor, Hawaii, which was 7,200<br />
kilometers from Arlington. AT&T’s engineers had succeeded in<br />
joining the company telephone lines with the radio transmitter at<br />
Arlington; therefore, the president of AT&T, Theodore Vail, could<br />
pick up his telephone and talk directly with someone in California.<br />
The next experiment was to send a radio message from Arlington<br />
to a receiving station set up in the Eiffel Tower in Paris. After several<br />
unsuccessful attempts, the telephone engineers in the Eiffel Tower<br />
finally heard Arlington’s messages on October 21, 1915. The AT&T<br />
receiving station in Hawaii also picked up the messages. The two receiving<br />
stations had to send their reply by telegraph to the United<br />
States because both stations were set up to receive only. Two-way<br />
radio communication was still years in the future.<br />
Impact<br />
The announcement that messages had been received in Paris was<br />
front-page news and brought about an outburst of national pride in<br />
the United States. The demonstration of transatlantic radio telephony<br />
was more important as publicity for AT&T than as a scientific<br />
advance. All the credit went to AT&T and to Carty’s laboratory.<br />
Both Fessenden and de Forest attempted to draw attention to their<br />
contributions to long-distance radio telephony, but to no avail. The<br />
Arlington-to-Paris transmission was a triumph for corporate public<br />
relations <strong>and</strong> corporate research.<br />
The development of the triode had been achieved with large<br />
teams of highly trained scientists—in contrast to the small-scale efforts<br />
of Fessenden and de Forest, who had little formal scientific<br />
training. Carty’s laboratory was an example of the new type of industrial<br />
research that was to dominate the twentieth century. The<br />
golden days of the lone inventor, in the mold of Thomas Edison or<br />
Alexander Graham Bell, were gone.<br />
In the years that followed the first transatlantic radio telephone<br />
messages, little was done by AT&T to advance the technology or to<br />
develop a commercial service. The equipment used in the 1915 dem-
onstration was more a makeshift laboratory apparatus than a prototype<br />
for a new radio technology. The messages sent were short and<br />
faint. There was a great gulf between hearing “hello” and “goodbye”<br />
amid the static and carrying on a true conversation. The many predictions of a direct telephone<br />
connection between New York and other major cities overseas were<br />
premature. It was not until 1927 that a transatlantic radio circuit was<br />
opened for public use. By that time, a new technological direction<br />
had been taken, <strong>and</strong> the method used in 1915 had been superseded<br />
by shortwave radio communication.<br />
See also Communications satellite; Internet; Long-distance telephone;<br />
Radio; Radio crystal sets; Radiotelephony; Television.<br />
Further Reading<br />
Marconi, Degna. My Father: Marconi. Toronto: Guernica Editions,<br />
1996.<br />
Masini, Giancarlo. Marconi. New York: Marsilio, 1995.<br />
Seitz, Frederick. The Cosmic Inventor: Reginald Aubrey Fessenden.<br />
Philadelphia: American Philosophical Society, 1999.<br />
Streissguth, Thomas. Communications: Sending the Message. Minneapolis,<br />
Minn.: Oliver Press, 1997.
Long-distance telephone<br />
The invention: System for conveying voice signals via wires over<br />
long distances.<br />
The people behind the invention:<br />
Alexander Graham Bell (1847-1922), a Scottish American<br />
inventor<br />
Thomas A. Watson (1854-1934), an American electrical engineer<br />
The Problem of Distance<br />
The telephone may be the most important invention of the nineteenth<br />
century. The device developed by Alexander Graham Bell<br />
and Thomas A. Watson opened a new era in communication and<br />
made it possible for people to converse over long distances for the<br />
first time. During the last two decades of the nineteenth century and<br />
the first decade of the twentieth century, the American Telephone<br />
and Telegraph (AT&T) Company continued to refine and upgrade<br />
telephone facilities, introducing such innovations as automatic dialing<br />
and long-distance service.<br />
One of the greatest challenges faced by Bell engineers was to<br />
develop a way of maintaining signal quality over long distances.<br />
Telephone wires were susceptible to interference from electrical<br />
storms and other natural phenomena, and electrical resistance<br />
and radiation caused a fairly rapid drop-off in signal strength,<br />
which made long-distance conversations barely audible or unintelligible.<br />
By 1900, Bell engineers had discovered that signal strength could<br />
be improved somewhat by wrapping the main wire conductor with<br />
thinner wires called “loading coils” at prescribed intervals along<br />
the length of the cable. Using this procedure, Bell extended long-distance<br />
service from New York to Denver, Colorado, which was<br />
then considered the farthest point that could be reached with acceptable<br />
quality. The result, however, was still unsatisfactory, and<br />
Bell engineers realized that some form of signal amplification would<br />
be necessary to improve the quality of the signal.
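The need for amplification follows from the way cable loss compounds with distance. The sketch below illustrates the idea in modern decibel terms; the loss figure and repeater gain are illustrative assumptions, not measurements from the Bell network.<br />

```python
# Cable loss compounds with distance; amplifiers (repeaters) restore it.
# The 10 dB per 100 km loss and the repeater gain are illustrative
# assumptions, not measured values from the Bell long-distance network.
LOSS_DB_PER_100_KM = 10.0

def received_level_db(distance_km, repeater_every_km=None, gain_db=10.0):
    """Signal level in dB relative to the transmitter."""
    level = -LOSS_DB_PER_100_KM * distance_km / 100.0
    if repeater_every_km:
        level += int(distance_km // repeater_every_km) * gain_db
    return level

# Roughly the New York-San Francisco distance cited in the text.
unamplified = received_level_db(4800)
amplified = received_level_db(4800, repeater_every_km=100)
print(f"bare wire: {unamplified:.0f} dB, with repeaters: {amplified:.0f} dB")
```

A 480 dB drop corresponds to a factor of 10^48 in power, which is why modest gain applied repeatedly along the route, rather than a single huge amplifier, was the workable strategy.<br />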
A breakthrough came in 1906, when Lee de Forest invented the<br />
“audion tube,” which could send and amplify radio waves. Bell scientists<br />
immediately recognized the potential of the new device for<br />
long-distance telephony and began building amplifiers that would<br />
be placed strategically along the long-distance wire network.<br />
Work progressed so quickly that by 1909, Bell officials were predicting<br />
that the first transcontinental long-distance telephone service,<br />
between New York and San Francisco, was imminent. In that<br />
year, Bell president Theodore N. Vail went so far as to promise the<br />
organizers of the Panama-Pacific Exposition, scheduled to open in<br />
San Francisco in 1914, that Bell would offer a demonstration at<br />
the exposition. The promise was risky, because certain technical<br />
problems associated with sending a telephone signal over a 4,800-kilometer<br />
wire had not yet been solved. De Forest’s audion tube was<br />
a crude device, but progress was being made.<br />
Two more breakthroughs came in 1912, when de Forest improved<br />
on his original concept and Bell engineer Harold D. Arnold<br />
improved it further. Bell bought the rights to de Forest’s vacuum-tube<br />
patents in 1913 <strong>and</strong> completed the construction of the New<br />
York-San Francisco circuit. The last connection was made at the<br />
Utah-Nevada border on June 17, 1914.<br />
Success Leads to Further Improvements<br />
Bell’s long-distance network was tested successfully on June 29,<br />
1914, but the official demonstration was postponed until January<br />
25, 1915, to accommodate the Panama-Pacific Exposition, which<br />
had also been postponed. On that date, a connection was established<br />
between Jekyll Island, Georgia, where Theodore Vail was recuperating<br />
from an illness, and New York City, where Alexander<br />
Graham Bell was standing by to talk to his former associate Thomas<br />
Watson, who was in San Francisco. When everything was in place,<br />
the following conversation took place. Bell: “Hoy! Hoy! Mr. Watson?<br />
Are you there? Do you hear me?” Watson: “Yes, Dr. Bell, I hear<br />
you perfectly. Do you hear me well?” Bell: “Yes, your voice is perfectly<br />
distinct. It is as clear as if you were here in New York.”<br />
The first transcontinental telephone conversation transmitted<br />
by wire was followed quickly by another that was transmitted via
radio. Although the Bell company was slow to recognize the potential<br />
of radio wave amplification for the “wireless” transmission<br />
of telephone conversations, by 1909 the company had made a significant<br />
commitment to conduct research in radio telephony. On<br />
April 4, 1915, a wireless signal was transmitted by Bell technicians<br />
from Montauk Point on Long Isl<strong>and</strong>, New York, to Wilmington,<br />
Delaware, a distance of more than 320 kilometers. Shortly thereafter,<br />
a similar test was conducted between New York City <strong>and</strong><br />
Brunswick, Georgia, via a relay station at Montauk Point. The total<br />
distance of the transmission was more than 1,600 kilometers. Finally,<br />
in September, 1915, Vail placed a successful transcontinental radiotelephone<br />
call from his office in New York to Bell engineering chief<br />
J. J. Carty in San Francisco.<br />
Only a month later, the first telephone transmission across the<br />
Atlantic Ocean was accomplished via radio from Arlington, Virginia,<br />
to the Eiffel Tower in Paris, France. The signal was detectable,<br />
although its quality was poor. It would be ten years before true<br />
transatlantic radio-telephone service would begin.<br />
The Bell company recognized that creating a nationwide long-distance network would increase the volume of telephone calls simply by increasing the number of destinations that could be reached from any single telephone station. As the network expanded, each subscriber would have more reason to use the telephone more often, thereby increasing Bell’s revenues. Thus, the company’s strategy became one of tying local and regional networks together to create one large system.
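The strategy has a simple arithmetic rationale: in a fully interconnected network, the number of possible point-to-point connections grows roughly as the square of the number of reachable stations. A minimal sketch (the pairing formula is standard combinatorics; the function name and station counts are ours, not from the source):

```python
def possible_connections(stations: int) -> int:
    """Distinct pairs of stations that could, in principle, call each other."""
    return stations * (stations - 1) // 2

# Tying two isolated 1,000-station networks into one large system
# roughly doubles the number of possible conversations:
separate = possible_connections(1000) + possible_connections(1000)
combined = possible_connections(2000)
print(separate)  # 999000
print(combined)  # 1999000
```

Each new station added to a unified network thus creates far more new calling possibilities than it would in a small isolated exchange.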
Impact
Just as the railroads had interconnected centers of commerce, industry, and agriculture all across the continental United States in the nineteenth century, the telephone promised to bring a new kind of interconnection to the country in the twentieth century: instantaneous voice communication. During the first quarter century after the invention of the telephone and during its subsequent commercialization, the emphasis of telephone companies was to set up central office switches that would provide interconnections among subscribers within a fairly limited geographical area. Large cities were wired quickly, and by the beginning of the twentieth century most were served by telephone switches that could accommodate thousands of subscribers.
The development of transcontinental telephone service was a milestone in the history of telephony for two reasons. First, it was a practical demonstration of the almost limitless applications of this innovative technology. Second, for the first time in its brief history, the telephone network took on a national character. It became clear that large central office networks, even in large cities such as New York, Chicago, and Baltimore, were merely small parts of a much larger, universally accessible communication network that spanned a continent. The next step would be to look abroad, to Europe and beyond.
See also Cell phone; Fax machine; Internet; Long-distance radiotelephony;<br />
Rotary dial telephone; Telephone switching; Touch-tone<br />
telephone.<br />
Further Reading
Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Mackay, James A. Alexander Graham Bell: A Life. New York: J. Wiley, 1997.
Young, Peter. Person to Person: The International Impact of the Telephone. Cambridge: Granta Editions, 1991.
Mammography
The invention: The first X-ray procedure for detecting and diagnosing breast cancer.
The people behind the invention:
Albert Salomon, the first researcher to use X-ray technology instead of surgery to identify breast cancer
Jacob Gershon-Cohen (1899-1971), a breast cancer researcher
Studying Breast Cancer
Medical researchers have been studying breast cancer for more than a century. At the end of the nineteenth century, however, no one knew how to detect breast cancer until it was quite advanced. Often, by the time it was detected, it was too late for surgery; many patients who did have surgery died. So after X-ray technology first appeared in 1896, cancer researchers were eager to experiment with it.
The first scientist to use X-ray techniques in breast cancer experiments was Albert Salomon, a German surgeon. Trying to develop a biopsy technique that could tell which tumors were cancerous and thereby avoid unnecessary surgery, he X-rayed more than three thousand breasts that had been removed from patients during breast cancer surgery. In 1913, he published the results of his experiments, showing that X rays could detect breast cancer. Different types of X-ray images, he said, showed different types of cancer.
Though Salomon is recognized as the inventor of breast radiology,<br />
he never actually used his technique to diagnose breast cancer.<br />
In fact, breast cancer radiology, which came to be known as “mammography,”<br />
was not taken up quickly by other medical researchers.<br />
Those who did try to reproduce his research often found that their<br />
results were not conclusive.<br />
During the 1920’s, however, more research was conducted in Leipzig, Germany, and in South America. Eventually, the Leipzig researchers, led by Erwin Payr, began to use mammography to diagnose cancer. In the 1930’s, a Leipzig researcher named W. Vogel published a paper that accurately described differences between cancerous and noncancerous tumors as they appeared on X-ray photographs. Researchers in the United States paid little attention to mammography until 1926. That year, a physician in Rochester, New York, was using a fluoroscope to examine heart muscle in a patient and discovered that the fluoroscope could be used to make images of breast tissue as well. The physician, Stafford L. Warren, then developed a stereoscopic technique that he used in examinations before surgery. Warren published his findings in 1930; his article also described changes in breast tissue that occurred because of pregnancy, lactation (milk production), menstruation, and breast disease. Yet Warren’s technique was complicated and required equipment that most physicians of the time did not have. Eventually, he lost interest in mammography and went on to other research.
Using the Technique
In the late 1930’s, Jacob Gershon-Cohen became the first clinician to advocate regular mammography for all women to detect breast cancer before it became a major problem. Mammography was not very expensive, he pointed out, and it was already quite accurate. A milestone in breast cancer research came in 1956, when Gershon-Cohen and others began a five-year study of more than 1,300 women to test the accuracy of mammography for detecting breast cancer. Each woman studied was screened once every six months. Of the 1,055 women who finished the study, 92 were diagnosed with benign tumors and 23 with malignant tumors. Remarkably, out of all these, only one diagnosis turned out to be wrong.
During the same period, Robert Egan of Houston began tracking breast cancer X rays. Over a span of three years, one thousand X-ray photographs were used to make diagnoses. When these diagnoses were compared to the results of surgical biopsies, it was confirmed that mammography had produced 238 correct diagnoses of cancer, out of 240 cases. Egan therefore joined the crusade for regular breast cancer screening.
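The accuracy claims in the two studies reduce to simple arithmetic. A quick check (the function name is ours; the raw counts come from the text, and treating 92 benign plus 23 malignant as 115 total diagnoses is our derivation):

```python
def accuracy(correct: int, total: int) -> float:
    """Fraction of diagnoses later confirmed correct."""
    return correct / total

# Gershon-Cohen study: 92 benign + 23 malignant = 115 diagnoses, 1 wrong
print(round(accuracy(115 - 1, 115), 3))  # 0.991

# Egan's series: 238 of 240 cancer diagnoses confirmed by surgical biopsy
print(round(accuracy(238, 240), 3))  # 0.992
```

Both series exceed 99 percent agreement with biopsy results, which is why they carried such weight with clinicians.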
Once mammography was finally accepted by doctors in the late 1950’s and early 1960’s, researchers realized that they needed a way to teach mammography quickly and effectively to those who would use it. A study was done, and it showed that any radiologist could conduct the procedure with only five days of training.
In the early 1970’s, the American Cancer Society and the National Cancer Institute joined forces on a nationwide breast cancer screening program called the “Breast Cancer Detection Demonstration Project.” Its goal in 1971 was to screen more than 250,000 women over the age of thirty-five.
Since the 1960’s, however, some people had argued that mammography was dangerous because it used radiation on patients. In 1976, Ralph Nader, a consumer advocate, stated that women who were to undergo mammography should be given consent forms that would list the dangers of radiation. In the years that followed, mammography was refined to reduce the amount of radiation needed to detect cancer. It became a standard tool for diagnosis, and doctors recommended that women have a mammogram every two or three years after the age of forty.
Impact
Radiology is not a science that concerns only breast cancer screening. While it does provide the technical facilities necessary to practice mammography, the photographic images obtained must be interpreted by general practitioners, as well as by specialists.
[Photo caption: Physicians recommend that women have a mammogram every two or three years after the age of forty. (Digital Stock)]
Once Gershon-Cohen had demonstrated the viability of the technique, a means of training was devised that made it fairly easy for clinicians to learn how to practice mammography successfully. Once all these factors—accuracy, safety, simplicity—were in place, mammography became an important factor in the fight against breast cancer.
The progress made in mammography during the twentieth century was a major improvement in the effort to keep more women from dying of breast cancer. The disease has always been one of the primary contributors to the number of female cancer deaths that occur annually in the United States and around the world. This high figure long stemmed from the fact that women had no way of detecting the disease until tumors were in an advanced state.
Once Salomon’s procedure was utilized, physicians had a means<br />
by which they could look inside breast tissue without engaging in<br />
exploratory surgery, thus giving women a screening technique that<br />
was simple <strong>and</strong> inexpensive. By 1971, a quarter million women over<br />
age thirty-five had been screened. Twenty years later, that number<br />
was in the millions.<br />
See also Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram;<br />
Holography; Nuclear magnetic resonance; Pap<br />
test; Syphilis test; Ultrasound.<br />
Further Reading
“First Digital Mammography System Approved by FDA.” FDA Consumer 34, no. 3 (May/June, 2000).
Hindle, William H. Breast Care: A Clinical Guidebook for Women’s Primary Health Care Providers. New York: Springer, 1999.
Okie, Susan. “More Women Are Getting Mammograms: Experts Agree That the Test Has Played Big Role in Reducing Deaths from Breast Cancer.” Washington Post (January 21, 1997).
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound, and Other Medical Images Are Created, and How They Help Physicians Save Lives. Berkeley: University of California Press, 1999.
Mark I calculator
The invention: Early digital calculator designed to solve differential equations that was a forerunner of modern computers.
The people behind the invention:
Howard H. Aiken (1900-1973), Harvard University professor and architect of the Mark I
Clair D. Lake (1888-1958), a senior engineer at IBM
Francis E. Hamilton (1898-1972), an IBM engineer
Benjamin M. Durfee (1897-1980), an IBM engineer
The Human Computer
The physical world can be described by means of mathematics. In principle, one can accurately describe nature down to the smallest detail. In practice, however, this is impossible except for the simplest of atoms. Over the years, physicists have had great success in creating simplified models of real physical processes whose behavior can be described by the branch of mathematics called “calculus.” Calculus relates quantities that change over a period of time. The equations that relate such quantities are called “differential equations,” and they can be solved precisely in order to yield information about those quantities. Most natural phenomena, however, can be described only by differential equations that can be solved only approximately. These equations are solved by numerical means that involve performing a tremendous number of simple arithmetic operations (repeated additions and multiplications). It has been the dream of many scientists since the late 1700’s to find a way to automate the process of solving these equations.
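The “tremendous number of simple arithmetic operations” can be made concrete with Euler’s method, the simplest step-by-step technique for solving a differential equation numerically. This is a generic textbook sketch, not the Mark I’s actual procedure:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y) by many small steps,
    each costing one multiplication and one addition."""
    h = (t1 - t0) / steps  # step size
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # one multiply, one add per step
        t += h
    return y

# dy/dt = -y with y(0) = 1 has the exact solution e**(-t);
# 10,000 tiny steps get within one part in ten thousand of it.
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
print(abs(approx - math.exp(-1.0)) < 1e-4)  # True
```

Ten thousand repetitions for a single well-behaved equation suggests why rooms full of human “computers,” and later machines, were needed for realistic problems.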
In the early 1900’s, people who spent day after day performing the tedious operations that were required to solve differential equations were known as “computers.” During the two world wars, these human computers created ballistics tables by solving the differential equations that described the hurling of projectiles and the dropping of bombs from aircraft. The war effort was largely responsible for accelerating the push to automate the solution to these problems.
A Computational Behemoth
The ten-year period from 1935 to 1945 can be considered the prehistory of the development of the digital computer. (In a digital computer, digits represent magnitudes of physical quantities. These digits can have only certain values.) Before this time, all machines for automatic calculation were either analog in nature (in which case, physical quantities such as current or voltage represent the numerical values of the equation and can vary in a continuous fashion) or were simplistic mechanical or electromechanical adding machines.
This was the situation that faced Howard Aiken. At the time, he was a graduate student working on his doctorate in physics. His dislike for the tremendous effort required to solve the differential equations used in his thesis drove him to propose, in the fall of 1937, constructing a machine that would automate the process. He proposed taking existing business machines that were commonly used in accounting firms and combining them into one machine that would be controlled by a series of instructions. One goal was to eliminate all manual intervention in the process in order to maximize the speed of the calculation.
Aiken’s proposal came to the attention of Thomas Watson, who was then the president of International Business Machines Corporation (IBM). At that time, IBM was a major supplier of business machines and did not see much of a future in such “specialized” machines. It was the pressure provided by the computational needs of the military in World War II that led IBM to invest in building automated calculators. In 1939, a contract was signed in which IBM agreed to use its resources (personnel, equipment, and finances) to build a machine for Howard Aiken and Harvard University.
IBM brought together a team of seasoned engineers to fashion a working device from Aiken’s sketchy ideas. Clair D. Lake, who was selected to manage the project, called on two talented engineers—Francis E. Hamilton and Benjamin M. Durfee—to assist him.
After four years of effort, which was interrupted at times by the demands of the war, a machine was constructed that worked remarkably well. Completed in January, 1943, at Endicott, New York, it was then disassembled and moved to Harvard University in Cambridge, Massachusetts, where it was reassembled. Known as the IBM automatic sequence controlled calculator (ASCC), it began operation in the spring of 1944 and was formally dedicated and revealed to the public on August 7, 1944. Its name indicates the machine’s distinguishing feature: the ability to load automatically the instructions that control the sequence of the calculation. This capability was provided by punching holes, representing the instructions, in a long, ribbonlike paper tape that could be read by the machine.
Computers of that era were big, and the ASCC was particularly impressive. It was 51 feet long by 8 feet tall, and it weighed 5 tons. It contained more than 750,000 parts, and when it was running, it sounded like a room filled with sewing machines. The ASCC later became known as the Harvard Mark I.
Impact
Although this machine represented a significant technological achievement at the time and contributed ideas that would be used in subsequent machines, it was almost obsolete from the start. It was electromechanical, since it relied on relays, but it was built at the dawn of the electronic age. Fully electronic computers offered better reliability and faster speeds. Howard Aiken continued, without the help of IBM, to develop successors to the Mark I. Because he resisted using electronics, however, his machines did not significantly affect the direction of computer development.
For all its complexity, the Mark I operated reasonably well, first solving problems related to the war effort and then turning its attention to the more mundane tasks of producing specialized mathematical tables. It remained in operation at the Harvard Computational Laboratory until 1959, when it was retired and disassembled. Parts of this landmark computational tool are now kept at the Smithsonian Institution.
See also BASIC programming language; Differential analyzer;<br />
Personal computer; Pocket calculator; UNIVAC computer.
Further Reading
Cohen, I. Bernard. Howard Aiken: Portrait of a Computer Pioneer. Cambridge, Mass.: MIT Press, 1999.
Ritchie, David. The Computer Pioneers: The Making of the Modern Computer. New York: Simon and Schuster, 1986.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Mass spectrograph
The invention: The first device used to measure the masses of individual atoms, revealing that the atomic weights of elements result from combinations of isotopes.
The people behind the invention:
Francis William Aston (1877-1945), an English physicist who was awarded the 1922 Nobel Prize in Chemistry
Sir Joseph John Thomson (1856-1940), an English physicist
William Prout (1785-1850), an English biochemist
Ernest Rutherford (1871-1937), a New Zealand-born British physicist
Same Element, Different Weights
Isotopes are different forms of a chemical element that act similarly in chemical or physical reactions. Isotopes differ in two ways: They possess different atomic weights and different radioactive transformations. In 1803, John Dalton proposed a new atomic theory of chemistry that claimed that chemical elements in a compound combine by weight in whole number proportions to one another. By 1815, William Prout had taken Dalton’s hypothesis one step further and claimed that the atomic weights of elements were integral (the integers are the positive and negative whole numbers and zero) multiples of the hydrogen atom. For example, if the weight of hydrogen was 1, then the weight of carbon was 12, and that of oxygen 16. Over the next decade, several carefully controlled experiments were conducted to determine the atomic weights of a number of elements. Unfortunately, the results of these experiments did not support Prout’s hypothesis. For example, the atomic weight of chlorine was found to be 35.5. It took a theory of isotopes, developed in the early part of the twentieth century, to verify Prout’s original theory.
After his discovery of the electron, Sir Joseph John Thomson, the leading physicist at the Cavendish Laboratory in Cambridge, England, devoted much of his remaining research years to determining the nature of “positive electricity,” the positively charged counterpart of the negatively charged electron. While developing an instrument sensitive enough to analyze these positive rays, Thomson invited Francis William Aston to work with him at the Cavendish Laboratory. Recommended by J. H. Poynting, who had taught Aston physics at Mason College, Aston began a lifelong association at Cavendish, and Trinity College became his home.
When electrons are stripped from an atom, the atom becomes positively charged. Through the use of magnetic and electrical fields, it is possible to channel the resulting positive rays into parabolic tracks. By examining photographic plates of these tracks, Thomson was able to identify the atoms of different elements. Aston’s first contribution at Cavendish was to improve the instrument used to photograph the parabolic tracks. He developed a more efficient pump to create the required vacuum and devised a camera that would provide sharper photographs. By 1912, the improved apparatus had provided proof that the individual molecules of a substance have the same mass. While working on the element neon, however, Thomson obtained two parabolas, one with a mass of 20 and the other with a mass of 22, which seemed to contradict the previous findings that molecules of any substance have the same mass. Aston was given the task of resolving this mystery.
Treating Particles Like Light<br />
In 1919, Aston began to build a device called a “mass spectrograph.” The idea was to treat ionized or positive atoms like light. He reasoned that, because light can be dispersed into a rainbowlike spectrum and analyzed by means of its different colors, the same procedure could be used with atoms of an element such as neon. By creating a device that used magnetic fields to focus the stream of particles emitted by neon, he was able to create a mass spectrum and record it on a photographic plate. The heavier mass of neon (the first neon isotope) was collected on one part of a spectrum and the lighter neon (the second neon isotope) showed up on another. This mass spectrograph was a magnificent apparatus: The masses could be analyzed without reference to the velocity of the particles, which was a problem with the parabola method devised by Thomson. Neon possessed two isotopes: one with a mass of 20 and the other with a mass of 22, in a ratio of 10:1. When combined, this gave the atomic weight 20.20, which was the accepted weight of neon.
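The quoted figure follows from an abundance-weighted average of the two isotope masses. A quick check (the function name is ours; the masses and the 10:1 ratio come from the text):

```python
def atomic_weight(isotopes):
    """Abundance-weighted average mass from (mass, relative abundance) pairs."""
    total = sum(abundance for _, abundance in isotopes)
    return sum(mass * abundance for mass, abundance in isotopes) / total

# Neon: masses 20 and 22 in a 10:1 ratio gives roughly 20.2,
# matching the chemically measured atomic weight of neon.
print(round(atomic_weight([(20, 10), (22, 1)]), 1))  # 20.2
```

The same weighted-average reasoning explains chlorine’s fractional weight of 35.5 as a mixture of whole-number isotopes, rescuing Prout’s hypothesis.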
Francis William Aston
Francis W. Aston was born near Birmingham, England, in 1877 to William Aston, a farmer and metals dealer, and Fanny Charlotte Hollis, a gunmaker’s daughter. As a boy he loved to perform experiments by himself in his own small laboratory at home. His diligence helped him earn top marks in school, and he attended Mason College (later the University of Birmingham). However, he failed to win a scholarship to continue his studies after graduation in 1901.
He did not give up on experiments, however, even while holding a job as the chemist for a local brewery. He built his own equipment and investigated the nature of electricity. This work attracted the attention of the most famous researchers of the day. He finally got a scholarship in 1903 to the University of Birmingham and then joined the staff of Joseph John Thomson at the Royal Institution in London and Cambridge University, which remained his home until his death in 1945.
Aston liked to work alone as much as possible. Given his unflagging attention to the details of measurement and his inventiveness with experimental equipment, his colleagues respected his lone-dog approach. Their trust was rewarded. After refining the mass spectrograph, Aston was able to explain a thorny problem in chemistry by showing that elements are composed of differing percentages of isotopes and that atomic weight varied slightly depending on the density of their atoms’ nuclei. The research earned him the Nobel Prize in Chemistry in 1922.
Aston’s solitude extended into his private life. He never married, lavishing his affection instead on animals, outdoor sports, photography, travel, and music.
Aston’s accomplishment in developing the mass spectrograph<br />
was recognized immediately by the scientific community. His was a<br />
simple device that was capable of accomplishing a large amount of<br />
research quickly. The field of isotope research, which had been<br />
opened up by Aston’s research, ultimately played an important part<br />
in other areas of physics.
Impact
The years following 1919 were highly charged with excitement, since month after month new isotopes were announced. Chlorine had two; bromine had isotopes of 79 and 81, which gave an almost exact atomic weight of 80; krypton had six isotopes; and xenon had even more. In addition to the discovery of nonradioactive isotopes, the “whole-number rule” for chemistry was verified: Protons were the basic building blocks for different atoms, and they occurred exclusively in whole numbers.
Aston’s original mass spectrograph had an accuracy of 1 in 1,000. In 1927, he built a second instrument that was ten times more accurate. The new apparatus was sensitive enough to measure Albert Einstein’s law of mass-energy conversion during a nuclear reaction. Between 1927 and 1935, Aston reviewed all the elements that he had worked on earlier and published updated results. He also began to build a still more accurate instrument, which proved to be of great value to nuclear chemistry.
The discovery of isotopes opened the way to further research in nuclear physics and completed the speculations begun by Prout during the previous century. Although radioactivity was discovered separately, isotopes played a central role in the field of nuclear physics and chain reactions.
See also Cyclotron; Electron microscope; Neutrino detector;<br />
Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator;<br />
Ultramicroscope.<br />
Further Reading
Aston, Francis William. “Mass Spectra and Isotopes” [Nobel lecture]. In Chemistry, 1922-1941. River Edge, N.J.: World Scientific, 1999.
Squires, Gordon. “Francis Aston and the Mass Spectrograph.” Journal of the Chemical Society, Dalton Transactions no. 23 (1998).
Thackray, Arnold. Atoms and Powers: An Essay on Newtonian Matter-Theory and the Development of Chemistry. Cambridge, Mass.: Harvard University Press, 1970.
Memory metal
The invention: Known as nitinol, a metal alloy that returns to its original shape, after being deformed, when it is heated to the proper temperature.
The person behind the invention:
William Buehler (1923-        ), an American metallurgist
The Alloy with a Memory
In 1960, William Buehler developed an alloy that consisted of 53 to 57 percent nickel (by weight) and the balance titanium. This alloy, which is called nitinol, turned out to have remarkable properties. Nitinol is a “memory metal,” which means that, given the proper conditions, objects made of nitinol can be restored to their original shapes even after they have been radically deformed. The return to the original shape is triggered by heating the alloy to a moderate temperature. As the metal “snaps back” to its original shape, considerable force is exerted and mechanical work can be done.
Alloys made of nickel and titanium have great potential in a wide variety of industrial and government applications. These include: for the computer market, a series of high-performance electronic connectors; for the medical market, intravenous fluid devices that feature precise fluid control; for the consumer market, eyeglass frame components; and, for the industrial market, power cable couplings that provide durability at welded joints.
The Uncoiling Spring
At one time, the “uncoiling spring experiment” was used to amuse audiences, and a number of scientists have had fun with nitinol in front of unsuspecting viewers. It is now generally recognized that the shape memory effect involves a thermoelastic transformation at the atomic level. This process is unique in that the transformation back to the original shape occurs as a result of stored elastic energy that assists the chemical driving force that is unleashed by heating the metal.
The mechanism, simply stated, is that shape memory alloys are rather easily deformed below their “critical temperature.” Provided that the extent of the deformation is not too great, the original, undeformed state can be recovered by heating the alloy above that critical temperature. It is also significant that substantial stresses are generated when a deformed specimen “springs back” to its original shape. This phenomenon is very peculiar compared to the ordinary behavior of most materials.
Researchers at the Naval Ordnance Laboratory discovered nitinol<br />
by accident in the process of trying to learn how to make titanium<br />
less brittle. They tried adding nickel, and when they were showing a<br />
wire of the alloy to some administrators, someone smoking a cigar<br />
held his match too close to the sample, causing the nitinol to spring<br />
back into shape. One of the first applications of the discovery was a<br />
new way to link hydraulic lines on the Navy’s F-14 fighter jets. The<br />
nitinol “sleeve” was cooled with liquid nitrogen, which enlarged<br />
the sample. Then it was slipped into place between two pipes.<br />
When the sleeve was warmed up, it contracted, clamping the pipes<br />
together and keeping them clamped with a force of nearly 50,000<br />
pounds per square inch.<br />
Nitinol is not an easy alloy with which to work. When it is drilled<br />
or passed through a lathe, it becomes hardened and resists change.<br />
Welding nitinol and electroplating it have become manufacturing<br />
nightmares. It also resists taking on a desired shape. The frictional<br />
forces of many processes heat the nitinol, which activates its memory.<br />
Its fantastic elasticity also causes difficulties. If it is placed in a press<br />
with too little force, the spring comes out of the die unchanged. With<br />
too much force, the metal breaks into fragments. Using oil as a cooling<br />
lubricant and taking a step-wise approach to altering the alloy,<br />
however, allows it to be fashioned into particular shapes.<br />
One unique use of nitinol occurs in cardiac surgery. Surgical<br />
tools made of nitinol can be bent up to 90 degrees, allowing them<br />
to be passed into narrow vessels and then retrieved. The tools are<br />
then straightened out in an autoclave so that they can be reused.
Consequences<br />
Many of the technical problems of working with nitinol have<br />
been solved, and manufacturers of the alloy are selling more than<br />
twenty different nitinol products to countless companies in the<br />
fields of medicine, transportation, consumer products, and toys.<br />
Nitinol toys include blinking movie posters, butterflies with<br />
flapping wings, and dinosaurs whose tails move; all these applications<br />
are driven by a contracting bit of wire that is connected to a<br />
watch battery. The “Thermobile” and the “Icemobile” are toys whose<br />
wheels are set in motion by hot water or by ice cubes.<br />
Orthodontists sometimes use nitinol wires and springs in braces<br />
because the alloy pulls with a force that is more gentle and even<br />
than that of stainless steel, thus causing less pain. Nitinol does not<br />
react with organic materials, and it is also useful as a new type of<br />
blood-clot filter. Best of all, however, is the use of nitinol for eyeglass<br />
frames. If the wearer deforms the frames by sitting on them (and<br />
people do so frequently), the optometrist simply dips the crumpled<br />
frames in hot water and the frames regain their original shape.<br />
From its beginnings as an “accidental” discovery, nitinol has<br />
gone on to affect various fields of science and technology, from the<br />
“Cryofit” couplings used in the hydraulic tubing of aircraft to the<br />
pin-and-socket contacts used in electrical circuits. Nitinol has also<br />
found its way into integrated circuit packages.<br />
In an age of energy conservation, the unique phase transformation<br />
of nickel-titanium alloys allows them to be used in low-temperature<br />
heat engines. The world has abundant resources of<br />
low-grade thermal energy, <strong>and</strong> the recovery of this energy can be<br />
accomplished by the use of materials such as nitinol. Despite the<br />
limitations imposed on heat engines working at low temperatures<br />
across a small temperature change, sources of low-grade heat are<br />
so widespread that the economical conversion of a fractional percentage<br />
of that energy could have a significant impact on the<br />
world’s energy supply.<br />
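The constraint mentioned above is the Carnot bound: an engine operating between a hot reservoir at temperature T_hot and a cold reservoir at T_cold can convert at most 1 − T_cold/T_hot of the heat it absorbs into work. A minimal sketch, using assumed example temperatures rather than figures from this article:

```python
# Carnot limit for a heat engine running on low-grade heat.
# The temperatures are hypothetical examples: warm waste water
# at 320 K and ambient surroundings at 290 K.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of absorbed heat convertible to work."""
    return 1.0 - t_cold_k / t_hot_k

eta = carnot_efficiency(320.0, 290.0)
print(f"Carnot limit across a 30 K drop: {eta:.1%}")
```

Even at this single-digit ceiling, the point in the text stands: low-grade heat is so abundant that converting a small fraction of it economically would matter.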
Nitinol has also become useful as a material capable of absorbing<br />
internal vibrations in structural materials, and it has been used as<br />
“Harrington rods” to treat scoliosis (curvature of the spine).
See also Disposable razor; Neoprene; Plastic; Steelmaking process;<br />
Teflon; Tungsten filament.<br />
Further Reading<br />
Gisser, Kathleen R. C., et al. “Nickel-Titanium Memory Metal.” Journal<br />
of Chemical Education 71, no. 4 (April, 1994).<br />
Iovine, John. “The World’s ‘Smartest’ Metal.” Poptronics 1, no. 12<br />
(December, 2000).<br />
Jackson, Curtis M., H. J. Wagner, and Roman Jerzy Wasilewski. 55-<br />
Nitinol: The Alloy with a Memory: Its Physical Metallurgy, Properties,<br />
and Applications. Washington: Technology Utilization Office,<br />
1972.<br />
Walker, Jearl. “The Amateur Scientist.” Scientific American 254, no. 5<br />
(May, 1986).
Microwave cooking<br />
The invention: System of high-speed cooking that uses microwave<br />
radiation to agitate liquid molecules, raising temperatures by friction.<br />
The people behind the invention:<br />
Percy L. Spencer (1894-1970), an American engineer<br />
Heinrich Hertz (1857-1894), a German physicist<br />
James Clerk Maxwell (1831-1879), a Scottish physicist<br />
The Nature of Microwaves<br />
Microwaves are electromagnetic waves, as are radio waves, X<br />
rays, and visible light. Water waves and sound waves are wave-shaped<br />
disturbances of particles in the media—water in the case of<br />
water waves and air or water in the case of sound waves—through<br />
which they travel. Electromagnetic waves, however, are wavelike<br />
variations of intensity in electric and magnetic fields.<br />
Electromagnetic waves were first studied in 1864 by James Clerk<br />
Maxwell, who explained mathematically their behavior and velocity.<br />
Electromagnetic waves are described in terms of their “wavelength”<br />
and “frequency.” The wavelength is the length of one cycle,<br />
which is the distance from the highest point of one wave to the highest<br />
point of the next wave, <strong>and</strong> the frequency is the number of cycles<br />
that occur in one second. Frequency is measured in units called<br />
“hertz,” named for the German physicist Heinrich Hertz. The frequencies<br />
of microwaves run from 300 to 3,000 megahertz (1 megahertz<br />
equals 1 million hertz, or 1 million cycles per second), corresponding<br />
to wavelengths of 100 to 10 centimeters.<br />
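These band limits follow from the relation wavelength = c / frequency. A quick check in Python (the 2,450-megahertz value is included because it is the home-oven frequency discussed later in the article):

```python
# Convert electromagnetic-wave frequency to wavelength: lambda = c / f.
C_M_PER_S = 299_792_458.0  # speed of light in meters per second

def wavelength_cm(freq_mhz: float) -> float:
    """Wavelength in centimeters for a frequency given in megahertz."""
    return C_M_PER_S / (freq_mhz * 1e6) * 100.0

for f_mhz in (300.0, 2450.0, 3000.0):
    print(f"{f_mhz:6.0f} MHz -> {wavelength_cm(f_mhz):6.2f} cm")
```

300 and 3,000 megahertz come out at roughly 100 and 10 centimeters, matching the limits quoted above.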
Microwaves travel in the same way that light waves do; they are<br />
reflected by metallic objects, absorbed by some materials, and transmitted<br />
by other materials. When food is subjected to microwaves, it<br />
heats up because the microwaves make the water molecules in foods<br />
(water is the most common compound in foods) vibrate. Water is a<br />
“dipole molecule,” which means that it contains both positive and<br />
negative charges. When the food is subjected to microwaves, the dipole<br />
water molecules try to align themselves with the alternating<br />
electromagnetic field of the microwaves. This causes the water molecules<br />
to collide with one another and with other molecules in the<br />
food. Consequently, heat is produced as a result of friction.<br />
Development of the Microwave Oven<br />
Percy L. Spencer apparently discovered the principle of microwave<br />
cooking while he was experimenting with a radar device at<br />
the Raytheon Company. A c<strong>and</strong>y bar in his pocket melted after being<br />
exposed to microwaves. After realizing what had happened,<br />
Spencer made the first microwave oven from a milk can and applied<br />
for two patents, “Method of Treating Foodstuffs” and “Means for<br />
Treating Foodstuffs,” on October 8, 1945, giving birth to microwave-oven<br />
technology.<br />
Spencer wrote that his invention “relates to the treatment of<br />
foodstuffs and, more particularly, to the cooking thereof through<br />
the use of electromagnetic energy.” Though the use of electromagnetic<br />
energy for heating was recognized at that time, the frequencies<br />
that were used were lower than 50 megahertz. Spencer discovered<br />
that heating at such low frequencies takes a long time. He eliminated<br />
the time disadvantage by using shorter wavelengths in the<br />
microwave region. Wavelengths of 10 centimeters or shorter were<br />
comparable to the average dimensions of foods. When these wavelengths<br />
were used, the heat that was generated became intense, the<br />
energy that was required was minimal, <strong>and</strong> the process became efficient<br />
enough to be exploited commercially.<br />
Although Spencer’s patents refer to the cooking of foods with<br />
microwave energy, neither deals directly with a microwave oven.<br />
The actual basis for a microwave oven may be patents filed by other<br />
researchers at Raytheon. A patent by Karl Stiefel in 1949 may be the<br />
forerunner of the microwave oven, and in 1950, Fritz Gross received<br />
a patent entitled “Cooking Apparatus,” which specifically describes<br />
an oven that is very similar to modern microwave ovens.<br />
Perhaps the first mention of a commercial microwave oven was<br />
made in the November, 1946, issue of Electronics magazine. This article<br />
described the newly developed Radarange as a device that<br />
could bake biscuits in 29 seconds, cook hamburgers in 35 seconds,
Percy L. Spencer<br />
Percy L. Spencer (1894-1970) had an unpromising background<br />
for the inventor of the twentieth century’s principal innovation<br />
in the technology of cooking. He was orphaned while<br />
still a young boy <strong>and</strong> never completed grade school. However,<br />
he possessed a keen curiosity and the imaginative intelligence<br />
to educate himself <strong>and</strong> recognize how to make things better.<br />
In 1941 the magnetron, which produces microwaves, was so<br />
complex and difficult to make that fewer than two dozen were<br />
produced in a day. This pace delayed the campaign to improve<br />
radar, which used magnetrons, so Spencer, while working for<br />
Raytheon Corporation, set out to speed things along. He simplified<br />
the design and made it more efficient at the same time. Production<br />
of magnetrons soon increased more than a thous<strong>and</strong>fold.<br />
In 1945 he discovered by accident that microwaves could<br />
heat chocolate past the melting point. He immediately tried an<br />
experiment by training microwaves on popcorn kernels and<br />
was delighted to see them puff up straight away.<br />
The first microwave oven based on his discovery stood five<br />
feet, six inches tall <strong>and</strong> weighed 750 pounds, suitable only for<br />
restaurants. However, it soon got smaller, thanks to researchers<br />
at Raytheon. And after some initial hostility from cooks, it became<br />
popular. Raytheon bought Amana Refrigeration in 1965<br />
to manufacture the home models and marketed them worldwide.<br />
Meanwhile, Spencer had become a senior vice president<br />
at the company <strong>and</strong> a member of its board of directors. Raytheon<br />
named one of its buildings after him, the U.S. Navy presented<br />
him with the Distinguished Service Medal for his contributions,<br />
and in 1999 he entered the Inventors Hall of Fame.<br />
and grill a hot dog in 8 to 10 seconds. Another article that appeared a<br />
month later mentioned a unit that had been developed specifically<br />
for airline use. The frequency used in this oven was 3,000 megahertz.<br />
Within a year, a practical model 13 inches wide, 14 inches<br />
deep, and 15 inches high appeared, and several new models were<br />
operating in <strong>and</strong> around Boston. In June, 1947, Electronics magazine<br />
reported the installation of a Radarange in a restaurant, signaling<br />
the commercial use of microwave cooking. It was reported that this
method more than tripled the speed of service. The Radarange became<br />
an important addition to a number of restaurants, and in 1948,<br />
Bernard Proctor <strong>and</strong> Samuel Goldblith used it for the first time to<br />
conduct research into microwave cooking.<br />
In the United States, the radio frequencies that can be used for<br />
heating are allocated by the Federal Communications Commission<br />
(FCC). The two most popular frequencies for microwave cooking<br />
are 915 and 2,450 megahertz, and the 2,450 frequency is used in<br />
home microwave ovens. It is interesting that patents filed by Spencer<br />
in 1947 mention a frequency on the order of 2,450 megahertz. This<br />
fact is another example of Spencer’s vision in the development of<br />
microwave cooking principles. The Raytheon Company concentrated<br />
on using 2,450 megahertz, and in 1955, the first domestic microwave<br />
oven was introduced. It was not until the late 1960’s, however,<br />
that the price of the microwave oven decreased sufficiently for<br />
the device to become popular. The first patent describing a microwave<br />
heating system being used in conjunction with a conveyor<br />
was issued to Spencer in 1952. Later, based on this development,<br />
continuous industrial applications of microwaves were developed.<br />
Impact<br />
Initially, microwaves were viewed as simply an efficient means<br />
of rapidly converting electric energy to heat. Since that time, however,<br />
they have become an integral part of many applications. Because<br />
of the pioneering efforts of Percy L. Spencer, microwave applications<br />
in the food industry for cooking and for other processing<br />
operations have flourished. In the early 1970’s, there were eleven<br />
microwave oven companies worldwide, two of which specialized<br />
in food processing operations, but the growth of the microwave<br />
oven industry has paralleled the growth in the radio and television<br />
industries. In 1984, microwave ovens accounted for more shipments<br />
than had ever been achieved by any appliance—9.1 million units.<br />
By 1989, more than 75 percent of the homes in the United States<br />
had microwave ovens, and in the 1990’s, microwavable foods were<br />
among the fastest-growing products in the food industry. Microwave<br />
energy facilitates reductions in operating costs and required<br />
energy, higher-quality and more reliable products, and positive<br />
environmental effects. To some degree, the use of industrial microwave<br />
energy remains in its infancy. New and improved applications<br />
of microwaves will continue to appear.<br />
See also Electric refrigerator; Fluorescent lighting; Food freezing;<br />
Robot (household); Television; Tupperware; Vacuum cleaner;<br />
Washing machine.<br />
Further Reading<br />
Baird, Davis, R. I. G. Hughes, and Alfred Nordmann. Heinrich Hertz:<br />
Classical Physicist, Modern Philosopher. Boston: Kluwer Academic,<br />
1998.<br />
Roman, Mark. “That Marvelous Machine in Your Kitchen.” Reader’s<br />
Digest (February, 1990).<br />
Scott, Otto. The Creative Ordeal: The Story of Raytheon. New York:<br />
Atheneum, 1974.<br />
Simpson, Thomas K. Maxwell on the Electromagnetic Field: A Guided<br />
Study. New Brunswick, N.J.: Rutgers University Press, 1997.<br />
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University<br />
of Chicago Press, 1982.
Neoprene<br />
The invention: The first commercially practical synthetic rubber,<br />
Neoprene gave a boost to polymer chemistry and the search for<br />
new materials.<br />
The people behind the invention:<br />
Wallace Hume Carothers (1896-1937), an American chemist<br />
Arnold Miller Collins (1899- ), an American chemist<br />
Elmer Keiser Bolton (1886-1968), an American chemist<br />
Julius Arthur Nieuwland (1879-1936), a Belgian American<br />
priest, botanist, and chemist<br />
Synthetic Rubber: A Mirage?<br />
The growing dependence of the industrialized nations upon<br />
elastomers (elastic substances) <strong>and</strong> the shortcomings of natural<br />
rubber motivated the twentieth century quest for rubber substitutes.<br />
By 1914, rubber had become nearly as indispensable as coal<br />
or iron. The rise of the automobile industry, in particular, had created<br />
a strong dem<strong>and</strong> for rubber. Unfortunately, the availability of<br />
rubber was limited by periodic shortages and spiraling prices. Furthermore,<br />
the particular properties of natural rubber, such as its<br />
lack of resistance to oxygen, oils, and extreme temperatures, restrict<br />
its usefulness in certain applications. These limitations stimulated<br />
a search for special-purpose rubber substitutes.<br />
Interest in synthetic rubber dates back to the 1860 discovery by<br />
the English chemist Greville Williams that the main constituent<br />
of rubber is isoprene, a liquid hydrocarbon. Nineteenth century<br />
chemists attempted unsuccessfully to transform isoprene into<br />
rubber. The first large-scale production of a rubber substitute occurred<br />
during World War I. A British blockade forced Germany to<br />
begin to manufacture methyl rubber in 1916, but methyl rubber<br />
turned out to be a poor substitute for natural rubber. When the<br />
war ended in 1918, a practical synthetic rubber was still only a mirage.<br />
Nevertheless, a breakthrough was on the horizon.
Mirage Becomes Reality<br />
In 1930, chemists at E. I. Du Pont de Nemours discovered the<br />
elastomer known as neoprene. Of the more than twenty chemists<br />
who helped to make this discovery possible, four stand out: Elmer<br />
Bolton, Julius Nieuwland, Wallace Carothers, and Arnold Collins.<br />
Bolton directed Du Pont’s dyestuffs department in the mid-<br />
1920’s. Largely because of the rapidly increasing price of rubber, he<br />
initiated a project to synthesize an elastomer from acetylene, a gaseous<br />
hydrocarbon. In December, 1925, Bolton attended the American<br />
Chemical Society’s convention in Rochester, New York, and<br />
heard a presentation dealing with acetylene reactions. The presenter<br />
was Julius Nieuwland, the foremost authority on the chemistry<br />
of acetylene.<br />
Nieuwland was a professor of organic chemistry at the University<br />
of Notre Dame. (One of his students was the legendary football<br />
coach Knute Rockne.) The priest-scientist had been investigating<br />
acetylene reactions for more than twenty years. Using a copper<br />
chloride catalyst he had discovered, he isolated a new compound,<br />
divinylacetylene (DVA). He later treated DVA with a vulcanizing<br />
(hardening) agent and succeeded in producing a rubberlike substance,<br />
but the substance proved to be too soft for practical use.<br />
Bolton immediately recognized the importance of Nieuwland’s<br />
discoveries and discussed with him the possibility of using DVA as<br />
a raw material for a synthetic rubber. Seven months later, an alliance<br />
was formed that permitted Du Pont researchers to use Nieuwland’s<br />
copper catalyst. Bolton hoped that the catalyst would be the key to<br />
making an elastomer from acetylene. As it turned out, Nieuwland’s<br />
catalyst was indispensable for manufacturing neoprene.<br />
Over the next several years, Du Pont scientists tried unsuccessfully<br />
to produce rubberlike materials. Using Nieuwland’s catalyst,<br />
they managed to prepare DVA and also to isolate monovinylacetylene<br />
(MVA), a new compound that eventually proved to be the vital<br />
intermediate chemical in the making of neoprene. Reactions of<br />
MVA and DVA, however, produced only hard, brittle materials.<br />
In 1928, Du Pont hired a thirty-one-year-old Harvard instructor,<br />
Wallace Carothers, to direct the organic chemicals group. He began<br />
a systematic exploration of polymers (complex molecules). In early
1930, he accepted an assignment to investigate the chemistry of<br />
DVA. He appointed one of his assistants, Arnold Collins, to conduct<br />
the laboratory experiments. Carothers suggested that Collins<br />
should explore the reaction between MVA and hydrogen chloride.<br />
His suggestion would lead to the discovery of neoprene.<br />
One of Collins’s experiments yielded a new liquid, and on April<br />
17, 1930, he recorded in his laboratory notebook that the liquid had<br />
solidified into a rubbery substance. When he dropped it on a bench,<br />
it bounced. This was the first batch of neoprene. Carothers named<br />
Collins’s liquid “chloroprene.” Chloroprene is analogous structurally<br />
to isoprene, but it polymerizes much more rapidly. Carothers<br />
conducted extensive investigations of the chemistry of chloroprene<br />
and related compounds. His studies were the foundation for Du<br />
Pont’s development of an elastomer that was superior to all previously<br />
known synthetic rubbers.<br />
Du Pont chemists, including Carothers and Collins, formally introduced<br />
neoprene—originally called “DuPrene”—on November 3,<br />
1931, at the meeting of the American Chemical Society in Akron,<br />
Ohio. Nine months later, the new elastomer began to be sold.<br />
Impact<br />
The introduction of neoprene was a milestone in humankind’s development<br />
of new materials. It was the first synthetic rubber worthy<br />
of the name. Neoprene possessed higher tensile strength than rubber<br />
and much better resistance to abrasion, oxygen, heat, oils, and chemicals.<br />
Its main applications included jacketing for electric wires and<br />
cables, work-shoe soles, gasoline hoses, and conveyor and power-transmission<br />
belting. By 1939, when Adolf Hitler’s troops invaded Poland,<br />
nearly every major industry in America was using neoprene.<br />
After the Japanese bombing of Pearl Harbor, in 1941, the elastomer<br />
became even more valuable to the United States. It helped the United<br />
States and its allies survive the critical shortage of natural rubber that<br />
resulted when Japan seized Malayan rubber plantations.<br />
A scientifically and technologically significant side effect of the<br />
introduction of neoprene was the stimulus that the breakthrough<br />
gave to polymer research. Chemists had long debated whether<br />
polymers were mysterious aggregates of smaller units or were genuine<br />
molecules. Carothers ended the debate by demonstrating in a<br />
series of now-classic papers that polymers were indeed ordinary—<br />
but very large—molecules. In the 1930’s, he put polymer studies on<br />
a firm footing. The advance of polymer science led, in turn, to the<br />
development of additional elastomers and synthetic fibers, including<br />
nylon, which was invented by Carothers himself in 1935.<br />
See also Buna rubber; Nylon; Orlon; Plastic; Polyester; Polyethylene;<br />
Polystyrene; Silicones; Teflon.<br />
Further Reading<br />
Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and<br />
the Emergence of Macromolecular Chemistry. Philadelphia: University<br />
of Pennsylvania Press, 1998.<br />
Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor<br />
of Nylon. Washington, D.C.: American Chemical Society<br />
and the Chemical Heritage Foundation, 1996.<br />
Taylor, Graham D., and Patricia E. Sudnik. Du Pont and the International<br />
Chemical Industry. Boston, Mass.: Twayne, 1984.
Neutrino detector<br />
The invention: A device that provided the first direct evidence that<br />
the Sun runs on thermonuclear power and challenged existing<br />
models of the Sun.<br />
The people behind the invention:<br />
Raymond Davis, Jr. (1914- ), an American chemist<br />
John Norris Bahcall (1934- ), an American astrophysicist<br />
Missing Energy<br />
In 1871, Hermann von Helmholtz, the German physicist, anatomist,<br />
and physiologist, suggested that no ordinary chemical reaction<br />
could be responsible for the enormous energy output of the<br />
Sun. By the 1920’s, astrophysicists had realized that the energy radiated<br />
by the Sun must come from nuclear fusion, in which protons or<br />
nuclei combine to form larger nuclei and release energy. These reactions<br />
were assumed to be taking place deep in the interior of the<br />
Sun, in an immense thermonuclear furnace, where the pressures<br />
and temperatures were high enough to allow fusion to proceed.<br />
Conventional astronomical observations could record only the<br />
particles of light emitted by the much cooler outer layers of the Sun<br />
and could not provide evidence for the existence of a thermonuclear<br />
furnace in the interior. Then scientists realized that the neutrino<br />
might be used to prove that this huge furnace existed. Of all the particles<br />
released in the fusion process, only one type—the neutrino—<br />
interacts so infrequently with matter that it can pass through the<br />
Sun <strong>and</strong> reach the earth. These neutrinos provide a way to verify directly<br />
the hypothesis of thermonuclear energy generated in stars.<br />
The neutrino was “invented” in 1930 by the Austrian physicist<br />
Wolfgang Pauli to account for the apparent missing energy in the<br />
beta decay, or emission of an electron, from radioactive nuclei. He<br />
proposed that an unseen nuclear particle, which he called a neutrino,<br />
was also emitted in beta decay, and that it carried off the<br />
“missing” energy. To balance the energy but not be observed in the<br />
decay process, Pauli’s hypothetical particle had to have no electrical
charge, have little or no mass, and interact only very weakly with<br />
ordinary matter. Typical neutrinos would have to be able to pass<br />
through millions of miles of ordinary matter in order to reach the<br />
earth. Scientists’ detectors, and even the whole earth or Sun, were<br />
essentially transparent as far as Pauli’s neutrinos were concerned.<br />
Because the neutrino is so difficult to detect, it took more than<br />
twenty-five years to confirm its existence. In 1956, Clyde Cowan<br />
and Frederick Reines, both physicists at the Los Alamos National<br />
Laboratory, built the world’s largest scintillation counter, a device to<br />
detect the small flash of light given off when the neutrino strikes<br />
(“interacts” with) a certain substance in the apparatus. They placed<br />
this scintillation counter near the Savannah River Nuclear Reactor,<br />
which was producing about 1 trillion neutrinos every second. Although<br />
only one neutrino interaction was observed in their detector<br />
every twenty minutes, Cowan and Reines were able to confirm the<br />
existence of Pauli’s elusive particle.<br />
The task of detecting the solar neutrinos was even more formidable.<br />
If an apparatus similar to the Cowan and Reines detector were<br />
employed to search for the neutrinos from the Sun, only one interaction<br />
could be expected every few thous<strong>and</strong> years.<br />
Missing Neutrinos<br />
At about the same time that Cowan <strong>and</strong> Reines performed their<br />
experiment, another type of neutrino detector was under development<br />
by Raymond Davis, Jr., a chemist at the Brookhaven National<br />
Laboratory. Davis employed an idea, originally suggested in 1948<br />
by the nuclear physicist Bruno Pontecorvo, that when a neutrino interacts<br />
with a chlorine-37 nucleus, it produces a nucleus of argon 37.<br />
Any argon so produced could then be extracted from large volumes<br />
of chlorine-rich liquid by passing helium gas through the liquid.<br />
Since argon 37 is radioactive, it is relatively easy to detect.<br />
Davis tested a version of this neutrino detector, containing about<br />
3,785 liters of carbon tetrachloride liquid, near a nuclear reactor at<br />
the Brookhaven National Laboratory from 1954 to 1956. In the scientific<br />
paper describing his results, Davis suggested that this type of<br />
neutrino detector could be made large enough to permit detection<br />
of solar neutrinos.
[Photo: Patients undergoing nuclear magnetic resonance image (MRI) examinations are placed inside cylindrical chambers in which their bodies are held rigidly in place. (Digital Stock)]<br />
Although Davis’s first attempt to detect solar neutrinos from a<br />
limestone mine at Barberton, Ohio, failed, he continued his search<br />
with a much larger detector 1,478 meters underground in the<br />
Homestake Gold Mine in Lead, South Dakota. The cylindrical tank<br />
(6.1 meters in diameter, 16 meters long, <strong>and</strong> containing 378,540 liters<br />
of perchloroethylene) was surrounded by water to shield the<br />
detector from neutrons emitted by trace quantities of uranium and<br />
thorium in the walls of the mine. The experiment was conducted<br />
underground to shield it from cosmic radiation.<br />
To describe his results, Davis coined a new unit, the “solar<br />
neutrino unit” (SNU), with 1 SNU indicating the production of<br />
one atom of argon 37 every six days. Astrophysicist John Norris<br />
Bahcall, using the best available astronomical models of the nuclear<br />
reactions going on in the sun’s interior, as well as the physical<br />
properties of the neutrinos, had predicted a capture rate of 50<br />
SNUs in 1963. The 1967 results from Davis’s detector, however,<br />
had an upper limit of only 3 SNUs.
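The detector-specific calibration of the SNU quoted above can be sanity-checked from the tank size. In this sketch, 1 SNU is taken as 10^-36 neutrino captures per target atom per second (the standard definition), and the density, molar mass, and chlorine-37 abundance are ordinary handbook values, not figures from this article:

```python
# Estimate how often one argon-37 atom appears in Davis's tank at 1 SNU.
AVOGADRO = 6.022e23
VOLUME_CM3 = 378_540 * 1000.0   # 378,540 liters of perchloroethylene
DENSITY_G_CM3 = 1.62            # approximate density of C2Cl4, g/cm^3
MOLAR_MASS = 165.8              # g/mol for C2Cl4
CL37_FRACTION = 0.2424          # natural abundance of chlorine-37

molecules = VOLUME_CM3 * DENSITY_G_CM3 / MOLAR_MASS * AVOGADRO
cl37_atoms = molecules * 4 * CL37_FRACTION      # four chlorines per molecule

captures_per_day = cl37_atoms * 1e-36 * 86_400  # capture rate at exactly 1 SNU
days_per_atom = 1.0 / captures_per_day
print(f"chlorine-37 target atoms: {cl37_atoms:.2e}")
print(f"days per argon-37 atom at 1 SNU: {days_per_atom:.1f}")
```

The estimate lands at roughly one argon-37 atom every five to six days, consistent with the “every six days” calibration in the text.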
Consequences<br />
The main significance of the detection of solar neutrinos by Davis<br />
was the direct confirmation that thermonuclear fusion must be occurring<br />
at the center of the Sun. The low number of solar neutrinos<br />
Davis detected, however, has called into question some of the fundamental<br />
beliefs of astrophysics. As Bahcall explained: “We know<br />
more about the Sun than about any other star. . . . The Sun is also in<br />
what is believed to be the best-understood stage of stellar evolution.<br />
. . . If we are to have confidence in the many astronomical <strong>and</strong><br />
cosmological applications of the theory of stellar evolution, it ought<br />
at least to give the right answers about the Sun.”<br />
Many solutions to the problem of the “missing” solar neutrinos<br />
have been proposed. Most of these solutions can be divided into<br />
two broad classes: those that challenge the model of the sun’s interior<br />
<strong>and</strong> those that challenge the underst<strong>and</strong>ing of the behavior of<br />
the neutrino. Since the number of neutrinos produced is very sensitive<br />
to the temperature of the sun’s interior, some astrophysicists<br />
have suggested that the true solar temperature may be lower than<br />
expected. Others suggest that the sun’s outer layer may absorb<br />
more neutrinos than expected. Some physicists, however, believe<br />
neutrinos may occur in several different forms, only one of which<br />
can be detected by the chlorine detectors.<br />
[Figure: the relative penetrating power of radiation. Alpha rays<br />
(charged helium nuclei) are stopped by skin; beta rays (charged<br />
electrons) by 0.12 centimeter of aluminum; gamma rays <strong>and</strong> X rays<br />
(photons) by lead; neutrinos (chargeless, nearly massless subatomic<br />
particles) pass through all of these.] Neutrinos can pass through most<br />
forms of matter without interacting with other nuclear particles.
Davis’s discovery of the low number of neutrinos reaching Earth<br />
has focused years of attention on a better underst<strong>and</strong>ing of how the<br />
Sun generates its energy <strong>and</strong> how the neutrino behaves. New <strong>and</strong><br />
more elaborate solar neutrino detectors have been built with the<br />
aim of underst<strong>and</strong>ing stars, including the Sun, as well as the physics<br />
<strong>and</strong> behavior of the elusive neutrino.<br />
See also Radio interferometer; Weather satellite.<br />
Further Reading<br />
Bartusiak, Marcia. “Underground Astronomer.” Astronomy 28, no. 1<br />
(January, 2000).<br />
“Neutrino Test to Probe Sun.” New Scientist 140, no. 1898 (November<br />
6, 1993).<br />
“Pioneering Neutrino Astronomers to Share 2000 Wolf Prize in<br />
Physics.” Physics Today 53, no. 3 (March, 2000).<br />
Schwarzschild, Bertram. “Can Helium Mixing Explain the Solar<br />
Neutrino Shortages?” Physics Today 50, no. 3 (March, 1997).<br />
Zimmerman, Robert. “The Shadow Boxer.” The Sciences 36, no. 1<br />
(January/February, 1996).
Nuclear magnetic resonance<br />
The invention: Procedure that uses hydrogen atoms in the human<br />
body, strong electromagnets, radio waves, <strong>and</strong> detection equipment<br />
to produce images of sections of the brain.<br />
The people behind the invention:<br />
Raymond Damadian (1936- ), an American physician <strong>and</strong><br />
inventor<br />
Paul C. Lauterbur (1929- ), an American chemist<br />
Peter Mansfield (1933- ), a scientist at the University of<br />
Nottingham, Engl<strong>and</strong><br />
Peering into the Brain<br />
Doctors have always wanted the ability to look into the skull <strong>and</strong><br />
see the human brain without harming the patient who is being examined.<br />
Over the years, various attempts were made to achieve this<br />
ability. At one time, the use of X rays, which were discovered by Wilhelm<br />
Conrad Röntgen in 1895, seemed to be an option, but it was<br />
found that X rays are absorbed by bone, so the skull made it impossible<br />
to use X-ray technology to view the brain. The relatively recent<br />
use of computed tomography (CT) scanning, a computer-assisted<br />
imaging technology, made it possible to view sections of the head<br />
<strong>and</strong> other areas of the body, but the technique requires that the part<br />
of the body being “imaged,” or viewed, be subjected to a small<br />
amount of radiation, thereby putting the patient at risk. Positron<br />
emission tomography (PET) could also be used, but it requires that<br />
small amounts of radiation be injected into the patient, which also<br />
puts the patient at risk. Since the early 1940’s, however, a new technology<br />
had been developing.<br />
This technology, which appears to pose no risk to patients, is<br />
called “nuclear magnetic resonance spectroscopy.” It was first used<br />
to study the molecular structures of pure samples of chemicals. This<br />
method developed until it could be used to follow one chemical as it<br />
changed into another, <strong>and</strong> then another, in a living cell. By 1971,<br />
Raymond Damadian had proposed that body images that were
more vivid <strong>and</strong> more useful than X rays could be produced by<br />
means of nuclear magnetic resonance spectroscopy. In 1978, he<br />
founded his own company, FONAR, which manufactured the scanners<br />
that are necessary for the technique.<br />
Magnetic Resonance Images<br />
The first nuclear magnetic resonance images (MRIs) were published<br />
by Paul Lauterbur in 1973. Although there seemed to be no<br />
possibility that MRI could be harmful to patients, everyone involved<br />
in MRI research was very cautious. In 1976, Peter Mansfield,<br />
at the University of Nottingham, Engl<strong>and</strong>, obtained an MRI of his<br />
partner’s finger. The next year, Paul Bottomley, a member of Waldo<br />
Hinshaw’s research group at the same university, put his left wrist<br />
into an experimental machine that the group had developed. A<br />
vivid cross section that showed layers of skin, muscle, bone, muscle,<br />
<strong>and</strong> skin, in that order, appeared on the machine’s monitor. Studies<br />
with animals showed no apparent memory or other brain problems.<br />
In 1978, Electric <strong>and</strong> Musical Industries (EMI), a British corporate<br />
pioneer in electronics that merged with Thorn in 1980, obtained the<br />
first MRI of the human head. It took six minutes.<br />
An MRI of the brain, or any other part of the body, is made possible<br />
by the water content of the body. The gray matter of the brain<br />
contains more water than the white matter does. The blood vessels<br />
<strong>and</strong> the blood itself also have water contents that are different from<br />
those of other parts of the brain. Therefore, the different structures<br />
<strong>and</strong> areas of the brain can be seen clearly in an MRI. Bone contains<br />
very little water, so it does not appear on the monitor. This is why<br />
the skull <strong>and</strong> the backbone cause no interference when the brain or<br />
the spinal cord is viewed.<br />
Every water molecule contains two hydrogen atoms <strong>and</strong> one<br />
oxygen atom. A strong magnetic field causes the hydrogen nuclei<br />
to line up like marchers in a parade. Radio waves can be used to<br />
change the orientation of these aligned nuclei. When the radio<br />
waves are discontinued, a small radio signal is produced as the<br />
nuclei return to their marching position. This<br />
distinct radio signal is the basis for the production of the image on<br />
a computer screen.
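The radio frequency involved is not arbitrary: protons in a magnetic field resonate at the Larmor frequency, which is proportional to the field strength. The relation below is standard physics rather than something stated in the article; the gyromagnetic ratio is the accepted value for protons.

```python
# Sketch of the resonance condition behind MRI: hydrogen nuclei
# (protons) precess in a magnetic field at the Larmor frequency,
# f = (gamma / 2*pi) * B. Radio waves at this frequency tip the
# aligned protons, and the same frequency is re-emitted as they
# relax back. The constant below is the standard proton value
# (~42.58 MHz per tesla), not a figure from the article.

GYROMAGNETIC_MHZ_PER_TESLA = 42.58

def larmor_mhz(field_tesla):
    """Proton resonance frequency (MHz) at a given field strength."""
    return GYROMAGNETIC_MHZ_PER_TESLA * field_tesla

for b in (0.5, 1.5, 3.0):  # field strengths typical of clinical scanners
    print(f"{b} T -> {larmor_mhz(b):.1f} MHz")
```

At a typical 1.5-tesla field this works out to roughly 64 MHz, squarely in the radio band, which is why ordinary radio-wave equipment can excite and detect the signal.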
Hydrogen was selected for use in MRI work because it is very<br />
abundant in the human body, it is part of the water molecule, <strong>and</strong> it<br />
has the proper magnetic qualities. The nucleus of the hydrogen<br />
atom consists of a single proton, a particle with a positive charge.<br />
The signal from the hydrogen’s proton is comparatively strong.<br />
There are several methods by which the radio signal from the<br />
hydrogen atom can be converted into an image. Each method<br />
uses a computer to create first a two-dimensional, then a three-dimensional,<br />
image.<br />
Peter Mansfield’s team at the University of Nottingham holds<br />
the patent for the slice-selection technique that makes it possible to<br />
excite <strong>and</strong> image selectively a specific cross section of the brain or<br />
any other part of the body. This is the key patent in MRI technology.<br />
Damadian was granted a patent that described the use of two coils,<br />
one to drive <strong>and</strong> one to pick up signals across selected portions of<br />
the human body. EMI, the company that introduced the X-ray scanner<br />
for CT images, developed a commercial prototype for the MRI.<br />
The British Technology Group, a state-owned company that helps to<br />
bring innovations to the marketplace, has sixteen separate MRI-related<br />
patents. Ten years after EMI produced the first image of the<br />
human brain, patents <strong>and</strong> royalties were still being sorted out.<br />
Consequences<br />
MRI technology has revolutionized medical diagnosis, especially<br />
in regard to the brain <strong>and</strong> the spinal cord. For example, in multiple<br />
sclerosis, the loss of the covering on nerve cells can be detected. Tumors<br />
can be identified accurately. The painless <strong>and</strong> noninvasive use<br />
of MRI has almost completely replaced the myelogram, which involves<br />
using a needle to inject dye into the spine.<br />
Although there is every indication that the use of MRI is very<br />
safe, there are some people who cannot benefit from this valuable<br />
tool. Those whose bodies contain metal cannot be placed into the<br />
MRI machine. No one instrument can meet everyone’s needs.<br />
The development of MRI st<strong>and</strong>s as an example of the interaction<br />
of achievements in various fields of science. Fundamental physics,<br />
biochemistry, physiology, electronic image reconstruction, advances<br />
in superconducting wires, the development of computers, <strong>and</strong> ad-
vancements in anatomy all contributed to the development of MRI.<br />
Its development is also the result of international efforts. Scientists<br />
<strong>and</strong> laboratories in Engl<strong>and</strong> <strong>and</strong> the United States pioneered the<br />
technology, but contributions were also made by scientists in France,<br />
Switzerl<strong>and</strong>, <strong>and</strong> Scotl<strong>and</strong>. This kind of interaction <strong>and</strong> cooperation<br />
can only lead to greater underst<strong>and</strong>ing of the human brain.<br />
See also Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram;<br />
Mammography; Ultrasound; X-ray image intensifier.<br />
Further Reading<br />
Elster, Allen D., <strong>and</strong> Jonathan H. Burdette. Questions <strong>and</strong> Answers in<br />
Magnetic Resonance Imaging. 2d ed. St. Louis, Mo.: Mosby, 2001.<br />
Mackay, R. Stuart. Medical Images <strong>and</strong> Displays: Comparisons of Nuclear<br />
Magnetic Resonance, Ultrasound, X-rays, <strong>and</strong> Other Modalities.<br />
New York: Wiley, 1984.<br />
Mattson, James, <strong>and</strong> Merrill Simon. The Story of MRI: The Pioneers of<br />
NMR <strong>and</strong> Magnetic Resonance in Medicine. Jericho, N.Y.: Dean<br />
Books, 1996.<br />
Wakefield, Julie. “The ‘Indomitable’ MRI.” Smithsonian 31, no. 3<br />
(June, 2000).<br />
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound,<br />
<strong>and</strong> Other Medical Images Are Created, <strong>and</strong> How They Help<br />
Physicians Save Lives. Berkeley: University of California Press,<br />
1999.
Nuclear power plant<br />
The invention: The first full-scale commercial nuclear power plant,<br />
which gave birth to the nuclear power industry.<br />
The people behind the invention:<br />
Enrico Fermi (1901-1954), an Italian American physicist who<br />
won the 1938 Nobel Prize in Physics<br />
Otto Hahn (1879-1968), a German physical chemist who won the<br />
1944 Nobel Prize in Chemistry<br />
Lise Meitner (1878-1968), an Austrian-Swedish physicist<br />
Hyman G. Rickover (1898-1986), a Polish American naval officer<br />
Discovering Fission<br />
Nuclear fission involves the splitting of an atomic nucleus, leading<br />
to the release of large amounts of energy. Nuclear fission was<br />
discovered in Germany in 1938 by Otto Hahn after he had bombarded<br />
uranium with neutrons <strong>and</strong> observed traces of radioactive<br />
barium. When Hahn’s former associate, Lise Meitner, heard of this,<br />
she realized that the neutrons may have split the uranium nuclei<br />
(each of which holds 92 protons) into two smaller nuclei to produce<br />
barium (56 protons) <strong>and</strong> krypton (36 protons). Meitner <strong>and</strong> her<br />
nephew, Otto Robert Frisch, were able to calculate the enormous energy<br />
that would be released in this type of reaction. They published<br />
their results early in 1939.<br />
Nuclear fission was quickly verified in several laboratories, <strong>and</strong><br />
the Danish physicist Niels Bohr soon demonstrated that the rare uranium<br />
235 (U-235) isotope is much more likely to fission than the common<br />
uranium 238 (U-238) isotope, which makes up 99.3 percent of<br />
natural uranium. It was also recognized that fission would produce<br />
additional neutrons that could cause new fissions, producing even<br />
more neutrons <strong>and</strong> thus creating a self-sustaining chain reaction. In<br />
this process, the fissioning of one kilogram of U-235 would release<br />
about as much energy as the burning of three million kilograms of coal.<br />
The first controlled chain reaction was demonstrated on December<br />
2, 1942, in a nuclear reactor at the University of Chicago, under
the leadership of Enrico Fermi. He used a graphite moderator to<br />
slow the neutrons by collisions with carbon atoms. “Critical mass”<br />
was achieved when the mass of graphite <strong>and</strong> uranium assembled<br />
was large enough that the number of neutrons not escaping from<br />
the pile would be sufficient to sustain a U-235 chain reaction. Cadmium<br />
control rods could be inserted to absorb neutrons <strong>and</strong> slow<br />
the reaction.<br />
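The balance this passage describes, between neutrons escaping or being absorbed and neutrons causing new fissions, is often summarized by an effective multiplication factor k. The toy model below is purely illustrative, not a reactor-physics calculation from the article.

```python
# Toy model of the effective multiplication factor k: the average
# number of neutrons from one fission that go on to cause another.
# k > 1 grows (supercritical), k = 1 is self-sustaining (critical),
# and inserting absorbing control rods pushes k below 1 (subcritical).
# Purely illustrative; not an actual reactor-physics calculation.

def neutron_population(k, generations, start=1000):
    """Neutron count after each generation with multiplication factor k."""
    counts, n = [], start
    for _ in range(generations):
        n = n * k
        counts.append(round(n))
    return counts

print(neutron_population(1.1, 5))  # growing: supercritical
print(neutron_population(1.0, 5))  # steady: critical
print(neutron_population(0.9, 5))  # dying out: rods inserted
```

Reaching "critical mass" amounts to assembling enough material that k reaches 1; sliding the cadmium rods in or out nudges k just below or above that point.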
It was also recognized that the U-238 in the reactor would absorb<br />
some of the neutrons to produce the new element plutonium, which<br />
is also fissionable. During World War II (1939-1945), large reactors<br />
were built to “breed” plutonium, which was easier to separate than<br />
U-235. An experimental breeder reactor at Arco, Idaho, was the first<br />
to use the energy of nuclear fission to produce a small amount of<br />
electricity (about 100 watts) on December 20, 1951.<br />
Nuclear Electricity<br />
Power reactors designed to produce substantial amounts of<br />
electricity use the heat generated by fission to produce steam or<br />
hot gas to drive a turbine connected to an ordinary electric generator.<br />
The first power reactor design to be developed in the United<br />
States was the pressurized water reactor (PWR). In the PWR, water<br />
under high pressure is used both as the moderator <strong>and</strong> as the coolant.<br />
After circulating through the reactor core, the hot pressurized<br />
water flows through a heat exchanger to produce steam. Reactors<br />
moderated by “heavy water” (in which the hydrogen in the water<br />
is replaced with deuterium, which contains an extra neutron) can<br />
operate with natural uranium.<br />
The pressurized water system was used in the first reactor to<br />
produce substantial amounts of power, the experimental Mark I<br />
reactor. It was started up on May 31, 1953, at the Idaho National<br />
Engineering Laboratory. The Mark I became the prototype for the<br />
reactor used in the first nuclear-powered submarine. Under the<br />
leadership of Hyman G. Rickover, who was head of the Division of<br />
Naval Reactors of the Atomic Energy Commission (AEC), Westinghouse<br />
Electric Corporation was engaged to build a PWR system<br />
to power the submarine USS Nautilus. It began sea trials in January<br />
of 1955 <strong>and</strong> ran for two years before refueling.
Cooling towers of a nuclear power plant. (PhotoDisc)<br />
In the meantime, the first experimental nuclear power plant for<br />
generating electricity was completed in the Soviet Union in June of<br />
1954, under the direction of the Soviet physicist Igor Kurchatov. It<br />
produced 5 megawatts of electric power. The first full-scale nuclear<br />
power plant was built in Engl<strong>and</strong> under the direction of the British<br />
nuclear engineer Sir Christopher Hinton. It began producing about<br />
90 megawatts of electric power in October, 1956.
On December 2, 1957, on the fifteenth anniversary of the first controlled<br />
nuclear chain reaction, the Shippingport Atomic Power Station<br />
in Shippingport, Pennsylvania, became the first full-scale commercial<br />
nuclear power plant in the United States. It produced about<br />
60 megawatts of electric power for the Duquesne Light Company until<br />
1964, when its reactor core was replaced, increasing its power to<br />
100 megawatts with a maximum capacity of 150 megawatts.<br />
Consequences<br />
The opening of the Shippingport Atomic Power Station marked<br />
the beginning of the nuclear power industry in the United States,<br />
with all of its glowing promise <strong>and</strong> eventual problems. It was predicted<br />
that electrical energy would become too cheap to meter. The<br />
AEC hoped to encourage the participation of industry, with government<br />
support limited to research <strong>and</strong> development. They encouraged<br />
a variety of reactor types in the hope of extending technical<br />
knowledge.<br />
The Dresden Nuclear Power Station, completed by Commonwealth<br />
Edison in September, 1959, at Morris, Illinois, near Chicago,<br />
was the first full-scale privately financed nuclear power station in<br />
the United States. By 1973, forty-two plants were in operation producing<br />
26,000 megawatts, fifty more were under construction, <strong>and</strong><br />
about one hundred were on order. Industry officials predicted that<br />
50 percent of the nation’s electric power would be nuclear by the<br />
end of the twentieth century.<br />
The promise of nuclear energy has not been completely fulfilled.<br />
Growing concerns about safety <strong>and</strong> waste disposal have led to increased<br />
efforts to delay or block the construction of new plants. The<br />
cost of nuclear plants rose as legal delays <strong>and</strong> inflation pushed costs<br />
higher, so that many in the planning stages could no longer be competitive.<br />
The 1979 Three Mile Isl<strong>and</strong> accident in Pennsylvania <strong>and</strong><br />
the much more serious 1986 Chernobyl accident in the Soviet Union<br />
increased concerns about the safety of nuclear power. Nevertheless,<br />
by 1986, more than one hundred nuclear power plants were operating<br />
in the United States, producing about 60,000 megawatts of<br />
power. More than three hundred reactors in twenty-five countries<br />
provide about 200,000 megawatts of electric power worldwide.
Many believe that, properly controlled, nuclear energy offers a<br />
clean-energy solution to the problem of environmental pollution.<br />
See also Breeder reactor; Compressed-air-accumulating power<br />
plant; Fuel cell; Geothermal power; Nuclear reactor; Solar thermal<br />
engine; Tidal power plant.<br />
Further Reading<br />
Henderson, Harry. Nuclear Power: A Reference H<strong>and</strong>book. Santa<br />
Barbara, Calif.: ABC-CLIO, 2000.<br />
Rockwell, Theodore. The Rickover Effect: The Inside Story of How Admiral<br />
Hyman Rickover Built the Nuclear Navy. New York: J. Wiley,<br />
1995.<br />
Shea, William R. Otto Hahn <strong>and</strong> the Rise of Nuclear Physics. Boston: D.<br />
Reidel, 1983.<br />
Sime, Ruth Lewin. Lise Meitner: A Life in Physics. Berkeley: University<br />
of California Press, 1996.
Nuclear reactor<br />
The invention: The first nuclear reactor to produce substantial<br />
quantities of plutonium, making it practical to produce usable<br />
amounts of energy from a chain reaction.<br />
The people behind the invention:<br />
Enrico Fermi (1901-1954), an American physicist<br />
Martin D. Whitaker (1902-1960), the first director of Oak Ridge<br />
National Laboratory<br />
Eugene Paul Wigner (1902-1995), the director of research <strong>and</strong><br />
development at Oak Ridge<br />
The Technology to End a War<br />
The construction of the nuclear reactor at Oak Ridge National<br />
Laboratory in 1943 was a vital part of the Manhattan Project, the effort<br />
by the United States during World War II (1939-1945) to develop<br />
an atomic bomb. The successful operation of that reactor<br />
was a major achievement not only for the project itself but also for<br />
the general development <strong>and</strong> application of nuclear technology.<br />
The first director of the Oak Ridge National Laboratory was Martin<br />
D. Whitaker; the director of research <strong>and</strong> development was Eugene<br />
Paul Wigner.<br />
The nucleus of an atom is made up of protons <strong>and</strong> neutrons. “Fission”<br />
is the process by which the nucleus of certain elements is split<br />
in two by a neutron from some material that emits an occasional<br />
neutron naturally. When an atom splits, two things happen: A tremendous<br />
amount of thermal energy is released, <strong>and</strong> two or three<br />
neutrons, on the average, escape from the nucleus. If all the atoms in<br />
a kilogram of “uranium 235” were to fission, they would produce as<br />
much heat energy as the burning of 3 million kilograms of coal. The<br />
neutrons that are released are important, because if at least one of<br />
them hits another atom <strong>and</strong> causes it to fission (<strong>and</strong> thus to release<br />
more energy <strong>and</strong> more neutrons), the process will continue. It will<br />
become a self-sustaining chain reaction that will produce a continuing<br />
supply of heat.
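The coal comparison can be checked with back-of-the-envelope arithmetic. The figures assumed below, about 200 MeV released per fission and a coal heating value of roughly 29 MJ/kg, are typical textbook values rather than numbers taken from the article.

```python
# Back-of-the-envelope check of the coal comparison. Assumes about
# 200 MeV per U-235 fission and coal at roughly 29 MJ/kg (typical
# textbook figures; not taken from the article).

AVOGADRO = 6.022e23           # atoms per mole
U235_MOLAR_MASS_G = 235.0     # grams per mole
MEV_TO_JOULES = 1.602e-13     # joules per MeV
ENERGY_PER_FISSION_MEV = 200.0
COAL_J_PER_KG = 29e6

atoms_per_kg = AVOGADRO * 1000.0 / U235_MOLAR_MASS_G
energy_per_kg_j = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_JOULES
coal_equivalent_kg = energy_per_kg_j / COAL_J_PER_KG

print(f"energy per kg of U-235: {energy_per_kg_j:.2e} J")
print(f"coal equivalent: {coal_equivalent_kg:.2e} kg")  # ~2.8e6 kg
```

The result, close to three million kilograms of coal per kilogram of uranium 235, matches the figure in the text.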
Inside a reactor, a nuclear chain reaction is controlled so that it<br />
proceeds relatively slowly. The most familiar use for the heat thus<br />
released is to boil water <strong>and</strong> make steam to turn the turbine generators<br />
that produce electricity to serve industrial, commercial, <strong>and</strong><br />
residential needs. The fissioning process in a weapon, however, proceeds<br />
very rapidly, so that all the energy in the atoms is produced<br />
<strong>and</strong> released virtually at once. The first application of nuclear technology,<br />
which used a rapid chain reaction, was to produce the two<br />
atomic bombs that ended World War II.<br />
Breeding Bomb Fuel<br />
The work that began at Oak Ridge in 1943 was made possible by a<br />
major event that took place in 1942. At the University of Chicago,<br />
Enrico Fermi had demonstrated for the first time that it was possible to<br />
achieve a self-sustaining atomic chain reaction. More important, the reaction<br />
could be controlled: It could be started up, it could generate heat<br />
<strong>and</strong> sufficient neutrons to keep itself going, <strong>and</strong> it could be turned off.<br />
That first chain reaction was very slow, <strong>and</strong> it generated very little heat;<br />
but it demonstrated that controlled fission was possible.<br />
Any heat-producing nuclear reaction is an energy conversion<br />
process that requires fuel. There is only one readily fissionable<br />
isotope that occurs naturally <strong>and</strong> can be used as fuel. It is a form of<br />
uranium called uranium 235. It makes up less than 1 percent of all<br />
naturally occurring uranium. The remainder is uranium 238, which<br />
does not fission readily. Even uranium 235, however, must be enriched<br />
before it can be used as fuel.<br />
The process of enrichment increases the concentration of uranium<br />
235 sufficiently for a chain reaction to occur. Enriched uranium is used<br />
to fuel the reactors used by electric utilities. Also, the much more plentiful<br />
uranium 238 can be converted into plutonium 239, a form of the<br />
human-made element plutonium, which does fission readily. That<br />
conversion process is the way fuel is produced for a nuclear weapon.<br />
Therefore, the major objective of the Oak Ridge effort was to develop a<br />
pilot operation for separating plutonium from the uranium in which it<br />
was produced. Large-scale plutonium production, which had never<br />
been attempted before, eventually would be done at the Hanford Engineer<br />
Works in Washington. First, however, plutonium had to be pro-
duced successfully on a small scale at Oak Ridge.<br />
The reactor was started up on November 4, 1943. By March 1,<br />
1944, the Oak Ridge laboratory had produced several grams of plutonium.<br />
The material was sent to the Los Alamos laboratory in New<br />
Mexico for testing. By July, 1944, the reactor operated at four times<br />
its original power level. By the end of that year, however, plutonium<br />
production at Oak Ridge had ceased, <strong>and</strong> the reactor thereafter was<br />
used principally to produce radioisotopes for physical <strong>and</strong> biological<br />
research <strong>and</strong> for medical treatment. Ultimately, the Hanford Engineer<br />
Works’ reactors produced the plutonium for the bomb that<br />
was dropped on Nagasaki, Japan, on August 9, 1945.<br />
The original objectives for which Oak Ridge had been built had<br />
been achieved, <strong>and</strong> subsequent activity at the facility was directed<br />
toward peacetime missions that included basic studies of the structure<br />
of matter.<br />
Impact<br />
Part of the Oak Ridge National Laboratory, where plutonium was separated to create the<br />
first atomic bomb. (Martin Marietta)<br />
The most immediate impact of the work done at Oak Ridge was<br />
its contribution to ending World War II. When the atomic bombs<br />
were dropped, the war ended, <strong>and</strong> the United States emerged intact.<br />
The immediate <strong>and</strong> long-range devastation to the people of Japan,
however, opened the public’s eyes to the almost unimaginable<br />
death <strong>and</strong> destruction that could be caused by a nuclear war. Fears<br />
of such a war remain to this day, especially as more <strong>and</strong> more nations<br />
develop the technology to build nuclear weapons.<br />
On the other h<strong>and</strong>, great contributions to human civilization<br />
have resulted from the development of nuclear energy. Electric<br />
power generation, nuclear medicine, spacecraft power, <strong>and</strong> ship<br />
propulsion have all profited from the pioneering efforts at the Oak<br />
Ridge National Laboratory. Currently, the primary use of nuclear<br />
energy is to produce electric power. H<strong>and</strong>led properly, nuclear energy<br />
may help to solve the pollution problems caused by the burning<br />
of fossil fuels.<br />
See also Breeder reactor; Compressed-air-accumulating power<br />
plant; Fuel cell; Geothermal power; Heat pump; Nuclear power<br />
plant; Solar thermal engine; Tidal power plant.<br />
Further Reading<br />
Epstein, Sam, Beryl Epstein, <strong>and</strong> Raymond Burns. Enrico Fermi: Father<br />
of Atomic Power. Champaign, Ill.: Garrard, 1970.<br />
Johnson, Lel<strong>and</strong>, <strong>and</strong> Daniel Schaffer. Oak Ridge National Laboratory:<br />
The First Fifty Years. Knoxville: University of Tennessee Press,<br />
1994.<br />
Morgan, K. Z., <strong>and</strong> Ken M. Peterson. The Angry Genie: One Man’s<br />
Walk Through the Nuclear Age. Norman: University of Oklahoma<br />
Press, 1999.<br />
Wagner, Francis S. Eugene P. Wigner, An Architect of the Atomic Age.<br />
Toronto: Rákóczi Foundation, 1981.
Nylon<br />
The invention: A resilient, high-strength polymer with applications<br />
ranging from women’s hose to safety nets used in space<br />
flights.<br />
The people behind the invention:<br />
Wallace Hume Carothers (1896-1937), an American organic<br />
chemist<br />
Charles M. A. Stine (1882-1954), an American chemist <strong>and</strong><br />
director of chemical research at Du Pont<br />
Elmer Keiser Bolton (1886-1968), an American industrial<br />
chemist<br />
Pure Research<br />
In the twentieth century, American corporations created industrial<br />
research laboratories. Their directors became the organizers of<br />
inventions, <strong>and</strong> their scientists served as the sources of creativity.<br />
The research program of E. I. Du Pont de Nemours <strong>and</strong> Company<br />
(Du Pont), through its most famous invention—nylon—became the<br />
model for scientifically based industrial research in the chemical<br />
industry.<br />
During World War I (1914-1918), Du Pont tried to diversify,<br />
concerned that after the war it would not be able to exp<strong>and</strong> with<br />
only explosives as a product. Charles M. A. Stine, Du Pont’s director<br />
of chemical research, proposed that Du Pont should move<br />
into fundamental research by hiring first-rate academic scientists<br />
<strong>and</strong> giving them freedom to work on important problems in<br />
organic chemistry. He convinced company executives that a program<br />
to explore the fundamental science underlying Du Pont’s<br />
technology would ultimately result in discoveries of value to the<br />
company. In 1927, Du Pont gave him a new laboratory for research.<br />
Stine visited universities in search of brilliant, but not-yet-established,<br />
young scientists. He hired Wallace Hume Carothers.<br />
Stine suggested that Carothers do fundamental research in polymer<br />
chemistry.
Before the 1920’s, polymers were a mystery to chemists. Polymeric<br />
materials were the result of ingenious laboratory practice,<br />
<strong>and</strong> this practice ran far ahead of theory <strong>and</strong> underst<strong>and</strong>ing. German<br />
chemists debated whether polymers were aggregates of smaller<br />
units held together by some unknown special force or genuine molecules<br />
held together by ordinary chemical bonds.<br />
German chemist Hermann Staudinger asserted that they were<br />
large molecules with endlessly repeating units. Carothers shared<br />
this view, <strong>and</strong> he devised a scheme to prove it by synthesizing very<br />
large molecules by simple reactions in such a way as to leave no<br />
doubt about their structure. Carothers’s synthesis of polymers revealed<br />
that they were ordinary molecules but giant in size.<br />
The Longest Molecule<br />
In April, 1930, Carothers’s research group produced two major<br />
innovations: neoprene synthetic rubber <strong>and</strong> the first laboratory-synthesized<br />
fiber. Neither result was the goal of their research. Neoprene<br />
was an incidental discovery during a project to study short<br />
polymers of acetylene. During experimentation, an unexpected substance<br />
appeared that polymerized spontaneously. Carothers studied<br />
its chemistry <strong>and</strong> developed the process into the first successful synthetic<br />
rubber made in the United States.<br />
The other discovery was an unexpected outcome of the group’s<br />
project to synthesize polyesters by the reaction of acids <strong>and</strong> alcohols.<br />
Their goal was to create a polyester that could react indefinitely<br />
to form a substance with high molecular weight. The scientists<br />
encountered a molecular weight limit of about 5,000 units to the<br />
size of the polyesters, until Carothers realized that the reaction also<br />
produced water, which was decomposing polyesters back into acid<br />
<strong>and</strong> alcohol. Carothers <strong>and</strong> his associate Julian Hill devised an apparatus<br />
to remove the water as it formed. The result was a polyester<br />
with a molecular weight of more than 12,000, far higher than any<br />
previous polymer.<br />
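The arithmetic behind this jump can be illustrated with the equation now named for Carothers: in a step-growth polymerization, the number-average degree of polymerization is 1/(1 - p), where p is the fraction of reactive groups that have combined. Removing the water shifts the equilibrium and lets p creep closer to 1. A minimal sketch (the repeat-unit mass of 100 is an illustrative round number, not a figure from the text):

```python
def degree_of_polymerization(p):
    """Carothers equation: number-average degree of polymerization
    for extent of reaction p (0 <= p < 1)."""
    return 1.0 / (1.0 - p)

REPEAT_UNIT_MASS = 100.0  # illustrative average mass per repeat unit

# Water left in the vessel caps conversion near 98 percent ...
mw_capped = degree_of_polymerization(0.98) * REPEAT_UNIT_MASS   # about 5,000

# ... while continuously removing water lets conversion reach 99.2 percent.
mw_driven = degree_of_polymerization(0.992) * REPEAT_UNIT_MASS  # about 12,500
```

The steep climb of 1/(1 - p) near p = 1 is why removing a mere by-product multiplied the attainable molecular weight.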
Hill, while removing a sample from the apparatus, found that he could draw it out into filaments that on cooling could be stretched to form very strong fibers. This procedure, called “cold-drawing,” oriented the molecules from a random arrangement into a long, linear one of great strength. The polyester fiber, however, was unsuitable for textiles because of its low melting point.

In June, 1930, Du Pont promoted Stine; his replacement as research director was Elmer Keiser Bolton. Bolton wanted to control fundamental research more closely, relating it to projects that would pay off and not allowing the research group freedom to pursue purely theoretical questions.

Despite their differences, Carothers and Bolton shared an interest in fiber research. On May 24, 1934, Bolton’s assistant Donald Coffman “drew” a strong fiber from a new polyamide. This was the first nylon fiber, although not the one commercialized by Du Pont. The nylon fiber was high-melting and tough, and it seemed that a practical synthetic fiber might be feasible.
By summer of 1934, the fiber project was the heart of the research group’s activity. The polyamide with the best fiber properties was nylon 5-10, the numbers referring to the number of carbon atoms in the amine and acid chains. Yet the nylon 6-6 prepared on February 28, 1935, became Du Pont’s nylon. Nylon 5-10 had some advantages, but Bolton realized that its components would be unsuitable for commercial production, whereas those of nylon 6-6 could be obtained from chemicals in coal.

A determined Bolton pursued nylon’s practical development, a process that required nearly four years. Finally, in April, 1937, Du Pont filed a patent for synthetic fibers; it included a statement by Carothers that there was no previous work on polyamides, underscoring how complete a breakthrough the fiber was. After Carothers’s death on April 29, 1937, the patent was issued posthumously and assigned to Du Pont. Du Pont made the first public announcement of nylon on October 27, 1938.
Impact
Nylon was a generic term for polyamides, and several types of nylon became commercially important in addition to nylon 6-6. These nylons found widespread use as both a fiber and a moldable plastic. Because it resisted abrasion and crushing, was nonabsorbent, was stronger than steel on a weight-for-weight basis, and was almost nonflammable, nylon found an astonishing range of uses: in
laces, screens, surgical sutures, paint, toothbrushes, violin strings, coatings for electrical wires, lingerie, evening gowns, leotards, athletic equipment, outdoor furniture, shower curtains, handbags, sails, luggage, fish nets, carpets, slip covers, bus seats, and even safety nets on the space shuttle.

The invention of nylon stimulated notable advances in the chemistry and technology of polymers. Some historians of technology have even dubbed the postwar period the “age of plastics,” the age of synthetic products based on the chemistry of giant molecules made by ingenious chemists and engineers.

The success of nylon and other synthetics, however, has come at a cost. Several environmental problems have surfaced, such as those created by the nondegradability of some plastics, and there is the problem of the increasing utilization of valuable, vanishing resources, such as petroleum, which contains the essential chemicals needed to make polymers. The challenge to reuse and recycle these polymers is being addressed by both scientists and policymakers.
See also Buna rubber; Neoprene; Orlon; Plastic; Polyester; Polyethylene; Polystyrene.

Further Reading

Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry. Philadelphia: University of Pennsylvania Press, 1998.

Handley, Susannah. Nylon: The Story of a Fashion Revolution: A Celebration of Design from Art Silk to Nylon and Thinking Fibres. Baltimore: Johns Hopkins University Press, 1999.

Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor of Nylon. Washington, D.C.: American Chemical Society and the Chemical Heritage Foundation, 1996.

Joyce, Robert M. Elmer Keiser Bolton: June 23, 1886-July 30, 1968. Washington, D.C.: National Academy Press, 1983.
Oil-well drill bit
The invention: A rotary cone drill bit that enabled oil-well drillers to penetrate hard rock formations.

The people behind the invention:
Howard R. Hughes (1869-1924), an American lawyer, drilling engineer, and inventor
Walter B. Sharp (1860-1912), an American drilling engineer, inventor, and partner to Hughes
Digging for Oil
A rotary drill rig of the 1990’s is basically unchanged in its essential components from its earlier versions of the 1900’s. A drill bit is attached to a line of hollow drill pipe. The latter passes through a hole on a rotary table, which acts essentially as a horizontal gear wheel and is driven by an engine. As the rotary table turns, so do the pipe and drill bit.

During drilling operations, mud-laden water is pumped under high pressure down the sides of the drill pipe and jets out with great force through the small holes in the rotary drill bit against the bottom of the borehole. This fluid then returns outside the drill pipe to the surface, carrying with it rock material cuttings from below. Circulated rock cuttings and fluids are regularly examined at the surface to determine the precise type and age of rock formation and for signs of oil and gas.

A key part of the total rotary drilling system is the drill bit, which has sharp cutting edges that make direct contact with the geologic formations to be drilled. The first bits used in rotary drilling were paddlelike “fishtail” bits, fairly successful for softer formations, and tubular coring bits for harder surfaces. In 1893, M. C. Baker and C. E. Baker brought a rotary water-well drill rig to Corsicana, Texas, for modification to deeper oil drilling. This rig led to the discovery of the large Corsicana-Powell oil field in Navarro County, Texas. This success also motivated its operators, the American Well and Prospecting Company, to begin the first large-scale manufacture of rotary drilling rigs for commercial sale.
In the earliest rotary drilling for oil, short fishtail bits were the tool of choice, insofar as they were at that time the best at boring through a wide range of geologic strata without needing frequent replacement. Even so, many bits were typically required in the course of any given oil well, particularly in coastal drilling in the Gulf of Mexico. When encountering locally harder rock units such as limestone, dolomite, or gravel beds, fishtail bits would typically either curl backward or break off in the hole, requiring the time-consuming work of pulling out all drill pipe and “fishing” to retrieve fragments and clear the hole.

Because of the frequent bit wear and damage, numerous small blacksmith shops established themselves near drill rigs, dressing or sharpening bits with a hand forge and hammer. Each bit-forging shop had its own particular way of shaping bits, producing a wide variety of designs. Nonstandard bit designs were frequently modified further as experiments to meet the specific requests of local drillers encountering specific drilling difficulties in given rock layers.
Speeding the Process

In 1907 and 1908, patents were obtained in New Jersey and Texas for steel, cone-shaped drill bits incorporating a roller-type coring device with many serrated teeth. Later in 1908, both patents were bought by lawyer Howard R. Hughes.

Although comparatively weak rocks such as sands, clays, and soft shales could be drilled rapidly (at rates exceeding 30 meters per hour), in harder shales, lime-dolostones, and gravels, drill rates of 1 meter per hour or less were not uncommon. Conventional drill bits of the time had average operating lives of three to twelve hours. Economic drilling mandated increases in both bit life and drilling rate. Directly motivated by his petroleum prospecting interests, Hughes and his partner, Walter B. Sharp, undertook what were probably the first recorded systematic studies of drill bit performance while matched against specific rock layers.
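Those figures imply hard arithmetic for a driller. A short sketch using the rates and bit lives quoted above (the 100-meter interval and the 6-hour hard-rock bit life are illustrative assumptions, not figures from the text):

```python
import math

def bits_consumed(interval_m, rate_m_per_hr, bit_life_hr):
    """Bits worn out while drilling an interval, assuming each bit
    cuts for bit_life_hr hours before it must be replaced."""
    drilling_hours = interval_m / rate_m_per_hr
    return math.ceil(drilling_hours / bit_life_hr)

# 100 m of soft sands at 30 m/h on a 12-hour bit: a single bit.
soft_bits = bits_consumed(100, 30, 12)   # 1

# 100 m of hard limestone at 1 m/h on a 6-hour bit: 17 bits,
# each change costing a full round trip of the drill pipe.
hard_bits = bits_consumed(100, 1, 6)     # 17
```

Every replacement meant pulling the entire drill string, which is why bit life mattered as much as raw drilling rate.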
Although many improvements in detail and materials have been made to the Hughes cone bit since its inception in 1908, its basic design is still used in rotary drilling. One of Hughes’s major innovations was the much larger size of the cutters, symmetrically distributed as a large number of small individual teeth on the outer face of two or more cantilevered bearing pins. In addition, “hard facing” was applied to drill bit teeth to increase usable life. Hard facing is a metallurgical process basically consisting of welding a thin layer of a hard metal or alloy of special composition to a metal surface to increase its resistance to abrasion and heat. A less noticeable but equally essential innovation, not included in other drill bit patents, was an ingeniously designed gauge surface that provided strong uniform support for all the drill teeth. The force-fed oil lubrication was another new feature included in Hughes’s patent and prototypes, reducing the power necessary to rotate the bit by 50 percent over that of prior mud or water lubricant designs.

Howard R. Hughes

Howard Hughes (1905-1976) is famous for having been one of the most dashing, innovative, quirky tycoons of the twentieth century. It all started with his father, Howard R. Hughes. In fact it was the father’s enterprise, Hughes Tool Company, that the son took over at age eighteen and built into an immense financial empire based on high-tech products.

The senior Hughes was born in Lancaster, Missouri, in 1869. He spent his boyhood in Keokuk, Iowa, where his own father practiced law. He himself studied law at Harvard University and the University of Iowa and then joined his father’s practice, but not for long. In 1901 news came of a big oil strike near Beaumont, Texas. Like hundreds of other ambitious men, Hughes headed there. By 1906 he had immersed himself in the technical problems of drilling and began experimenting to improve drill bits. He produced a wooden model of the roller-type drill two years later while in Oil City, Louisiana. With business associate Walter Sharp he successfully tested a prototype in an oil well in the Goose Creek field near Houston. It drilled faster and more efficiently than those then in use.

Hughes and Sharp opened the Sharp-Hughes Tool Company to manufacture the drills and related equipment, and their products quickly became the industry standard. A shrewd business strategist, Hughes leased, rather than sold, his drill bits for $30,000 per well, retaining his patents to preserve his monopoly over the rotary drill technology. After Sharp died in 1912, Hughes renamed the company the Hughes Tool Company. When Hughes himself died in 1924, he left his son, then a student at Rice Institute (later Rice University), the company and a million-dollar fortune, which Hughes junior would eventually multiply hundreds of times over.
Impact

In 1925, the first superhard facing was used on cone drill bits. In addition, the first so-called self-cleaning rock bits appeared from Hughes, with significant advances in roller bearings and bit tooth shape translating into increased drilling efficiency. The much larger teeth were more adaptable to drilling in a wider variety of geological formations than earlier models. In 1928, tungsten carbide was introduced as an additional bit facing hardener by Hughes metallurgists. This, together with other improvements, resulted in the Hughes ACME tooth form, which has been in almost continuous use since its introduction.
Many other drilling support technologies, such as drilling mud, mud circulation pumps, blowout detectors and preventers, and pipe properties and connectors, have enabled rotary drilling rigs to reach new depths (exceeding 5 kilometers in 1990). The successful experiments by Hughes in 1908 were critical initiators of these developments.

See also Geothermal power; Steelmaking process; Thermal cracking process.

Further Reading

Brantly, John Edward. History of Oil Well Drilling. Houston: Gulf Publishing, 1971.

Charlez, Philippe A. Rock Mechanics. Vol. 2: Petroleum Applications. Paris: Editions Technip, 1997.

Rao, Karanam Umamaheshwar, and Misra Banabihari. Principles of Rock Drilling. Brookfield, Vt.: Balkema, 1998.
Optical disk
The invention: A nonmagnetic storage medium for computers that can hold much greater quantities of data than similar-size magnetic media, such as hard and floppy disks.

The people behind the invention:
Klaas Compaan, a Dutch physicist
Piet Kramer, head of Philips’ optical research laboratory
Lou F. Ottens, director of product development for Philips’ musical equipment division
George T. de Kruiff, manager of Philips’ audio-product development department
Joop Sinjou, a Philips project leader
Holograms Can Be Copied Inexpensively
Holography is a lensless photographic method that uses laser light to produce three-dimensional images. This is done by splitting a laser beam into two beams. One of the beams is aimed at the object whose image is being reproduced so that the laser light will reflect from the object and strike a photographic plate or film. The second beam of light is reflected from a mirror near the object and also strikes the photographic plate or film. The “interference pattern,” which is simply the pattern created by the differences between the two reflected beams of light, is recorded on the photographic surface. The recording that is made in this way is called a “hologram.” When laser light or white light strikes the hologram, an image is created that appears to be a three-dimensional object.
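The pattern the plate records follows the standard two-beam interference relation, I = I1 + I2 + 2·sqrt(I1·I2)·cos(φ), where φ is the phase difference between the beams at each point. A small sketch of that relation (the unit beam intensities are illustrative values, not from the text):

```python
import math

def interference_intensity(i1, i2, phase):
    """Intensity where two coherent beams overlap:
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(phase)."""
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase)

# Two equal beams swing from four times one beam's intensity
# (in phase) down to zero (half a wavelength out of step):
# these bright and dark fringes are what the hologram stores.
bright = interference_intensity(1.0, 1.0, 0.0)       # 4.0
dark = interference_intensity(1.0, 1.0, math.pi)     # 0.0
```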
Early in 1969, Radio Corporation of America (RCA) engineers found a way to copy holograms inexpensively by impressing interference patterns on a nickel sheet that then became a mold from which copies could be made. Klaas Compaan, a Dutch physicist, learned of this method and had the idea that images could be recorded in a similar way and reproduced on a disk the size of a phonograph record. Once the images were on the disk, they could be projected onto a screen in any sequence. Compaan saw the possibilities of such a technology in the fields of training and education.
Computer Data Storage Breakthrough

In 1969, Compaan shared his idea with Piet Kramer, who was the head of Philips’ optical research laboratory. The idea intrigued Kramer. Between 1969 and 1971, Compaan spent much of his time working on the development of a prototype.
By September, 1971, Compaan and Kramer, together with a handful of others, had assembled a prototype that could read a black-and-white video signal from a spinning glass disk. Three months later, they demonstrated it for senior managers at Philips. In July, 1972, a color prototype was demonstrated publicly. After the demonstration, Philips began to consider putting sound, rather than images, on the disks. The main attraction of that idea was that the 12-inch (305-millimeter) disks would hold up to forty-eight hours of music. Very quickly, however, Lou F. Ottens, director of product development for Philips’ musical equipment division, put an end to any talk of a long-playing audio disk.

Ottens had developed the cassette-tape cartridge in the 1960’s. He had plenty of experience with the recording industry, and he had no illusions that the industry would embrace that new medium. He was convinced that the recording companies would consider forty-eight hours of music unmarketable. He also knew that any new medium would have to offer a dramatic improvement over existing vinyl records.

In 1974, only three years after the first microprocessor (the basic element of computers) was invented, designing a digital consumer product, rather than an analog product such as those that were already commonly accepted, was risky. (Digital technology uses numbers to represent information, whereas analog technology represents information by mechanical or physical means.) When George T. de Kruiff became Ottens’s manager of audio-product development in June, 1974, he was amazed that there were no digital circuit specialists in the audio department. De Kruiff recruited new digital engineers, bought computer-aided design tools, and decided that the project should go digital.
Within a few months, Ottens’s engineers had rigged up a digital system. They used an audio signal that was representative of an acoustical wave, sampled it to change it to digital form, and encoded it as a series of pulses. On the disk itself, they varied the length of the “dimples” that were used to represent the sound so that the rising and falling edges of the series of pulses corresponded to the dimples’ walls. A helium-neon laser was reflected from the dimples to photodetectors that were connected to a digital-to-analog converter.
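The chain the engineers describe (sample the acoustical wave, serialize the samples to bits, and let each transition mark a dimple wall) can be sketched in miniature. This is a toy illustration under assumed parameters (8-bit samples, a 1 kHz tone at an 8 kHz sampling rate); the commercial format used far more elaborate modulation and error-correction coding:

```python
import math

def sample_wave(freq_hz, sample_rate_hz, n_samples):
    """Sample a sine wave and quantize to 8-bit unsigned values."""
    return [int(round(127.5 + 127.5 *
                      math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)))
            for n in range(n_samples)]

def to_bits(samples):
    """Serialize each 8-bit sample, most significant bit first."""
    return [(s >> b) & 1 for s in samples for b in range(7, -1, -1)]

def dimple_lengths(bits):
    """Treat every 1-bit as an edge (a dimple wall) and return the
    run lengths between successive edges."""
    runs, length = [], 0
    for bit in bits:
        length += 1
        if bit:
            runs.append(length)
            length = 0
    return runs

samples = sample_wave(1000, 8000, 8)   # one cycle of a 1 kHz tone
bits = to_bits(samples)                # 64 bits
dimples = dimple_lengths(bits)         # lengths the laser would read back
```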
In 1978, Philips demonstrated a prototype for Polygram (a West German company) and persuaded Polygram to develop an inexpensive disk material with the appropriate optical qualities. Most important was that the material could not warp. Polygram spent about $150,000 and three months to develop the disk. In addition, it was determined that the gallium-arsenide (GaAs) laser would be used in the project. Sharp Corporation agreed to manufacture a long-life GaAs diode laser to Philips’ specifications.

The optical-system designers wanted to reduce the number of parts in order to decrease manufacturing costs and improve reliability. Therefore, the lenses were simplified, and considerable work was devoted to developing an error-correction code. Philips and Sony engineers also worked together to create a standard format. In 1983, Philips made almost 100,000 units of optical disks.
[Figure: An optical disk, an optical memory, showing the laser beam, the direction of disk rotation, and the direction of laser travel.]
Consequences

In 1983, one of the most successful consumer products of all time was introduced: the optical-disk system. The overwhelming success of optical-disk reproduction led to the growth of a multibillion-dollar industry around optical information and laid the groundwork for a whole crop of technologies that promise to revolutionize computer data storage. Common optical-disk products are the compact disc (CD), the compact disc read-only memory (CD-ROM), the write-once, read-many (WORM) disk, the erasable disk, and CD-I (interactive CD).
The CD-ROM, the WORM, and the erasable optical disk, all of which are used in computer applications, can hold, respectively, more than 550 megabytes, from 200 to 800 megabytes, and 650 megabytes of data.
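To put such capacities in audio terms, the compact disc's sampling parameters (44.1 kHz, 16 bits, two channels; these are the well-known Red Book standard values, not figures from this text) give the raw data rate, and from it the playing time a disk of a given capacity can carry:

```python
SAMPLE_RATE_HZ = 44_100   # Red Book audio sampling rate
BITS_PER_SAMPLE = 16
CHANNELS = 2

# Raw audio data rate in bytes per second.
bytes_per_second = SAMPLE_RATE_HZ * BITS_PER_SAMPLE // 8 * CHANNELS  # 176,400

# Minutes of uncompressed stereo audio in a 650-megabyte disk.
capacity_bytes = 650 * 1_000_000
minutes = capacity_bytes / bytes_per_second / 60   # roughly an hour
```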
The CD-ROM is a nonerasable disc that is used to store computer data. After the write-once operation is performed, a WORM becomes a read-only optical disk. An erasable optical disk can be erased and rewritten easily. CD-ROMs, coupled with expert-system technology, are expected to make data retrieval easier. The CD-ROM, the WORM, and the erasable optical disk may replace magnetic hard and floppy disks as computer data storage devices.

See also Bubble memory; Compact disc; Computer chips; Floppy disk; Hard disk; Holography.
Further Reading

Fox, Barry. “Head to Head in the Recording Wars.” New Scientist 136, no. 1843 (October 17, 1992).

Goff, Leslie. “Philips’ Eye on the Future.” Computerworld 33, no. 32 (August 9, 1999).

Kolodziej, Stan. “Optical Discs: The Dawn of a New Era in Mass Storage.” Canadian Datasystems 14, no. 9 (September, 1982): 36-39.

Savage, Maria. “Beyond Film.” Bulletin of the American Society for Information Science 7, no. 1 (October, 1980).
Orlon
The invention: A synthetic fiber made from polyacrylonitrile that has become widely used in textiles and in the preparation of high-strength carbon fibers.

The people behind the invention:
Herbert Rein (1899-1955), a German chemist
Ray C. Houtz (1907- ), an American chemist
A Difficult Plastic
“Polymers” are large molecules that are made up of chains of many smaller molecules, called “monomers.” Materials that are made of polymers are also called polymers, and some polymers, such as proteins, cellulose, and starch, occur in nature. Most polymers, however, are synthetic materials, which means that they were created by scientists.

The twenty-year period beginning in 1930 was the age of great discoveries in polymers by both chemists and engineers. During this time, many of the synthetic polymers, which are also known as plastics, were first made and their uses found. Among these polymers were nylon, polyester, and polyacrylonitrile. The last of these materials, polyacrylonitrile (PAN), was first synthesized by German chemists in the late 1920’s. They linked more than one thousand of the small, organic molecules of acrylonitrile to make a polymer. The polymer chains of this material had the properties that were needed to form strong fibers, but there was one problem. Instead of melting when heated to a high temperature, PAN simply decomposed. This made it impossible, with the technology that existed then, to make fibers.
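The scale of “more than one thousand” linked monomers is easy to check. Acrylonitrile has the formula C3H3N; its molar mass (computed here from standard atomic masses, figures not given in the text) puts a thousand-unit chain above 50,000 g/mol:

```python
# Standard atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007}

def molar_mass(formula):
    """Molar mass from a mapping of element symbol to atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

# Acrylonitrile, C3H3N: about 53 g/mol per repeat unit.
acrylonitrile = molar_mass({"C": 3, "H": 3, "N": 1})

# "More than one thousand" repeat units in a single chain:
chain_mass = 1000 * acrylonitrile   # above 50,000 g/mol
```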
The best method available to industry at that time was the process of melt spinning, in which fibers were made by forcing molten polymer through small holes and allowing it to cool. Researchers realized that, if PAN could be put into a solution, the same apparatus could be used to spin PAN fibers. Scientists in Germany and the United States tried to find a solvent or liquid that would dissolve PAN, but they were unsuccessful until World War II began.
Fibers for War

In 1938, the German chemist Walter Reppe developed a new class of organic solvents called “amides.” These new liquids were able to dissolve many materials, including some of the recently discovered polymers. When World War II began in 1939, both the Germans and the Allies needed to develop new materials for the war effort. Materials such as rubber and fibers were in short supply. Thus, there was increased governmental support for chemical and industrial research on both sides of the war. This support was to result in two independent solutions to the PAN problem.
In 1942, Herbert Rein, while working for I. G. Farben in Germany, discovered that PAN fibers could be produced from a solution of polyacrylonitrile dissolved in the newly synthesized solvent dimethylformamide. At the same time, Ray C. Houtz, who was working for E. I. Du Pont de Nemours in Wilmington, Delaware, found that the related solvent dimethylacetamide would also form excellent PAN fibers. His work was patented, and some fibers were produced for use by the military during the war. In 1950, Du Pont began commercial production of a form of polyacrylonitrile fibers called Orlon. The Monsanto Company followed with a fiber called Acrilan in 1952, and other companies began to make similar products in 1958.

There are two ways to produce PAN fibers. In both methods, polyacrylonitrile is first dissolved in a suitable solvent. The solution is next forced through small holes in a device called a “spinneret.” The solution emerges from the spinneret as thin streams of a thick, gooey liquid. In the “wet spinning method,” the streams then enter another liquid (usually water or alcohol), which extracts the solvent from the solution, leaving behind the pure PAN fiber. After air drying, the fiber can be treated like any other fiber. The “dry spinning method” uses no liquid. Instead, the solvent is evaporated from the emerging streams by means of hot air, and again the PAN fiber is left behind.

In 1944, another discovery was made that is an important part of the polyacrylonitrile fiber story. W. P. Coxe of Du Pont and L. L. Winter at Union Carbide Corporation found that, when PAN fibers are heated under certain conditions, the polymer decomposes and changes into graphite (one of the elemental forms of carbon) but still keeps its fiber form. In contrast to most forms of graphite, these fibers were exceptionally strong. These were the first carbon fibers ever made. Originally known as “black Orlon,” they were first produced commercially by the Japanese in 1964, but they were too weak to find many uses. After new methods of graphitization were developed jointly by labs in Japan, Great Britain, and the United States, the strength of the carbon fibers was increased, and the fibers began to be used in many fields.
Impact
As had been predicted earlier, PAN fibers were found to have some very useful properties. Their discovery and commercialization helped pave the way for the acceptance and wide use of polymers. The fibers derive their properties from the stiff, rodlike structure of polyacrylonitrile. Known as acrylics, these fibers are more durable than cotton, and they are the best alternative to wool for sweaters. Acrylics are resistant to heat and chemicals, can be dyed easily, resist fading or wrinkling, and are mildew-resistant. Thus, after their introduction, PAN fibers were very quickly made into yarns, blankets, draperies, carpets, rugs, sportswear, and various items of clothing. Often, the fibers contain small amounts of other polymers that give them additional useful properties.

A significant amount of PAN fiber is used in making carbon fibers. These lightweight fibers are stronger for their weight than any known material, and they are used to make high-strength composites for applications in aerospace, the military, and sports. A “fiber composite” is a material made from two parts: a fiber, such as carbon or glass, and something to hold the fibers together, which is usually a plastic called an “epoxy.” Fiber composites are used in products that require great strength and light weight. Their applications can be as ordinary as a tennis racket or fishing pole or as exotic as an airplane tail or the body of a spacecraft.

See also Buna rubber; Neoprene; Nylon; Plastic; Polyester; Polyethylene; Polystyrene.
Further Reading

Handley, Susannah. Nylon: The Story of a Fashion Revolution: A Celebration of Design from Art Silk to Nylon and Thinking Fibres. Baltimore: Johns Hopkins University Press, 1999.

Hunter, David. “Du Pont Bids Adieu to Acrylic Fibers.” Chemical Week 146, no. 24 (June 20, 1990).

Kornheiser, Tony. “So Long, Orlon.” Washington Post (June 13, 1990).

Seymour, Raymond Benedict, and Roger Stephen Porter. Manmade Fibers: Their Origin and Development. New York: Elsevier Applied Science, 1993.
Pacemaker

The invention: A small device using transistor circuitry that regulates the heartbeat of the patient in whom it is surgically emplaced.

The people behind the invention:
Ake Senning (1915- ), a Swedish physician
Rune Elmquist, co-inventor of the first pacemaker
Paul Maurice Zoll (1911- ), an American cardiologist

Cardiac Pacing

The fundamentals of cardiac electrophysiology (the electrical activity of the heart) were determined during the eighteenth century; the first successful cardiac resuscitation by electrical stimulation occurred in 1774. The use of artificial pacemakers for resuscitation was demonstrated in 1929 by Mark Lidwell. Lidwell and his coworkers developed a portable apparatus that could be connected to a power source. The pacemaker was used successfully on several stillborn infants after other methods of resuscitation failed. Nevertheless, these early machines were unreliable.

Ake Senning’s first experience with the effect of electrical stimulation on cardiac physiology was memorable; grasping a radio ground wire, Senning felt a brief episode of ventricular arrhythmia (irregular heartbeat). Later, he was able to apply a similar electrical stimulation to control a heartbeat during surgery.

The principle of electrical regulation of the heart was valid. It was shown that pacemakers introduced intravenously into the sinus node area of a dog’s heart could be used to control the heartbeat rate. Although Paul Maurice Zoll utilized a similar apparatus in several patients with cardiac arrhythmia, it was not appropriate for extensive clinical use; it was large and often caused unpleasant sensations or burns. In 1957, however, Ake Senning observed that attaching stainless steel electrodes to a child’s heart made it possible to regulate the heart’s rate of contraction. Senning considered this to represent the beginning of the era of clinical pacing.
Development of Cardiac Pacemakers

Senning’s observations of the successful use of the cardiac pacemaker had allowed him to identify the problems inherent in the device. He realized that the attachment of the device to the lower, ventricular region of the heart made possible more reliable control, but other problems remained unsolved. It was inconvenient, for example, to carry the machine externally; a cord was wrapped around the patient that allowed the pacemaker to be recharged, which had to be done frequently. Also, for unknown reasons, heart resistance would increase with use of the pacemaker, which meant that increasingly large voltages had to be used to stimulate the heart. Levels as high as 20 volts could cause quite a “start” in the patient. Furthermore, there was a continuous threat of infection.

In 1957, Senning and his colleague Rune Elmquist developed a pacemaker that was powered by rechargeable nickel-cadmium batteries, which had to be recharged once a month. Although Senning and Elmquist did not yet consider the pacemaker ready for human testing, fate intervened. A forty-three-year-old man was admitted to the hospital suffering from an atrioventricular block, an inability of the electrical stimulus to travel along the conductive fibers of the “bundle of His” (a band of cardiac muscle fibers). As a result of this condition, the patient required repeated cardiac resuscitation. Similar types of heart block were associated with a mortality rate higher than 50 percent per year and nearly 95 percent over five years.
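The five-year figure is roughly what compounding the annual rate predicts. As a quick illustrative check, assuming (purely for the arithmetic) a constant 50 percent annual mortality applied independently each year:

```python
# Compound a constant annual mortality rate over five years.
# Treating the risk as constant and independent from year to year is an
# illustrative assumption, not a claim made by the article.
annual_mortality = 0.50
five_year_survival = (1 - annual_mortality) ** 5   # 0.5**5 = 0.03125
five_year_mortality = 1 - five_year_survival
print(five_year_mortality)  # 0.96875, i.e., about 97 percent over five years
```

That result sits slightly above the “nearly 95 percent” cited, consistent with an annual rate “higher than 50 percent.”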
Senning implanted two pacemakers (one failed) into the myocardium of the patient’s heart, one of which provided a regulatory rate of 64 beats per minute. Although the pacemakers required periodic replacement, the patient remained alive and active for twenty years. (He later became president of the Swedish Association for Heart and Lung Disease.)
During the next five years, the development of more reliable and more complex pacemakers continued, and implanting the pacemaker through the vein rather than through the thorax made the procedure simpler. The first pacemakers were of the “asynchronous” type, which generated a regular charge that overrode the natural pacemaker in the heart. The rate could be set by the physician but could not be altered if the need arose. In 1963, an atrial-triggered synchronous pacemaker was installed by a Swedish team. The advantage of this apparatus lay in its ability to trigger a heart contraction only when the normal heart rhythm was interrupted. Most of these pacemakers contained a sensing device that detected the atrial impulse and generated an electrical discharge only when the heart rate fell below 68 to 72 beats per minute.
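The sensing behavior just described amounts to a simple inhibition rule: stimulate only when the interval since the last natural beat exceeds what the threshold rate allows. The sketch below is a simplified illustration of that logic, not the actual 1963 circuitry; the single 70-beats-per-minute threshold stands in for the 68-to-72 range mentioned above.

```python
def needs_pacing(seconds_since_last_beat: float, threshold_bpm: float = 70.0) -> bool:
    """Demand (synchronous) pacing logic: fire only when the natural rhythm
    lapses, i.e., when no beat has been sensed within one beat interval at
    the threshold rate; otherwise stay silent and let the heart lead."""
    max_interval = 60.0 / threshold_bpm   # longest tolerated gap, in seconds
    return seconds_since_last_beat > max_interval

print(needs_pacing(0.8))  # False: natural rhythm (~75 bpm) is fast enough, so inhibit
print(needs_pacing(1.2))  # True: the gap exceeds ~0.86 s, so deliver a stimulus
```

An asynchronous unit, by contrast, would fire on a fixed schedule regardless of what the heart was doing.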
The biggest problems during this period lay in the size of the pacemaker and the short life of the battery. The expiration of the electrical impulse sometimes caused the death of the patient. In addition, the most reliable method of checking the energy level of the battery was to watch for a decreased pulse rate. As improvements were made in electronics, the pacemaker became smaller, and in 1972, the more reliable lithium-iodine batteries were introduced. These batteries made it possible to store more energy and to monitor the energy level more effectively. The use of this type of power source essentially eliminated the battery as the limiting factor in the longevity of the pacemaker. The period of time that a pacemaker could operate continuously in the body increased from a period of days in 1958 to five to ten years by the 1970’s.
Consequences

The development of electronic heart pacemakers revolutionized cardiology. Although the initial machines were used primarily to control cardiac bradycardia, the often life-threatening slowing of the heartbeat, a wide variety of arrhythmias and problems with cardiac output can now be controlled through the use of these devices. The success associated with the surgical implantation of pacemakers is attested by the frequency of its use. Prior to 1960, only three pacemakers had been implanted. During the 1990’s, however, some 300,000 were implanted each year throughout the world. In the United States, the prevalence of implants is on the order of 1 per 1,000 persons in the population.

Pacemaker technology continues to improve. Newer models can sense pH and oxygen levels in the blood, as well as respiratory rate. They have become further sensitized to minor electrical disturbances and can adjust accordingly. The use of easily sterilized circuitry has eliminated the danger of infection. Once the pacemaker has been installed in the patient, the basic electronics require no additional attention. With the use of modern pacemakers, many forms of electrical arrhythmias need no longer be life-threatening.

See also Artificial heart; Contact lenses; Coronary artery bypass surgery; Electrocardiogram; Hearing aid; Heart-lung machine.

Further Reading

Bigelow, W. G. Cold Hearts: The Story of Hypothermia and the Pacemaker in Heart Surgery. Toronto: McClelland and Stewart, 1984.

Greatbatch, Wilson. The Making of the Pacemaker: Celebrating a Lifesaving Invention. Amherst, N.Y.: Prometheus Books, 2000.

“The Pacemaker.” Newsweek 130, no. 24A (Winter, 1997/1998).

Thalen, H. J. The Artificial Cardiac Pacemaker: Its History, Development and Clinical Application. London: Heinemann Medical, 1969.
Pap test

The invention: A cytologic technique for diagnosing uterine cancer, the second most common fatal cancer in American women.

The people behind the invention:
George N. Papanicolaou (1883-1962), a Greek-born American physician and anatomist
Charles Stockard (1879-1939), an American anatomist
Herbert Traut (1894-1972), an American gynecologist

Cancer in History

Cancer, first named by the ancient Greek physician Hippocrates of Cos, is one of the most painful and dreaded forms of human disease. It occurs when body cells run wild and interfere with the normal activities of the body. The early diagnosis of cancer is extremely important because early detection often makes it possible to effect successful cures. The modern detection of cancer is usually done by the microscopic examination of the cancer cells, using the techniques of the area of biology called “cytology,” or cell biology.

Development of cancer cytology began in 1867, after L. S. Beale reported tumor cells in the saliva from a patient who was afflicted with cancer of the pharynx. Beale recommended the use in cancer detection of microscopic examination of cells shed or removed (exfoliated) from organs including the digestive, the urinary, and the reproductive tracts. Soon, other scientists identified numerous striking differences between normal and cancerous cells, including cell size and shape, the size of cell nuclei, and the complexity of cell nuclei.

Modern cytologic detection of cancer evolved from the work of George N. Papanicolaou, a Greek physician who trained at the University of Athens Medical School. In 1913, he emigrated to the United States.

In 1917, he began studying sex determination of guinea pigs with Charles Stockard at New York’s Cornell Medical College. Papanicolaou’s efforts required him to obtain ova (egg cells) at a precise period in their maturation cycle, a process that required an indicator of the time at which the animals ovulated. In search of this indicator, Papanicolaou designed a method that involved microscopic examination of the vaginal discharges from female guinea pigs.

Initially, Papanicolaou sought traces of blood, such as those seen in the menstrual discharges from both primates and humans. Papanicolaou found no blood in the guinea pig vaginal discharges. Instead, he noticed changes in the size and the shape of the uterine cells shed in these discharges. These changes recurred in a fifteen- to sixteen-day cycle that correlated well with the guinea pig menstrual cycle.
“New Cancer Detection Method”

Papanicolaou next extended his efforts to the study of humans. This endeavor was designed originally to identify whether comparable changes in the exfoliated cells of the human vagina occurred in women. Its goal was to gain an understanding of the human menstrual cycle. In the course of this work, Papanicolaou observed distinctive abnormal cells in the vaginal fluid from a woman afflicted with cancer of the cervix. This led him to begin to attempt to develop a cytologic method for the detection of uterine cancer, the second most common type of fatal cancer in American women of the time.

In 1928, Papanicolaou published his cytologic method of cancer detection in the Proceedings of the Third Race Betterment Conference, held in Battle Creek, Michigan. The work was received well by the news media (for example, the January 5, 1928, New York World credited him with a “new cancer detection method”). Nevertheless, the publication, like others he produced over the next ten years, attracted little interest from gynecologists of the time. Rather, they preferred the standard methodology of uterine cancer diagnosis (cervical biopsy and curettage).

Consequently, in 1932, Papanicolaou turned his energy toward studying human reproductive endocrinology problems related to the effects of hormones on cells of the reproductive system. One example of this work was published in a 1933 issue of The American Journal of Anatomy, where he described “the sexual cycle in the human female.” Other such efforts resulted in better understanding of reproductive problems that include amenorrhea and menopause.

It was not until Papanicolaou’s collaboration with gynecologist Herbert Traut (beginning in 1939), which led to the publication of Diagnosis of Uterine Cancer by the Vaginal Smear (1943), that clinical acceptance of the method began to develop. Their monograph documented an impressive, irrefutable group of studies of both normal and disease states that included nearly two hundred cases of cancer of the uterus.

Soon, many other researchers began to confirm these findings; by 1948, the newly named American Cancer Society noted that the “Pap” smear seemed to be a very valuable tool for detecting vaginal cancer. Wide acceptance of the Pap test followed, and, beginning in 1947, hundreds of physicians from all over the world flocked to Papanicolaou’s course on the subject. They learned his smear/diagnosis techniques and disseminated them around the world.
Impact

The Pap test has been cited by many physicians as being the most significant and useful modern discovery in the field of cancer research. One way of measuring its impact is the realization that the test allows the identification of uterine cancer in the earliest stages, long before other detection methods can be used. Moreover, because of resultant early diagnosis, the disease can be cured in more than 80 percent of all cases identified by the test. In addition, Pap testing allows the identification of cancer of the uterine cervix so early that its cure rate can be nearly 100 percent.

Papanicolaou extended the use of the smear technique from examination of vaginal discharges to diagnosis of cancer in many other organs from which scrapings, washings, and discharges can be obtained. These tissues include the colon, the kidney, the bladder, the prostate, the lung, the breast, and the sinuses. In most cases, such examination of these tissues has made it possible to diagnose cancer much sooner than is possible by using other existing methods. As a result, the smear method has become a basis of cancer control in national health programs throughout the world.

See also Amniocentesis; Birth control pill; Mammography; Syphilis test; Ultrasound.

Further Reading

Apgar, Barbara, Lawrence L. Gabel, and Robert T. Brown. Oncology. Philadelphia: W. B. Saunders, 1998.

Entman, Stephen S., and Charles B. Rush. Office Gynecology. Philadelphia: Saunders, 1995.

Glass, Robert H., Michèle G. Curtis, and Michael P. Hopkins. Glass’s Office Gynecology. 5th ed. Baltimore: Williams & Wilkins, 1999.

Rushing, Lynda, and Nancy Joste. Abnormal Pap Smears: What Every Woman Needs to Know. Amherst, N.Y.: Prometheus Books, 2001.
Penicillin

The invention: The first successful and widely used antibiotic drug, penicillin has been called the twentieth century’s greatest “wonder drug.”

The people behind the invention:
Sir Alexander Fleming (1881-1955), a Scottish bacteriologist, cowinner of the 1945 Nobel Prize in Physiology or Medicine
Baron Florey (1898-1968), an Australian pathologist, cowinner of the 1945 Nobel Prize in Physiology or Medicine
Ernst Boris Chain (1906-1979), an émigré German biochemist, cowinner of the 1945 Nobel Prize in Physiology or Medicine

The Search for the Perfect Antibiotic

During the early twentieth century, scientists were aware of antibacterial substances but did not know how to make full use of them in the treatment of diseases. Sir Alexander Fleming discovered penicillin in 1928, but he was unable to duplicate his laboratory results of its antibiotic properties in clinical tests; as a result, he did not recognize the medical potential of penicillin. Between 1935 and 1940, penicillin was purified, concentrated, and clinically tested by pathologist Baron Florey, biochemist Ernst Boris Chain, and members of their Oxford research group. Their achievement has since been regarded as one of the greatest medical discoveries of the twentieth century.

Florey was a professor at Oxford University in charge of the Sir William Dunn School of Pathology. Chain had worked for two years at Cambridge University in the laboratory of Frederick Gowland Hopkins, an eminent chemist and discoverer of vitamins. Hopkins recommended Chain to Florey, who was searching for a candidate to lead a new biochemical unit in the Dunn School of Pathology.

In 1938, Florey and Chain formed a research group to investigate the phenomenon of antibiosis, or the antagonistic association between different forms of life. The union of Florey’s medical knowledge and Chain’s biochemical expertise proved to be an ideal combination for exploring the antibiosis potential of penicillin. Florey and Chain began their investigation with a literature search in which Chain came across Fleming’s work and added penicillin to their list of potential antibiotics.

Their first task was to isolate pure penicillin from a crude liquid extract. A culture of Fleming’s original Penicillium notatum was maintained at Oxford and was used by the Oxford group for penicillin production. Extracting large quantities of penicillin from the medium was a painstaking task, as the solution contained only one part of the antibiotic in ten million. When enough of the raw juice was collected, the Oxford group focused on eliminating impurities and concentrating the penicillin. The concentrated liquid was then freeze-dried, leaving a soluble brown powder.
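The scale of that dilution is worth pausing over. Combining the one-part-in-ten-million figure with a single 200-milligram dose (the amount given in the first treatment described below) gives a rough sense of the volumes involved. This is illustrative arithmetic only, and it assumes perfect recovery of the antibiotic, which the Oxford group certainly did not achieve:

```python
# One part of penicillin per ten million parts of culture medium.
parts_of_medium_per_part = 10_000_000
dose_grams = 0.200                              # a single 200-milligram injection
medium_grams = dose_grams * parts_of_medium_per_part
print(round(medium_grams / 1000))               # 2000 -- about two metric tons of medium per dose
```

Even with ideal extraction, one dose would require processing on the order of two tons of raw culture fluid, which is why collecting enough “raw juice” was such a painstaking task.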
Spectacular Results

In May, 1940, Florey’s clinical tests of the crude penicillin proved its value as an antibiotic. Following extensive controlled experiments with mice, the Oxford group concluded that they had discovered an antibiotic that was nontoxic and far more effective against pathogenic bacteria than any of the known sulfa drugs. Furthermore, penicillin was not inactivated after injection into the bloodstream but was excreted unchanged in the urine. Continued tests showed that penicillin did not interfere with white blood cells and had no adverse effect on living cells. Bacteria susceptible to the antibiotic included those responsible for gas gangrene, pneumonia, meningitis, diphtheria, and gonorrhea. American researchers later proved that penicillin was also effective against syphilis.

In January, 1941, Florey injected a volunteer with penicillin and found that there were no side effects to treatment with the antibiotic. In February, the group began treatment of Albert Alexander, a forty-three-year-old policeman with a serious staphylococci and streptococci infection that was resisting massive doses of sulfa drugs. Alexander had been hospitalized for two months after an infection in the corner of his mouth had spread to his face, shoulder, and lungs. After receiving an injection of 200 milligrams of penicillin, Alexander showed remarkable progress, and for the next ten days his condition improved. Unfortunately, the Oxford
Sir Alexander Fleming

In 1900 Alexander Fleming (1881-1955) enlisted in the London Scottish Regiment, hoping to see action in the South African (Boer) War then underway between Great Britain and South Africa’s independent Afrikaner republics. However, the war ended too soon for him. So, having come into a small inheritance, he decided to become a physician instead. Accumulating honors and prizes along the way, he succeeded and became a fellow of the Royal College of Surgeons of England in 1909.

His mentor was Sir Almroth Wright. Fleming assisted him at St. Mary’s Hospital in Paddington, and they were at the forefront of the burgeoning field of bacteriology. They were, for example, among the first to treat syphilis with the newly discovered Salvarsan, and they championed immunization through vaccination. With the outbreak of World War I, Fleming followed Wright into the Royal Army Medical Corps, conducting research on battlefield wounds at a laboratory near Boulogne. The infections Fleming inspected horrified him. After the war, again at St. Mary’s Hospital, he dedicated himself to finding antibacterial agents.

He succeeded twice: “lysozyme” in 1921 and penicillin in 1928. To his great disappointment, he was unable to produce pure, potent concentrations of the drug. That had to await the work of Ernst Chain and Howard Florey in 1940. Meanwhile, Fleming studied the antibacterial properties of sulfa drugs. He was overjoyed that Chain and Florey succeeded where he had failed and that penicillin saved lives during World War II and afterward, but he was taken aback when, with them, he began to receive a stream of tributes, awards, decorations, honorary degrees, and fellowships, including the Nobel Prize in Physiology or Medicine in 1945. He was by nature a reserved man.

However, he adjusted to his role as one of the most lionized medical researchers of his generation and continued his work, both as a professor of medicine at the University of London from 1928 until 1948 and as director of the same St. Mary’s Hospital laboratory where he had started his career (renamed the Wright-Fleming Institute in 1948). He died soon after he retired in 1955.
production facility was unable to generate enough penicillin to overcome Alexander’s advanced infection completely, and he died on March 15. A later case involving a fourteen-year-old boy with staphylococcal septicemia and osteomyelitis had a more spectacular result: The patient made a complete recovery in two months. In all the early clinical treatments, patients showed vast improvement, and most recovered completely from infections that resisted all other treatment.
Impact

Penicillin is among the greatest medical discoveries of the twentieth century. Florey and Chain’s chemical and clinical research brought about a revolution in the treatment of infectious disease. Almost every organ in the body is vulnerable to bacteria. Before penicillin, the only antimicrobial drugs available were quinine, arsenic, and sulfa drugs. Of these, only the sulfa drugs were useful for treatment of bacterial infection, but their high toxicity often limited their use. With this small arsenal, doctors were helpless to treat thousands of patients with bacterial infections.

The work of Florey and Chain achieved particular attention because of World War II and the need for treatments of such scourges as gas gangrene, which had infected the wounds of numerous World War I soldiers. With the help of Florey and Chain’s Oxford group, scientists at the U.S. Department of Agriculture’s Northern Regional Research Laboratory developed a highly efficient method for producing penicillin using fermentation. After an extended search, scientists were also able to isolate a more productive penicillin strain, Penicillium chrysogenum. By 1945, a strain was developed that produced five hundred times more penicillin than Fleming’s original mold had.

Penicillin, the first of the “wonder drugs,” remains one of the most powerful antibiotics in existence. Diseases such as pneumonia, meningitis, and syphilis are still treated with penicillin. Penicillin and other antibiotics also had a broad impact on other fields of medicine, as major operations such as heart surgery, organ transplants, and management of severe burns became possible once the threat of bacterial infection was minimized.

Florey and Chain received numerous awards for their achievement, the greatest of which was the 1945 Nobel Prize in Physiology or Medicine, which they shared with Fleming for his original discovery. Florey was among the most effective medical scientists of his generation, and Chain earned similar accolades in the science of biochemistry. This combination of outstanding medical and chemical expertise made possible one of the greatest discoveries in human history.

See also Antibacterial drugs; Artificial hormone; Genetically engineered insulin; Polio vaccine (Sabin); Polio vaccine (Salk); Reserpine; Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading

Bickel, Lennard. Florey, The Man Who Made Penicillin. Carlton South, Victoria, Australia: Melbourne University Press, 1995.

Clark, Ronald William. The Life of Ernst Chain: Penicillin and Beyond. New York: St. Martin’s Press, 1985.

Hughes, William Howard. Alexander Fleming and Penicillin. Hove: Wayland, 1979.

Mateles, Richard I. Penicillin: A Paradigm for Biotechnology. Chicago: Candida Corporation, 1998.
Personal computer

The invention: Originally a tradename of the IBM Corporation, “personal computer” has become a generic term for increasingly powerful desktop computing systems using microprocessors.

The people behind the invention:
Tom J. Watson (1874-1956), the founder of IBM, who set corporate philosophy and marketing principles
Frank Cary (1920- ), the chief executive officer of IBM at the time of the decision to market a personal computer
John Opel (1925- ), a member of the Corporate Management Committee
George Belzel, a member of the Corporate Management Committee
Paul Rizzo, a member of the Corporate Management Committee
Dean McKay (1921- ), a member of the Corporate Management Committee
William L. Sydnes, the leader of the original twelve-member design team

Shaking up the System

For many years, the International Business Machines (IBM) Corporation had been set in its ways, sticking to traditions established by its founder, Tom Watson, Sr. If it hoped to enter the new microcomputer market, however, it was clear that only nontraditional methods would be useful. Apple Computer was already beginning to make inroads into large IBM accounts, and IBM stock was starting to stagnate on Wall Street. A 1979 Business Week article asked: “Is IBM just another stodgy, mature company?” The microcomputer market was expected to grow more than 40 percent in the early 1980’s, but IBM would have to make some changes in order to bring a competitive personal computer (PC) to the market.

The decision to build and market the PC was made by the company’s Corporate Management Committee (CMC). CMC members included chief executive officer Frank Cary, John Opel, George Belzel, Paul Rizzo, Dean McKay, and three senior vice presidents. In July of 1980, Cary gave the order to proceed. He wanted the PC to be designed and built within a year. The CMC approved the initial design of the PC one month later. Twelve engineers, with William L. Sydnes as their leader, were appointed as the design team. At the end of 1980, the team had grown to 150.

Most parts of the PC had to be produced outside IBM. Microsoft Corporation won the contract to produce the PC’s disk operating system (DOS) and the BASIC (Beginner’s All-purpose Symbolic Instruction Code) language that is built into the PC’s read-only memory (ROM). Intel Corporation was chosen to make the PC’s central processing unit (CPU) chip, the “brains” of the machine. Outside programmers wrote software for the PC. Ten years earlier, this strategy would have been unheard of within IBM since all aspects of manufacturing, service, and repair were traditionally taken care of in-house.
Marketing the System<br />
Personal computer / 559<br />
IBM hired a New York firm to design a media campaign for the new PC. Readers of magazines and newspapers saw the character of Charlie Chaplin advertising the new PC. The machine was delivered on schedule on August 12, 1981. The price of the basic “system unit” was $1,565. A system with 64 kilobytes of random access memory (RAM), a 13-centimeter single-sided disk drive holding 160 kilobytes, and a monitor was priced at about $3,000. A system with color graphics, a second disk drive, and a dot matrix printer cost about $4,500.
Many useful computer programs had been adapted to the PC and were available when it was introduced. VisiCalc from Personal Software—the program that is credited with “making” the microcomputer revolution—was one of the first available. Other packages included a comprehensive accounting system by Peachtree Software and a word processing package called Easywriter by Information Unlimited Software.

As the selection of software grew, so did sales. In the first year after its introduction, the IBM PC went from a zero market share to 28 percent of the market. Yet the credit for the success of the PC does not go to IBM alone. Many hundreds of companies were able to produce software and hardware for the PC. Within two years, powerful products such as Lotus Corporation’s 1-2-3 business spreadsheet had come to the market. Many believed that Lotus 1-2-3 was the program that caused the PC to become so phenomenally successful. Other companies produced hardware features (expansion boards) that increased the PC’s memory storage or enabled the machine to “drive” audiovisual presentations such as slide shows. Businesses especially found the PC to be a powerful tool. The PC has survived because of its expansion capability.
IBM has continued to upgrade the PC. In 1983, the PC/XT was introduced. It had more expansion slots and a fixed disk offering 10 million bytes of storage for programs and data. Many of the companies that made expansion boards found themselves able to make whole PCs. An entire range of PC-compatible systems was introduced to the market, many offering features that IBM did not include in the original PC. The original PC has become a whole family of computers, sold by both IBM and other companies. The hardware and software continue to evolve; each generation offers more computing power and storage with a lower price tag.
Consequences

IBM’s entry into the microcomputer market gave microcomputers credibility. Apple Computer’s earlier introduction of its computer did not win wide acceptance in the corporate world. Apple did, however, thrive within the educational marketplace. IBM’s name already carried much clout, because IBM was a successful company. Apple Computer represented all that was great about the “new” microcomputer, but the IBM PC benefited from IBM’s image of stability and success.

IBM coined the term personal computer and its acronym PC. The acronym PC is now used almost universally to refer to the microcomputer. The term also had great significance for users who had previously shared a large mainframe computer with the whole company. This was their personal computer. That was important to many PC buyers, since the company mainframe was perceived as complicated and slow. The PC owner now had complete control.
See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; Floppy disk; Hard disk; IBM Model 1401 computer; Internet; Supercomputer; UNIVAC computer.

Further Reading

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.

Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.

Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. New York: McGraw-Hill, 2000.

Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.
Photoelectric cell

The invention: The first devices to make practical use of the photoelectric effect, photoelectric cells were of decisive importance in developing the electron theory of metals.

The people behind the invention:
Julius Elster (1854-1920), a German experimental physicist
Hans Friedrich Geitel (1855-1923), a German physicist
Wilhelm Hallwachs (1859-1922), a German physicist
Early Photoelectric Cells

The photoelectric effect was known to science in the early nineteenth century, when the French physicist Alexandre-Edmond Becquerel wrote of it in connection with his work on glass-enclosed primary batteries. He discovered that the voltage of his batteries increased with intensified illumination and that green light produced the highest voltage. Because Becquerel researched batteries exclusively, however, the liquid-type photocell was not developed commercially until 1929, when the Wein and Arcturus cells were introduced. These cells were miniature voltaic cells arranged so that light falling on one side of the front plate generated a considerable amount of electrical energy. The cells had short lives, unfortunately: when subjected to cold, the electrolyte froze, and when subjected to heat, the gas generated would expand and explode the cells.
What came to be known as the photoelectric cell, a device connecting light and electricity, had its beginnings in the 1880’s. At that time, scientists noticed that a negatively charged metal plate lost its charge much more quickly in the light (especially ultraviolet light) than in the dark. Several years later, researchers demonstrated that this phenomenon was not an “ionization” effect caused by the air’s increased conductivity, since the phenomenon took place in a vacuum but did not take place if the plate was positively charged. Instead, the phenomenon had to be attributed to light exciting the electrons of the metal and causing them to fly off: A neutral plate even acquired a slight positive charge under the influence of strong light. Study of this effect not only contributed evidence to an electronic theory of matter—and, as a result of some brilliant mathematical work by the physicist Albert Einstein, later increased knowledge of the nature of radiant energy—but also further linked the studies of light and electricity. It even explained certain chemical phenomena, such as the process of photography. It is important to note that all the experimental work on photoelectricity accomplished prior to the work of Julius Elster and Hans Friedrich Geitel was carried out before the existence of the electron was known.
Explaining Photoelectric Emission

After the English physicist Sir Joseph John Thomson’s discovery of the electron in 1897, investigators soon realized that the photoelectric effect was caused by the emission of electrons under the influence of radiation. The fundamental theory of photoelectric emission was put forward by Einstein in 1905 on the basis of the German physicist Max Planck’s quantum theory (1900). Thus, it was not surprising that light was found to have an electronic effect. Since it was known that the longer radio waves could shake electrons into resonant oscillations and the shorter X rays could detach electrons from the atoms of gases, the intermediate waves of visible light would have been expected to have some effect upon electrons—such as detaching them from metal plates and thereby setting up a difference of potential. The photoelectric cell, developed by Elster and Geitel in 1904, was a practical device that made use of this effect.
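Einstein’s theory can be stated in a single relation, given here in its standard modern form for clarity rather than taken from the original article: a light quantum of frequency ν delivers energy hν to one electron, which then escapes the metal with kinetic energy at most

$$E_{\max} = h\nu - \phi$$

where h is Planck’s constant and φ, the work function, is the energy needed to free an electron from that particular metal. Light below the threshold frequency ν₀ = φ/h ejects no electrons no matter how intense it is, which is why the effect depends on the color of the light and not merely on its brightness.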
In 1888, Wilhelm Hallwachs observed that an electrically charged zinc electrode loses its charge when exposed to ultraviolet radiation if the charge is negative, but is able to retain a positive charge under the same conditions. The following year, Elster and Geitel discovered a photoelectric effect caused by visible light; however, they used the alkali metals potassium and sodium for their experiments instead of zinc.
The Elster-Geitel photocell (a vacuum emission cell, as opposed to a gas-filled cell) consisted of an evacuated glass bulb containing two electrodes. The cathode consisted of a thin film of a rare, chemically active metal (such as potassium) that lost its electrons fairly readily; the anode was simply a wire sealed in to complete the circuit. This anode was maintained at a positive potential in order to collect the negative charges released by light from the cathode. The Elster-Geitel photocell resembled two other types of vacuum tube in existence at the time: the cathode-ray tube, in which the cathode emitted electrons under the influence of a high potential, and the thermionic valve (a valve that permits the passage of current in one direction only), in which the cathode emitted electrons under the influence of heat. Like both of these vacuum tubes, the photoelectric cell could be classified as an “electronic” device.

Julius Elster and Hans Geitel

Nicknamed the Castor and Pollux of physics after the twins of Greek mythology, Johann Philipp Ludwig Julius Elster and Hans Friedrich Geitel were among the most productive teams in the history of science. Elster, born in 1854, and Geitel, born in 1855, met in 1875 while attending university in Heidelberg, Germany. Graduate studies took them to separate cities, but in 1881 they were together again as mathematics and physics teachers at Herzoglich Gymnasium in Wolfenbüttel. In 1884 they began their scientific collaboration, which lasted more than thirty years and produced more than 150 reports.

Essentially experimentalists, they investigated phenomena that were among the greatest mysteries of the times. Their first works concerned the electrification of flames and the electrical properties of thunderstorms. They went on to study the photoelectric effect, thermal electron emission, practical uses for photocells, and Becquerel rays in the earth and air. They developed a method for measuring electrical phenomena in gases that remained the standard for the following forty years.

Their greatest achievements, however, lay with radioactivity and radiation. Their demonstration that incandescent filaments emitted “negative electricity” proved beyond doubt that electrons, which J. J. Thomson had recently claimed to have detected, did in fact exist. They also proved that radioactivity, such as that from uranium, came wholly from within the atom, not from environmental influences. Ernest Rutherford, the great English physicist, said in 1913 that Elster and Geitel had contributed more to the understanding of terrestrial and atmospheric radioactivity than anyone else.

The pair were practically inseparable until Elster died in 1920. Geitel died three years later.
The new cell, then, emitted electrons when stimulated by light, and at a rate proportional to the intensity of the light. Hence, a current could be obtained from the cell. Yet Elster and Geitel found that their photoelectric currents fell off gradually; they therefore spoke of “fatigue” (instability). It was discovered later that most of this change was not a direct effect of a photoelectric current’s passage; it was not even an indirect effect but was caused by oxidation of the cathode by the air. Since all modern cathodes are enclosed in sealed vessels, that source of change has been completely abolished. Nevertheless, the changes that persist in modern cathodes often are indirect effects of light that can be produced independently of any photoelectric current.
Impact

The Elster-Geitel photocell was, for some twenty years, used in all emission cells adapted for the visible spectrum, and throughout the twentieth century the photoelectric cell has had a wide variety of applications in numerous fields. For example, if products leaving a factory on a conveyor belt were passed between a light and a cell, they could be counted as they interrupted the beam. Persons entering a building could be counted also, and if invisible ultraviolet rays were used, those persons could be detected without their knowledge. Simple relay circuits could be arranged that would automatically switch on street lamps when it grew dark. The sensitivity of the cell with an amplifying circuit enabled it to “see” objects too faint for the human eye, such as minor stars or certain lines in the spectra of elements excited by a flame or discharge. The fact that the current depended on the intensity of the light made it possible to construct photoelectric meters that could judge the strength of illumination without risking human error—for example, to determine the right exposure for a photograph.
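The conveyor-belt and door-counting schemes described above reduce to one operation: watch the photocell’s current and count each interval during which the beam is blocked. A minimal sketch of that logic follows; the current values and threshold are invented for illustration, and the original circuits did this with a relay rather than software:

```python
def count_interruptions(photocurrent_readings, threshold):
    """Count objects passing through a light beam.

    Each time the photocell's current drops below `threshold`
    (the beam is blocked) after having been above it, one object
    is counted.
    """
    count = 0
    beam_clear = True
    for reading in photocurrent_readings:
        if beam_clear and reading < threshold:
            count += 1          # beam just became blocked
            beam_clear = False
        elif reading >= threshold:
            beam_clear = True   # beam restored
    return count

# Simulated current samples in which three objects interrupt the beam.
samples = [9.0, 9.1, 2.0, 1.8, 9.0, 2.2, 9.3, 9.2, 1.5, 1.4, 9.0]
print(count_interruptions(samples, threshold=5.0))  # 3
```

The `beam_clear` flag ensures that one object blocking the beam across many samples is counted only once.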
A further use for the cell was to make talking films possible. The early “talkies” had depended on gramophone records, but it was very difficult to keep the records in time with the film. Now, the waves of speech and music could be recorded in a “sound track” by turning the sound first into current through a microphone and then into light with a neon tube or magnetic shutter; next, the variations in the intensity of this light were photographed on the side of the film. By reversing the process and running the film between a light and a photoelectric cell, the visual signals could be converted back to sound.
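The passage above describes a chain of linear conversions: sound pressure to microphone current, current to light, light to film density, and, on playback, transmitted light to photocell current. Because each stage is (ideally) proportional, the waveform survives the round trip. A toy numeric model of that idea follows; the gain and bias values are arbitrary illustrations, not figures from the text:

```python
# Toy model of a variable-density optical sound track: recording and
# playback are linear maps, so playback(record(x)) recovers x.

def record(audio, gain=0.5, bias=1.0):
    """Audio signal -> light intensity printed onto the film edge."""
    return [bias + gain * sample for sample in audio]

def playback(track, gain=0.5, bias=1.0):
    """Photocell current read from the film -> recovered audio."""
    return [(level - bias) / gain for level in track]

audio = [0.0, 0.8, -0.4, 0.2]
recovered = playback(record(audio))
print(recovered)  # equals the original signal, up to float rounding
```

Any distortion in a real system came from stages that were not quite linear, which is why film sound recording put so much effort into keeping exposure in the linear range of the film.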
See also Alkaline storage battery; Photovoltaic cell; Solar thermal engine.

Further Reading

Hoberman, Stuart. Solar Cell and Photocell Experimenters Guide. Indianapolis, Ind.: H. W. Sams, 1965.

Perlin, John. From Space to Earth: The Story of Solar Electricity. Ann Arbor, Mich.: Aatec Publications, 1999.

Walker, R. C., and T. M. C. Lance. Photoelectric Cell Applications: A Practical Book Describing the Uses of Photoelectric Cells in Television, Talking Pictures, Electrical Alarms, Counting Devices, Etc. 3d ed. London: Sir I. Pitman & Sons, 1938.
Photovoltaic cell

The invention: Drawing their energy directly from the Sun, the first photovoltaic cells powered instruments on early space vehicles and held out hope for future uses of solar energy.

The people behind the invention:
Daryl M. Chapin (1906-1995), an American physicist
Calvin S. Fuller (1902-1994), an American chemist
Gerald L. Pearson (1905- ), an American physicist
Unlimited Energy Source

All the energy that the world has at its disposal ultimately comes from the Sun. Some of this solar energy was trapped millions of years ago in the form of vegetable and animal matter that became the coal, oil, and natural gas that the world relies upon for energy. Some of this fuel is used directly to heat homes and to power factories and gasoline vehicles. Much of this fossil fuel, however, is burned to produce the electricity on which modern society depends.

The amount of energy available from the Sun is difficult to imagine, but some comparisons may be helpful. During each forty-hour period, the Sun provides the earth with as much energy as the earth’s total reserves of coal, oil, and natural gas. It has been estimated that the amount of energy provided by the Sun’s radiation matches the earth’s reserves of nuclear fuel every forty days. The annual solar radiation that falls on about twelve hundred square miles of land in Arizona matched the world’s estimated total annual energy requirement for 1960. Scientists have been searching for many decades for inexpensive, efficient means of converting this vast supply of solar radiation directly into electricity.
The Bell Solar Cell

Throughout its history, the Bell System has needed to transmit, modulate, and amplify electrical signals. Until the 1930’s, these tasks were accomplished by using insulators and metallic conductors. At that time, semiconductors, which have electrical properties between those of insulators and those of conductors, were developed. One of the most important semiconductor materials is silicon, which is one of the most common elements on the earth. Unfortunately, silicon is usually found in the form of compounds such as sand or quartz, and it must be refined and purified before it can be used in electrical circuits. This process required much initial research, and very pure silicon was not available until the early 1950’s.
Electric conduction in silicon is the result of the movement of negative charges (electrons) or positive charges (holes). One way of producing such charges is to add phosphorus or arsenic atoms, which have five outer electrons, to the silicon. This addition creates a type of semiconductor that has excess negative charges (an n-type semiconductor). Adding boron atoms, which have three outer electrons, creates a semiconductor that has excess positive charges (a p-type semiconductor). Calvin Fuller made an important study of the formation of p-n junctions, which are the points at which p-type and n-type semiconductors meet, by using the process of diffusing impurity atoms—that is, adding atoms of materials that would increase the level of positive or negative charges, as described above. Fuller’s work stimulated interest in using the process of impurity diffusion to create cells that would turn solar energy into electricity. Fuller and Gerald Pearson made the first large-area p-n junction by using the diffusion process. Daryl Chapin, Fuller, and Pearson made a similar p-n junction very close to the surface of a silicon crystal, which was then exposed to sunlight.
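The doping rule in the paragraph above (five outer electrons yield surplus electrons, three yield surplus holes) is mechanical enough to state as a small lookup. The valence counts below are standard chemistry; the function itself is only an illustration, not part of the original article:

```python
# Outer-electron (valence) counts for silicon and common dopants.
VALENCE = {"Si": 4, "P": 5, "As": 5, "B": 3}

def semiconductor_type(dopant):
    """Classify silicon doped with `dopant` as n-type or p-type.

    One extra outer electron per dopant atom (relative to silicon's
    four) donates a mobile electron (n-type); one fewer leaves a
    mobile hole (p-type).
    """
    extra = VALENCE[dopant] - VALENCE["Si"]
    if extra > 0:
        return "n-type"
    if extra < 0:
        return "p-type"
    return "intrinsic"

print(semiconductor_type("As"))  # n-type
print(semiconductor_type("B"))   # p-type
```

The Bell cell described next pairs exactly these two cases: an arsenic-doped (n-type) wafer under a thin p-type surface layer.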
The cell was constructed by first making an ingot of arsenic-doped silicon that was then cut into very thin slices. Then a very thin layer of p-type silicon was formed over the surface of the n-type wafer, providing a p-n junction close to the surface of the cell. Once the cell cooled, the p-type layer was removed from the back of the cell, and lead wires were attached to the two surfaces. When light was absorbed at the p-n junction, electron-hole pairs were produced, and the electric field that was present at the junction forced the electrons to the n side and the holes to the p side.
The recombination of the electrons and holes takes place after the electrons have traveled through the external wires, where they do useful work. Chapin, Fuller, and Pearson announced in 1954 that the resulting photovoltaic cell was the most efficient (6 percent) means then available for converting sunlight into electricity.

The first experimental use of the silicon solar battery was in amplifiers for electrical telephone signals in rural areas. An array of 432 silicon cells, capable of supplying 9 watts of power in bright sunlight, was used to charge a nickel-cadmium storage battery. This, in turn, powered the amplifier for the telephone signal. The electrical energy derived from sunlight during the day was sufficient to keep the storage battery charged for continuous operation. The system was successfully tested for six months of continuous use in Americus, Georgia, in 1956. Although it was a technical success, the silicon solar cell was not ready to compete economically with conventional means of producing electrical power.
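The Americus figures invite a quick sanity check: 9 watts from 432 cells is roughly 21 milliwatts per cell, and at 6 percent efficiency about 0.15 square meters of illuminated cell area would be needed to deliver those 9 watts. A back-of-the-envelope version of that arithmetic follows; the 1,000 W/m² peak-sunlight figure is a conventional assumption, not a number from the text:

```python
# Back-of-the-envelope check on the 1956 Americus array figures.
cells = 432
array_watts = 9.0
efficiency = 0.06     # conversion efficiency announced in 1954
insolation = 1000.0   # W per square meter in bright sun (assumed)

watts_per_cell = array_watts / cells
# Illuminated area needed to collect 9 W of electricity at 6%:
area_m2 = array_watts / (efficiency * insolation)

print(round(watts_per_cell * 1000, 1))  # 20.8 mW per cell
print(round(area_m2, 2))                # 0.15 square meters
```

Tiny per-cell output like this is why the first deployments were low-power niches (telephone amplifiers, satellites) rather than grid electricity.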
Consequences

[Parabolic mirrors at a solar power plant. (PhotoDisc)]

One of the immediate applications of the solar cell was to supply electrical energy for Telstar satellites. Solar cells have since been used extensively on satellites to generate power. The success of the U.S. satellite program prompted serious suggestions in 1965 for the use of an orbiting power satellite. A large satellite could be placed into a synchronous orbit of the earth. It would collect sunlight, convert it to microwave radiation, and beam the energy to an Earth-based receiving station. Many technical problems must be solved, however, before this dream can become a reality.

Solar cells are used in small-scale applications such as power sources for calculators. Large-scale applications are still not economically competitive with more traditional means of generating electric power. The development of Third World countries, however, may provide the incentive to search for less expensive solar cells that can be used, for example, to provide energy in remote villages. As the standards of living in such areas improve, the need for electric power will grow. Solar cells may be able to provide the necessary energy while safeguarding the environment for future generations.
See also Alkaline storage battery; Fluorescent lighting; Fuel cell; Photoelectric cell; Solar thermal engine.

Further Reading

Green, Martin A. Power to the People: Sunlight to Electricity Using Solar Cells. Sydney, Australia: University of New South Wales Press, 2000.

_____. “Photovoltaics: Technology Overview.” Energy Policy 28, no. 14 (November, 2000).

Perlin, John. From Space to Earth: The Story of Solar Electricity. Ann Arbor, Mich.: Aatec Publications, 1999.
Plastic

The invention: The first totally synthetic thermosetting plastic, which paved the way for modern materials science.

The people behind the invention:
John Wesley Hyatt (1837-1920), an American inventor
Leo Hendrik Baekeland (1863-1944), a Belgian-born chemist, consultant, and inventor
Christian Friedrich Schönbein (1799-1868), a German chemist who produced guncotton, the first artificial polymer
Adolf von Baeyer (1835-1917), a German chemist
Exploding Billiard Balls

In the 1860’s, the firm of Phelan and Collender offered a prize of ten thousand dollars to anyone producing a substance that could serve as an inexpensive substitute for ivory, which was somewhat difficult to obtain in large quantities at reasonable prices. Christian Friedrich Schönbein had laid the groundwork for a breakthrough in the quest for such a material in 1846 with his serendipitous discovery of nitrocellulose, more commonly known as “guncotton,” which was produced by the reaction of nitric acid with cotton.

An American inventor, John Wesley Hyatt, while looking for a substitute for ivory as a material for making billiard balls, discovered that the addition of camphor to nitrocellulose under certain conditions led to the formation of a white material that could be molded and machined. He dubbed this substance “celluloid,” and this product is now acknowledged as the first synthetic plastic. Celluloid won the prize for Hyatt, and he promptly set out to exploit his product. Celluloid was used to make baby rattles, collars, dentures, and other manufactured goods.

As a billiard ball substitute, however, it was not really adequate, for various reasons. First, it is thermoplastic—in other words, a material that softens when heated and can then be easily deformed or molded. It was thus too soft for billiard ball use. Second, it was highly flammable, hardly a desirable characteristic. A widely circulated, perhaps apocryphal, story claimed that celluloid billiard balls detonated when they collided.
Truly Artificial

Since celluloid can be viewed as a derivative of a natural product, it is not a completely synthetic substance. Leo Hendrik Baekeland has the distinction of being the first to produce a completely artificial plastic. Born in Ghent, Belgium, Baekeland emigrated to the United States in 1889 to pursue applied research, a pursuit not encouraged in Europe at the time. One area in which Baekeland hoped to make an inroad was the development of an artificial shellac. Shellac at the time was a natural and therefore expensive product, and there would be a wide market for any reasonably priced substitute. Baekeland’s research scheme, begun in 1905, focused on finding a solvent that could dissolve the resinous products of a certain class of organic chemical reaction.

The particular resins he used had been reported in the mid-1800’s by the German chemist Adolf von Baeyer. These resins were produced by the condensation reaction of formaldehyde with a class of chemicals called “phenols.” Baeyer found that frequently the major product of such a reaction was a gummy residue that was virtually impossible to remove from glassware. Baekeland focused on finding a material that could dissolve these resinous products. Such a substance would prove to be the shellac substitute he sought. These efforts proved frustrating, as an adequate solvent for these resins could not be found. After repeated attempts to dissolve these residues, Baekeland shifted the orientation of his work. Abandoning the quest to dissolve the resin, he set about trying to develop a resin that would be impervious to any solvent, reasoning that such a material would have useful applications.

Baekeland’s experiments involved the manipulation of phenol-formaldehyde reactions through precise control of the temperature and pressure at which the reactions were performed. Many of these experiments were performed in a 1.5-meter-tall reactor vessel, which he called a “Bakelizer.” In 1907, these meticulous experiments paid off when Baekeland opened the reactor to reveal a clear solid that was heat resistant, nonconducting, and machinable. Experimentation proved that the material could be dyed practically any color in the manufacturing process, with no effect on the physical properties of the solid.
Baekeland filed a patent for this new material in 1907. (This patent was filed one day before that filed by James Swinburne, a British electrical engineer who had developed a similar material in his quest to produce an insulating material.) Baekeland dubbed his new creation “Bakelite” and announced its existence to the scientific community on February 15, 1909, at the annual meeting of the American Chemical Society. Among its first uses was the manufacture of ignition parts for the rapidly growing automobile industry.

John Wesley Hyatt

John Wesley Hyatt’s parents wanted him to be a minister, a step up in status from his father’s job as a blacksmith. Born in 1837 in Starkey, New York, Hyatt received the standard primary education and then obediently went to a seminary as a teenager. However, his mind was on making things rather than spirituality; he was especially ingenious with machinery. The seminary held him only a year. He became a printer’s apprentice at sixteen and later set up shop in Albany.

His mind ranged beyond printing, too. He invented a method to make emery wheels for sharpening cutlery, which brought him his first patent at twenty-four. In an attempt to win the Phelan and Collender Company contest for artificial billiard balls, he developed several moldable compounds from wood pulp. He started the Embossing Company in Albany to make chess and checker pieces from the compounds and put his youngest brother in charge. With another brother he experimented with guncotton until he invented celluloid. In 1872, he and his brothers started the Celluloid Manufacturing Company. They designed new milling machinery for the new substance and turned out billiard balls, bowling balls, golf club heads, and other sporting goods, but then branched out into domestic items, such as boxes, handles, combs, and even collars. Celluloid became the basic material of photographic film and, later, motion picture film.

Meanwhile, Hyatt continued to invent—machinery for cutting and molding plastic and rolling steel, a water purification system, a method for squeezing juice from sugar cane, an industrial sewing machine, roller bearings for heavy machinery—registering more than 250 patents, which is impressive for a person with no formal scientific or technical training. The Society of Chemical Industry awarded Hyatt its prestigious Perkin Medal in 1914. Hyatt died in 1920.
Impact<br />
Bakelite proved to be the first of a class of compounds called<br />
“synthetic polymers.” Polymers are long chains of molecules chemically<br />
linked together. There are many natural polymers, such as cotton.<br />
The discovery of synthetic polymers led to vigorous research<br />
into the field and attempts to produce other useful artificial materials.<br />
These efforts met with a fair amount of success; by 1940, a multitude<br />
of new products unlike anything found in nature had been discovered.<br />
These included such items as polystyrene and low-density<br />
polyethylene. In addition, artificial substitutes for natural polymers,<br />
such as rubber, were a goal of polymer chemists. One of the results<br />
of this research was the development of neoprene.<br />
Industries also were interested in developing synthetic polymers<br />
to produce materials that could be used in place of natural fibers<br />
such as cotton. The most dramatic success in this area was achieved<br />
by Du Pont chemist Wallace Carothers, who had also developed<br />
neoprene. Carothers focused his energies on forming a synthetic fiber<br />
similar to silk, resulting in the synthesis of nylon.<br />
Synthetic polymers constitute one branch of a broad area known<br />
as “materials science.” Novel, useful materials produced synthetically<br />
from a variety of natural materials have allowed for tremendous<br />
progress in many areas. Examples of these new materials include<br />
high-temperature superconductors, composites, ceramics, and<br />
plastics. These materials are used to make the structural components<br />
of aircraft, artificial limbs and implants, tennis rackets, garbage<br />
bags, and many other common objects.<br />
See also Buna rubber; Contact lenses; Laminated glass; Neoprene;<br />
Nylon; Orlon; Polyester; Polyethylene; Polystyrene; Pyrex<br />
glass; Silicones; Teflon; Velcro.
Further Reading<br />
Amato, Ivan. “Chemist: Leo Baekeland.” Time 153, no. 12 (March 29,<br />
1999).<br />
Clark, Tessa. Bakelite Style. Edison, N.J.: Chartwell Books, 1997.<br />
Fenichell, Stephen. Plastic: The Making of a Synthetic Century. New<br />
York: HarperBusiness, 1997.<br />
Sparke, Penny. The Plastics Age: From Bakelite to Beanbags and Beyond.<br />
Woodstock, N.Y.: Overlook Press, 1990.
Pocket calculator<br />
The invention: The first portable and reliable hand-held calculator<br />
capable of performing a wide range of mathematical computations.<br />
The people behind the invention:<br />
Jack St. Clair Kilby (1923- ), the inventor of the<br />
semiconductor microchip<br />
Jerry D. Merryman (1932- ), the first project manager of the<br />
team that invented the first portable calculator<br />
James Van Tassel (1929- ), an inventor and expert on<br />
semiconductor components<br />
An Ancient Dream<br />
In the earliest accounts of civilizations that developed number<br />
systems to perform mathematical calculations, evidence has been<br />
found of efforts to fashion a device that would permit people to perform<br />
these calculations with reduced effort and increased accuracy.<br />
The ancient Babylonians are regarded as the inventors of the first<br />
abacus (or counting board, from the Greek abakos, meaning “board”<br />
or “tablet”). It was originally little more than a row of shallow<br />
grooves with pebbles or bone fragments as counters.<br />
The next step in mechanical calculation did not occur until the<br />
early seventeenth century. John Napier, a Scottish baron and mathematician,<br />
originated the concept of “logarithms” as a mathematical<br />
device to make calculating easier. This concept led to the first slide<br />
rule, created by the English mathematician William Oughtred of<br />
Cambridge. Oughtred’s invention consisted of two identical, circular<br />
logarithmic scales held together and adjusted by hand. The slide<br />
rule made it possible to perform rough but rapid multiplication and<br />
division. Oughtred’s invention in 1623 was paralleled by the work<br />
of a German professor, Wilhelm Schickard, who built a “calculating<br />
clock” the same year. Because the record of Schickard’s work was<br />
lost until 1935, however, the French mathematician Blaise Pascal<br />
was generally thought to have built the first mechanical calculator,<br />
the “Pascaline,” in 1645.
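The principle that Napier’s logarithms gave the slide rule can be sketched in a few lines of modern code (an illustration, not period mathematics): adding logarithms multiplies the underlying numbers, which is why sliding two log-ruled scales past each other performs multiplication.<br />

```python
import math

# A slide rule multiplies by adding lengths proportional to logarithms,
# exploiting the identity log(a * b) = log(a) + log(b).
def slide_rule_multiply(a, b):
    return math.exp(math.log(a) + math.log(b))

def slide_rule_divide(a, b):
    # Division subtracts logarithmic lengths: log(a / b) = log(a) - log(b).
    return math.exp(math.log(a) - math.log(b))
```

On a physical rule the result is only as precise as the eye can read the scale, which is why the text calls its arithmetic “rough but rapid”; numerically, the exponential of the summed logarithms recovers the product up to rounding.<br />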
Other versions of mechanical calculators were built in later centuries,<br />
but none was rapid or compact enough to be useful beyond specific<br />
laboratory or mercantile situations. Meanwhile, the dream of<br />
such a machine continued to fascinate scientists and mathematicians.<br />
The development that made a fast, small calculator possible did<br />
not occur until the middle of the twentieth century, when Jack St.<br />
Clair Kilby of Texas Instruments invented the silicon microchip (or<br />
integrated circuit) in 1958. An integrated circuit is a tiny complex of<br />
electronic components and their connections that is produced in or<br />
on a small slice of semiconductor material such as silicon. Patrick<br />
Haggerty, then president of Texas Instruments, wrote in 1964 that<br />
“integrated electronics” would “remove limitations” that determined<br />
the size of instruments, and he recognized that Kilby’s invention<br />
of the microchip made possible the creation of a portable,<br />
hand-held calculator. He challenged Kilby to put together a team to<br />
design a calculator that would be as powerful as the large, electromechanical<br />
models in use at the time but small enough to fit into a<br />
coat pocket. Working with Jerry D. Merryman and James Van Tassel,<br />
Kilby began to work on the project in October, 1965.<br />
An Amazing Reality<br />
At the outset, there were basically five elements that had to be designed.<br />
These were the logic designs that enabled the machine to<br />
perform the actual calculations, the keyboard or keypad, the power<br />
supply, the readout display, and the outer case. Kilby recalls that<br />
once a particular size for the unit had been determined (something<br />
that could be easily held in the hand), project manager Merryman<br />
was able to develop the initial logic designs in three days. Van Tassel<br />
contributed his experience with semiconductor components to solve<br />
the problems of packaging the integrated circuit. The display required<br />
a thermal printer that would work on a low power source.<br />
The machine also had to include a microencapsulated ink source so<br />
that the paper readouts could be imprinted clearly. Then the paper<br />
had to be advanced for the next calculation. Kilby, Merryman, and<br />
Van Tassel filed for a patent on their work in 1967.<br />
Although this relatively small, working prototype of the minicalculator<br />
made obsolete the transistor-operated design of the much
Jerry D. Merryman<br />
In 1965 Texas Instruments assigned two engineers to join<br />
Jack St. Clair Kilby, inventor of the integrated circuit, in an effort<br />
to produce a pocket-sized calculator: James H. Van Tassel, a<br />
specialist in semiconductor components, and Jerry D. Merryman,<br />
a versatile engineer who became the project manager. It<br />
took Merryman only seventy-two hours to work out the logic<br />
design for the calculator, and the team set about designing, fabricating,<br />
and testing its components. After two years, it had a<br />
prototype, the first pocket calculator. However, it required a<br />
large, strong pocket. It measured 4.25 inches by 6.12 inches by<br />
1.76 inches and weighed 2.8 pounds. Kilby, Van Tassel, and<br />
Merryman filed for a patent and received it in 1975. In 1989 the team<br />
was jointly presented the Holley Medal for the achievement<br />
by the American Society of Mechanical Engineers. By then<br />
Merryman held sixty other patents, foreign and domestic.<br />
Born in 1932, Merryman grew up in Hearne, Texas, and after<br />
high school went to Texas A&M University. He never graduated,<br />
but he did become extraordinarily adept at electrical engineering,<br />
teaching himself what he needed to know while doing<br />
small jobs on his own. He was said to have an almost intuitive<br />
sense for circuitry. After he joined Texas Instruments in 1963<br />
he quickly earned a reputation for solving complex problems,<br />
one of the reasons he was made part of the hand calculator<br />
team. He became a Texas Instruments Fellow in 1975 and helped<br />
design semiconductor manufacturing equipment, particularly<br />
by adapting high-speed lasers for use in extremely fine optical<br />
lithography. He also invented thermal data systems.<br />
Along with Kilby and Van Tassel, Merryman received the<br />
George R. Stibitz Computer Pioneer Award in 1997.<br />
larger desk calculators, the cost of setting up new production lines<br />
and the need to develop a market made it impractical to begin production<br />
immediately. Instead, Texas Instruments and Canon of Tokyo<br />
formed a joint venture, which led to the introduction of the<br />
Canon Pocketronic Printing Calculator in Japan in April, 1970, and<br />
in the United States that fall. Built entirely of Texas Instruments<br />
parts, this four-function machine with three metal oxide semiconductor<br />
(MOS) circuits was similar to the prototype designed in 1967.<br />
True pocket calculators fit as easily in shirt pockets as pencils and pens. (PhotoDisc)<br />
The calculator was priced at $400, weighed 740 grams, and measured<br />
101 millimeters wide by 208 millimeters long by 49 millimeters<br />
high. It could perform twelve-digit calculations and worked up<br />
to four decimal places.<br />
In September, 1972, Texas Instruments put the Datamath, its first<br />
commercial hand-held calculator using a single MOS chip, on the<br />
retail market. It weighed 340 grams and measured 75 millimeters<br />
wide by 137 millimeters long by 42 millimeters high. The Datamath<br />
was priced at $120 and included a full-floating decimal point that<br />
could appear anywhere among the numbers on its eight-digit, light-emitting<br />
diode (LED) display. It came with a rechargeable battery<br />
that could also be connected to a standard alternating current (AC)<br />
outlet. The Datamath also had the ability to conserve power while<br />
awaiting the next keyboard entry. Finally, the machine had a limited<br />
amount of built-in memory storage.
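From the dimensions and weights quoted above, the single-chip Datamath was well under half the size and weight of the Pocketronic. A back-of-the-envelope check (the variable names are ours; the figures come from the text):<br />

```python
# Dimensions in millimeters and mass in grams, as quoted in the text.
pocketronic = {"w_mm": 101, "l_mm": 208, "h_mm": 49, "mass_g": 740}
datamath = {"w_mm": 75, "l_mm": 137, "h_mm": 42, "mass_g": 340}

def volume_cm3(d):
    # Rectangular-box volume, converted from cubic millimeters to cubic centimeters.
    return d["w_mm"] * d["l_mm"] * d["h_mm"] / 1000

volume_ratio = volume_cm3(datamath) / volume_cm3(pocketronic)  # about 0.42
mass_ratio = datamath["mass_g"] / pocketronic["mass_g"]        # about 0.46
```

In rough terms, two years of integration shrank the machine to about two-fifths of its predecessor’s bulk while cutting its price from $400 to $120.<br />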
Consequences<br />
Prior to 1970, most calculating machines were of such dimensions<br />
that professional mathematicians and engineers were either tied to<br />
their desks or else carried slide rules whenever they had to be away<br />
from their offices. By 1975, Keuffel & Esser, the largest slide rule manufacturer<br />
in the world, was producing its last model, and mechanical<br />
engineers found that problems that had previously taken a week<br />
could now be solved in an hour using the new machines.<br />
That year, the Smithsonian Institution accepted the world’s first<br />
miniature electronic calculator for its permanent collection, noting<br />
that it was the forerunner of more than one hundred million pocket<br />
calculators then in use. By the 1990’s, more than fifty million portable<br />
units were being sold each year in the United States. In general,<br />
the electronic pocket calculator revolutionized the way in which<br />
people related to the world of numbers.<br />
Moreover, the portability of the hand-held calculator made it<br />
ideal for use in remote locations, such as those a petroleum engineer<br />
might have to explore. Its rapidity and reliability made it an indispensable<br />
instrument for construction engineers, architects, and real<br />
estate agents, who could figure the volume of a room and other<br />
building dimensions almost instantly and then produce cost estimates<br />
almost on the spot.<br />
See also Cell phone; Differential analyzer; Mark I calculator; Personal<br />
computer; Transistor radio; Walkman cassette player.<br />
Further Reading<br />
Ball, Guy. Collector’s Guide to Pocket Calculators. Tustin, Calif.: Wilson/Barnett<br />
Publishing, 1996.<br />
Clayton, Mark. “Calculators in Class: Freedom from Scratch Paper<br />
or ‘Crutch’?” Christian Science Monitor (May 23, 2000).<br />
Lederer, Victor. “Calculators: The Applications Are Unlimited.” Administrative<br />
Management 38 (July, 1977).<br />
Lee, Jennifer. “Throw Teachers a New Curve.” New York Times (September<br />
2, 1999).<br />
“The Semiconductor Becomes a New Marketing Force.” Business<br />
Week (August 24, 1974).
Polio vaccine (Sabin)<br />
The invention: Albert Bruce Sabin’s vaccine was the first to stimulate<br />
long-lasting immunity against polio without the risk of causing<br />
paralytic disease.<br />
The people behind the invention:<br />
Albert Bruce Sabin (1906-1993), a Russian-born American<br />
virologist<br />
Jonas Edward Salk (1914-1995), an American physician,<br />
immunologist, and virologist<br />
Renato Dulbecco (1914- ), an Italian-born American<br />
virologist who shared the 1975 Nobel Prize in Physiology or<br />
Medicine<br />
The Search for a Living Vaccine<br />
Almost a century ago, the first major poliomyelitis (polio) epidemic<br />
was recorded. Thereafter, epidemics of increasing frequency<br />
and severity struck the industrialized world. By the 1950’s, as many<br />
as sixteen thousand individuals, most of them children, were being<br />
paralyzed by the disease each year.<br />
Poliovirus enters the body through ingestion by the mouth. It<br />
replicates in the throat and the intestines and establishes an infection<br />
that normally is harmless. From there, the virus can enter the<br />
bloodstream. In some individuals it makes its way to the nervous<br />
system, where it attacks and destroys nerve cells crucial for muscle<br />
movement. The presence of antibodies in the bloodstream will prevent<br />
the virus from reaching the nervous system and causing paralysis.<br />
Thus, the goal of vaccination is to administer poliovirus that<br />
has been altered so that it cannot cause disease but nevertheless will<br />
stimulate the production of antibodies to fight the disease.<br />
Albert Bruce Sabin received his medical degree from New York<br />
University College of Medicine in 1931. Polio was epidemic in 1931,<br />
and for Sabin polio research became a lifelong interest. In 1936,<br />
while working at the Rockefeller Institute, Sabin and Peter Olitsky<br />
successfully grew poliovirus using tissues cultured in vitro. Tissue<br />
culture proved to be an excellent source of virus. Jonas Edward Salk
soon developed an inactive polio vaccine consisting of virus grown<br />
from tissue culture that had been inactivated (killed) by chemical<br />
treatment. This vaccine became available for general use in 1955, almost<br />
fifty years after poliovirus had first been identified.<br />
Sabin, however, was not convinced that an inactivated virus vaccine<br />
was adequate. He believed that it would provide only temporary<br />
protection and that individuals would have to be vaccinated<br />
repeatedly in order to maintain protective levels of antibodies.<br />
Knowing that natural infection with poliovirus induced lifelong immunity,<br />
Sabin believed that a vaccine consisting of a living virus<br />
was necessary to produce long-lasting immunity. Also, unlike the<br />
inactive vaccine, which is injected, a living virus (weakened so that<br />
it would not cause disease) could be taken orally and would invade<br />
the body and replicate of its own accord.<br />
Sabin was not alone in his beliefs. Hilary Koprowski and Harold<br />
Cox also favored a living virus vaccine and had, in fact, begun<br />
searching for weakened strains of poliovirus as early as 1946 by repeatedly<br />
growing the virus in rodents. When Sabin began his search<br />
for weakened virus strains in 1953, a fiercely competitive contest ensued<br />
to achieve an acceptable live virus vaccine.<br />
Rare, Mutant Polioviruses<br />
Sabin’s approach was based on the principle that, as viruses acquire<br />
the ability to replicate in a foreign species or tissue (for example,<br />
in mice), they become less able to replicate in humans and thus<br />
less able to cause disease. Sabin used tissue culture techniques to<br />
isolate those polioviruses that grew most rapidly in monkey kidney<br />
cells. He then employed a technique developed by Renato Dulbecco<br />
that allowed him to recover individual virus particles. The recovered<br />
viruses were injected directly into the brains or spinal cords of<br />
monkeys in order to identify those viruses that did not damage the<br />
nervous system. These meticulously performed experiments, which<br />
involved approximately nine thousand monkeys and more than<br />
one hundred chimpanzees, finally enabled Sabin to isolate rare mutant<br />
polioviruses that would replicate in the intestinal tract but not<br />
in the nervous systems of chimpanzees or, it was hoped, of humans.<br />
In addition, the weakened virus strains were shown to stimulate antibodies<br />
when they were fed to chimpanzees; this was a critical attribute<br />
for a vaccine strain.<br />
By 1957, Sabin had identified three strains of attenuated viruses that<br />
were ready for small experimental trials in humans. A small group of<br />
volunteers, including Sabin’s own wife and children, were fed the vaccine<br />
with promising results. Sabin then gave his vaccine to virologists<br />
in the Soviet Union, Eastern Europe, Mexico, and Holland for further<br />
testing. Combined with smaller studies in the United States, these trials<br />
established the effectiveness and safety of his oral vaccine.<br />
During this period, the strains developed by Cox and by Koprowski<br />
were also being tested in millions of persons in field trials<br />
around the world. In 1958, two laboratories independently compared<br />
the vaccine strains and concluded that the Sabin strains were<br />
superior. In 1962, after four years of deliberation by the U.S. Public<br />
Health Service, all three of Sabin’s vaccine strains were licensed for<br />
general use.<br />
Albert Sabin<br />
Born in Bialystok, Poland, in 1906, Albert Bruce Sabin emigrated<br />
with his family to the United States in 1921. Like Jonas<br />
Salk—the other great inventor of a polio vaccine—Sabin earned<br />
his medical degree at New York University (1931), where he began<br />
his research on polio.<br />
While in the U.S. Army Medical Corps during World War II,<br />
he helped produce vaccines for dengue fever and Japanese encephalitis.<br />
After the war he returned to his professorship at the<br />
University of Cincinnati College of Medicine and Children’s<br />
Hospital Research Foundation. The polio vaccine he developed<br />
there saved millions of children worldwide from paralytic polio.<br />
Many of these lives were doubtless saved because of his refusal<br />
to patent the vaccine, thereby making it simpler to produce<br />
and distribute and less expensive to administer.<br />
Sabin’s work brought him more than forty honorary degrees<br />
from American and foreign universities and medals from the<br />
governments of the United States and Soviet Union. He was<br />
president of the Weizmann Institute of Science after 1970 and<br />
later became a professor of biomedicine at the Medical University<br />
of South Carolina. He died in 1993.
Consequences<br />
The development of polio vaccines ranks as one of the triumphs of<br />
modern medicine. In the early 1950’s, paralytic polio struck 13,500<br />
out of every 100 million Americans. The use of the Salk vaccine<br />
greatly reduced the incidence of polio, but outbreaks of paralytic disease<br />
continued to occur: Fifty-seven hundred cases were reported in<br />
1959 and twenty-five hundred cases in 1960. In 1962, the oral Sabin<br />
vaccine became the vaccine of choice in the United States. Since its<br />
widespread use, the number of paralytic cases in the United States<br />
has dropped precipitously, eventually averaging fewer than ten per<br />
year. Worldwide, the oral vaccine prevented an estimated 5 million<br />
cases of paralytic poliomyelitis between 1970 <strong>and</strong> 1990.<br />
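The incidence figures above can be restated as rates with simple arithmetic (the variable names are ours; the numbers come from the text):<br />

```python
# Figures quoted in the text, restated as rates for comparison.
cases_per_100_million = 13_500                   # early 1950's U.S. paralytic polio
rate_per_100k = cases_per_100_million / 1_000    # 13.5 cases per 100,000 people

prevented_1970_to_1990 = 5_000_000               # estimated cases prevented worldwide
years = 1990 - 1970
prevented_per_year = prevented_1970_to_1990 / years  # 250,000 per year
```

Roughly 250,000 prevented cases per year is the same order as the annual toll the article cites below for populations that receive neither vaccine.<br />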
The oral vaccine is not without problems. Occasionally, the living<br />
virus mutates to a disease-causing (virulent) form as it multiplies in<br />
the vaccinated person. When this occurs, the person may develop<br />
paralytic poliomyelitis. The inactive vaccine, in contrast, cannot<br />
mutate to a virulent form. Ironically, nearly every case of polio<br />
in the United States is caused by the vaccine itself.<br />
In the developing countries of the world, the issue of vaccination is<br />
more pressing. Millions receive neither form of polio vaccine; as a result,<br />
at least 250,000 individuals are paralyzed or die each year. The World<br />
Health Organization <strong>and</strong> other health providers continue to work toward<br />
the very practical goal of completely eradicating this disease.<br />
See also Antibacterial drugs; Birth control pill; Iron lung; Penicillin;<br />
Polio vaccine (Salk); Reserpine; Salvarsan; Tuberculosis vaccine;<br />
Typhus vaccine; Yellow fever vaccine.<br />
Further Reading<br />
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History.<br />
San Mateo, Calif.: Bluewood Books, 1998.<br />
Grady, Denise. “As Polio Fades, Dr. Salk’s Vaccine Re-emerges.”<br />
New York Times (December 14, 1999).<br />
Plotkin, Stanley A., and Edward A. Mortimer. Vaccines. 2d ed. Philadelphia:<br />
W. B. Saunders, 1994.<br />
Seavey, Nina Gilden, Jane S. Smith, and Paul Wagner. A Paralyzing<br />
Fear: The Triumph over Polio in America. New York: TV Books, 1998.
Polio vaccine (Salk)<br />
The invention: Jonas Salk’s vaccine was the first that prevented polio,<br />
resulting in the virtual eradication of crippling polio epidemics.<br />
The people behind the invention:<br />
Jonas Edward Salk (1914-1995), an American physician,<br />
immunologist, and virologist<br />
Thomas Francis, Jr. (1900-1969), an American microbiologist<br />
Cause for Celebration<br />
Poliomyelitis (polio) is an infectious disease that can adversely<br />
affect the central nervous system, causing paralysis and great muscle<br />
wasting due to the destruction of motor neurons (nerve cells) in<br />
the spinal cord. Epidemiologists believe that polio has existed since<br />
ancient times, and evidence of its presence in Egypt, circa 1400 b.c.e.,<br />
has been presented. Fortunately, the Salk vaccine and the later vaccine<br />
developed by the American virologist Albert Bruce Sabin can<br />
prevent the disease. Consequently, except in underdeveloped nations,<br />
polio is rare. Moreover, although there is still no cure once a person<br />
develops polio, a large number of polio cases end without<br />
paralysis or any observable effect.<br />
Polio is often called “infantile paralysis.” This results from the<br />
fact that it is seen most often in children. It is caused by a virus and<br />
begins with body aches, a stiff neck, and other symptoms that are<br />
very similar to those of a severe case of influenza. In some cases,<br />
within two weeks after its onset, the course of polio begins to lead to<br />
muscle wasting <strong>and</strong> paralysis.<br />
On April 12, 1955, the world was thrilled with the announcement<br />
that Jonas Edward Salk’s poliomyelitis vaccine could prevent the<br />
disease. It was reported that schools were closed in celebration of<br />
this event. Salk, the son of a New York City garment worker, has<br />
since become one of the most well-known and publicly venerated<br />
medical scientists in the world.<br />
Vaccination is a method of disease prevention by immunization,<br />
whereby a small amount of virus is injected into the body to prevent
a viral disease. The process depends on the production of antibodies<br />
(body proteins that are specifically coded to prevent the disease<br />
spread by the virus) in response to the vaccination. Vaccines are<br />
made of weakened or killed virus preparations.<br />
Electrifying Results<br />
Jonas Salk<br />
The son of a garment industry worker, Jonas Edward Salk<br />
was born in New York City in 1914. He worked his way through<br />
school, graduating from New York University School of Medicine<br />
in 1938. Afterward he joined microbiologist Thomas Francis,<br />
Jr., in developing a vaccine for influenza.<br />
In 1942, Salk began a research fellowship at the University of<br />
Michigan and subsequently joined the epidemiology faculty.<br />
He moved to the University of Pittsburgh in 1947, directing its<br />
Viral Research Lab, and while there developed his vaccine for<br />
poliomyelitis. The discovery catapulted Salk into worldwide<br />
fame, but he was a controversial figure among scientists.<br />
Although Salk received the Presidential Medal of Freedom,<br />
a Congressional gold medal, and the Nehru Award for International<br />
Understanding, he was turned down for membership in<br />
the National Academy of Sciences. In 1963 he opened the Salk<br />
Institute for Biological Studies in La Jolla, California. Well<br />
aware of his reputation among medical researchers, he once<br />
joked, “I couldn’t possibly have become a member of this institute<br />
if I hadn’t founded it myself.” He died in 1995.<br />
The Salk vaccine was produced in two steps. First, polio viruses<br />
were grown in monkey kidney tissue cultures. These polio viruses<br />
were then killed by treatment with the right amount of formaldehyde<br />
to produce an effective vaccine. The killed-virus polio vaccine<br />
was found to be safe and to cause the production of antibodies<br />
against the disease, a sign that it should prevent polio.<br />
In early 1952, Salk tested a prototype vaccine against Type I polio virus<br />
on children who were afflicted with the disease and were thus<br />
deemed safe from reinfection. This test showed that the vaccination
greatly elevated the concentration of polio antibodies in these children.<br />
On July 2, 1952, encouraged by these results, Salk vaccinated forty-three<br />
children who had never had polio with vaccines against each of<br />
the three virus types (Type I, Type II, and Type III). All inoculated children<br />
produced high levels of polio antibodies, and none of them developed<br />
the disease. Consequently, the vaccine appeared to be both safe in<br />
humans and likely to become an effective public health tool.<br />
In 1953, Salk reported these findings in the Journal of the American<br />
Medical Association. In April, 1954, nationwide testing of the Salk<br />
vaccine began, via the mass vaccination of American schoolchildren.<br />
The results of the trial were electrifying. The vaccine was safe,<br />
and it greatly reduced the incidence of the disease. In fact, it was estimated<br />
that Salk’s vaccine gave schoolchildren 60 to 90 percent protection<br />
against polio.<br />
Salk was instantly praised. Then, however, several cases of polio<br />
occurred as a consequence of the vaccine. Its use was immediately<br />
suspended by the U.S. surgeon general, pending a complete examination.<br />
Soon, it was evident that all the cases of vaccine-derived polio<br />
were attributable to faulty batches of vaccine made by one<br />
pharmaceutical company. Salk and his associates were in no way responsible<br />
for the problem. Appropriate steps were taken to ensure<br />
that such an error would not be repeated, and the Salk vaccine was<br />
again released for use by the public.<br />
Consequences<br />
The first major polio epidemic in the United States was reported<br />
on June 27, 1916, when one hundred residents of Brooklyn,<br />
New York, were afflicted. Soon, the disease had spread. By August,<br />
twenty-seven thousand people had developed polio. Nearly seven<br />
thousand afflicted people died, and many survivors of the epidemic<br />
were permanently paralyzed to varying extents. In New York City<br />
alone, nine thousand people developed polio and two thousand<br />
died. Chaos reigned as large numbers of terrified people attempted<br />
to leave and were turned back by police. Smaller polio epidemics<br />
occurred throughout the nation in the years that followed (for example,<br />
the Catawba County, North Carolina, epidemic of 1944). A<br />
particularly horrible aspect of polio was the fact that more than 70
percent of polio victims were small children. Adults caught it too;<br />
the most famous of these adult polio victims was U.S. President<br />
Franklin D. Roosevelt. There was no cure for the disease. The best<br />
available treatment was physical therapy.<br />
As of August, 1955, more than four million polio vaccines had<br />
been given. The Salk vaccine appeared to work very well. There were<br />
only half as many reported cases of polio in 1956 as there had been in<br />
1955. It appeared that polio was being conquered. By 1957, the number<br />
of cases reported nationwide had fallen below six thousand.<br />
Thus, in two years, its incidence had dropped by about 80 percent.<br />
This was very exciting, and soon other countries clamored for the<br />
vaccine. By 1959, ninety other countries had been supplied with the<br />
Salk vaccine. Worldwide, the disease was being eradicated. The introduction<br />
of an oral polio vaccine by Albert Bruce Sabin supported<br />
this progress.<br />
Salk received many honors, including honorary degrees from<br />
American and foreign universities, the Lasker Award, a Congressional<br />
Medal for Distinguished Civilian Service, and membership in<br />
the French Legion of Honor, yet he received neither the Nobel Prize<br />
nor membership in the American National Academy of Sciences. It<br />
is believed by many that this neglect was a result of the personal antagonism<br />
of some of the members of the scientific community who<br />
strongly disagreed with his theories of viral inactivation.<br />
See also Antibacterial drugs; Birth control pill; Iron lung; Penicillin;<br />
Polio vaccine (Sabin); Reserpine; Salvarsan; Tuberculosis vaccine;<br />
Typhus vaccine; Yellow fever vaccine.<br />
Further Reading<br />
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History.<br />
San Mateo, Calif.: Bluewood Books, 1998.<br />
Plotkin, Stanley A., and Edward A. Mortimer. Vaccines. 2d ed. Philadelphia:<br />
W. B. Saunders, 1994.<br />
Seavey, Nina Gilden, Jane S. Smith, and Paul Wagner. A Paralyzing<br />
Fear: The Triumph over Polio in America. New York: TV Books, 1998.<br />
Smith, Jane S. Patenting the Sun: Polio and the Salk Vaccine. New York:<br />
Anchor/Doubleday, 1991.
Polyester<br />
The invention: A synthetic fibrous polymer used especially in fabrics.<br />
The people behind the invention:<br />
Wallace H. Carothers (1896-1937), an American polymer<br />
chemist<br />
Hilaire de Chardonnet (1839-1924), a French polymer chemist<br />
John R. Whinfield (1901-1966), a British polymer chemist<br />
A Story About Threads<br />
Human beings have worn clothing since prehistoric times. At<br />
first, clothing consisted of animal skins sewed together. Later, people<br />
learned to spin threads from the fibers in plant or animal materials<br />
and to weave fabrics from the threads (for example, wool, silk,<br />
and cotton). By the end of the nineteenth century, efforts were begun<br />
to produce synthetic fibers for use in fabrics. These efforts were<br />
motivated by two concerns. First, it seemed likely that natural materials<br />
would become too scarce to meet the needs of a rapidly increasing<br />
world population. Second, a series of natural disasters—<br />
affecting the silk industry in particular—had demonstrated the<br />
problems of relying solely on natural fibers for fabrics.<br />
The first efforts to develop synthetic fabric focused on artificial<br />
silk, because of the high cost of silk, its beauty, and the fact that silk<br />
production had been interrupted by natural disasters more often<br />
than the production of any other material. The first synthetic silk<br />
was rayon, which was originally patented by a French count,<br />
Hilaire de Chardonnet, and was later much improved by other<br />
polymer chemists. Rayon is a semisynthetic material that is made<br />
from wood pulp or cotton.<br />
Because there was a need for synthetic fabrics whose manufacture<br />
did not require natural materials, other avenues were explored. One<br />
of these avenues led to the development of totally synthetic polyester<br />
fibers. In the United States, the best-known of these is Dacron, which<br />
is manufactured by E. I. Du Pont de Nemours. Easily made into
threads, Dacron is widely used in clothing. It is also used to make audiotapes<br />
and videotapes and in automobile and boat bodies.<br />
From Polymers to Polyester<br />
Dacron belongs to a group of chemicals known as “synthetic<br />
polymers.” All polymers are made of giant molecules, each of<br />
which is composed of a large number of simpler molecules (“monomers”)<br />
that have been linked, chemically, to form long strings. Efforts<br />
by industrial chemists to prepare synthetic polymers developed<br />
in the twentieth century after it was discovered that many<br />
natural building materials and fabrics (such as rubber, wood, wool,<br />
silk, and cotton) were polymers, and as the ways in which monomers<br />
could be joined to make polymers became better understood.<br />
One group of chemists who studied polymers sought to make inexpensive<br />
synthetic fibers to replace expensive silk and wool. Their efforts<br />
led to the development of well-known synthetic fibers such as<br />
nylon and Dacron.<br />
Wallace H. Carothers of Du Pont pioneered the development of<br />
polyamide polymers, collectively called “nylon,” and was the first<br />
researcher to attempt to make polyester. It was British polymer<br />
chemists John R. Whinfield and J. T. Dickson of Calico Printers Association<br />
(CPA) Limited, however, who in 1941 perfected and patented<br />
polyester that could be used to manufacture clothing. The<br />
first polyester fiber products were produced in 1950 in Great Britain<br />
by London-based Imperial Chemical Industries, which had secured<br />
the British patent rights from CPA. This polyester, which was<br />
made of two monomers, terephthalic acid and ethylene glycol, was<br />
called Terylene. In 1951, Du Pont, which had acquired Terylene patent<br />
rights for the Western Hemisphere, began to market its own version<br />
of this polyester, which was called Dacron. Soon, other companies<br />
around the world were selling polyester materials of similar<br />
composition.<br />
Dacron <strong>and</strong> other polyesters are used in many items in the<br />
United States. Made into fibers <strong>and</strong> woven, Dacron becomes cloth.<br />
When pressed into thin sheets, it becomes Mylar, which is used in<br />
videotapes and audiotapes. Dacron polyester, mixed with other materials,<br />
is also used in many industrial items, including motor vehicle<br />
and boat bodies. Terylene and similar polyester preparations<br />
serve the same purposes in other countries.<br />
The production of polyester begins when monomers are mixed<br />
in huge reactor tanks and heated, which causes them to form giant<br />
polymer chains composed of thousands of alternating monomer<br />
units. If T represents terephthalic acid and E represents ethylene glycol,<br />
a small part of a necklace-like polymer can be shown in the following<br />
way: (TETETETETE). Once each batch of polyester polymer<br />
has the desired composition, it is processed for storage until it is<br />
needed. In this procedure, the material, in liquid form in the high-temperature<br />
reactor, is passed through a device that cools it and<br />
forms solid strips. These strips are then diced, dried, and stored.<br />
When polyester fiber is desired, the diced polyester is melted and<br />
then forced through tiny holes in a “spinneret” device; this process<br />
is called “extruding.” The extruded polyester cools again, while<br />
passing through the spinneret holes, <strong>and</strong> becomes fine fibers called<br />
“filaments.” The filaments are immediately wound into threads that<br />
are collected in rolls. These rolls of thread are then dyed and used to<br />
weave various fabrics. If polyester sheets or other forms of polyester<br />
are desired, the melted, diced polyester is processed in other ways.<br />
Polyester preparations are often mixed with cotton, glass fibers, or<br />
other synthetic polymers to produce various products.<br />
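The alternating monomer pattern just described can be sketched in a few lines of code. This is a toy illustration only; the function name and the string model are invented here, and the T/E symbols follow the necklace notation used above:

```python
def polyester_chain(units: int) -> str:
    """Toy model of a polyester chain: terephthalic acid (T) and
    ethylene glycol (E) monomers strictly alternate, as in the
    necklace-like (TETETETETE) notation used in the text."""
    return "TE" * units

# A five-unit stretch of the "necklace":
print(polyester_chain(5))  # TETETETETE
```

Because condensation strictly alternates acid and glycol units, the chain can never contain two T's or two E's in a row, which the simple string repetition captures.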
Impact<br />
The development of polyester was a natural consequence of the<br />
search for synthetic fibers that developed from work on rayon. Once<br />
polyester had been developed, its great utility led to its widespread<br />
use in industry. In addition, the profitability of the material spurred<br />
efforts to produce better synthetic fibers for specific uses. One example<br />
is that of stretchy polymers such as Helanca, which is a form<br />
of nylon. In addition, new chemical types of polymer fibers were developed,<br />
including the polyurethane materials known collectively<br />
as “spandex” (for example, Lycra and Vyrene).<br />
The wide variety of uses for polyester is amazing. Mixed with<br />
cotton, it becomes wash-and-wear clothing; mixed with glass, it is<br />
used to make boat and motor vehicle bodies; combined with other<br />
materials, it is used to make roofing materials, conveyor belts,
hoses, and tire cords. In Europe, polyester has become the main<br />
packaging material for consumer goods, and the United States does<br />
not lag far behind in this area.<br />
The future is sure to hold more uses for polyester and the invention<br />
of new polymers. These spinoffs of polyester will be essential in<br />
the development of high technology.<br />
See also Buna rubber; Neoprene; Nylon; Orlon; Plastic; Polyethylene;<br />
Polystyrene.<br />
Further Reading<br />
Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and<br />
the Emergence of Macromolecular Chemistry. Philadelphia: University<br />
of Pennsylvania Press, 1998.<br />
Handley, Susannah. Nylon: The Story of a Fashion Revolution, A Celebration<br />
of Design from Art Silk to Nylon and Thinking Fibres. Baltimore:<br />
Johns Hopkins University Press, 1999.<br />
Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor<br />
of Nylon. Washington, D.C.: American Chemical Society<br />
and the Chemical Heritage Foundation, 1996.<br />
Smith, Matthew Boyd. Polyester: The Indestructible Fashion. Atglen,<br />
Pa.: Schiffer, 1998.
Polyethylene<br />
The invention: An artificial polymer with strong insulating properties<br />
and many other applications.<br />
The people behind the invention:<br />
Karl Ziegler (1898-1973), a German chemist<br />
Giulio Natta (1903-1979), an Italian chemist<br />
August Wilhelm von Hofmann (1818-1892), a German chemist<br />
The Development of Synthetic Polymers<br />
In 1841, August Hofmann completed his Ph.D. with Justus von<br />
Liebig, a German chemist and founding father of organic chemistry.<br />
One of Hofmann’s students, William Henry Perkin, discovered that<br />
coal tars could be used to produce brilliant dyes. The German chemical<br />
industry, under Hofmann’s leadership, soon took the lead in<br />
this field, primarily because the discipline of organic chemistry was<br />
much more developed in Germany than elsewhere.<br />
The realities of the early twentieth century found the chemical<br />
industry struggling to produce synthetic substitutes for natural<br />
materials that were in short supply, particularly rubber. Rubber is<br />
a natural polymer, a material composed of a long chain of small<br />
molecules that are linked chemically. An early synthetic rubber,<br />
neoprene, was one of many synthetic polymers (some others were<br />
Bakelite, polyvinyl chloride, <strong>and</strong> polystyrene) developed in the<br />
1920’s and 1930’s. Another polymer, polyethylene, was developed<br />
in 1936 by Imperial Chemical Industries. Polyethylene was a<br />
tough, waxy material that was produced at high temperature and<br />
at pressures of about one thousand atmospheres. Its method of<br />
production made the material expensive, but it was useful as an insulating<br />
material.<br />
World War II and the material shortages associated with it brought<br />
synthetic materials into the limelight. Many new uses for polymers<br />
were discovered, and after the war they were in demand for the production<br />
of a variety of consumer goods, although polyethylene was<br />
still too expensive to be used widely.
Organometallics Provide the Key<br />
Karl Ziegler, an organic chemist with an excellent international<br />
reputation, spent most of his career in Germany. With his international<br />
reputation and lack of political connections, he was a natural<br />
candidate to take charge of the Kaiser Wilhelm Institute for Coal Research<br />
(later renamed the Max Planck Institute) in 1943. Wise planners<br />
saw him as a director who would be favored by the conquering<br />
Allies. His appointment was a shrewd one, since he was allowed to<br />
retain his position after World War II ended. Ziegler thus played a<br />
key role in the resurgence of German chemical research after the war.<br />
Before accepting the position at the Kaiser Wilhelm Institute,<br />
Ziegler made it clear that he would take the job only if he could pursue<br />
his own research interests in addition to conducting coal research.<br />
The location of the institute in the Ruhr Valley meant that<br />
abundant supplies of ethylene were available from the local coal industry,<br />
so it is not surprising that Ziegler began experimenting with<br />
that material.<br />
Although Ziegler’s placement as head of the institute was an important<br />
factor in his scientific breakthrough, his previous research<br />
was no less significant. Ziegler devoted much time to the field of<br />
organometallic compounds, which are compounds that contain a<br />
metal atom that is bonded to one or more carbon atoms. Ziegler was<br />
interested in organoaluminum compounds, which are compounds<br />
that contain aluminum-carbon bonds.<br />
Ziegler was also interested in polymerization reactions, which<br />
involve the linking of thousands of smaller molecules into the single<br />
long chain of a polymer. Several synthetic polymers were known,<br />
but chemists could exert little control over the actual process. It was<br />
impossible to regulate the length of the polymer chain, and the extent<br />
of branching in the chain was unpredictable. It was as a result of<br />
studying the effect of organoaluminum compounds on these chain<br />
formation reactions that the key discovery was made.<br />
Ziegler and his coworkers already knew that ethylene would react<br />
with organoaluminum compounds to produce hydrocarbons,<br />
which are compounds that contain only carbon and hydrogen and<br />
that have varying chain lengths. Regulating the product chain length<br />
continued to be a problem.
At this point, fate intervened in the form of a trace of nickel left in a<br />
reactor from a previous experiment. The nickel caused the chain<br />
lengthening to stop after two ethylene molecules had been linked.<br />
Ziegler and his colleagues then tried to determine whether metals<br />
other than nickel caused a similar effect with a longer polymeric<br />
chain. Several metals were tested, and the most important finding<br />
was that a trace of titanium chloride in the reactor caused the deposition<br />
of large quantities of high-density polyethylene at low pressures.<br />
Ziegler licensed the procedure, and within a year, Giulio Natta<br />
had modified the catalysts to give high yields of polymers with<br />
highly ordered side chains branching from the main chain. This<br />
opened the door for the easy production of synthetic rubber. For<br />
their discovery of Ziegler-Natta catalysts, Ziegler and Natta shared<br />
the 1963 Nobel Prize in Chemistry.<br />
Consequences<br />
Ziegler’s process produced polyethylene that was much more<br />
rigid than the material produced at high pressure. His product also<br />
had a higher density <strong>and</strong> a higher softening temperature. Industrial<br />
exploitation of the process was unusually rapid, and within ten years<br />
more than twenty plants utilizing the process had been built throughout<br />
Europe, producing more than 120,000 metric tons of polyethylene.<br />
This rapid exploitation was one reason Ziegler and Natta were<br />
awarded the Nobel Prize after such a relatively short time.<br />
By the late 1980’s, total production stood at roughly 18 billion<br />
pounds worldwide. Other polymeric materials, including polypropylene,<br />
can be produced by similar means. The ready availability<br />
and low cost of these versatile materials have radically transformed<br />
the packaging industry. Polyethylene bottles are far lighter<br />
than their glass counterparts; in addition, gases and liquids do not<br />
diffuse into polyethylene very easily, and it does not break easily.<br />
As a result, more and more products are bottled in containers<br />
made of polyethylene or other polymers. Other novel materials<br />
possessing properties unparalleled by any naturally occurring material<br />
(Kevlar, for example, which is used to make bullet-resistant<br />
vests) have also been an outgrowth of the availability of low-cost<br />
polymeric materials.
See also Buna rubber; Neoprene; Nylon; Orlon; Plastic; Polyester;<br />
Polystyrene.<br />
Further Reading<br />
Boor, John. Ziegler-Natta Catalysts and Polymerizations. New York:<br />
Academic Press, 1979.<br />
Clarke, Alison J. Tupperware: The Promise of Plastic in 1950s America.<br />
Washington, D.C.: Smithsonian Institution Press, 1999.<br />
Natta, Giulio. “From Stereospecific Polymerization to Asymmetric<br />
Autocatalytic Synthesis of Macromolecules.” In Chemistry, 1963-<br />
1970. River Edge, N.J.: World Scientific, 1999.<br />
Ziegler, Karl. “Consequences <strong>and</strong> Development of an Invention.” In<br />
Chemistry, 1963-1970. River Edge, N.J.: World Scientific, 1999.
Polystyrene<br />
The invention: A clear, moldable polymer with many industrial<br />
uses whose overuse has also threatened the environment.<br />
The people behind the invention:<br />
Edward Simon, a German chemist<br />
Charles Gerhardt (1816-1856), a French chemist<br />
Marcellin Pierre Berthelot (1827-1907), a French chemist<br />
Polystyrene Is Characterized<br />
In the late eighteenth century, a scientist by the name of Casper<br />
Neuman described the isolation of a chemical called “storax” from a<br />
balsam tree that grew in Asia Minor. This isolation led to the first report<br />
on the physical properties of the substance later known as “styrene.”<br />
The work of Neuman was confirmed and expanded upon<br />
years later, first in 1839 by Edward Simon, who evaluated the temperature<br />
dependence of styrene, <strong>and</strong> later by Charles Gerhardt,<br />
who proposed its molecular formula. The work of these two men<br />
sparked an interest in styrene and its derivatives.<br />
Polystyrene belongs to a special class of molecules known as<br />
polymers. A polymer (the name means “many parts”) is a giant molecule<br />
formed by combining small molecular units, called “monomers.”<br />
This combination results in a macromolecule whose physical<br />
properties—especially its strength and flexibility—are significantly<br />
different from those of its monomer components. Such polymers are<br />
often simply called “plastics.”<br />
Polystyrene has become an important material in modern society<br />
because it exhibits a variety of physical characteristics that can be<br />
manipulated for the production of consumer products. Polystyrene<br />
is a “thermoplastic,” which means that it can be softened by heat<br />
and then reformed, after which it can be cooled to form a durable<br />
and resilient product.<br />
At 94 degrees Celsius, polystyrene softens; at room temperature,<br />
however, it rings like a metal when struck. Because of the glasslike<br />
nature and high refractive index of polystyrene, products made
from it are known for their shine and attractive texture. In addition,<br />
the material is characterized by a high level of water resistance and<br />
by electrical insulating qualities. It is also flammable, can be dissolved<br />
or softened by many solvents, and is sensitive to light. These<br />
qualities make polystyrene a valuable material in the manufacture<br />
of consumer products.<br />
Plastics on the Market<br />
In 1866, Marcellin Pierre Berthelot prepared styrene from ethylene<br />
and benzene mixtures in a heated reaction flask. This was the<br />
first synthetic preparation of polystyrene. In 1925, the Naugatuck<br />
Chemical Company began to operate the first commercial styrene/<br />
polystyrene manufacturing plant. In the 1930’s, the Dow Chemical<br />
Company became involved in the manufacturing and marketing of<br />
styrene/polystyrene products. Dow’s Styron 666 was first marketed<br />
as a general-purpose polystyrene in 1938. This material was<br />
the first plastic product to demonstrate polystyrene’s excellent mechanical<br />
properties and ease of fabrication.<br />
The advent of World War II increased the need for plastics. When<br />
the Allies’ supply of natural rubber was interrupted, chemists sought<br />
to develop synthetic substitutes. The use of additives with polymer<br />
species was found to alter some of the physical properties of those<br />
species. Adding substances called “elastomers” during the polymerization<br />
process was shown to give a rubberlike quality to a normally<br />
brittle species. An example of this is Dow’s Styron 475, which<br />
was marketed in 1948 as the first “impact” polystyrene. It is called<br />
an impact polystyrene because it also contains butadiene, which increases<br />
the product’s resistance to breakage. The continued characterization<br />
of polystyrene products has led to the development of a<br />
worldwide industry that fills a wide range of consumer needs.<br />
Following World War II, the plastics industry revolutionized<br />
many aspects of modern society. Polystyrene is only one of the<br />
many plastics involved in this process, but it has found its way into<br />
a multitude of consumer products. Disposable kitchen utensils,<br />
trays and packages, cups, videocassettes, insulating foams, egg cartons,<br />
food wrappings, paints, and appliance parts are only a few of<br />
the typical applications of polystyrenes. In fact, the production of
polystyrene has grown to exceed 5 billion pounds per year.<br />
The tremendous growth of this industry in the postwar era has<br />
been fueled by a variety of factors. Having studied the physical<br />
and chemical properties of polystyrene, chemists and engineers<br />
were able to envision particular uses and to tailor the manufacture<br />
of the product to fit those uses precisely. Because of its low cost of<br />
production, superior performance, and light weight, polystyrene<br />
has become the material of choice for the packaging industry. The<br />
automobile industry also enjoys its benefits. Polystyrene’s lower<br />
density compared to those of glass and steel makes it appropriate<br />
for use in automobiles, since its light weight means that using<br />
it can reduce the weight of automobiles, thereby increasing gas<br />
efficiency.<br />
Impact<br />
There is no doubt that the marketing of polystyrene has greatly<br />
affected almost every aspect of modern society. From computer keyboards<br />
to food packaging, the use of polystyrene has had a powerful<br />
impact on both the quality and the prices of products. Its use is not,<br />
however, without drawbacks; it has also presented humankind<br />
with a dilemma. The wholesale use of polystyrene has created an<br />
environmental problem that represents a danger to wildlife, adds to<br />
roadside pollution, and greatly contributes to the volume of solid<br />
waste in l<strong>and</strong>fills.<br />
Polystyrene has become a household commodity because it lasts.<br />
The drawback of this durability is that polystyrene may last forever. Unlike natural<br />
products, which decompose upon burial, polystyrene is very<br />
difficult to convert into degradable forms. The newest challenge facing<br />
engineers and chemists is to provide for the safe and efficient<br />
disposal of plastic products. Thermoplastics such as polystyrene<br />
can be melted down and remolded into new products, which makes<br />
recycling and reuse of polystyrene a viable option, but this option<br />
requires the cooperation of the same consumers who have benefited<br />
from the production of polystyrene products.<br />
See also Food freezing; Nylon; Orlon; Plastic; Polyester; Polyethylene;<br />
Pyrex glass; Teflon; Tupperware.
Further Reading<br />
Fenichell, Stephen. Plastic: The Making of a Synthetic Century. New<br />
York: HarperBusiness, 1997.<br />
Mossman, S. T. I. Early Plastics: Perspectives, 1850-1950. London: Science<br />
Museum, 1997.<br />
Wünsch, J. R. Polystyrene: Synthesis, Production <strong>and</strong> Applications.<br />
Shropshire, England: Rapra Technology, 2000.
Propeller-coordinated machine gun<br />
The invention: A mechanism that synchronized machine gun fire<br />
with propeller movement to prevent World War I fighter plane<br />
pilots from shooting off their own propellers during combat.<br />
The people behind the invention:<br />
Anthony Herman Gerard Fokker (1890-1939), a Dutch-born<br />
American entrepreneur, pilot, aircraft designer, and<br />
manufacturer<br />
Rol<strong>and</strong> Garros (1888-1918), a French aviator<br />
Max Immelmann (1890-1916), a German aviator<br />
Raymond Saulnier (1881-1964), a French aircraft designer and<br />
manufacturer<br />
French Innovation<br />
The first true aerial combat of World War I took place in 1915. Before<br />
then, weapons attached to airplanes were inadequate for any<br />
real combat work. Hand-held weapons and clumsily mounted machine<br />
guns were used by pilots and crew members in attempts to<br />
convert their observation planes into fighters. On April 1, 1915, this<br />
situation changed. From an airfield near Dunkerque, France, a<br />
French airman, Lieutenant Rol<strong>and</strong> Garros, took off in an airplane<br />
equipped with a device that would make his plane the most feared<br />
weapon in the air at that time.<br />
During a visit to Paris, Garros met with Raymond Saulnier, a French<br />
aircraft designer. In April of 1914, Saulnier had applied for a patent on<br />
a device that mechanically linked the trigger of a machine gun to a cam<br />
on the engine shaft. Theoretically, such an assembly would allow the<br />
gun to fire between the moving blades of the propeller. Unfortunately,<br />
the available machine gun Saulnier used to test his device was a<br />
Hotchkiss gun, which tended to fire at an uneven rate. On Garros’s arrival,<br />
Saulnier showed him a new invention: a steel deflector shield<br />
that, when fastened to the propeller, would deflect the small percentage<br />
of mistimed bullets that would otherwise destroy the blade.
The first test-firing was a disaster, shooting the propeller off and<br />
destroying the fuselage. Modifications were made to the deflector<br />
braces, streamlining the deflector into a wedge shape with gutter<br />
channels for deflected bullets. The invention was attached to a<br />
Morane-Saulnier monoplane, and on April 1, Garros took off alone<br />
toward the German lines. Success was immediate. Garros shot<br />
down a German observation plane that morning. During the next<br />
two weeks, Garros shot down five more German aircraft.<br />
German Luck<br />
The German high command, frantic over the effectiveness of the<br />
French “secret weapon,” sent out spies to try to steal the secret and<br />
also ordered engineers to develop a similar weapon. Luck was with<br />
them. On April 18, 1915, despite warnings by his superiors not to fly<br />
over enemy-held territory, Garros was forced to crash-land behind<br />
German lines with engine trouble. Before he could destroy his aircraft,<br />
Garros and his plane were captured by German troops. The secret<br />
weapon was revealed.<br />
The Germans were ecstatic about the opportunity to examine<br />
the new French weapon. Unlike the French, the Germans had the<br />
first air-cooled machine gun, the Parabellum, which shot continuous<br />
bands of one hundred bullets and was reliable enough to be<br />
adapted to a timing mechanism.<br />
In May of 1915, Anthony Herman Gerard Fokker was shown<br />
Garros’s captured plane and was ordered to copy the idea. Instead,<br />
Fokker and his assistant designed a new firing system. It is unclear<br />
whether Fokker and his team were already working on a synchronizer<br />
or to what extent they knew of Saulnier’s previous work in<br />
France. Within several days, however, they had constructed a working<br />
prototype and attached it to a Fokker Eindecker 1 airplane. The<br />
design consisted of a simple linkage of cams and push-rods connected<br />
to the oil-pump drive of an Oberursel engine and the trigger<br />
of a Parabellum machine gun. The firing of the gun had to be timed<br />
precisely to fire its six hundred rounds per minute between the<br />
twelve-hundred-revolutions-per-minute propeller blades.<br />
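The two rates above imply a simple timing budget for the synchronizer. The sketch below works it out, assuming a two-bladed propeller (the blade count is not stated in the text):

```python
# Rough arithmetic for the synchronizer timing described above.
# The blade count is an assumption; the text gives only the two rates.
GUN_RATE_RPM = 600   # rounds per minute (per the text)
PROP_RPM = 1200      # propeller revolutions per minute (per the text)
BLADES = 2           # assumed two-bladed propeller

# How often a blade crosses the gun's line of fire:
blade_passes_per_min = PROP_RPM * BLADES
# How many full propeller turns elapse between successive rounds:
revs_between_shots = PROP_RPM / GUN_RATE_RPM
# Time window between consecutive blade passings, in milliseconds:
ms_between_blade_passes = 60_000 / blade_passes_per_min

print(blade_passes_per_min)     # 2400
print(revs_between_shots)       # 2.0
print(ms_between_blade_passes)  # 25.0
```

Under these assumed numbers, a blade sweeps past the muzzle every 25 milliseconds, so each round had to be released within a fraction of that window — which is why the cam-and-push-rod timing had to be so precise.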
Fokker took his invention to Doberitz air base, and after a series
Anthony Herman Gerard Fokker<br />
Anthony Fokker was born on the island of Java in the Dutch<br />
East Indies (now Indonesia) in 1890. He returned to his parents’<br />
home country, the Netherlands, to attend school and then studied<br />
aeronautics in Germany. He built his first plane in 1910 and<br />
established Fokker Aeroplanbau near Berlin in 1912.<br />
His monoplanes were highly esteemed when World War I<br />
erupted in 1914, <strong>and</strong> he offered his designs to both the German<br />
<strong>and</strong> the French governments. The Germans hired him. By the<br />
end of the war his fighters, especially the Dr I triplane <strong>and</strong> D VII<br />
biplane, were practically synonymous with German air warfare<br />
because they had been the scourge of Allied pilots.<br />
In 1922 Fokker moved to the United States <strong>and</strong> opened the<br />
Atlantic Aircraft Corporation in New Jersey. He had lost enthusiasm<br />
for military aircraft <strong>and</strong> turned his skills toward producing<br />
advanced designs for civilian use. The planes his company<br />
turned out established one first after another. His T-2 monoplane<br />
became the first to fly nonstop from coast to coast, New<br />
York to San Diego. His ten-seat airliner, the F VII/3m, carried<br />
Lieutenant Comm<strong>and</strong>er Richard Byrd over the North Pole in<br />
1926 <strong>and</strong> Charles Kingsford-Smith across the Pacific Ocean in<br />
1928.<br />
By the time Fokker died in New York in 1939, he had become<br />
a visionary. He foresaw passenger planes as the means to knit<br />
together the far-flung nations of the world into a network of<br />
rapid travel <strong>and</strong> communications.<br />
of exhausting trials before the German high comm<strong>and</strong>, both on the<br />
ground <strong>and</strong> in the air, he was allowed to take two prototypes of the<br />
machine-gun-mounted airplanes to Douai in German-held France.<br />
At Douai, two German pilots crowded into the cockpit with Fokker and were given demonstrations of the plane's capabilities. The airmen were Oswald Boelcke, a test pilot and veteran of forty reconnaissance missions, and Max Immelmann, a young, skillful aviator who was assigned to the front.

When the first combat-ready versions of Fokker's Eindecker 1 were delivered to the front lines, one was assigned to Boelcke, the other to Immelmann. On August 1, 1915, with their aerodrome under attack from nine English bombers, Boelcke and Immelmann manned their aircraft and attacked. Boelcke's gun jammed, and he was forced to cut off his attack and return to the aerodrome. Immelmann, however, succeeded in shooting down one of the bombers with his synchronized machine gun. It was the first victory credited to the Fokker-designed weapon system.
Impact

At the outbreak of World War I, military strategists and commanders on both sides saw the wartime function of airplanes as a means to supply intelligence information behind enemy lines or as airborne artillery-spotting platforms. As the war progressed and aircraft flew more or less freely across the trenches, providing vital information to both armies, it became apparent to ground commanders that while it was important to obtain intelligence on enemy movements, it was important also to deny the enemy similar information.

Early in the war, the French used airplanes as strategic bombing platforms. As both armies began to use their air forces for strategic bombing of troops, railways, ports, and airfields, it became evident that aircraft would have to be employed against enemy aircraft to prevent reconnaissance and bombing raids.

With the invention of the synchronized forward-firing machine gun, pilots could use their aircraft as attack weapons. A pilot finally could coordinate control of his aircraft and his armaments with maximum efficiency. This conversion of aircraft from nearly passive observation platforms to attack fighters is the single greatest innovation in the history of aerial warfare. The development of fighter aircraft forced a change in military strategy, tactics, and logistics and ushered in the era of modern warfare. Fighter planes are responsible for the battle-tested military adage: Whoever controls the sky controls the battlefield.
See also Airplane; Radar; Stealth aircraft.
Further Reading

Dierikx, M. L. J. Fokker: A Transatlantic Biography. Washington, D.C.: Smithsonian Institution Press, 1997.
Franks, Norman L. R. Aircraft Versus Aircraft: The Illustrated Story of Fighter Pilot Combat from 1914 to the Present Day. New York: Barnes & Noble Books, 1999.
Guttman, Jon. Fighting Firsts: Fighter Aircraft Combat Debuts from 1914 to 1944. London: Cassell, 2000.
Pyrex glass

The invention: A superhard and durable glass product with widespread uses in industry and home products.

The people behind the invention:
Jesse T. Littleton (1888-1966), the chief physicist of Corning Glass Works' research department
Eugene G. Sullivan (1872-1962), the founder of Corning's research laboratories
William C. Taylor (1886-1958), an assistant to Sullivan
Cooperating with Science

By the twentieth century, Corning Glass Works had a reputation as a corporation that cooperated with the world of science to improve existing products and develop new ones. In the 1870's, the company had hired university scientists to advise on improving the optical quality of glasses, an early example of today's common practice of academics consulting for industry.

When Eugene G. Sullivan established Corning's research laboratory in 1908 (the first of its kind devoted to glass research), the task that he undertook with William C. Taylor was that of making a heat-resistant glass for railroad lantern lenses. The problem was that ordinary flint glass (the kind in bottles and windows, made by melting together silica sand, soda, and lime) has a fairly high thermal expansion and poor heat conductivity. The glass thus expands unevenly when exposed to heat. This condition can cause the glass to break, sometimes violently. Colored lenses for oil or gas railroad signal lanterns sometimes shattered if they were heated too much by the flame that produced the light and were then sprayed by rain or wet snow. This changed a red "stop" light to a clear "proceed" signal and caused many accidents or near misses in railroading in the late nineteenth century.
Two solutions were possible: improve the thermal conductivity or reduce the thermal expansion. The first is what metals do: When exposed to heat, most metals have an expansion much greater than that of glass, but they conduct heat so quickly that they expand nearly equally throughout and seldom lose structural integrity from uneven expansion. Glass, however, is an inherently poor heat conductor, so this approach was not possible.

Therefore, a formulation had to be found that had little or no thermal expansivity. Pure silica (one example is quartz) fits this description, but it is expensive and, with its high melting point, very difficult to work.

The formulation that Sullivan and Taylor devised was a borosilicate glass—essentially a soda-lime glass with the lime replaced by borax, with a small amount of alumina added. This gave the low thermal expansion needed for signal lenses. It also turned out to have good acid resistance, which led to its being used for the battery jars required for railway telegraph systems and other applications. The glass was marketed as "Nonex" (for "nonexpansion glass").
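The margin between the two kinds of glass can be put in rough numbers. The expansion coefficients below are typical handbook values assumed for illustration, not figures from the text:

```python
# Rough comparison of thermal strain in ordinary flint (soda-lime) glass
# versus a borosilicate such as Nonex. Coefficients are typical handbook
# values (assumed for illustration, not taken from the text).
ALPHA_SODA_LIME = 9.0e-6     # linear expansion per kelvin (assumed)
ALPHA_BOROSILICATE = 3.3e-6  # linear expansion per kelvin (assumed)

delta_t = 150.0  # kelvin: a hot lantern lens suddenly sprayed with cold rain

strain_soda = ALPHA_SODA_LIME * delta_t
strain_boro = ALPHA_BOROSILICATE * delta_t

# The borosilicate takes up roughly a third of the mismatch strain,
# which is why it survives thermal shocks that shatter flint glass.
print(f"soda-lime strain:    {strain_soda:.2e}")
print(f"borosilicate strain: {strain_boro:.2e}")
print(f"ratio: {strain_soda / strain_boro:.1f}x")
```

On these assumed coefficients the flint glass is strained nearly three times as hard by the same temperature swing, and because glass conducts heat poorly, that strain concentrates at the chilled surface rather than spreading evenly.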
From the Railroad to the Kitchen

Jesse T. Littleton joined Corning's research laboratory in 1913. The company had a very successful lens and battery jar material, but no one had even considered it for cooking or other heat-transfer applications, because the prevailing opinion was that glass absorbed and conducted heat poorly. This meant that, in glass pans, cakes, pies, and the like would cook on the top, where they were exposed to hot air, but would remain cold and wet (or at least undercooked) next to the glass surface. As a physicist, Littleton knew that glass absorbed radiant energy very well. He thought that the heat-conduction problem could be solved by using the glass vessel itself to absorb and distribute heat. Glass also had a significant advantage over metal in baking. Metal bakeware mostly reflects radiant energy to the walls of the oven, where it is lost ultimately to the surroundings. Glass would absorb this radiant energy and conduct it evenly to the cake or pie, giving a better result than that of the metal bakeware. Moreover, glass would not absorb and carry over flavors from one baking effort to the next, as some metals do.
Littleton took a cut-off battery jar home and asked his wife to bake a cake in it. He took it to the laboratory the next day, handing pieces around and not disclosing the method of baking until all had agreed that the results were excellent. With this agreement, he was able to commit laboratory time to developing variations on the Nonex formula that were more suitable for cooking. The result was Pyrex, patented and trademarked in May of 1915.

Jesse T. Littleton

To prove that glass is good for baking, place an uncooked pie in a pie tin, set it on a second pan made half of tin and half of non-expanding glass, and put it in the oven. That is the experiment Jesse Talbot Littleton, Jr., used at Corning Glass Works soon after he hired on in 1913. The story behind it began with a ceramic dish that cracked when his wife baked a cake. That would not happen, he realized, with the right kind of glass. Although his wife baked a cake successfully in a glass battery jar bottom at his request, Littleton had to demonstrate the feat for his superiors scientifically. The half of the pie over the glass, it turned out, cooked faster and more evenly. Kitchen glassware was born.

Littleton was born in Belle Haven, Virginia, in 1888. After taking degrees from Southern University and Tulane University, he earned a doctorate in physics from the University of Wisconsin in 1911. He briefly vowed to remain a bachelor and dedicate his life to physics, but Besse Cook, a pretty Mississippi schoolteacher, turned his head, and so he got married instead.

He was the first physicist added to the newly organized research laboratories at Corning in New York. There he studied practical problems involved in the industrial applications of glass, including tempering, and helped invent a gas pressure meter to measure the flow of air in blowing glass and a sensitive, faster thermometer. He rose rapidly in the organization: chief of the physical laboratory in 1920, assistant director of research in 1940, vice president in 1943, director of all Corning research and development in 1946, and general technical adviser in 1951.

Littleton retired a year later and, a passionate outdoorsman, devoted himself to hunting and fishing. A leading figure in the ceramics industry, he belonged to the American Association for the Advancement of Science, the American Physical Society, and the American Institute of Engineers and was an editor for the Journal of Applied Physics. He died in 1966.
Impact

In the 1930's, Pyrex "Flameware" was introduced, with a new glass formulation that could resist the increased heat of stovetop cooking. In the half century since Flameware was introduced, Corning went on to produce a variety of other products and materials: tableware in tempered opal glass; cookware in Pyroceram, a glass product that during heat treatment gained such mechanical strength as to be virtually unbreakable; even hot plates and stoves topped with Pyroceram.

In the same year that Pyrex was marketed for cooking, it was also introduced for laboratory apparatus. Laboratory glassware had been coming from Germany at the beginning of the twentieth century; World War I cut off the supply. Corning filled the gap with Pyrex beakers, flasks, and other items. The delicate blown-glass equipment that came from Germany was completely displaced by the more rugged and heat-resistant machine-made Pyrex ware.

Any number of operations are possible with Pyrex that cannot be performed safely in flint glass: Test tubes can be thrust directly into burner flames, with no preliminary warming; beakers and flasks can be heated on hot plates; and materials that dissolve when exposed to heat can be made into solutions directly in Pyrex storage bottles, a process that cannot be performed in regular glass. The list of such applications is almost endless.

Pyrex has also proved to be the material of choice for mirrors in the great reflector telescopes, beginning in 1934 with that at Mount Palomar. By its nature, astronomical observation must be done with the telescope open to the weather. This means that the mirror must not change shape with temperature variations, which rules out metal mirrors. Silvered (or aluminized) Pyrex serves very well, and Corning has developed great expertise in casting and machining Pyrex blanks for mirrors of all sizes.
See also Laminated glass; Microwave cooking; Plastic; Polystyrene; Teflon; Tupperware.

Further Reading

Blaszczyk, Regina Lee. Imagining Consumers: Design and Innovation from Wedgwood to Corning. Baltimore: Johns Hopkins University Press, 2000.
Graham, Margaret B. W., and Alec T. Shuldiner. Corning and the Craft of Innovation. New York: Oxford University Press, 2001.
Rogove, Susan Tobier, and Marcia B. Steinhauer. Pyrex by Corning: A Collector's Guide. Marietta, Ohio: Antique Publications, 1993.
Stage, Sarah, and Virginia Bramble Vincenti. Rethinking Home Economics: Women and the History of a Profession. Ithaca, N.Y.: Cornell University Press, 1997.
Radar

The invention: An electronic system for detecting objects at great distances, radar was a major factor in the Allied victory of World War II and now pervades modern life, including scientific research.

The people behind the invention:
Sir Robert Watson-Watt (1892-1973), the father of radar, who proposed the chain air-warning system
Arnold F. Wilkins, the person who first calculated the intensity of a radio wave
William C. Curtis (1914-1976), an American engineer
Looking for Thunder

Sir Robert Watson-Watt, a scientist with twenty years of experience in government, led the development of the first radar, an acronym for radio detection and ranging. "Radar" refers to any instrument that uses the reflection of radio waves to determine the distance, direction, and speed of an object.
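The "ranging" half of the acronym is a time-of-flight measurement: the set times the delay between an emitted radio pulse and its returning echo. A minimal sketch (the millisecond delay below is an invented example, not a figure from the text):

```python
# Radar ranging: target range is half the round-trip distance travelled
# by the pulse at the speed of light.
C = 299_792_458.0  # speed of light in metres per second

def range_from_echo(delay_seconds: float) -> float:
    """Target range in metres for a measured round-trip echo delay."""
    return C * delay_seconds / 2

# A hypothetical echo arriving 1 millisecond after the pulse goes out:
print(round(range_from_echo(1e-3) / 1000, 1))  # 149.9 (kilometres)
```

A one-millisecond echo thus corresponds to a target roughly 150 kilometres away, which is why timing circuits, not antennas alone, were the heart of early radar sets.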
In 1915, during World War I (1914-1918), Watson-Watt joined Great Britain's Meteorological Office. He began work on the detection and location of thunderstorms at the Royal Aircraft Establishment in Farnborough and remained there throughout the war. Thunderstorms were known to be a prolific source of "atmospherics" (audible disturbances produced in radio receiving apparatus by atmospheric electrical phenomena), and Watson-Watt began the design of an elementary radio direction finder that gave the general position of such storms. Research continued after the war and reached a high point in 1922 when sealed-off cathode-ray tubes first became available. With assistance from J. F. Herd, a fellow Scot who had joined him at Farnborough, he constructed an instantaneous direction finder, using the new cathode-ray tubes, that gave the direction of thunderstorm activity. It was admittedly of low sensitivity, but it worked, and it was the first of its kind.
William C. Curtis

In addition to radar's applications in navigation, civil aviation, and science, it rapidly became an integral part of military aircraft by guiding weaponry and detecting enemy aircraft and missiles. The research and development industry that grew to provide offensive and defensive systems greatly expanded the opportunities for young scientists during the Cold War. Among them was William C. Curtis (1914-1976), one of the most influential African Americans in defense research.

Curtis graduated from the Tuskegee Institute (later Tuskegee University), where he later served as its first dean of engineering. While there, he helped form and train the Tuskegee Airmen, a famous squadron of African American fighter pilots during World War II. He also worked for the Radio Corporation of America (RCA) for twenty-three years. It was while at RCA that he contributed innovations to military radar, including the Black Cat weapons system, the MG-3 fire control system, the 300-A weapon radar system, and the Airborne Interceptor Data Link.
Watson-Watt did much of this work at a new site at Ditton Park, near Slough, where the National Physical Laboratory had a field station devoted to radio research. In 1927, the two endeavors were combined as the Radio Research Station; it came under the general supervision of the National Physical Laboratory, with Watson-Watt as the first superintendent. This became a center with unrivaled expertise in direction finding using the cathode-ray tube and in studying the ionosphere using radio waves. No doubt these facilities were a factor when Watson-Watt invented radar in 1935.

As radar developed, its practical uses expanded. Meteorological services around the world, using ground-based radar, gave warning of approaching rainstorms. Airborne radars proved to be a great help to aircraft by allowing them to recognize potentially hazardous storm areas. This type of radar was used also to assist research into cloud and rain physics. In this type of research, radar-equipped research aircraft observe the radar echoes inside a cloud as rain develops, and then fly through the cloud, using on-board instruments to measure the water content.
Technician at a modern radar display. (PhotoDisc)
Aiming Radar at the Moon

The principles of radar were further developed through the discipline of radio astronomy. This field began with certain observations made by the American electrical engineer Karl Jansky in 1933 at the Bell Laboratories at Holmdel, New Jersey. Radio astronomers learn about objects in space by intercepting the radio waves that these objects emit.

Jansky found that radio signals were coming to Earth from space. He called these mysterious pulses "cosmic noise." In particular, there was an unusual amount of radio noise when the radio antennas were pointed at the Sun, which increased at the time of sunspot activity.

All this information lay dormant until after World War II (1939-1945), at which time many investigators turned their attention to interpreting the cosmic noise. The pioneers were Sir Bernard Lovell at Manchester, England, Sir Martin Ryle at Cambridge, England, and Joseph Pawsey of the Commonwealth Scientific and Industrial Research Organisation in Australia. The intensity of these radio waves was first calculated by Arnold F. Wilkins.

As more powerful tools became available toward the end of World War II, curiosity caused experimenters to try to detect radio signals from the Moon. This was accomplished successfully in the late 1940's and led to experiments on other objects in the solar system: planets, satellites, comets, and asteroids.
Impact

Radar introduced some new and revolutionary concepts into warfare, and in doing so gave birth to entirely new branches of technology. In the application of radar to marine navigation, the long-range navigation system developed during the war was taken up at once by the merchant fleets, which used military-style radar equipment without modification. In addition, radar systems that could detect buoys and other ships and obstructions in closed waters, particularly under conditions of low visibility, proved particularly useful to peacetime marine navigation.

In the same way, radar was adapted to assist in the navigation of civil aircraft. The various types of track guidance systems developed after the war were aimed at guiding aircraft in the critical last hundred kilometers or so of their run into an airport. Subsequent improvements in the system meant that an aircraft could place itself on an approach or landing path with great accuracy.

The ability of radar to measure distance to an extraordinary degree of accuracy resulted in the development of an instrument that provided pilots with a direct measurement of the distances between airports. Along with these aids, ground-based radars were developed for the control of aircraft along the air routes or in the airport control area.

The development of electronic computers can be traced back to the enormous advances in circuit design that were an integral part of radar research during the war. During that time, some elements of electronic computing had been built into bombsights and other weaponry; later, it was realized that a whole range of computing operations could be performed electronically. By the end of the war, many pulse-forming networks, pulse-counting circuits, and memory circuits existed in the form needed for an electronic computer.

Finally, the developing radio technology has continued to help astronomers explore the universe. Large radio telescopes exist in almost every country and enable scientists to study the solar system in great detail. Radar-assisted cosmic background radiation studies have been a building block for the big bang theory of the origin of the universe.
See also Airplane; Cruise missile; Radio interferometer; Sonar; Stealth aircraft.

Further Reading

Brown, Louis. A Radar History of World War II: Technical and Military Imperatives. Philadelphia: Institute of Physics, 1999.
Latham, Colin, and Anne Stobbs. Pioneers of Radar. Gloucestershire: Sutton, 1999.
Rowland, John. The Radar Man: The Story of Sir Robert Watson-Watt. New York: Roy Publishers, 1964.
Watson-Watt, Robert Alexander. The Pulse of Radar: The Autobiography of Sir Robert Watson-Watt. New York: Dial Press, 1959.
Radio

The invention: The first radio transmissions of music and voice laid the basis for the modern radio and television industries.

The people behind the invention:
Guglielmo Marconi (1874-1937), an Italian physicist and inventor
Reginald Aubrey Fessenden (1866-1932), an American radio pioneer
True Radio

The first major experimenter in the United States to work with wireless radio was Reginald Aubrey Fessenden. This transplanted Canadian was a skilled, self-made scientist, but unlike American inventor Thomas Alva Edison, he lacked the business skills to gain the full credit and wealth that such pathbreaking work might have merited. Guglielmo Marconi, in contrast, is most often remembered as the person who invented wireless (as opposed to telegraphic) radio.

There was a great difference between the contributions of Marconi and Fessenden. Marconi limited himself to experiments with radio telegraphy; that is, he sought to send through the air messages that were currently being sent by wire—signals consisting of dots and dashes. Fessenden sought to perfect radio telephony, or voice communication by wireless transmission. Fessenden thus pioneered the essential precursor of modern radio broadcasting. At the beginning of the twentieth century, Fessenden spent much time and energy publicizing his experiments, thus promoting interest in the new science of radio broadcasting.

Fessenden began his career as an inventor while working for the U.S. Weather Bureau. He set out to invent a radio system by which to broadcast weather forecasts to users on land and at sea. Fessenden believed that his technique of using continuous waves in the radio frequency range (rather than the interrupted waves Marconi had used to produce the dots and dashes of Morse code) would provide the power necessary to carry Morse telegraph code yet be effective enough to handle voice communication. He would turn out to be correct. He conducted experiments as early as 1900 at Rock Point, Maryland, about 80 kilometers south of Washington, D.C., and registered his first patent in the area of radio research in 1902.
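The contrast between the two approaches can be illustrated numerically: interrupted-wave telegraphy switches the carrier on and off, while a continuous wave can have its amplitude shaped by a voice signal. This is a schematic sketch, not Fessenden's apparatus; every frequency below is an arbitrary illustration value:

```python
import math

# Schematic contrast between Marconi-style interrupted waves (on-off keying)
# and Fessenden-style continuous waves carrying a "voice" by amplitude
# modulation. All frequencies are arbitrary, chosen only for illustration.
CARRIER_HZ = 1000.0  # carrier frequency (illustrative)
VOICE_HZ = 50.0      # tone standing in for a voice signal (illustrative)
SAMPLE_RATE = 8000.0

def interrupted_wave(t: float, key_down: bool) -> float:
    """Telegraphy: the carrier is simply switched on or off by the key."""
    return math.sin(2 * math.pi * CARRIER_HZ * t) if key_down else 0.0

def continuous_wave_am(t: float) -> float:
    """Telephony: the carrier runs continuously; its amplitude follows the voice."""
    envelope = 1.0 + 0.5 * math.sin(2 * math.pi * VOICE_HZ * t)
    return envelope * math.sin(2 * math.pi * CARRIER_HZ * t)

# One full "voice" period of the modulated carrier:
samples = [continuous_wave_am(n / SAMPLE_RATE) for n in range(160)]
print(max(samples))  # peaks exceed 1.0 because the voice envelope swells the carrier
```

With the key up, the interrupted wave carries nothing at all; the continuous wave never stops, so the slow voice-frequency variation riding on it can be recovered at the receiver.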
Fame and Glory

In 1900, Fessenden asked the General Electric Company to produce a high-speed generator of alternating current—or alternator—to use as the basis of his radio transmitter. This proved to be the first major request for wireless radio apparatus that could project voices and music. It took the engineers three years to design and deliver the alternator. Meanwhile, Fessenden worked on an improved radio receiver. To fund his experiments, Fessenden aroused the interest of financial backers, who put up one million dollars to create the National Electric Signalling Company in 1902.

Fessenden, along with a small group of handpicked scientists, worked at Brant Rock on the Massachusetts coast south of Boston. Working outside the corporate system, Fessenden sought fame and glory based on his own work, rather than on something owned by a corporate patron.

Fessenden's moment of glory came on December 24, 1906, with the first announced broadcast of his radio telephone. Using an ordinary telephone microphone and his special alternator to generate the necessary radio energy, Fessenden alerted ships up and down the Atlantic coast with his wireless telegraph and arranged for newspaper reporters to listen in from New York City. Fessenden made himself the center of the show. He played the violin, sang, and read from the Bible. Anticipating what would become standard practice fifty years later, Fessenden also transmitted the sounds of a phonograph recording. He ended his first broadcast by wishing those listening "a Merry Christmas." A similar, equally well-publicized demonstration came on December 31.

Although Fessenden was skilled at drawing attention to his invention and must be credited, among others, as one of the engineering founders of the principles of radio, he was far less skilled at making money with his experiments, and thus his long-term impact was limited. The National Electric Signalling Company had a fine beginning and for a time was a supplier of equipment to the United Fruit Company. The financial panic of 1907, however, wiped out an opportunity to sell the Fessenden patents—at a vast profit—to a corporate giant, the American Telephone and Telegraph Corporation.
Impact

Had there been more receiving equipment available and in place, a massive audience could have heard Fessenden's first broadcast. He had the correct idea, even to the point of playing a crude phonograph record. Yet Fessenden, Marconi, and their rivals were unable to establish a regular series of broadcasts. Their "stations" were experimental and promotional.

It took the stresses of World War I to encourage broader use of wireless radio based on Fessenden's experiments. Suddenly, communicating from ship to ship or from a ship to shore became a frequent matter of life or death. Generating publicity was no longer necessary. Governments fought over crucial patent rights. The Radio Corporation of America (RCA) pooled vital knowledge. Ultimately, RCA came to acquire the Fessenden patents. Radio broadcasting commenced, and the radio industry, with its multiple uses for mass communication, was off and running.
Antique tabletop radio. (PhotoDisc)
Guglielmo Marconi<br />
Guglielmo Marconi failed his entrance examinations to the<br />
University of Bologna in 1894. He had a weak educational background,<br />
particularly in science, but he was not about to let<br />
that—or his father’s disapproval—stop him after he conceived<br />
a deep interest in wireless telegraphy during his teenage years.<br />
Marconi was born in 1874 to a wealthy Italian landowner<br />
and an Irish whiskey distiller’s daughter and grew up both in<br />
Italy and England. His parents provided tutors for<br />
him, but he and his brother often accompanied their<br />
mother, a socialite, on extensive travels. He acquired<br />
considerable social skills, easy self-confidence, and<br />
determination from the experience.<br />
Thus, when he failed his exams, he simply tried another<br />
route for his ambitions. He and his mother persuaded<br />
a science professor to let Marconi use a university<br />
laboratory unofficially. His father thought it a<br />
waste of time. However, he changed his mind when<br />
his son succeeded in building equipment that could<br />
transmit electronic signals around their house without wires, an<br />
achievement right at the vanguard of technology.<br />
Now supported by his father’s money, Marconi and his<br />
brother built an elaborate set of equipment—including an oscillator,<br />
coherer, galvanometer, and antennas—that they hoped<br />
would send a signal outdoors over a long distance. His brother<br />
walked off a mile and a half, out of sight, with the galvanometer<br />
and a rifle. When the galvanometer moved, indicating a signal<br />
had arrived from the oscillator, he fired the rifle to let Marconi<br />
know he had succeeded. The incident is widely cited as the first<br />
long-distance radio transmission.<br />
Marconi went on to send signals over greater and greater<br />
distances. He patented a tuner to permit transmissions at specific<br />
frequencies, and he started the Wireless Telegraph and Signal<br />
Company to bring his inventions to the public; its American<br />
branch was later absorbed into the Radio Corporation of America (RCA). He not<br />
only grew wealthy at a young age; he also was awarded half of<br />
the 1909 Nobel Prize in Physics for his work. He died in Rome<br />
in 1937, one of the most famous inventors in the world.<br />
See also Communications satellite; Compact disc; Dolby noise<br />
reduction; FM radio; Long-distance radiotelephony; Radio crystal<br />
sets; Television; Transistor; Transistor radio.<br />
Further Reading<br />
Fessenden, Helen May Trott. Fessenden: Builder of Tomorrows.<br />
New York: Arno Press, 1974.<br />
Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York:<br />
HarperPerennial, 1993.<br />
Masini, Giancarlo. Marconi. New York: Marsilio Publishers, 1995.<br />
Seitz, Frederick. The Cosmic Inventor: Reginald Aubrey Fessenden,<br />
1866-1932. Philadelphia: American Philosophical Society, 1999.
Radio crystal sets<br />
The invention: The first primitive radio receivers, crystal sets led<br />
to the development of the modern radio.<br />
The people behind the invention:<br />
H. H. Dunwoody (1842-1933), an American inventor<br />
Sir John A. Fleming (1849-1945), a British scientist-inventor<br />
Heinrich Rudolph Hertz (1857-1894), a German physicist<br />
Guglielmo Marconi (1874-1937), an Italian engineer-inventor<br />
James Clerk Maxwell (1831-1879), a Scottish physicist<br />
Greenleaf W. Pickard (1877-1956), an American inventor<br />
From Morse Code to Music<br />
In the 1860’s, James Clerk Maxwell demonstrated that electricity<br />
and light had electromagnetic and wave properties. The conceptualization<br />
of electromagnetic waves led Maxwell to propose that<br />
such waves, made by an electrical discharge, would eventually be<br />
sent long distances through space and used for communication<br />
purposes. Then, near the end of the nineteenth century, the technology<br />
that produced and transmitted the needed Hertzian (or radio)<br />
waves was devised by Heinrich Rudolph Hertz, Guglielmo Marconi<br />
(inventor of the wireless telegraph), and many others. The resultant<br />
radio broadcasts, however, were limited to the dots and<br />
dashes of the Morse code.<br />
Then, in 1901, H. H. Dunwoody and Greenleaf W. Pickard invented<br />
the crystal set. Crystal sets were the first radio receivers<br />
that made it possible to hear music and the many other types of<br />
now-familiar radio programs. In addition, the simple construction<br />
of the crystal set enabled countless amateur radio enthusiasts<br />
to build “wireless receivers” (the name for early radios) and<br />
to modify them. Although crystal sets were long ago replaced by<br />
more effective radios and survive now mainly as curiosities, they are<br />
where it all began.<br />
Crystals, Diodes, Transistors, and Chips<br />
Radio broadcasting works by means of electromagnetic radio<br />
waves, which are low-energy cousins of light waves. All electromagnetic<br />
waves have characteristic vibration frequencies and wavelengths.<br />
This article will deal mostly with long radio waves of frequencies<br />
from 550 to 1,600 kilocycles (kilohertz), which can be seen<br />
on amplitude-modulation (AM) radio dials. Frequency-modulation<br />
(FM), shortwave, and microwave radio transmission use higher-energy<br />
radio frequencies.<br />
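The frequency figures above translate directly into wavelengths via λ = c/f. A quick sketch of that arithmetic (the band limits come from the text; the helper function name is illustrative):

```python
# Convert AM broadcast-band frequencies to wavelengths: lambda = c / f.
C = 299_792_458.0  # speed of light, meters per second

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a wave of the given frequency in hertz."""
    return C / freq_hz

# The AM band described in the text: 550 to 1,600 kilocycles (kHz).
for khz in (550, 1600):
    print(f"{khz} kHz -> {wavelength_m(khz * 1e3):.0f} m")
```

The result, waves hundreds of meters long, is why these are called "long" radio waves.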
The broadcasting of radio programs begins with the conversion<br />
of sound to electrical impulses by means of microphones. Then, radio<br />
transmitters turn the electrical impulses into radio waves that<br />
are broadcast together with higher-energy carrier waves. The combined<br />
waves travel at the speed of light to listeners. Listeners hear<br />
radio programs by using radio receivers that pick up broadcast<br />
waves through antenna wires and reverse the steps used in broadcasting.<br />
This is done by converting those waves to electrical impulses<br />
and then into sound waves. The two main types of radio<br />
broadcasting are AM and FM, which vary (modulate) either the<br />
strength (amplitude) or the frequency of the broadcast<br />
waves.<br />
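The broadcast-and-receive chain described above can be sketched numerically: a low-frequency program signal modulates the amplitude of a carrier wave, and the receiver recovers the program by tracking the envelope of the received wave, much as a crystal detector does. All sample rates and frequencies below are illustrative assumptions, not values from the text:

```python
import math

def am_modulate(program, carrier_freq, sample_rate):
    """Amplitude modulation: the program signal varies the carrier's strength."""
    return [(1.0 + p) * math.cos(2 * math.pi * carrier_freq * n / sample_rate)
            for n, p in enumerate(program)]

def envelope_detect(signal, window):
    """Crude envelope detector: rectify, then take a running peak."""
    rectified = [abs(s) for s in signal]
    return [max(rectified[max(0, n - window):n + 1]) for n in range(len(rectified))]

SAMPLE_RATE = 50_000  # samples per second (illustrative)
# A 400 Hz "program" tone, half-strength so the envelope never reaches zero.
tone = [0.5 * math.sin(2 * math.pi * 400 * n / SAMPLE_RATE) for n in range(2000)]
broadcast = am_modulate(tone, carrier_freq=5_000, sample_rate=SAMPLE_RATE)
recovered = envelope_detect(broadcast, window=12)
# 'recovered' tracks 1.0 + tone: the program rides on the carrier's envelope.
```

The running-peak detector here stands in for the diode-and-capacitor envelope follower of a real receiver.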
The crystal set radio receiver of Dunwoody and Pickard had<br />
many shortcomings. These led to the major modifications that produced<br />
modern radios. Crystal sets, however, began the radio industry<br />
and fostered its development. Today, it is possible to purchase<br />
somewhat modified forms of crystal sets as curiosity items. All<br />
crystal sets, original or modern versions, are crude AM radio receivers<br />
composed of four components: an antenna wire, a crystal<br />
detector, a tuning circuit, and a headphone or loudspeaker.<br />
Antenna wires (aerials) pick up radio waves broadcast by external<br />
sources. Originally simple wires, today’s aerials are made to<br />
work better by means of insulation and grounding. The crystal detector<br />
of a crystal set is a mineral crystal that allows radio waves to<br />
be selected (tuned). The original detectors were crystals of a lead-sulfur<br />
mineral, galena. Later, other minerals (such as silicon and carborundum)<br />
were also found to work. The tuning circuit is composed<br />
of 80 to 100 turns of insulated wire, wound on a 0.33-inch<br />
support. Some surprising supports used in homemade tuning circuits<br />
include cardboard toilet-paper-roll centers and Quaker Oats<br />
cereal boxes. When realism is desired in collector crystal sets, the<br />
coil is usually connected to a wire probe selector called a “cat’s<br />
whisker.” In some such crystal sets, a condenser (capacitor) and additional<br />
components are used to extend the range of tunable signals.<br />
Headphones convert chosen radio signals to sound waves that are<br />
heard by only one listener. If desired, loudspeakers can be used to<br />
enable a roomful of listeners to hear chosen programs.<br />
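The tuning circuit described above selects a station by resonance: a coil and capacitor together resonate at f = 1/(2π√(LC)). The coil and capacitor values below are illustrative assumptions chosen to land inside the AM band, not figures from the text:

```python
import math

def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tuning circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values for a homemade coil and a small variable capacitor.
coil_henries = 230e-6        # about 230 microhenries
capacitor_farads = 150e-12   # about 150 picofarads
f = resonant_freq_hz(coil_henries, capacitor_farads)
print(f"Resonates near {f / 1e3:.0f} kHz")  # inside the 550-1,600 kHz AM band
```

Turning the variable capacitor changes C, sliding the resonant frequency along the dial; that is the "tuning" a listener performs.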
An interesting characteristic of the crystal set is that its<br />
operation requires no external power supply. Offsetting<br />
this are its short reception range and the great difficulty of tuning and<br />
maintaining tuned-in radio signals. The short range of these radio<br />
receivers led to, among other things, the use of power supplies<br />
(house current or batteries) in more sophisticated radios. Modern<br />
solutions to tuning problems include using manufactured diode<br />
vacuum tubes to replace crystal detectors, which are a kind of natural<br />
diode. The first manufactured diodes, used in later crystal sets<br />
and other radios, were invented by John Ambrose Fleming, a colleague<br />
of Marconi’s. Other modifications of crystal sets that led to<br />
more sophisticated modern radios include more powerful aerials,<br />
better circuits, and vacuum tubes. Then came miniaturization,<br />
which was made possible by transistors and silicon chips.<br />
Impact<br />
The impact of the invention of crystal sets is almost incalculable,<br />
since they began the modern radio industry. These early radio receivers<br />
enabled countless radio enthusiasts to build radios, to receive radio<br />
messages, and to become interested in developing radio communication<br />
systems. Crystal sets can be viewed as having spawned all<br />
the variant modern radios. These include boom boxes and other portable<br />
radios; navigational radios used in ships and supersonic jet<br />
airplanes; and the shortwave, microwave, and satellite networks<br />
used in the various aspects of modern communication.<br />
The later miniaturization of radios and the development of sophisticated<br />
radio system components (for example, transistors<br />
and silicon chips) set the stage for both television and computers.<br />
Certainly, if one tried to assess the ultimate impact of crystal sets by<br />
simply counting the number of modern radios in the United States,<br />
one would find that few Americans over the age of ten own<br />
fewer than two radios. Typically, one of these runs on household<br />
current and the other is a portable set that is carried almost everywhere.<br />
See also FM radio; Long-distance radiotelephony; Radio; Television;<br />
Transistor radio.<br />
Further Reading<br />
Masini, Giancarlo. Marconi. New York: Marsilio, 1995.<br />
Sievers, Maurice L. Crystal Clear: Vintage American Crystal Sets, Crystal<br />
Detectors, and Crystals. Vestal, N.Y.: Vestal Press, 1991.<br />
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University<br />
of Chicago Press, 1982.
Radio interferometer<br />
The invention: An astronomical instrument that combines multiple<br />
radio telescopes into a single system that makes possible the<br />
exploration of distant space.<br />
The people behind the invention:<br />
Sir Martin Ryle (1918-1984), an English astronomer<br />
Karl Jansky (1905-1950), an American radio engineer<br />
Hendrik Christoffel van de Hulst (1918- ), a Dutch radio<br />
astronomer<br />
Harold Irving Ewen (1922- ), an American astrophysicist<br />
Edward Mills Purcell (1912-1997), an American physicist<br />
Seeing with Radio<br />
Since the early 1600’s, astronomers have relied on optical telescopes<br />
for viewing stellar objects. Optical telescopes detect the<br />
visible light from stars, galaxies, quasars, <strong>and</strong> other astronomical<br />
objects. Throughout the late twentieth century, astronomers developed<br />
more powerful optical telescopes for peering deeper into the<br />
cosmos and viewing objects located hundreds of millions of light-years<br />
away from the earth.<br />
In 1933, Karl Jansky, an American radio engineer with Bell Telephone<br />
Laboratories, constructed a radio antenna receiver for locating<br />
sources of telephone interference. Jansky discovered a daily radio<br />
burst that he was able to trace to the center of the Milky Way<br />
galaxy. In 1935, Grote Reber, another American radio engineer, followed<br />
up Jansky’s work with the construction of the first dish-shaped<br />
“radio” telescope. Reber used his 9-meter-diameter radio<br />
telescope to repeat Jansky’s experiments and to locate other radio<br />
sources in space. He was able to map precisely the locations of various<br />
radio sources in space, some of which later were identified as<br />
galaxies and quasars.<br />
Following World War II (that is, after 1945), radio astronomy<br />
blossomed with the help of surplus radar equipment. Radio astronomy<br />
tries to locate objects in space by picking up the radio waves
that they emit. In 1944, the Dutch astronomer Hendrik Christoffel<br />
van de Hulst had proposed that hydrogen atoms emit radio waves<br />
with a 21-centimeter wavelength. Because hydrogen is the most<br />
abundant element in the universe, van de Hulst’s prediction explained<br />
the nature of extraterrestrial radio waves. His theory later<br />
was confirmed by the American radio astronomers Harold Irving<br />
Ewen and Edward Mills Purcell of Harvard University.<br />
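Van de Hulst's 21-centimeter wavelength corresponds to a frequency near 1,420 megahertz, which follows directly from f = c/λ:

```python
# Frequency of the hydrogen line predicted by van de Hulst: f = c / wavelength.
C = 299_792_458.0      # speed of light, meters per second
wavelength_m = 0.21106 # the hydrogen line's wavelength, about 21 centimeters
freq_hz = C / wavelength_m
print(f"{freq_hz / 1e6:.1f} MHz")  # about 1,420 MHz, the familiar hydrogen-line frequency
```

This single well-defined frequency is what lets radio astronomers map hydrogen throughout the galaxy with tuned receivers.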
By coupling the newly invented computer technology with radio<br />
telescopes, astronomers were able to generate a radio image of a star<br />
almost identical to the star’s optical image. A major advantage of radio<br />
telescopes over optical telescopes is the ability of radio telescopes<br />
to detect extraterrestrial radio emissions day or night, as well as their<br />
ability to bypass the cosmic dust that dims or blocks visible light.<br />
More with Less<br />
After 1945, major research groups were formed in England, Australia,<br />
and The Netherlands. Sir Martin Ryle was head of the Mullard<br />
Radio Astronomy Observatory of the Cavendish Laboratory,<br />
University of Cambridge. He had worked with radar for the Telecommunications<br />
Research Establishment during World War II.<br />
The radio telescopes developed by Ryle and other astronomers<br />
operate on the same basic principle as satellite television receivers.<br />
A constant stream of radio waves strikes the parabolic-shaped reflector<br />
dish, which aims all the radio waves at a focusing point<br />
above the dish. The focusing point directs the concentrated radio<br />
beam to the center of the dish, where it is sent to a radio receiver,<br />
then an amplifier, and finally to a chart recorder or computer.<br />
With large-diameter radio telescopes, astronomers can locate<br />
stars and galaxies that cannot be seen with optical telescopes. This<br />
ability to distinguish fine detail is called “resolution.” As with<br />
optical telescopes, large-diameter radio telescopes have better resolution<br />
than smaller ones. Very large radio telescopes were constructed<br />
in the late 1950’s and early 1960’s (Jodrell Bank, England;<br />
Green Bank, West Virginia; Arecibo, Puerto Rico). Instead of just<br />
building larger radio telescopes to achieve greater resolution, however,<br />
Ryle developed a method called “interferometry.” In Ryle’s<br />
method, a computer is used to combine the incoming radio waves<br />
of two or more movable radio telescopes pointed at the same stellar<br />
object.<br />
One use of VLBI is to navigate a spacecraft: By measuring the angular separation between a<br />
fixed radio star, such as a quasar, and a moving spacecraft, the craft’s location, orientation,<br />
and path can be precisely monitored and adjusted.<br />
Suppose that one had a 30-meter-diameter radio telescope. Its resolving<br />
power would be limited by its diameter. If a second<br />
identical 30-meter-diameter radio telescope were linked with<br />
the first, then one would have an interferometer. The two radio telescopes<br />
would point at exactly the same stellar object, and the radio<br />
emissions from this object captured by the two telescopes would be<br />
combined by computer to produce a higher-resolution image. If the<br />
two radio telescopes were located 1.6 kilometers apart, then their<br />
combined resolution would be equivalent to that of a single radio<br />
telescope dish 1.6 kilometers in diameter.<br />
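The equivalence between a 1.6-kilometer baseline and a 1.6-kilometer dish rests on the diffraction limit: angular resolution is roughly the observing wavelength divided by the aperture (or, for an interferometer, the baseline). A sketch using the text's figures and 21-centimeter observations:

```python
import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (theta ~ lambda / D), in arcseconds."""
    theta_radians = wavelength_m / aperture_m
    return math.degrees(theta_radians) * 3600

wavelength = 0.21  # 21-centimeter hydrogen-line observations
single_dish = resolution_arcsec(wavelength, 30)       # one 30-meter dish
interferometer = resolution_arcsec(wavelength, 1600)  # two dishes 1.6 km apart
print(f"30 m dish:       {single_dish:.0f} arcseconds")
print(f"1.6 km baseline: {interferometer:.0f} arcseconds")
# The pair resolves detail ~53 times finer -- as if it were a 1.6 km dish.
```

The same estimate, applied to continent-spanning baselines, explains why VLBI reaches resolutions no single dish could.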
Ryle constructed the first true radio telescope interferometer at<br />
the Mullard Radio Astronomy Observatory in 1955. He used combinations<br />
of radio telescopes to produce interferometers containing<br />
about twelve radio receivers. Ryle’s interferometer greatly improved<br />
radio telescope resolution for detecting stellar radio sources, mapping<br />
the locations of stars and galaxies, assisting in the discovery of<br />
“quasars” (quasi-stellar radio sources), measuring the earth’s revolution<br />
around the Sun, and measuring the motion of the solar system<br />
through space.<br />
Consequences<br />
Following Ryle’s discovery, interferometers were constructed at<br />
radio astronomy observatories throughout the world. The United<br />
States established the National Radio Astronomy Observatory (NRAO)<br />
in rural Green Bank, West Virginia. The NRAO is operated by nine<br />
eastern universities and is funded by the National Science Foundation.<br />
At Green Bank, a three-telescope interferometer was constructed,<br />
with each radio telescope having a 26-meter-diameter<br />
dish. During the late 1970’s, the NRAO constructed the largest radio<br />
interferometer in the world, the Very Large Array (VLA). The VLA,<br />
located approximately 80 kilometers west of Socorro, New Mexico,<br />
consists of twenty-seven 25-meter-diameter radio telescopes linked<br />
by a supercomputer. The VLA has a resolution equivalent to that of<br />
a single radio telescope 32 kilometers in diameter.<br />
Even larger radio telescope interferometers can be created with<br />
a technique known as “very long baseline interferometry” (VLBI).<br />
VLBI has been used to construct a radio telescope having an effective<br />
diameter of several thousand kilometers. Such an arrangement<br />
involves the precise synchronization of radio telescopes located<br />
in several different parts of the world. Supernova 1987A in<br />
the Large Magellanic Cloud was studied using a VLBI arrangement<br />
between observatories located in Australia, South America,<br />
and South Africa.<br />
Launching radio telescopes into orbit and linking them with<br />
ground-based radio telescopes could produce a radio telescope<br />
whose effective diameter would be larger than that of the earth.<br />
Such instruments will enable astronomers to map the distribution<br />
of galaxies, quasars, and other cosmic objects, to understand the<br />
origin and evolution of the universe, and possibly to detect meaningful<br />
radio signals from extraterrestrial civilizations.<br />
See also Artificial satellite; Communications satellite; Neutrino<br />
detector; Radar; Rocket; Weather satellite.<br />
Further Reading<br />
Graham-Smith, Francis. Sir Martin Ryle: A Biographical Memoir. London:<br />
Royal Society, 1987.<br />
Malphrus, Benjamin K. The History of Radio Astronomy and the National<br />
Radio Astronomy Observatory: Evolution Toward Big Science.<br />
Malabar, Fla.: Krieger, 1996.<br />
Pound, Robert V. Edward Mills Purcell: August 30, 1912-March 7,<br />
1997. Washington, D.C.: National Academy Press, 2000.
Refrigerant gas<br />
The invention: A safe refrigerant gas for domestic refrigerators,<br />
dichlorodifluoromethane helped promote rapid growth in the<br />
acceptance of electric refrigerators in homes.<br />
The people behind the invention:<br />
Thomas Midgley, Jr. (1889-1944), an American engineer and<br />
chemist<br />
Charles F. Kettering (1876-1958), an American engineer and<br />
inventor who was the head of research for General Motors<br />
Albert Henne (1901-1967), an American chemist who was<br />
Midgley’s chief assistant<br />
Frédéric Swarts (1866-1940), a Belgian chemist<br />
Toxic Gases<br />
Refrigerators, freezers, and air conditioners have had a major impact<br />
on the way people live and work in the twentieth century. With<br />
them, people can live more comfortably in hot and humid areas,<br />
and a great variety of perishable foods can be transported and<br />
stored for extended periods. As recently as the early nineteenth century,<br />
the foods most regularly available to Americans were bread<br />
and salted meats. Items now considered essential to a balanced diet,<br />
such as vegetables, fruits, and dairy products, were produced and<br />
consumed only in small amounts.<br />
Through the early part of the twentieth century, the pattern of<br />
food storage and distribution evolved to make perishable foods<br />
more available. Farmers shipped dairy products and frozen meats<br />
to mechanically refrigerated warehouses. Smaller stores and most<br />
American households used iceboxes to keep perishable foods fresh.<br />
The iceman was a familiar figure on the streets of American towns,<br />
delivering large blocks of ice regularly.<br />
In 1930, domestic mechanical refrigerators were being produced<br />
in increasing numbers. Most of them were vapor compression machines,<br />
in which a gas was compressed in a closed system of pipes<br />
outside the refrigerator by a mechanical pump and condensed to a<br />
liquid. The liquid was pumped into a sealed chamber in the refrigerator<br />
and allowed to evaporate to a gas. The process of evaporation<br />
removes heat from the environment, thus cooling the interior of the<br />
refrigerator.<br />
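The evaporative cooling step can be put in rough numbers: each gram of refrigerant that evaporates absorbs its latent heat of vaporization from the cabinet. The latent-heat value below is an assumed round figure of the right order for an early refrigerant, not a number from the text:

```python
# Rough energy balance for the evaporation step of a vapor-compression cycle.
LATENT_HEAT_J_PER_G = 165.0  # assumed heat of vaporization for the refrigerant

def heat_removed_joules(grams_evaporated: float) -> float:
    """Heat drawn from the refrigerator interior as liquid refrigerant evaporates."""
    return grams_evaporated * LATENT_HEAT_J_PER_G

# Evaporating 100 g of refrigerant removes enough heat to cool about
# half a kilogram of water by roughly 8 degrees Celsius (water: ~4.18 J/g per degree).
q = heat_removed_joules(100)
water_grams = 500
delta_t = q / (water_grams * 4.18)
print(f"{q:.0f} J removed -> cools {water_grams} g of water by {delta_t:.1f} C")
```

Running the cycle continuously, with the compressor re-liquefying the gas outside the cabinet, keeps pumping heat out in these small increments.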
The major drawback of early home refrigerators involved the<br />
types of gases used. In 1930, these included ammonia, sulfur dioxide,<br />
and methyl chloride. These gases were acceptable if the refrigerator’s<br />
gas pipes never sprang a leak. Unfortunately, leaks sometimes<br />
occurred, and all these gases are toxic. Ammonia and sulfur<br />
dioxide both have unpleasant odors; if they leaked, at least they<br />
would be detected rapidly. Methyl chloride, however, can form a<br />
dangerously explosive mixture with air, and it has only a very faint,<br />
and not unpleasant, odor. In a hospital in Cleveland during the<br />
1920’s, a refrigerator with methyl chloride leaked, and there was a<br />
disastrous explosion of the methyl chloride-air mixture. After that,<br />
methyl chloride for use in refrigerators was mixed with a small<br />
amount of a very bad-smelling compound to make leaks detectable.<br />
(The same tactic is used with natural gas.)<br />
Three-Day Success<br />
General Motors, through its Frigidaire division, had a substantial<br />
interest in the domestic refrigerator market. Frigidaire refrigerators<br />
used sulfur dioxide as the refrigerant gas. Charles F. Kettering,<br />
director of research for General Motors, decided that Frigidaire<br />
needed a new refrigerant gas that would have good thermal properties<br />
but would be nontoxic and nonexplosive. In early 1930, he sent<br />
Lester S. Keilholtz, chief engineer of General Motors’ Frigidaire division,<br />
to Thomas Midgley, Jr., a mechanical engineer and self-taught<br />
chemist. He challenged them to develop such a new gas.<br />
Midgley’s associates, Albert Henne and Robert McNary, researched<br />
what types of compounds might already fit Kettering’s specifications.<br />
Working with research that had been done by the Belgian<br />
chemist Frédéric Swarts in the late nineteenth and early twentieth<br />
centuries, Midgley, Henne, and McNary realized that dichlorodifluoromethane<br />
would have ideal thermal properties and the right<br />
boiling point for a refrigerant gas. The only question left to be answered<br />
was whether the compound was toxic.<br />
The chemists prepared a few grams of dichlorodifluoromethane<br />
and put it, along with a guinea pig, into a closed chamber. They<br />
were delighted to see that the animal seemed to suffer no ill effects<br />
at all and was able to breathe and move normally. They were briefly<br />
puzzled when a second batch of the compound killed a guinea pig<br />
almost instantly. Soon, they discovered that an impurity in one of<br />
the ingredients had produced a potent poison in their refrigerant<br />
gas. A simple washing procedure completely removed the poisonous<br />
contaminant.<br />
This astonishingly successful research project was completed in<br />
three days. The boiling point of dichlorodifluoromethane is −29.8 degrees<br />
Celsius. It is nontoxic and nonflammable and possesses excellent<br />
thermal properties. When Midgley was awarded the Perkin<br />
Medal for industrial chemistry in 1937, he gave the audience a<br />
graphic demonstration of the properties of dichlorodifluoromethane:<br />
He inhaled deeply of its vapors and exhaled gently into a jar<br />
containing a burning candle. The candle flame promptly went out.<br />
This visual evidence proved that dichlorodifluoromethane was not<br />
poisonous and would not burn.<br />
Impact<br />
The availability of this safe refrigerant gas, which was named<br />
Freon, led to drastic changes in the United States. The current patterns<br />
of food production, distribution, and consumption are a direct<br />
result, as is air conditioning. Air conditioning was developed early<br />
in the twentieth century; by the late 1970’s, most American cars and<br />
residences were equipped with air conditioning, and other countries<br />
with hot climates followed suit. Consequently, major relocations<br />
of populations and businesses have become possible. Since<br />
World War II, there have been steady migrations to the “Sun Belt,”<br />
the band of states spanning the United States from southeast to southwest,<br />
because air conditioners have made these areas much more livable.<br />
Freon is a member of a family of chemicals called “chlorofluorocarbons.”<br />
In addition to refrigeration, it is also used as a propellant<br />
in aerosols and in the production of polystyrene plastics. In 1974,<br />
scientists began to suspect that chlorofluorocarbons, when released<br />
into the air, might have a serious effect on the environment. They<br />
speculated that the compounds might migrate into the stratosphere,<br />
where they could be decomposed by the intense ultraviolet light<br />
that the thin but vital ozone layer normally prevents from reaching<br />
the earth’s surface. In the process,<br />
large amounts of the ozone layer might also be destroyed—<br />
letting in the dangerous ultraviolet light. In addition to possible climatic<br />
effects, the resulting increase in ultraviolet light reaching the<br />
earth’s surface would raise the incidence of skin cancers. As a result,<br />
chemical manufacturers are trying to develop alternative refrigerant<br />
gases that will not harm the ozone layer.<br />
See also Electric refrigerator; Food freezing;<br />
Microwave cooking.<br />
Further Reading<br />
Leslie, Stuart W. Boss Kettering. New York: Columbia University<br />
Press, 1983.<br />
Mahoney, Thomas A. “The Seventy-one-year Saga of CFC’s.” Air<br />
Conditioning, Heating and Refrigeration News (March 15, 1999).<br />
Preville, Cherie R., and Chris King. “Cooling Takes Off in the Roaring<br />
Twenties.” Air Conditioning, Heating and Refrigeration News<br />
(April 30, 2001).
Reserpine<br />
The invention: A drug with unique hypertension-decreasing effects<br />
that provides clinical medicine with a versatile and effective<br />
tool.<br />
The people behind the invention:<br />
Robert Wallace Wilkins (1906- ), an American physician and<br />
clinical researcher<br />
Walter E. Judson (1916- ), an American clinical researcher<br />
Treating Hypertension<br />
Excessively elevated blood pressure, clinically known as “hypertension,”<br />
has long been recognized as a pervasive and serious human<br />
malady. In a few cases, hypertension is recognized as an effect<br />
brought about by particular pathologies (diseases or disorders). Often,<br />
however, hypertension occurs as the result of unknown causes.<br />
Despite the uncertainty about its origins, untreated hypertension<br />
leads to potentially dramatic health problems, including increased<br />
risk of kidney disease, heart disease, and stroke.<br />
Recognizing the need to treat hypertension in a relatively straightforward<br />
and effective way, Robert Wallace Wilkins, a clinical researcher<br />
at Boston University’s School of Medicine and the head of<br />
Massachusetts Memorial Hospital’s Hypertension Clinic, began to<br />
experiment with reserpine in the early 1950’s. Initially, the samples<br />
that were made available to Wilkins were crude and unpurified.<br />
Eventually, however, a purified version was used.<br />
Reserpine has a long and fascinating history of use—both clinically<br />
and in folk medicine—in India. The source of reserpine is the<br />
root of the shrub Rauwolfia serpentina, first mentioned in Western<br />
medical literature in the 1500’s but virtually unknown, or at least<br />
unaccepted, outside India until the mid-twentieth century. Crude<br />
preparations of the shrub had been used for a variety of ailments in<br />
India for centuries prior to its use in the West.<br />
Wilkins’s work with the drug did not begin on an encouraging note, because reserpine does not act rapidly, a fact that had been noted in the Indian medical literature. The standard observation in Western pharmacotherapy, however, was that most drugs work rapidly; if a week elapsed without a drug showing positive effects, the conventional Western wisdom held that it was unlikely to work at all. Additionally, physicians and patients alike tend to look for rapid improvement, or at least positive indications. Reserpine is deceptive in this respect, and Wilkins and his coworkers were nearly deceived. While working with crude preparations of Rauwolfia serpentina, they were becoming very pessimistic when a patient who had been treated for many consecutive days began to show symptomatic relief. Even so, only after months of treatment did Wilkins become a believer in the drug’s beneficial effects.
The Action of Reserpine
When preparations of pure reserpine became available in 1952, the drug did not at first appear to be the active ingredient in the crude preparations. When patients’ heart rate and blood pressure began to drop after weeks of treatment, however, the investigators saw that reserpine was indeed responsible for the improvements.

Once reserpine’s activity began, Wilkins observed a number of important and unique consequences. Both the crude preparations and pure reserpine significantly reduced the two most meaningful measures of blood pressure: systolic pressure, the peak pressure produced in the arteries by a contraction of the heart, and diastolic pressure, the low point that occurs when the heart is resting. To lower the mean blood pressure in the system significantly, both of these pressures must be reduced. The administration of low doses of reserpine produced an average drop in pressure of about 15 percent, a figure that was considered less than dramatic but still highly significant. Blood pressure is a complex phenomenon determined by many factors, including the resistance of the arteries, the force of contraction of the heart, and the heartbeat rate. In addition to lowering blood pressure, reserpine reduced the heartbeat rate by about 15 percent, an important auxiliary action.
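The arithmetic behind this can be sketched briefly. The calculation below is illustrative, not from the original study: it assumes the common clinical approximation that mean arterial pressure equals diastolic pressure plus one-third of the pulse pressure, and the 160/100 reading is a hypothetical example.

```python
def mean_arterial_pressure(systolic, diastolic):
    """Approximate mean arterial pressure (mmHg) using the common
    clinical rule: MAP = diastolic + (systolic - diastolic) / 3."""
    return diastolic + (systolic - diastolic) / 3

# Hypothetical hypertensive reading of 160/100 mmHg, with both
# pressures dropping about 15 percent, as reported for low-dose
# reserpine:
before = mean_arterial_pressure(160, 100)               # 120.0 mmHg
after = mean_arterial_pressure(160 * 0.85, 100 * 0.85)  # 102.0 mmHg
print(before, after)  # the mean falls by the same 15 percent
```

Because the mean is a weighted average of the two readings, it falls substantially only when both are reduced, which is why reserpine’s action on both measures mattered.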
In the early 1950’s, various therapeutic drugs were used to treat hypertension. Wilkins recognized that reserpine’s major contribution would be as a drug used in combination with those already in use. His studies established that reserpine, combined with at least one of the existing drugs, produced an additive effect in lowering blood pressure. At times, the combinations even produced a “synergistic effect,” one greater than the sum of the effects of the drugs administered alone. Wilkins also discovered that reserpine was most effective when administered in low dosages: increasing the dosage did not significantly increase the drug’s effect, but it did increase the likelihood of unwanted side effects. Reserpine was therefore most effective when administered in low dosages along with other drugs.
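The difference between an additive and a synergistic combination can be made concrete with a toy model. The numbers and the simple additive rule below are purely illustrative, as the article reports the phenomenon but gives no doses or percentages for the combinations.

```python
def combined_reduction(r1, r2, synergy=1.0):
    """Toy model of the fractional blood-pressure reduction from two
    drugs. With synergy=1.0 the individual effects simply add; a value
    above 1.0 scales the joint effect beyond that sum (a synergistic
    effect). The result is capped at 1.0."""
    return min(1.0, synergy * (r1 + r2))

additive = combined_reduction(0.15, 0.10)          # 0.25: sum of effects
synergistic = combined_reduction(0.15, 0.10, 1.2)  # 0.30: more than the sum
print(additive, synergistic)
```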
Wilkins believed that reserpine’s most distinctive effects were not those produced directly in the cardiovascular system but those produced indirectly through the brain. Hypertension is often accompanied by neurotic anxiety, which is both a consequence of justifiable fears about the health damage prolonged hypertension can cause and itself a contributor to the hypertension. Wilkins’s patients invariably felt better mentally, were less anxious, and were sedated, but in an unusual way: reserpine made patients drowsy but did not generally cause sleep, and if sleep did occur, patients could be awakened easily. Such effects are now recognized as characteristic of tranquilizing drugs, or antipsychotics. In effect, Wilkins had discovered a new and important category of drugs: tranquilizers.
Impact

Reserpine holds a vital position in the historical development of antihypertensive drugs for two reasons. First, it was the first drug discovered to block activity in areas of the nervous system that use norepinephrine, or its close relative dopamine, as transmitter substances. Second, it was the first hypertension drug to be widely accepted and used. Its unusual combination of characteristics made it effective in most patients.
Since the 1950’s, medical science has rigorously examined cardiovascular functioning and diseases such as hypertension. Many new contributors, such as diet and stress, have been recognized as factors in hypertension. Controlling diet and lifestyle helps tremendously in treating hypertension, but if the nervous system could not be partially controlled, many cases of hypertension would remain problematic. Reserpine made that control possible.
See also Abortion pill; Antibacterial drugs; Artificial kidney; Birth control pill; Salvarsan.

Further Reading

MacGregor, G. A., and Norman M. Kaplan. Hypertension. 2d ed. Abingdon: Health Press, 2001.
“Reconsidering Reserpine.” American Family Physician 45 (March, 1992).
Weber, Michael A. Hypertension Medicine. Totowa, N.J.: Humana, 2001.
Rice and wheat strains
The invention: Artificially created high-yielding wheat and rice varieties that are helping food producers in developing countries keep pace with population growth.

The people behind the invention:
Orville A. Vogel (1907-1991), an agronomist who developed high-yielding semidwarf winter wheats and equipment for wheat research
Norman E. Borlaug (1914- ), a distinguished agricultural scientist
Robert F. Chandler, Jr. (1907-1999), an international agricultural consultant and director of the International Rice Research Institute, 1959-1972
William S. Gaud (1907-1977), a lawyer and the administrator of the U.S. Agency for International Development, 1966-1969
The Problem of Hunger

In the 1960’s, agricultural scientists created new, high-yielding strains of rice and wheat designed to fight hunger in developing countries. Although the introduction of these new grains raised levels of food production in poor countries, population growth and other factors limited the success of the so-called “Green Revolution.”

Before World War II, many countries of Asia, Africa, and Latin America exported grain to Western Europe. After the war, however, these countries began importing food, especially from the United States. By 1960, they were importing about nineteen million tons of grain a year; that level nearly doubled, to thirty-six million tons, by 1966. Rapidly growing populations forced the largest developing countries, China, India, and Brazil in particular, to import huge amounts of grain. Famine was averted on the Indian subcontinent in 1966 and 1967 only by U.S. shipments of wheat to the region. The United States then changed its food policy: instead of contributing food aid directly to hungry countries, it began working to help such countries feed themselves.
The new rice and wheat strains were introduced just as countries in Africa and Asia were gaining their independence from the European nations that had colonized them. The Cold War was still going strong, and Washington and other Western capitals feared that the Soviet Union was gaining influence in the emerging countries. To help counter this threat, the U.S. Agency for International Development (USAID) was active in the Third World in the 1960’s, directing or contributing to dozens of agricultural projects, including building rural infrastructure (farm-to-market roads, irrigation projects, and rural electric systems), introducing modern agricultural techniques, and importing fertilizer or constructing fertilizer factories in other countries. By raising the standard of living of impoverished people in developing countries through applying technology to agriculture, policymakers hoped to eliminate the socioeconomic conditions that would support communism.
The Green Revolution
It was against this background that William S. Gaud, administrator of USAID from 1966 to 1969, first talked about a “green revolution” in a 1968 speech before the Society for International Development in Washington, D.C. The term “green revolution” has been used to refer both to the scientific development of high-yielding food crops and to the broader socioeconomic changes in a country’s agricultural sector stemming from farmers’ adoption of these crops.

In 1947, S. C. Salmon, a United States Department of Agriculture (USDA) scientist, brought a wheat-dwarfing gene to the United States. Developed in Japan, the gene produced wheat on a short stalk that was strong enough to bear a heavy head of grain. Orville Vogel, another USDA scientist, then introduced the gene into local wheat strains, creating a successful dwarf variety known as Gaines wheat. Under irrigation, Gaines wheat produced record yields. After hearing about Vogel’s work, Norman Borlaug, who headed the Rockefeller Foundation’s wheat-breeding program in Mexico, adapted Gaines wheat, later called “miracle wheat,” to a variety of growing conditions in Mexico.
[Photo: Workers in an Asian rice field. (PhotoDisc)]
Success with the development of high-yielding wheat varieties persuaded the Rockefeller and Ford foundations to pursue similar ends in rice culture. The foundations funded the International Rice Research Institute (IRRI) in Los Banos, Philippines, appointing as director Robert F. Chandler, Jr., an international agricultural consultant. Under his leadership, IRRI researchers cross-bred Peta, a tall variety of rice from Indonesia, with Deo-geo-woo-gen, a dwarf rice from Taiwan, to produce a new strain, IR-8. Released in 1966 and dubbed “miracle rice,” IR-8 produced yields double those of other Asian rice varieties and in a shorter time: 120 days, in contrast to 150 to 180 days.

Statistics from India illustrate the expansion of the new grain varieties. During the 1966-1967 growing season, Indian farmers planted improved rice strains on 900,000 hectares, or 2.5 percent of the total area planted in rice. By 1984-1985, the surface area planted in improved rice varieties stood at 23.4 million hectares, or 56.9 percent of the total. The rate of adoption was even faster for wheat. In 1966-1967, improved varieties covered 500,000 hectares, comprising 4.2 percent of the total wheat crop. By the 1984-1985 growing season, the surface area had expanded to 19.6 million hectares, or 82.9 percent of the total wheat crop.
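The percentages and areas quoted above imply the total areas planted, which can be back-calculated with simple arithmetic; the derived totals below are computed from the figures in the text, not quoted from it.

```python
def implied_total(improved_area_ha, share_of_total):
    """Back-calculate the total planted area from the area under
    improved varieties and the share of the total it represents."""
    return improved_area_ha / share_of_total

# 1966-1967: improved rice on 900,000 ha was 2.5 percent of the total,
# implying roughly 36 million ha of rice land in India.
rice_total = implied_total(900_000, 0.025)
# 1966-1967: improved wheat on 500,000 ha was 4.2 percent of the total,
# implying roughly 11.9 million ha of wheat land.
wheat_total = implied_total(500_000, 0.042)
print(rice_total / 1e6, wheat_total / 1e6)
```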
To produce such high yields, IR-8 and other improved varieties of rice and wheat required irrigation, fertilizers, and pesticides. Irrigation further increased food production by allowing year-round farming and the planting of multiple crops on the same plot of land: either two crops of high-yielding grain varieties or one grain crop and another food crop.
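The compounding of higher per-crop yield and multiple cropping can be sketched numerically; the baseline of 1.0 below is a placeholder unit, not a yield figure from the article.

```python
def annual_yield(per_crop_yield, crops_per_year):
    """Annual output from repeated cropping of the same plot."""
    return per_crop_yield * crops_per_year

baseline = annual_yield(1.0, 1)  # traditional variety, one crop a year
# IR-8 yielded roughly double per crop, and its 120-day season made a
# second crop feasible on irrigated land:
improved = annual_yield(2.0, 2)
print(improved / baseline)  # up to a fourfold increase in annual output
```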
Expectations
The rationale behind the introduction of high-yielding grains in developing countries was that it would start a cycle of improvement in the lives of the rural poor. High-yielding grains would lead to bigger harvests and better-nourished, healthier families. If better nutrition enabled more children to survive, the need to have large families to ensure care for elderly parents would ease. A higher survival rate of children would lead couples to use family planning, slowing overall population growth and allowing per capita food intake to rise.

The greatest impact of the Green Revolution has been seen in Asia, which experienced dramatic increases in rice production, and on the Indian subcontinent, with increases in rice and wheat yields. Latin America, especially Mexico, enjoyed increases in wheat harvests. Sub-Saharan Africa initially was left out of the revolution, as scientists paid scant attention to increasing the yields of such staple food crops as yams, cassava, millet, and sorghum. By the 1980’s, however, this situation was being remedied with new research directed toward millet and sorghum.

Research is conducted by a network of international agricultural research centers. Backed by both public and private funds, these centers cooperate with international assistance agencies, private foundations, universities, multinational corporations, and government agencies to pursue and disseminate research into improved crop varieties for farmers in the Third World. IRRI and the International Maize and Wheat Improvement Center (CIMMYT) in Mexico City are two of these centers.
Impact
Expectations went unrealized in the first few decades following the Green Revolution. Despite the higher yields from the millions of tons of improved grain seeds imported into the developing world, lower-yielding grains still accounted for much of the surface area planted in grain. The reasons for this explain the limits and impact of the Green Revolution.

The subsistence mentality dies hard. The main targets of Green Revolution programs were small farmers, people whose crops provide barely enough to feed their families and provide seed for the next crop. If an experimental grain failed, they faced starvation. Such farmers hedged their bets when faced with a new proposition, for example, by intercropping, alternating rows of different grains in the same field. In this way, even if one crop failed, another might feed the family.

Poor farmers in developing countries also were likely to be illiterate and not eager to try something they did not fully understand. Also, by definition, poor farmers often did not have the means to purchase the inputs required to grow the improved varieties: irrigation, fertilizer, and pesticides.

In many developing countries, therefore, rich farmers tended to be the innovators. More likely than poor farmers to be literate, they also had the money to exploit fully the improved grain varieties. They also were more likely than subsistence-level farmers to be in touch with the monetary economy, making purchases from the agricultural supply industry and arranging sales through established marketing channels, rather than producing primarily for personal or family use.

Once wealthy farmers adopted the new grains, it often became more difficult for poor farmers to do so. Increased demand for limited supplies, such as pesticides and fertilizers, raised costs, while bigger-than-usual harvests depressed market prices. With high sales volumes, owners of large farms could withstand the higher costs and lower per-unit profits, but smaller farmers often could not.

Often, the result of adopting improved grains was that small farmers could no longer make ends meet solely by farming. Instead, they were forced to hire themselves out as laborers on large farms. Surges of laborers into a limited market depressed rural wages, making it even more difficult for small farmers to eke out a living. The result was that rich farmers got richer and poor farmers got poorer. Often, small farmers who could no longer support their families would leave rural areas and migrate to the cities, seeking work and swelling the ranks of the urban poor.
Mixed Results

Orville A. Vogel

Born in 1907, Orville Vogel grew up on a farm in eastern Nebraska, and farming remained his passion for his entire life. He earned bachelor’s and master’s degrees in agriculture from the University of Nebraska, and then a doctorate in agronomy from Washington State University (WSU) in 1939.

Eastern Washington agreed with him, and he stayed there. He began his career as a wheat breeder in 1931 for the U.S. Department of Agriculture, stationed at WSU. During the next forty-two years, he also took on the responsibilities of associate agronomist for the university’s Division of Agronomy, and from 1960 until his retirement in 1973 he was professor of agronomy.

At heart Vogel was an experimenter and tinkerer, renowned among his peers for his keen powers of observation and his unselfishness. In addition to the wheat strains he bred that helped launch the Green Revolution, he took part in the search for plant varieties resistant to snow mold and foot rot. However, according to the father of the Green Revolution, Nobel laureate Norman Borlaug, Vogel’s greatest contribution may not have been semidwarf wheat varieties but the many innovations in farming equipment he built as a sideline. These unheralded inventions automated the planting and harvesting of research plots, making research much easier and faster to carry out.

In recognition of his achievements, Vogel received the U.S. National Medal of Science in 1975 and entered the Agricultural Research Service’s Science Hall of Fame in 1987. Vogel died in Washington in 1991.
The effects of the Green Revolution were thus mixed. The dissemination of improved grain varieties unquestionably increased grain harvests in some of the poorest countries of the world. Seed companies developed, produced, and sold commercial quantities of improved grains, and fertilizer and pesticide manufacturers logged sales to developing countries thanks to USAID-sponsored projects.

Along with disrupting the rural social structure and encouraging rural flight to the cities, the Green Revolution has had other negative effects. For example, the millions of tube wells sunk in India to irrigate crops reduced groundwater levels in some regions faster than they could be recharged. In other areas, excessive use of pesticides created health hazards, and fertilizer use led to streams and ponds being clogged by weeds. The scientific community became concerned that the use of improved varieties of grain, many of which were developed from the same mother variety, reduced the genetic diversity of the world’s food crops, making them especially vulnerable to attack by disease or pests.

Perhaps the most significant impact of the Green Revolution is the change it wrought in the income and class structure of rural areas; malnutrition often was not eliminated in either the countryside or the cities, and, almost without exception, the relative position of peasants deteriorated. Many analysts admit that the Green Revolution did not end world hunger, but they argue that it did buy time. The poorest of the poor would be even worse off without it.
See also Artificial chromosome; Cloning; Genetic “fingerprinting”; Genetically engineered insulin; In vitro plant culture.

Further Reading

Glaeser, Bernhard, ed. The Green Revolution Revisited: Critique and Alternatives. London: Allen & Unwin, 1987.
Hayami, Yujiro, and Masao Kikuchi. A Rice Village Saga: Three Decades of Green Revolution in the Philippines. Lanham, Md.: Barnes and Noble, 2000.
Karim, M. Bazlul. The Green Revolution: An International Bibliography. New York: Greenwood Press, 1986.
Lipton, Michael, and Richard Longhurst. New Seeds and Poor People. Baltimore: Johns Hopkins University Press, 1989.
Perkins, John H. Geopolitics and the Green Revolution: Wheat, Genes, and the Cold War. New York: Oxford University Press, 1997.
Richter scale
The invention: A scale for measuring the strength of earthquakes based on their seismograph recordings.

The people behind the invention:
Charles F. Richter (1900-1985), an American seismologist
Beno Gutenberg (1889-1960), a German American seismologist
Kiyoo Wadati (1902- ), a pioneering Japanese seismologist
Giuseppe Mercalli (1850-1914), an Italian physicist, volcanologist, and meteorologist
Earthquake Study by Eyewitness Report
Earthquakes range in strength from barely detectable tremors to catastrophes that devastate large regions and take hundreds of thousands of lives. Yet the human impact of earthquakes is not an accurate measure of their power; minor earthquakes in heavily populated regions may cause great destruction, whereas powerful earthquakes in remote areas may go unnoticed. To study earthquakes, it is essential to have an accurate means of measuring their power.

The first attempt to measure the power of earthquakes was the development of intensity scales, which relied on damage effects and reports by witnesses to measure the force of vibration. The first such scale was devised by geologists Michele Stefano de Rossi and François-Alphonse Forel in 1883. It ranked earthquakes on a scale of 1 to 10. The de Rossi-Forel scale proved to have two serious limitations: its level 10 encompassed a great range of effects, and its description of effects on human-made and natural objects was so specifically European that it was difficult to apply the scale elsewhere.

To remedy these problems, Giuseppe Mercalli published a revised intensity scale in 1902. The Mercalli scale, as it came to be called, added two levels to the high end of the de Rossi-Forel scale, making its highest level 12. It also was rewritten to make it more globally applicable. With later modifications by Charles F. Richter, the Mercalli scale is still in use.
Intensity measurements, even though they are somewhat subjective, are very useful in mapping the extent of earthquake effects. Nevertheless, intensity measurements are still not ideal measuring techniques. Intensity varies from place to place and is strongly influenced by geologic features, and different observers frequently report different intensities. There is a need for an objective method of describing the strength of earthquakes with a single measurement.

Charles F. Richter

Charles Francis Richter was born in Ohio in 1900. After his mother divorced his father, she moved the family to Los Angeles in 1909. A precocious student, Richter entered the University of Southern California at sixteen and transferred to Stanford University a year later, majoring in physics. He graduated in 1920 and finished a doctorate in theoretical physics at the California Institute of Technology in 1928.

While Richter was a graduate student at Caltech, Nobel laureate Robert A. Millikan lured him away from his original interest, astronomy, to become an assistant at the seismology laboratory. Richter realized that seismology was then a relatively new discipline and that he could help it mature. He stayed with it, and with Caltech, for the rest of his university career, retiring as professor emeritus in 1970. In 1971 he opened a consulting firm, Lindvall, Richter and Associates, to assess the earthquake readiness of structures.

Richter published more than two hundred articles about earthquakes and earthquake engineering and two influential books, Elementary Seismology and Seismicity of the Earth (with Beno Gutenberg). These works, together with his teaching, trained a generation of earthquake researchers and gave them a basic tool, the Richter scale, to work with. He died in California in 1985.
Measuring Earthquakes One Hundred Kilometers Away

An objective technique for determining the power of earthquakes was devised in the early 1930’s by Richter at the California Institute of Technology in Pasadena, California. The eventual usefulness of the scale that came to be called the “Richter scale” was completely unforeseen at first.
[Figure: Graphic representation of the Richter scale showing examples of historically important earthquakes. The chart plots amplified maximum ground motion (microns) against magnitude, with descriptive categories ranging from “Not felt” through “Minor,” “Small,” “Moderate,” “Strong,” and “Major” to “Great”; labeled examples include New Madrid, Missouri, 1812; San Francisco, 1906; and Alaska, 1964. Annotations mark where damage begins (fatalities rare) and where great devastation with many fatalities is possible.]

In 1931, the California Institute of Technology was preparing to issue a catalog of all earthquakes detected by its seismographs in the preceding three years. Several hundred earthquakes were listed, most of which had not been felt by humans but had been detected only by instruments. Richter was concerned about possible misinterpretations of the listing. With no indication of the strength of the earthquakes, the public might overestimate the risk of earthquakes in areas where seismographs were numerous and underestimate the risk in areas where seismographs were few.

To remedy the lack of a measuring method, Richter devised the scale that now bears his name. On this scale, earthquake force is expressed in magnitudes, which in turn are expressed in whole numbers and decimals. Each increase of one magnitude indicates a tenfold jump in the earthquake’s force. These measurements were defined for a standard seismograph located one hundred kilometers from the earthquake. By comparing records for earthquakes recorded on different
devices at different distances, Richter was able to create conversion tables for measuring magnitudes for any instrument at any distance.
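Richter’s procedure can be sketched as a calculation: magnitude is the base-10 logarithm of the recorded amplitude minus the logarithm of the amplitude a reference (magnitude 0) earthquake would produce at the same distance. The correction value below, 0.001 millimeter at the standard 100-kilometer distance, is the commonly cited zero-point of Richter’s scale, supplied here as background rather than drawn from this article.

```python
import math

def local_magnitude(amplitude_mm, log_a0):
    """Richter local magnitude: log10 of the measured trace amplitude
    minus the log amplitude of a magnitude-0 reference event at the
    same distance (the correction Richter tabulated empirically)."""
    return math.log10(amplitude_mm) - log_a0

# At 100 km, a magnitude-0 event was defined to produce a 0.001 mm
# trace, so the correction there is log10(0.001) = -3.
print(local_magnitude(10.0, -3.0))   # 4.0 for a 10 mm trace
print(local_magnitude(100.0, -3.0))  # 5.0: tenfold amplitude, one unit more
```

The distance-dependent correction term is what lets instruments at any distance agree on a single magnitude.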
Impact<br />
Richter had hoped to create a rough means of separating small,<br />
medium, <strong>and</strong> large earthquakes, but he found that the scale was capable<br />
of making much finer distinctions. Most magnitude estimates<br />
made with a variety of instruments at various distances from earthquakes<br />
agreed to within a few tenths of a magnitude. Richter formally<br />
published a description of his scale in January, 1935, in the<br />
Bulletin of the Seismological Society of America. Other systems of estimating<br />
magnitude had been attempted, notably that of Kiyoo Wadati,<br />
published in 1931, but Richter’s system proved to be the most workable<br />
scale yet devised <strong>and</strong> rapidly became the st<strong>and</strong>ard.<br />
Over the next few years, the scale was refined. One critical refinement<br />
was in the way seismic recordings were converted into magnitude.<br />
Earthquakes produce many types of waves, but it was not<br />
known which type should be the st<strong>and</strong>ard for magnitude. So-called<br />
surface waves travel along the surface of the earth. It is these waves<br />
that produce most of the damage in large earthquakes; therefore, it<br />
seemed logical to let these waves be the st<strong>and</strong>ard. Earthquakes deep<br />
within the earth, however, produce few surface waves. Magnitudes<br />
based on surface waves would therefore be too small for these earthquakes.<br />
Deep earthquakes produce mostly waves that travel through<br />
the solid body of the earth; these are the so-called body waves.<br />
It became apparent that two scales were needed: one based on<br />
surface waves <strong>and</strong> one on body waves. Richter <strong>and</strong> his colleague<br />
Beno Gutenberg developed scales for the two different types of<br />
waves, which are still in use. Magnitudes estimated from surface<br />
waves are symbolized by a capital M, <strong>and</strong> those based on body<br />
waves are denoted by a lowercase m.<br />
From a knowledge of Earth movements associated with seismic<br />
waves, Richter <strong>and</strong> Gutenberg succeeded in defining the energy<br />
output of an earthquake in measurements of magnitude. A magnitude<br />
6 earthquake releases about as much energy as a one-megaton<br />
nuclear explosion; a magnitude 0 earthquake releases about as<br />
much energy as a small car dropped off a two-story building.
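The magnitude-to-energy conversion described above can be sketched numerically with the Gutenberg-Richter energy relation. The constants used here are commonly quoted textbook values, not figures given in this article:

```python
import math

# Gutenberg-Richter energy relation in a commonly quoted form:
#   log10(E) = 1.5 * M + 4.8, with E the radiated seismic energy in joules.
def seismic_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole step in magnitude multiplies radiated energy by 10**1.5,
# about 31.6, so nine magnitude units span a billionfold range of energies.
ratio_per_unit = seismic_energy_joules(7.0) / seismic_energy_joules(6.0)
print(round(ratio_per_unit, 1))                                    # 31.6
print(round(seismic_energy_joules(6.0) / seismic_energy_joules(0.0)))  # 1000000000
```

This billionfold spread between magnitude 0 and magnitude 6 is what makes the article's comparison of a dropped car with a nuclear explosion plausible.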
See also Carbon dating; Geiger counter; Gyrocompass; Scanning tunneling microscope; Sonar.

Further Reading

Bates, Charles C., Thomas Frohock Gaskell, and Robert B. Rice. Geophysics in the Affairs of Man: A Personalized History of Exploration Geophysics and Its Allied Sciences of Seismology and Oceanography. New York: Pergamon Press, 1982.

Davison, Charles. The Founders of Seismology. 1927. Reprint. New York: Arno Press, 1978.

Howell, Benjamin F. An Introduction to Seismological Research: History and Development. New York: Cambridge University Press, 1990.
Robot (household)

The invention: The first available personal robot, the Hero 1 could speak, carry small objects in a gripping arm, and sense light, motion, sound, and time.
The people behind the invention:
Karel Capek (1890-1938), a Czech playwright
The Heath Company, an American electronics manufacturer
Personal Robots

In 1920, the Czech playwright Karel Capek introduced the term robot, which he used to refer to intelligent, humanoid automatons that were subservient to humans. Robots such as those described by Capek have not yet been developed; their closest counterparts are the nonintelligent automatons used by industry and by private individuals. Most industrial robots are heavy-duty, immobile machines designed to replace humans in routine, undesirable, monotonous jobs. Most often, they use programmed gripping arms to carry out tasks such as spray painting cars, assembling watches, and shearing sheep.

Modern personal robots are smaller, more mobile, less expensive models that serve mostly as toys or teaching tools. In some cases, they can be programmed to carry out activities such as walking dogs or serving mixed drinks. Usually, however, it takes more effort to program a robot to perform such activities than it does to do them oneself.

The Hero 1, which was first manufactured by the Heath Company in 1982, has been a very popular personal robot. Conceived as a toy and a teaching tool, the Hero 1 can be programmed to speak; to sense light, sound, motion, and time; and to carry small objects. The Hero 1 and other personal robots are often viewed as tools that will someday make it possible to produce intelligent robots.
Hero 1 Operation

The concept of artificial beings serving humanity has existed since antiquity (for example, it is found in Greek mythology). Such devices, which are now called robots, were first actualized, in a simple form, in the 1960’s. Then, in the mid-1970’s, the manufacture of personal robots began. One of the first personal robots was the Turtle, which was made by the Terrapin Company of Cambridge, Massachusetts. The Turtle was a toy that entertained owners via remote control, programmable motion, a beeper, and blinking displays. The Turtle was controlled by a computer to which it was linked by a cable.
Among the first significant personal robots was the Hero 1. This robot, which was usually sold in the form of a $1,000 kit that had to be assembled, is a squat, thirty-nine-pound mobile unit containing a head, a body, and a base. The head contains control boards, sensors, and a manipulator arm. The body houses control boards and related electronics, while the base contains a three-wheel drive unit that renders the robot mobile.

The Heath Company, which produced the Hero 1, viewed it as providing entertainment for and teaching people who are interested in robot applications. To facilitate these uses, the following abilities were incorporated into the Hero 1: independent operation via rechargeable batteries; motion- and distance/position-sensing capability; light, sound, and language use/recognition; a manipulator arm to carry out simple tasks; and easy programmability.

The Hero 1 is powered by four rechargeable batteries arranged as two 12-volt power supplies. Recharging is accomplished by means of a recharging box that is plugged into a home outlet. It takes six to eight hours to recharge depleted batteries, and complete charging is signaled by an indicator light. In the functioning robot, the power supplies provide 5-volt and 12-volt outputs to logic and motor circuits, respectively.
The Hero 1 moves by means of a drive mechanism in its base. The mechanism contains three wheels, two of which are unpowered drones. The third wheel, which is powered for forward and reverse motion, is connected to a stepper motor that makes possible directional steering. Also included in the powered wheel is a metal disk with spaced reflective slots that helps Hero 1 to identify its position. As the robot moves, light is used to count the slots, and the slot count is used to measure the distance the robot has traveled, and therefore its position.
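The slot-counting scheme above amounts to simple wheel odometry, which can be sketched as follows. The wheel diameter and slot count here are illustrative assumptions, not Hero 1 specifications:

```python
import math

# Illustrative wheel odometry: counting reflective slots on the drive wheel
# and converting the count into distance rolled. Parameters are made up.
WHEEL_DIAMETER_CM = 10.0
SLOTS_PER_REVOLUTION = 32

def distance_traveled_cm(slot_count):
    """Convert a reflective-slot count into distance rolled by the drive wheel."""
    circumference = math.pi * WHEEL_DIAMETER_CM
    return (slot_count / SLOTS_PER_REVOLUTION) * circumference

# Two full wheel revolutions register 64 slot counts:
print(round(distance_traveled_cm(64), 1))  # 62.8 (cm)
```

Because only the powered wheel carries the slotted disk, a scheme like this measures distance rolled, not true position; any wheel slip goes undetected.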
The robot’s “senses,” located in its head, consist of sound, light, and motion detectors as well as a phoneme synthesizer (phonemes are sounds, or units of speech). All these components are connected with the computer. The Hero 1 can detect sounds between 200 and 5,000 hertz. Its motion sensor detects all movement within a 15-foot radius. The phoneme synthesizer is capable of producing most words by using combinations of 64 phonemes. In addition, the robot keeps track of time by using an internal clock/calendar.

The Hero 1 can carry out various tasks by using a gripper that serves as a hand. The arm on which the gripper is located is connected to the back of the robot’s head. The head (and, therefore, the arm) can rotate 350 degrees horizontally. In addition, the arm contains a shoulder motor that allows it to rise or drop 150 degrees vertically, and its forearm can be either extended or retracted. Finally, a wrist motor allows the gripper’s tip to rotate by 350 degrees, and the two-fingered gripper can open up to a maximum width of 3.5 inches. The arm is not useful except as an educational tool, since its load-bearing capacity is only about a pound and its gripper can exert a force of only 6 ounces.
The computational capabilities of the robot are much more impressive than its physical capabilities. Programming is accomplished by means of a simple keypad located on the robot’s head, which provides an inexpensive, easy-to-use method of operator-computer communication. To make things simpler for users who want entertainment without having to learn robotics, a manual mode is included for programming. In the manual mode, a hand-held teaching pendant is connected to Hero 1 and used to program all the motion capabilities of the robot. The programming of sensory and language abilities, however, must be accomplished by using the keypad. Using the keypad and the various options that are available enables Hero 1 owners to program the robot to perform many interesting activities.
Consequences

The Hero 1 had a huge impact on robotics; thousands of people purchased it and used it for entertainment, study, and robot design. The Heath Company itself learned from the Hero 1 and later introduced an improved version, the Heathkit 2000. This personal robot, which costs between $2,000 and $4,500, has ten times the capabilities of the Hero 1, operates via a radio-controlled keyboard, contains a voice synthesizer that can be programmed in any language, and plugs itself in for recharging.
Other companies, including the Androbot Company in California, have manufactured personal robots that sell for up to $10,000. One such robot is the Androbot BOB (brains on board). It can guard a home, call the police, walk at 2.5 kilometers per hour, and sing. Androbot has also designed Topo, a personal robot that can serve drinks. Still other robots can sort laundry and/or vacuum-clean houses. Although modern robots lack intelligence and merely have the ability to move when they are directed to by a program or by remote control, there is no doubt that intelligent robots will be developed in the future.

See also Electric refrigerator; Microwave cooking; Robot (industrial); Vacuum cleaner; Washing machine.
Further Reading

Aleksander, Igor, and Piers Burnett. Reinventing Man: The Robot Becomes Reality. London: Kogan Page, 1983.

Asimov, Isaac. Robots: Machines in Man’s Image. New York: Harmony Books, 1985.

Bell, Trudy E. “Robots in the Home: Promises, Promises.” IEEE Spectrum 22, no. 5 (May, 1985).

Whalen, Bernie. “Upscale Consumers Adopt Home Robots, but Widespread Lifestyle Impact Is Years Away.” Marketing News 17, no. 24 (November 25, 1983).
Robot (industrial)

The invention: The first industrial robots, Unimates were designed to replace humans in undesirable, hazardous, and monotonous jobs.

The people behind the invention:
Karel Capek (1890-1938), a Czech playwright
George C. Devol, Jr. (1912- ), an American inventor
Joseph F. Engelberger (1925- ), an American entrepreneur
Robots, from Concept to Reality

The 1920 play Rossum’s Universal Robots, by Czech writer Karel Capek, introduced robots to the world. Capek’s humanoid robots—robot, a word created by Capek, essentially means slave—revolted and took over the world, which made the concept of robots somewhat frightening. The development of robots, which are now defined as machines that do work that would ordinarily be carried out by humans, has not yet advanced to the stage of being able to produce humanoid robots, however, much less robots capable of carrying out a revolt.

Most modern robots are found in industry, where they perform dangerous or monotonous tasks that previously were done by humans. The first industrial robots were the Unimates (short for “universal automaton”), which were derived from a robot design invented by George C. Devol and patented in 1954. The first Unimate prototypes, developed by Devol and Joseph F. Engelberger, were completed in 1962 by Unimation Incorporated and tested in industry. They were so successful that the company, located in Danbury, Connecticut, manufactured and sold thousands of Unimates to companies in the United States and abroad. Unimates are very versatile at performing routine industrial tasks and are easy to program and reprogram. The tasks they perform include various steps in automobile manufacturing, spray painting, and running lathes. The huge success of the Unimates led companies in other countries to produce their own industrial robots, and advancing technology has improved all industrial robots tremendously.
A New Industrial Revolution

Each of the first Unimate robots, which were priced at $25,000, was almost five feet tall and stood on a four-foot by five-foot base. It has often been said that a Unimate resembles the gun turret of a minitank, set atop a rectangular box. In operation, such a robot will swivel, swing, and/or dip and turn at the wrist of its hydraulically powered arm, which has a steel hand. The precisely articulated hand can pick up an egg without breaking it. At the same time, however, it is powerful enough to lift a hundred-pound weight.
The Unimate is a robotic jack of all trades: It can be programmed, in about an hour, to carry out a complex operation, after which it can have its memory erased and be reprogrammed in another hour to do something entirely different. In addition, programming a Unimate requires no special training. The programmer simply uses a teach-cable selector that allows the programmer to move the Unimate arm through the desired operation. This selector consists of a group of pushbutton control boxes, each of which is equipped with buttons in opposed pairs. Each button pair records the motion that will put a Unimate arm through one of five possible motions, in opposite directions. For example, pushing the correct buttons will record a motion in which the robot’s arm moves out to one side, aims upward, and angles appropriately to carry out the first portion of its intended job. If the Unimate overshoots, undershoots, or otherwise performs the function incorrectly, the activity can be fine-tuned with the buttons.
Once the desired action has been performed correctly, pressing a “record” button on the robot’s main control panel enters the operation into its computer memory. In this fashion, Unimates can be programmed to carry out complex actions that require as many as two hundred commands. Each command tells the Unimate to move its arm or hand in a given way by combining the following five motions: sliding the arm forward, swinging the arm horizontally, tilting the arm up or down, bending the wrist up or down, and swiveling the hand in a half-circle clockwise or counterclockwise.

Before pressing the “record” button on the Unimate’s control panel, the operator can also command the hand to grasp an item when in a particular position. Furthermore, the strength of the grasp can be controlled, as can the duration of time between each action. Finally, the Unimate can be instructed to start or stop another routine (such as operating a paint sprayer) at any point. Once the instructor is satisfied with the robot’s performance, pressing a “repeat continuous” control starts the Unimate working. The robot will stop repeating its program only when it is turned off.
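The record-and-repeat workflow described above can be sketched in miniature. The axis names and command format below are illustrative only; the actual Unimate stored its program electromechanically on a magnetic drum, not in software:

```python
# Illustrative teach-and-repeat cycle. AXES mirror the five motions named
# in the text; the data layout is an assumption for the sketch.
AXES = ("slide", "swing", "tilt", "wrist_bend", "hand_swivel")

program = []  # holds up to two hundred recorded commands

def record(position, grasp=False):
    """Simulate pressing the "record" button: store the current arm pose."""
    assert set(position) <= set(AXES), "unknown axis"
    if len(program) < 200:
        program.append({"position": dict(position), "grasp": grasp})

def repeat_continuous(cycles):
    """Replay the stored program in order; the real robot loops until powered off."""
    executed = []
    for _ in range(cycles):
        for command in program:
            executed.append(command)  # a real controller would drive motors here
    return executed

record({"slide": 10, "swing": 45}, grasp=True)
record({"tilt": -15, "wrist_bend": 30})
print(len(repeat_continuous(cycles=3)))  # 6: two commands replayed three times
```

The design point this captures is that the operator never writes code: teaching is just moving the arm and pressing "record," and playback is a dumb loop over stored poses.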
Inside the base of an original Unimate is a magnetic drum that contains its memory. The drum turns intermittently, moving each of two hundred long strips of metal beneath recording heads. This strip movement brings specific portions of each strip—dictated by particular motions—into position below the heads. When the “record” button is pressed after a motion is completed, the hand position is recorded as a series of numbers that tells the computer the complete hand position in each of the five permissible movement modes.

Once “repeat continuous” is pressed, the computer begins the command series by turning the drum appropriately, carrying out each memorized command in the chosen sequence. When the sequence ends, the computer begins again, and the process repeats until the robot is turned off. If a Unimate user wishes to change the function of such a robot, its drum can be erased and reprogrammed. Users can also remove programmed drums, store them for future use, and replace them with new drums.
Consequences

The first Unimates had a huge impact on industrial manufacturing. In time, different sizes of robots became available so that additional tasks could be performed, and the robots’ circuitry was improved. Because they have no eyes and cannot make judgments, Unimates are limited to relatively simple tasks that are coordinated by means of timed operations and simple computer interactions.

Most of the thousands of modern Unimates and their multinational cousins in industry are very similar to the original Unimates in terms of general capabilities, although they can now assemble watches and perform other delicate tasks that the original Unimates could not perform. The crude magnetic drums and computer controls have given way to silicon chips and microcomputers, which have made the robots more accurate and reliable. Some robots can even build other robots, and others can perform tasks such as mowing lawns and walking dogs.

Various improvements have been planned that will ultimately lead to some very interesting and advanced modifications. It is likely that highly sophisticated humanoid robots like those predicted by Karel Capek will be produced at some future time. One can only hope that these robots will not rebel against their human creators.
See also CAD/CAM; Robot (household); SAINT; Virtual machine.

Further Reading

Aleksander, Igor, and Piers Burnett. Reinventing Man: The Robot Becomes Reality. London: Kogan Page, 1983.

Asimov, Isaac. Robots: Machines in Man’s Image. New York: Harmony Books, 1985.

Chakravarty, Subrata N. “Springtime for an Ugly Duckling.” Forbes 127, no. 9 (April, 1981).

Hartley, J. “Robots Attack the Quiet World of Arc Welding.” Engineer 246, no. 6376 (June, 1978).

Lamb, W. G. Unimates at Work. Edited by C. W. Burckhardt. Basel, Switzerland: Birkhauser Verlag, 1975.

Tuttle, Howard C. “Robots’ Contribution: Faster Cycles, Better Quality.” Production 88, no. 5 (November, 1981).
Rocket

The invention: Liquid-fueled rockets developed by Robert H. Goddard made possible all later developments in modern rocketry, which in turn have made the exploration of space practical.

The person behind the invention:
Robert H. Goddard (1882-1945), an American physics professor
History in a Cabbage Patch

Just as the age of air travel began on an out-of-the-way shoreline at Kitty Hawk, North Carolina, with the Wright brothers’ airplane in 1903, so too the seemingly impossible dream of spaceflight began in a cabbage patch in Auburn, Massachusetts, with Robert H. Goddard’s launch of a liquid-fueled rocket on March 16, 1926. On that clear, cold day, with snow still on the ground, Goddard launched a three-meter-long rocket using liquid oxygen and gasoline. The flight lasted only about two and one-half seconds, during which the rocket rose 12 meters and landed about 56 meters away.
Although the launch was successful, the rocket’s design was clumsy. At first, Goddard had thought that a rocket would be steadier if the motor and nozzles were ahead of the fuel tanks, rather like a horse and buggy. After this first launch, it was clear that the motor needed to be placed at the rear of the rocket. Although Goddard had spent several years working on different pumps to control the flow of fuel to the motor, the first rocket had no pumps or electrical system. Henry Sachs, a Clark University machinist, launched the rocket by turning a valve, placing an alcohol stove beneath the motor, and dashing for safety. Goddard and his coworker Percy Roope watched the launch from behind an iron wall.
Despite its humble setting, this simple event changed the course of history. Many people saw in Goddard’s launch the possibilities for high-altitude research, space travel, and modern weaponry. Although Goddard invented and experimented mostly in private, others in the United States, the Soviet Union, and Germany quickly followed in his footsteps. The V-2 rockets used by Nazi Germany in World War II (1939-1945) included many of Goddard’s designs and ideas.
A Lifelong Interest

Goddard’s success was no accident. He had first become interested in rockets and space travel when he was seventeen, no doubt because of reading books such as H. G. Wells’s The War of the Worlds (1898) and Garrett P. Serviss’s Edison’s Conquest of Mars (1898). In 1907, he sent to several scientific journals a paper describing his ideas about traveling through a near vacuum. Although the essay was rejected, Goddard began thinking about liquid fuels in 1909. After finishing his doctorate in physics at Clark University and postdoctoral studies at Princeton University, he began to experiment.
One of the things that made Goddard so successful was his ability to combine things he had learned from chemistry, physics, and engineering into rocket design. More than anyone else at the time, Goddard had the ability to combine ideas with practice.

Goddard was convinced that the key for moving about in space was the English physicist and mathematician Sir Isaac Newton’s third law of motion (for every action there is an equal and opposite reaction). To prove this, he showed that a gun recoiled when it was fired in a vacuum. During World War I (1914-1918), Goddard moved to the Mount Wilson Observatory in California, where he investigated the use of black powder and smokeless powder as rocket fuel. Goddard’s work led to the invention of the bazooka, a weapon that was much used during World War II, as well as bombardment and antiaircraft rockets.
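The action-reaction principle Goddard relied on can be made quantitative with the standard ideal rocket equation, which follows from conservation of momentum. This sketch uses that textbook relation; the exhaust speed and masses are illustrative numbers, not Goddard's figures:

```python
import math

# Ideal (Tsiolkovsky) rocket equation: a rocket gains velocity by expelling
# mass backward, conserving momentum. delta-v = v_e * ln(m0 / mf).
def ideal_delta_v(exhaust_velocity, initial_mass, final_mass):
    return exhaust_velocity * math.log(initial_mass / final_mass)

# A rocket that burns half its mass with a 2,000 m/s exhaust speed:
print(round(ideal_delta_v(2000.0, initial_mass=10.0, final_mass=5.0)))  # 1386 (m/s)
```

Because the gain depends on exhaust speed, not on pushing against air, this is also why Goddard's vacuum recoil demonstration settled the question of whether rockets could work in space.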
After World War I, Goddard returned to Clark University. By 1920, mostly because of the experiments he had done during the war, he had decided that a liquid-fuel motor, with its smooth thrust, had the best chance of boosting a rocket into space. The most powerful fuel was hydrogen, but it is very difficult to handle. Oxygen had many advantages, but it was hard to find and extremely dangerous, since it boils at −183 degrees Celsius and explodes when it comes in contact with oils, greases, and flames. Other possible fuels were propane, ether, kerosene, or gasoline, but they all had serious disadvantages. Finally, Goddard found a local source of oxygen and was able to begin testing its thrust.

Robert H. Goddard

In 1920 The New York Times made fun of Robert Hutchings Goddard (1882-1945) for claiming that rockets could travel through outer space to the Moon. It was impossible, the newspaper’s editorial writer confidently asserted, because in outer space the engine would have no air to push against and so could not move the rocket. A sensitive, quiet man, the Clark University physics professor was stung by the public rebuke, all the more so because it displayed ignorance of basic physics. “Every vision is a joke,” Goddard said, somewhat bitterly, “until the first man accomplishes it.”

Goddard had already proved that a rocket could move in a vacuum, but he refrained from rebutting the Times article. In 1919 he had become the first American to describe mathematically the theory of rocket propulsion in his classic article “A Method of Reaching Extreme Altitudes,” and during World War I he had acquired experience designing solid-fuel rockets. However, even though he was the world’s leading expert on rocketry, he decided to seek privacy for his experiments. His successful launch of a liquid-fuel rocket in 1926, followed by new designs that reached ever higher altitudes, was a source of satisfaction, as were his 214 patents, but real recognition of his achievements did not come his way until World War II. In 1942 he was named director of research at the U.S. Navy’s Bureau of Aeronautics, for which he worked on jet-assisted takeoff rockets and variable-thrust liquid-propellant rockets. In 1943 the Curtiss-Wright Corporation hired him as a consulting engineer, and in 1945 he became director of the American Rocket Society.

The New York Times finally apologized to Goddard for its 1920 article on the morning after Apollo 11 took off for the Moon in 1969. However, Goddard, who battled tuberculosis most of his life, had died twenty-four years earlier.
Another problem was designing a fuel pump. Goddard and his assistant Nils Riffolt spent years on this problem before the historic test flight of March, 1926. In the end, because of pressure from the Smithsonian Institution and others who were funding his research, Goddard decided to do without a pump and use an inert gas to push the fuel into the explosion chamber.

Goddard worked without much funding between 1920 and 1925. Riffolt helped him greatly in designing a pump, and Goddard’s wife, Esther, photographed some of the tests and helped in other ways. Clark University had granted him some research money in 1923, but by 1925 money was in short supply, and the Smithsonian Institution did not seem willing to grant more. Goddard was convinced that his research would be taken seriously if he could show some serious results, so on March 16, 1926, he launched a rocket even though his design was not yet perfect. The success of that launch not only changed his career but also set the stage for rocketry experiments both in the United States and in Europe.
Impact

Goddard was described as being secretive and a loner. He never tried to cash in on his invention but continued his research during the next three years. On July 17, 1929, Goddard launched a rocket carrying a camera and instruments for measuring temperature and air pressure. The New York Times published a story about the noisy crash of this rocket and local officials’ concerns about public safety. The article also mentioned Goddard’s idea that a similar rocket might someday strike the Moon. When American aviation hero Charles A. Lindbergh learned of Goddard’s work, Lindbergh helped him to get grants from the Carnegie Institution and the Guggenheim Foundation.

By the middle of 1930, Goddard and a small group of assistants had established a full-time research program near Roswell, New Mexico. Now that money was not so much of a problem, Goddard began to make significant advances in almost every area of astronautics. In 1941, Goddard launched a rocket to a height of 2,700 meters. Flight stability was helped by a gyroscope, and he was finally able to use a fuel pump.
During the 1920’s and 1930’s, members of the American Rocket Society and the German Society for Space Travel continued their own research. When World War II began, rocket research became a high priority for the American and German governments. Germany’s success with the V-2 rocket was a direct result of Goddard’s research and inventions, but the United States did not benefit fully from Goddard’s work until after his death. Nevertheless, Goddard remains modern rocketry’s foremost pioneer—a scientist with vision, understanding, and practical skill.
See also Airplane; Artificial satellite; Communications satellite; Cruise missile; Hydrogen bomb; Stealth aircraft; Supersonic passenger plane; Turbojet; V-2 rocket; Weather satellite.

Further Reading

Alway, Peter. Retro Rockets: Experimental Rockets, 1926-1941. Ann Arbor, Mich.: Saturn Press, 1996.

Goddard, Robert Hutchings. The Autobiography of Robert Hutchings Goddard, Father of the Space Age: Early Years to 1927. Worcester, Mass.: A. J. St. Onge, 1966.

Lehman, Milton. Robert H. Goddard: Pioneer of Space Research. New York: Da Capo Press, 1988.
Rotary dial telephone

The invention: The first device allowing callers to connect their telephones to other parties without the aid of an operator, the rotary dial telephone preceded the touch-tone phone.

The people behind the invention:
Alexander Graham Bell (1847-1922), an American inventor
Antoine Barnay (1883-1945), a French engineer
Elisha Gray (1835-1901), an American inventor
Rotary Telephone Dials Make Phone Linkups Automatic
The telephone uses electricity to carry sound messages over long distances. When a call is made from a telephone set, the caller speaks into a telephone transmitter and the resultant sound waves are converted into electrical signals. The electrical signals are then transported over a telephone line to the receiver of a second telephone set that was designated when the call was initiated. This receiver reverses the process, converting the electrical signals into the sounds heard by the recipient of the call. The process continues as the parties talk to each other.
The telephone was invented in the 1870's and patented in 1876 by Alexander Graham Bell. Bell's patent application barely preceded an application submitted by his competitor Elisha Gray. After a heated patent battle between Bell and Gray, which Bell won, Bell founded the Bell Telephone Company, which later came to be called the American Telephone and Telegraph Company.
At first, the transmission of phone calls between callers and recipients was carried out manually, by switchboard operators. In 1923, however, automation began with Antoine Barnay's development of the rotary telephone dial. This dial caused the emission of variable electrical impulses that could be decoded automatically and used to link the telephone sets of callers and call recipients. In time, the rotary dial system gave way to push-button dialing and other more modern networking techniques.
Rotary-dial telephone. (Image Club Graphics)
Telephones, Switchboards, and Automation
The carbon transmitter, which is still used in many modern telephone sets, was the key to the development of the telephone by Alexander Graham Bell. This type of transmitter, and its more modern replacements, operates like an electric version of the human ear. When a person talks into the telephone set in a carbon transmitter-equipped telephone, the sound waves that are produced strike an electrically connected metal diaphragm and cause it to vibrate. The speed of vibration of this electric eardrum varies in accordance with the changes in air pressure caused by the changing tones of the speaker's voice.
Behind the diaphragm of a carbon transmitter is a cup filled with powdered carbon. As the vibrations cause the diaphragm to press against the carbon, the electrical signals (electrical currents of varying strength) pass out of the instrument through a telephone wire. Once the electrical signals reach the receiver of the phone being called, they activate electromagnets in the receiver that make a second diaphragm vibrate. This vibration converts the electrical signals into sounds that are very similar to the sounds made by the person who is speaking. Therefore, a telephone receiver may be viewed as an electric mouth.
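As a rough illustration of the transmitter described above (the circuit values here are invented, not taken from the article), the carbon microphone can be modeled as a pressure-dependent resistance in series with a battery:

```python
import math

def transmitter_current(pressure, v_battery=6.0, r_rest=300.0, k=50.0):
    """Toy model of the carbon transmitter: sound pressure compresses the
    carbon granules, lowering their resistance, so the line current rises
    and falls with the speaker's voice (Ohm's law: I = V / R)."""
    return v_battery / (r_rest - k * pressure)

# A pure tone vibrating the diaphragm produces a current that swings
# above and below the quiet-line value.
quiet = transmitter_current(0.0)
tone = [transmitter_current(0.5 * math.sin(2 * math.pi * t / 8)) for t in range(8)]
print(min(tone) < quiet < max(tone))   # -> True
```

The receiver simply runs the same mapping in reverse, which is why the article can call it an "electric mouth."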
In modern telephone systems, transportation of the electrical signals between any two phone sets requires the passage of those signals through vast telephone networks consisting of huge numbers of wires, radio systems, and other media. The linkup of any two phone sets was originally, however, accomplished manually, on a relatively small scale, by a switchboard operator who made the necessary connections by hand. In such switchboard systems, each telephone set in the network was associated with a jack connector in the switchboard. The operator observed all incoming calls, identified the phone sets for which they were intended, and then used wires to connect the appropriate jacks. At the end of the call, the jacks were disconnected.

Alexander Graham Bell

During the funeral for Alexander Graham Bell in 1922, telephone service throughout the United States stopped for one minute to honor him. To most people he was the inventor of the telephone. In fact, his genius ranged much further.

Bell was born in Edinburgh, Scotland, in 1847. His father, an elocutionist who invented a phonetic alphabet, and his mother, who was deaf, imbued him with deep curiosity, especially about sound. As a boy Bell became an exceptional pianist, and he produced his first invention, for cleaning wheat, at fourteen. After Edinburgh's Royal High School, he attended classes at Edinburgh University and University College, London, but at the age of twenty-three, battling tuberculosis, he left school to move with his parents to Ontario, Canada, to convalesce. Meanwhile, he worked on his idea for a telegraph capable of sending multiple messages at once. From it grew the basic concept for the telephone. He developed it while teaching Visible Speech at the Boston School for Deaf Mutes after 1871. Assisted by Thomas Watson, he succeeded in sending speech over a wire and was issued a patent for his device, among the most valuable ever granted, in 1876. His demonstration of the telephone later that year at Philadelphia's Centennial Exhibition and its subsequent development into a household appliance brought him wealth and fame.

He moved to Nova Scotia, Canada, and continued inventing. He created a photophone, tetrahedron modules for construction, and an airplane, the Silver Dart, which flew in 1909. Even though existing technology made them impracticable, some of his ideas anticipated computers and magnetic sound recording. His last patented invention, tested three years before his death, was a hydrofoil. Capable of reaching seventy-one miles per hour and freighting fourteen thousand pounds, the HD-4 was then the fastest watercraft in the world.

Bell also helped found the National Geographic Society in 1888 and became its president in 1898. He hired Gilbert Grosvenor to edit the society's famous magazine, National Geographic, and together they planned the format (breathtaking photography and vivid writing) that made it one of the world's best known magazines.
This cumbersome methodology limited the size and efficiency of telephone networks and invaded the privacy of callers. The development of automated switching systems soon solved these problems and made switchboard operators obsolete. It was here that Antoine Barnay's rotary dial was used, making possible an exchange that automatically linked the phone sets of callers and call recipients in the following way.
First, a caller lifted a telephone "off the hook," causing a switchhook, like those used in modern phones, to close the circuit that connected the telephone set to the telephone network. Immediately, a dial tone (still familiar to callers) came on to indicate that the automatic switching system could handle the planned call. When the phone dial was used, each number or letter that was dialed produced a fixed number of clicks. Every click indicated that an electrical pulse had been sent to the network's automatic switching system, causing switches to change position slightly. Immediately after a complete telephone number was dialed, the overall operation of the automatic switchers connected the two telephone sets. This connection was carried out much more quickly and accurately than had been possible when telephone operators at manual switchboards made the connection.
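The click counts described above follow a simple rule in standard rotary systems: each digit sends that many current interruptions down the line, except "0," which sends ten. A minimal sketch (the function names are ours, not period terminology):

```python
def pulses_for_digit(digit):
    """Each dialed digit produces that many electrical pulses; '0' produces ten."""
    n = int(digit)
    return 10 if n == 0 else n

def pulse_train(number):
    """The sequence of pulse bursts the exchange's switches would count
    for a full number, ignoring any formatting characters."""
    return [pulses_for_digit(d) for d in number if d.isdigit()]

print(pulse_train("555-0190"))   # -> [5, 5, 5, 10, 1, 9, 10]
```

Each burst steps the exchange's switches, which is why dialing a number full of zeros took noticeably longer than one full of ones.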
Impact
The telephone has become the world's most important communication device. Most adults use it between six and eight times per day, for personal and business calls. This widespread use has developed because huge changes have occurred in telephones and telephone networks. For example, automatic switching and the rotary dial system were only the beginning of changes in phone calling.
Touch-tone dialing replaced Barnay's electrical pulses with audio tones outside the frequency of human speech. This much-improved system can be used to send calls over much longer distances than was possible with the rotary dial system, and it also interacts well with both answering machines and computers.
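The touch-tone scheme (dual-tone multi-frequency, or DTMF) assigns each key one tone from a low "row" group and one from a high "column" group; the frequencies below are the standard ones, though the lookup code itself is only an illustration:

```python
# Standard DTMF keypad tones: each key press sends one low-group (row)
# and one high-group (column) frequency simultaneously.
ROW_HZ = [697, 770, 852, 941]
COL_HZ = [1209, 1336, 1477]       # a fourth column (1633 Hz) exists on some keypads
KEYPAD = ["123", "456", "789", "*0#"]

def dtmf_pair(key):
    """Return the (row_hz, col_hz) tone pair sent for a keypad symbol."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return (ROW_HZ[r], COL_HZ[row.index(key)])
    raise ValueError(f"not a keypad symbol: {key!r}")

print(dtmf_pair("5"))   # -> (770, 1336)
```

Because every key is a unique pair of tones, the exchange can decode a keypress reliably even over a noisy line, and the same tones can drive answering machines and computer menus.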
Another advance in modern telephoning is the use of radio transmission techniques in mobile phones, rendering telephone cords obsolete. The mobile phone communicates with base stations arranged in "cells" throughout the service area covered. As the user changes location, the phone link automatically moves from cell to cell in a cellular network.
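The cell-to-cell handoff can be pictured as repeatedly picking the base station with the strongest received signal; this toy model (invented coordinates and a simple inverse-square falloff, nothing like a real cellular protocol) is only meant to illustrate the idea:

```python
def strongest_station(phone_xy, stations):
    """Return the name of the base station heard best by the phone,
    using a toy inverse-square signal falloff with distance."""
    def signal(name):
        sx, sy = stations[name]
        d2 = (phone_xy[0] - sx) ** 2 + (phone_xy[1] - sy) ** 2
        return 1.0 / (d2 + 1e-9)     # avoid division by zero at the tower itself
    return max(stations, key=signal)

# Two cells ten units apart; as the caller moves, the link hands off.
cells = {"A": (0, 0), "B": (10, 0)}
print([strongest_station((x, 0), cells) for x in (2, 6, 8)])   # -> ['A', 'B', 'B']
```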
In addition, the use of microwave, laser, and fiber-optic technologies has helped to lengthen the distance over which phone calls can be transmitted. These technologies have also increased the number of messages that phone networks can handle simultaneously and have made it possible to send radio and television programs (such as cable television), scientific data (via modems), and written messages (via facsimile, or "fax," machines) over phone lines. Many other advances in telephone technology are expected as society's needs change and new technology is developed.
See also Cell phone; Internet; Long-distance telephone; Telephone switching; Touch-tone telephone.
Further Reading
Aitken, William. Who Invented the Telephone? London: Blackie and Son, 1939.
Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Evenson, A. Edward. The Telephone Patent Conspiracy of 1876: The Elisha Gray-Alexander Bell Controversy and Its Many Players. Jefferson, N.C.: McFarland, 2000.
Lisser, Eleena de. "Telecommunications: If You Have a Rotary Phone, Press 1: The Trials of Using the Old Apparatus." Wall Street Journal (July 28, 1994).
Mackay, James A. Alexander Graham Bell: A Life. New York: J. Wiley, 1997.
SAINT
The invention: Taking its name from the acronym for symbolic automatic integrator, SAINT is recognized as the first "expert system," a computer program designed to perform mental tasks requiring human expertise.

The person behind the invention:
James R. Slagle (1934-1994), an American computer scientist
The Advent of Artificial Intelligence
In 1944, the Harvard-IBM Mark I was completed. This was an electromechanical (that is, not fully electronic) digital computer that was operated by means of coding instructions punched into paper tape. The machine took about six seconds to perform a multiplication operation, twelve for a division operation. In the following year, 1945, the world's first fully electronic digital computer, the Electronic Numerical Integrator and Calculator (ENIAC), became operational. This machine, which was constructed at the University of Pennsylvania, was thirty meters long, three meters high, and one meter deep.
At the same time that these machines were being built, a similar machine was being constructed in the United Kingdom: the Automatic Computing Engine (ACE). A key figure in the British development was Alan Turing, a mathematician who had used computers to break German codes during World War II. After the war, Turing became interested in the area of "computing machinery and intelligence." He posed the question "Can machines think?" and set the following problem, which is known as the "Turing test." This test involves an interrogator who sits at a computer terminal and asks questions on the terminal about a subject for which he or she seeks intelligent answers. The interrogator does not know, however, whether the system is linked to a human or if the responses are, in fact, generated by a program that is acting intelligently. If the interrogator cannot tell the difference between the human operator and the computer system, then the system is said to have passed the Turing test and has exhibited intelligent behavior.
SAINT: An Expert System
In the attempt to answer Turing's question and create machines that could pass the Turing test, researchers investigated techniques for performing tasks that were considered to require expert levels of knowledge. These tasks included games such as checkers, chess, and poker. These games were chosen because the total possible number of variations in each game was very large. This led the researchers to several interesting questions for study. How do experts make a decision in a particular set of circumstances? How can a problem such as a game of chess be represented in terms of a computer program? Is it possible to know why the system chose a particular solution?
One researcher, James R. Slagle at the Massachusetts Institute of Technology, chose to develop a program that would be able to solve elementary symbolic integration problems (involving the manipulation of integrals in calculus) at the level of a good college freshman. The program that Slagle constructed was known as SAINT, an acronym for symbolic automatic integrator, and it is acknowledged as the first "expert system."
An expert system is a system that performs at the level of a human expert. An expert system has three basic components: a knowledge base, in which domain-specific information is held (for example, rules on how best to perform certain types of integration problems); an inference engine, which decides how to break down a given problem utilizing the rules in the knowledge base; and a human-computer interface that inputs data (in this case, the integral to be solved) and outputs the result of performing the integration. Another feature of expert systems is their ability to explain their reasoning.
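The three components above can be sketched in miniature. The rules and their wording here are invented for illustration and are far simpler than anything in SAINT itself:

```python
# Knowledge base: domain-specific rules, each a (condition, action) pair.
KNOWLEDGE_BASE = [
    ("power of x", "raise the exponent by one and divide by it"),
    ("sum of terms", "integrate each term separately"),
]

def inference_engine(problem):
    """Choose which rule applies to the problem, keeping a trace of the
    reasoning so the system can explain itself."""
    for condition, action in KNOWLEDGE_BASE:
        if condition in problem:
            return action, f"matched rule {condition!r}"
    return "no advice", "no rule matched"

def interface(problem):
    """Human-computer interface: take the problem in, report the answer
    together with the system's explanation."""
    action, trace = inference_engine(problem)
    return f"{action} [{trace}]"

print(interface("integrate this sum of terms"))
```

Note that the explanation comes back with the answer: the trace of which rules fired is exactly the "ability to explain their reasoning" the article mentions.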
The integration problems that could be solved by SAINT were in the form of elementary integral functions. SAINT could perform indefinite integration (also called "antidifferentiation") on these functions. In addition, it was capable of performing definite and indefinite integration on trivial extensions of indefinite integration. SAINT was tested on a set of eighty-six problems, fifty-four of which were drawn from the MIT final examinations in freshman calculus; it succeeded in solving all but two. Slagle added more rules to the knowledge base so that problems of the type it encountered but could not solve could be solved in the future.
The power of the SAINT system was, in part, based on its ability to perform integration through the adoption of a "heuristic" processing system. A heuristic method is one that helps in discovering a problem's solution by making plausible, though fallible, guesses about the best strategy to apply next to the current problem situation. A heuristic is a rule of thumb that makes it possible to take shortcuts in reaching a solution, rather than having to go through every step in a solution path. These heuristic rules are contained in the knowledge base. SAINT was written in the LISP programming language and ran on an IBM 7090 computer. The program and research were Slagle's doctoral dissertation.
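A toy re-creation of the idea, in Python rather than LISP and with rules far cruder than Slagle's: expressions are nested tuples, and each rule in the table reduces the integral to simpler subproblems, so the program never searches every possible step:

```python
def integrate(expr):
    """Apply the first transformation rule that matches the expression's
    top-level form, recursing into the subproblems it generates.
    Forms: ("pow", n) is x**n; ("sum", a, b); ("const", c, f) is c*f."""
    op = expr[0]
    if op == "pow":              # rule: integral of x^n is x^(n+1)/(n+1)
        n = expr[1]
        return ("const", 1 / (n + 1), ("pow", n + 1))
    if op == "sum":              # rule: integrate term by term
        return ("sum", integrate(expr[1]), integrate(expr[2]))
    if op == "const":            # rule: constants move outside the integral
        return ("const", expr[1], integrate(expr[2]))
    raise ValueError(f"no rule for {op!r}")

# integral of (x^1 + x^3) dx  ->  x^2/2 + x^4/4
print(integrate(("sum", ("pow", 1), ("pow", 3))))
```

SAINT's real heuristics were guesses about which transformation to try next when several could apply; here the table is so small that the first match always suffices.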
Consequences

Basic structure of an expert system: the user interface feeds a control mechanism, which searches and resolves against a global database (facts) and a knowledge base (rules).
The SAINT system that Slagle developed was significant for several reasons: First, it was the first serious attempt at producing a program that could come close to passing the Turing test. Second, it brought the idea of representing an expert's knowledge in a computer program together with strategies for solving complex and difficult problems in an area that previously required human expertise. Third, it identified the area of knowledge-based systems and showed that computers could feasibly be used for programs that did not relate to business data processing. Fourth, the SAINT system showed how the use of heuristic rules and information could lead to the solution of problems that could not have been solved previously because of the amount of time needed to calculate a solution. SAINT's major impact was in outlining the uses of these techniques, which led to continued research in the subfield of artificial intelligence that became known as expert systems.

James R. Slagle

James R. Slagle was born in 1934 in Brooklyn, New York, and attended nearby St. John's University. He majored in mathematics and graduated with a bachelor of science degree in 1955, also winning the highest scholastic average award. While earning his master's degree (1957) and doctorate (1961) at the Massachusetts Institute of Technology (MIT), he was a staff mathematician in the university's Lincoln Laboratory.

Slagle taught in MIT's electrical engineering department part-time after completing his dissertation on the first expert computer system and then moved to Lawrence Livermore National Laboratory near Berkeley, California. While working there he also taught at the University of California. From 1967 until 1974 he was an adjunct member of the computer science faculty of Johns Hopkins University in Baltimore, Maryland, and then was appointed chief of the computer science laboratory at the Naval Research Laboratory (NRL) in Washington, D.C., receiving the Outstanding Handicapped Federal Employee of the Year Award in 1979. In 1984 he was made a special assistant in the Navy Center for Applied Research in Artificial Intelligence at NRL but left in 1984 to become Distinguished Professor of Computer Science at the University of Minnesota.

In these various positions Slagle helped mature the fledgling discipline of artificial intelligence, publishing the influential book Artificial Intelligence in 1971. He developed an expert system designed to set up other expert systems, A Generalized Network-based Expert System Shell, or AGNESS. He also worked on parallel expert systems, artificial neural networks, time-based logic, and methods for uncovering causal knowledge in large databases. He died in 1994.
See also BASIC programming language; CAD/CAM; COBOL computer language; Differential analyzer; FORTRAN programming language; Robot (industrial).
Further Reading
Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. New York: Basic Books, 1996.
Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.
Rojas, Raúl, ed. Encyclopedia of Computers and Computer History. London: Fitzroy Dearborn, 2001.
Salvarsan
The invention: The first successful chemotherapeutic for the treatment of syphilis.

The people behind the invention:
Paul Ehrlich (1854-1915), a German research physician and chemist
Wilhelm von Waldeyer (1836-1921), a German anatomist
Friedrich von Frerichs (1819-1885), a German physician and professor
Sahachiro Hata (1872-1938), a Japanese physician and bacteriologist
Fritz Schaudinn (1871-1906), a German zoologist
The Great Pox
The ravages of syphilis on humankind are seldom discussed openly. A disease that struck all varieties of people and was transmitted by direct and usually sexual contact, syphilis was both feared and reviled. Many segments of society across all national boundaries were secure in their belief that syphilis was divine punishment of the wicked for their evil ways.
It was not until 1903 that bacteriologists Élie Metchnikoff and Pierre-Paul-Émile Roux demonstrated the transmittal of syphilis to apes, ending the long-held belief that syphilis was exclusively a human disease. The disease destroyed families, careers, and lives, driving its infected victims mad, destroying the brain, or destroying the cardiovascular system. It was methodical and slow, but in every case, it killed with singular precision. There was no hope of a safe and effective cure prior to the discovery of Salvarsan.
Prior to 1910, conventional treatment consisted principally of mercury or, later, potassium iodide. Mercury, however, administered in large doses, led to severe ulcerations of the tongue, jaws, and palate. Swelling of the gums and loosening of the teeth resulted. Dribbling saliva and the attending fetid odor also occurred. These side effects of mercury treatment were so severe that many preferred to suffer the disease to the end rather than undergo the standard cure. About 1906, Metchnikoff and Roux demonstrated that mercurial ointments, applied very early, at the first appearance of the primary lesion, were effective.
Once the spirochete-type bacteria invaded the bloodstream and tissues, the infected person experienced symptoms of varying nature and degree: high fever, intense headaches, and excruciating pain. The patient's skin often erupted in pustular lesions similar in appearance to smallpox. It was the distinguishing feature of these pustular lesions that gave syphilis its other name: the "Great Pox." Death brought the only relief then available.
Poison Dyes
Paul Ehrlich became fascinated by the reactions of dyes with biological cells and tissues while a student at the University of Strasbourg under Wilhelm von Waldeyer. It was von Waldeyer who sparked Ehrlich's interest in the chemical viewpoint of medicine. Thus, as a student, Ehrlich spent hours at the laboratory experimenting with different dyes on various tissues. In 1878, he published a book that detailed the discriminate staining of cells and cellular components by various dyes.
Ehrlich joined Friedrich von Frerichs at the Charité Hospital in Berlin, where Frerichs allowed Ehrlich to do as much research as he wanted. Ehrlich began studying atoxyl in 1908, the year he won, jointly with Metchnikoff, the Nobel Prize in Physiology or Medicine for his work on immunity. Atoxyl was effective against trypanosomes (parasites responsible for a variety of infections, notably sleeping sickness) but also imposed serious side effects upon the patient, not the least of which was blindness. It was Ehrlich's study of atoxyl, and several hundred derivatives sought as alternatives to atoxyl in trypanosome treatment, that led to the development of derivative 606 (Salvarsan). Although compound 606 was the first chemotherapeutic to be used effectively against syphilis, it was discontinued as an atoxyl alternative and shelved as useless for five years.
The wonder drug Salvarsan was often called "Ehrlich's silver bullet," after its developer, Paul Ehrlich. (Library of Congress)

The discovery and development of compound 606 was enhanced by two critical events. First, the Germans Fritz Schaudinn and Erich Hoffmann discovered that syphilis is a bacterially caused disease. The causative microorganism is a spirochete so frail and gossameric in substance that it is nearly impossible to detect by casual microscopic examination; Schaudinn chanced upon it one day in March, 1905. This discovery led, in turn, to German bacteriologist August von Wassermann's development of the now famous test for syphilis: the Wassermann test. Second, a Japanese bacteriologist, Sahachiro Hata, came to Frankfurt in 1909 to study syphilis with Ehrlich. Hata had studied syphilis in rabbits in Japan. Hata's assignment was to test every atoxyl derivative ever developed under Ehrlich for its efficacy in syphilis treatment. After hundreds of tests and clinical trials, Ehrlich and Hata announced Salvarsan as a "magic bullet" that could cure syphilis, at the April, 1910, Congress of Internal Medicine in Wiesbaden, Germany.
The announcement was electrifying. The remedy was immediately and widely sought, but it was not without its problems. A few deaths resulted from its use, and it was not safe for treatment of the gravely ill. Some of the difficulties inherent in Salvarsan were overcome by the development of neosalvarsan in 1912 and sodium salvarsan in 1913. Although Ehrlich achieved much, he fell short of his own assigned goal: a chemotherapeutic that would cure in one injection.
Impact
The significance of the development of Salvarsan as an antisyphilitic chemotherapeutic agent cannot be overstated. Syphilis at that time was as frightening and horrifying as leprosy and was a virtual sentence of slow, torturous death. Salvarsan was such a significant development that Ehrlich was recommended for a 1912 and 1913 Nobel Prize for his work in chemotherapy.

It was several decades before any further significant advances in "wonder drugs" occurred, namely, the discovery of prontosil in 1932 and its first clinical use in 1935. On the heels of prontosil, a sulfa drug, came other sulfa drugs. The sulfa drugs would remain supreme in the fight against bacterial infection until the antibiotics displaced them; the first antibiotic, penicillin, was discovered in 1928 but was not clinically recognized until World War II (1939-1945). With the discovery of streptomycin in 1943 and Aureomycin in 1944, the assault against bacteria was finally on a sound basis. Medicine possessed an arsenal with which to combat the pathogenic microbes that for centuries before had visited misery and death upon humankind.
See also Abortion pill; Antibacterial drugs; Birth control pill; Penicillin; Reserpine; Syphilis test; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.
Further Reading
Bäumler, Ernst. Paul Ehrlich: Scientist for Life. New York: Holmes & Meier, 1984.
Leyden, John G. "From Nobel Prize to Courthouse Battle: Paul Ehrlich's 'Wonder Drug' for Syphilis Won Him Acclaim but Also Led Critics to Hound Him." Washington Post (July 27, 1999).
Quétel, Claude. History of Syphilis. Baltimore: Johns Hopkins University Press, 1992.
Scanning tunneling microscope
The invention: A major advance on the field ion microscope, the scanning tunneling microscope has pointed toward new directions in the visualization and control of matter at the atomic level.

The people behind the invention:
Gerd Binnig (1947- ), a West German physicist who was a cowinner of the 1986 Nobel Prize in Physics
Heinrich Rohrer (1933- ), a Swiss physicist who was a cowinner of the 1986 Nobel Prize in Physics
Ernst Ruska (1906-1988), a West German engineer who was a cowinner of the 1986 Nobel Prize in Physics
Antoni van Leeuwenhoek (1632-1723), a Dutch naturalist
The Limit of Light
The field of microscopy began at the end of the seventeenth century, when Antoni van Leeuwenhoek developed the first optical microscope. In this type of microscope, a magnified image of a sample is obtained by directing light onto it and then taking the light through a lens system. Van Leeuwenhoek's microscope allowed him to observe the existence of life on a scale that is invisible to the naked eye. Since then, developments in the optical microscope have revealed the existence of single cells, pathogenic agents, and bacteria. There is a limit, however, to the resolving power of optical microscopes. Known as "Abbe's barrier," after the German physicist and lens maker Ernst Abbe, this limit means that objects smaller than about 400 nanometers (a nanometer is a millionth of a millimeter) cannot be viewed by conventional microscopes.
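Abbe's barrier is conventionally written d = λ/(2·NA), where NA is the objective's numerical aperture. The quick calculation below (the NA value is chosen for illustration) shows why the limit falls at a few hundred nanometers for visible light:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation for a light microscope: d = lambda / (2 NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~550 nm) through a dry objective with NA of about 0.7:
print(round(abbe_limit_nm(550, 0.7)))   # -> 393, i.e. a few hundred nanometers
```

Shorter wavelengths or higher apertures push the limit down somewhat, but nothing using visible light can get near atomic dimensions, which is what motivated the electron-based instruments described next.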
In 1925, the physicist Louis de Broglie predicted that electrons<br />
would exhibit wave behavior as well as particle behavior. This prediction<br />
was confirmed by Clinton J. Davisson <strong>and</strong> Lester H. Germer<br />
of Bell Telephone Laboratories in 1927. It was found that highenergy<br />
electrons have shorter wavelengths than low-energy electrons<br />
<strong>and</strong> that electrons with sufficient energies exhibit wave-
lengths comparable to the diameter of the atom. In 1927, Hans<br />
Busch showed in a mathematical analysis that current-carrying<br />
coils behave like electron lenses <strong>and</strong> that they obey the same lens<br />
equation that governs optical lenses. Using these findings, Ernst<br />
Ruska developed the electron microscope in the early 1930’s.<br />
By 1944, the German corporation of Siemens <strong>and</strong> Halske had<br />
manufactured electron microscopes with a resolution of 7 nanometers;<br />
modern instruments are capable of resolving objects as<br />
small as 0.5 nanometer. This development made it possible to view<br />
structures as small as a few atoms across as well as large atoms and<br />
large molecules.<br />
The electron beam used in this type of microscope limits the usefulness<br />
of the device. First, to avoid the scattering of the electrons,<br />
the samples must be put in a vacuum, which limits the applicability<br />
of the microscope to samples that can sustain such an environment.<br />
Most important, some fragile samples, such as organic molecules,<br />
are inevitably destroyed by the high-energy beams required for<br />
high resolutions.<br />
Viewing Atoms<br />
Scanning tunneling microscope / 679<br />
From 1936 to 1955, Erwin Wilhelm Müller developed the field ion<br />
microscope (FIM), which used an extremely sharp needle to hold the<br />
sample. This was the first microscope to make possible the direct<br />
viewing of atomic structures, but it was limited to samples capable of<br />
sustaining the high electric fields necessary for its operation.<br />
In the early 1970’s, Russell D. Young and Clayton Teague of the<br />
National Bureau of Standards (NBS) developed the “topografiner,”<br />
a new kind of FIM. In this microscope, the sample is placed at a large<br />
distance from the tip of the needle. The tip is scanned across the surface<br />
of the sample with a precision of about a nanometer. The precision<br />
in the three-dimensional motion of the tip was obtained by using<br />
three legs made of piezoelectric crystals. These materials change<br />
shape in a reproducible manner when subjected to a voltage. The<br />
extent of expansion or contraction of the crystal depends on the<br />
amount of voltage that is applied. Thus, the operator can control the<br />
motion of the probe by varying the voltage acting on the three legs.<br />
The resolution of the topografiner is limited by the size of the probe.
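The voltage-to-motion control described above can be sketched as follows. The nanometers-per-volt calibration constant is a hypothetical figure for illustration, not a value from the article:

```python
NM_PER_VOLT = 3.0  # hypothetical piezo calibration: expansion per volt applied

def tip_position_nm(vx, vy, vz):
    """Map the voltages applied to the three piezoelectric legs to a tip
    displacement; to first order, piezo expansion is proportional to
    the applied voltage, which is what makes nanometer-scale
    positioning possible."""
    return (vx * NM_PER_VOLT, vy * NM_PER_VOLT, vz * NM_PER_VOLT)

# One volt on a single leg moves the tip a few nanometers along that axis:
print(tip_position_nm(1.0, 0.0, 0.0))   # (3.0, 0.0, 0.0)
```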
Gerd Binnig and Heinrich Rohrer<br />
Both Gerd Binnig and Heinrich Rohrer believe an early and<br />
pleasurable introduction to teamwork led to their later success<br />
in inventing the scanning tunneling microscope, for which they<br />
shared the 1986 Nobel Prize in Physics with Ernst Ruska.<br />
Binnig was born in Frankfurt, Germany, in 1947. He acquired<br />
an early interest in physics but was always deeply influenced<br />
by classical music, introduced to him by his mother, and<br />
the rock music that his younger brother played for him. Binnig<br />
played in rock bands as a teenager and learned to enjoy the creative<br />
interplay of teamwork. At J. W. Goethe University in<br />
Frankfurt he earned a bachelor’s degree (1973) and doctorate<br />
(1978) in physics and then took a position at International Business<br />
Machines’ Zurich Research Laboratory. There he recaptured<br />
the pleasures of working with a talented team after joining<br />
Rohrer in research.<br />
Rohrer had been at the Zurich facility since just after it<br />
opened in 1963. He was born in Buch, Switzerland, in 1933, and<br />
educated at the Swiss Federal Institute of Technology in Zurich,<br />
where he completed his doctorate in 1960. After post-doctoral<br />
work at Rutgers University, he joined the IBM research team; he<br />
describes his time there as among the most enjoyable stretches of<br />
his career.<br />
In addition to the Nobel Prize, the pair also received the German<br />
Physics Prize, Otto Klung Prize, Hewlett Packard Prize,<br />
and King Faisal Prize. Rohrer became an IBM Fellow in 1986<br />
and was selected to manage the physical sciences department at<br />
the Zurich Research Laboratory. He retired from IBM in July<br />
1997. Binnig became an IBM Fellow in 1987.<br />
The idea for the scanning tunneling microscope (STM) arose<br />
when Heinrich Rohrer of the International Business Machines (IBM)<br />
Corporation’s Zurich research laboratory met Gerd Binnig in Frankfurt<br />
in 1978. The STM is very similar to the topografiner. In the STM,<br />
however, the tip is kept at a height of less than a nanometer away<br />
from the surface, and the voltage that is applied between the specimen<br />
and the probe is low. Under these conditions, the electron<br />
cloud of atoms at the end of the tip overlaps with the electron cloud<br />
of atoms at the surface of the specimen. This overlapping results in a
measurable electrical current flowing through the vacuum or insulating<br />
material between the tip and the sample. When the<br />
probe is moved across the surface and the voltage between the<br />
probe and sample is kept constant, the change in the distance between<br />
the probe and the surface (caused by surface irregularities)<br />
results in a change of the tunneling current.<br />
Two methods are used to translate these changes into an image of<br />
the surface. The first method involves changing the height of the<br />
probe to keep the tunneling current constant; the voltage used to<br />
change the height is translated by a computer into an image of the<br />
surface. The second method scans the probe at a constant height<br />
above the sample and records the resulting variations in the<br />
tunneling current; these current variations are translated into<br />
the image of the surface. The main limitation<br />
of the technique is that it is applicable only to conducting samples<br />
or to samples with some surface treatment.<br />
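The first (constant-current) method amounts to a feedback loop: the tunneling current falls off exponentially with the gap, and the tip height is servoed until the current matches a setpoint. The toy simulation below illustrates the principle; the decay constant, setpoint, and feedback gain are assumed values for the sketch, not figures from the article:

```python
import math

KAPPA = 10.0   # assumed decay constant, 1/nm: current drops sharply with gap
I0 = 1.0       # current extrapolated to zero gap, arbitrary units

def tunnel_current(gap_nm):
    """Tunneling current falls off exponentially with the tip-sample gap."""
    return I0 * math.exp(-2 * KAPPA * gap_nm)

def constant_current_scan(surface_nm, setpoint=0.01, gain=0.2, steps=200):
    """Constant-current mode: at each point, nudge the tip height until the
    current equals the setpoint; the tip height then traces the topography."""
    target_gap = -math.log(setpoint / I0) / (2 * KAPPA)
    tip_z = surface_nm[0] + target_gap
    image = []
    for height in surface_nm:
        for _ in range(steps):
            error = tunnel_current(tip_z - height) - setpoint
            tip_z += gain * error   # too much current means too close: retract
        image.append(tip_z)
    return image
```

The recorded tip heights reproduce the surface profile shifted up by a constant gap, which is why the control voltage driving the height can be translated directly into a picture of the surface.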
Consequences<br />
In October, 1989, the STM was successfully used in the manipulation<br />
of matter at the atomic level. By letting the probe sink into the<br />
surface of a metal-oxide crystal, researchers at Rutgers University<br />
were able to dig a square hole about 250 atoms across and 10 atoms<br />
deep. A more impressive feat was reported in the April 5, 1990, issue<br />
of Nature; Donald M. Eigler and Erhard K. Schweizer of IBM’s Almaden Research<br />
Center spelled out their employer’s three-letter acronym using<br />
thirty-five atoms of xenon. This ability to move and place individual<br />
atoms precisely raises several possibilities, including the<br />
creation of custom-made molecules, atomic-scale data storage, and<br />
ultrasmall electrical logic circuits.<br />
The success of the STM has led to the development of several<br />
new microscopes that are designed to study other features of sample<br />
surfaces. Although they all use the scanning probe technique to<br />
make measurements, they use different techniques for the actual detection.<br />
The most popular of these new devices is the atomic force<br />
microscope (AFM). This device measures the tiny electric forces that<br />
exist between the electrons of the probe and the electrons of the<br />
sample without the need for electron flow, which makes the tech-
nique particularly useful in imaging nonconducting surfaces. Other<br />
scanned probe microscopes use physical properties such as temperature<br />
and magnetism to probe the surfaces.<br />
See also Cyclotron; Electron microscope; Ion field microscope;<br />
Mass spectrograph; Neutrino detector; Sonar; Synchrocyclotron;<br />
Tevatron accelerator; Ultramicroscope.<br />
Further Reading<br />
Morris, Michael D. Microscopic and Spectroscopic Imaging of the Chemical<br />
State. New York: M. Dekker, 1993.<br />
Wiesendanger, Roland. Scanning Probe Microscopy: Analytical Methods.<br />
New York: Springer-Verlag, 1998.<br />
_____, and Hans-Joachim Güntherodt. Scanning Tunneling Microscopy<br />
II: Further Applications and Related Scanning Techniques. 2d ed.<br />
New York: Springer, 1995.<br />
_____. Scanning Tunneling Microscopy III: Theory of STM and Related<br />
Scanning Probe Methods. 2d ed. New York: Springer, 1996.
Silicones<br />
The invention: Synthetic polymers characterized by lubricity, extreme<br />
water repellency, thermal stability, and inertness that are<br />
widely used in lubricants, protective coatings, paints, adhesives,<br />
electrical insulation, and prosthetic replacements for body parts.<br />
The people behind the invention:<br />
Eugene G. Rochow (1909- ), an American research chemist<br />
Frederic Stanley Kipping (1863-1949), an English chemist and<br />
professor<br />
James Franklin Hyde (1903- ), an American organic chemist<br />
Synthesizing Silicones<br />
Frederic Stanley Kipping, in the first four decades of the twentieth<br />
century, made an extensive study of the organic (carbon-based)<br />
chemistry of the element silicon. He had a distinguished academic<br />
career and summarized his silicon work in a lecture in 1937 (“Organic<br />
Derivatives of Silicon”). Since Kipping did not have available<br />
any naturally occurring compounds with chemical bonds between<br />
carbon and silicon atoms (organosilicon compounds), it was necessary<br />
for him to find methods of establishing such bonds. The basic<br />
method involved replacing atoms in naturally occurring silicon<br />
compounds with carbon atoms from organic compounds.<br />
While Kipping was probably the first to prepare a silicone and was<br />
certainly the first to use the term silicone, he did not pursue the commercial<br />
possibilities of silicones. Yet his careful experimental work was<br />
a valuable starting point for all subsequent workers in organosilicon<br />
chemistry, including those who later developed the silicone industry.<br />
On May 10, 1940, chemist Eugene G. Rochow of the General<br />
Electric (GE) Company’s corporate research laboratory in<br />
Schenectady, New York, discovered that methyl chloride gas,<br />
passed over a heated mixture of elemental silicon and copper, reacted<br />
to form compounds with silicon-carbon bonds. Kipping<br />
had shown that these silicon compounds react with water to form<br />
silicones.
684 / Silicones<br />
The importance of Rochow’s discovery was that it opened the<br />
way to a continuous process that did not consume expensive metals,<br />
such as magnesium, or flammable ether solvents, such as those<br />
used by Kipping <strong>and</strong> other researchers. The copper acts as a catalyst,<br />
and the desired silicon compounds are formed with only minor<br />
quantities of by-products. This “direct synthesis,” as it came to be<br />
called, is now done commercially on a large scale.<br />
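In outline, the chemistry can be written as follows. This is an idealized sketch: the industrial product stream is actually a mixture of methylchlorosilanes, of which the dimethyl compound shown is the most useful fraction.

```latex
% Direct synthesis: methyl chloride over silicon with a copper catalyst
\mathrm{Si} + 2\,\mathrm{CH_3Cl}
  \xrightarrow{\;\mathrm{Cu,\ heat}\;} (\mathrm{CH_3})_2\mathrm{SiCl_2}

% Hydrolysis of the chlorosilane then gives a silicone, as Kipping had shown
n\,(\mathrm{CH_3})_2\mathrm{SiCl_2} + n\,\mathrm{H_2O}
  \longrightarrow \left[(\mathrm{CH_3})_2\mathrm{SiO}\right]_n + 2n\,\mathrm{HCl}
```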
Silicone Structure<br />
Silicones are examples of what chemists call polymers. Basically, a<br />
polymer is a large molecule made up of many smaller molecules<br />
that are linked together. At the molecular level, silicones consist of<br />
long, repeating chains of atoms. In this molecular characteristic, silicones<br />
resemble plastics and rubber.<br />
Silicone molecules have a chain composed of alternating silicon and<br />
oxygen atoms. Each silicon atom bears two organic groups as substituents,<br />
while the oxygen atoms serve to link the silicon atoms into a<br />
chain. The silicon-oxygen backbone of the silicones is responsible for<br />
their unique <strong>and</strong> useful properties, such as the ability of a silicone oil<br />
to remain liquid over an extremely broad temperature range and to<br />
resist oxidative and thermal breakdown at high temperatures.<br />
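Since each dimethylsiloxane repeat unit has a fixed molar mass (about 74 grams per mole, a standard chemistry figure rather than one given in the article), chain length and molecular weight are two views of the same quantity. A rough sketch, ignoring the chain-end groups:

```python
REPEAT_MASS = 74.2  # approximate molar mass of one (CH3)2SiO unit, g/mol

def repeat_units(molar_mass):
    """Rough number of silicon-oxygen repeat units in a linear silicone
    of the given molar mass (end groups ignored; fine for long chains)."""
    return round(molar_mass / REPEAT_MASS)

# A viscous silicone oil of 100,000 g/mol carries over a thousand
# alternating silicon-oxygen pairs in its backbone:
print(repeat_units(100_000))   # 1348
```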
A fundamental scientific consideration with silicone, as with any<br />
polymer, is to obtain the desired physical and chemical properties in<br />
a product by closely controlling its chemical structure and molecular<br />
weight. Oily silicones with thousands of alternating silicon and<br />
oxygen atoms have been prepared. The average length of the molecular<br />
chain determines the flow characteristics (viscosity) of the oil.<br />
In samples with very long chains, rubber-like elasticity can be<br />
achieved by cross-linking the silicone chains in a controlled manner<br />
and adding a filler such as silica. High degrees of cross-linking<br />
could produce a hard, intractable material instead of rubber.<br />
The action of water on the compounds produced from Rochow’s<br />
direct synthesis is a rapid method of obtaining silicones, but does<br />
not provide much control of the molecular weight. Further development<br />
work at GE and at the Dow-Corning company showed that<br />
the best procedure for controlled formation of silicone polymers involved<br />
treating the crude silicones with acid to produce a mixture
Eugene G. Rochow<br />
Eugene George Rochow was born in 1909 <strong>and</strong> grew up in the<br />
rural New Jersey town of Maplewood. There his father, who<br />
worked in the tanning industry, and his big brother maintained<br />
a small attic laboratory. They experimented with electricity, radio—Eugene<br />
put together his own crystal set in an oatmeal<br />
box—and chemistry.<br />
Rochow followed his brother to Cornell University in 1927.<br />
The Great Depression began during his junior year, and although<br />
he had to take jobs as a lecture assistant to get by, he managed<br />
to earn his bachelor’s degree in chemistry in 1931 and his<br />
doctorate in 1935. Luck came his way in the extremely tight job<br />
market: General Electric (GE) hired him for his expertise in inorganic<br />
chemistry.<br />
In 1938 the automobile industry, among other manufacturers,<br />
had a growing need for high-temperature-resistant insulators.<br />
Scientists at Corning were convinced that silicone would<br />
have the best properties for the purpose, but they could not find<br />
a way to synthesize it cheaply and in large volume. When word<br />
about their ideas got back to Rochow at GE, he reasoned that a<br />
flexible silicone able to withstand temperatures of 200 to 300 degrees<br />
Celsius could be made by bonding silicon to carbon. His<br />
research succeeded in producing methyl silicone in 1939, and<br />
he devised a way to make it cheaply in 1941. It was the first<br />
commercially practical silicone. His process is still used.<br />
After World War II GE asked him to work on an effort to<br />
make aircraft carriers nuclear powered. However, Rochow was<br />
a Quaker and pacifist, and he refused. Instead, he moved to<br />
Harvard University as a chemistry professor in 1948 and remained<br />
there until his retirement in 1970. As the founder of a<br />
new branch of industrial chemistry, he received most of the discipline’s<br />
awards and medals, including the Perkin Award, and<br />
honorary doctorates.<br />
from which high yields of an intermediate called “D4” could be obtained<br />
by distillation. The intermediate D4 could be polymerized in<br />
a controlled manner by use of acidic or basic catalysts. Wilton I.<br />
Patnode of GE and James F. Hyde of Dow-Corning made important<br />
advances in this area. Hyde’s discovery of the use of traces of potassium<br />
hydroxide as a polymerization catalyst for D4 made possible
the manufacture of silicone rubber, which is the most commercially<br />
valuable of all the silicones.<br />
Impact<br />
Although Kipping’s discovery and naming of the silicones occurred<br />
from 1901 to 1904, the practical use and impact of silicones<br />
started in 1940, with Rochow’s discovery of direct synthesis.<br />
Production of silicones in the United States came rapidly enough<br />
to permit them to have some influence on military supplies for<br />
World War II (1939-1945). In aircraft communication equipment, extensive<br />
waterproofing of parts by silicones resulted in greater reliability<br />
of the radios under tropical conditions of humidity, where<br />
condensing water could be destructive. Silicone rubber, because<br />
of its ability to withstand heat, was used in gaskets under high-temperature<br />
conditions, in searchlights, and in the engines on B-29<br />
bombers. Silicone grease applied to aircraft engines also helped to<br />
protect spark plugs from moisture and promote easier starting.<br />
After World War II, the uses for silicones multiplied. Silicone rubber<br />
appeared in many products from caulking compounds to wire insulation<br />
to breast implants for cosmetic surgery. Silicone rubber boots were<br />
used on the moon walks where ordinary rubber would have failed.<br />
Silicones in their present form owe much to years of patient developmental<br />
work in industrial laboratories. Basic research, such as<br />
that conducted by Kipping and others, served to point the way and<br />
catalyzed the process of commercialization.<br />
See also Buna rubber; Neoprene; Nylon; Plastic; Polystyrene; Teflon.<br />
Further Reading<br />
Clarson, Stephen J. Silicones and Silicone-Modified Materials. Washington,<br />
D.C.: American Chemical Society, 2000.<br />
Koerner, G. Silicones, Chemistry and Technology. Boca Raton, Fla.:<br />
CRC Press, 1991.<br />
Potter, Michael, and Noel R. Rose. Immunology of Silicones. New<br />
York: Springer, 1996.<br />
Smith, A. Lee. The Analytical Chemistry of Silicones. New York: Wiley,<br />
1991.
Solar thermal engine<br />
The invention: The first commercially practical plant for generating<br />
electricity from solar energy.<br />
The people behind the invention:<br />
Frank Shuman (1862-1918), an American inventor<br />
John Ericsson (1803-1889), an American engineer<br />
Augustin Mouchout (1825-1911), a French physics professor<br />
Power from the Sun<br />
According to tradition, the Greek scholar Archimedes used<br />
reflective mirrors to concentrate the rays of the Sun and set afire<br />
the ships of an attacking Roman fleet in 212 b.c.e. The story illustrates<br />
the long tradition of using mirrors to concentrate solar energy<br />
from a large area onto a small one, producing very high<br />
temperatures.<br />
With the backing of Napoleon III, the Frenchman Augustin<br />
Mouchout built, between 1864 and 1872, several steam engines<br />
that were powered by the Sun. Mirrors concentrated the Sun’s rays<br />
to a point, producing a temperature that would boil water. The<br />
steam drove an engine that operated a water pump. The largest engine<br />
had a cone-shaped collector, or “axicon,” lined with silver-plated<br />
metal. The French government operated the engine for six<br />
months but decided it was too expensive to be practical.<br />
John Ericsson, the American famous for designing and building<br />
the Civil War ironclad ship Monitor, built seven steam-driven<br />
solar engines between 1871 and 1878. In Ericsson’s design,<br />
rays were focused onto a line rather than a point. Long mirrors,<br />
curved into a parabolic shape, tracked the Sun. The rays were focused<br />
onto a water-filled tube mounted above the reflectors to<br />
produce steam. The engineer’s largest engine, which used an 11- ×<br />
16-foot trough-shaped mirror, delivered nearly 2 horsepower. Because<br />
his solar engines were ten times more expensive than conventional<br />
steam engines, Ericsson converted them to run on coal to<br />
avoid financial loss.
688 / Solar thermal engine<br />
Frank Shuman, a well-known inventor in Philadelphia, Pennsylvania,<br />
entered the field of solar energy in 1906. The self-taught engineer<br />
believed that curved, movable mirrors were too expensive. His<br />
first large solar engine was a hot-box, or flat-plate, collector. It lay<br />
flat on the ground and had blackened pipes filled with a liquid that<br />
had a low boiling point. The solar-heated vapor ran a 3.5-horsepower<br />
engine.<br />
Shuman’s wealthy investors formed the Sun Power Company to<br />
develop and construct the largest solar plant ever built. The site chosen<br />
was in Egypt, but the plant was built near Shuman’s home for<br />
testing before it was sent to Egypt.<br />
When the inventor added ordinary flat mirrors to reflect more<br />
sunlight into each collector, he doubled the heat production of the<br />
collectors. The 572 trough-type collectors were assembled in twenty-six<br />
rows. Water was piped through the troughs and converted to<br />
steam. A condenser converted the steam to water, which reentered<br />
the collectors. The engine pumped 3,000 gallons of water per minute<br />
and produced 14 horsepower; performance was expected to<br />
improve 25 percent in the sunny climate of Egypt.<br />
British investors requested that professor C. V. Boys review the<br />
solar plant before it was shipped to Egypt. Boys pointed out that the<br />
bottom of each collector was not receiving any direct solar energy;<br />
in fact, heat was being lost through the bottom. He suggested that<br />
each row of flat mirrors be replaced by a single parabolic reflector,<br />
and Shuman agreed. Shuman thought Boys’s idea was original, but<br />
he later realized it was based on Ericsson’s design.<br />
The company finally constructed the improved plant in Meadi,<br />
Egypt, a farming district on the Nile River. Five solar collectors,<br />
spaced 25 feet apart, were built in a north-south line. Each was<br />
about 200 feet long <strong>and</strong> 10 feet wide. Trough-shaped reflectors were<br />
made of mirrors held in place by brass springs that expanded<br />
and contracted with changing temperatures. The parabolic mirrors<br />
shifted automatically so that the rays were always focused on the<br />
boiler. Inside the 15-inch boiler that ran down the middle of the collector,<br />
water was heated <strong>and</strong> converted to steam. The engine produced<br />
more than 55 horsepower, which was enough to pump 6,000<br />
gallons of water per minute.<br />
The purchase price of Shuman’s solar plant was twice as high as
Trough-shaped collectors with flat mirrors (above) produced enough solar thermal energy to<br />
pump 3,000 gallons of water per minute. Trough-shaped collectors with parabolic mirrors<br />
(below) produced enough solar thermal energy to pump 6,000 gallons of water per minute.<br />
that of a coal-fired plant, but its operating costs were far lower. In<br />
Egypt, where coal was expensive, the entire purchase price would<br />
be recouped in four years. Afterward, the plant would operate for<br />
practically nothing. The first practical solar engine was now in operation,<br />
providing enough energy to drive a large-scale irrigation system<br />
in the floodplain of the Nile River.<br />
By 1914, Shuman’s work was enthusiastically supported, and solar<br />
plants were planned for India and Africa. Shuman hoped to<br />
build 20,000 reflectors in the Sahara Desert and generate energy<br />
equal to all the coal mined in one year, but the outbreak of World
War I ended his dreams of large-scale solar developments. The<br />
Meadi project was abandoned in 1915, and Shuman died before the<br />
war ended. Powerful nations lost interest in solar power and began<br />
to replace coal with oil. Rich oil reserves were discovered in many<br />
desert zones that were ideal locations for solar power.<br />
Impact<br />
Although World War I ended Frank Shuman’s career, his breakthrough<br />
proved to the world that solar power held great promise for<br />
the future. His ideas were revived in 1957, when the Soviet Union<br />
planned a huge solar project for Siberia. A large boiler was fixed on<br />
a platform 140 feet high. Parabolic mirrors, mounted on 1,300 railroad<br />
cars, revolved on circular tracks to focus light on the boiler. The<br />
full-scale model was never built, but the design inspired the solar<br />
power tower.<br />
In the Mojave desert near Barstow, California, an experimental<br />
power tower, Solar One, began operation in 1982. The system collects<br />
solar energy to deliver steam to turbines that produce electric<br />
power. The 30-story tower is surrounded by more than 1,800 mirrors<br />
that adjust continually to track the Sun. Solar One generates<br />
about 10 megawatts, enough power for 5,000 people.<br />
Solar One was expensive, but future power towers will generate<br />
electricity as cheaply as fossil fuels can. If the costs of the air and<br />
water pollution caused by coal burning were considered, solar power<br />
plants would already be recognized as cost effective. Meanwhile,<br />
Frank Shuman’s success in establishing and operating a thoroughly<br />
practical large-scale solar engine continues to inspire research and<br />
development.<br />
See also Compressed-air-accumulating power plant; Fuel cell;<br />
Geothermal power; Nuclear power plant; Photoelectric cell; Photovoltaic<br />
cell; Tidal power plant.<br />
Further Reading<br />
De Kay, James T. Monitor: The Story of the Legendary Civil War Ironclad<br />
and the Man Whose Invention Changed the Course of History. New<br />
York: Ballantine, 1999.
Mancini, Thomas R., James M. Chavez, and Gregory J. Kolb. “Solar<br />
Thermal Power Today <strong>and</strong> Tomorrow.” Mechanical Engineering<br />
116, no. 8 (August, 1994).<br />
Moore, Cameron M. “Cooking Up Electricity with Sunlight.” The<br />
World & I 12, no. 7 (July, 1997).<br />
Parrish, Michael. “Enron Makes Electrifying Proposal Energy: The<br />
Respected Developer Announces a Huge Solar Plant and a Breakthrough<br />
Price.” Los Angeles Times (November 5, 1994).
Sonar<br />
The invention: A device that detects sound waves transmitted<br />
through water. Originally developed to detect enemy<br />
submarines, sonar is also used in navigation, fish location, and<br />
ocean mapping.<br />
The people behind the invention:<br />
Jacques Curie (1855-1941), a French physicist<br />
Pierre Curie (1859-1906), a French physicist<br />
Paul Langevin (1872-1946), a French physicist<br />
Active Sonar, Submarines, and Piezoelectricity<br />
Sonar, which stands for sound navigation and ranging, is the<br />
American name for a device that the British call “asdic.” There are<br />
two types of sonar. Active sonar, the more widely used of the two<br />
types, detects and locates underwater objects when those objects reflect<br />
sound pulses sent out by the sonar. Passive sonar merely listens<br />
for sounds made by underwater objects. Passive sonar is used<br />
mostly when the loud signals produced by active sonar cannot be<br />
used (for example, in submarines).<br />
The invention of active sonar was the result of American, British,<br />
and French efforts, although it is often credited to Paul Langevin,<br />
who built the first working active sonar system by 1917. Langevin’s<br />
original reason for developing sonar was to locate icebergs, but the<br />
horrors of German submarine warfare in World War I led to the new<br />
goal of submarine detection. Both Langevin’s short-range system<br />
and long-range modern sonar depend on the phenomenon of “piezoelectricity,”<br />
which was discovered by Pierre and Jacques Curie in<br />
1880. (Piezoelectricity is electricity produced by certain materials,<br />
such as quartz crystals, when they are subjected to pressure.)<br />
Since its invention, active sonar has been improved and its capabilities<br />
have been increased. Active sonar systems are used to detect<br />
submarines, to navigate safely, to locate schools of fish, and to map<br />
the oceans.
Sonar Theory, Development, and Use<br />
Sonar / 693<br />
Although active sonar had been developed by 1917, it was not<br />
available for military use until World War II. An interesting major<br />
use of sonar before that time was measuring the depth of the ocean.<br />
That use began when the 1922 German Meteor Oceanographic Expedition<br />
was equipped with an active sonar system. The system<br />
was to be used to help pay German World War I debts by aiding in<br />
the recovery of gold from wrecked vessels. It was not used successfully<br />
to recover treasure, but the expedition’s use of sonar to determine<br />
ocean depth led to the discovery of the Mid-Atlantic Ridge.<br />
This development revolutionized underwater geology.<br />
Active sonar operates by sending out sound pulses, often called<br />
“pings,” that travel through water and are reflected as echoes when<br />
they strike large objects. Echoes from these targets are received by<br />
the system, amplified, and interpreted. Sound is used instead of<br />
light or radar because its absorption by water is much lower. The<br />
time that passes between ping transmission and the return of an<br />
echo is used to identify the distance of a target from the system by<br />
means of a method called “echo ranging.” The basis for echo ranging<br />
is the normal speed of sound in seawater (5,000 feet per second).<br />
The distance of the target from the sonar system is calculated by<br />
means of a simple equation: range = speed of sound × 0.5 × elapsed<br />
time. The elapsed time is halved because it includes both the time<br />
taken to reach the target and the time taken for the echo to return.<br />
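Using the 5,000-feet-per-second figure above, the echo-ranging arithmetic looks like this (the 4-second echo is an illustrative value):

```python
SPEED_OF_SOUND_FT_S = 5000.0  # nominal speed of sound in seawater

def target_range_ft(round_trip_s):
    """Echo ranging: range = speed of sound x (round-trip time / 2)."""
    return SPEED_OF_SOUND_FT_S * round_trip_s / 2.0

# An echo heard 4 seconds after the ping puts the target 10,000 feet away:
print(target_range_ft(4.0))   # 10000.0
```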
The ability of active sonar to resolve detail increases as the<br />
wavelength of the transmitted sound pulses is decreased, that is,<br />
as their frequency is raised. Interpreting active sonar data is complicated<br />
by many factors. These include the roughness of the ocean, which<br />
scatters sound and causes the strength of echoes to vary, making<br />
it hard to estimate the size and identity of a target; the speed of<br />
the sound wave, which changes in accordance with variations in<br />
water temperature, pressure, and saltiness; and noise caused by<br />
waves, sea animals, and ships, which limits the range of active sonar<br />
systems.<br />
A simple active pulse sonar system generates an electrical signal<br />
of a given frequency <strong>and</strong> time duration. The signal is then amplified<br />
<strong>and</strong> converted by a piezoelectric transducer into sound, which enters the water. Any echo
694 / Sonar<br />
that is produced returns to the system to be amplified <strong>and</strong> used to<br />
determine the identity <strong>and</strong> distance of the target.<br />
Most active sonar systems are mounted near surface vessel keels<br />
or on submarine hulls in one of three ways. The first <strong>and</strong> most popular<br />
mounting method permits vertical rotation <strong>and</strong> scanning of a<br />
section of the ocean whose center is the system’s location. The second<br />
method, which is most often used in depth sounders, directs<br />
the beam downward in order to measure ocean depth. The third<br />
method, called wide scanning, involves the use of two sonar systems,<br />
one mounted on each side of the vessel, in such a way that the<br />
two beams that are produced scan the whole ocean at right angles to<br />
the direction of the vessel’s movement.<br />
Active single-beam sonar operation applies an alternating voltage<br />
to a piezoelectric crystal, making it part of an underwater loudspeaker<br />
(transducer) that creates a sound beam of a particular frequency.<br />
When an echo returns, the system becomes an underwater<br />
microphone (receiver) that identifies the target <strong>and</strong> determines its<br />
range. The sound frequency that is used is determined by the sonar’s<br />
purpose <strong>and</strong> the fact that the absorption of sound by water increases<br />
with frequency. For example, long-range submarine-seeking sonar<br />
systems (whose detection range is about ten miles) operate at 3 to 40<br />
kilohertz. In contrast, short-range systems that work at about 500 feet<br />
(in mine sweepers, for example) use 150 kilohertz to 2 megahertz.<br />
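The trade-off described here, shorter wavelengths resolving finer detail but being absorbed more strongly, can be made concrete by computing the wavelength for each band. The relation wavelength = speed of sound ÷ frequency is standard physics, used here only for illustration with the nominal values given in this article:

```python
# Wavelength = speed of sound / frequency. Shorter wavelengths resolve
# finer detail but are absorbed more strongly, which shortens range.

SPEED_OF_SOUND = 5000.0  # feet per second, nominal value for seawater

def wavelength_feet(frequency_hz):
    """Return the acoustic wavelength in feet at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

# Long-range submarine-seeking sonar, lower band (3 kilohertz):
print(round(wavelength_feet(3_000), 2))   # 1.67 feet
# Short-range mine-sweeper sonar, upper band (2 megahertz):
print(wavelength_feet(2_000_000))         # 0.0025 feet
```

The mine-sweeper band thus works with wavelengths hundreds of times shorter, which is why it can resolve small objects such as mines but only at short range.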
Impact<br />
Paul Langevin<br />
Albert Einstein once said that if he had not published the Special<br />
Theory of Relativity in 1905, Paul Langevin would have done<br />
so not long afterward. Born in Paris in 1872, Langevin was<br />
among the foremost physicists of his generation. He studied in<br />
the best French schools of science—<strong>and</strong> with such teachers as<br />
Pierre Curie <strong>and</strong> Jean Perrin—<strong>and</strong> became a professor of physics<br />
at the Collège de France in 1904. He moved to the Sorbonne<br />
in 1909.<br />
Langevin’s research was widely influential. In addition<br />
to his invention of active sonar, he was especially noted for<br />
his studies of the molecular structure of gases, analysis of secondary<br />
X rays from irradiated metals, his theory of magnetism,<br />
<strong>and</strong> work on piezoelectricity <strong>and</strong> piezoceramics. His suggestion<br />
that magnetic properties are linked to the valence electrons of atoms<br />
inspired Niels Bohr’s classic model of the atom. In his later<br />
career, a champion of Einstein’s theories of relativity, Langevin<br />
worked on the implications of the space-time continuum.<br />
During World War II, Langevin, a pacifist, publicly denounced<br />
the Nazis <strong>and</strong> their occupation of France. They jailed him for it.<br />
He escaped to Switzerland in 1944, returning as soon as France<br />
was liberated. He died in late 1946.<br />
Modern active sonar has affected military <strong>and</strong> nonmilitary activities<br />
ranging from submarine location to undersea mapping <strong>and</strong><br />
fish location. In all these uses, two very important goals have been<br />
to increase the ability of sonar to identify a target <strong>and</strong> to increase the<br />
effective range of sonar. Much work related to these two goals has<br />
involved the development of new piezoelectric materials <strong>and</strong> the replacement<br />
of natural minerals (such as quartz) with synthetic piezoelectric<br />
ceramics.
Efforts have also been made to redesign the organization of sonar<br />
systems. One very useful development has been changing beam-making<br />
transducers from one-beam units to multibeam modules<br />
made of many small piezoelectric elements. Systems that incorporate<br />
these developments have many advantages, particularly the ability<br />
to search simultaneously in many directions. In addition, systems<br />
have been redesigned to be able to scan many echo beams simultaneously<br />
with electronic scanners that feed into a central receiver.<br />
These changes, along with computer-aided tracking <strong>and</strong> target<br />
classification, have led to the development of greatly improved active<br />
sonar systems. It is expected that sonar systems will become<br />
even more powerful in the future, finding uses that have not yet<br />
been imagined.<br />
See also Aqualung; Bathyscaphe; Bathysphere; Geiger counter;<br />
Gyrocompass; Radar; Richter scale; Ultrasound.<br />
Further Reading<br />
Curie, Marie. Pierre Curie. New York: Dover <strong>Public</strong>ations, 1923.<br />
Hackmann, Willem Dirk. Seek <strong>and</strong> Strike: Sonar, Anti-Submarine Warfare,<br />
<strong>and</strong> the Royal Navy, 1914-54. London: H.M.S.O., 1984.<br />
Segrè, Emilio. From X-Rays to Quarks: Modern Physicists <strong>and</strong> Their<br />
Discoveries. San Francisco: W. H. Freeman, 1980.<br />
Senior, John E. Marie <strong>and</strong> Pierre Curie. Gloucestershire: Sutton, 1998.
Stealth aircraft<br />
The invention: The first generation of “radar-invisible” aircraft,<br />
stealth planes were designed to elude enemy radar systems.<br />
The people behind the invention:<br />
Lockheed Corporation, an American research <strong>and</strong> development firm<br />
Northrop Corporation, an American aerospace firm<br />
Radar<br />
During World War II, two weapons were developed that radically<br />
altered the thinking of the U.S. military-industrial establishment<br />
<strong>and</strong> the composition of U.S. military forces. These weapons<br />
were the atomic bombs that were dropped on the Japanese cities of<br />
Hiroshima <strong>and</strong> Nagasaki by U.S. forces <strong>and</strong> “radio detection <strong>and</strong><br />
ranging,” or radar. Radar saved the English during the Battle of Britain,<br />
<strong>and</strong> it was radar that made it necessary to rethink aircraft design.<br />
With radar, attacking aircraft can be detected hundreds of<br />
miles from their intended targets, which makes it possible for those<br />
aircraft to be intercepted before they can attack. During World<br />
War II, radar, using microwaves, was able to relay the number, distance,<br />
direction, <strong>and</strong> speed of German aircraft to British fighter interceptors.<br />
This development allowed the fighter pilots of the Royal<br />
Air Force, “the few” who were so highly praised by Winston Churchill,<br />
to shoot down four times as many planes as they lost.<br />
Because of the development of radar, American airplane design<br />
strategy has been to reduce the planes’ cross sections, reduce or<br />
eliminate the use of metal by replacing it with composite materials,<br />
<strong>and</strong> eliminate the angles that are found on most aircraft control surfaces.<br />
These actions help make aircraft less visible—<strong>and</strong> in some<br />
cases, almost invisible—to radar. The Lockheed F-117A Nighthawk<br />
<strong>and</strong> the Northrop B-2 Stealth Bomber are the results of these efforts.<br />
Airborne “Ninjas”<br />
Hidden inside Lockheed Corporation is a research <strong>and</strong> development<br />
organization that is unique in the corporate world. This
698 / Stealth aircraft<br />
facility has provided the Air Force with the Sidewinder heat-seeking<br />
missile; the SR-71, a titanium-skinned aircraft that can fly<br />
at more than three times the speed of sound; <strong>and</strong>, most recently, the F-117A<br />
Nighthawk. The Nighthawk eluded Iraqi radar so effectively during<br />
the 1991 Persian Gulf War that the Iraqis nicknamed it Shaba,<br />
which is an Arabic word that means ghost. In an unusual move<br />
for military projects, the Nighthawk was delivered to the Air<br />
Force in 1982, before the plane had been perfected. This was done<br />
so that Air Force pilots could test fly the plane <strong>and</strong> provide input<br />
that could be used to improve the aircraft before it went into full<br />
production.<br />
The Northrop B-2 Stealth Bomber was the result of a design philosophy<br />
that was completely different from that of the F-117A<br />
Nighthawk. The F-117A, for example, has a very angular appearance,<br />
but the angles are all greater than 180 degrees. This configuration<br />
spreads out radar waves rather than allowing them to be concentrated<br />
<strong>and</strong> sent back to their point of origin. The B-2, however,<br />
stays away from angles entirely, opting for a smooth surface that<br />
also acts to spread out the radar energy. (The B-2 so closely resembles<br />
the YB-49 Flying Wing, which was developed in the late 1940’s,<br />
that it even has the same wingspan.) The surface of the aircraft is<br />
covered with radar-absorbing material <strong>and</strong> carries its engines <strong>and</strong><br />
weapons inside to reduce the radar cross section. There are no vertical<br />
control surfaces, which has the disadvantage of making the aircraft<br />
unstable, so the stabilizing system uses computers to make<br />
small adjustments in the control elements on the trailing edges of<br />
the wings, thus increasing the craft’s stability.<br />
The F-117A Nighthawk <strong>and</strong> the B-2 Stealth Bomber are the “ninjas”<br />
of military aviation. Capable of striking powerfully, rapidly,<br />
<strong>and</strong> invisibly, these aircraft added a dimension to the U.S. Air Force<br />
that did not exist previously. Before the advent of these aircraft, missions<br />
that required radar-avoidance tactics had to be flown below<br />
the horizon of ground-based radar, which is 30.5 meters above the<br />
ground. Such low-altitude flight is dangerous because of both the<br />
increased difficulty of maneuvering <strong>and</strong> vulnerability to ground<br />
fire. Additionally, such flying does not conceal the aircraft from the<br />
airborne radar carried by such craft as the American E-3A AWACS<br />
<strong>and</strong> the former Soviet Mainstay. In a major conflict, the only aircraft
that could effectively penetrate enemy airspace would be the Nighthawk<br />
<strong>and</strong> the B-2.<br />
The purpose of the B-2 was to carry nuclear weapons into hostile<br />
airspace undetected. With the demise of the Soviet Union, mainland<br />
China seemed the only remaining major nuclear threat. For this reason,<br />
many defense experts believed that there was no longer a need<br />
for two radar-invisible planes, <strong>and</strong> cuts in U.S. military expenditures<br />
threatened the B-2 program during the early 1990’s.<br />
Consequences<br />
The development of the Nighthawk <strong>and</strong> the B-2 meant that the<br />
former Soviet Union would have had to spend at least $60 billion to<br />
upgrade its air defense forces to meet the challenge offered by these<br />
aircraft. This fact, combined with the evolution of the Strategic Defense<br />
Initiative, commonly called “Star Wars,” led to the United<br />
States’ victory in the arms race. Additionally, stealth technology has<br />
found its way onto the conventional battlefield.<br />
As was shown in 1991 during the Desert Storm campaign in Iraq,<br />
targets that have strategic importance are often surrounded by a<br />
network of anti-air missiles <strong>and</strong> gun emplacements. During the<br />
Desert Storm air war, the F-117A was the only Allied aircraft to be<br />
assigned to targets in Baghdad. Nighthawks destroyed more than 47<br />
percent of the strategic areas that were targeted, <strong>and</strong> every pilot <strong>and</strong><br />
plane returned to base unscathed.<br />
Since the world appears to be moving away from superpower<br />
conflicts <strong>and</strong> toward smaller regional conflicts, stealth aircraft may<br />
come to be used more for surveillance than for air attacks. This is<br />
particularly true because the SR-71, which previously played the<br />
primary role in surveillance, has been retired from service.<br />
See also Airplane; Cruise missile; Hydrogen bomb; Radar;<br />
Rocket; Turbojet; V-2 rocket.<br />
Further Reading<br />
Chun, Clayton K. S. The Lockheed F-117A. Santa Monica, Calif.: Rand,<br />
1991.
Goodall, James C. America’s Stealth Fighters <strong>and</strong> Bombers. Osceola,<br />
Wis.: Motorbooks, 1992.<br />
Pape, Garry R., <strong>and</strong> John M. Campbell. Northrop Flying Wings: A History<br />
of Jack Northrop’s Visionary Aircraft. Atglen, Pa.: Schiffer, 1995.<br />
Thornborough, Anthony M. Stealth. London: Ian Allen, 1991.
Steelmaking process<br />
The invention: Known as the basic oxygen, or L-D, process, a<br />
method for producing steel that worked about twelve times<br />
faster than earlier methods.<br />
The people behind the invention:<br />
Henry Bessemer (1813-1898), the English inventor of a process<br />
for making steel from iron<br />
Robert Durrer (1890-1978), a Swiss scientist who first proved<br />
the workability of the oxygen process in a laboratory<br />
F. A. Loosley (1891-1966), head of research <strong>and</strong> development at<br />
Dofasco Steel in Canada<br />
Theodor Suess (1894-1956), works manager at Voest<br />
Ferrous Metal<br />
The modern industrial world is built on ferrous metal. Until<br />
1857, ferrous metal meant cast iron <strong>and</strong> wrought iron, though a few<br />
specialty uses of steel, especially for cutlery <strong>and</strong> swords, had existed<br />
for centuries. In 1857, Henry Bessemer developed the first large-scale<br />
method of making steel, the Bessemer converter. By the 1880’s,<br />
modification of his concepts (particularly the development of a “basic”<br />
process that could handle ores high in phosphorus) had made<br />
large-scale production of steel possible.<br />
Bessemer’s invention depended on the use of ordinary air, infused<br />
into the molten metal, to burn off excess carbon. Bessemer himself<br />
had recognized that if it had been possible to use pure oxygen instead<br />
of air, oxidation of the carbon would be far more efficient <strong>and</strong> rapid.<br />
Pure oxygen was not available in Bessemer’s day, except at very high<br />
prices, so steel producers settled for what was readily available, ordinary<br />
air. In 1929, however, the Linde-Fränkl process for separating the<br />
oxygen in air from the other elements was discovered, <strong>and</strong> for the<br />
first time inexpensive oxygen became available.<br />
Nearly twenty years elapsed before the ready availability of pure<br />
oxygen was applied to refining the method of making steel. The first<br />
experiments were carried out in Switzerland by Robert Durrer. In
702 / Steelmaking process<br />
1949, he succeeded in making steel expeditiously in a laboratory setting<br />
through the use of a blast of pure oxygen. Switzerland, however,<br />
had no large-scale metallurgical industry, so the Swiss turned<br />
the idea over to the Austrians, who for centuries had exploited the<br />
large deposits of iron ore in a mountain in central Austria. Theodor<br />
Suess, the works manager of the state-owned Austrian steel complex,<br />
Voest, instituted some pilot projects. The results were sufficiently<br />
favorable to induce Voest to authorize construction of production<br />
converters. In 1952, the first “heat” (as a batch of steel is<br />
called) was “blown in” at the Voest works in Linz. The following<br />
year, another converter was put into production at the works in<br />
Donawitz. These two initial locations led to the basic oxygen process<br />
sometimes being referred to as the L-D process.<br />
The L-D Process<br />
The basic oxygen, or L-D, process makes use of a converter similar<br />
to the Bessemer converter. Unlike the Bessemer, however, the L-<br />
D converter blows pure oxygen into the molten metal from above<br />
through a water-cooled injector known as a lance. The oxygen burns<br />
off the excess carbon rapidly, <strong>and</strong> the molten metal can then be<br />
poured off into ingots, which can later be reheated <strong>and</strong> formed into<br />
the ultimately desired shape. The great advantage of the process is<br />
the speed with which a “heat” reaches the desirable metallurgical<br />
composition for steel, with a carbon content between 0.1 percent<br />
<strong>and</strong> 2 percent. The basic oxygen process requires about forty minutes.<br />
In contrast, the prevailing method of making steel, using an<br />
open-hearth furnace (which transferred the technique from the<br />
closed Bessemer converter to an open-burning furnace to which the<br />
necessary additives could be introduced by hand) requires eight to<br />
eleven hours for a “heat” or batch.<br />
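The "about twelve times faster" figure quoted at the head of this article follows directly from these heat times; a simple arithmetic check (for illustration only, not a formula from the metallurgical literature):

```python
# Compare heat (batch) times: basic oxygen process vs. open hearth.

BASIC_OXYGEN_MINUTES = 40                 # about forty minutes per heat
OPEN_HEARTH_MINUTES = (8 * 60, 11 * 60)   # eight to eleven hours per heat

for minutes in OPEN_HEARTH_MINUTES:
    speedup = minutes / BASIC_OXYGEN_MINUTES
    print(f"{minutes}-minute open-hearth heat: {speedup:.1f}x the L-D time")
# An 8-hour heat takes 12.0x the basic oxygen time, an 11-hour heat
# 16.5x, hence the "about twelve times faster" figure.
```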
The L-D process was not without its drawbacks, however. It was<br />
adopted by the Austrians because, by carefully calibrating the timing<br />
<strong>and</strong> amount of oxygen introduced, they could turn their moderately<br />
phosphoric ore into steel without further intervention. The<br />
process required ore of a standardized metallurgical, or chemical,<br />
content, for which the lancing had been calculated. It produced a<br />
large amount of iron-oxide dust that polluted the surrounding atmosphere,<br />
<strong>and</strong> it required a lining in the converter of dolomitic<br />
brick. The specific chemical content of the brick contributed to the<br />
chemical mixture that produced the desired result.<br />
The Austrians quickly realized that the process was an improvement.<br />
In May, 1952, the patent specifications for the new process<br />
were turned over to a new company, Brassert Oxygen Technik, or<br />
BOT, which filed patent applications around the world. BOT embarked<br />
on an aggressive marketing campaign, bringing potential<br />
customers to Austria to observe the process in action. Despite BOT’s<br />
efforts, the new process was slow to catch on, even though in 1953<br />
BOT licensed a U.S. firm, Kaiser Engineers, to spread the process in<br />
the United States.<br />
Many factors serve to explain the reluctance of steel producers<br />
around the world to adopt the new process. One of these was the<br />
large investment most major steel producers had in their open-hearth<br />
furnaces. Another was uncertainty about the pollution factor.<br />
Later, special pollution-control equipment would be developed<br />
to deal with this problem. A third concern was whether the necessary<br />
refractory liners for the new converters would be available. A<br />
fourth was the fact that the new process could handle a load that<br />
contained no more than 30 percent scrap, preferably less. In practice,<br />
therefore, it would only work where a blast furnace smelting<br />
ore was already set up.<br />
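The scrap limitation can be expressed as a small charge calculation (a hypothetical illustration; the function name and the 200-ton heat size are assumptions, not figures from this article):

```python
# A basic oxygen charge may contain at most 30 percent scrap; the rest
# must be molten iron, which is why a blast furnace had to be on site.

MAX_SCRAP_FRACTION = 0.30

def charge_for_heat(heat_tons, scrap_fraction=MAX_SCRAP_FRACTION):
    """Return (scrap_tons, hot_metal_tons) for a heat of the given size."""
    if not 0.0 <= scrap_fraction <= MAX_SCRAP_FRACTION:
        raise ValueError("scrap is limited to 30 percent of the charge")
    scrap = heat_tons * scrap_fraction
    return scrap, heat_tons - scrap

# A hypothetical 200-ton heat at the maximum scrap ratio:
print(charge_for_heat(200.0))  # (60.0, 140.0)
```

Even at the maximum ratio, 70 percent of every heat must come from freshly smelted iron, so the converter made economic sense only alongside a blast furnace; the electric arc furnace, which runs on 100 percent scrap, later filled the gap.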
One of the earliest firms to show serious interest in the new technology<br />
was Dofasco, a Canadian steel producer. Between 1952 <strong>and</strong><br />
1954, Dofasco, pushed by its head of research <strong>and</strong> development, F.<br />
A. Loosley, built pilot operations to test the methodology. The results<br />
were sufficiently promising that in 1954 Dofasco built the first<br />
basic oxygen furnace outside Austria. Dofasco had recently built its<br />
own blast furnace, so it had ore available on site. It was able to devise<br />
ways of dealing with the pollution problem, <strong>and</strong> it found refractory<br />
liners that would work. It became the first North American<br />
producer of basic oxygen steel.<br />
Having bought the licensing rights in 1953, Kaiser Engineers was<br />
looking for a U.S. steel producer adventuresome enough to invest in<br />
the new technology. It found that producer in McLouth Steel, a<br />
small steel plant in Detroit, Michigan. Kaiser Engineers supplied<br />
much of the technical advice that enabled McLouth to build the first
U.S. basic oxygen steel facility, though McLouth also sent one of its<br />
engineers to Europe to observe the Austrian operations. McLouth,<br />
which had backing from General Motors, also made use of technical<br />
descriptions in the literature.<br />
The Specifications Question<br />
Henry Bessemer<br />
Henry Bessemer was born in the small village of Charlton,<br />
England, in 1813. His father was an early example of a technician,<br />
specializing in steam engines, <strong>and</strong> operated a business<br />
making metal type for printing presses. The elder Bessemer<br />
wanted his son to attend university, but Henry preferred to<br />
study under his father. During his apprenticeship, he learned<br />
the properties of alloys. At seventeen he moved to London to<br />
open his own business, which fabricated specialty metals.<br />
Three years later the Royal Academy held an exhibition of<br />
Bessemer’s work. His career, well begun, moved from one invention<br />
to another until at his death in 1898 he held 114 patents.<br />
Among them were processes for casting type <strong>and</strong> producing<br />
graphite for pencils; methods for manufacturing glass, sugar,<br />
bronze powder, <strong>and</strong> ships; <strong>and</strong> his best known creation, the Bessemer<br />
converter for making steel from iron. Bessemer built his<br />
first converter in 1855; fifteen years later Great Britain was producing<br />
half of the world’s steel.<br />
Bessemer’s life <strong>and</strong> career were models of early Industrial<br />
Age industry, prosperity, <strong>and</strong> longevity. A millionaire from patent<br />
royalties, he retired at fifty-nine, lived another twenty-six<br />
years, working on yet more inventions <strong>and</strong> cultivating astronomy<br />
as a hobby, <strong>and</strong> was married for sixty-four years. Among<br />
his many awards <strong>and</strong> honors was a knighthood, bestowed by<br />
Queen Victoria.<br />
One factor that held back adoption of basic oxygen steelmaking<br />
was the question of specifications. Many major engineering projects<br />
came with precise specifications detailing the type of steel to be<br />
used <strong>and</strong> even the method of its manufacture. Until basic oxygen<br />
steel was recognized as an acceptable form by the engineering fraternity,<br />
so that job specifications included it as appropriate in specific<br />
applications, it could not find large-scale markets. It took a<br />
number of years for engineers to modify their specifications so that<br />
basic oxygen steel could be used.<br />
The next major conversion to the new steelmaking process occurred<br />
in Japan. The Japanese had learned of the process early,<br />
while Japanese metallurgical engineers were touring Europe in<br />
1951. Some of them stopped off at the Voest works to look at the pilot<br />
projects there, <strong>and</strong> they talked with the Swiss inventor, Robert<br />
Durrer. These engineers carried knowledge of the new technique<br />
back to Japan. In 1957 <strong>and</strong> 1958, Yawata Steel <strong>and</strong> Nippon Kokan,<br />
the largest <strong>and</strong> third-largest steel producers in Japan, decided to implement<br />
the basic oxygen process. An important contributor to this<br />
decision was the Ministry of International Trade <strong>and</strong> Industry, which<br />
brokered a licensing arrangement through Nippon Kokan, which in<br />
turn had signed a one-time payment arrangement with BOT. The<br />
licensing arrangement allowed other producers besides Nippon<br />
Kokan to use the technique in Japan.<br />
The Japanese made two important technical improvements in<br />
the basic oxygen technology. They developed a multiholed lance for<br />
blowing in oxygen, thus dispersing it more effectively in the molten<br />
metal <strong>and</strong> prolonging the life of the refractory lining of the converter<br />
vessel. They also pioneered the OG process for recovering<br />
some of the gases produced in the converter. This procedure reduced<br />
the pollution generated by the basic oxygen converter.<br />
The first large American steel producer to adopt the basic oxygen<br />
process was Jones <strong>and</strong> Laughlin, which decided to implement the<br />
new process for several reasons. It had some of the oldest equipment<br />
in the American steel industry, ripe for replacement. It also<br />
had experienced significant technical difficulties at its Aliquippa<br />
plant, difficulties it was unable to solve by modifying its open-hearth<br />
procedures. It therefore signed an agreement with Kaiser Engineers<br />
to build some of the new converters for Aliquippa. These<br />
converters were constructed on license from Kaiser Engineers by<br />
Pennsylvania Engineering, with the exception of the lances, which<br />
were imported from Voest in Austria. Subsequent lances, however,<br />
were built in the United States. Some of Jones <strong>and</strong> Laughlin’s production<br />
managers were sent to Dofasco for training, <strong>and</strong> technical
advisers were brought to the Aliquippa plant both from Kaiser Engineers<br />
<strong>and</strong> from Austria.<br />
Other European countries were somewhat slower to adopt the<br />
new process. A major cause for the delay was the necessary modification<br />
of the process to fit the high phosphoric ores available in Germany<br />
<strong>and</strong> France. Europeans also experimented with modifications<br />
of the basic oxygen technique by developing converters that revolved.<br />
These converters, known as Kaldo in Sweden <strong>and</strong> Rotor in<br />
Germany, proved in the end to have sufficient technical difficulties<br />
that they were abandoned in favor of the standard basic oxygen<br />
converter. The problems they had been designed to solve could be<br />
better dealt with through modifications of the lance <strong>and</strong> through<br />
adjustments in additives.<br />
By the mid-1980’s, the basic oxygen process had spread throughout<br />
the world. Neither Japan nor the European Community was<br />
producing any steel by the older, open-hearth method. In conjunction<br />
with the electric arc furnace, fed largely on scrap metal, the basic<br />
oxygen process had transformed the steel industry of the world.<br />
Impact<br />
The basic oxygen process has significant advantages over older<br />
procedures. It does not require additional heat, whereas the open-hearth<br />
technique calls for the infusion of nine to twelve gallons of<br />
fuel oil to raise the temperature of the metal to the level necessary to<br />
burn off all the excess carbon. The investment cost of the converter<br />
is about half that of an open-hearth furnace. Fewer refractories are<br />
required, less than half those needed in an open-hearth furnace.<br />
Most important of all, however, a “heat” requires less than an hour,<br />
as compared with the eight or more hours needed for a “heat” in an<br />
open-hearth furnace.<br />
There were some disadvantages to the basic oxygen process. Perhaps<br />
the most important was the limited amount of scrap that could<br />
be included in a “heat,” a maximum of 30 percent. Because the process<br />
required at least 70 percent new ore, it could be operated most<br />
effectively only in conjunction with a blast furnace. Counterbalancing<br />
this last factor was the rapid development of the electric arc<br />
furnace, which could operate with 100 percent scrap. A firm with its
own blast furnace could, with both an oxygen converter <strong>and</strong> an electric<br />
arc furnace, handle the available raw material.<br />
The advantages of the basic oxygen process overrode the disadvantages.<br />
Some other new technologies combined to produce this<br />
effect. The most important of these was continuous casting. Because<br />
of the short time required for a “heat,” it was possible, if a plant had<br />
two or three converters, to synchronize output with the fill needs of<br />
a continuous caster, thus largely canceling out some of the economic<br />
drawbacks of the batch process. Continuous production, always<br />
more economical, was now possible in the basic steel industry, particularly<br />
after development of computer-controlled rolling mills.<br />
These new technologies forced major changes in the world’s steel<br />
industry. Labor requirements for the basic oxygen converter were<br />
about half those for the open-hearth furnace. The high speed of the<br />
new technology required far less manual labor but much more technical<br />
expertise. Labor requirements were significantly reduced, producing<br />
major social dislocations in steel-producing regions. This effect<br />
was magnified by the fact that demand for steel dropped<br />
sharply in the 1970’s, further reducing the need for steelworkers.<br />
The U.S. steel industry was slower than either the Japanese or the<br />
European to convert to the basic oxygen technique. The U.S. industry<br />
generally operated with larger quantities, <strong>and</strong> it took a number<br />
of years before the basic oxygen technique was adapted to converters<br />
with an output equivalent to that of the open-hearth furnace. By<br />
the time that had happened, world steel demand had begun to<br />
drop. U.S. companies were less profitable, failing to generate internally<br />
the capital needed for the major investment involved in<br />
abandoning open-hearth furnaces for oxygen converters. Although<br />
union contracts enabled companies to change work assignments<br />
when new technologies were introduced, there was stiff resistance<br />
to reducing employment of steelworkers, most of whom had lived<br />
all their lives in one-industry towns. Finally, engineers at the steel<br />
firms were wedded to the old methods <strong>and</strong> reluctant to change, as<br />
were the large bureaucracies of the big U.S. steel firms.<br />
The basic oxygen technology in steel is part of a spate of new<br />
technical developments that have revolutionized industrial production,<br />
drastically reducing the role of manual labor <strong>and</strong> dramatically<br />
increasing the need for highly skilled individuals with technical expertise.<br />
Because capital costs are significantly lower than for alternative<br />
processes, it has allowed a number of developing countries<br />
to enter a heavy industry <strong>and</strong> compete successfully with the old industrial<br />
giants. It has thus changed the face of the steel industry.<br />
See also Assembly line; Buna rubber; Disposable razor; Laminated glass; Memory metal; Neoprene; Oil-well drill bit; Pyrex glass.
Further Reading

Bain, Trevor. Banking the Furnace: Restructuring of the Steel Industry in Eight Countries. Kalamazoo, Mich.: W. E. Upjohn Institute for Employment Research, 1992.
Gold, Bela, Gerhard Rosegger, and Myles G. Boylan, Jr. Evaluating Technological Innovations: Methods, Expectations, and Findings. Lexington, Mass.: Lexington Books, 1980.
Hall, Christopher. Steel Phoenix: The Fall and Rise of the U.S. Steel Industry. New York: St. Martin’s Press, 1997.
Hoerr, John P. And the Wolf Finally Came: The Decline of the American Steel Industry. Pittsburgh, Pa.: University of Pittsburgh Press, 1988.
Lynn, Leonard H. How Japan Innovates: A Comparison with the United States in the Case of Oxygen Steelmaking. Boulder, Colo.: Westview Press, 1982.
Seely, Bruce Edsall. Iron and Steel in the Twentieth Century. New York: Facts on File, 1994.
Supercomputer
The invention: A computer with the greatest computational power then in existence.

The person behind the invention:
Seymour R. Cray (1925-1996), American computer architect and designer
The Need for Computing Power

Although modern computers have roots in concepts first proposed in the early nineteenth century, it was only around 1950 that they became practical. Early computers enabled their users to calculate equations quickly and precisely, but it soon became clear that even more powerful computers—machines capable of receiving, computing, and sending out data with great precision and at the highest speeds—would enable researchers to use computer “models,” which are programs that simulate the conditions of complex experiments.
Few computer manufacturers gave much thought to building the fastest machine possible, because such an undertaking is expensive and because the business use of computers rarely demands the greatest processing power. The first company to build computers specifically to meet scientific and governmental research needs was Control Data Corporation (CDC). The company had been founded in 1957 by William Norris, and its young vice president for engineering was the highly respected computer engineer Seymour R. Cray. When CDC decided to limit high-performance computer design, Cray struck out on his own, starting Cray Research in 1972. His goal was to design the most powerful computer possible. To that end, he needed to choose the principles by which his machine would operate; that is, he needed to determine its architecture.
The Fastest Computer

All computers rely upon certain basic elements to process data. Chief among these elements are the central processing unit, or CPU (which handles data), memory (where data are stored temporarily before and after processing), and the bus (the interconnection between memory and the processor, and the means by which data are transmitted to or from other devices, such as a disk drive or a monitor). The structure of early computers was based on ideas developed by the mathematician John von Neumann, who, in the 1940’s, conceived a computer architecture in which the CPU controls all events in a sequence: It fetches data from memory, performs calculations on those data, and then stores the results in memory. Because it functions in sequential fashion, the speed of this “scalar processor” is limited by the rate at which the processor is able to complete each cycle of tasks.
Before Cray produced his first supercomputer, other designers tried different approaches. One alternative was to link a vector processor to a scalar unit. A vector processor achieves its speed by performing computations on a large series of numbers (called a vector) at one time rather than in sequential fashion, though specialized and complex programs were necessary to make use of this feature. In fact, vector processing computers spent most of their time operating as traditional scalar processors and were not always efficient at switching back and forth between the two processing types.
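The contrast between scalar and vector processing can be sketched with a modern array library (an illustrative analogy only, not the instruction set of any historical machine): the scalar version handles one number per step in sequence, while the vector version applies one operation to the whole series at once.

```python
import numpy as np

def scalar_sum_of_squares(values):
    """Scalar style: one element per cycle, strictly in sequence."""
    total = 0.0
    for v in values:          # fetch, compute, store -- one number at a time
        total += v * v
    return total

def vector_sum_of_squares(values):
    """Vector style: a single operation applied to the entire series."""
    v = np.asarray(values, dtype=float)
    return float(np.dot(v, v))   # the multiply-add runs over the whole vector

data = [1.0, 2.0, 3.0, 4.0]
assert scalar_sum_of_squares(data) == vector_sum_of_squares(data) == 30.0
```

Both routes compute the same result; the vector form simply expresses the work as one whole-array operation, which is what a vector unit can execute far faster than a step-by-step loop.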
Another option chosen by Cray’s competitors was the notion of “pipelining” the processor’s tasks. A scalar processor often must wait while data are retrieved or stored in memory. Pipelining techniques allow the processor to make use of idle time for calculations in other parts of the program being run, thus increasing the effective speed. A variation on this technique is “parallel processing,” in which multiple processors are linked. If each of, for example, eight central processors is given a portion of a computing task to perform, the task will be completed more quickly than the traditional von Neumann architecture, with its single processor, would allow.
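The idea of giving each of eight processors a portion of the task can be sketched as a divide-and-combine computation. The sketch below only simulates the partitioning (the chunks are processed in turn rather than on real parallel hardware), but the decomposition is the same one a parallel machine would exploit.

```python
def split_into_chunks(data, n_workers):
    """Divide the workload into one portion per (hypothetical) processor."""
    chunk = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def parallel_style_sum(data, n_workers=8):
    # Each partial sum could run on its own processor; here they run in turn.
    partials = [sum(c) for c in split_into_chunks(data, n_workers)]
    return sum(partials)      # combine the independent partial results

numbers = list(range(1, 101))
assert parallel_style_sum(numbers) == sum(numbers) == 5050
```

Because the partial sums are independent, nothing in the combining step depends on the order in which the eight portions finish, which is exactly why the work can be spread across processors.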
Ever the pragmatist, however, Cray decided to employ proved technology rather than use advanced techniques in his first supercomputer, the Cray 1, which was introduced in 1976. Although the Cray 1 did incorporate vector processing, Cray used a simple form of vector calculation that made the technique practical and easy to use. Most striking about this computer was its shape, which was far more modern than its internal design. The Cray 1 was shaped like a cylinder with a small section missing and a hollow center, with what appeared to be a bench surrounding it. The shape of the machine was designed to minimize the length of the interconnecting wires that ran between circuit boards to allow electricity to move the shortest possible distance. The bench concealed an important part of the cooling system that kept the system at an appropriate operating temperature.

Seymour R. Cray

Seymour R. Cray was born in 1925 in Chippewa Falls, Wisconsin. The son of a civil engineer, he became interested in radio and electronics as a boy. After graduating from high school in 1943, he joined the U.S. Army, was posted to Europe in an infantry communications platoon, and fought in the Battle of the Bulge. Back from the war, he pursued his interest in electronics in college while majoring in mathematics at the University of Minnesota. Upon graduation in 1950, he took a job at Engineering Research Associates. It was there that he first learned about computers. In fact, he helped design one of the first digital computers, the ERA 1103, later marketed as the UNIVAC 1103.

Cray co-founded Control Data Corporation in 1957. Based on his ideas, the company built large-scale, high-speed computers. In 1972 he founded his own company, Cray Research Incorporated, with the intention of employing new processing methods and simplifying architecture and software to build the world’s fastest computers. He succeeded, and the series of computers that the company marketed made possible computer modeling as a central part of scientific research in areas as diverse as meteorology, oil exploration, and nuclear weapons design. Through the 1970’s and 1980’s Cray Research was at the forefront of supercomputer technology, which became one of the symbols of American technological leadership.

In 1989 Cray left Cray Research to form still another company, Cray Computer Corporation. He planned to build the next-generation supercomputer, the Cray 3, but advances in microprocessor technology undercut the demand for supercomputers. Cray Computer entered bankruptcy in 1995. A year later he died from injuries sustained in an automobile accident near Colorado Springs, Colorado.
The measurements that describe the performance of supercomputers are called MIPS (millions of instructions per second) for scalar processors and megaflops (millions of floating-point operations per second) for vector processors. (Floating-point numbers are numbers expressed in scientific notation; for example, 10²⁷.) Whereas the fastest computer before the Cray 1 was capable of some 35 MIPS, the Cray 1 was capable of 80 MIPS. Moreover, the Cray 1 was theoretically capable of vector processing at the rate of 160 megaflops, a rate unheard of at the time.
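What these units mean in practice can be illustrated by converting the rates quoted above into running times for a fixed workload (a simplification that treats each quoted rate as a sustained peak; real programs rarely achieve peak rates).

```python
def seconds_for(operations, rate_millions):
    """Time to finish a workload at a rate given in millions of operations/second."""
    return operations / (rate_millions * 1_000_000)

# One million multiply-add pairs = two million floating-point operations.
flops_needed = 2_000_000
t_vector = seconds_for(flops_needed, 160)   # Cray 1 peak vector rate, megaflops
t_scalar = seconds_for(flops_needed, 80)    # treating each operation as one instruction

assert t_vector == 0.0125    # 12.5 milliseconds in vector mode
assert t_scalar == 0.025     # twice as long step by step
```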
Consequences

Seymour Cray first estimated that there would be few buyers for a machine as advanced as the Cray 1, but his estimate turned out to be incorrect. There were many scientists who wanted to perform computer modeling (in which scientific ideas are expressed in such a way that computer-based experiments can be conducted) and who needed raw processing power.

When dealing with natural phenomena such as the weather or geological structures, or in rocket design, researchers need to make calculations involving large amounts of data. Before computers, advanced experimental modeling was simply not possible, since even the modest calculations for the development of atomic energy, for example, consumed days and weeks of scientists’ time.
With the advent of supercomputers, however, large-scale computation of vast amounts of information became possible. Weather researchers can design a detailed program that allows them to analyze complex and seemingly unpredictable weather events such as hurricanes; geologists searching for oil fields can gather data about successful finds to help identify new ones; and spacecraft designers can “describe” in computer terms experimental ideas that are too costly or too dangerous to carry out. As supercomputer performance evolves, there is little doubt that scientists will make ever greater use of its power.
See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; UNIVAC computer.
Further Reading

Edwards, Owen. “Seymour Cray.” Forbes 154, no. 5 (August 29, 1994).
Lloyd, Therese, and Stanley N. Wellborn. “Computers’ Next Frontiers.” U.S. News & World Report 99 (August 26, 1985).
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Zipper, Stuart. “Chief Exec. Leaves Cray Computer.” Electronic News 38, no. 1908 (April, 1992).
Supersonic passenger plane
The invention: The first commercial airliner to fly passengers at speeds in excess of the speed of sound.

The people behind the invention:
Sir Archibald Russell (1904- ), a designer with the British Aircraft Corporation
Pierre Satre (1909- ), technical director at Sud-Aviation
Julian Amery (1919- ), British minister of aviation, 1962-1964
Geoffroy de Cource (1912- ), French minister of aviation, 1962
William T. Coleman, Jr. (1920- ), U.S. secretary of transportation, 1975-1977
Birth of Supersonic Transportation

On January 21, 1976, the Anglo-French Concorde became the world’s first supersonic airliner to carry passengers on scheduled commercial flights. British Airways flew a Concorde from London’s Heathrow Airport to the Persian Gulf emirate of Bahrain in three hours and thirty-eight minutes. At about the same time, Air France flew a Concorde from Paris’s Charles de Gaulle Airport to Rio de Janeiro, Brazil, in seven hours and twenty-five minutes. The Concordes’ cruising speeds were about twice the speed of sound, or 1,350 miles per hour. On May 24, 1976, the United States and Europe became linked for the first time with commercial supersonic air transportation. British Airways inaugurated flights between Dulles International Airport in Washington, D.C., and Heathrow Airport. Likewise, Air France inaugurated flights between Dulles International Airport and Charles de Gaulle Airport. The London-Washington, D.C., flight was flown in an unprecedented time of three hours and forty minutes. The Paris-Washington, D.C., flight was flown in a time of three hours and fifty-five minutes.
The Decision to Build the SST

Events leading to the development and production of the Anglo-French Concorde went back almost twenty years and included approximately $3 billion in investment costs. Issues surrounding the development and final production of the supersonic transport (SST) were extremely complex and at times highly emotional. The concept of developing an SST brought with it environmental concerns and questions, safety issues both in the air and on the ground, political intrigue of international proportions, and enormous economic problems from costs of operations, research, and development.
In England, the decision to begin the SST project was made in October, 1956. Under the promotion of Morien Morgan with the Royal Aircraft Establishment in Farnborough, England, it was decided at the Aviation Ministry headquarters in London to begin development of a supersonic aircraft. This decision was based on the intense competition from the American Boeing 707 and Douglas DC-8 subsonic jets going into commercial service. There was little point in developing another subsonic plane; the alternative was to go above the speed of sound. In November, 1956, at Farnborough, the first meeting of the Supersonic Transport Aircraft Committee, known as STAC, was held.

Members of the STAC proposed that development costs would be in the range of $165 million to $260 million, depending on the range, speed, and payload of the chosen SST. The committee also projected that by 1970, there would be a world market for at least 150 to 500 supersonic planes. Estimates were that the supersonic plane would recover its entire research and development cost through thirty sales. The British, in order to continue development of an SST, needed a European partner as a way of sharing the costs and preempting objections to proposed funding by England’s Treasury.
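The committee’s arithmetic can be checked directly: spreading the projected development cost over thirty sales shows how much each aircraft would have had to recover (a back-of-envelope illustration using only the figures quoted above).

```python
def recovery_per_sale(total_cost_millions, sales):
    """Development cost each sale must recover, in millions of dollars."""
    return total_cost_millions / sales

low = recovery_per_sale(165, 30)    # low end of the STAC cost estimate
high = recovery_per_sale(260, 30)   # high end of the estimate

assert low == 5.5                    # $5.5 million recovered per aircraft
assert round(high, 2) == 8.67        # about $8.7 million per aircraft
```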
In 1960, the British government gave the newly organized British Aircraft Corporation (BAC) $1 million for an SST feasibility study. Sir Archibald Russell, BAC’s chief supersonic designer, visited Pierre Satre, the technical director at the French firm of Sud-Aviation. Satre’s suggestion was to evolve an SST from Sud-Aviation’s highly successful subsonic Caravelle transport. By September, 1962, an agreement was reached by Sud and BAC design teams on a new SST, the Super Caravelle.
There was a bitter battle over the choice of engines, with two British engine firms, Bristol-Siddeley and Rolls-Royce, as contenders. Sir Arnold Hall, the managing director of Bristol-Siddeley, in collaboration with the French aero-engine company SNECMA, was eventually awarded the contract for the engines. The engine chosen was a “civilianized” version of the Olympus, which Bristol had been developing for the multirole TSR-2 combat plane.
The Concorde Consortium

On November 29, 1962, the Concorde Consortium was created by an agreement between England and the French Republic, signed by Ministers of Aviation Julian Amery and Geoffroy de Cource. The first Concorde, Model 001, rolled out from Sud-Aviation’s St. Martin-du-Touch assembly plant on December 11, 1968. The second, Model 002, was completed at the British Aircraft Corporation a few months later. Eight years later, on January 21, 1976, the Concorde became the world’s first supersonic airliner to carry passengers on scheduled commercial flights.
Development of the SST did not come easily for the Anglo-French consortium. The nature of supersonic flight created numerous problems and uncertainties not present for subsonic flight. The SST traveled faster than the speed of sound. Sound travels at 760 miles per hour at sea level at a temperature of 59 degrees Fahrenheit. This speed drops to about 660 miles per hour at sixty-five thousand feet, cruising altitude for the SST, where the air temperature drops to 70 degrees below zero.
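The figures quoted above follow from the standard relation for the speed of sound in air, c = sqrt(γRT), which depends on temperature alone. A minimal check, using standard constants for air and assuming the text’s "70 degrees below zero" is Fahrenheit (the exact figures in the text are rounded):

```python
import math

GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.05       # specific gas constant for air, J/(kg*K)
MPS_TO_MPH = 2.23694 # meters per second to miles per hour

def speed_of_sound_mph(temp_fahrenheit):
    """Speed of sound in air at the given temperature, in miles per hour."""
    kelvin = (temp_fahrenheit - 32.0) * 5.0 / 9.0 + 273.15
    return math.sqrt(GAMMA * R_AIR * kelvin) * MPS_TO_MPH

sea_level = speed_of_sound_mph(59)    # 59 degrees F at sea level
altitude = speed_of_sound_mph(-70)    # about -70 degrees F at 65,000 feet

assert round(sea_level) == 761        # the text rounds this to 760 mph
assert round(altitude) == 660         # matches the quoted cruise-altitude value
```

Because temperature, not altitude itself, sets the speed of sound, the drop from 760 to 660 miles per hour is entirely an effect of the colder air at cruising height.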
The Concorde was designed to fly at a maximum of 1,450 miles per hour. The European designers could use an aluminum alloy construction and stay below the critical skin-friction temperatures that required other airframe alloys, such as titanium. The Concorde was designed with a slender curved wing surface. The design incorporated widely separated engine nacelles, each housing two Olympus 593 jet engines. The Concorde was also designed with a “droop snoot,” providing three positions: the supersonic configuration, a heat-visor-retracted position for subsonic flight, and a nose-lowered position for landing patterns.
Impact

Early SST designers were faced with questions such as the intensity and ionization effect of cosmic rays at flight altitudes of sixty to seventy thousand feet. The “cascade effect” concerned the intensification of cosmic radiation when particles from outer space struck a metallic cover. Scientists looked for ways to shield passengers from this hazard inside the aluminum or titanium shell of an SST flying high above the protective blanket of the troposphere. Experts questioned whether the risk of being struck by meteorites was any greater for the SST than for subsonic jets and looked for evidence on wind shear of jet streams in the stratosphere.
Other questions concerned the strength and frequency of clear air turbulence above forty-five thousand feet, whether the higher ozone content of the air at SST cruise altitude would affect the materials of the aircraft, whether SST flights would upset or destroy the protective nature of the earth’s ozone layer, the effect of aerodynamic heating on material strength, and the tolerable strength of sonic booms over populated areas. These and other questions consumed the designers and researchers involved in developing the Concorde.
Through design research and flight tests, many of the questions were resolved or found to be less significant than anticipated. Several issues, however, grew into environmental, economic, and international disputes. In late 1975, the British and French governments requested permission to use the Concorde at New York’s John F. Kennedy International Airport and at Dulles International Airport for scheduled flights between the United States and Europe. In December, 1975, as a result of strong opposition from anti-Concorde environmental groups, the U.S. House of Representatives approved a six-month ban on SSTs coming into the United States so that the impact of flights could be studied. Secretary of Transportation William T. Coleman, Jr., held hearings to prepare for a decision by February 5, 1976, as to whether to allow the Concorde into U.S. airspace. The British and French, if denied landing rights, threatened to take the United States to an international court, claiming that treaties had been violated.
The treaties in question were the Chicago Convention and Bermuda agreements of February 11, 1946, and March 27, 1946. These treaties prohibited the United States from banning aircraft that both France and Great Britain had certified to be safe. The Environmental Defense Fund contended that the United States had the right to ban SST aircraft on environmental grounds.
Under pressure from both sides, Coleman decided to allow limited Concorde service at Dulles and John F. Kennedy airports for a sixteen-month trial period. Service into John F. Kennedy Airport, however, was delayed by a ban imposed by the Port Authority of New York and New Jersey while a suit brought by the airlines was pending. During the test period, detailed records were to be kept on the Concorde’s noise levels, vibration, and engine emission levels. Other provisions included that the plane would not fly at supersonic speeds over the continental United States; that all flights could be cancelled by the United States with four months’ notice, or immediately if they proved harmful to the health and safety of Americans; and that at the end of a year, four months of study would begin to determine whether the trial period should be extended.
The Concorde’s noise was one of the primary issues in determining whether the plane should be allowed into U.S. airports. The Federal Aviation Administration measured the effective perceived noise in decibels. After three months of monitoring, the Concorde’s departure noise at 3.5 nautical miles was found to vary from 105 to 130 decibels. The Concorde’s approach noise at one nautical mile from threshold varied from 115 to 130 decibels. These readings were approximately equal to the noise levels of other four-engine jets, such as the Boeing 747, on landing but were twice as loud on takeoff.
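The phrase “twice as loud” can be made precise with the common psychoacoustic rule of thumb that perceived loudness roughly doubles for every 10-decibel increase (an approximation of human hearing, not part of the FAA measurements quoted above):

```python
def loudness_ratio(delta_decibels):
    """Approximate perceived-loudness ratio for a given level difference in dB."""
    return 2 ** (delta_decibels / 10.0)

# A departure measured 10 dB above another jet's sounds about twice as loud;
# a 20 dB difference sounds about four times as loud.
assert loudness_ratio(10) == 2.0
assert loudness_ratio(20) == 4.0
```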
The Economics of Operation

Another issue of significance was the economics of the Concorde’s operation and its tremendous investment costs. In 1956, early predictions of Great Britain’s STAC were for a world market of 150 to 500 supersonic planes. In November, 1976, Great Britain’s Gerald Kaufman and France’s Marcel Cavaille said that production of the Concorde would not continue beyond the sixteen vehicles then contracted for with BAC and Sud-Aviation. There was no demand from U.S. airline corporations for the plane. Given that the planes could not fly at supersonic speeds over populated areas because of the sonic boom phenomenon, markets for the SST had to be separated by at least three thousand miles, with flight paths over mostly water or desert. Studies indicated that there were only twelve to fifteen routes in the world for which the Concorde was suitable. The planes were expensive, at a price of approximately $74 million each, and had a limited seating capacity of one hundred passengers. The plane’s range was about four thousand miles.
These statistics compared poorly with those of a Boeing 747, which cost $35 million, seated 360 passengers, and had a range of six thousand miles. In addition, the International Air Transport Association negotiated that fares for Concorde flights should be equivalent to current first-class fares plus 20 percent. The market for the Anglo-French Concorde was thus limited to the elite business traveler who valued speed over cost of transportation. Given these factors, Great Britain and France would never recover their research and development costs.
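The economic gap shows up starkly when the quoted figures are reduced to capital cost per seat (a rough comparison using only the purchase prices and seat counts above, ignoring operating costs and fares):

```python
def cost_per_seat_millions(price_millions, seats):
    """Purchase price divided across seats, in millions of dollars per seat."""
    return price_millions / seats

concorde = cost_per_seat_millions(74, 100)   # $74 million, 100 seats
boeing747 = cost_per_seat_millions(35, 360)  # $35 million, 360 seats

assert concorde == 0.74                      # $740,000 of capital per seat
assert round(boeing747, 3) == 0.097          # about $97,000 per seat

# The Concorde tied up roughly 7.6 times as much capital per seat sold.
assert round(concorde / boeing747, 1) == 7.6
```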
See also Airplane; Bullet train; Dirigible; Rocket; Stealth aircraft; Turbojet; V-2 rocket.
Further Reading

Ellingsworth, Rosalind K. “Concorde Stresses Time, Service.” Aviation Week and Space Technology 105 (August 16, 1976).
Kozicharow, Eugene. “Concorde Legal Questions Raised.” Aviation Week and Space Technology 104 (January 12, 1976).
Ropelewski, Robert. “Air France Poised for Concorde Service.” Aviation Week and Space Technology 104 (January 19, 1976).
Sparaco, Pierre. “Official Optimism Grows for Concorde’s Return.” Aviation Week and Space Technology 154, no. 8 (February 19, 2001).
Trubshaw, Brian. Concorde: The Inside Story. Thrupp, Stroud: Sutton, 2000.
Synchrocyclotron
The invention: A powerful particle accelerator that performed better than its predecessor, the cyclotron.

The people behind the invention:
Edwin Mattison McMillan (1907-1991), an American physicist who won the Nobel Prize in Chemistry in 1951
Vladimir Iosifovich Veksler (1907-1966), a Soviet physicist
Ernest Orlando Lawrence (1901-1958), an American physicist
Hans Albrecht Bethe (1906- ), a German American physicist
The First Cyclotron

The synchrocyclotron is a large electromagnetic apparatus designed to accelerate atomic and subatomic particles to high energies. It therefore falls under the broad class of scientific devices known as “particle accelerators.” By the early 1920’s, the experimental work of physicists such as Ernest Rutherford and George Gamow demanded that an artificial means be developed to generate streams of atomic and subatomic particles at energies much greater than those occurring naturally. This requirement led Ernest Orlando Lawrence to develop the cyclotron, the prototype for most modern accelerators. The synchrocyclotron was developed in response to the limitations of the early cyclotron.

In September, 1930, Lawrence announced the basic principles behind the cyclotron. Ionized—that is, electrically charged—particles are admitted into the central section of a circular metal drum. Once inside the drum, the particles are exposed to an electric field alternating within a constant magnetic field. The combined action of the electric and magnetic fields accelerates the particles along a circular path, or orbit, increasing the particles’ energy and orbital radii. This process continues until the particles reach the desired energy and velocity and are extracted from the machine for use in experiments ranging from particle-to-particle collisions to the synthesis of radioactive elements.
Although Lawrence was interested in the practical applications of his invention in medicine and biology, the cyclotron also was applied to a variety of experiments in a subfield of physics called “high-energy physics.” Among the earliest applications were studies of the subatomic, or nuclear, structure of matter. The energetic particles generated by the cyclotron made possible the very type of experiment that Rutherford and Gamow had attempted earlier. These experiments, which bombarded lithium targets with streams of highly energetic accelerated protons, attempted to probe the inner structure of matter.

Although funding for scientific research on a large scale was scarce before World War II (1939-1945), Lawrence nevertheless conceived of a 467-centimeter cyclotron that would generate particles with energies approaching 100 million electronvolts. By the end of the war, increases in the public and private funding of scientific research and a demand for higher-energy particles created a situation in which this plan looked as if it would become reality, were it not for an inherent limit in the physics of cyclotron operation.
Overcoming the Problem of Mass

In 1937, Hans Albrecht Bethe discovered a severe theoretical limitation on the energies that could be produced in a cyclotron. Physicist Albert Einstein’s special theory of relativity had demonstrated that as any particle with mass gains velocity approaching the speed of light, its mass increases. Bethe showed that this increase in mass would eventually slow the rotation of each particle. Therefore, as each particle’s rotation slows while the frequency of the alternating electric field remains constant, the particle falls out of step with the field and eventually stops gaining velocity. This factor set an upper limit on the energies that any cyclotron could produce.
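Bethe’s limit can be made concrete with the cyclotron relation f = qB/(2πγm): once the relativistic factor γ multiplies the rest mass, the orbital frequency falls as the particle’s kinetic energy grows. The sketch below uses standard constants for the proton and an illustrative field strength of 1.5 tesla (not a figure from the Berkeley machine):

```python
import math

Q_PROTON = 1.602176634e-19    # proton charge, coulombs
M_PROTON = 1.67262192e-27     # proton rest mass, kg
REST_ENERGY_MEV = 938.272     # proton rest energy, MeV

def orbital_frequency_hz(b_tesla, kinetic_mev=0.0):
    """Cyclotron frequency f = qB / (2*pi*gamma*m), with gamma from kinetic energy."""
    gamma = 1.0 + kinetic_mev / REST_ENERGY_MEV   # relativistic mass factor
    return Q_PROTON * b_tesla / (2.0 * math.pi * gamma * M_PROTON)

f_slow = orbital_frequency_hz(1.5)                   # low-speed proton
f_fast = orbital_frequency_hz(1.5, kinetic_mev=100)  # 100-MeV proton

# At 100 MeV the orbital frequency has dropped by about 10 percent; a
# fixed-frequency field falls out of step, which is the limit Bethe identified.
assert f_fast < f_slow
assert round(f_slow / f_fast, 2) == 1.11
```

Slowing the alternating field to track this falling orbital frequency is precisely the synchronization that McMillan and Veksler proposed.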
Edwin Mattison McMillan, a colleague of Lawrence at Berkeley, proposed a solution to Bethe’s problem in 1945. Simultaneously and independently, Vladimir Iosifovich Veksler of the Soviet Union proposed the same solution. They suggested that the frequency of the alternating electric field be slowed to meet the decreasing rotational frequencies of the accelerating particles—in essence, “synchronizing” the electric field with the moving particles. The result was the synchrocyclotron.

Prior to World War II, Lawrence and his colleagues had obtained the massive electromagnet for the new 100-million-electronvolt cyclotron. This 467-centimeter magnet would become the heart of the new Berkeley synchrocyclotron. After initial tests proved successful, the Berkeley team decided that it would be reasonable to convert the cyclotron magnet for use in a new synchrocyclotron. The apparatus was operational in November of 1946.
These high energies combined with economic factors to make the<br />
synchrocyclotron a major achievement for the Berkeley Radiation<br />
Laboratory. The synchrocyclotron required less voltage to produce<br />
higher energies than the cyclotron because the obstacles cited by<br />
Bethe were virtually nonexistent. In essence, the energies produced<br />
by synchrocyclotrons are limited only by the economics of building<br />
them. These factors led to the planning and construction of other<br />
synchrocyclotrons in the United States and Europe. In 1957, the<br />
Berkeley apparatus was redesigned in order to achieve energies of<br />
720 million electronvolts, at that time the record for cyclotrons of<br />
any kind.<br />
Impact<br />
Previously, scientists had had to rely on natural sources for highly<br />
energetic subatomic and atomic particles with which to experiment.<br />
In the mid-1920’s, the American physicist Robert Andrews Millikan<br />
began his experimental work in cosmic rays, which are one natural<br />
source of energetic particles called “mesons.” Mesons are charged<br />
particles that have a mass more than two hundred times that of the<br />
electron and are therefore of great benefit in high-energy physics<br />
experiments. In February of 1949, McMillan announced the first<br />
mesons ever produced synthetically, using the synchrocyclotron.<br />
McMillan’s theoretical insight led not only to the synchrocyclotron<br />
but also to the electron synchrotron, the proton synchrotron,<br />
the microtron, and the linear accelerator. Both proton and electron<br />
synchrotrons have been used successfully to produce precise beams<br />
of muons and pi-mesons, or pions (a type of meson).<br />
The increased use of accelerators ushered in a new era of physics<br />
research, one increasingly dominated by large machines and by the<br />
ever larger teams of scientists and engineers required to run individual<br />
experiments. More sophisticated machines have generated energies<br />
in excess of 2 trillion electronvolts at the United States’ Fermi<br />
National Accelerator Laboratory, or Fermilab, in Illinois. Part of the<br />
huge Tevatron apparatus at Fermilab, which generates these particles,<br />
is a proton synchrotron, a direct descendant of McMillan and<br />
Lawrence’s early efforts.<br />
See also Atomic bomb; Cyclotron; Electron microscope; Field ion<br />
microscope; Geiger counter; Hydrogen bomb; Mass spectrograph;<br />
Neutrino detector; Scanning tunneling microscope; Tevatron accelerator.<br />
Further Reading<br />
Bernstein, Jeremy. Hans Bethe: Prophet of Energy. New York: Basic<br />
Books, 1980.<br />
McMillan, Edwin. “The Synchrotron: A Proposed High-Energy Particle<br />
Accelerator.” Physical Review 68 (September, 1945).<br />
_____. “Vladimir Iosifovich Veksler.” Physics Today (November,<br />
1966).<br />
“Witness to a Century.” Discover 20 (December, 1999).
Synthetic amino acid<br />
The invention: A method for synthesizing amino acids by combining<br />
water, hydrogen, methane, and ammonia and exposing the<br />
mixture to an electric spark.<br />
The people behind the invention:<br />
Stanley Lloyd Miller (1930- ), an American professor of<br />
chemistry<br />
Harold Clayton Urey (1893-1981), an American chemist who<br />
won the 1934 Nobel Prize in Chemistry<br />
Aleksandr Ivanovich Oparin (1894-1980), a Russian biochemist<br />
John Burdon Sanderson Haldane (1892-1964), a British scientist<br />
Prebiological Evolution<br />
The origin of life on Earth has long been a tough problem for scientists<br />
to solve. While most scientists can envision the development<br />
of life through geologic time from simple single-cell bacteria<br />
to complex mammals by the processes of mutation <strong>and</strong> natural selection,<br />
they have found it difficult to develop a theory to define<br />
how organic materials were first formed <strong>and</strong> organized into lifeforms.<br />
This stage in the development of life before biologic systems<br />
arose, which is called “chemical evolution,” occurred between<br />
4.5 <strong>and</strong> 3.5 billion years ago. Although great advances in<br />
genetics <strong>and</strong> biochemistry have shown the intricate workings of<br />
the cell, relatively little light has been shed on the origins of this intricate<br />
machinery of the cell. Some experiments, however, have<br />
provided important data from which to build a scientific theory of<br />
the origin of life. The first of these experiments was the classic<br />
work of Stanley Lloyd Miller.<br />
Miller worked with Harold Clayton Urey, a Nobel laureate, on the<br />
environments of the early earth. John Burdon Sanderson Haldane, a<br />
British biochemist, had suggested in 1929 that the earth’s early atmosphere<br />
was a reducing one—that it contained no free oxygen. In<br />
1952, Urey published a seminal work in planetology, The Planets, in<br />
which he elaborated on Haldane’s suggestion and postulated<br />
that the earth had formed from a cold stellar dust cloud. Urey<br />
thought that the earth’s primordial atmosphere probably contained<br />
elements in the approximate relative abundances found in the solar<br />
system and the universe.<br />
It had been discovered in 1929 that the Sun is approximately 87<br />
percent hydrogen, and by 1935 it was known that hydrogen makes<br />
up the vast majority (92.8 percent) of atoms in the universe.<br />
Urey reasoned that the earth’s early atmosphere contained mostly<br />
hydrogen, with the oxygen, nitrogen, and carbon atoms chemically<br />
bonded to hydrogen to form water, ammonia, and methane. Most<br />
important, free oxygen could not exist in the presence of such an<br />
abundance of hydrogen.<br />
As early as the mid-1920’s, Aleksandr Ivanovich Oparin, a Russian<br />
biochemist, had argued that the organic compounds necessary<br />
for life had been built up on the early earth by chemical combinations<br />
in a reducing atmosphere. The energy from the Sun would<br />
have been sufficient to drive the reactions to produce life. Haldane<br />
later proposed that the organic compounds would accumulate in<br />
the oceans to produce a "dilute organic soup" and that life might<br />
have arisen by some unknown process from that mixture of organic<br />
compounds.<br />
Primordial Soup in a Bottle<br />
Miller combined the ideas of Oparin and Urey and designed a<br />
simple but elegant experiment. He decided to mix the gases presumed<br />
to exist in the early atmosphere (water vapor, hydrogen, ammonia,<br />
and methane) and expose them to an electrical spark to determine<br />
which, if any, organic compounds were formed. To do this,<br />
he constructed a relatively simple system, essentially consisting of<br />
two Pyrex flasks connected by tubing in a roughly circular pattern.<br />
The water and gases in the smaller flask were boiled, and the resulting<br />
gas was forced through the tubing into a larger flask that contained<br />
tungsten electrodes. As the gases passed the electrodes, an electrical<br />
spark was generated, and from this larger flask the gases and any<br />
other compounds were condensed. The gases were recycled through<br />
the system, whereas the organic compounds were trapped at the<br />
bottom of the system.<br />
Miller was trying to simulate conditions that had prevailed on<br />
the early earth. After one week of operation, Miller extracted<br />
and analyzed the residue of compounds at the bottom of the system.<br />
The results were truly astounding. He found that numerous organic<br />
compounds had, indeed, been formed in only that one week. As<br />
much as 15 percent of the carbon (originally in the gas methane) had<br />
been combined into organic compounds, and at least 5 percent of<br />
the carbon was incorporated into biochemically important compounds.<br />
The most important compounds produced were some of<br />
the twenty amino acids essential to life on Earth.<br />
The formation of amino acids is significant because they are the<br />
building blocks of proteins. Proteins consist of a specific sequence of<br />
amino acids assembled into a well-defined pattern. Proteins are necessary<br />
for life for two reasons. First, they are important structural<br />
[Figure: The Miller-Urey experiment. The diagram shows the closed<br />
apparatus: boiling water producing water vapor; the gases NH3, CH4,<br />
and H2; spark electrodes; a condenser; cooled water containing organic<br />
compounds; and a port for withdrawing samples for chemical analysis.]<br />
materials used to build the cells of the body. Second, the enzymes<br />
that increase the rate of the multitude of biochemical reactions of life<br />
are also proteins. Miller had not produced proteins themselves, but<br />
he had shown clearly that the precursors of proteins—the amino<br />
acids—were easily formed in a reducing environment with the<br />
appropriate energy.<br />
Perhaps the most important aspect of the experiment was the<br />
ease with which the amino acids were formed. Of all the thousands<br />
of organic compounds known to chemists, amino acids were among<br />
those formed by this simple experiment. This strongly implied that<br />
one of the first steps in chemical evolution was not only possible but<br />
also highly probable. All that was necessary for the synthesis of<br />
amino acids was the presence of the common gases of the solar system,<br />
a reducing environment, and an appropriate energy source, all<br />
of which existed on the early Earth.<br />
Consequences<br />
Miller opened an entirely new field of research with his pioneering<br />
experiments. His results showed that much about chemical<br />
evolution could be learned by experimentation in the laboratory.<br />
As a result, Miller and many others soon tried variations on<br />
his original experiment by altering the combination of gases, using<br />
other gases, and trying other types of energy sources. Almost all<br />
the essential amino acids have been produced in these laboratory<br />
experiments.<br />
Miller’s work was based on the presumed composition of the<br />
primordial atmosphere of Earth. The composition of this atmosphere<br />
was calculated on the basis of the abundance of elements<br />
in the universe. If this reasoning is correct, then it is highly likely<br />
that there are many other bodies in the universe that have similar<br />
atmospheres and are near energy sources similar to the Sun.<br />
Moreover, Miller’s experiment strongly suggests that amino acids,<br />
and perhaps life as well, should have formed on other planets.<br />
See also Artificial hormone; Artificial kidney; Synthetic DNA;<br />
Synthetic RNA.
Further Reading<br />
Dronamraju, Krishna R., and J. B. S. Haldane. Haldane’s Daedalus Revisited.<br />
New York: Oxford University Press, 1995.<br />
Lipkin, Richard. "Early Earth May Have Had Two Key RNA Bases."<br />
Science News 148, no. 1 (July 1, 1995).<br />
Miller, Stanley L., and Leslie E. Orgel. The Origins of Life on the Earth.<br />
Englewood Cliffs, N.J.: Prentice-Hall, 1974.<br />
Nelson, Kevin E., Matthew Levy, and Stanley L. Miller. "Peptide<br />
Nucleic Acids Rather than RNA May Have Been the First Genetic<br />
Molecule." Proceedings of the National Academy of Sciences of the<br />
United States of America 97, no. 8 (April 11, 2000).<br />
Yockey, Hubert P. "Walther Lob, Stanley L. Miller, and Prebiotic<br />
‘Building Blocks’ in the Silent Electrical Discharge." Perspectives<br />
in Biology and Medicine 41, no. 1 (Autumn, 1997).<br />
Synthetic DNA<br />
The invention: A method for replicating viral deoxyribonucleic<br />
acid (DNA) in a test tube that paved the way for genetic engineering.<br />
The people behind the invention:<br />
Arthur Kornberg (1918- ), an American physician and<br />
biochemist<br />
Robert L. Sinsheimer (1920- ), an American biophysicist<br />
Mehran Goulian (1929- ), a physician and biochemist<br />
The Role of DNA<br />
Until the mid-1940’s, it was believed that proteins were the<br />
carriers of genetic information, the source of heredity. Proteins<br />
appeared to be the only biological molecules that had the complexity<br />
necessary to encode the enormous amount of genetic information<br />
required to reproduce even the simplest organism.<br />
Nevertheless, proteins could not be shown to have genetic properties,<br />
and by 1944, it was demonstrated conclusively that deoxyribonucleic<br />
acid (DNA) was the material that transmitted hereditary<br />
information. It was discovered that DNA isolated from a<br />
strain of infective bacteria that can cause pneumonia was able to<br />
transform a strain of noninfective bacteria into an infective strain;<br />
in addition, the infectivity trait was transmitted to future generations.<br />
Subsequently, it was established that DNA is the genetic material<br />
in virtually all forms of life.<br />
Once DNA was known to be the transmitter of genetic information,<br />
scientists sought to discover how it performs its role. DNA is a<br />
polymeric molecule composed of four different units, called “deoxynucleotides.”<br />
The units consist of a sugar, a phosphate group, and a<br />
base; they differ only in the nature of the base, which is always one of<br />
four related compounds: adenine, guanine, cytosine, or thymine. The<br />
way in which such a polymer could transmit genetic information,<br />
however, was difficult to discern. In 1953, biophysicists James D. Watson<br />
and Francis Crick brilliantly determined the three-dimensional<br />
structure of DNA by analyzing X-ray diffraction photographs of DNA<br />
fibers. From their analysis of the structure of DNA, Watson and Crick<br />
inferred DNA’s mechanism of replication. Their work led to an understanding<br />
of gene function in molecular terms.<br />
Watson and Crick showed that DNA has a very long double-stranded<br />
(duplex) helical structure. DNA has a duplex structure because<br />
each base forms a link to a specific base on the opposite<br />
strand. The discovery of this complementary pairing of bases provided<br />
a model to explain the two essential functions of a hereditary<br />
molecule: It must preserve the genetic code from one generation to<br />
the next, and it must direct the development of the cell.<br />
Watson and Crick also proposed that DNA is able to serve as a<br />
mold (or template) for its own reproduction because the two strands<br />
of DNA polymer can separate. Upon separation, each strand acts as a<br />
template for the formation of a new complementary strand. An adenine<br />
base in the existing strand gives rise to thymine in the new strand,<br />
a guanine to cytosine, and so on. In this manner, a new double-stranded<br />
DNA is generated that is identical to the parent DNA.<br />
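The complementary-pairing rule described above can be illustrated in a few lines of Python. This is a deliberately simplified sketch: it ignores strand directionality, the enzymes involved, and proofreading, and shows only the base-pairing logic:

```python
# Watson-Crick pairing: adenine-thymine and guanine-cytosine.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template):
    """Return the strand a template strand would direct."""
    return "".join(PAIR[base] for base in template)

parent = "ATGCGT"
daughter = complementary_strand(parent)      # "TACGCA"
# Copying the copy regenerates the original sequence, which is how
# the daughter duplex ends up identical to the parent duplex.
assert complementary_strand(daughter) == parent
```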
DNA in a Test Tube<br />
Watson and Crick’s theory provided a valuable model for the reproduction<br />
of DNA, but it did not explain the biological mechanism<br />
by which the process occurs. The biochemical pathway of DNA reproduction<br />
<strong>and</strong> the role of the enzymes required for catalyzing the<br />
reproduction process were discovered by Arthur Kornberg and his<br />
coworkers. For his success in achieving DNA synthesis in a test tube<br />
and for discovering and isolating an enzyme—DNA polymerase—<br />
that catalyzed DNA synthesis, Kornberg won the 1959 Nobel Prize<br />
in Physiology or Medicine.<br />
To achieve DNA replication in a test tube, Kornberg found that a<br />
small amount of preformed DNA must be present, in addition to<br />
DNA polymerase enzyme and all four of the deoxynucleotides that<br />
occur in DNA. Kornberg discovered that the base composition of<br />
the newly made DNA was determined solely by the base composition<br />
of the preformed DNA, which had been used as a template in<br />
the test-tube synthesis. This result showed that DNA polymerase<br />
obeys instructions dictated by the template DNA. It is thus said to<br />
be "template-directed." DNA polymerase was the first template-directed<br />
enzyme to be discovered.<br />
Although test-tube synthesis was a most significant achievement,<br />
important questions about the precise character of the newly<br />
made DNA were still unanswered. Methods of analyzing the order,<br />
or sequence, of the bases in DNA were not available, <strong>and</strong> hence it<br />
could not be shown directly whether DNA made in the test tube was<br />
an exact copy of the template DNA or merely an approximate<br />
copy. In addition, some DNAs prepared by DNA polymerase appeared<br />
to be branched structures. Since chromosomes in living cells<br />
contain long, linear, unbranched strands of DNA, this branching<br />
might have indicated that DNA synthesized in a test tube was not<br />
equivalent to DNA synthesized in the living cell.<br />
Kornberg realized that the best way to demonstrate that newly<br />
synthesized DNA is an exact copy of the original was to test the new<br />
DNA for biological activity in a suitable system. Kornberg reasoned<br />
that a demonstration of infectivity in viral DNA produced in a test<br />
tube would prove that polymerase-catalyzed synthesis was virtually<br />
error-free <strong>and</strong> equivalent to natural, biological synthesis. The<br />
experiment, carried out by Kornberg, Mehran Goulian at Stanford<br />
University, and Robert L. Sinsheimer at the California Institute of<br />
Technology, was a complete success. The viral DNAs produced in a<br />
test tube by the DNA polymerase enzyme, using a viral DNA template,<br />
were fully infective. This synthesis showed that DNA polymerase<br />
could copy not merely a single gene but also an entire chromosome<br />
of a small virus without error.<br />
Consequences<br />
The purification of DNA polymerase and the preparation of biologically<br />
active DNA were major achievements that influenced<br />
biological research on DNA for decades. Kornberg’s methodology<br />
proved to be invaluable in the discovery of other enzymes that synthesize<br />
DNA. These enzymes have been isolated from Escherichia<br />
coli bacteria and from other bacteria, viruses, and higher organisms.<br />
The test-tube preparation of viral DNA also had significance for<br />
the studies of genes and chromosomes. In the mid-1960’s, it had not<br />
been established that a chromosome contains a continuous strand of<br />
DNA. Kornberg and Sinsheimer’s synthesis of a viral chromosome<br />
proved that it was, indeed, a very long strand of uninterrupted<br />
DNA.<br />
Kornberg and Sinsheimer’s work laid the foundation for subsequent<br />
recombinant DNA research and for genetic engineering technology.<br />
This technology promises to revolutionize both medicine<br />
and agriculture. The enhancement of food production and the generation<br />
of new drugs and therapies are only a few of the benefits<br />
that may be expected.<br />
See also Artificial chromosome; Artificial hormone; Cloning; Genetic<br />
“fingerprinting”; Genetically engineered insulin; In vitro plant<br />
culture; Synthetic amino acid; Synthetic RNA.<br />
Further Reading<br />
Baker, Tania A., and Arthur Kornberg. DNA Replication. 2d ed. New<br />
York: W. H. Freeman, 1991.<br />
Kornberg, Arthur. The Golden Helix: Inside Biotech Ventures. Sausalito,<br />
Calif.: University Science Books, 1995.<br />
_____. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge,<br />
Mass.: Harvard University Press, 1991.<br />
Sinsheimer, Robert. The Strands of a Life: The Science of DNA and the<br />
Art of Education. Berkeley: University of California Press, 1994.<br />
Synthetic RNA<br />
The invention: A method for synthesizing the biological molecule<br />
RNA, demonstrating that this process can occur outside the living<br />
cell.<br />
The people behind the invention:<br />
Severo Ochoa (1905-1993), a Spanish biochemist who shared<br />
the 1959 Nobel Prize in Physiology or Medicine<br />
Marianne Grunberg-Manago (1921- ), a French biochemist<br />
Marshall W. Nirenberg (1927- ), an American biochemist<br />
who won the 1968 Nobel Prize in Physiology or Medicine<br />
Peter Lengyel (1929- ), a Hungarian American biochemist<br />
RNA Outside the Cells<br />
In the early decades of the twentieth century, genetics had not yet<br />
been experimentally united with biochemistry. This merging soon<br />
occurred, however, with work involving the mold Neurospora crassa.<br />
This Nobel award-winning work by biochemist Edward Lawrie<br />
Tatum and geneticist George Wells Beadle showed that genes control<br />
the production of proteins, which are major functional molecules in<br />
cells. Yet no one knew the chemical composition of genes and chromosomes,<br />
the molecules of heredity.<br />
The American bacteriologist Oswald T. Avery and his colleagues<br />
at New York’s Rockefeller Institute determined experimentally that<br />
the molecular basis of heredity was a large polymer known as deoxyribonucleic<br />
acid (DNA). Avery’s discovery triggered a furious<br />
worldwide search for the particular structural characteristics of<br />
DNA, which allow for the known biological characteristics of genes.<br />
One of the most famous studies in the history of science solved<br />
this problem in 1953. Scientists James D. Watson, Francis Crick, and<br />
Maurice H. F. Wilkins postulated that DNA exists as a double helix.<br />
That is, two long strands twist about each other in a predictable pattern,<br />
with each single strand held to the other by weak, reversible<br />
linkages known as “hydrogen bonds.” About this time, researchers<br />
recognized also that a molecule closely related to DNA, ribonucleic
acid (RNA), plays an important role in transcribing the genetic information<br />
as well as in other biological functions.<br />
Severo Ochoa was born in Spain as the science of genetics was<br />
developing. In 1942, he moved to New York University, where he<br />
studied the bacterium Azotobacter vinelandii. Specifically, Ochoa was<br />
focusing on the question of how cells process energy in the form of<br />
organic molecules such as the sugar glucose to provide usable biological<br />
energy in the form of adenosine triphosphate (ATP). With<br />
postdoctoral fellow Marianne Grunberg-Manago, he studied enzymatic<br />
reactions capable of incorporating inorganic phosphate (a<br />
compound consisting of one atom of phosphorus and four atoms of<br />
oxygen) into adenosine diphosphate (ADP) to form ATP.<br />
One particularly interesting reaction was followed by monitoring<br />
the amount of radioactive phosphate reacting with ADP. Following<br />
separation of the reaction products, it was discovered that<br />
the main product was not ATP, but a much larger molecule. Chemical<br />
characterization demonstrated that this product was a polymer<br />
of adenosine monophosphate. When other nucleoside diphosphates,<br />
such as inosine diphosphate, were used in the reaction, the<br />
corresponding polymer of inosine monophosphate was formed.<br />
Thus, in each case, a polymer (a long string of building-block<br />
units) was formed. The polymers formed were synthetic RNAs, and<br />
the enzyme responsible for the conversion became known as "polynucleotide<br />
phosphorylase." Once early skepticism was resolved,<br />
biochemists received this finding with great enthusiasm, for no<br />
technique had ever been discovered by which a nucleic acid similar<br />
to RNA could be synthesized outside the cell.<br />
Learning the Language<br />
Ochoa and Peter Lengyel, along with Marshall W. Nirenberg at the<br />
National Institutes of Health, took advantage of this breakthrough to<br />
synthesize different RNAs useful in cracking the genetic code. Crick<br />
had postulated that the flow of information in biological systems is<br />
from DNA to RNA to protein. In other words, genetic information<br />
contained in the DNA structure is transcribed into complementary<br />
RNA structures, which, in turn, are translated into protein. Protein<br />
synthesis, an extremely complex process, involves bringing a<br />
type of RNA, known as messenger RNA, together with amino acids<br />
and huge cellular organelles known as ribosomes.<br />
Yet investigators did not know the nature of the nucleic acid alphabet—for<br />
example, how many single units of the RNA polymer<br />
were needed to code for each amino acid, and the order the units<br />
must be in to stand for a "word" in the nucleic acid language. In<br />
1961, Nirenberg demonstrated that the synthetic RNA polymer<br />
with multiple units of uracil (poly U) would "code" only for a protein<br />
containing the amino acid phenylalanine. Each three units (U’s)<br />
gave one phenylalanine. Therefore, genetic words each contain<br />
three letters. UUU translates into phenylalanine. Poly A, the first<br />
polymer discovered with polynucleotide phosphorylase, coded<br />
for a protein containing multiple lysines. That is, AAA translates<br />
into the amino acid lysine.<br />
Words containing combinations of letters, such as AUG, were<br />
not as easily studied, but Nirenberg, Ochoa, and Gobind Khorana of<br />
the University of Wisconsin eventually uncovered the exact translation<br />
for each amino acid. In RNA, there are four possible letters (A, U,<br />
G, and C) and three letters in each word. Accordingly, there are sixty-four<br />
possible words. With only twenty amino acids, it became clear<br />
that more than one RNA word can translate into a given amino acid.<br />
Yet no given word stands for more than one amino acid. A few<br />
words do not translate into any amino acid; they are stop signals, telling<br />
the ribosome to cease translating RNA.<br />
The question of the direction in which an RNA is translated is critical.<br />
For example, CAA codes for the amino acid glutamine, but the reverse,<br />
AAC, translates to the amino acid asparagine. Such a difference<br />
matters because the exact sequence of a protein determines its<br />
activity—that is, what it will do in the body and therefore what genetic<br />
trait it will express.<br />
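The three-letter-word arithmetic and the direction dependence described above can be checked with a short Python sketch (only four entries of the standard codon table are included here, purely for illustration):

```python
from itertools import product

# Four RNA letters taken three at a time: 4**3 = 64 possible words.
codons = ["".join(p) for p in product("AUGC", repeat=3)]
assert len(codons) == 64

# A tiny excerpt of the standard genetic code, not the full table.
CODE = {
    "UUU": "phenylalanine",
    "AAA": "lysine",
    "CAA": "glutamine",
    "AAC": "asparagine",
}

def translate(rna):
    """Read an RNA string three letters at a time, left to right."""
    return [CODE[rna[i:i + 3]] for i in range(0, len(rna), 3)]

# Reading direction matters: CAA and its reversal AAC differ.
print(translate("CAAAAC"))  # ['glutamine', 'asparagine']
```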
Consequences<br />
Synthetic RNAs provided the key to understanding the genetic<br />
code. The genetic code is universal; it operates in all organisms, simple<br />
or complex. It is even used by viruses, which border on life but are<br />
not alive. Spelling out the genetic code was one of the top discoveries of<br />
the twentieth century. Nearly all work in molecular biology depends<br />
on this knowledge.<br />
The availability of synthetic RNAs has provided hybridization<br />
tools for molecular geneticists. Hybridization is a technique in which<br />
an RNA is allowed to bind in a complementary fashion to DNA under<br />
investigation. The greater the similarity between RNA and DNA,<br />
the greater the amount of binding. The differential binding allows for<br />
seeking, finding, and ultimately isolating a target DNA from a large,<br />
diverse pool of DNA—in short, finding a needle in a haystack. Hybridization<br />
has become an indispensable aid in experimental molecular<br />
genetics as well as in applied sciences, such as forensics.<br />
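The logic of hybridization screening (more complementary positions mean tighter binding) can be caricatured in Python. The scoring below is a toy match count, not a model of real duplex thermodynamics, and the gene names are invented for illustration:

```python
# An RNA base pairs with its DNA partner: A-T, U-A, G-C, C-G.
RNA_DNA_PAIR = {"A": "T", "U": "A", "G": "C", "C": "G"}

def binding_score(rna_probe, dna_strand):
    """Count positions where the probe pairs with the DNA strand."""
    return sum(1 for r, d in zip(rna_probe, dna_strand)
               if RNA_DNA_PAIR[r] == d)

# Screen a pool of candidate DNAs with a single probe; the best
# scorer is the "needle in the haystack."
probe = "AUGGCU"
pool = {"gene1": "TACCGA", "gene2": "TAGGGA", "gene3": "CCCCCC"}
best = max(pool, key=lambda name: binding_score(probe, pool[name]))
print(best)  # gene1, which pairs at all six positions
```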
See also Artificial chromosome; Artificial hormone; Cloning; Genetic<br />
“fingerprinting”; Genetically engineered insulin; In vitro plant<br />
culture; Synthetic amino acid; Synthetic DNA.<br />
Further Reading<br />
“Biochemist Severo Ochoa Dies: Won Nobel Prize.” Washington Post<br />
(November 3, 1993).<br />
Santesmases, Maria Jesus. "Severo Ochoa and the Biomedical Sciences<br />
in Spain Under Franco, 1959-1975.” Isis 91, no. 4 (December,<br />
2000).<br />
“Severo Ochoa, 1905-1993.” Nature 366, no. 6454 (December, 1993).
Syphilis test<br />
The invention: The first simple test for detecting the presence of<br />
the venereal disease syphilis, which led to better syphilis control and<br />
other advances in immunology.<br />
The people behind the invention:<br />
Reuben Leon Kahn (1887-1974), a Soviet-born American<br />
serologist and immunologist<br />
August von Wassermann (1866-1925), a German physician and<br />
bacteriologist<br />
Columbus’s Discoveries<br />
Syphilis is one of the chief venereal diseases, a group of diseases<br />
whose name derives from Venus, the Roman goddess of love. The<br />
term “venereal” arose from the idea that the diseases were transmitted<br />
solely by sexual contact with an infected individual. Although<br />
syphilis is almost always passed from one person to another in this<br />
way, it occasionally arises after contact with objects used by infected<br />
people in highly unclean surroundings, particularly in the underdeveloped<br />
countries of the world.<br />
It is believed by many that syphilis was introduced to Europe by<br />
the members of Spanish explorer Christopher Columbus’s crew—<br />
supposedly after they were infected by sexual contact with West Indian<br />
women—during their voyages of exploration. Columbus is reported<br />
to have died of heart and brain problems very similar to<br />
symptoms produced by advanced syphilis. At that time, according<br />
to many historians, syphilis spread rapidly over sixteenth-century<br />
Europe. The name "syphilis" was coined by the Italian physician<br />
Girolamo Fracastoro in a 1530 epic poem.<br />
Modern syphilis is much milder than the original disease and relatively<br />
uncommon. Yet, if it is not identified and treated appropriately,<br />
syphilis can be devastating and even fatal. It can also be passed from<br />
pregnant mothers to their unborn children. In these cases, the afflicted<br />
children will develop serious health problems that can include<br />
paralysis, insanity, and heart disease. Therefore, the understanding,<br />
detection, and cure of syphilis are important worldwide.<br />
Syphilis is caused by a spiral-shaped germ called a “spirochete.” Spirochetes enter the body through breaks in the skin or through the mucous membranes, regardless of how they are transmitted. Once spirochetes enter the body, they spread rapidly. During the first four to six weeks after infection, syphilis, said to be in its primary phase, is very contagious. During this time, it is identified by the appearance of a sore, or chancre, at the entry site of the infecting spirochetes. The chancre disappears quickly, and within six to twenty-four weeks the disease shows itself as a skin rash, feelings of malaise, and other flulike symptoms (secondary-phase syphilis). These problems also disappear quickly in most cases, and here is where the real trouble, latent syphilis, begins. Latent syphilis produces no symptoms, but spirochetes that have spread through the body may lodge in the brain or the heart. When this happens, paralysis, mental incapacitation, and death may follow.
Testing Before Marriage

Because of the danger to unborn children, Americans wishing to marry have often been required to be certified as free of the disease before a marriage license is issued. Syphilis is easily cured with penicillin or other antibiotics, but no vaccine is yet available to prevent the disease. For this reason, syphilis detection is particularly important.
The first viable test for syphilis was originated by August von Wassermann in 1906. In this test, blood samples are taken and treated in a medical laboratory. The treatment of the samples is based on the fact that the blood of infected persons forms antibodies to fight the syphilis spirochete, and that these antibodies react with certain body chemicals to cause the blood sample to clot. Such a reaction indicates that the person has the disease. After the syphilis has been cured, the antibodies disappear, as does the clotting.

Although the Wassermann test was effective in 95 percent of all infected persons, it was complex and very time-consuming, requiring a two-day incubation period. In 1923, Reuben Leon Kahn developed a modified syphilis test, the “standard Kahn test,” that was simpler and faster: The test was complete after only a few minutes.
By 1925, Kahn’s test had become the standard syphilis test of the United States Navy and later became a worldwide test for the detection of the disease.

Kahn soon realized that his test was not perfect and that in some cases the results were incorrect. This led him to a broader study of the immune reactions at the center of the Kahn test. He investigated the role of various tissues in immunity, as compared to the role of blood antibodies and white blood cells. Kahn showed, for example, that different tissues of immunized or nonimmunized animals possessed differing immunologic capabilities. Furthermore, the immunologic capabilities of test animals varied with their age, being very limited in newborns and increasing as they matured. This effort led, by 1951, to Kahn’s “universal serological reaction,” a precipitation reaction in which blood serum was tested against a reagent composed of tissue lipids. Kahn viewed it as a potentially helpful chemical indicator of how healthy or ill an individual was. This effort is viewed as an important landmark in the development of the science of immunology.
Impact
At the time that Kahn developed his standard Kahn test for syphilis, the Wassermann test was used all over the world for the diagnosis of syphilis. As has been noted, one of the great advantages of the standard Kahn test was its speed: minutes versus days. For example, in October, 1923, Kahn is reported to have tested forty serum samples in fifteen minutes.

Kahn’s efforts have been important to immunology and to medicine. Among the consequences of his endeavors was the stimulation of other developments in the field, including the VDRL test (originated by the Venereal Disease Research Laboratory), which has replaced the Kahn test as one of the most often used screening tests for syphilis. Even more specific syphilis tests developed later include a fluorescent antibody test that detects the antibody to the syphilis spirochete.
See also Abortion pill; Amniocentesis; Antibacterial drugs; Birth control pill; Mammography; Pap test; Penicillin; Ultrasound.
Further Reading

Cates, William, Jr., Richard B. Rothenberg, and Joseph H. Blount. “Syphilis Control.” Sexually Transmitted Diseases 23, no. 1 (January, 1996).
Cobb, W. Montague. “Reuben Leon Kahn.” Journal of the National Medical Association 63 (September, 1971).
Quétel, Claude. History of Syphilis. Baltimore: Johns Hopkins University Press, 1992.
St. Louis, Michael E., and Judith N. Wasserheit. “Elimination of Syphilis in the United States.” Science 281, no. 5375 (July, 1998).
Talking motion pictures
The invention: The first practical system for linking sound with moving pictures.

The people behind the invention:
Harry Warner (1881-1958), the brother who used sound to fashion a major filmmaking company
Albert Warner (1884-1967), the brother who persuaded theater owners to show Warner films
Samuel Warner (1887-1927), the brother who adapted sound-recording technology to filmmaking
Jack Warner (1892-1978), the brother who supervised the making of Warner films
Taking the Lead
The silent films of the early twentieth century had live sound accompaniment featuring music and sound effects. Neighborhood theaters made do with a piano and violin; larger “picture palaces” in major cities maintained resident orchestras of more than seventy members. During the late 1920’s, Warner Bros. led the American film industry in producing motion pictures with their own soundtracks, which were first recorded on synchronized records and later printed on the film beside the images.

The ideas that led to the addition of sound to film came from corporate-sponsored research by American Telephone and Telegraph Company (AT&T) and the Radio Corporation of America (RCA). Both companies worked to improve sound recording and playback: AT&T to help in the design of long-distance telephone equipment, and RCA as part of the creation of better radio sets. Yet neither company could, or would, enter filmmaking. AT&T was willing to contract its equipment out to Paramount or one of the other major Hollywood studios of the day; such studios, however, did not want to risk their sizable profits by junking silent films. The giants of the film industry were doing fine with what they had and did not want to switch to something that had not been proved.
In 1924, Warner Bros. was a prosperous, though small, corporation that produced films with the help of outside financial backing. That year, Harry Warner approached the important Wall Street investment banking house of Goldman, Sachs and secured the help he needed. As part of this initial wave of expansion, Warner Bros. acquired a Los Angeles radio station in order to publicize its films. Through this deal, the four Warner brothers learned of the new technology that the radio and telephone industries had developed to record sound, and they succeeded in securing the necessary equipment from AT&T. During the spring of 1925, the brothers devised a plan by which they could record the most popular musical artists on film and then offer these “shorts” as added attractions to theaters that booked Warner features. As a bonus, Warner Bros. could add recorded orchestral music to its feature films and offer this music to theaters that relied on small musical ensembles.
“Vitaphone”
On August 6, 1926, Warner Bros. premiered its new “Vitaphone” technology. The first package consisted of a traditional silent film (Don Juan) with a recorded musical accompaniment, plus six recordings of musical talent highlighted by a performance from Giovanni Martinelli, the most famous opera tenor of the day.

The first Vitaphone feature was The Jazz Singer, which premiered in October, 1927. The film was silent during much of the movie, but whenever Al Jolson, the star, broke into song, the new technology took over. The film was an immediate hit. The Jazz Singer package, which included accompanying shorts with sound, led theaters in cities that rarely held films over for more than a single week to keep the package for two, three, and sometimes four straight weeks.
The Jazz Singer did well at the box office, but skeptics questioned the staying power of talkies. If sound was so important, they wondered, why hadn’t The Jazz Singer moved to the top of the all-time box-office list? Such success, though, came a year later with The Singing Fool, also starring Jolson. From its opening day (September 20, 1928), it was the financial success of its time; produced for an estimated $200,000, it took in $5 million. In New York City, The Singing Fool registered the heaviest business in Broadway history, with an advance sale that exceeded $100,000 (equivalent to more than half a million dollars in 1990’s currency).

In the early days of sound films, cameras had to be soundproofed so their operating noises would not be picked up by the primitive sound-recording equipment. (Library of Congress)
Impact

The coming of sound transformed filmmaking, ushering in what became known as the golden age of Hollywood. By 1930, there were more reporters stationed in the filmmaking capital of the world than in any capital of Europe or Asia.
The Warner Brothers

Businessmen rather than inventors, the four Warner brothers were hustlers who knew a good thing when they saw it. They started out running theaters in 1903, evolved into film distributors, and began making their own films in 1909, in defiance of the Patents Company, a trust established by Thomas A. Edison to eliminate competition from independent filmmakers. Harry Warner was the president of the company, Sam and Jack were vice presidents in charge of production, and Abe (or Albert) was the treasurer.

Theirs was a small concern. Their silent films and serials attracted few audiences, and during World War I they made training films for the government. In fact, their film about syphilis, Open Your Eyes, was their first real success. In 1918 they released My Four Years in Germany, a dramatized documentary, and it was their first blockbuster. Although considered gauche upstarts, they were suddenly taken seriously by the movie industry.

When Sam first heard an actor talk on screen in an experimental film at the Bell lab in New York in 1925, he recognized a revolutionary opportunity. He soon convinced Jack that talking movies would be a gold mine. However, Harry and Abe were against the idea because of its costs, and because earlier attempts at “talkies” had been dismal failures. Sam and Jack tricked Harry into seeing an experimental film of an orchestra, however, and he grew enthusiastic despite his misgivings. Within a year, the brothers released the all-music Don Juan. The rave notices from critics astounded Harry and Abe.

Still, they thought sound in movies was simply a novelty. When Sam pointed out that they could make movies in which the actors talked, as on stage, Harry, who detested actors, snorted, “Who the hell wants to hear actors talk?” Sam and Jack pressed for dramatic talkies nonetheless, and prevailed upon Harry to finance them. The silver screen has seldom been silent since.
As a result of its foresight, Warner Bros. was the sole small competitor of the early 1920’s to break into the Hollywood elite, producing successful films for consumption throughout the world.

After Warner Bros.’ innovation, the soundtrack became one of the features that filmmakers controlled when making a film. Indeed, sound became a vital part of the filmmaker’s art; music, in particular, could make or break a film.

Finally, the coming of sound helped make films a dominant medium of mass culture, both in the United States and throughout the world. Innumerable fashions, expressions, and designs were soon created or popularized by filmmakers. Many observers had not viewed the silent cinema as especially significant; with the coming of the talkies, however, there was no longer any question about the social and cultural importance of films. As one clear consequence of the new power of the movie industry, within a few years of the coming of sound, the notorious Hays Code mandating prior restraint of film content went into effect. The pairing of images and sound caused talking films to be deemed simply too powerful for uncensored presentation to audiences; although the Hays Code was gradually weakened and eventually abandoned, less onerous “rating systems” would continue to be imposed on filmmakers by various regulatory bodies.
See also Autochrome plate; Dolby noise reduction; Electronic synthesizer; Television.
Further Reading
Brayer, Elizabeth. George Eastman: A Biography. Baltimore: Johns Hopkins University Press, 1996.
Crafton, Donald. The Talkies: American Cinema’s Transition to Sound, 1926-1931. Berkeley: University of California Press, 1999.
Geduld, Harry M. The Birth of the Talkies: From Edison to Jolson. Bloomington: Indiana University Press, 1975.
Neale, Stephen. Cinema and Technology: Image, Sound, Colour. London: Macmillan Education, 1985.
Wagner, A. F. Recollections of Thomas A. Edison: A Personal History of the Early Days of the Phonograph, the Silent and Sound Film, and Film Censorship. 2d ed. London: City of London Phonograph & Gramophone Society, 1996.
Teflon
The invention: A fluorocarbon polymer whose chemical inertness and physical properties have made it useful for many applications, from nonstick cookware coatings to suits for astronauts.

The person behind the invention:
Roy J. Plunkett (1910-1994), an American chemist
Nontoxic Refrigerant Sought
As the use of mechanical refrigeration increased in the late 1930’s, manufacturers recognized the need for a material to replace sulfur dioxide and ammonia, which, although they were the commonly used refrigerants of the time, were less than ideal for the purpose. The material sought had to be nontoxic, odorless, colorless, and nonflammable. Thomas Midgley, Jr., and Albert Henne of General Motors Corporation’s Frigidaire Division concluded, from studying published reports listing the properties of a wide variety of chemicals, that hydrocarbon-like materials with their hydrogen atoms replaced by chlorine and fluorine atoms would be appropriate.

Their conclusion led to a joint effort between the General Motors Corporation’s Frigidaire Division and E. I. Du Pont de Nemours to research and develop the chemistry of fluorocarbons. In this research effort, a number of scientists began making and studying the large number of individual chemicals in the general class of compounds being investigated. It fell to Roy J. Plunkett to do a detailed study of tetrafluoroethylene, a compound consisting of two carbon atoms, each of which is attached to the other as well as to two fluorine atoms.
The “Empty” Tank
Tetrafluoroethylene, at normal room temperature and pressure, is a gas that is supplied to users in small pressurized cylinders. On the morning of the day of the discovery, Plunkett attached such a tank to his experimental apparatus and opened the tank’s valve. To his great surprise, no gas flowed from the tank. Plunkett’s subsequent actions transformed this event from an experiment gone wrong into a historically significant discovery. Rather than replacing the tank with another and going on with the work planned for the day, Plunkett, who wanted to know what had happened, examined the “empty” tank. When he weighed the tank, he discovered that it was not empty; it still contained the chemical listed on the label. Opening the valve and running a wire through the opening proved that a malfunctioning valve was not the cause. Finally, Plunkett sawed the cylinder in half and discovered what had happened. The chemical in the tank was no longer a gas; instead, it was a waxy white powder.
Plunkett immediately recognized the meaning of the presence of the solid. The six-atom molecules of the tetrafluoroethylene gas had somehow linked with one another to form much larger molecules. The gas had polymerized, becoming polytetrafluoroethylene, a solid with a high molecular weight. Capitalizing on this occurrence, Plunkett, along with other Du Pont chemists, performed a series of experiments and soon learned to control the polymerization reaction so that the product could be produced, its properties could be studied, and applications for it could be developed.
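In modern notation (not given in the original article), the change Plunkett observed can be written as an addition polymerization, in which n identical monomer molecules join end to end into a single long chain:

```latex
n\,\mathrm{CF_2{=}CF_2} \;\longrightarrow\; \big[\mathrm{-CF_2-CF_2-}\big]_n
```

The double bond of each tetrafluoroethylene molecule opens to form the links of the chain, which is why the solid weighs exactly as much as the gas that produced it.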
The properties of the substance were remarkable indeed. It was unaffected by strong acids and bases, withstood high temperatures without reacting or melting, and was not dissolved by any solvent that the scientists tried. In addition to this highly unusual behavior, the polymer had surface properties that made it very slick. It was so slippery that other materials placed on its surface slid off in much the same way that beads of water slide off the surface of a newly waxed automobile.

Although these properties were remarkable, no applications were suggested immediately for the new material. The polymer might have remained a laboratory curiosity if a conversation had not taken place between Leslie R. Groves, the head of the Manhattan Project (which engineered the construction of the first atomic bombs), and a Du Pont chemist who described the polymer to him. The Manhattan Project research team was hunting for an inert material to use for gaskets to seal pumps and piping. The gaskets had to be able to withstand the highly corrosive uranium hexafluoride with which the team was working. This uranium compound is fundamental to the process of upgrading uranium for use in explosive devices and power reactors. Polytetrafluoroethylene proved to be just the material that was needed, and Du Pont proceeded, throughout World War II and after, to manufacture gaskets for use in uranium-enrichment plants.

The high level of secrecy surrounding the Manhattan Project in particular and atomic energy in general delayed the commercial introduction of the polymer, which was called Teflon, until the late 1950’s. At that time, the first Teflon-coated cooking utensils were introduced.
Impact
Roy J. Plunkett

Roy J. Plunkett was born in 1910 in New Carlisle, Ohio. In 1932 he received a bachelor’s degree in chemistry from Manchester College and transferred to Ohio State University for graduate school, earning a master’s degree in 1933 and a doctorate in 1936. The same year he went to work for E. I. Du Pont de Nemours and Company as a research chemist at the Jackson Laboratory in Deepwater, New Jersey. Less than two years later, when he was only twenty-seven years old, he found the strange polymer of tetrafluoroethylene, whose trade name became Teflon. It would turn out to be among Du Pont’s most famous products.

In 1938 Du Pont appointed Plunkett the chemical supervisor at its largest plant, the Chamber Works in Deepwater, which produced tetraethyl lead. He held the position until 1952 and afterward directed the company’s Freon Products Division. He retired in 1975. In 1985 he was inducted into the Inventors Hall of Fame, and after his death in 1994, Du Pont created the Plunkett Award, presented to inventors who find new uses for Teflon and Tefzel, a related fluoropolymer, in aerospace, automotive, chemical, or electrical applications.
Plunkett’s thoroughness in following up a chance observation gave the world a material that has found a wide variety of uses, ranging from home kitchens to outer space. Some applications make use of Teflon’s slipperiness, others make use of its inertness, and others take advantage of both properties.

The best-known application of Teflon is as a nonstick coating for cookware. Teflon’s very slippery surface initially was troublesome because it proved difficult to attach to other materials. Early versions of Teflon-coated cookware shed their surface coatings easily, even when care was taken to avoid scraping them off. A suitable bonding process was soon developed, however, and the present coated surfaces are very rugged and provide a noncontaminating coating that can be cleaned easily.

An important space application for Teflon is its use on the outer skins of suits worn by astronauts. (PhotoDisc)
Teflon has proved to be a useful material in making devices that are implanted in the human body. It is easily formed into various shapes and is one of the few materials that the human body does not reject. Teflon has been used to make heart valves, pacemakers, bone and tendon substitutes, artificial corneas, and dentures.

Teflon’s space applications have included its use as the outer skin of the suits worn by astronauts, as insulating coating on wires and cables in spacecraft that must resist high-energy cosmic radiation, and as heat-resistant nose cones and heat shields on spacecraft.
See also Buna rubber; Neoprene; Nylon; Plastic; Polystyrene; Pyrex glass; Tupperware.
Further Reading

Friedel, Robert. “The Accidental Inventor.” Discover 17, no. 10 (October, 1996).
“Happy Birthday, Teflon.” Design News 44, no. 8 (April, 1988).
“Teflon.” Newsweek 130, 24a (Winter, 1997/1998).
Telephone switching
The invention: The first completely automatic electronic system for switching telephone calls.

The people behind the invention:
Almon B. Strowger (1839-1902), an American inventor
Charles Wilson Hoover, Jr. (1925- ), supervisor of memory system development
Wallace Andrew Depp (1914- ), director of Electronic Switching
Merton Brown Purvis (1923- ), designer of switching matrices
Electromechanical Switching Systems
The introduction of electronic switching technology into the telephone network was motivated by the desire to improve the quality of the telephone system, add new features, and reduce the cost of switching technology. Telephone switching systems have three features: signaling, control, and switching functions. There were several generations of telephone switching equipment before the first fully electronic switching “office” (device) was designed.

The first automatic electromechanical (partly electrical and partly mechanical) switching office was the Strowger step-by-step switch. Strowger switches relied upon the dial pulses generated by rotary dial telephones to move their switching elements to the proper positions to connect one telephone with another. In the step-by-step process, the first digit dialed moved the first mechanical switch into position, the second digit moved the second mechanical switch into position, and so forth, until the proper telephone connection was established. These Strowger switching offices were quite large, and they lacked flexibility and calling features.
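The step-by-step idea can be sketched in a few lines of Python. This is an illustration only, not anything that existed in a Strowger office, which was purely electromechanical; the function name and the four-stage layout are assumptions for the sketch.

```python
def strowger_connect(dialed_digits, num_stages=4):
    """Model of step-by-step switching: each dialed digit sets the
    position of one successive switch stage, and the resulting tuple
    of (stage, position) pairs identifies a unique path to the
    called telephone. No central controller is involved."""
    if len(dialed_digits) != num_stages:
        raise ValueError("one digit is needed per switch stage")
    path = []
    for stage, digit in enumerate(dialed_digits):
        # The pulse train for this digit steps the wiper of switch
        # `stage` directly to position `digit`.
        path.append((stage, int(digit)))
    return tuple(path)

# Dialing 5-1-2-9 moves four switches in turn.
print(strowger_connect("5129"))
```

Because each digit drives its own switch directly, the office needs one switch stage per digit of the number, which is why these offices grew so large and why adding a feature meant adding hardware.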
The second generation of automatic electromechanical telephone switching offices was of the “crossbar” type. Initially, crossbar switches relied upon a specialized electromechanical controller called a “marker” to establish call connections. Electromechanical telephone switching offices had difficulty implementing additional features and were unable to handle large numbers of incoming calls.
Electronic Switching Systems
In the early 1940’s, research into the programmed control of switching offices began at the American Telephone and Telegraph Company’s Bell Labs. This early research resulted in a trial office being put into service in Morris, Illinois, in 1960. The Morris switch used a unique memory called the “flying spot store.” It used a photographic plate as a program memory, and the memory was accessed optically. In order to change the memory, one had to scratch out or cover parts of the photographic plate.

Before the development of the Morris switch, gas tubes had been used to establish voice connections. This was accomplished by applying a voltage difference across the end points of the conversation. When this voltage difference was applied, the gas tubes would conduct electricity, thus establishing the voice connection. The Morris trial showed that gas tubes could not support the voltages that the new technology required to make telephones ring or to operate pay telephones.

The knowledge gained from the Morris trial led to the development of the first full-scale, commercial, computer-controlled electronic switch, the electronic switching system 1 (ESS-1). The first ESS-1 went into service in New Jersey in 1965. In the ESS-1, electromechanical switching elements, or relays, were controlled by computer software. A centralized computer handled call processing. Because the telephone service of an entire community depends on the reliability of the telephone switching office, the ESS-1 had two central processors, so that one would be available if the other broke down. The switching network of the ESS-1 was composed of electromechanical relays; the control of the switching system was electronic, but the switching itself remained mechanical.

Bell Labs developed models to demonstrate the concept of integrating digital transmission and switching systems. Unfortunately, the solid-state electronics necessary for such an undertaking had not developed sufficiently at that time, so the commercial development
Almon B. Strowger
Some people thought Almon B. Strowger was strange, perhaps even demented. Certainly, he was hot-tempered, restless, and argumentative. One thing he was not, however, was unimaginative.

Born near Rochester, New York, in 1839, Strowger was old enough to fight for the Union at the second Battle of Manassas during the American Civil War. The bloody battle apparently shattered and embittered him. He wandered slowly west after the war, taught himself undertaking, and opened a funeral home in Topeka, Kansas, in 1882. There began his running war with telephone operators, which continued when he moved his business to Kansas City.

With the help of technicians (whom he later cheated) he built the first “collar box,” an automatic switching device, in 1887. The round contraption held a pencil that could be revolved to different pins arranged around it in order to change phone connections. Two years later he produced a more sophisticated device that was operated by push buttons, and despite initial misgivings he brought out a rotary dial device in 1896. That same year he sold the rights to his patents to business partners for $1,800, and in 1898 he sold his share in the Strowger Automatic Dial Telephone Exchange for $10,000. He moved to St. Petersburg, Florida, and opened a small hotel, dying there in 1902. It surely would have done his temper no good to learn that fourteen years later the Bell system bought his patents for $2.5 million.
of digital switching was not pursued. New versions of the ESS continued<br />
to employ electromechanical technology, although mechanical<br />
switching elements can cause impulse noise in voice signals <strong>and</strong><br />
are larger <strong>and</strong> more difficult to maintain than electronic switching<br />
elements. Ten years later, however, Bell Labs began to develop a digital<br />
toll switch, the ESS-4, in which both switching <strong>and</strong> control functions<br />
were electronic.<br />
Although the ESS-1 was the first electronically controlled switching<br />
system, it did not switch voices electronically. The ESS-1 used<br />
computer control to move mechanical contacts in order to establish<br />
a conversation. In a fully electronic switching system, the voices are
754 / Telephone switching<br />
digitized before switching is performed. This technique, which is<br />
called “digital switching,” is still used.<br />
The advent of electronically controlled switching systems made possible features such as call forwarding, call waiting, and detailed billing for long-distance calls. Changing these services became a matter of simply changing tables in computer programs. Telephone maintenance personnel could communicate with the central processor of the ESS-1 by using a teletype, and they could change numbers simply by typing commands on the teletype. In electromechanically controlled telephone switching systems, however, changing numbers required rewiring.
Consequences

Electronic switching has greatly decreased the size of switching offices. When telephone switches were electromechanical, a large area was needed to house the many mechanical switches that were required. In the era of electronic switching, voices are switched digitally by computer, and digitization of the voice prior to transmission also improves voice quality. In this method, voice samples are read into a computer memory and then read out of the memory when it is time to connect a caller with a desired number. Basically, electronic telephone systems are specialized computer systems that move digitized voice samples between customers.
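The memory-based switching described above can be pictured with a short, simplified sketch. This is an illustration of the general "write samples in, read them out through a connection table" idea, not a model of any actual ESS design; the channel numbers and sample values are made up.

```python
# Simplified sketch of memory-based digital switching: voice samples are
# written into memory by incoming channel, and a connection table decides
# which stored sample is read out for each outgoing channel.

class TimeSlotSwitch:
    def __init__(self, num_channels):
        self.memory = [0] * num_channels   # one stored sample per incoming channel
        self.connections = {}              # output channel -> input channel to read

    def connect(self, caller, callee):
        """Set up a two-way path by updating the connection table only."""
        self.connections[callee] = caller
        self.connections[caller] = callee

    def write_frame(self, samples):
        """Write one digitized voice sample per incoming channel into memory."""
        for channel, sample in enumerate(samples):
            self.memory[channel] = sample

    def read_frame(self):
        """Read samples back out of memory according to the connection table."""
        return {out: self.memory[src] for out, src in self.connections.items()}

switch = TimeSlotSwitch(num_channels=4)
switch.connect(caller=0, callee=3)     # changing service = changing a table entry
switch.write_frame([17, 0, 0, 42])     # one frame of digitized voice samples
print(switch.read_frame())             # {3: 17, 0: 42}
```

Note that "rewiring" here is just an update to the `connections` dictionary, which mirrors why features such as call forwarding became simple table changes in the real systems.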
Telephone networks are moving toward complete digitization. Digitization was first applied to the transmission of voice signals. This made it possible for a single pair of copper wires to be shared by a number of telephone users. Currently, voices are digitized upon their arrival at the switching office. If the final destination of the telephone call is not connected to the particular switching office, the voice is sent to the remote office by means of digital circuits. Analog voice signals, however, are still sent between the switching office and homes or businesses. In the future, digitization of the voice signal will occur in the telephone sets themselves. Digital voice signals will be sent directly from one telephone to another. This will provide homes with direct digital communication. A network that provides such services is called the "integrated services digital network" (ISDN).
See also Cell phone; Long-distance telephone; Rotary dial telephone; Touch-tone telephone.

Further Reading

Briley, Bruce E. Introduction to Telephone Switching. Reading, Mass.: Addison-Wesley, 1983.
Talley, David. Basic Electronic Switching for Telephone Systems. 2d ed. Rochelle Park, N.J.: Hayden, 1982.
Thompson, Richard A. Telephone Switching Systems. Boston: Artech House, 2000.
Television
The invention: System that converts moving pictures and sounds into electronic signals that can be broadcast at great distances.

The people behind the invention:
Vladimir Zworykin (1889-1982), a Russian-born American electronic engineer and recipient of the National Medal of Science in 1967
Paul Gottlieb Nipkow (1860-1940), a German engineer and inventor
Alan A. Campbell Swinton (1863-1930), a Scottish engineer and Fellow of the Royal Society
Charles F. Jenkins (1867-1934), an American physicist, engineer, and inventor
The Persistence of Vision

In 1894, an American inventor, Charles F. Jenkins, described a scheme for electrically transmitting moving pictures. Jenkins's idea, however, was only one in an already long tradition of theoretical television systems. In 1842, for example, the English physicist Alexander Bain had invented an automatic copying telegraph for sending still pictures. Bain's system scanned images line by line. Similarly, the wide recognition of the persistence of vision (the mind's ability to retain a visual image for a short period of time after the image has been removed) led to experiments with systems in which the image to be projected was repeatedly scanned line by line. Rapid scanning of images became the underlying principle of all television systems, both electromechanical and all-electronic.
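The scanning principle is easy to make concrete: a two-dimensional image is read out one line at a time into a serial signal, and a synchronized receiver cuts that signal back into lines. The tiny 3-by-4 grid of brightness values below is purely illustrative.

```python
# Illustrative sketch of sequential line-by-line scanning: flatten an
# image into a serial signal (transmitter), then rebuild it (receiver).

image = [
    [0, 5, 5, 0],
    [5, 9, 9, 5],
    [0, 5, 5, 0],
]

def scan(image):
    """Transmitter: read out brightness values one line at a time."""
    signal = []
    for line in image:
        signal.extend(line)
    return signal

def rebuild(signal, width):
    """Receiver: cut the serial signal back into lines of equal width."""
    return [signal[i:i + width] for i in range(0, len(signal), width)]

signal = scan(image)
# With transmitter and receiver synchronized on the line width, the
# original image is recovered exactly.
assert rebuild(signal, width=4) == image
```

Persistence of vision is what lets a viewer perceive the rapidly repeated line-by-line reconstruction as a single steady picture.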
In 1884, a German inventor, Paul Gottlieb Nipkow, patented a complete television system that utilized a mechanical sequential scanning system and a photoelectric cell sensitized with selenium for transmission. The selenium photoelectric cell converted the light values of the image being scanned into electrical impulses to be transmitted to a receiver, where the process would be reversed. The electrical impulses led to light of varying brightness being produced and projected onto a rotating disk that was scanned to reproduce the original image. If the system (that is, the transmitter and the receiver) were in perfect synchronization and if the disk rotated quickly enough, persistence of vision enabled the viewer to see a complete image rather than a series of moving points of light.

[Diagram: Schematic of a television picture tube, showing the electron gun, electron beam, deflection and focus coils, phosphor screen, and glass envelope.]
For a television image to be projected onto a screen of reasonable size and retain good quality and high resolution, any system employing only thirty to one hundred lines (as early mechanical systems did) is inadequate. A few systems were developed that utilized two hundred or more lines, but the difficulties these presented made the possibility of an all-electronic system increasingly attractive. These difficulties were not generally recognized until the early 1930's, when television began to move out of the laboratory and into commercial production.

Interest in all-electronic television paralleled interest in mechanical systems, but solutions to technical problems proved harder to achieve. In 1908, a Scottish engineer, Alan A. Campbell Swinton, proposed what was essentially an all-electronic television system. Swinton theorized that the use of magnetically deflected cathode-ray tubes for both the transmitter and receiver in a system was possible. In 1911, Swinton formally presented his idea to the Röntgen Society in London, but the technology available did not allow for practical experiments.
Zworykin's Picture Tube

Vladimir Zworykin

Born in 1889, Vladimir Kosma Zworykin grew up in Murom, a small town two hundred miles east of Moscow. His father ran a riverboat service, and Zworykin sometimes helped him, but his mind was on electricity, which he studied on his own while aboard his father's boats. In 1906, he entered the St. Petersburg Institute of Technology, and there he became acquainted with the idea of television through the work of Professor Boris von Rosing.

Zworykin assisted Rosing in his attempts to transmit pictures with a cathode-ray tube. He served with the Russian Signal Corps during World War I, but then fled to the United States after the Bolshevik Revolution. In 1920 he got a job at Westinghouse's research laboratory in Pittsburgh, helping develop radio tubes and photoelectric cells. He became an American citizen in 1924 and completed a doctorate at the University of Pittsburgh in 1926. By then he had already demonstrated his iconoscope and applied for a patent. Unable to interest Westinghouse in his invention, he moved to the Radio Corporation of America (RCA) in 1929, and later became director of its electronics research laboratory. RCA's president, David Sarnoff, also a Russian immigrant, had faith in Zworykin and his ideas. Before Zworykin retired in 1954, RCA had invested $50 million in television.

Among the many awards Zworykin received for his culture-changing invention was the National Medal of Science, presented by President Lyndon Johnson in 1966. Zworykin died on his birthday in 1982.
In 1923, Vladimir Zworykin, a Russian-born engineer working for the Westinghouse Electric Corporation, filed a patent application for the "iconoscope," or television transmission tube. On March 17, 1924, Zworykin applied for a patent for a two-way system. The first cathode-ray tube receiver had a cathode, a modulating grid, an anode, and a fluorescent screen.
[Photograph: Early console model television. (PhotoDisc)]
Zworykin later admitted that the results were very poor and that the system, as shown, was still far removed from a practical television system. Zworykin's employers were so unimpressed that they admonished him to forget television and work on something more useful. Zworykin's interest in television was thereafter confined to his nonworking hours, as he spent the next year working on photographic sound recording.

It was not until the late 1920's that he was able to devote his full attention to television. Ironically, Westinghouse had by then resumed research in television, but Zworykin was not part of the team. Even after he returned from a 1928 trip to France, where he had witnessed an exciting demonstration of an electrostatic tube, Westinghouse indicated that it was not interested. This lack of corporate support in Pittsburgh led Zworykin to approach the Radio Corporation of America (RCA). According to reports, Zworykin demonstrated his system to the Institute of Radio Engineers at Rochester, New York, on November 18, 1929, claiming to have developed a working picture tube, a tube that would revolutionize television development. Finally, RCA recognized the potential.
Impact

The picture tube, or "kinescope," developed by Zworykin changed the history of television. Within a few years, mechanical systems disappeared and television technology began to utilize systems similar to Zworykin's, with cathode-ray tubes at both ends of the system. At the transmitter, the image is focused upon a mosaic screen composed of light-sensitive cells. A stream of electrons sweeps the image, and each cell sends off an electric current pulse as it is hit by the electrons, the light and shade of the focused image regulating the amount of current.

This string of electrical impulses, after amplification and modulation onto ultrahigh-frequency wavelengths, is broadcast by antenna to be picked up by any attuned receiver, where it is retransformed into a moving picture in the cathode-ray tube receiver. The cathode-ray tubes contain no moving parts, as the electron stream is guided entirely by electric attraction.

Although both the iconoscope and the kinescope were far from perfect when Zworykin initially demonstrated them, they set the stage for all future television development.
See also Color television; Community antenna television; Communications satellite; Fiber-optics; FM radio; Holography; Internet; Radio; Talking motion pictures.

Further Reading

Abramson, Albert. Zworykin: Pioneer of Television. Urbana: University of Illinois Press, 1995.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Durham, N.C.: Duke University Press, 2000.
Zworykin, Vladimir Kosma, and George Ashmun Morton. Television: The Electronics of Image Transmission in Color and Monochrome. 2d ed. New York: J. Wiley, 1954.
Tevatron accelerator
The invention: A particle accelerator that generated collisions between beams of protons and antiprotons at the highest energies ever recorded.

The people behind the invention:
Robert Rathbun Wilson (1914-2000), an American physicist and director of Fermilab from 1967 to 1978
John Peoples (1933- ), an American physicist and deputy director of Fermilab from 1987

Putting Supermagnets to Use

The Tevatron is a particle accelerator, a large electromagnetic device used by high-energy physicists to generate subatomic particles at sufficiently high energies to explore the basic structure of matter. The Tevatron is a circular, tubelike track 6.4 kilometers in circumference that employs a series of superconducting magnets to accelerate beams of protons, which carry a positive charge in the atom, and antiprotons, the proton's negatively charged equivalent, at energies up to 1 trillion electronvolts (equal to 1 teraelectronvolt, or 1 TeV; hence the name Tevatron). An electronvolt is the energy that an electron gains when it is accelerated through an electrical potential difference of 1 volt.

The Tevatron is located at the Fermi National Accelerator Laboratory, which is also known as Fermilab. The laboratory was one of several built in the United States during the 1960's.
The heart of the original Fermilab was the 6.4-kilometer main accelerator ring. This main ring was capable of accelerating protons to energies approaching 500 billion electronvolts, or 0.5 teraelectronvolt. The idea to build the Tevatron grew out of a concern for the millions of dollars spent annually on electricity to power the main ring, the need for higher energies to explore the inner depths of the atom and the consequences of new theories of both matter and energy, and the growth of superconductor technology. Planning for a second accelerator ring, the Tevatron, to be installed beneath the main ring, began in 1972.

Robert Rathbun Wilson, the director of Fermilab at that time, realized that the only way the laboratory could achieve the higher energies needed for future experiments without incurring intolerable electricity costs was to design a second accelerator ring that employed magnets made of superconducting material. Extremely powerful magnets are the heart of any particle accelerator; charged particles such as protons are given a "push" as they pass through an electromagnetic field. Each successive push along the path of the circular accelerator track gives the particle more and more energy. The enormous magnetic fields required to accelerate massive particles such as protons to energies approaching 1 trillion electronvolts would require electricity expenditures far beyond Fermilab's operating budget. Wilson estimated, however, that using superconducting materials, which have virtually no resistance to electrical current, would make it possible for the Tevatron to achieve double the main ring's magnetic field strength, doubling energy output without significantly increasing energy costs.
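The engineering argument can be made concrete with the standard bending relation for a synchrotron, p [GeV/c] ≈ 0.3 · B [tesla] · ρ [meters]. The sketch below treats the entire 6.4-kilometer ring as bending magnet, which the real ring is not (dipoles fill only part of the circumference), so these are rough lower bounds for illustration, not Fermilab design figures.

```python
# Rough illustration of why 1 TeV called for superconducting magnets.
# Standard bending relation: p [GeV/c] ~= 0.3 * B [tesla] * rho [meters].
# Idealization: the whole 6.4 km circumference is treated as bending,
# so the fields computed here are lower bounds.

import math

circumference_m = 6400.0
rho_m = circumference_m / (2 * math.pi)   # idealized bending radius, ~1019 m

def field_needed(p_gev):
    """Minimum average dipole field (tesla) to hold momentum p in the ring."""
    return p_gev / (0.3 * rho_m)

print(round(field_needed(500), 2))    # about 1.6 T: within reach of iron-core magnets
print(round(field_needed(1000), 2))   # about 3.3 T: beyond iron saturation (~2 T),
                                      # hence superconducting coils
```

Doubling the field doubles the momentum the same tunnel can hold, which is exactly the doubling Wilson was counting on.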
Tevatron to the Rescue

The Tevatron was conceived in three phases. Most important, however, were Tevatron I and Tevatron II, where the highest energies were to be generated and where it was hoped new experimental findings would emerge. Tevatron II experiments were designed to be very similar to other proton beam experiments, except that in this case, the protons would be accelerated to an energy of 1 trillion electronvolts. More important still were the proton-antiproton colliding-beam experiments of Tevatron I. In this phase, beams of protons and antiprotons rotating in opposite directions are caused to collide in the Tevatron, producing a combined, or center-of-mass, energy approaching 2 trillion electronvolts, nearly three times the energy achievable at the largest accelerator at the Centre Européen de Recherche Nucléaire (the European Center for Nuclear Research, or CERN).

John Peoples was faced with the problem of generating a beam of antiprotons of sufficient intensity to collide efficiently with a beam of protons. Knowing that he had the use of a large proton accelerator (the old main ring), Peoples employed a two-ring mode in which 120-billion-electronvolt protons from the main ring are aimed at a fixed tungsten target, generating antiprotons that scatter from the target. These antiprotons were extracted and accumulated in a smaller storage ring, where they could be accelerated to relatively low energies. After sufficient numbers of antiprotons were collected, they were injected into the Tevatron, along with a beam of protons, for the colliding-beam experiments. On October 13, 1985, Fermilab scientists reported a proton-antiproton collision with a center-of-mass energy measured at 1.6 trillion electronvolts, the highest energy ever recorded.
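The payoff of colliding beams, rather than firing one beam at a stationary target, follows from the standard relativistic formulas: two head-on beams of energy E give a center-of-mass energy of about 2E, while a beam striking a fixed proton gives only about the square root of 2·m·E. The numbers below are a worked illustration in natural units (c = 1), not measured Fermilab data.

```python
# Center-of-mass energy for equal-mass particles, in GeV, natural units.
# Illustrates why colliding beams reach far higher effective energies
# than fixed-target experiments at the same beam energy.

import math

M_PROTON = 0.938  # proton rest energy in GeV (approximate)

def cms_collider(beam_energy):
    """Two beams of equal energy colliding head-on: sqrt(s) = 2E."""
    return 2 * beam_energy

def cms_fixed_target(beam_energy, m=M_PROTON):
    """Beam striking a stationary proton: sqrt(s) = sqrt(2*m*E + 2*m**2)."""
    return math.sqrt(2 * m * beam_energy + 2 * m * m)

# Two 1-TeV (1000 GeV) beams head-on give the ~2 TeV quoted above,
# while the same beam on a fixed target yields only a few tens of GeV.
print(cms_collider(1000))             # 2000 GeV = 2 TeV
print(round(cms_fixed_target(1000)))  # about 43 GeV
```

The fixed-target figure grows only with the square root of the beam energy, which is why the colliding-beam experiments of Tevatron I mattered so much more than simply raising the beam energy.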
Consequences

The Tevatron's success at generating high-energy proton-antiproton collisions affected future plans for accelerator development in the United States and offered the potential for important discoveries in high-energy physics at energy levels that no other accelerator could achieve.
Physicists recognize four forces in nature: the electromagnetic force, the gravitational force, the strong nuclear force, and the weak nuclear force. A major goal of the physics community is to formulate a theory that will explain all these forces: the so-called grand unification theory. In 1967, one of the first of the so-called gauge theories was developed that unified the weak nuclear force and the electromagnetic force. One consequence of this theory was that the weak force was carried by massive particles known as "bosons." The search for three of these particles, the intermediate vector bosons W+, W-, and Z0, led to a rush to conduct colliding-beam experiments in the early 1970's. Because the Tevatron was in the planning phase at this time, these particles were discovered by a team of international scientists based in Europe. In 1989, Tevatron physicists reported the most accurate measure to date of the Z0 mass.

The Tevatron is thought to be the only particle accelerator in the world with sufficient power to conduct further searches for the elusive Higgs boson, a particle proposed by University of Edinburgh physicist Peter Higgs to account for the large masses of the intermediate vector bosons. In addition, the Tevatron has the ability to search for the so-called top quark. Quarks are believed to be the constituent particles of protons and neutrons.
Evidence has been gathered of five of the six quarks believed to exist. Physicists have yet to detect evidence of the most massive quark, the top quark.

See also Atomic bomb; Cyclotron; Electron microscope; Field ion microscope; Geiger counter; Hydrogen bomb; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron.
Further Reading

Hilts, Philip J. Scientific Temperaments: Three Lives in Contemporary Science. New York: Simon and Schuster, 1984.
Ladbury, Ray. "Fermilab Tevatron Collider Group Goes over the Top—Cautiously." Physics Today 47, no. 6 (June, 1994).
Lederman, Leon M. "The Tevatron." Scientific American 264, no. 3 (March, 1991).
Wilson, Robert R., and Raphael Littauer. Accelerators: Machines of Nuclear Physics. London: Heinemann, 1962.
Thermal cracking process
The invention: Process that increased the yield of refined gasoline extracted from raw petroleum by using heat to convert complex hydrocarbons into simpler gasoline hydrocarbons, thereby making possible the development of the modern petroleum industry.

The people behind the invention:
William M. Burton (1865-1954), an American chemist
Robert E. Humphreys, an American chemist
Gasoline, Motor Vehicles, and Thermal Cracking

Gasoline is a liquid mixture of hydrocarbons (chemicals made up of only hydrogen and carbon) that is used primarily as a fuel for internal combustion engines. It is produced by petroleum refineries that obtain it by processing petroleum (crude oil), a naturally occurring mixture of thousands of hydrocarbons, the molecules of which can contain from one to sixty carbon atoms.
Gasoline production begins with the "fractional distillation" of crude oil in a fractionation tower, where it is heated to about 400 degrees Celsius at the tower's base. This heating vaporizes most of the hydrocarbons that are present, and the vapor rises in the tower, cooling as it does so. At various levels of the tower, various portions (fractions) of the vapor containing simple hydrocarbon mixtures become liquid again, are collected, and are piped out as "petroleum fractions." Gasoline, the petroleum fraction that boils between 30 and 190 degrees Celsius, is mostly a mixture of hydrocarbons that contain five to twelve carbon atoms.
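The idea of sorting a crude-oil mixture into fractions by boiling point can be pictured with a toy classifier. The 30-190 degrees Celsius gasoline window comes from the text above; the other cutoffs are rough illustrative values, not refinery specifications.

```python
# Toy model of fractional distillation: assign a hydrocarbon to a
# fraction by its boiling point. Only the gasoline window (30-190 C)
# is taken from the text; the remaining cutoffs are illustrative.

def classify_fraction(boiling_point_c):
    if boiling_point_c < 30:
        return "gases"
    if boiling_point_c <= 190:
        return "gasoline"          # roughly the C5-C12 hydrocarbons
    if boiling_point_c <= 280:
        return "kerosene"          # illustrative cutoff
    return "heavy fractions"       # the feedstock later used for cracking

# Octane (C8H18) boils near 126 C, so it falls in the gasoline fraction;
# a hydrocarbon boiling at 350 C lands in the heavy fractions instead.
print(classify_fraction(126))   # gasoline
print(classify_fraction(350))   # heavy fractions
```

Cracking, described below, amounts to converting molecules from the last category into ones that classify as gasoline.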
Only about 25 percent of petroleum will become gasoline via fractional distillation. This amount of "straight run" gasoline is not sufficient to meet the world's needs. Therefore, numerous methods have been developed to produce the needed amounts of gasoline. The first such method, "thermal cracking," was developed in 1913 by William M. Burton of Standard Oil of Indiana. Burton's cracking process used heat to convert complex hydrocarbons (whose molecules contain many carbon atoms) into simpler gasoline hydrocarbons (whose molecules contain fewer carbon atoms), thereby increasing the yield of gasoline from petroleum. Later advances in petroleum technology, including both an improved Burton method and other methods, increased the gasoline yield still further.
More Gasoline!

Starting in about 1900, gasoline became important as a fuel for the internal combustion engines of the new vehicles called automobiles. By 1910, half a million automobiles traveled American roads. Soon, the great demand for gasoline, which was destined to grow and grow, required both the discovery of new crude oil fields around the world and improved methods for refining the petroleum extracted from these new sources. Efforts were made to increase the yield of gasoline from petroleum, which at that time was about 15 percent. The Burton method was the first such method.

At the time that the cracking process was developed, Burton was the general superintendent of the Whiting refinery, owned by Standard Oil of Indiana. The Burton process was developed in collaboration with Robert E. Humphreys and F. M. Rogers. This three-person research group began work knowing that heating petroleum fractions that contained hydrocarbons more complex than those present in gasoline (a process called "coking") produced kerosene, coke (a form of carbon), and a small amount of gasoline. The process needed to be improved substantially, however, before it could be used commercially.
Initially, Burton and his coworkers used the "heavy fuel" fraction of petroleum (the 66 percent of petroleum that boils at a temperature higher than the boiling temperature of kerosene). Soon, they found that it was better to use only the part of the material that contained its smaller hydrocarbons (those containing fewer carbon atoms), all of which were still much larger than those present in gasoline. The first cracking procedure attempted involved passing the starting material through a hot tube. This hot-tube treatment vaporized the material and broke down 20 to 30 percent of the larger hydrocarbons into the hydrocarbons found in gasoline. Various tarry products were also produced, however, that reduced the quality of the gasoline that was obtained in this way.
[Diagram: Burton's process contributed to the development of petroleum refining. Crude oil entering the refinery passes through separating, purification, and conversion stages (including de-waxing and cracking) to yield products such as asphalt, industrial fuel oil, roofing, paints, candles, polish, waxed paper, ointments and creams, lubricants and greases, plastics, photographic film, synthetic rubber, weed-killers and fertilizers, diesel oils, medicines, detergents, enamel, synthetic fibers, fuel oil, bottled gas, gasoline, jet fuel, solvents, and insecticides.]

Next, the investigators attempted to work at a higher temperature by bubbling the starting material through molten lead. More gasoline was made in this way, but it was so contaminated with gummy material that it could not be used. Continued investigation showed, however, that moderate temperatures (between those used in the hot-tube experiments and that of molten lead) produced the best yield of useful gasoline.

The Burton group then had the idea of using high pressure to "keep starting materials still." Although the theoretical basis for the use of high pressure was later shown to be incorrect, the new method worked quite well. In 1913, the Burton method was patented and put into use. The first cracked gasoline, called Motor Spirit, was not very popular, because it was yellowish and had a somewhat unpleasant odor. The addition of some minor refining procedures, however, soon made cracked gasoline indistinguishable from straight run gasoline. Standard Oil of Indiana made huge profits from cracked gasoline over the next ten years. Ultimately, thermal cracking subjected the petroleum fractions that were utilized to temperatures between 550 and 750 degrees Celsius, under pressures between 250 and 750 pounds per square inch.
Impact

In addition to using thermal cracking to make gasoline for sale, Standard Oil of Indiana also profited by licensing the process for use by other gasoline producers. Soon, the method was used throughout the oil industry. By 1920, it had been perfected as much as it could be, and the gasoline yield from petroleum had been significantly increased. The disadvantages of thermal cracking include a relatively low yield of gasoline (compared to those of other methods), the waste of hydrocarbons in fractions converted to tar and coke, and the relatively high cost of the process.

A partial solution to these problems was found in "catalytic cracking," the next logical step from the Burton method, in which petroleum fractions to be cracked are mixed with a catalyst (a substance that causes a chemical reaction to proceed more quickly without itself being consumed). The most common catalysts used in such cracking were minerals called "zeolites." The wide use of catalytic cracking soon enabled gasoline producers to work at lower temperatures (450 to 550 degrees Celsius) and pressures (10 to 50 pounds per square inch). This decreased manufacturing costs, because catalytic cracking required relatively little energy, produced only small quantities of undesirable side products, and produced high-quality gasoline.

Various other methods of producing gasoline have been developed, among them catalytic reforming, hydrocracking, alkylation, and catalytic isomerization, and now about 60 percent of the petroleum starting material can be turned into gasoline. These methods, and others still to come, are expected to ensure that the world's needs for gasoline will continue to be satisfied, as long as petroleum remains available.
See also Fuel cell; Gas-electric car; Geothermal power; Internal combustion engine; Oil-well drill bit; Solar thermal engine.
Further Reading

Gorman, Hugh S. Redefining Efficiency: Pollution Concerns, Regulatory Mechanisms, and Technological Change in the U.S. Petroleum Industry. Akron, Ohio: University of Akron Press, 2001.
Sung, Hsun-chang, Robert Roy White, and George Granger Brown. Thermal Cracking of Petroleum. Ann Arbor: University of Michigan, 1945.
William Meriam Burton: A Pioneer in Modern Petroleum Technology. Cambridge, Mass.: University Press, 1952.
Tidal power plant

The invention: Plant that converts natural ocean tidal forces into electrical power.

The people behind the invention:
Mariano di Jacopo detto Taccola (Mariano of Siena, 1381-1453), an Italian notary, artist, and engineer
Bernard Forest de Bélidor (1697 or 1698-1761), a French engineer
Franklin D. Roosevelt (1882-1945), president of the United States

Tidal Energy
Ocean tides have long been harnessed to perform useful work. Ancient Greeks, Romans, and medieval Europeans all left records and ruins of tidal mills, and Mariano di Jacopo included tidal power in his treatise De Ingeneis (1433; on engines). Some mills consisted of water wheels suspended in tidal currents, others lifted weights that powered machinery as they fell, and still others trapped the high tide to run a mill.

Bernard Forest de Bélidor's Architecture hydraulique (1737; hydraulic architecture) is often cited as initiating the modern era of tidal power exploitation. Bélidor was an instructor in the French École d'Artillerie et du Génie (School of Artillery and Engineering). Industrial expansion between 1700 and 1800 led to the construction of many tidal mills. In these mills, waterwheels or simple turbines rotated shafts that drove machinery by means of gears or belts. They powered small enterprises located on the seashore.

Steam engines, however, soon began to replace tidal mills. Steam could be generated wherever it was needed, and steam mills were not dependent upon the tides or limited in their production capacity by the amount of tidal flow. Thus, tidal mills gradually were abandoned, although a few still operate in New England, Great Britain, France, and elsewhere.
Electric Power from Tides
Modern society requires tremendous amounts of electric energy generated by large power stations. This need was first met by burning coal and by damming rivers. Later, oil and nuclear power became important. Although small mechanical tidal mills are inadequate for modern needs, tidal power itself remains an attractive source of energy. Periodic alarms about coal or oil supplies, and concern about the environmental effects of using coal, oil, or nuclear energy, continue to stimulate efforts to develop renewable energy sources with fewer negative effects. Every crisis—for example, the perceived European coal shortages of the early 1900's, the oil shortages of the 1920's and 1970's, and growing anxiety about nuclear power—revives interest in tidal power.

In 1912, a tidal power plant was proposed at Busum, Germany. The English, in 1918 and more recently, promoted elaborate schemes for the Severn Estuary. In 1928, the French planned a plant at Aber-Wrach in Brittany. In 1935, under the leadership of Franklin Delano Roosevelt, the United States began construction of a tidal power plant at Passamaquoddy, Maine. These plants, however, were never built. All of them had to be located at sites where tides were extremely high, and such sites are often far from power users. So much electricity was lost in transmission that profitable quantities of power could not be sent where they were needed. Also, large tidal power stations were too expensive to compete with existing steam plants and river dams. In addition, turbines and generators capable of using the large volumes of slow-moving tidal water that reversed flow had not been invented. Finally, large tidal plants inevitably hampered navigation, fisheries, recreation, and other uses of the sea and shore.

French engineers, especially Robert Gibrat, the father of the La Rance project, have made the most progress in solving the problems of tidal power plants. France, a highly industrialized country, is short of coal and petroleum, which has prompted an intense French search for alternative energy supplies.

La Rance, which was completed in December, 1967, is the first full-scale tidal electric power plant in the world. The Chinese, however, have built more than a hundred small tidal electric stations
about the size of the old mechanical tidal mills, and the Canadians and the Russians have both operated plants of pilot-plant size.

La Rance, which was selected from more than twenty competing localities in France, is one of the few places in the world where the tides are extremely high. It also has a large reservoir located above a narrow constriction in the estuary. Finally, interference with navigation, fisheries, and recreational activities is minimal at La Rance.

Submersible "bulbs" containing generators and mounting propeller turbines were specially designed for the La Rance project. These turbines operate on both incoming and outgoing tides, and they can pump water either into or out of the reservoir. These features allow daily and seasonal changes in power generation to be "smoothed out." These turbines also deliver electricity most economically. Many engineering problems had to be solved, however, before the dam could be built in the tidal estuary.

The La Rance plant produces 240 megawatts of electricity. Its twenty-four highly reliable turbine generator sets operate about 95 percent of the time. Output is coordinated with twenty-four other hydroelectric plants by means of a computer program. In this system, pump-storage stations use excess La Rance power during periods of low demand to pump water into elevated reservoirs. Later, during peak demand, this water is fed through a power plant, thus "saving" the excess generated at La Rance when it was not immediately needed. In this way, tidal energy, which must be used or lost as the tides continue to flow, can be saved.
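The output figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below (Python, using only the numbers given in the text: a 240-megawatt rating and roughly 95 percent turbine availability) computes the most energy the plant could deliver in a year if it ran at full rating whenever its turbines were available. Actual annual output is considerably lower, since tidal flow drives the turbines at full power only part of the time.

```python
# Illustrative upper bound on annual output for a tidal plant,
# using the La Rance figures quoted in the text.

RATED_MW = 240          # plant rating, megawatts
AVAILABILITY = 0.95     # fraction of the year the turbine sets operate
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_upper_bound_mwh(rated_mw, availability):
    """Energy (MWh) the plant could deliver if it ran at full rating
    whenever its turbines were available."""
    return rated_mw * availability * HOURS_PER_YEAR

bound = annual_upper_bound_mwh(RATED_MW, AVAILABILITY)
print(f"{bound:,.0f} MWh per year (about {bound / 1e6:.1f} TWh)")
```

The point of the calculation is only to show the scale: even this generous bound is about two terawatt-hours per year, a small fraction of a modern nation's electricity demand, which is consistent with the economic limits discussed below.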
Consequences

The operation of La Rance proved the practicality of tide-generated electricity. The equipment, engineering practices, and operating procedures invented for La Rance have been widely applied. Submersible, low-head, high-flow reversible generators of the La Rance type are now used in Austria, Switzerland, Sweden, Russia, Canada, the United States, and elsewhere.

Economic problems have prevented the building of more large tidal power plants. With technological advances, the inexorable depletion of oil and coal resources, and the increasing cost of nuclear power, tidal power may be used more widely in the future. Construction costs may be significantly lowered by using preconstructed power units and dam segments that are floated into place and submerged, thus eliminating the need for expensive dams and reducing pumping costs.

See also Compressed-air-accumulating power plant; Geothermal power; Nuclear power plant; Nuclear reactor; Solar thermal engine; Thermal cracking process.
Further Reading

Bernshtein, L. B. Tidal Power Plants. Seoul, Korea: Korea Ocean Research and Development Institute, 1996.
Boyle, Godfrey. Renewable Energy: Power for a Sustainable Future. Oxford: Oxford University Press, 1998.
Ross, David. Power from the Waves. New York: Oxford University Press, 1995.
Seymour, Richard J. Ocean Energy Recovery: The State of the Art. New York: American Society of Civil Engineers, 1992.
Touch-tone telephone

The invention: A push-button dialing system for telephones that replaced the earlier rotary-dial phone.

The organization behind the invention:
Bell Labs, the research and development arm of the American Telephone and Telegraph Company
Dialing Systems

A person who wishes to make a telephone call must inform the telephone switching office which number he or she wishes to reach. A telephone call begins with the customer picking up the receiver and listening for a dial tone. Picking up the handset closes a switch in the telephone, allowing electric current to flow between the telephone and the switching office. This signals the telephone office that the user is preparing to dial a number. To acknowledge its readiness to receive the digits of the desired number, the telephone office sends a dial tone to the user. Two methods have been used to send telephone numbers to the telephone office: dial pulsing and touch-tone dialing.

"Dial pulsing" is the method used by telephones with rotary dials. The dial is turned until it stops, then released and allowed to return to its resting position. As the dial returns, the telephone repeatedly breaks the current between the telephone and the switching office. The switching office counts the number of times the current flow is interrupted, which indicates the digit that was dialed.
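The pulse-counting scheme can be sketched in a few lines of code. This is an illustration rather than Bell System practice verbatim; it assumes the North American convention, in which each digit produces that many current interruptions and zero produces ten (some countries used different counts).

```python
# Sketch of dial-pulse encoding and decoding as described above:
# the switching office counts current interruptions per digit.
# Assumes the North American convention (digit d -> d pulses, 0 -> 10).

def pulses_for_digit(digit: int) -> int:
    """Number of current interruptions produced by dialing one digit."""
    if not 0 <= digit <= 9:
        raise ValueError("a dial digit must be 0-9")
    return 10 if digit == 0 else digit

def decode_pulse_counts(counts):
    """Invert the encoding: map per-digit pulse counts back to digits."""
    return [0 if c == 10 else c for c in counts]

number = [5, 5, 5, 0, 1, 2, 3]          # a hypothetical seven-digit number
counts = [pulses_for_digit(d) for d in number]
assert decode_pulse_counts(counts) == number
```

Because each interruption takes a fixed fraction of a second, dialing a zero is the slowest digit to send, which is one reason touch-tone dialing (below) is faster.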
Introduction of Touch-tone Dialing

The dial-pulsing technique was particularly appropriate for use in the first electromechanical telephone switching offices, because the dial pulses actually moved mechanical switches in the switching office to set up the telephone connection. The introduction of touch-tone dialing into electromechanical systems was made possible by a special device that converted the touch-tones into rotary dial pulses that controlled the switches. At the American Telephone and Telegraph Company's Bell Labs, experimental studies explored the use of "multifrequency key pulsing" (in other words, the use of keys that emitted tones of various frequencies) by both operators and customers. Initially, plucked tuned reeds were proposed. These were, however, replaced with "electronic transistor oscillators," which produced the required signals electronically.

The introduction of "crossbar switching" made dial-pulse signaling of the desired number obsolete. The dial pulses of the telephone were no longer needed to control the mechanical switching process at the switching office. When electronic control was introduced into switching offices, telephone numbers could be assigned by computer rather than set up mechanically. This meant that a single touch-tone receiver at the switching office could be shared by a large number of telephone customers.

Before 1963, telephone switching offices relied upon rotary dial pulses to move electromechanical switching elements. Touch-tone dialing was difficult to use in systems that were not computer controlled, such as the electromechanical step-by-step method. In about 1963, however, it became economically feasible to implement centralized computer control and touch-tone dialing in switching offices. Computerized switching offices use a central touch-tone receiver to detect dialed numbers, after which the receiver sends the number to a call processor so that a voice connection can be established.
Touch-tone dialing transmits two tones simultaneously to represent a digit. The tones are divided into two groups: a high-band group and a low-band group. For each digit dialed, one tone from the low-frequency (low-band) group and one tone from the high-frequency (high-band) group are transmitted. The two frequencies are chosen so that they are not closely related harmonically. In addition, touch-tone receivers must be designed so that false digits cannot be generated while people are speaking into the telephone.

For a call to be completed, the first digit dialed must be detected in the presence of the dial tone, and the receiver must not interpret background noise or speech as valid digits. To avoid such misinterpretation, the touch-tone receiver uses both the relative and the absolute strength of the two simultaneous tones of the first digit dialed to determine what that digit is.
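The two-tone scheme can be made concrete in code. The standard touch-tone (DTMF) keypad pairs one of four low-band frequencies (697, 770, 852, and 941 Hz, one per keypad row) with one of three high-band frequencies (1209, 1336, and 1477 Hz, one per column). The detector below uses the Goertzel algorithm, a standard way to measure signal strength at a handful of known frequencies; the text does not say which method Bell Labs' receivers used, so this is an illustration of the principle, not their actual design.

```python
import math

# Standard DTMF layout: row index selects the low-band tone,
# column index selects the high-band tone.
LOW = [697, 770, 852, 941]      # Hz, one per keypad row
HIGH = [1209, 1336, 1477]       # Hz, one per keypad column
KEYS = ["123", "456", "789", "*0#"]
FS = 8000                        # telephone-grade sampling rate, Hz

def dtmf_samples(key, n=800):
    """Sum of the key's low-band and high-band sine tones (n samples)."""
    for r, row in enumerate(KEYS):
        if key in row:
            f_lo, f_hi = LOW[r], HIGH[row.index(key)]
            return [math.sin(2 * math.pi * f_lo * t / FS)
                    + math.sin(2 * math.pi * f_hi * t / FS)
                    for t in range(n)]
    raise ValueError("unknown key")

def goertzel_power(samples, freq):
    """Signal power at one frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / FS)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect(samples):
    """Pick the strongest low-band and high-band tones; map to a key."""
    r = max(range(len(LOW)), key=lambda i: goertzel_power(samples, LOW[i]))
    c = max(range(len(HIGH)), key=lambda i: goertzel_power(samples, HIGH[i]))
    return KEYS[r][c]

assert detect(dtmf_samples("5")) == "5"
```

A real receiver adds the safeguards the text describes, such as comparing the absolute and relative tone strengths and rejecting inputs whose spectrum looks like speech, before accepting a digit.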
A system similar to the touch-tone system, called "multifrequency signaling," is used to send telephone numbers between telephone switching offices. It also uses two tones to indicate a single digit, but its frequencies differ from those used in the touch-tone system. Multifrequency signaling is currently being phased out; new computer-based systems are being introduced to replace it.
Impact

Touch-tone dialing has made new caller features available. The touch-tone system can be used not only to signal the desired number to the switching office but also to interact with voice-response systems. This means that touch-tone dialing can be used in conjunction with such devices as bank teller machines. A customer can also dial many more digits per second with a touch-tone telephone than with a rotary-dial telephone.

Touch-tone dialing has not been implemented in Europe, and one reason may be that the economics of touch-tone dialing change as a function of technology. In the most modern electronic switching offices, rotary signaling can be performed at no additional cost, whereas the addition of touch-tone dialing requires a centralized touch-tone receiver at the switching office. Touch-tone signaling was developed in an era of analog telephone switching offices, and since that time, switching offices have become overwhelmingly digital. When the switching network becomes entirely digital, as will be the case when the integrated services digital network (ISDN) is implemented, touch-tone dialing will become unnecessary. In the future, ISDN telephone lines will use digital signaling methods exclusively.

See also Cell phone; Rotary dial telephone; Telephone switching.
Further Reading

Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Young, Peter. Person to Person: The International Impact of the Telephone. Cambridge: Granta Editions, 1991.
Transistor

The invention: A miniature electronic device, comprising a tiny semiconductor and multiple electrical contacts, used in circuits as an amplifier, detector, or switch, that revolutionized electronics in the mid-twentieth century.

The people behind the invention:
William B. Shockley (1910-1989), an American physicist who led the Bell Laboratories team that produced the first transistors
Akio Morita (1921-1999), a Japanese physicist and engineer who was the cofounder of the Sony electronics company
Masaru Ibuka (1908-1997), a Japanese electrical engineer and businessman who cofounded Sony with Morita
The Birth of Sony

In 1952, a Japanese engineer visiting the United States learned that the Western Electric company was granting licenses to use its transistor technology. He was aware of the development of this device and thought that it might have some commercial applications. Masaru Ibuka told his business partner in Japan about the opportunity, and they decided to raise the $25,000 required to obtain a license. The following year, his partner, Akio Morita, traveled to New York City and concluded negotiations with Western Electric. This was a turning point in the history of the Sony company and in the electronics industry, for transistor technology was to open profitable new fields in home entertainment.

The origins of the Sony corporation lie in the ruins of postwar Japan. The Tokyo Telecommunications Company was incorporated in 1946 and manufactured a wide range of electrical equipment based on the existing vacuum tube technology. Morita and Ibuka had been involved in research and development of this technology during the war and intended to put it to use in the peacetime economy. In the United States and Europe, electrical engineers who had done the same sort of research founded companies to build advanced audio products such as high-performance amplifiers, but Morita and Ibuka did not have the resources to make such sophisticated products and concentrated on simple items such as electric water heaters and small electric motors for record players.
In addition to their experience as electrical engineers, both men were avid music lovers, a result of their exposure to American-built phonographs and gramophones exported to Japan in the early twentieth century. They decided to combine their twin interests by devising innovative audio products and looked to the new field of magnetic recording as a likely area for exploitation. They had learned about tape recorders from technical journals and had seen them in use by the American occupation force.

They developed a reel-to-reel tape recorder and introduced it in 1950. It was a large machine with vacuum tube amplifiers, so heavy that they transported it by truck. Although it worked well, they had a hard time selling it. Ibuka went to the United States in 1952 partly on a fact-finding mission and partly to get some ideas about marketing the tape recorder to schools and businesses. It was not seen as a consumer product.

Ibuka and Morita had read about the invention of the transistor in Western Electric's laboratories shortly after the war. John Bardeen and Walter H. Brattain had discovered that a semiconducting material could be used to amplify or control electric current. Their point-contact transistor of 1948 was a crude laboratory apparatus that served as the basis for further research. The project was taken over by William B. Shockley, who had suggested the theory of the transistor effect. A new generation of transistors was devised; they were simpler and more efficient than the original. The junction transistors were the first to go into production.
Ongoing Research

Bell Laboratories had begun transistor research because Western Electric, one of its parent companies along with American Telephone and Telegraph, was interested in electronic amplification. Amplification was seen as a means of increasing the strength of telephone signals traveling over long distances, a job carried out by vacuum tubes. The junction transistor was developed as an amplifier. Western Electric thought that the hearing aid was the only consumer product that could be based on it and saw the transistor solely as a telecommunications technology. The Japanese purchased the license with only the slightest understanding of the workings of semiconductors and despite the belief that transistors could not be used at the high frequencies associated with radio.
The first task of Ibuka and Morita was to develop a high-frequency transistor. Once this was accomplished, in 1954, a method had to be found to manufacture it cheaply. Transistors were made from crystals, which had to be grown and doped with impurities to form different layers of conductivity. This was not an exact science, and Sony engineers found that the failure rate for high-frequency transistors was very high. This increased costs and put the entire project into doubt, because the case for adopting transistors rested on simplicity, reliability, and low cost.

The introduction of the first Sony transistor radio, the TR-55, in 1955 was the result of basic research combined with extensive industrial engineering. Morita admitted that its sound was poor, but because it was the only transistor radio in Japan, it sold well. These were not cheap products, nor were they particularly compact. The selling point was that they consumed much less battery power than the old portable radios.

The TR-55 carried the brand name Sony, a relative of the Soni magnetic tape made by the company and a name influenced by the founders' interest in sound. Morita and Ibuka had already decided that the future of their company lay in international trade and wanted its name to be recognized all over the world. In 1957, they changed the company's name from Tokyo Telecommunications Engineering to Sony.
The first product intended for the export market was a small transistor radio. Ibuka was disappointed at the large size of the TR-55, because one of the advantages of the transistor over the vacuum tube was supposed to be smaller size. He saw a miniature radio as a promising consumer product and gave his engineers the task of designing one small enough to fit into his shirt pocket.

All elements of the radio had to be reduced in size: amplifier, transformer, capacitor, and loudspeaker. Like many other Japanese manufacturers, Sony bought many of the component parts of its products from small manufacturers, all of which had to be cajoled into decreasing the size of their parts. Morita and Ibuka stated that the hardest task in developing this new product was negotiating with the subcontractors. Finally, the Type 63 pocket transistor radio—the "Transistor Six"—was introduced in 1957.
Impact

When the transistor radio was introduced, the market for radios was considered to be saturated. People had rushed to buy radios when they were introduced in the 1920's, and by the time of the Great Depression, the majority of American households had one. Improvements had been made to the receiver, and more attractive radio/phonograph console sets had been introduced, but these developments did not add many new customers. The most manufacturers could hope for was the replacement market, with a few additional sales as children moved out of their parents' homes and established new households.

The pocket radio created a new market. It could be taken anywhere and used at any time. Its portability was its major asset, and it became an indispensable part of the youth-oriented popular culture of the 1950's and 1960's. It provided an outlet for the crowded airwaves of commercial AM radio and was the means of bringing the new music of rock and roll to a mass audience.

As soon as Sony introduced the Transistor Six, it began to redesign it to reduce manufacturing cost. Subsequent transistor radios were smaller and cheaper. Sony sold them by the millions, and millions more were made by other companies under brand names such as "Somy" and "Sonny." By 1960, more than twelve million transistor radios had been sold.

The transistor radio was the product that established Sony as an international audio concern. Morita had resisted the temptation to make radios for other companies to sell under their names. Exports of Sony radios increased name recognition and established a bridgehead in the United States, the biggest market for electronic consumer products. Morita planned to follow the radio with other transistorized products.
William Shockley

William Shockley's reputation contains extremes. He helped invent one of the basic devices supporting modern technological society, the transistor. He also tried to revive one of the most infamous social theories, eugenics.

His parents, mining engineer William Hillman Shockley and surveyor May Bradford Shockley, were on assignment in England in 1910 when he was born. The family returned to Northern California when the younger William was three, and they schooled him at home until he was eight. He acquired an early interest in physics from a neighbor who taught at Stanford University. Shockley pursued that interest at the California Institute of Technology and the Massachusetts Institute of Technology, which awarded him a doctorate in 1936.

Shockley went to work for Bell Telephone Laboratories in the same year. While he was trying to design a vacuum tube that could amplify current, it occurred to him that solid-state components might work better than the fragile tubes. He experimented with the semiconductors germanium and silicon, but the materials available were too impure for his purpose. World War II interrupted the experiments, and he worked instead to improve radar and anti-submarine devices for the military. Back at Bell Labs in 1945, Shockley teamed with theorist John Bardeen and experimentalist Walter Brattain. Two years later they succeeded in making the first amplifier out of semiconductor materials and called it a transistor (short for "transfer resistor"). Its effect on the electronics industry was revolutionary, and the three shared the 1956 Nobel Prize in Physics for their achievement.

In the mid-1950's, Shockley left Bell Labs to start Shockley Transistor, then switched to academia in 1963, becoming Stanford University's Alexander M. Poniatoff Professor of Engineering and Applied Science. He grew interested in the relation between race and intellectual ability. Teaching himself psychology and genetics, he conceived the theory that Caucasians were inherently more intelligent than other races because of their genetic makeup. When he lectured on his brand of eugenics, he was denounced by the public as a racist and by scientists for shoddy thinking. Shockley retired in 1975 and died in 1989.

The television had challenged radio's position as the mechanical entertainer in the home. Like the radio, it stood in nearly every American living room and used the same vacuum tube amplification
unit. The transistorized portable television set did for images what the transistor radio did for sound. Sony was the first to develop an all-transistor television, in 1959. At a time when the trend in television receivers was toward larger screens, Sony produced extremely small models with eight-inch screens. Ignoring the marketing experts who said that Americans would never buy such a product, Sony introduced these models into the United States in 1960 and found that there was a huge demand for them.

As in radio, the number of television stations on the air and broadcasts for the viewer to choose from grew. A personal television or radio gave the audience more choices. Instead of one machine in the family room, there were now several around the house. The transistorization of mechanical entertainers allowed each family member to choose his or her own entertainment. Sony learned several important lessons from the success of the transistor radio and television. The first was that small size and low price could create new markets for electronic consumer products. The second was that constant innovation and cost reduction were essential to keep ahead of the numerous companies that produced cheaper copies of original Sony products.

In 1962, Sony introduced a tiny television receiver with a five-inch screen. In the 1970's and 1980's, it produced even smaller models, until it had a TV set that could sit in the palm of the hand—the Video Walkman. Sony's scientists had developed an entirely new television screen that worked on a new principle and gave better color resolution; the company was again able to blend the fruits of basic scientific research with innovative industrial engineering.
The transistorized amplifier unit used in radio and television sets<br />
was applied to other products, including amplifiers for record players<br />
and tape recorders. Japanese manufacturers were slow to take<br />
part in the boom in high-fidelity audio equipment that began in the<br />
United States in the 1950’s. The leading manufacturers of high-quality<br />
audio components were small American companies based<br />
on the talents of one engineer, such as Avery Fisher or Henry Koss.<br />
They sold expensive amplifiers and loudspeakers to audiophiles.<br />
The transistor reduced the size, complexity, and price of these components.<br />
The Japanese took the lead in devising complete audio units<br />
based on transistorized integrated circuits, thus developing the basic<br />
home stereo.<br />
In the 1960’s, companies such as Sony and Matsushita dominated<br />
the market for inexpensive home stereos. These were the basic<br />
radio/phonograph combination, with two detached speakers.<br />
The finely crafted wooden consoles that had been the standard for<br />
the home phonograph were replaced by small plastic boxes. The<br />
Japanese were also quick to exploit the opportunities of the tape cassette.<br />
The Philips compact cassette was enthusiastically adopted by<br />
Japanese manufacturers and incorporated into portable tape recorders.<br />
This was another product with its ancestry in the transistor<br />
radio. As more of them were sold, the price dropped, encouraging<br />
more consumers to buy. The cassette player became as commonplace<br />
in American society in the 1970’s as the transistor radio had<br />
been in the 1960’s.<br />
The Walkman<br />
The transistor took another step in miniaturization in the Sony<br />
Walkman, a personal stereo sound system consisting of a cassette<br />
player and headphones. It was based on the same principles as the<br />
transistor radio and television. Sony again confounded marketing<br />
experts by creating a new market for a personal electronic entertainer.<br />
In the ten years following the introduction of the Walkman in<br />
1979, Sony sold fifty million units worldwide, half of those in the<br />
United States. Millions of imitation products were sold by other<br />
companies.<br />
Sony’s acquisition of the Western Electric transistor technology<br />
was a turning point in the fortunes of that company and of Japanese<br />
manufacturers in general. Less than ten years after suffering defeat<br />
in a disastrous war, Japanese industry served notice that it had lost<br />
none of its engineering capabilities and innovative skills. The production<br />
of the transistor radio was a testament to the excellence of<br />
Japanese research and development. Subsequent products proved<br />
that the Japanese had an uncanny sense of the potential market for<br />
consumer products based on transistor technology. The ability to incorporate<br />
solid-state electronics into innovative home entertainment<br />
products allowed Japanese manufacturers to dominate the<br />
world market for electronic consumer products and to eliminate<br />
most of their American competitors.<br />
The little transistor radio was the vanguard of an invasion of new<br />
products unparalleled in economic history. Japanese companies<br />
such as Sony and Panasonic later established themselves at the leading<br />
edge of digital technology, the basis of a new generation of entertainment<br />
products. Instead of Japanese engineers scraping together<br />
the money to buy a license for an American technology, the<br />
great American companies went to Japan to license compact disc<br />
and other digital technologies.<br />
See also Cassette recording; Color television; FM radio; Radio;<br />
Television; Transistor radio; Videocassette recorder; Walkman cassette<br />
player.<br />
Further Reading<br />
Lyons, Nick. The Sony Vision. New York: Crown Publishers, 1976.<br />
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.<br />
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura.<br />
Made in Japan: Akio Morita and Sony. London: HarperCollins,<br />
1994.<br />
Reid, T. R. The Chip: How Two Americans Invented the Microchip and<br />
Launched a Revolution. New York: Simon and Schuster, 1984.<br />
Riordan, Michael. Crystal Fire: The Invention of the Transistor and the<br />
Birth of the Information Age. New York: Norton, 1998.<br />
Scott, Otto. The Creative Ordeal: The Story of Raytheon. New York:<br />
Atheneum, 1974.
Transistor radio<br />
The invention: Miniature portable radio that used transistors and<br />
created a new mass market for electronic products.<br />
The people behind the invention:<br />
John Bardeen (1908-1991), an American physicist<br />
Walter H. Brattain (1902-1987), an American physicist<br />
William Shockley (1910-1989), an American physicist<br />
Akio Morita (1921-1999), a Japanese physicist and engineer<br />
Masaru Ibuka (1907-1997), a Japanese electrical engineer and<br />
industrialist<br />
A Replacement for Vacuum Tubes<br />
The invention of the first transistor by William Shockley, John<br />
Bardeen, and Walter H. Brattain of Bell Labs in 1947 was a scientific<br />
event of great importance. Its commercial importance at the time,<br />
however, was negligible. The commercial potential of the transistor<br />
lay in the possibility of using semiconductor materials to carry out<br />
the functions performed by vacuum tubes, the fragile and expensive<br />
tubes that were the electronic hearts of radios, sound amplifiers,<br />
and telephone systems. Transistors were smaller, more rugged,<br />
and less power-hungry than vacuum tubes. They did not suffer<br />
from overheating. They offered an alternative to the unreliability<br />
and short life of vacuum tubes.<br />
Bell Labs had begun the semiconductor research project in an effort<br />
to find a better means of electronic amplification. This was<br />
needed to increase the strength of telephone signals over long distances.<br />
Therefore, the first commercial use of the transistor was<br />
sought in speech amplification, and the small size of the device<br />
made it a perfect component for hearing aids. Engineers from the<br />
Raytheon Company, the leading manufacturer of hearing aids, were<br />
invited to Bell Labs to view the new transistor and to help assess the<br />
commercial potential of the technology. The first transistorized consumer<br />
product, the hearing aid, was soon on the market. The early<br />
models built by Raytheon used three junction-type transistors and<br />
cost more than two hundred dollars. They were small enough to go
directly into the ear or to be incorporated into eyeglasses.<br />
The commercial application of semiconductors was aimed largely<br />
at replacing the control and amplification functions carried out by<br />
vacuum tubes. The perfect vehicle for this substitution was the radio<br />
set. Vacuum tubes were the most expensive part of a radio set<br />
and the most prone to break down. The early junction transistors<br />
operated best at low frequencies, and consequently more research<br />
was needed to produce a commercial high-frequency transistor.<br />
Several of the licensees embarked on this quest, including the Radio<br />
Corporation of America (RCA), Texas Instruments, and the Tokyo<br />
Telecommunications Engineering Company of Japan.<br />
Perfecting the Transistor<br />
The Tokyo Telecommunications Engineering Company of Japan,<br />
formed in 1946, had produced a line of instruments and consumer<br />
products based on vacuum-tube technology. Its most successful<br />
product was a magnetic tape recorder. In 1952, one of the founders<br />
of the company, Masaru Ibuka, visited the United States to learn<br />
more about the use of tape recorders in schools and found out that<br />
Western Electric was preparing to license the transistor patent. With<br />
only the slightest understanding of the workings of semiconductors,<br />
Tokyo Telecommunications purchased a license in 1954 with<br />
the intention of using transistors in a radio set.<br />
The first task facing the Japanese was to increase the frequency<br />
response of the transistor to make it suitable for radio use. Then a<br />
method of manufacturing transistors cheaply had to be found. At<br />
the time, junction transistors were made from slices of germanium<br />
crystal. Growing the crystal was not an exact science, nor was the<br />
process of “doping” it with impurities to form the different layers of<br />
conductivity that made semiconductors useful. The Japanese engineers<br />
found that the failure rate for high-frequency transistors was<br />
extremely high. The yield of good transistors from one batch ran as<br />
low as 5 percent, which made them extremely expensive and put the<br />
whole project in doubt. The effort to replace vacuum tubes with<br />
components made of semiconductors was motivated by cost rather<br />
than performance; if transistors proved to be more expensive, then<br />
it was not worth using them.
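The yield economics described above can be sketched numerically. The 5 percent yield figure comes from the text; the batch size and batch cost below are hypothetical numbers chosen only to show how a low yield inflates the effective price of each usable device.

```python
# Illustrative sketch only: the 5 percent yield comes from the text;
# the batch size (1,000) and batch cost ($500) are hypothetical.

def cost_per_good_transistor(batch_size, batch_cost, yield_rate):
    """Effective cost of one usable transistor from a production batch."""
    good_units = batch_size * yield_rate
    return batch_cost / good_units

at_5_percent = cost_per_good_transistor(1000, 500.0, 0.05)   # 50 good units
at_90_percent = cost_per_good_transistor(1000, 500.0, 0.90)  # 900 good units

print(f"cost per good transistor at 5% yield:  ${at_5_percent:.2f}")
print(f"cost per good transistor at 90% yield: ${at_90_percent:.2f}")
```

At 5 percent yield, each good transistor carries the cost of the nineteen failures produced alongside it, which is why the project's economics were in doubt until yields improved.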
Engineers from Tokyo Telecommunications again came to the<br />
United States to search for information about the production of<br />
transistors. In 1954, the first high-frequency transistor was produced<br />
in Japan. The success of Texas Instruments in producing the<br />
components for the first transistorized radio (introduced by the Regency<br />
Company in 1954) spurred the Japanese to greater efforts.<br />
Much of their engineering and research work was directed at the<br />
manufacture and quality control of transistors. In 1955, they introduced<br />
their transistor radio, the TR-55, which carried the brand<br />
name “Sony.” The name was chosen because the executives of the<br />
company believed that the product would have an international appeal<br />
and therefore needed a brand name that could be recognized<br />
easily and remembered in many languages. In 1957, the name of the<br />
entire company was changed to Sony.<br />
Impact<br />
Although Sony’s transistor radios were successful in the marketplace,<br />
they were still relatively large and cumbersome. Ibuka saw a<br />
consumer market for a miniature radio and gave his engineers the<br />
task of designing a radio small enough to fit into a shirt pocket. The<br />
realization of this design—“Transistor Six”—was introduced in 1957.<br />
It was an immediate success. Sony sold the radios by the millions,<br />
and numerous imitations were also marketed under brand names<br />
such as “Somy” and “Sonny.” The product became an indispensable<br />
part of popular culture of the late 1950’s and 1960’s; its low cost enabled<br />
the masses to enjoy radio wherever there were broadcasts.<br />
The pocket-sized radio was the first of a line of electronic consumer<br />
products that brought technology into personal contact with<br />
the user. Sony was convinced that miniaturization did more than<br />
make products more portable; it established a one-on-one relationship<br />
between people and machines. Sony produced the first all-transistor<br />
television in 1960. Two years later, it began to market a<br />
miniature television in the United States. The continual reduction in<br />
the size of Sony’s tape recorders reached a climax with the portable<br />
tape player introduced in the 1980’s. The Sony Walkman was a marketing<br />
triumph and a further reminder that Japanese companies led<br />
the way in the design and marketing of electronic products.
John Bardeen<br />
The transistor reduced the size of electronic circuits and at<br />
the same time the amount of energy lost from them as heat.<br />
Superconduction gave rise to electronic circuits with practically<br />
no loss of energy at all. John Bardeen helped unlock the secrets<br />
of both.<br />
Bardeen was born in 1908 in Madison, Wisconsin, where his<br />
mother was an artist and her father was a professor of anatomy<br />
at the University of Wisconsin. Bardeen attended the university,<br />
earning a bachelor’s degree in electrical engineering in 1928<br />
and a master’s degree in geophysics in 1929. After working as a<br />
geophysicist, he entered Princeton University, studying with<br />
Eugene Wigner, the leading authority on solid-state physics,<br />
and received a doctorate in mathematics and physics in 1936.<br />
Bardeen taught at Harvard University and the University of<br />
Minnesota until World War II, when he moved to the Naval<br />
Ordnance Laboratory. Finding academic salaries too low to<br />
support his family after the war, he accepted a position at Bell<br />
Telephone Laboratories. There, with Walter Brattain, he turned<br />
William Shockley’s theory of semiconductors into a practical<br />
device—the transfer resistor, or transistor.<br />
He returned to academia as a professor at the University of<br />
Illinois and began to investigate a long-standing mystery in<br />
physics, superconductivity, with a postdoctoral associate, Leon<br />
Cooper, and a graduate student, J. Robert Schrieffer. In 1956<br />
Cooper made a key discovery—superconducting electrons<br />
travel in pairs. And while Bardeen was in Stockholm, Sweden,<br />
collecting a share of the 1956 Nobel Prize in Physics for his work<br />
on transistors, Schrieffer worked out a mathematical analysis of<br />
the phenomenon. The theory that the three men published became<br />
known as BCS theory, from the first letters of their last names;<br />
besides explaining superconductors, it pointed toward a great deal<br />
of new technology and additional basic research.<br />
The team won the 1972 Nobel Prize in Physics for BCS theory,<br />
making Bardeen the only person ever to win two Nobel Prizes<br />
for physics. He retired in 1975 and died sixteen years later.<br />
See also Compact disc; FM radio; Radio; Radio crystal sets; Television;<br />
Transistor; Walkman cassette player.
Further Reading<br />
Handy, Roger, Maureen Erbe, and Aileen Antonier. Made in Japan:<br />
Transistor Radios of the 1950s and 1960s. San Francisco: Chronicle<br />
Books, 1993.<br />
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.<br />
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura.<br />
Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.<br />
Nathan, John. Sony: The Private Life. London: HarperCollins-<br />
Business, 2001.
Tuberculosis vaccine<br />
The invention: Vaccine that uses an avirulent (non-disease-causing) strain<br />
of bovine tuberculosis bacilli that is safer than earlier vaccines.<br />
The people behind the invention:<br />
Albert Calmette (1863-1933), a French microbiologist<br />
Camille Guérin (1872-1961), a French veterinarian and<br />
microbiologist<br />
Robert Koch (1843-1910), a German physician and<br />
microbiologist<br />
Isolating Bacteria<br />
Tuberculosis, once called “consumption,” is a deadly, contagious<br />
disease caused by the bacterium Mycobacterium tuberculosis,<br />
first identified by the eminent German physician Robert Koch in<br />
1882. The bacterium can be transmitted from person to person by<br />
physical contact or droplet infection (for example, sneezing). The<br />
condition eventually inflames and damages the lungs, causing difficulty<br />
in breathing and failure of the body to deliver sufficient oxygen<br />
to various tissues. It can spread to other body tissues, where<br />
further complications develop. Without treatment, the disease progresses,<br />
disabling and eventually killing the victim. Tuberculosis<br />
normally is treated with a combination of antibiotics and other<br />
drugs.<br />
Koch developed his approach for identifying bacterial pathogens<br />
(disease producers) with simple equipment, primarily microscopy.<br />
Having taken blood samples from diseased animals, he would<br />
identify and isolate the bacteria he found in the blood. Each strain of<br />
bacteria would be injected into a healthy animal. The latter would<br />
then develop the disease caused by the particular strain.<br />
In 1890, he discovered that a chemical released from tubercular<br />
bacteria elicits a hypersensitive (allergic) reaction in individuals<br />
previously exposed to or suffering from tuberculosis. This chemical,<br />
called “tuberculin,” was isolated from culture extracts in which tubercular<br />
bacteria were being grown.
When small amounts of tuberculin are injected into a person subcutaneously<br />
(beneath the skin), a reddened, inflamed patch approximately<br />
the size of a quarter develops if the person has been exposed<br />
to or is suffering from tuberculosis. Injection of tuberculin into an<br />
uninfected person yields a negative response (that is, no inflammation).<br />
Tuberculin does not harm those being tested.<br />
Tuberculosis’s Weaker Grandchildren<br />
The first vaccine to prevent tuberculosis was developed in 1921<br />
by two French microbiologists, Albert Calmette and Camille Guérin.<br />
Calmette was a student of the eminent French microbiologist Louis<br />
Pasteur at Pasteur’s Institute in Paris. Guérin was a veterinarian<br />
who joined Calmette’s laboratory in 1897. At Lille, Calmette and<br />
Guérin focused their research upon the microbiology of infectious<br />
diseases, especially tuberculosis.<br />
In 1906, they discovered that individuals who had been exposed to<br />
tuberculosis or who had mild infections were developing resistance to<br />
the disease. They found that resistance to tuberculosis was initiated by<br />
the body’s immune system. They also discovered that tubercular bacteria<br />
grown in culture over many generations become progressively<br />
weaker and avirulent, losing their ability to cause disease.<br />
From 1906 through 1921, Calmette and Guérin cultured tubercle<br />
bacilli from cattle. With proper nutrients and temperature, bacteria<br />
can reproduce by fission (that is, one bacterium splits into two bacteria)<br />
in as little time as thirty minutes. Calmette and Guérin cultivated<br />
these bacteria in a bile-derived food medium for thousands of<br />
generations over fifteen years, periodically testing the bacteria for<br />
virulence by injecting them into cattle. After many generations, the<br />
bacteria lost their virulence, their ability to cause disease. Nevertheless,<br />
these weaker, or “avirulent,” bacteria still stimulated the animals’<br />
immune systems to produce antibodies. Calmette and Guérin<br />
had successfully bred a strain of avirulent bacteria that could not<br />
cause tuberculosis in cows but could still stimulate immunity against<br />
the disease.<br />
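The fission arithmetic in the paragraph above can be illustrated with a short sketch. The thirty-minute doubling time comes from the text; the ten-hour culture window is an arbitrary example, and real cultures of course slow as nutrients are exhausted.

```python
# Sketch of binary fission: each generation doubles the population.
# The 30-minute doubling time is from the text; the 10-hour window
# is an arbitrary illustration, assuming ideal growth throughout.

def generations(elapsed_minutes, doubling_minutes=30):
    """Number of complete fission generations in the elapsed time."""
    return elapsed_minutes // doubling_minutes

def population(initial_count, elapsed_minutes, doubling_minutes=30):
    """Population after repeated binary fission under ideal conditions."""
    return initial_count * 2 ** generations(elapsed_minutes, doubling_minutes)

ten_hours = 10 * 60
print(generations(ten_hours))    # 20 generations
print(population(1, ten_hours))  # one bacterium becomes 1048576
```

This doubling is why thousands of generations, and so thousands of chances for the strain to drift toward avirulence, could accumulate over the fifteen years of subculturing.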
There was considerable concern over whether the avirulent strain<br />
was harmless to humans. Calmette and Guérin continued cultivating<br />
weaker versions of the avirulent strain that retained antibody-
stimulating capacity. By 1921, they had isolated an avirulent antibody-stimulating<br />
strain that was harmless to humans, a strain they<br />
called “Bacillus Calmette-Guérin” (BCG).<br />
In 1922, they began BCG-vaccinating newborn children against<br />
tuberculosis at the Charité Hospital in Paris. The immunized children<br />
exhibited no ill effects from the BCG vaccination. Calmette and<br />
Guérin’s vaccine was so successful in controlling the spread of tuberculosis<br />
in France that it attained widespread use in Europe and<br />
Asia beginning in the 1930’s.<br />
Impact<br />
Most bacterial vaccines involve the use of antitoxin or heat- or<br />
chemical-treated bacteria. BCG is one of the few vaccines that use<br />
specially bred live bacteria. Its use sparked some controversy in<br />
the United States and England, where the medical community<br />
questioned its effectiveness and postponed BCG immunization<br />
until the late 1950’s. Extensive testing of the vaccine was performed<br />
at the University of Illinois before it was adopted in the<br />
United States. Its effectiveness is questioned by some physicians to<br />
this day.<br />
Some of the controversy stems from the fact that the avirulent,<br />
antibody-stimulating BCG vaccine conflicts with the tuberculin<br />
skin test. The tuberculin skin test is designed to identify people<br />
suffering from tuberculosis so that they can be treated. A BCG-vaccinated<br />
person will have a positive tuberculin skin test similar<br />
to that of a tuberculosis sufferer. If a physician does not know that<br />
a patient has had a BCG vaccination, it will be presumed (incorrectly)<br />
that the patient has tuberculosis. Nevertheless, the BCG<br />
vaccine has been invaluable in curbing the worldwide spread of<br />
tuberculosis, although it has not eradicated the disease.<br />
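The diagnostic ambiguity described above reduces to a simple truth table. This boolean sketch is ours, not a clinical algorithm; it only shows why a positive tuberculin result cannot distinguish vaccination from infection.

```python
# Minimal model of why BCG complicates the tuberculin skin test:
# the test reacts to prior antigen exposure, so infection and
# vaccination produce the same positive result. Illustrative only.

def tuberculin_test(infected: bool, bcg_vaccinated: bool) -> str:
    """Return the skin-test outcome given the patient's history."""
    return "positive" if (infected or bcg_vaccinated) else "negative"

# An infected patient and a healthy BCG recipient look the same:
print(tuberculin_test(infected=True, bcg_vaccinated=False))   # positive
print(tuberculin_test(infected=False, bcg_vaccinated=True))   # positive
print(tuberculin_test(infected=False, bcg_vaccinated=False))  # negative
```

Without the vaccination history, a physician reading the first two results has no way to tell the cases apart, which is the source of the controversy the text describes.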
See also Antibacterial drugs; Birth control pill; Penicillin; Polio<br />
vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Typhus vaccine;<br />
Yellow fever vaccine.
Further Reading<br />
Daniel, Thomas M. Pioneers of Medicine and Their Impact on Tuberculosis.<br />
Rochester, N.Y.: University of Rochester Press, 2000.<br />
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History.<br />
San Mateo, Calif.: Bluewood Books, 1998.<br />
Fry, William F. “Prince Hamlet and Professor Koch.” Perspectives in<br />
Biology and Medicine 40, no. 3 (Spring, 1997).<br />
Lutwick, Larry I. New Vaccines and New Vaccine Technology. Philadelphia:<br />
Saunders, 1999.
Tungsten filament<br />
The invention: Metal filament used in the incandescent light bulbs<br />
that have long provided most of the world’s electrical lighting.<br />
The people behind the invention:<br />
William David Coolidge (1873-1975), an American electrical<br />
engineer<br />
Thomas Alva Edison (1847-1931), an American inventor<br />
The Incandescent Light Bulb<br />
The electric lamp developed along with an understanding of<br />
electricity in the latter half of the nineteenth century. In 1841, the<br />
first patent for an incandescent lamp was granted in Great Britain. A<br />
patent is a legal claim that protects the patent holder for a period of<br />
time from others who might try to copy the invention and make a<br />
profit from it. Although others tried to improve upon the incandescent<br />
lamp, it was not until 1877, when Thomas Alva Edison, the famous<br />
inventor, became interested in developing a successful electric<br />
lamp, that real progress was made. The Edison Electric Light<br />
Company was founded in 1878, and in 1892, it merged with other<br />
companies to form the General Electric Company.<br />
Early electric lamps used platinum wire as a filament. Because<br />
platinum is expensive, alternative filament materials were sought.<br />
After testing many substances, Edison finally decided to use carbon<br />
as a filament material. Although carbon is fragile, making it difficult<br />
to manufacture filaments, it was the best choice available at the time.<br />
The Manufacture of Ductile Tungsten<br />
Edison and others had tested tungsten as a possible material for<br />
lamp filaments but discarded it as unsuitable. Tungsten is a hard,<br />
brittle metal that is difficult to shape and easy to break, but it possesses<br />
properties that are needed for lamp filaments. It has the highest<br />
melting point (3,410 degrees Celsius) of any known metal; therefore,<br />
it can be heated to a very high temperature, giving off a
relatively large amount of radiation without melting (as platinum<br />
does) or decomposing (as carbon does). The radiation it emits when<br />
heated is primarily visible light. Its resistance to the passage of electricity<br />
is relatively high, so it requires little electric current to reach<br />
its operating temperature. It also has a high boiling point (about 5,900 degrees<br />
Celsius) and therefore does not tend to boil away, or vaporize,<br />
when heated. In addition, it is mechanically strong, resisting breaking<br />
caused by mechanical shock.<br />
William David Coolidge, an electrical engineer with the General<br />
Electric Company, was assigned in 1906 the task of transforming<br />
tungsten from its natural state into a form suitable for lamp filaments.<br />
The accepted procedure for producing fine metal wires was<br />
(and still is) to force a wire rod through successively smaller holes in<br />
a hard metal block until a wire of the proper diameter is achieved.<br />
The property that allows a metal to be drawn into a fine wire by<br />
means of this procedure is called “ductility.” Tungsten is not naturally<br />
ductile, and it was Coolidge’s assignment to make it into a ductile<br />
form. Over a period of five years, and after many failures, Coolidge<br />
and his workers achieved their goal. By 1911, General Electric<br />
was selling lamps that contained tungsten filaments.<br />
Originally, Coolidge attempted to mix powdered tungsten with a<br />
suitable substance, form a paste, and squirt that paste through a die<br />
to form the wire. The paste-wire was then sintered (heated at a temperature<br />
slightly below its melting point) in an effort to fuse the<br />
powder into a solid mass. Because of its higher boiling point, the<br />
tungsten would remain after all the other components in the paste<br />
boiled away. At about 300 degrees Celsius, tungsten softens sufficiently<br />
to be hammered into an elongated form. Upon cooling, however,<br />
tungsten again becomes brittle, which prevents it from being<br />
shaped further into filaments. It was suggested that impurities in<br />
the tungsten caused the brittleness, but specially purified tungsten<br />
worked no better than the unpurified form.<br />
Many metals can be reduced from rods to wires if the rods are<br />
passed through a series of rollers that are successively closer together.<br />
Some success was achieved with this method when the rollers<br />
were heated along with the metal, but it was still not possible to<br />
produce sufficiently fine wire. Next, Coolidge tried a procedure<br />
called “swaging,” in which a thick wire is repeatedly and rapidly
struck by a series of rotating hammers as the wire is drawn past<br />
them. After numerous failures, a fine wire was successfully produced<br />
using this procedure. It was still too thick for lamp filaments,<br />
but it was ductile at room temperature.<br />
Microscopic examination of the wire revealed a change in the<br />
crystalline structure of tungsten as a result of the various treatments.<br />
The individual crystals had elongated, taking on a fiberlike<br />
appearance. Now the wire could be drawn through a die to achieve<br />
the appropriate thickness. Again, the wire had to be heated, and if<br />
the temperature was too high, the tungsten reverted to a brittle<br />
state. The dies themselves were heated, and the reduction progressed<br />
in stages, each of which reduced the wire’s diameter by a<br />
thous<strong>and</strong>th of an inch.<br />
Finally, Coolidge had been successful. Pressed tungsten bars<br />
measuring 1/4 × 3/8 × 6 inches were hammered and rolled into rods 1/8<br />
inch, or 125/1000 inch, in diameter. The unit 1/1000 inch is often called a<br />
“mil.” These rods were then swaged to approximately 30 mil and<br />
then passed through dies to achieve the filament size of 25 mil or<br />
smaller, depending on the power output of the lamp in which the<br />
filament was to be used. Tungsten wires of 1 mil or smaller are now<br />
readily available.<br />
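The unit arithmetic above can be checked with a short sketch. The definitions (1 mil = 1/1000 inch, a 1/8-inch rod = 125 mil, swaging to about 30 mil, then die reduction in 1-mil stages down to 25 mil) follow the text; the code itself is only an illustration.

```python
from fractions import Fraction

# Unit arithmetic from the text: 1 mil = 1/1000 inch. Exact fractions
# avoid floating-point noise in the conversions.

MIL = Fraction(1, 1000)  # inches

def inches_to_mils(inches):
    """Convert a diameter in inches to mils."""
    return Fraction(inches) / MIL

rod = inches_to_mils(Fraction(1, 8))  # pressed bars rolled into 1/8-inch rods
print(rod)                            # 125 (mils)

# Swaged to about 30 mil, then drawn through dies one mil at a time:
diameters = list(range(30, 24, -1))
print(diameters)                      # [30, 29, 28, 27, 26, 25]
```

The exact-fraction approach makes the 1/8 inch = 125 mil identity in the text easy to verify, and the staged list mirrors the one-mil-per-die reduction the passage describes.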
Impact<br />
Ductile tungsten wire filaments are superior in several respects<br />
to platinum, carbon, or sintered tungsten filaments. Ductile filament<br />
lamps can withst<strong>and</strong> more mechanical shock without breaking.<br />
This means that they can be used in, for example, automobile<br />
headlights, in which jarring frequently occurs. Ductile wire can also<br />
be coiled into compact cylinders within the lamp bulb, which makes<br />
for a more concentrated source of light and easier focusing. Ductile<br />
tungsten filament lamps require less electricity than do carbon filament<br />
lamps, and they also last longer. Because the size of the filament<br />
wire can be carefully controlled, the light output from lamps<br />
of the same power rating is more reproducible. One 60-watt bulb is<br />
therefore exactly like another in terms of light production.<br />
Improved production techniques have greatly reduced the cost<br />
of manufacturing ductile tungsten filaments and of light-bulb<br />
manufacturing in general. The modern world is heavily dependent<br />
upon this reliable, inexpensive light source, which turns darkness<br />
into daylight.<br />
See also Fluorescent lighting; Memory metal; Steelmaking<br />
process.<br />
Further Reading<br />
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of<br />
Chicago Press, 2001.<br />
Cramer, Carol. Thomas Edison. San Diego, Calif.: Greenhaven Press,<br />
2001.<br />
Israel, Paul. Edison: A Life of Invention. New York: John Wiley, 1998.<br />
Liebhafsky, H. A. William David Coolidge: A Centenarian and His Work.<br />
New York: Wiley, 1974.<br />
Miller, John A. Yankee Scientist: William David Coolidge. Schenectady,<br />
N.Y.: Mohawk Development Service, 1963.
Tupperware<br />
The invention: Trademarked food-storage products that changed<br />
the way Americans viewed plastic products and created a model<br />
for selling products in consumers’ homes.<br />
The people behind the invention:<br />
Earl S. Tupper (1907-1983), founder of Tupperware<br />
Brownie Wise, the creator of the vast home sales network for<br />
Tupperware<br />
Morison Cousins (1934-2001), a designer hired by Tupperware<br />
to modernize its products in the early 1990’s<br />
“The Wave of the Future”?<br />
799<br />
Relying on a belief that plastic was the wave of the future and wanting to improve on the newest refrigeration technology, Earl S. Tupper, who called himself “a ham inventor and Yankee trader,” created an empire of products that changed America’s kitchens. Tupper, a self-taught chemical engineer, began working at Du Pont in the 1930’s. This was a time of important developments in the field of polymers and the technology behind plastics. Wanting to experiment with this new material yet unable to purchase the needed supplies, Tupper went to his employer for help. Because of the limited availability of materials, major chemical companies had been receiving all the raw goods for plastic production. Although Du Pont would not part with raw materials, the company was willing to let Tupper have the slag.

Polyethylene slag was a black, rock-hard, malodorous waste product of oil refining. It was virtually unusable. Undaunted, Tupper developed methods to purify the slag. He then designed an injection molding machine to form bowls and other containers out of his “Poly-T.” Tupper did not want to call the substance plastic because of widespread public distrust of plastic. In 1938, he founded the Tupper Plastics Company to pursue his dream. It was during those first years that he formulated the design for the famous Tupperware seal.
Refrigeration techniques had improved tremendously during the first part of the twentieth century. The iceboxes in use prior to the 1940’s were inconsistent in their interior conditions and were usually damp inside because of melting of the ice. In addition, the metal, glass, or earthenware food storage containers used during the first half of the century did not seal tightly and allowed food to stay moist. Iceboxes allowed mixing of food odors, particularly evident with strong-smelling items such as onions and fish.
Electric Refrigerators

In contrast to iceboxes, the electric refrigerators available starting in the 1940’s maintained dry interiors and low temperatures. This change in environment resulted in food drying out and wilting. Tupper set out to alleviate this problem through his plastic containers. The key to Tupper’s solution was his containers’ seal. He took his design from paint can lids and inverted it. This tight seal created a partial vacuum that protected food from the dry refrigeration process and kept food odors sealed within containers.

In 1942, Tupper bought his first manufacturing plant, in Farnumsville, Massachusetts. There he continued to improve on his designs. In 1945, Tupper introduced Tupperware, selling it through hardware and department stores as well as through catalog sales. Tupperware products were made of flexible, translucent plastic. Available in frosted crystal and five pastel colors, the new containers were airtight and waterproof. In addition, they carried a lifetime warranty against chipping, cracking, peeling, and breaking in normal noncommercial use. Early supporters of Tupperware included the American Thermos Bottle Company, which purchased seven million nesting cups, and the Tek Corporation, which ordered fifty thousand tumblers to sell with toothbrushes.
Even though he benefited from this type of corporate support, Tupper wanted his products to be for home use. Marketing the new products proved to be difficult in the early years. Tupperware sat on hardware and department store shelves, and catalog sales were nearly nonexistent. The problem appeared to involve a basic distrust of plastic by consumers and an unfamiliarity with how to use the new products. The product did not come with instructions on how to seal the containers or descriptions of how the closed container protected the food within. Brownie Wise, an early direct seller and veteran distributor of Stanley Home Products, stated that it took her several days to understand the technology behind the seal and the now-famous Tupperware “burp,” the sound made when air leaves the container as it seals.

Wise and two other direct sellers, Tom Damigella and Harvey Hollenbush, found the niche for selling Tupperware for daily use—home sales. Wise approached Tupper with a home party sales strategy and detailed how it provided a relaxed atmosphere in which to learn about the products and thus lowered sales resistance. In April, 1951, Tupper took his product off store shelves and hired Wise to create a new direct selling system under the name of Tupperware Home Parties, Inc.
Impact
Home sales had already proved to be successful for the Fuller Brush Company and numerous encyclopedia publishers, yet Brownie Wise wanted to expand the possibilities. Her first step was to found a campus-like headquarters in Kissimmee, Florida. There, Tupper and a design department worked to develop new products, and Tupperware Home Parties, Inc., under Wise’s direction, worked to develop new incentives for Tupperware’s direct sellers, called hostesses.

Wise added spark to the notion of home demonstrations. “Parties,” as they were called, included games, recipes, giveaways, and other ideas designed to help housewives learn how to use Tupperware products. The marketing philosophy was to make parties appealing events at which women could get together while their children were in school. This fit into the suburban lifestyle of the 1950’s. These parties offered a nonthreatening means for home sales representatives to attract audiences for their demonstrations and gave guests a chance to meet and socialize with their neighbors. Often compared to the barbecue parties of the 1950’s, Tupperware parties were social, yet educational, affairs. While guests ate lunch or snacked on desserts, the Tupperware hostess educated them about the technology behind the bowls and their seals as well as suggesting a wide variety of uses for the products. For example, a party might include recipes for dinner parties, with information provided on how party leftovers could be stored efficiently and economically with Tupperware products.
While Tupperware products were changing the kitchens of America, they were also changing the women who sold them (almost all the hostesses were women). Tupperware sales offered employment for women at a time when society disapproved of women working outside the home. Being a hostess, however, was not a nine-to-five position. The job allowed women freedom to tailor their schedules to meet family needs. Employment offered more than the economic incentive of 35 percent of gross sales. Hostesses also learned new skills and developed self-esteem. An acclaimed mentoring program for new and advancing employees provided motivational training. Managers came only from the ranks of hostesses; moving up the corporate ladder meant spending time selling Tupperware at home parties.

The opportunity to advance offered incentive. In addition, annual sales conventions were renowned for teaching new marketing strategies in fun-filled classes. These conventions also gave women an opportunity to network and establish contacts. These experiences proved to be invaluable as women entered the workforce in increasing numbers in later decades.
Expanding Home-Sales Business
The tremendous success of Tupperware’s marketing philosophy helped to set the stage for other companies to enter home sales. These companies used home-based parties to educate potential customers in familiar surroundings, in their own homes or in the homes of friends. The Mary Kay Cosmetics Company, founded in 1963, used beauty makeovers in the home party setting as its chief marketing tool. Discovery Toys, founded in 1978, encouraged guests to get on the floor and play with the toys demonstrated at its home parties. Both companies extended the socialization aspects found in Tupperware parties.

In addition to setting the standard for home sales, Tupperware is also credited with starting the plastic revolution. Early plastics were of poor quality and cracked or broke easily. This created distrust of plastic products among consumers. Earl Tupper’s demand for quality set the stage for the future of plastics. He started with high-quality resin and developed a process that kept the “Poly-T” from splitting. He then invented an injection molding machine that mass-produced his bowl and cup designs. His standards of quality from start to finish helped other companies expand into plastics. The 1950’s saw a wide variety of products appear in the improved material, including furniture and toys. This shift from wood, glass, and metal to plastic continued for decades.
Earl S. Tupper

Born in 1907, Earl Silas Tupper came from a family of go-getters. His mother, Lulu Clark Tupper, kept a boardinghouse and took in laundry, while his father, Earnest, ran a small farm and greenhouse in New Hampshire. The elder Tupper was also a small-time inventor, patenting a device for stretching out chickens to make cleaning them easier. Earl absorbed the family’s taste for invention and enterprise.

Fresh out of high school in 1925, Tupper vowed to turn himself into a millionaire by the time he was thirty. He started a landscaping and nursery business in 1928, but the Depression led his company, Tupper Tree, into bankruptcy in 1936. Tupper was undeterred. He hired on with Du Pont the next year. Du Pont taught him a great deal about the chemistry and manufacturing of plastics, but it did not give him scope to apply his ideas, so in 1938 he founded the Earl S. Tupper Company. He continued to work as a contractor for Du Pont to make the fledgling company profitable, and during World War II the company made plastic moldings for gas masks and Navy signal lamps. Finally, in the 1940’s Tupper could devote himself to his dream—designing plastic food containers, cups, and such small household conveniences as cases for cigarette packs. Thanks to aggressive, innovative direct marketing, Tupper’s kitchenware, Tupperware, became synonymous with plastic containers during the 1950’s. In 1958 Tupper sold his company to Rexall for $16 million, having finally realized his youthful ambition to make himself wealthy through Yankee wit and hard work. He died in 1983.

Maintaining the position of Tupperware within the housewares market meant keeping current. As more Americans were able to purchase the newest refrigerators, Tupperware expanded to meet their needs. The company added new products, improved marketing strategies, and changed or updated designs. Over the years, Tupperware added baking items, toys, and home storage containers for such items as photographs, sewing materials, and holiday ornaments. The 1980’s and 1990’s brought microwaveable products.
As women moved into the work force in great numbers, Tupperware moved with them. The company introduced lunchtime parties at the workplace and parties at daycare centers for busy working parents. Tupperware also started a fund-raising line, in special colors, that provided organizations with a means to bring in money while not necessitating full-fledged parties. New party themes developed around time-saving techniques and health concerns such as diet planning. Beginning in 1992, customers too busy to attend a party could call a toll-free number, request a catalog, and be put in contact with a “consultant,” as “hostesses” now were called.

Another marketing strategy developed out of a public push for environmentally conscious products. Tupperware consultants stressed the value of buying food in bulk to create less trash as well as saving money. To store these increased purchases, the company developed a new line for kitchen staples called Modular Mates. These stackable containers came in a wide variety of shapes and sizes to hold everything from cereal to flour to pasta. They were made of see-through plastic, allowing the user to see if the contents needed replenishing. Some consultants tailored parties around ideas to better organize kitchen cabinets using the new line. Another environmentally conscious product idea was the Tupperware lunch kit. These kits did away with the need for throwaway products such as paper plates, plastic storage bags, and aluminum foil. Lunch kits marketed in other countries were developed to accommodate the countries’ particular needs. For example, Japanese designs included chopsticks, while Latin American styles were designed to hold tortillas.
Design Changes
Tupperware designs have been well received over the years. Early designs prompted a 1947 edition of House Beautiful to call the product “Fine Art for 39 cents.” Fifteen of Tupper’s earliest designs are housed in a permanent collection at the Museum of Modern Art in New York City. Other museums, such as the Metropolitan Museum of Art and the Brooklyn Museum, also house Tupperware designs. Tupperware established its own Museum of Historic Food Containers at its international headquarters in Florida. Despite this critical acclaim, the company faced a constant struggle to keep product lines competitive with more accessible products, such as those made by Rubbermaid, that could be found on the shelves of local grocery or department stores.

Some of the biggest design changes came with the hiring of Morison Cousins in the early 1990’s. Cousins, an accomplished designer, set out to modernize the Tupperware line. He sought to return to simple, traditional styles while bringing in time-saving aspects. He changed lid designs to make them easier to clean and rounded the bottoms of bowls so that every portion could be scooped out. Cousins also added thumb handles to bowls.
Backed by a knowledgeable sales force and quality product, the company experienced tremendous growth. Tupperware sales reached $25 million in 1954. By 1958, the company had grown from seven distributorships to a vast system covering the United States and Canada. That same year, Brownie Wise left the company, and Tupper Plastics was sold to Rexall Drug Company for $16 million. Rexall Drug changed its name to Dart Industries, Inc., in 1969, then merged with Kraft, Inc., eleven years later to become Dart and Kraft, Inc. During this time of parent-company name changing, Tupperware continued to be an important subsidiary. Through the 1960’s and 1970’s, the company spread around the world, with sales in Western Europe, the Far East, and Latin America. In 1986, Dart and Kraft, Inc., split into Kraft, Inc., and Premark International, Inc., of which Dart (and therefore Tupperware) was a subsidiary. Premark International included other home product companies such as West Bend, Precor, and Florida Tile.

By the early 1990’s, annual sales of Tupperware products reached $1.1 billion. Manufacturing plants in Halls, Tennessee, and Hemingway, South Carolina, worked to meet the high demand for Tupperware products in more than fifty countries. Foreign sales accounted for almost 75 percent of the company’s business. By meeting the needs of consumers and keeping current with design changes, new sales techniques, and new products, Tupperware was able to reach 90 percent of America’s homes.
See also Electric refrigerator; Food freezing; Freeze-drying; Microwave cooking; Plastic; Polystyrene; Pyrex glass; Teflon.
Further Reading

Brown, Patricia Leigh. “New Designs to Keep Tupperware Fresh.” New York Times (June 10, 1993).

Clarke, Alison J. Tupperware: The Promise of Plastic in 1950s America. Washington, D.C.: Smithsonian Institution Press, 1999.

Gershman, Michael. Getting It Right the Second Time. Reading, Mass.: Addison-Wesley, 1990.

Martin, Douglas. “Morison S. Cousins, Sixty-six, Designer, Dies; Revamped Tupperware’s Look with Flair.” New York Times (February 18, 2001).

Sussman, Vic. “I Was the Only Virgin at the Party.” Sales and Marketing Management 141 (September 1, 1989).
Turbojet

The invention: A jet engine with a turbine-driven compressor that uses its hot-gas exhaust to develop thrust.

The people behind the invention:
Henry Harley Arnold (1886-1950), a chief of staff of the U.S. Army Air Corps
Gerry Sayer, a chief test pilot for Gloster Aircraft Limited
Hans Pabst von Ohain (1911-1998), a German engineer
Sir Frank Whittle (1907-1996), an English Royal Air Force officer and engineer

Developments in Aircraft Design
On the morning of May 15, 1941, some eleven months after France had fallen to Adolf Hitler’s advancing German army, an experimental jet-propelled aircraft was successfully tested by pilot Gerry Sayer. The airplane had been developed in a little more than two years by the English company Gloster Aircraft under the supervision of Sir Frank Whittle, the inventor of England’s first jet engine.

Like the jet engine that powered it, the plane had a number of predecessors. In fact, the May, 1941, flight was not the first jet-powered test flight: That flight occurred on August 27, 1939, when a Heinkel aircraft powered by a jet engine developed by Hans Pabst von Ohain completed a successful test flight in Germany. During this period, Italian airplane builders were also engaged in jet aircraft testing, with lesser degrees of success.

Without the knowledge that had been gained from Whittle’s experience in experimental aviation, the test flight at the Royal Air Force’s Cranwell airfield might never have been possible. Whittle’s repeated efforts to develop turbojet propulsion engines had begun in 1928, when, as a twenty-one-year-old Royal Air Force (RAF) flight cadet at Cranwell Academy, he wrote a thesis entitled “Future Developments in Aircraft Design.” One of the principles of Whittle’s earliest research was that if aircraft were eventually to achieve very high speeds over long distances, they would have to fly at very high altitudes, benefiting from the reduced wind resistance encountered at such heights.

Whittle later stated that the speed he had in mind at that time was about 805 kilometers per hour—close to that of the first jet-powered aircraft. His earliest idea of the engines that would be necessary for such planes focused on rocket propulsion (that is, “jets” in which the fuel and oxygen required to produce the explosion needed to propel an air vehicle are entirely contained in the engine) or, alternatively, on gas turbines driving propellers at very high speeds. Later, it occurred to him that gas turbines could be used to provide forward thrust by what would become “ordinary” jet propulsion (that is, “thermal air” engines that take from the surrounding atmosphere the oxygen they need to ignite their fuel). Eventually, such ordinary jet engines would function according to one of four possible systems: the so-called athodyd, or continuous-firing duct; the pulsejet, or intermittent-firing duct; the turbojet, or gas-turbine jet; or the propjet, which uses a gas turbine jet to rotate a conventional propeller at very high speeds.
Passing the Test

The aircraft that was to be used to test flight performance was completed by April, 1941. On April 7, tests were conducted on the ground at Gloster Aircraft’s landing strip at Brockworth by chief test pilot Sayer. At this point, all parties concerned tried to determine whether the jet engine’s capacity would be sufficient to push the aircraft forward with enough speed to make it airborne. Sayer dared to take the plane off the ground for a limited distance of between 183 meters and 273 meters, despite the technical staff’s warnings against trying to fly in the first test flights.

On May 15, the first real test was conducted at Cranwell. During that test, Sayer flew the plane, now called the Pioneer, for seventeen minutes at altitudes exceeding 300 meters and at a conservative test speed exceeding 595 kilometers per hour, which was equivalent to the top speed then possible in the RAF’s most versatile fighter plane, the Spitfire.

Once it was clear that the tests undertaken at Cranwell were not only successful but also highly promising in terms of even better performance, a second, more extensive test was set for May 21, 1941. It was this later demonstration that caused the Ministry of Aircraft Production (MAP) to initiate the first steps to produce the Meteor jet fighter aircraft on a full industrial scale barely more than a year after the Cranwell test flight.
Impact
Since July, 1936, the Junkers engine and aircraft companies in Hitler’s Germany had been a part of a new secret branch dedicated to the development of a turbojet-driven aircraft. In the same period, Junkers’ rival in the German aircraft industry, Heinkel, Inc., approached von Ohain, who was far enough along in his work on the turbojet principle to have patented a device very similar to Whittle’s in 1935. A later model of this jet engine would power a test aircraft in August, 1939.

In the meantime, the wider impact of the flight was the result of decisions made by General Henry Harley Arnold, chief of staff of the U.S. Army Air Corps. Even before learning of the successful flight in May, he made arrangements to have one of Whittle’s engines shipped to the United States to be used by General Electric Company as a model for U.S. production. The engine arrived in October, 1941, and within one year, a General Electric-built engine powered a Bell Aircraft plane, the XP-59A Airacomet, in its maiden flight.

The jet airplane was not perfected in time to have any significant impact on the outcome of World War II, but all of the wartime experimental jet aircraft developments that were either sparked by the flight in 1941 or preceded it prepared the way for the research and development projects that would leave a permanent revolutionary mark on aviation history in the early 1950’s.

See also Airplane; Dirigible; Rocket; Stealth aircraft; Supersonic passenger plane; V-2 rocket.
Further Reading

Adams, Robert. “Smithsonian Horizons.” Smithsonian 18 (July, 1987).

Boyne, Walter J., Donald S. Lopez, and Anselm Franz. The Jet Age: Forty Years of Jet Aviation. Washington: National Air and Space Museum, 1979.

Constant, Edward W. The Origins of the Turbojet Revolution. Baltimore: Johns Hopkins University Press, 1980.

Launius, Roger D. Innovation and the Development of Flight. College Station: Texas A&M University Press, 1999.
Typhus vaccine

The invention: The first effective vaccine against the virulent typhus disease.

The person behind the invention:
Hans Zinsser (1878-1940), an American bacteriologist and immunologist
Studying Diseases

As a bacteriologist and immunologist, Hans Zinsser was interested in how infectious diseases spread. During an outbreak of typhus in Serbia in 1915, he traveled with a Red Cross team so that he could study the disease. He made similar trips to the Soviet Union in 1923, Mexico in 1931, and China in 1938. His research showed that, as had been suspected, typhus was caused by the rickettsia, an organism that had been identified in 1916 by Henrique da Rocha-Lima. The organism was known to be carried by a louse or a rat flea and transmitted to humans through a bite. Poverty, dirt, and overcrowding led to environments that helped the typhus disease to spread.

The rickettsia is a microorganism that is rod-shaped or spherical. Within the insect’s body, it works its way into the cells that line the gut. Multiplying within this tissue, the rickettsia passes from the insect body with the feces. Since its internal cells are being destroyed, the insect dies within three weeks after it has been infected with the microorganism. As the infected flea or louse feeds on a human, it causes itching. When the bite is scratched, the skin may be opened, and the insect feces, carrying rickettsia, can then enter the body. Also, dried airborne feces can be inhaled.

Once inside the human, the rickettsia invades endothelial cells and causes an inflammation of the blood vessels. Cell death results, and this leads to tissue death. In a few days, the infected person may have a rash, a severe headache, a fever, dizziness, ringing in the ears, or deafness. Also, light may hurt the person’s eyes, and the thinking processes become foggy and mixed up. (The word “typhus” comes from a Greek word meaning “cloudy” or “misty.”) Without treatment, the victim dies within nine to eighteen days.

Medical science now recognizes three forms of typhus: the epidemic louse-borne, the Brill-Zinsser, and the murine (or rodent-related) form. The epidemic louse-borne (or “classical”) form is the most severe. The Brill-Zinsser (or “endemic”) form is similar but less severe. The murine form of typhus is also milder than the epidemic type.

In 1898, a researcher named Brill studied typhus among immigrants in New York City; the form of typhus he found was called “Brill’s disease.” In the late 1920’s, Hermann Mooser proved that Brill’s disease was carried by the rat flea.

When Zinsser began his work on typhus, he realized that what was known about the disease had never been properly organized. Zinsser and his coworkers, including Mooser and others, worked to identify the various types of typhus. In the 1930’s, Zinsser suggested that the typhus studied by Brill in New York City had actually included two types: the rodent-associated form and Brill’s disease. As a result of Zinsser’s effort to identify the types of typhus disease, it was renamed Brill-Zinsser disease.
Making a Vaccine

Zinsser’s studies had shown him that the disease-causing organism in typhus contained some kind of antigen, most likely a polysaccharide. In 1932, Zinsser would identify agglutinins, or antibodies, in the blood serum of patients who had the murine and classical forms of typhus. Zinsser believed that a vaccine could be developed to prevent the spread of typhus. He realized, however, that a large number of dead microorganisms was needed to help people develop an immunity.

Zinsser and his colleagues set out to develop a method of growing organisms in large quantities in tissue culture. The infected tissue was used to inoculate large quantities of normal chick tissue, and this tissue was then grown in flasks. In this way, Zinsser’s team was able to produce the quantities of microorganisms they needed.

The type of immunization that Zinsser developed (in 1930) is known as “active immunity.” The infecting organisms carry antigens, which stimulate the production of antibodies. The antigens can elicit an immune reaction even if the cell is weak or dead. “B” cells and macrophages, both of which are used in fighting disease organisms, recognize and respond to the antigen. The B cells produce antibodies that can destroy the invading organism directly or attract more macrophages to the area so that they can attack the organism. B cells also produce “memory cells,” which remain in the blood and trigger a quick second response if there is a later infection. Since the vaccine contains weakened or dead organisms, the person who is vaccinated may have a mild reaction but does not actually come down with the disease.
Impact
Typhus is still common in many parts of the world, especially<br />
where there is poverty <strong>and</strong> overcrowding. Classical typhus is quite<br />
rare; the last report of this type of typhus in the United States was in<br />
1921. Endemic <strong>and</strong> murine typhus are more common. In the United<br />
States, where children are vaccinated against the disease, only about<br />
fifty cases are now reported each year. Antibiotics such as tetracycline<br />
<strong>and</strong> chloramphenicol are effective in treating the disease, so<br />
few infected people now die of the disease in areas where medical<br />
care is available.<br />
The work of Zinsser <strong>and</strong> his colleagues was very important in<br />
stopping the spread of typhus. Zinsser’s classification of different<br />
types of the disease meant that it was better understood, <strong>and</strong> this<br />
led to the development of cures. The control of lice <strong>and</strong> rodents <strong>and</strong><br />
improved cleanliness in living conditions helped bring typhus under<br />
control. Once Zinsser’s vaccine was available, even people who<br />
lived in crowded inner cities could be protected against the disease.<br />
Zinsser’s research in growing the rickettsia in tissue culture also<br />
inspired further work. Other researchers modified <strong>and</strong> improved<br />
his technique so that the use of tissue culture is now standard in laboratories.<br />
See also Antibacterial drugs; Birth control pill; Penicillin; Polio<br />
vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Tuberculosis vaccine;<br />
Yellow fever vaccine.
Further Reading<br />
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History.<br />
San Mateo, Calif.: Bluewood Books, 1998.<br />
Gray, Michael W. “Rickettsia in Medicine <strong>and</strong> History.” Nature 396,<br />
no. 6707 (November, 1998).<br />
Hoff, Brent H., Carter Smith, <strong>and</strong> Charles H. Calisher. Mapping Epidemics:<br />
A Historical Atlas of Disease. New York: Franklin Watts,<br />
2000.
Ultracentrifuge<br />
The invention: A super-high-velocity centrifuge designed to separate<br />
colloidal or submicroscopic substances, the ultracentrifuge<br />
was used to measure the molecular weight of proteins <strong>and</strong><br />
proved that proteins are large molecules.<br />
The people behind the invention:<br />
Theodor Svedberg (1884-1971), a Swedish physical chemist <strong>and</strong><br />
1926 Nobel laureate in chemistry<br />
Jesse W. Beams (1898-1977), an American physicist<br />
Arne Tiselius (1902-1971), a Swedish physical biochemist <strong>and</strong><br />
1948 Nobel laureate in chemistry<br />
Svedberg Studies Colloids<br />
Theodor “The” Svedberg became the principal founder of molecular<br />
biology when he invented the ultracentrifuge <strong>and</strong> used it to<br />
examine proteins in the mid-1920’s. He began to study materials<br />
called “colloids” as a Swedish chemistry student at the University<br />
of Uppsala <strong>and</strong> continued to conduct experiments with colloidal<br />
systems when he joined the faculty in 1907. A colloid is a kind of<br />
mixture in which very tiny particles of one substance are mixed<br />
uniformly with a dispersing medium (often water) <strong>and</strong> remain<br />
suspended indefinitely. These colloidal dispersions play an important<br />
role in many chemical <strong>and</strong> biological systems.<br />
The size of the colloid particles must fall within a certain<br />
range. The force of gravity will cause them to settle if they are too<br />
large. If they are too small, the properties of the mixture change,<br />
<strong>and</strong> a solution is formed. Some examples of colloidal systems include<br />
mayonnaise, soap foam, marshmallows, the mineral opal,<br />
fog, India ink, jelly, whipped cream, butter, paint, <strong>and</strong> milk.<br />
Svedberg wondered what such different materials could have in<br />
common. His early work helped to explain why colloids remain<br />
in suspension. Later, he developed the ultracentrifuge to measure<br />
the weight of colloid particles by causing them to settle in a controlled<br />
way.
Svedberg Builds an Ultracentrifuge<br />
Svedberg was a successful chemistry professor at the University<br />
of Uppsala in Sweden when he had the idea that colloids could be<br />
made to separate from suspension by means of centrifugal force.<br />
Centrifugal force is caused by circular motion <strong>and</strong> acts on matter<br />
much as gravity does. A person can feel this force by tying a ball to a<br />
rope <strong>and</strong> whirling it rapidly in a circle. The pull on the rope becomes<br />
stronger as the ball moves faster in its circular orbit. A centrifuge<br />
works the same way: It is a device that spins balanced containers of<br />
substances very rapidly.<br />
Svedberg figured that it would take a centrifugal force thousands<br />
of times the force of gravity to cause colloid particles to settle. How<br />
fast they settle depends on their size <strong>and</strong> weight, so the ultracentrifuge<br />
can also provide a measure of these properties. Centrifuges were<br />
already used to separate cream from whole milk <strong>and</strong> blood corpuscles<br />
from plasma, but these centrifuges were too slow to cause the<br />
separation of colloids. An ultracentrifuge—one that could spin samples<br />
much faster—was needed, <strong>and</strong> Svedberg made plans to build one.<br />
The opportunity came in 1923, when Svedberg spent eight months<br />
as visiting professor in the chemistry department of the University<br />
of Wisconsin at Madison <strong>and</strong> worked with J. Burton Nichols, one of<br />
the six graduate students assigned to assist him. Here, Svedberg announced<br />
encouraging results with an electrically driven centrifuge—not<br />
yet an ultracentrifuge—which generated a centrifugal force<br />
about 150 times that of gravity. Svedberg returned to Sweden<br />
<strong>and</strong>, within a year, built a centrifuge capable of generating 7,000<br />
times the force of gravity. He used it with Herman Rinde, a colleague<br />
at the University of Uppsala, to separate the suspended particles<br />
of colloidal gold. This was in 1924, which is generally accepted<br />
as the date of the first use of a true ultracentrifuge. From 1925 to<br />
1926, Svedberg raised the funds to build an even more powerful ultracentrifuge.<br />
It would be driven by an oil turbine, a machine capable<br />
of producing more than 40,000 revolutions per minute to generate<br />
a force 100,000 times that of gravity.<br />
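The rotation speeds and gravity multiples quoted here are linked by the standard relation for relative centrifugal force, RCF = ω²r/g. The sketch below checks the 40,000-rpm figure; the rotor radius is an assumed value chosen for illustration, not a dimension reported for Svedberg's machine.

```python
import math

def relative_centrifugal_force(rpm: float, radius_m: float) -> float:
    """Centrifugal field in multiples of gravity: (omega**2 * r) / g."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
    return omega ** 2 * radius_m / 9.81  # divide by g to express as "times gravity"

# At 40,000 rpm, an assumed rotor radius of about 5.6 cm gives a field
# on the order of 100,000 times gravity, consistent with the text.
print(f"{relative_centrifugal_force(40_000, 0.056):,.0f} x g")
```

For the same assumed radius, the earlier 7,000-gravity machine would correspond to a spin rate of roughly 10,000 revolutions per minute by this formula.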
Svedberg <strong>and</strong> Robin Fahraeus used the new ultracentrifuge to<br />
separate the protein hemoglobin from its colloidal suspension. Together<br />
with fats <strong>and</strong> carbohydrates, proteins are one of the most
abundant organic constituents of living organisms. No protein had<br />
been isolated in pure form before Svedberg began this study, <strong>and</strong> it<br />
was uncertain whether proteins consisted of molecules of a single<br />
compound or mixtures of different substances working together in<br />
biological systems. The colloid particles of Svedberg’s previous<br />
studies separated at different rates, some settling faster than others,<br />
showing that they had different sizes <strong>and</strong> weights. Colloid particles<br />
of the protein, however, separated together. The uniform separation<br />
observed for proteins, such as hemoglobin, demonstrated for the<br />
first time that each protein consists of identical well-defined molecules.<br />
More than one hundred proteins were studied by Svedberg<br />
<strong>and</strong> his coworkers, who extended their technique to carbohydrate<br />
polymers such as cellulose <strong>and</strong> starch.<br />
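A sedimentation run yields a molecular weight through the Svedberg equation, M = sRT / (D(1 − v̄ρ)), which combines the measured sedimentation coefficient with diffusion data. The sketch below uses typical textbook values for hemoglobin; all the numeric inputs are assumptions for illustration, not figures from this account.

```python
R = 8.314  # gas constant, J/(mol*K)

def svedberg_molar_mass(s: float, diff: float, vbar: float,
                        rho: float, temp_k: float) -> float:
    """Svedberg equation: M = s*R*T / (D * (1 - vbar*rho)), in kg/mol."""
    return s * R * temp_k / (diff * (1.0 - vbar * rho))

# Assumed, typical values for hemoglobin in water near 20 degrees C:
m = svedberg_molar_mass(
    s=4.5e-13,      # sedimentation coefficient in seconds (4.5 svedbergs)
    diff=6.9e-11,   # diffusion coefficient, m^2/s
    vbar=0.749e-3,  # partial specific volume, m^3/kg
    rho=998.0,      # solvent density, kg/m^3
    temp_k=293.0,
)
print(f"about {m:.0f} kg/mol")  # in the vicinity of hemoglobin's ~64 kDa
```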
Impact<br />
Svedberg built more <strong>and</strong> more powerful centrifuges so that smaller<br />
<strong>and</strong> smaller molecules could be studied. In 1936, he built an ultracentrifuge<br />
that produced a centrifugal force of more than a half-million<br />
times the force of gravity. Jesse W. Beams was an American<br />
pioneer in ultracentrifuge design. He reduced the friction of an air-driven<br />
rotor by first housing it in a vacuum, in 1934, <strong>and</strong> later by<br />
supporting it with a magnetic field.<br />
The ultracentrifuge was a central tool for providing a modern understanding<br />
of the molecular basis of living systems, <strong>and</strong> it is employed<br />
in thousands of laboratories for a variety of purposes. It is<br />
used to analyze the purity <strong>and</strong> the molecular properties of substances<br />
containing large molecules, from the natural products of the biosciences<br />
to the synthetic polymers of chemistry. The ultracentrifuge is<br />
also employed in medicine to analyze body fluids, <strong>and</strong> it is used in biology<br />
to isolate viruses and the components of fractionated cells.<br />
Svedberg, while at Wisconsin in 1923, invented a second, very<br />
different method to separate proteins in suspension using electric<br />
currents. It is called “electrophoresis,” <strong>and</strong> it was later improved by<br />
his student, Arne Tiselius, for use in his famous study of the proteins<br />
in blood serum. The technique of electrophoresis is as widespread<br />
<strong>and</strong> important as is the ultracentrifuge.<br />
See also Ultramicroscope; X-ray crystallography.<br />
Further Reading<br />
Lechner, M. D. Ultracentrifugation. New York: Springer, 1994.<br />
Rickwood, David. Preparative Centrifugation: A Practical Approach.<br />
New York: IRL Press at Oxford University Press, 1992.<br />
Schuster, Todd M. Modern Analytical Ultracentrifugation: Acquisition<br />
<strong>and</strong> Interpretation of Data for Biological <strong>and</strong> Synthetic Polymer Systems.<br />
Boston: Birkhäuser, 1994.<br />
Svedberg, Theodor B., Kai Oluf Pedersen, <strong>and</strong> Johannes Henrik<br />
Bauer. The Ultracentrifuge. Oxford: Clarendon Press, 1940.
Ultramicroscope<br />
The invention: A microscope characterized by high-intensity illumination<br />
for the study of exceptionally small objects, such as colloidal<br />
substances.<br />
The people behind the invention:<br />
Richard Zsigmondy (1865-1929), an Austrian-born German<br />
organic chemist who won the 1925 Nobel Prize in Chemistry<br />
H. F. W. Siedentopf (1872-1940), a German physicist-optician<br />
Marian von Smoluchowski (1872-1917), a Polish physicist<br />
Accidents of Alchemy<br />
Richard Zsigmondy’s invention of the ultramicroscope grew out<br />
of his interest in colloidal substances. Colloids consist of tiny particles<br />
of a substance that are dispersed throughout a solution of another<br />
material or substance (for example, salt in water). Zsigmondy<br />
first became interested in colloids while working as an assistant to<br />
the physicist Adolf Kundt at the University of Berlin in 1892. Although<br />
originally trained as an organic chemist, in which discipline<br />
he took his Ph.D. at the University of Munich in 1890, Zsigmondy<br />
became particularly interested in colloidal substances containing<br />
fine particles of gold that produce lustrous colors when painted on<br />
porcelain. For this reason, he abandoned organic chemistry and devoted<br />
his career to the study of colloids.<br />
Zsigmondy began intensive research into his new field of interest<br />
in 1893, when he returned to Austria to accept a post as lecturer at a<br />
technical school at Graz. Zsigmondy became especially interested<br />
in gold-ruby glass, the accidental invention of the seventeenth century<br />
alchemist Johann Kunckel. Kunckel, while pursuing the alchemist’s<br />
pipe dream of transmuting base substances (such as lead)<br />
into gold, discovered instead a method of producing glass with a<br />
beautiful, deep red luster by suspending very fine particles of gold<br />
throughout the liquid glass before it was cooled. Zsigmondy also<br />
began studying a colloidal pigment called “purple of Cassius,” the<br />
discovery of another seventeenth century alchemist, Andreas Cassius.
Zsigmondy soon discovered that purple of Cassius was a colloidal<br />
solution <strong>and</strong> not, as most chemists believed at the time, a chemical<br />
compound. This fact allowed him to develop techniques for<br />
glass <strong>and</strong> porcelain coloring with great commercial value, which led<br />
directly to his 1897 appointment to a research post with the Schott<br />
Glass Manufacturing Company in Jena, Germany. With the Schott<br />
Company, Zsigmondy concentrated on the commercial production<br />
of colored glass objects. His most notable achievement during this<br />
period was the invention of Jena milk glass, which is still prized by<br />
collectors throughout the world.<br />
Brilliant Proof<br />
While studying colloids, Zsigmondy devised experiments that<br />
proved that purple of Cassius was colloidal. When he published the<br />
results of his research in professional journals, however, they were<br />
not widely accepted by the scientific community. Other scientists<br />
were not able to replicate Zsigmondy’s experiments <strong>and</strong> consequently<br />
denounced them as flawed. The criticism of his work in<br />
technical literature stimulated Zsigmondy to make his greatest discovery,<br />
the ultramicroscope, which he developed to prove his theories<br />
regarding purple of Cassius.<br />
The problem with proving the exact nature of purple of Cassius<br />
was that the scientific instruments available at the time were not<br />
sensitive enough for direct observation of the particles suspended<br />
in a colloidal substance. Using the facilities <strong>and</strong> assisted by the staff<br />
(especially H. F. W. Siedentopf, an expert in optical lens grinding) of<br />
the Zeiss Glass Manufacturing Company of Jena, Zsigmondy developed<br />
an ingenious device that permitted direct observation of individual<br />
colloidal particles.<br />
This device, which its developers named the “ultramicroscope,”<br />
made use of a principle that already existed. Sometimes called “dark-field<br />
illumination,” this method consisted of shining a light (usually<br />
sunlight focused by mirrors) through the solution under the microscope<br />
at right angles to the observer, rather than shining the light directly<br />
from the observer into the solution. The resulting effect is similar<br />
to that obtained when a beam of sunlight is admitted to a closed<br />
room through a small window. If an observer stands back from and at<br />
right angles to such a beam, many dust particles suspended in the air<br />
will be observed that otherwise would not be visible.<br />
Zsigmondy’s device shines a very bright light through the substance<br />
or solution being studied. From the side, the microscope then<br />
focuses on the light shaft. This process enables the observer using<br />
the ultramicroscope to view colloidal particles that are ordinarily<br />
invisible even to the strongest conventional microscope. To a scientist<br />
viewing purple of Cassius, for example, colloidal gold particles<br />
as small as one ten-millionth of a millimeter in size become visible.<br />
Impact<br />
Richard Zsigmondy<br />
Born in Vienna, Austria, in 1865, Richard Adolf Zsigmondy<br />
came from a talented, energetic family. His father, a celebrated<br />
dentist <strong>and</strong> inventor of medical equipment, inspired his children<br />
to study the sciences, while his mother urged them to<br />
spend time outdoors in strenuous exercise. Although his father<br />
died when Zsigmondy was fifteen, the teenager’s interest in<br />
chemistry was already firmly established. He read advanced<br />
chemistry textbooks <strong>and</strong> worked on experiments in his own<br />
home laboratory.<br />
After taking his doctorate at the University of Munich <strong>and</strong><br />
teaching in Berlin <strong>and</strong> Graz, Austria, he became an industrial<br />
chemist at the glassworks in Jena, Germany. However, pure research<br />
was his love, <strong>and</strong> he returned to it, working entirely on<br />
his own after 1900. In 1907 he received an appointment as professor<br />
<strong>and</strong> director of the Institute of Inorganic Chemistry at the<br />
University of Göttingen, one of the scientific centers of the<br />
world. There he accomplished much of his ground-breaking<br />
work on colloids <strong>and</strong> Brownian motion, despite the severe<br />
shortages that hampered him during the economic depression<br />
in Germany following World War I. His 1925 Nobel Prize in<br />
Chemistry, especially the substantial money award, helped him<br />
overcome his supply problems. He retired in early 1929 <strong>and</strong><br />
died seven months later.<br />
After Zsigmondy’s invention of the ultramicroscope in 1902,<br />
the University of Göttingen appointed him professor of inorganic
chemistry <strong>and</strong> director of its Institute for Inorganic Chemistry.<br />
Using the ultramicroscope, Zsigmondy <strong>and</strong> his associates quickly<br />
proved that purple of Cassius is indeed a colloidal substance.<br />
That finding, however, was the least of the spectacular discoveries<br />
that resulted from Zsigmondy’s invention. In the next decade,<br />
Zsigmondy <strong>and</strong> his associates found that color changes in colloidal<br />
gold solutions result from coagulation—that is, from changes in the<br />
size <strong>and</strong> number of gold particles in the solution caused by particles<br />
bonding together. Zsigmondy found that coagulation occurs when<br />
the negative electrical charge of the individual particles is removed<br />
by the addition of salts. Coagulation can be prevented or slowed by<br />
the addition of protective colloids.<br />
These observations also made possible the determination of the<br />
speed at which coagulation takes place, as well as the number of particles<br />
in the colloidal substance being studied. With the assistance of<br />
the physicist Marian von Smoluchowski, Zsigmondy worked<br />
out a complete mathematical formula of colloidal coagulation that is<br />
valid not only for gold colloidal solutions but also for all other<br />
colloids. Colloidal substances include blood <strong>and</strong> milk, which both coagulate,<br />
thus giving Zsigmondy’s work relevance to the fields of<br />
medicine <strong>and</strong> agriculture. These observations <strong>and</strong> discoveries concerning<br />
colloids—in addition to the invention of the ultramicroscope—earned<br />
for Zsigmondy the 1925 Nobel Prize in Chemistry.<br />
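The coagulation formula referred to above treats particle collisions as a second-order rate process: in Smoluchowski's rapid-coagulation result, dn/dt = −kn² integrates to n(t) = n₀/(1 + kn₀t). A minimal sketch; the rate constant and starting concentration below are arbitrary illustrative values.

```python
def particle_count(n0: float, k: float, t: float) -> float:
    """Smoluchowski rapid coagulation: dn/dt = -k*n**2
    integrates to n(t) = n0 / (1 + k*n0*t)."""
    return n0 / (1.0 + k * n0 * t)

# With these assumed values, k*n0*t reaches 1 at t = 10 s, the
# "coagulation half-time" at which the particle count has halved.
n0, k = 1.0e16, 1.0e-17  # particles per m^3; rate constant in m^3/s
print(particle_count(n0, k, 10.0))
```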
See also Scanning tunneling microscope; Ultracentrifuge; X-ray<br />
crystallography.<br />
Further Reading<br />
Zsigmondy, Richard, <strong>and</strong> Jerome Alex<strong>and</strong>er. Colloids <strong>and</strong> the Ultramicroscope.<br />
New York: J. Wiley & Sons, 1909.<br />
Zsigmondy, Richard, Ellwood Barker Spear, <strong>and</strong> John Foote Norton.<br />
The Chemistry of Colloids. New York: John Wiley & Sons, 1917.
Ultrasound<br />
The invention: A medically safe alternative to X-ray examination,<br />
ultrasound uses sound waves to detect fetal problems in pregnant<br />
women.<br />
The people behind the invention:<br />
Ian T. Donald (1910-1987), a British obstetrician<br />
Paul Langévin (1872-1946), a French physicist<br />
Marie Curie (1867-1934) and Pierre Curie (1859-1906), the<br />
French husband-and-wife team that researched and<br />
developed the field of radioactivity<br />
Alice Stewart, a British researcher<br />
An Underwater Beginning<br />
In the early 1900’s, two major events made it essential to develop<br />
an appropriate means for detecting unseen underwater objects. The<br />
first event was the Titanic disaster in 1912, which involved a largely<br />
submerged, unseen, <strong>and</strong> silent iceberg. This iceberg caused the sinking<br />
of the Titanic <strong>and</strong> resulted in the loss of many lives as well as<br />
valuable treasure. The second event was the threat to the Allied<br />
Powers from German U-boats during World War I (1914-1918). This<br />
threat persuaded the French <strong>and</strong> English Admiralties to form a joint<br />
committee in 1917. The Anti-Submarine Detection <strong>and</strong> Investigation<br />
Committee (ASDIC) found ways to counter the German naval<br />
developments. Paul Langévin, a former colleague of Pierre Curie<br />
<strong>and</strong> Marie Curie, applied techniques developed in the Curies’ laboratories<br />
in 1880 to formulate a crude ultrasonic system to detect submarines.<br />
These techniques used beams of sound waves of very high<br />
frequency that were highly focused <strong>and</strong> directional.<br />
The advent of World War II (1939-1945) made necessary the development<br />
of faster electronic detection technology to improve the efforts<br />
of ultrasound researchers. Langévin’s crude invention evolved<br />
into the sophisticated system called “sonar” (sound navigation ranging),<br />
which was important in the success of the Allied forces. Sonar<br />
was based on pulse echo principles <strong>and</strong>, like the system called “ra-
dar” (radio detecting <strong>and</strong> ranging), had military implications. This vital<br />
technology was classified as a military secret <strong>and</strong> was kept hidden<br />
until after the war.<br />
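Pulse-echo ranging, the principle behind both sonar and the later medical scanners, times a reflected pulse and halves the round-trip path. A sketch with assumed sound speeds (roughly 1,500 m/s in seawater and 1,540 m/s in soft tissue):

```python
def echo_distance(round_trip_s: float, speed_m_s: float) -> float:
    """Pulse-echo ranging: the pulse travels out and back,
    so the target distance is half the round-trip path length."""
    return speed_m_s * round_trip_s / 2.0

# Sonar: a 0.4 s echo in seawater (~1,500 m/s) puts the target ~300 m away.
print(echo_distance(0.4, 1500.0))
# Medical ultrasound: a 65-microsecond echo in soft tissue (~1,540 m/s)
# corresponds to a reflector roughly 5 cm deep.
print(echo_distance(65e-6, 1540.0))
```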
An Alternative to X Rays<br />
Ian Donald<br />
Ian Donald was born in Paisley, Scotland, in 1910 and educated<br />
in Edinburgh until he was twenty, when he moved to<br />
South Africa with his parents. He graduated with a bachelor of<br />
arts degree from Diocesan College, Cape Town, <strong>and</strong> then moved<br />
to London to study medicine, graduating from the University of<br />
London in 1937. During World War II he served as a medical officer<br />
in the Royal Air Force <strong>and</strong> received a medal for rescuing flyers<br />
from a burning airplane. After the war he began his long<br />
teaching career in medicine, first at St. Thomas Hospital Medical<br />
School <strong>and</strong> then as the Regius Professor of Midwifery at Glasgow<br />
University. His specialties were obstetrics <strong>and</strong> gynecology.<br />
While at Glasgow he accomplished his pioneering work<br />
with diagnostic ultrasound technology, but he also championed<br />
laparoscopy, breast feeding, <strong>and</strong> the preservation of membranes<br />
during the delivery of babies. In addition to his teaching<br />
duties <strong>and</strong> medical practice he wrote a widely used textbook,<br />
oversaw the building of the Queen Mother’s Hospital in Glasgow,<br />
and campaigned against England’s 1967 Abortion Act.<br />
His expertise with ultrasound came to his own rescue after<br />
he had cardiac surgery in the 1960’s. He diagnosed himself as<br />
having internal bleeding from a broken blood vessel. The cardiologists<br />
taking care of him were skeptical until an ultrasound<br />
proved him right. Widely honored among physicians, he died<br />
in England in 1987.<br />
Ian Donald’s interest in engineering <strong>and</strong> the principles of<br />
sound waves began when he was a schoolboy. Later, while he was<br />
in the British Royal Air Force, he continued <strong>and</strong> maintained his<br />
enthusiasm by observing the development of the anti-U-boat<br />
warfare efforts. He went to medical school after World War II <strong>and</strong><br />
began a career in obstetrics. By the early 1950’s, Donald had em-
barked on a study of how to apply sonar technology in medicine.<br />
He moved to Glasgow, Scotland, a major engineering center in<br />
Europe that presented a fertile environment for interdisciplinary<br />
research. There Donald collaborated with engineers <strong>and</strong> technicians<br />
in his medical ultrasound research. They used inanimate<br />
<strong>and</strong> tissue materials in many trials. Donald hoped to apply ultrasound<br />
technology to medicine, especially to gynecology <strong>and</strong> obstetrics,<br />
his specialty.<br />
His efforts led to new pathways <strong>and</strong> new discoveries. He was interested<br />
in adapting a certain type of ultrasound technology<br />
(used to probe metal structures <strong>and</strong> welds for cracks <strong>and</strong> flaws) to<br />
medicine. Kelvin Hughes, the engineering manufacturing company<br />
that produced the flaw detector apparatus, gave advice, expertise,<br />
<strong>and</strong> equipment to Donald <strong>and</strong> his associates, who were then able to<br />
devise water tanks with flexible latex bottoms. These were coated<br />
with a film of grease <strong>and</strong> placed into contact with the abdomens of<br />
pregnant women.<br />
The use of diagnostic radiography (such as X rays) became controversial<br />
when evidence emerged that it could cause leukemias<br />
<strong>and</strong> other injuries to the fetus. It was realized from the earliest days<br />
of radiology that radiation could cause tumors, particularly of the<br />
skin. The aftereffects of radiological studies were recognized much<br />
later <strong>and</strong> confirmed by studies of atomic bomb survivors <strong>and</strong> of patients<br />
receiving therapeutic irradiation. The use of radiation in obstetrics<br />
posed several major threats to the developing fetus, most<br />
notably the production of tumors later in life, genetic damage, <strong>and</strong><br />
developmental anomalies in the unborn fetus.<br />
In 1958, bolstered by earlier clinical reports <strong>and</strong> animal research<br />
findings, Alice Stewart <strong>and</strong> her colleagues presented a major case<br />
study of more than thirteen hundred children in England and Wales<br />
who had died of cancer before the age of ten between 1953 <strong>and</strong> 1958.<br />
There was a 91 percent increase in leukemias in children who were<br />
exposed to intrauterine radiation, as well as a higher percentage of<br />
fetal death. Although controversial, this report led to a reduction in<br />
the exposure of pregnant women to X rays, with subsequent reductions<br />
in fetal abnormalities <strong>and</strong> death.<br />
These reports came at a very opportune time for Donald: The development<br />
of ultrasonography would provide useful information<br />
about the unborn fetus without the adverse effects of radiation.<br />
Stewart’s findings <strong>and</strong> Donald’s experiments convinced others of<br />
the need for ultrasonography in obstetrics.<br />
Consequences<br />
Diagnostic ultrasound first gained clinical acceptance in obstetrics,<br />
<strong>and</strong> its major contributions have been in the assessment of fetal<br />
size <strong>and</strong> growth. In combination with amniocentesis (the study of<br />
fluid taken from the womb), ultrasound is an invaluable tool in operative<br />
procedures necessary to improve the outcomes of pregnancies.<br />
As can be expected, safety has been a concern, especially for a developing,<br />
vulnerable fetus that is exposed to high-frequency sound.<br />
Research has not been able to document any harmful effect of ultrasonography<br />
on the developing fetus. The procedure produces neither<br />
heat nor cold. It has not been shown to produce any toxic or destructive<br />
effect on the auditory or balancing organs of the<br />
developing fetus. Chromosomal abnormalities have not been reported<br />
in any of the studies conducted.
Ultrasonography, because it is safe <strong>and</strong> does not require surgery,<br />
has become the principal means for obtaining information about fetal<br />
structures. With this procedure, the contents of the uterus—as<br />
well as the internal structure of the placenta, fetus, <strong>and</strong> fetal organs—can<br />
be evaluated at any time during pregnancy. The use of<br />
ultrasonography remains a most valued tool in medicine, especially<br />
obstetrics, because of Donald’s work.<br />
See also Amniocentesis; Birth control pill; CAT scanner; Electrocardiogram;<br />
Electroencephalogram; Mammography; Nuclear magnetic<br />
resonance; Pap test; Sonar; Syphilis test; X-ray image intensifier.<br />
Further Reading<br />
Danforth, David N., <strong>and</strong> James R. Scott. Danforth’s Obstetrics <strong>and</strong> Gynecology.<br />
7th ed. Philadelphia: Lippincott, 1994.<br />
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History.<br />
San Mateo, Calif.: Bluewood Books, 1998.<br />
Rozycki, Grace S. Surgeon-Performed Ultrasound: Its Use in Clinical<br />
Practice. Philadelphia: W. B. Saunders, 1998.<br />
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound,<br />
<strong>and</strong> Other Medical Images Are Created, <strong>and</strong> How They Help<br />
Physicians Save Lives. Berkeley: University of California Press,<br />
1999.
UNIVAC computer<br />
The invention: The first commercially successful computer system.<br />
The people behind the invention:<br />
John Presper Eckert (1919-1995), an American electrical engineer<br />
John W. Mauchly (1907-1980), an American physicist<br />
John von Neumann (1903-1957), a Hungarian American<br />
mathematician<br />
Howard Aiken (1900-1973), an American physicist<br />
George Stibitz (1904-1995), a scientist at Bell Labs<br />
The Origins of Computing<br />
On March 31, 1951, the U.S. Census Bureau accepted delivery of<br />
the first Universal Automatic Computer (UNIVAC). This powerful<br />
electronic computer, far surpassing anything then available in technological<br />
features <strong>and</strong> capability, ushered in the first computer generation<br />
<strong>and</strong> pioneered the commercialization of what had previously<br />
been the domain of academia <strong>and</strong> the interest of the military. The fanfare<br />
that surrounded this historic occasion, however, masked the turbulence<br />
of the previous five years for the young upstart Eckert-<br />
Mauchly Computer Corporation (EMCC), which by this time was a<br />
wholly owned subsidiary of Remington Rand Corporation.<br />
John Presper Eckert <strong>and</strong> John W. Mauchly met in the summer of<br />
1941 at the University of Pennsylvania. A short time later, Mauchly,<br />
then a physics professor at Ursinus College, joined the Moore School<br />
of Engineering at the University of Pennsylvania <strong>and</strong> embarked on a<br />
crusade to convince others of the feasibility of creating electronic digital<br />
computers. Up to this time, the only computers available were<br />
called “differential analyzers,” which were used to solve complex<br />
mathematical equations known as “differential equations.” These<br />
slow machines were good only for solving a relatively narrow range<br />
of mathematical problems.<br />
Eckert and Mauchly landed a contract that eventually resulted in the development and construction of the world’s first operational general-purpose electronic computer, the Electronic Numerical Integrator and Calculator (ENIAC). This computer, used eventually by the Army for the calculation of ballistics tables, was deficient in many obvious areas, but this was caused by economic rather than engineering constraints. One major deficiency was the lack of automatic program control; the ENIAC did not have stored program memory. This was addressed in the development of the Electronic Discrete Variable Automatic Computer (EDVAC), the successor to the ENIAC.
Fighting the Establishment
A symbiotic relationship had developed between Eckert and Mauchly that worked to their advantage on technical matters. They worked well with each other, and this contributed to their success in spite of external obstacles. Both were interested in the commercial applications of computers and envisioned uses for these machines far beyond the narrow applications required by the military.

This interest brought them into conflict with the administration at the Moore School of Engineering as well as with the noted mathematician John von Neumann, who “joined” the ENIAC/EDVAC development team in 1945. Von Neumann made significant contributions and added credibility to the Moore School group, which often had to fight against the conservative scientific establishment characterized by Howard Aiken at Harvard University and George Stibitz at Bell Labs. Philosophical differences between von Neumann and Eckert and Mauchly, as well as patent disputes with the Moore School administration, eventually led Eckert and Mauchly to resign on March 31, 1946.
Eckert and Mauchly, along with some of their engineering colleagues at the University of Pennsylvania, formed the Electronic Control Company and proceeded to interest potential customers (including the Census Bureau) in an “EDVAC-type” machine. On May 24, 1947, the EDVAC-type machine became the UNIVAC. This new computer would overcome the shortcomings of the ENIAC and the EDVAC (which was eventually completed by the Moore School in 1951). It would be a stored-program computer and would allow input to and output from the computer via magnetic tape. The prior method of input/output used punched paper cards that were extremely slow compared to the speed at which data in the computer could be processed.
A series of poor business decisions and other unfortunate circumstances forced the newly renamed Eckert-Mauchly Computer Corporation to look for a buyer. They found one in Remington Rand in 1950. Remington Rand built tabulating equipment and was a competitor of International Business Machines Corporation (IBM). IBM was approached about buying EMCC, but the negotiations fell apart. EMCC became a division of Remington Rand and had access to the resources necessary to finish the UNIVAC.
Consequences

Eckert and Mauchly made a significant contribution to the advent of the computer age with the introduction of the UNIVAC I. The words “computer” and “UNIVAC” entered the popular vocabulary as synonyms. The efforts of these two visionaries were rewarded quickly as contracts started to pour in, taking IBM by surprise and propelling the inventors into the national spotlight.
This spotlight shone brightest, perhaps, on the eve of the national presidential election of 1952, which pitted war hero General Dwight D. Eisenhower against statesman Adlai Stevenson. At the suggestion of Remington Rand, CBS was invited to use UNIVAC to predict the outcome of the election. Millions of television viewers watched as CBS anchorman Walter Cronkite “asked” UNIVAC for its predictions. A program had been written to analyze the results of thousands of voting districts in the elections of 1944 and 1948. Based on only 7 percent of the votes coming in, UNIVAC had Eisenhower winning by a landslide, in contrast with all the prior human forecasts of a close election. Surprised by this answer and not willing to suffer the embarrassment of being wrong, the programmers quickly directed the program to provide an answer that was closer to the perceived situation. The outcome of the election, however, matched UNIVAC’s original answer. This prompted CBS commentator Edward R. Murrow’s famous quote, “The trouble with machines is people.”
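The projection idea described above (compare early returns in sample districts with those same districts’ 1944 and 1948 results, then scale the historical national outcome by the observed swing) can be sketched in a few lines. This is a toy illustration only; the function and every figure in it are hypothetical, not a reconstruction of the actual UNIVAC program.

```python
# Toy election projection (all figures hypothetical): shift the historical
# national share by the swing observed in a sample of early-reporting districts.
def project(early_share, historical_share, national_past_share):
    swing = early_share - historical_share  # change versus the same districts' past vote
    return national_past_share + swing

share_1948 = 0.452   # hypothetical past national share for one candidate's party
sample_hist = 0.46   # that party's past share in the sample districts
sample_now = 0.54    # its share in the early returns from those districts

print(round(project(sample_now, sample_hist, share_1948), 3))  # → 0.532
```

The real program weighted thousands of districts; the point here is only the shape of the calculation: a swing measured on a small sample, applied to a known baseline.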
The development of the UNIVAC I produced many technical innovations. Primary among these is the use of magnetic tape for input and output. All machines that preceded the UNIVAC (with one exception) used either paper tape or cards for input and cards for output. These methods were very slow and created a bottleneck of information. The great advantage of magnetic tape was the ability to store the equivalent of thousands of cards of data on one 30-centimeter reel of tape. Another advantage was its speed.
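The capacity claim can be made concrete with rough arithmetic. The densities below are illustrative assumptions, not UNIVAC’s actual tape specifications:

```python
# Rough capacity comparison (all figures assumed for illustration only).
chars_per_card = 80              # one standard 80-column punched card
chars_per_inch = 128             # assumed linear recording density on tape
reel_length_inches = 1200 * 12   # assumed 1,200-foot reel of tape

tape_capacity = chars_per_inch * reel_length_inches  # characters per reel
print(tape_capacity // chars_per_card)  # → 23040 cards' worth on one reel
```

Even with conservative assumptions, a single reel replaces tens of thousands of cards, which is why tape removed the input/output bottleneck.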
See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer.
Further Reading
Metropolis, Nicholas, Jack Howlett, and Gian Carlo Rota. A History of Computing in the Twentieth Century: A Collection of Essays. New York: Academic Press, 1980.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.
Vacuum cleaner
The invention: The first portable domestic vacuum cleaner successfully adapted to electricity, the original machine helped begin the electrification of domestic appliances in the early twentieth century.

The people behind the invention:
H. Cecil Booth (1871-1955), a British civil engineer
Melville R. Bissell (1843-1889), the inventor and marketer of the Bissell carpet sweeper in 1876
William Henry Hoover (1849-1932), an American industrialist
James Murray Spangler (1848-1915), an American inventor
From Brooms to Bissells

During most of the nineteenth century, the floors of homes were cleaned primarily with brooms. Carpets were periodically dragged out of the home by the boys and men of the family, stretched over rope lines or fences, and given a thorough beating to remove dust and dirt. In the second half of the century, carpet sweepers, perhaps inspired by the success of street-sweeping machines, began to appear. Although there were many models, nearly all were based upon the idea of a revolving brush within an outer casing that moved on rollers or wheels when pushed by a long handle.

Melville Bissell’s sweeper, patented in 1876, featured a knob for adjusting the brushes to the surface. The Bissell Carpet Company, also formed in 1876, became the most successful maker of carpet sweepers and dominated the market well into the twentieth century. Electric vacuum cleaners were not feasible until homes were wired for electricity and the small electric motor was invented. Thomas Edison’s success with an incandescent lighting system in the 1880’s and Nikola Tesla’s invention of a small electric motor that was used in 1889 to drive a Westinghouse Electric Corporation fan opened the way for the application of electricity to household technologies.
Cleaning with Electricity
In 1901, H. Cecil Booth, a British civil engineer, observed a London demonstration of an American carpet cleaner that blew compressed air at the fabric. Booth was convinced that the process should be reversed so that dirt would be sucked out of the carpet. In developing this idea, Booth invented the first successful suction vacuum sweeper.

Booth’s machines, which were powered by gasoline or electricity, worked without brushes. Dust was extracted by means of a suction action through flexible tubes with slot-shaped nozzles. Some machines were permanently installed in buildings that had wall sockets for the tubes in every room. Booth’s British Vacuum Cleaner Company also employed horse-drawn mobile units from which white-uniformed men unrolled long tubes that they passed into buildings through windows and doors. His company’s commercial triumph came when it cleaned Westminster Abbey for the coronation of Edward VII in 1902. Booth’s company also manufactured a 1904 domestic model that had a direct-current electric motor and a vacuum pump mounted on a wheeled carriage. Dust was sucked into the nozzle of a long tube and deposited into a metal container. Booth’s vacuum cleaner used electricity from overhead light sockets.
The portable electric vacuum cleaner was invented in 1907 in the United States by James Murray Spangler. When Spangler was a janitor in a department store in Canton, Ohio, his asthmatic condition was worsened by the dust he raised with a large Bissell carpet sweeper. Spangler’s modifications of the Bissell sweeper led to his own invention. On June 2, 1908, he received a patent for his Electric Suction Sweeper. The device consisted of a cylindrical brush in the front of the machine, a vertical-shaft electric motor above a fan in the main body, and a pillowcase attached to a broom handle behind the main body. The brush dislodged the dirt, which was sucked into the pillowcase by the movement of air caused by a fan powered by the electric motor. Although Spangler’s initial attempt to manufacture and sell his machines failed, Spangler had, luckily for him, sold one of his machines to a cousin, Susan Troxel Hoover, the wife of William Henry Hoover.
The Hoover family was involved in the production of leather goods, with an emphasis on horse saddles and harnesses. William Henry Hoover, president of the Hoover Company, recognizing that the adoption of the automobile was having a serious impact on the family business, was open to investigating another area of production. In addition, Mrs. Hoover liked the Spangler machine that she had been using for a couple of months, and she encouraged her husband to enter into an agreement with Spangler. An agreement made on August 5, 1908, allowed Spangler, as production manager, to manufacture his machine with a small work force in a section of Hoover’s plant. As sales of vacuum cleaners increased, what began as a sideline for the Hoover Company became the company’s main line of production.
Few American homes were wired for electricity when Spangler and Hoover joined forces; not until 1920 did 35 percent of American homes have electric power. In addition to this inauspicious fact, the first Spangler-Hoover machine, the Model O, carried the relatively high price of seventy-five dollars. Yet a full-page ad for the Model O in the December, 1908, issue of the Saturday Evening Post brought a deluge of requests. American women had heard of the excellent performance of commercial vacuum cleaners, and they hoped that the Hoover domestic model would do as well in the home.
Impact

As more and more homes in the United States and abroad became wired for electric lighting, a clean and accessible power source became available for household technologies. Whereas electric lighting was needed only in the evening, the electrification of household technologies made it necessary to use electricity during the day. The electrification of domestic technologies therefore matched the needs of the utility companies, which sought to maximize the use of their facilities. They became key promoters of electric appliances. In the first decades of the twentieth century, many household technologies became electrified. In addition to fans and vacuum cleaners, clothes-washing machines, irons, toasters, dishwashing machines, refrigerators, and kitchen ranges were being powered by electricity.
The application of electricity to household technologies came as large numbers of women entered the work force. During and after World War I, women found new employment opportunities in industrial manufacturing, department stores, and offices. The employment of women outside the home continued to increase throughout the twentieth century. Electrical appliances provided the means by which families could maintain the same standards of living in the home while both parents worked outside the home.
It is significant that Bissell was motivated by an allergy to dust and Spangler by an asthmatic condition. The employment of the carpet sweeper, and especially the electric vacuum cleaner, not only made house cleaning more efficient and less physical but also led to a healthier home environment. Whereas sweeping with a broom tended only to move dust to a different location, the carpet sweeper and the electric vacuum cleaner removed the dirt from the house.

H. Cecil Booth

Although Hubert Cecil Booth (1871-1955), an English civil engineer, designed battleship engines, factories, and bridges, he was not above working on homier problems when they intrigued him. That happened in 1900 when he watched the demonstration of a device that used forced air to blow the dirt out of railway cars. It worked poorly, and the reason, it seemed to Booth, was that blowing just stirred up the dirt. Sucking it into a receptacle, he thought, would work better. He tested his idea by placing a wet cloth over furniture upholstery and sucking through it. The grime that collected on the side of the cloth facing the upholstery proved him right.

He built his first vacuum cleaner—a term that he coined—in 1901. It cleaned houses, but only with considerable effort. Measuring 54 inches by 42 inches by 10 inches, it had to be carried in a horse-driven van to the cleaning site. A team of workmen from Booth’s Vacuum Cleaner Company then did the cleaning with hoses that reached inside the house through windows and doors. Moreover, the machine cost the equivalent of more than fifteen hundred dollars. It was beyond the finances and physical powers of home owners.

Booth marketed the first successful British one-person vacuum cleaner, the Trolley-Vac, in 1906. Weighing one hundred pounds, it was still difficult to wrestle into position, but it came with hoses and attachments that made possible the cleaning of different types of surfaces and hard-to-reach areas.
See also Disposable razor; Electric refrigerator; Microwave cooking; Robot (household); Washing machine.
Further Reading

Jailer-Chamberlain, Mildred. “This Is the Way We Cleaned Our Floors.” Antiques & Collecting Magazine 101, no. 4 (June, 1996).
Kirkpatrick, David D. “The Ultimate Victory of Vacuum Cleaners.” New York Times (April 14, 2001).
Shapiro, Laura. “Household Appliances.” Newsweek 130, no. 24A (Winter, 1997/1998).
Vacuum tube
The invention: A sealed glass tube from which air and gas have been removed to permit electrons to move more freely, the vacuum tube was the heart of electronic systems until it was displaced by transistors.

The people behind the invention:
Sir John Ambrose Fleming (1849-1945), an English physicist and professor of electrical engineering
Thomas Alva Edison (1847-1931), an American inventor
Lee de Forest (1873-1961), an American scientist and inventor
Arthur Wehnelt (1871-1944), a German inventor
A Solution in Search of a Problem
The vacuum tube is a sealed tube or container from which almost all the air has been pumped out, thus creating a near vacuum. When the tube is in operation, currents of electricity are made to travel through it. The most widely used vacuum tubes are cathode-ray tubes (television picture tubes).
The most important discovery leading to the invention of the vacuum tube was the Edison effect, observed by Thomas Alva Edison in 1884. While studying why the inner glass surface of light bulbs blackened, Edison inserted a metal plate near the filament of one of his light bulbs. He discovered that electricity would flow from the positive side of the filament to the plate, but not from the negative side to the plate. Edison offered no explanation for the effect.

Edison had, in fact, invented the first vacuum tube, which was later termed the diode; at that time there was no use for this device. Therefore, the discovery was not recognized for its true significance. A diode converts electricity that alternates in direction (alternating current) to electricity that flows in the same direction (direct current). Since Edison was more concerned with producing direct current in generators, and not household electric lamps, he essentially ignored this aspect of his discovery. Like many other inventions or discoveries that were ahead of their time—such as the laser—the Edison effect was for a number of years “a solution in search of a problem.”
The explanation for why this phenomenon occurred would not come until after the discovery of the electron in 1897 by Sir Joseph John Thomson, an English physicist. In retrospect, the Edison effect can be identified as one of the first observations of thermionic emission, the freeing of electrons by the application of heat. Electrons were attracted to the positive charges and would collect on the positively charged plate, thus providing current; but they were repelled from the plate when it was made negative, meaning that no current was produced. Since the diode permitted electrical current to flow in only one direction, it was compared to a valve that allows a liquid to flow in only one direction. Because the behavior of water has often been used as an analogy for electricity, “valves” became a popular name for vacuum tubes.
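The valve behavior can be illustrated with a short simulation: an ideal diode passes current only when its plate side is positive, turning an alternating input into a one-directional (half-wave rectified) output. This is a schematic sketch of the principle, not a physical model of a tube.

```python
import math

# Ideal "valve": conducts only when the plate is positive with respect
# to the filament; otherwise no current flows (illustrative model).
def ideal_diode(voltage):
    return voltage if voltage > 0 else 0.0

ac_input = [math.sin(2 * math.pi * t / 20) for t in range(40)]  # alternating current
dc_output = [ideal_diode(v) for v in ac_input]                  # one-way current

print(min(ac_input) < 0)               # True: the input swings negative
print(all(v >= 0 for v in dc_output))  # True: the output never does
```

The negative half-cycles are blocked, which is exactly what a radio detector needs in order to turn a high-frequency alternating signal into a direct current.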
Same Device, Different Application

Sir John Ambrose Fleming, acting as adviser to the Edison Electric Light Company, had studied the light bulb and the Edison effect starting in the early 1880’s, before the days of radio. Many years later, he came up with an application for the Edison effect as a radio detector when he was a consultant for the Marconi Wireless Telegraph Company. Detectors (devices that conduct electricity in one direction only, just as the diode does, but at higher frequencies) were required to make the high-frequency radio waves audible by converting them from alternating current to direct current. Fleming was able to detect radio waves quite effectively by using the Edison effect. Fleming used essentially the same device that Edison had created, but for a different purpose. Fleming applied for a patent on his detector on November 16, 1904.
In 1906, Lee de Forest refined Fleming’s invention by adding a zigzag piece of wire between the metal plate and the filament of the vacuum tube. The zigzag piece of wire was later replaced by a screen called a “grid.” The grid allowed a small voltage to control a larger voltage between the filament and plate. It was the first complete vacuum tube and the first device ever constructed capable of amplifying a signal—that is, taking a small-voltage signal and making it much larger. He named it the “audion” and was granted a U.S. patent in 1907.

John Ambrose Fleming

John Ambrose Fleming had a remarkably long and fruitful scientific career. He was born in Lancaster, England, in 1849, the eldest son of a minister. When he was a boy, the family moved to London, which remained his home for the rest of his life. An outstanding student, Fleming matriculated at University College, London, graduating in 1870 with honors. Scholarships took him to other colleges until his skill with electrical experiments earned him a job as a lab instructor at Cambridge University in 1880. In 1885, he returned to University College, London, as professor of electrical technology. He taught there for the following forty-one years, occasionally taking time off to serve as a consultant for such electronics industry leaders as Thomas Edison and Guglielmo Marconi.

Fleming’s passion was electricity and electronics, and he was sought after as a teacher with a knack for memorable explanations. For instance, he thought up the “right-hand” rule (also called Fleming’s rule) to illustrate the relation of electromagnetic forces during induction: When the thumb, index finger, and middle finger of a human hand are held at right angles to one another so that the thumb points in the direction of motion through a magnetic field—which is indicated by the index finger—then the middle finger shows the direction of induced current. During his extensive research, Fleming investigated transformers, high-voltage transmitters, electrical conduction, cryogenic electrical effects, radio, and television, and also invented the vacuum tube.

Advanced age hardly slowed him down. He wrote three books and more than one hundred articles and remarried at eighty-four. He also delivered public lectures—to audiences at the Royal Institution and the Royal Society among other venues—until he was ninety. He died in 1945, ninety-five years old, having helped give birth to telecommunications.
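Fleming’s right-hand rule, described above, can be checked numerically: the induced current in a conductor follows the direction of the cross product of the motion and field vectors. A minimal sketch:

```python
# Cross-product check of Fleming's right-hand rule (illustrative):
# thumb = motion v, index finger = field B, middle finger = induced current,
# which points along v x B.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

motion = (1, 0, 0)  # conductor moving along +x (thumb)
field = (0, 1, 0)   # magnetic field along +y (index finger)

print(cross(motion, field))  # → (0, 0, 1): current along +z (middle finger)
```

Reversing either the motion or the field flips the sign of the result, just as the rule predicts.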
In 1907-1908, the American Navy carried radios equipped with de Forest’s audion on its goodwill tour around the world. While useful as an amplifier of weak radio signals, the audion was not yet useful for the more powerful signals of the telephone. Other developments came quickly as the importance of the emerging fields of radio and telephony was realized.
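The grid’s amplifying action, in which a small voltage swing at the grid produces a much larger swing at the plate, can be sketched with a linearized model. The transconductance and load values below are arbitrary illustrative numbers, not measurements of de Forest’s audion:

```python
# Linearized triode sketch (all numbers hypothetical): plate current varies
# with grid voltage by a transconductance gm; across a plate load resistor,
# the small grid swing reappears as a much larger output voltage swing.
gm = 0.002       # assumed transconductance, amps per volt
r_load = 50_000  # assumed plate load resistance, ohms

def output_swing(grid_swing):
    # change in plate current (gm * Vg) times the load resistance
    return gm * grid_swing * r_load

print(output_swing(0.1))  # a 0.1 V grid signal becomes roughly a 10 V swing
```

With these assumed values the voltage gain is about 100, which conveys why the audion mattered: the grid steers a large plate circuit with almost no power of its own.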
Impact

With many industrial laboratories working on vacuum tubes, improvements came quickly. For example, tantalum and tungsten filaments quickly replaced the early carbon filaments. In 1904, Arthur Wehnelt, a German inventor, discovered that if metals were coated with certain materials such as metal oxides, they emitted far more electrons at a given temperature. These coatings enabled electrons to escape the surface more easily. Thermionic emission and, therefore, tube efficiencies were greatly improved by this method.
Another important improvement in the vacuum tube came with the work of the American chemist Irving Langmuir of the General Electric Research Laboratory, starting in 1909, and Harold D. Arnold of Bell Telephone Laboratories. They used new devices such as the mercury diffusion pump to achieve higher vacuums. Working independently, Langmuir and Arnold discovered that a very high vacuum used with higher voltages increased the power these tubes could handle from small fractions of a watt to hundreds of watts. The de Forest tube was now useful for the higher-power audio signals of the telephone. This resulted in the first transcontinental American speech transmission in 1914, followed by the first transatlantic communication in 1915.
The invention of the transistor in 1948 by the American physicists William Shockley, Walter H. Brattain, and John Bardeen ultimately led to the downfall of the tube. With the exception of the cathode-ray tube, transistors could accomplish the jobs of nearly all vacuum tubes much more efficiently. Also, the development of the integrated circuit allowed the creation of small, efficient, highly complex devices that would be impossible with radio tubes. By 1977, the major producers of the vacuum tube had stopped making it.
See also Color television; FM radio; Radar; Radio; Radio crystal sets; Television; Transistor; Transistor radio.
Further Reading
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Fleming, John Ambrose. Memories of a Scientific Life. London: Marshall, Morgan & Scott, 1934.
Hijiya, James A. Lee de Forest and the Fatherhood of Radio. Bethlehem, Pa.: Lehigh University Press, 1992.
Read, Oliver, and Walter L. Welch. From Tin Foil to Stereo: Evolution of the Phonograph. 2d ed. Indianapolis: H. W. Sams, 1976.
Vat dye
The invention: The culmination of centuries of efforts to mimic the brilliant colors displayed in nature in dyes that can be used in many products.

The people behind the invention:
Sir William Henry Perkin (1838-1907), an English student in Hofmann’s laboratory
René Bohn (1862-1922), a synthetic organic chemist
Karl Heumann (1850-1894), a German chemist who taught Bohn
Roland Scholl (1865-1945), a Swiss chemist who established the correct structure of Bohn’s dye
August Wilhelm von Hofmann (1818-1892), an organic chemist
Synthesizing the Compounds of Life

From prehistoric times until the mid-nineteenth century, all dyes were derived from natural sources, primarily plants. Among the most lasting of these dyes were the red and blue dyes derived from alizarin and indigo.

The process of making dyes took a great leap forward with the advent of modern organic chemistry in the early years of the nineteenth century. At the outset, this branch of chemistry, dealing with the compounds of the element carbon and associated with living matter, hardly existed, and synthesis of carbon compounds was not attempted. Considerable data had accumulated showing that organic, or living, matter was basically different from the compounds of the nonliving mineral world. It was widely believed that although one could work with various types of organic matter in physical ways and even analyze their composition, they could be produced only in a living organism.

Yet, in 1828, the German chemist Friedrich Wöhler found that it was possible to synthesize the organic compound urea from mineral compounds. As more chemists reported the successful preparation of compounds previously isolated only from plants or animals, the theory that organic compounds could be produced only in a living organism faded.
One field ripe for exploration was the chemistry of coal tar. Here, August Wilhelm von Hofmann was an active worker. He and his students made careful studies of this complex mixture. The high-quality stills they designed allowed for the isolation of pure samples of important compounds for further study.

Of greater importance was the collection of able students Hofmann attracted. Among them was Sir William Henry Perkin, who is regarded as the founder of the dyestuffs industry. In 1856, Perkin undertook the task of synthesizing quinine (a bitter crystalline alkaloid used in medicine) from a nitrogen-containing coal tar material called toluidine. Luck played a decisive role in the outcome of his experiment. The sticky compound Perkin obtained contained no quinine, so he decided to investigate the simpler related compound aniline. A small amount of the impurity toluidine in his aniline gave Perkin the first synthetic dye, Mauveine.
Searching for Structure
From this beginning, the great dye industries of Europe, particularly Germany, grew. The trial-and-error methods gave way to more systematic searches as the structural theory of organic chemistry was formulated.

As the twentieth century began, great progress had been made, and German firms dominated the industry. Badische Anilin- und Soda-Fabrik (BASF) was incorporated at Ludwigshafen in 1865 and undertook extensive explorations of both alizarin and indigo. A chemist, René Bohn, had made important discoveries in 1888, which helped the company recover lost ground in the alizarin field. In 1901, he undertook the synthesis of a dye he hoped would combine the desirable attributes of both alizarin and indigo.

As so often happens in science, nothing like the expected occurred. Bohn realized that the beautiful blue crystals that resulted from his synthesis represented a far more important product. Not only was this the first synthetic vat dye, Indanthrene, ever prepared, but also, by studying the reaction at higher temperature, a useful yellow dye, Flavanthrone, could be produced.

The term vat dye is used to describe a method of applying the dye, but it also serves to characterize the structure of the dye, because all
William Henry Perkin<br />
Born in England in 1838, William Henry Perkin saw a chemical<br />
experiment for the first time when he was a small boy. He<br />
found his calling there and then, much to the dismay of his father,<br />
who wanted him to be a builder and architect like himself.<br />
Perkin studied chemistry every chance he found as a teenager<br />
and was only seventeen when he won an appointment as<br />
the assistant to the German chemist August Wilhelm von Hofmann.<br />
A year later, while trying to synthesize quinine at Hofmann’s<br />
suggestion, Perkin discovered a deep purple dye—now<br />
known as aniline purple or Mauveine, but popularly called<br />
mauve. In 1857 he opened a small dyeworks by the Grand<br />
Union Canal in West London, hoping to make his fortune by<br />
manufacturing the dye.<br />
He succeeded brilliantly. His ambitions were helped along<br />
royally when Queen Victoria wore a silk gown dyed with Mauveine<br />
to the Royal Exhibition of 1862. In 1869, he perfected a<br />
method for producing another new dye, alizarin, which is red.<br />
A wealthy man, he sold his business in 1874 when he was just<br />
thirty-six years old and devoted himself to research, which included<br />
the synthesis of the first artificial perfume ingredient, coumarin,<br />
from a coal tar derivative.<br />
Perkin died in 1907, a year after receiving a knighthood, one<br />
of his many awards and honors for starting the artificial dye industry.<br />
His son William Henry Perkin, Jr. (1860-1927) also became<br />
a well-known researcher in organic chemistry.<br />
currently useful vat dyes share a common unit. One fundamental<br />
problem in dyeing relates to the extent to which the dye is water-soluble.<br />
A beautifully colored molecule that is easily soluble in water<br />
might seem attractive given the ease with which it binds with the fiber;<br />
however, this same solubility will lead to the dye’s rapid loss in<br />
daily use.<br />
Vat dyes are designed to solve this problem: they are molecules<br />
that can be made water-soluble, but only during the dyeing, or<br />
vatting, process. In the vat, the dye is chemically reduced to a soluble<br />
form that the fiber absorbs; exposure to air then oxidizes it back to its<br />
insoluble form, so the dye retains its color throughout the life of the cloth.<br />
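The vatting cycle can be sketched for indigo, the classic vat dye named earlier in this essay (an illustrative scheme; the choice of sodium dithionite in alkali as the reducing agent is a standard textbook assumption, not a detail given here):<br />

```latex
% Illustrative vatting cycle for indigo. Reduction gives the soluble
% "leuco" form; air oxidation on the fiber restores the insoluble pigment.
\[
\text{indigo (insoluble)}
\;\xrightarrow{\;\text{Na}_2\text{S}_2\text{O}_4,\ \text{NaOH}\;}\;
\text{leuco-indigo (soluble, absorbed by the fiber)}
\;\xrightarrow{\;\text{O}_2\ \text{(air)}\;}\;
\text{indigo (insoluble, fixed in the fiber)}
\]
```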
By 1907, Roland Scholl had shown unambiguously that the<br />
chemical structure proposed by Bohn for Indanthrene was correct,<br />
and a major new area of theoretical and practical importance was<br />
opened for organic chemists.<br />
Impact<br />
Bohn’s discovery led to the development of many new and useful<br />
dyes. The list of patents issued in his name fills several pages in<br />
Chemical Abstracts indexes.<br />
The true importance of this work is to be found in a consideration<br />
of all synthetic chemistry, which may perhaps be represented by<br />
this particular event. More than two hundred dyes related to Indanthrene<br />
are in commercial use. The colors represented by these substances<br />
are a rainbow making nature’s finest hues available to all.<br />
Through the creativity of the chemist, the dozen or so natural dyes<br />
have been supplemented by more than seven thousand synthetic<br />
products, many of them superior.<br />
Despite these desirable outcomes, there is doubt whether society<br />
truly benefits from the development of new dyes. The doubt stems<br />
from the need to husband limited natural resources: with so many<br />
urgent problems to be solved, scientists question whether the pursuit<br />
of greater luxury is justified. If the field of dye synthesis reveals<br />
a single theme, however, it must be to expect the unexpected.<br />
Time after time, the search for one goal has led to something quite<br />
different—<strong>and</strong> useful.<br />
See also Buna rubber; Color film; Neoprene.<br />
Further Reading<br />
Clark, Robin J. H., et al. “Indigo, Woad, and Tyrian Purple: Important<br />
Vat Dyes from Antiquity to the Present.” Endeavour 17, no. 4<br />
(December, 1993).<br />
Farber, Eduard. The Evolution of Chemistry: A History of Its Ideas,<br />
Methods, and Materials. 2d ed. New York: Ronald Press, 1969.<br />
Partington, J. R. A History of Chemistry. Staten Island, N.Y.: Martino,<br />
1996.<br />
Schatz, Paul F. “Anniversaries: 2001.” Journal of Chemical Education<br />
78, no. 1 (January, 2001).
846<br />
Velcro<br />
The invention: A material comprising millions of tiny hooks and<br />
loops that work together to create powerful and easy-to-use fasteners<br />
for a wide range of applications.<br />
The person behind the invention:<br />
Georges de Mestral (1904-1990), a Swiss engineer and inventor<br />
From Cockleburs to Fasteners<br />
Since prehistoric times, people have walked through weedy fields<br />
and arrived at home with cockleburs all over their clothing. In 1948, a<br />
Swiss engineer and inventor, Georges de Mestral, found his clothing<br />
full of cockleburs after walking in the Swiss Alps near Geneva. Wondering<br />
why cockleburs stuck to clothing, he began to examine them<br />
under a microscope. De Mestral’s initial examination showed that<br />
each of the thous<strong>and</strong>s of fibrous ends of the cockleburs was tipped<br />
with a tiny hook; it was the hooks that made the cockleburs stick to<br />
fabric. This observation, combined with much subsequent work, led<br />
de Mestral to invent velcro, which was patented in 1957 in the form of<br />
two strips of nylon material. One of the strips contained millions of<br />
tiny hooks, while the other contained a similar number of tiny loops.<br />
When the two strips were pushed together, the hooks were inserted<br />
into the loops, joining the two strips of nylon very firmly. This design<br />
makes velcro extremely useful as a material for fasteners that is used<br />
in applications ranging from sneaker fasteners to fasteners used to<br />
join heart valves during surgery.<br />
Making Velcro Practical<br />
Velcro is not the only invention credited to de Mestral, who also<br />
invented such items as a toy airplane and an asparagus peeler, but it<br />
was his greatest achievement. It is said that his idea for the material<br />
was partly the result of a problem his wife had with a jammed dress<br />
zipper just before an important social engagement. De Mestral’s<br />
idea was to design a sort of locking tape that used the hook-and-loop<br />
principle that he had observed under the microscope. Such a
tape, he believed, would never jam. He also believed that the tape<br />
would do away with such annoyances as buttons that popped open<br />
unexpectedly <strong>and</strong> knots in shoelaces that refused to be untied.<br />
The design of the material envisioned by de Mestral took seven<br />
years of painstaking effort. When it was finished, de Mestral named<br />
it “velcro” (a contraction of the French words velours, “velvet,” and<br />
crochet, “hook”), patented it, and opened a factory to manufacture<br />
it. Velcro’s design required that de Mestral identify the optimal<br />
number of hooks <strong>and</strong> loops to be used. He eventually found that using<br />
approximately three hundred per square inch worked best. In<br />
addition, his studies showed that nylon was an excellent material<br />
for his purposes, although it had to be stiffened somewhat to work<br />
well. Much additional experimentation showed that the most effective<br />
way of producing the necessary stiffening was to subject the<br />
velcro to infrared light after manufacturing it.<br />
Other researchers have demonstrated that velcrolike materials<br />
need not be made of nylon. For example, a new micromechanical<br />
velcrolike material (microvelcro) that medical researchers believe<br />
will soon be used to hold together blood vessels after surgery is<br />
made of minute silicon loops <strong>and</strong> hooks. This material is thought to<br />
be superior to other materials for such applications because it will<br />
not be redissolved prematurely by the body. Other uses for microvelcro<br />
may be to hold together tiny electronic components in miniaturized<br />
computers without the use of glue or other adhesives. A major<br />
advantage of the use of microvelcro in such situations is that it is<br />
resistant to changes of temperature as well as to most chemicals that<br />
destroy glue <strong>and</strong> other adhesives.<br />
Impact<br />
In 1957, when velcro was patented, there were four main ways to<br />
hold things together. These involved the use of buttons, laces, snaps,<br />
and zippers (which had been invented by Chicagoan Whitcomb L.<br />
Judson in 1892). All these devices had drawbacks: zippers can jam,<br />
buttons can come open at embarrassing times, and shoelaces can<br />
form knots that are difficult to unfasten. Almost immediately after<br />
velcro was introduced, its use became widespread; velcro fasteners<br />
can be found on or in clothing, shoes, watchbands, wallets,<br />
backpacks, bookbags, motor vehicles, space suits, blood-pressure cuffs,<br />
and in many other places. There is even a “wall jumping” game incorporating<br />
velcro in which a wall is covered with a well-supported<br />
piece of velcro. People who want to play put on jackets made of<br />
velcro and jump as high as they can. Wherever they land on the wall,<br />
the velcro will join together, making them stick.<br />
Wall jumping, silly though it may be, demonstrates the tremendous<br />
holding power of velcro; a velcro jacket can keep a two-hundred-pound<br />
person suspended from a wall. This great strength is<br />
used in a more serious way in the design of the items used to anchor<br />
astronauts to space shuttles and to buckle on parachutes. In addition,<br />
velcro is washable, comes in many colors, and will not jam. No<br />
doubt many more uses for this innovative product will be found.<br />
See also Artificial heart.<br />
Georges de Mestral<br />
Georges de Mestral got his idea for Velcro in part during a<br />
hunting trip on his estates and in part before an important formal<br />
social function. These contexts are evidence of the high<br />
standing in Swiss society held by de Mestral, an engineer and<br />
manufacturer. In fact, de Mestral, who was born in 1904, came<br />
from an illustrious line of noble landowners. Their prize possession<br />
was one of Switzerl<strong>and</strong>’s famous residences, the castle of<br />
Saint Saphorin on Morges.<br />
Built on the site of yet older fortifications, the castle was<br />
completed by François-Louis de Pesme in 1710. An enemy of<br />
King Louis XIV, de Pesme served in the military forces of Austria,<br />
Holland, and England, rising to the rank of lieutenant general,<br />
but he is best known for driving off a Turkish invasion fleet<br />
on the Danube in 1695. Other forebears include the diplomat<br />
Armand-François Louis de Mestral (1738-1805), and the inventor’s<br />
own father, Albert-Georges-Constantin de Mestral (1878-1966), an<br />
agricultural engineer.<br />
The castle passed to the father’s four sons and eventually<br />
into the care of the inventor. It in turn was inherited by Georges<br />
de Mestral’s sons Henri and François when he died in 1990 in<br />
Genolier, Switzerl<strong>and</strong>.
Further Reading<br />
“George De Mestral: Inventor of Velcro Fastener.” Los Angeles Times<br />
(February 13, 1990).<br />
LaFavre Yorks, Cindy. “Hidden Helpers: Velcro Fasteners, Pull-On<br />
Loops and Other Extras Make Dressing Easier for People with<br />
Disabilities.” Los Angeles Times (November 1, 1991).<br />
Roberts, Royston M., and Jeanie Roberts. Lucky Science: Accidental<br />
Discoveries from Gravity to Velcro, with Experiments. New York:<br />
John Wiley, 1994.<br />
Stone, Judith. “Stuck on Velcro!” Reader’s Digest (September, 1988).<br />
“Velcro-wrapped Armor Saves Lives in Bosnia.” Design News 52, no.<br />
7 (April 7, 1997).
850<br />
Vending machine slug rejector<br />
The invention: A device that separates real coins from counterfeits,<br />
making it possible for coin-operated vending machines to become<br />
an important marketing tool for many products.<br />
The people behind the invention:<br />
Thomas Adams, the founder of Adams Gum Company<br />
Frederick C. Lynde, an Englishman awarded the first American<br />
patent on a vending machine<br />
Nathaniel Leverone (1884-1969), a founder of the Automatic<br />
Canteen Company of America<br />
Louis E. Leverone (1880-1957), a founder, with his brother, of the<br />
Automatic Canteen Company of America<br />
The Growth of Vending Machines<br />
One of the most imposing phenomena to occur in the United<br />
States economy following World War II was the growth of vending<br />
machines. Following the 1930’s invention and perfection of the slug<br />
rejector, vending machines became commonplace as a means of<br />
marketing gum and candy. By the 1960’s, almost every building had<br />
soft drink and coffee machines. Street corners featured machines<br />
that dispensed newspapers, and post offices even used vending machines<br />
to sell stamps. Occasionally someone fishing in the backwoods<br />
could find a vending machine next to a favorite fishing hole<br />
that would dispense a can of fishing worms upon deposit of the correct<br />
amount of money. The primary advantage offered by vending<br />
machines is their convenience. Unlike people, machines can provide<br />
goods and services around the clock, with no charge for the “labor”<br />
of standing duty.<br />
The decade of the 1950’s brought not only an increase in the number<br />
of vending machines but also an increase in the types of goods<br />
that were marketed through them. Before World War II, the major<br />
products had been cigarettes, candy, gum, and soft drinks. The<br />
1950’s brought far more products into the vending machine market.
The first recognized vending machine in history was invented in<br />
the first century c.e. by the mathematician Hero of Alexandria. This<br />
first machine was a coin-activated device that dispensed sacrificial<br />
water in an Egyptian temple. It was not until the year 1615 that another<br />
vending machine was recorded. In that year, snuff and tobacco<br />
vending boxes began appearing in English pubs and taverns. These<br />
tobacco boxes were less sophisticated machines than was Hero’s,<br />
since they left much to the honesty of the customer. Insertion of a<br />
coin opened the box; once it was open, the customer could take out<br />
as much tobacco as desired. One of the first United States patents on<br />
a machine was issued in 1886 to Frederick C. Lynde. That machine<br />
was used to vend postcards.<br />
If any one person can be considered the father of vending machines<br />
in the United States, it would probably be Thomas Adams,<br />
the founder of Adams Gum Company. Adams began the first successful<br />
vending operation in America in 1888 when he placed gum<br />
machines on train platforms in New York City.<br />
Other early vending machines included scales (which vended a<br />
service rather than a product), photograph machines, strength testers,<br />
beer machines, and hot water vendors (to supply poor people<br />
who had no other source of hot water). These were followed, around<br />
1900, by complete automatic restaurants in Germany, cigar vending<br />
machines in Chicago, perfume machines in Paris, and an automatic<br />
divorce machine in Utah.<br />
Also around 1900 came the introduction of coin-operated gambling<br />
machines. These “slot machines” are differentiated from normal<br />
vending machines. The vending machine industry does not<br />
consider gambling machines to be a part of the vending industry<br />
since they do not vend merchandise. The primary importance of the<br />
gambling machines was that they induced the industry to do research<br />
into slug rejection. Early machines allowed coins to be retrieved<br />
by the use of strings tied to them and accepted counterfeit<br />
lead coins, called slugs. It was not until the 1930’s that the slug<br />
rejector was perfected. Invention of the slug rejection device gave<br />
rise to the tremendous growth in the vending machine industry in<br />
the 1930’s by giving vendors more confidence that they would be<br />
paid for their products or services.<br />
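The principle is that a genuine coin matches a known profile of size, weight, and metal, while a slug or a foreign blank fails at least one test. A minimal software sketch of that logic follows; the coin profile and tolerances are invented for illustration (real slug rejectors of the 1930’s used mechanical and magnetic tests, not code):<br />

```python
# Hypothetical sketch of slug-rejector logic: accept a coin only if every
# measured property matches a genuine coin within a tight tolerance.
# The profile and tolerances below are illustrative, not from any machine.

GENUINE = {"weight_g": 5.67, "diameter_mm": 24.26}
TOLERANCE = {"weight_g": 0.15, "diameter_mm": 0.20}

def accepts(coin):
    """Return True only when all measurements fall within tolerance."""
    return all(
        abs(coin[prop] - GENUINE[prop]) <= tol
        for prop, tol in TOLERANCE.items()
    )

# A lead slug cast to the right diameter is far too heavy and is rejected.
real_coin = {"weight_g": 5.70, "diameter_mm": 24.20}
lead_slug = {"weight_g": 7.90, "diameter_mm": 24.30}
```

Measurement alone does not defeat the string trick mentioned above; practical mechanisms also had to make the coin’s path one-way.<br />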
Soft drink machines got their start just prior to the beginning of
the twentieth century. By 1906, improved models of these machines<br />
could dispense up to ten different flavors of soda pop. The<br />
drinks were dispensed into a drinking glass or tin cup that was<br />
placed near the machine (there was usually only one glass or cup<br />
to a machine, since paper cups had not been invented). Public<br />
health officials became concerned that everyone was drinking<br />
from the same cup. At that point, someone came up with the idea<br />
of setting a bucket of water next to the machine so that each customer<br />
could rinse off the cup before drinking from it. The year 1909<br />
witnessed one of the monumental inventions in the history of<br />
vending machines, the pay toilet.<br />
Impact<br />
The 1930’s witnessed improved vending machines. Slug rejectors<br />
were the most important introduction. In addition, change-making<br />
machines were instituted, and a few machines would even say<br />
“thank you” after a coin was deposited. These improved machines<br />
led many marketers to experiment with automatic vending. Coin-operated<br />
washing machines were one of the new applications of the<br />
1930’s. During the Depression, many appliance dealers attached<br />
coin metering devices to washing machines, allowing the user to accumulate<br />
money to make the monthly payments by using the appliance.<br />
This was a form of forced saving. It was not long before some<br />
enterprising appliance dealer got the idea of placing washing machines<br />
in apartment house basements. This idea was soon followed<br />
by stores full of coin-operated laundry machines, giving rise to a<br />
new kind of automatic vending business.<br />
Following World War II, there was a surge of innovation in the<br />
vending machine industry. Much of that surge resulted from the<br />
discovery of vending machines by industrial management. Prior to<br />
the war, the managements of most factories had merely tolerated<br />
vending machines. Following the war, managers discovered that<br />
the machines could be an inexpensive means of keeping workers<br />
happy. They became aware that worker productivity could be increased<br />
by access to candy bars or soft drinks. As a result, the demand<br />
for machines exceeded the supply offered by the industry<br />
during the late 1940’s.
Vending machines have had a surprising effect on the total retail<br />
sales of the U.S. economy. In 1946, sales through vending machines<br />
totaled $600 million. By 1960, that figure had increased to $2.5 billion;<br />
by 1970, it exceeded $6 billion. The decade of the 1950’s began<br />
with individual machines that would dispense cigarettes, candy,<br />
gum, coffee, and soft drinks. By the end of that decade, it was much<br />
more common to see vending machines in groups. The combination<br />
of machines in a group could, in many cases, meet the requirements<br />
to assemble a complete meal.<br />
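The growth implied by those totals can be made concrete with a short calculation (an illustrative aside using only the figures quoted above): sales grew at roughly 11 percent per year from 1946 to 1960 and about 9 percent per year from 1960 to 1970.<br />

```python
# Compound annual growth rate (CAGR) of U.S. vending machine sales,
# using the totals quoted in the text: $600 million (1946),
# $2.5 billion (1960), and $6 billion (1970).

def cagr(start, end, years):
    """Average annual growth rate that turns `start` into `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

growth_1946_1960 = cagr(600, 2500, 1960 - 1946)   # ~0.107, about 10.7% a year
growth_1960_1970 = cagr(2500, 6000, 1970 - 1960)  # ~0.091, about 9.1% a year
```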
Convenience is the key to the popularity of vending machines.<br />
Their ability to sell around the clock has probably been the major<br />
impetus to vending machine sales as opposed to more conventional<br />
marketing. Lower labor costs have also played a role in their popularity,<br />
and their location in areas of dense pedestrian traffic prompts<br />
impulse purchases.<br />
Despite the advances made by the vending machine industry<br />
during the 1950’s, there was still one major limitation to growth, to<br />
be solved during the early 1960’s. That problem was that vending<br />
machines were effectively limited to low-priced items, since the machines<br />
would accept nothing but coins. The inconvenience of inserting<br />
many coins kept machine operators from trying to market expensive<br />
items, as they expected consumer reluctance. The early<br />
1960’s witnessed the invention of vending machines that would accept<br />
and make change for $1, $5, and $10 bills. This invention paved<br />
the way for expansion into lines of grocery items and tickets.<br />
The first use of vending machines to issue tickets was at an Illinois<br />
race track, where pari-mutuel tickets were dispensed upon deposit of<br />
$2. Penn Central Railroad was one of the first transportation companies<br />
to sell tickets by means of vending machines. These machines,<br />
used in high-traffic areas on the East Coast, permitted passengers to<br />
deal directly with a computer when buying reserved-seat train tickets.<br />
The machines would accept $1 bills and $5 bills as well as coins.<br />
Limitations to Vending Machines<br />
There are limitations to the use of vending machines. Primary<br />
among these are mechanical failure and vandalism of machines.<br />
Another limitation often mentioned is that not every product can be<br />
sold by machine. There are several factors that make some goods<br />
more vendable than others. National advertising and wide consumer<br />
acceptance help. A product must have a high turnover in order<br />
to justify the cost of a machine and the cost of servicing it. A third<br />
factor in measuring the potential success of an item is where it will<br />
be consumed or used. The most successful products are used within<br />
a short distance of the machine; the convenience of a machine’s<br />
location must make consumers willing to pay the usually higher<br />
prices of machine-bought products.<br />
The automatic vending of merchandise plays the largest role in<br />
the vending machine industry, but the vending of services also<br />
plays a role. The largest percentage of service vending comes from<br />
coin laundries. Other types of services are vended by weighing machines,<br />
parcel lockers, and pay toilets. By depositing a coin, a person<br />
can even get shoes shined. Some motel beds offer a “massage.” Even<br />
the lowly parking meter is an example of a vending machine that<br />
dispenses services. Coin-operated photocopy machines account for<br />
a large portion of service vending.<br />
A later advance in the vending machine industry is the use of<br />
credit. The cashless society began to make strides with vending machines<br />
as well as conventional vendors. As of the early 1990’s, credit<br />
cards could be used to operate only a few types of vending machines,<br />
primarily those that dispense transportation tickets. Vending machines<br />
operated by banks dispense money upon deposit of a credit<br />
card. Credit-card gasoline pumps reduced labor requirements at gasoline<br />
stations, pushing the concept of self-service a step further. As<br />
credit card transactions become more common in general and as the<br />
cost of making them falls, use of credit cards for vending machines<br />
will increase.<br />
Thousands of items have been marketed through vending machines,<br />
and firms must continue to evaluate the use of automatic retailing<br />
as a marketing channel. Many products are not conducive to<br />
automatic vending, but before dismissing that option for a particular<br />
product, a marketer should consider the range of products sold<br />
through vending machines. The producers of Band-Aid flexible plastic<br />
bandages saw the possibilities in the vending field. The only product<br />
modification necessary was to put Band-Aids in a package the<br />
size of a candy bar, able to be sold from renovated candy machines.<br />
The next problem was to determine areas where there would be a<br />
high turnover of Band-Aids. Bowling alleys were an obvious answer,<br />
since many bowlers suffered from abrasions on their fingers.<br />
The United States is not alone in the development of vending machines;<br />
in fact, it is not as advanced as some nations of the world. In<br />
Japan, machines operated by credit cards have been used widely<br />
since the mid-1960’s, and the range of products offered has been<br />
larger than in the United States. Western Europe is probably the<br />
most advanced area of the world in terms of vending machine technology.<br />
Germany of the early 1990’s probably had the largest selection<br />
of vending machines of any European country. Many gasoline<br />
stations in Germany featured beer dispensing machines. In rural areas<br />
of the country, vending machines hung from utility poles. These<br />
rural machines provided candy and gum, among other products, to<br />
farmers who did not often travel into town.<br />
Most vending machine business in Europe was done not in individual<br />
machines but in automated vending shops. The machines offered<br />
a creative solution to obstacles created by regulations and<br />
laws. Some countries had laws stating that conventional retail stores<br />
could not be open at night or on Sundays. To increase sales and satisfy<br />
consumer needs, stores built vending operations that could be<br />
used by customers during off hours. The machines, or combinations<br />
of them, often stocked a tremendous variety of items. At one German<br />
location, consumers could choose among nearly a thousand<br />
grocery items.<br />
The Future<br />
The future will see a broadening of product lines offered in vending<br />
machines as marketers come to recognize the opportunities that<br />
exist in automatic retailing. In the United States, vending machines<br />
of the early 1990’s primarily dispensed products for immediate consumption.<br />
If labor costs increase, it will become economically feasible<br />
to sell more items from vending machines. Grocery items and<br />
tickets offered the most potential for expansion.<br />
Vending machines offer convenience to the consumer. Virtually<br />
any company that produces for the retail market must consider<br />
vending machines as a marketing channel. Machines offer an alter-
native to conventional stores that cannot be ignored as the range of<br />
products offered through machines increases.<br />
Vending machines appear to be a permanent fixture and have<br />
only scratched the surface of the market. Although machines have a<br />
long history, their popularization came from innovations of the<br />
1930’s, particularly the slug rejector. Marketing managers came to<br />
recognize that vending machine sales are more than a sideline. Increasingly,<br />
firms established separate departments to handle sales<br />
through vending machines. Successful companies make the best<br />
use of all channels of distribution, and vending machines have become<br />
an important marketing channel.<br />
See also Geiger counter; Sonar; Radio interferometer.<br />
Further Reading<br />
Ho, Rodney. “Vending Machines Make Change—Now They Sell<br />
Movie Soundtracks, Underwear—Even Art.” Wall Street Journal<br />
(July 7, 1999).<br />
Rosen, Cheryl. “Vending Machines Get a High-Tech Makeover.”<br />
Informationweek 822 (January 29, 2001).<br />
Ryan, James. “In Vending Machine, Brains That Tell Good Money<br />
from Bad.” New York Times (April 8, 1999).<br />
Tagliabue, John. “Vending Machines Face an Upheaval of Change.”<br />
New York Times (February 16, 1999).
Videocassette recorder<br />
The invention: A device for recording and playing back movies<br />
and television programs, the videocassette recorder (VCR) revolutionized<br />
the home entertainment industry in the late 1970’s.<br />
The company behind the invention:<br />
Philips Corporation, a Dutch company<br />
Videotape Recording<br />
Although television sets first came on the market before World<br />
War II, video recording on magnetic tape was not developed until<br />
the 1950’s. Ampex marketed the first practical videotape recorder<br />
in 1956. Unlike television, which manufacturers aimed at retail<br />
consumers from its inception, videotape recording was never expected<br />
to be attractive to the individual consumer. The first videotape<br />
recorders were meant for use within the television industry.<br />
Developed not long after the invention of magnetic tape recording<br />
of audio signals, the early videotape recorders were large machines<br />
that employed an open reel-to-reel tape drive similar to that<br />
of a conventional audiotape recorder. Recording <strong>and</strong> playback heads<br />
scanned the tape longitudinally (lengthwise). Because video signals<br />
occupy a much wider bandwidth (“bandwidth” is the range of<br />
frequencies a signal spans) than audio signals do,<br />
this scanning technique meant that the amount of recording time<br />
available on one reel of tape was extremely limited. In addition,<br />
open reels were large <strong>and</strong> awkward, <strong>and</strong> the magnetic tape itself<br />
was quite expensive.<br />
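The tape-consumption problem above follows from a simple relationship: the head-to-tape speed needed scales with the highest frequency to be recorded. A small Python sketch with illustrative numbers (the 5-micrometer minimum wavelength is an assumed figure, not from the text):

```python
# Rough illustration of why video needs a far higher head-to-tape
# speed than audio: the shortest wavelength a head gap can resolve
# is roughly fixed, so the required speed scales with the highest
# frequency to be recorded. Numbers are hypothetical but typical.

SHORTEST_WAVELENGTH_M = 5e-6   # assumed head/tape limit, ~5 micrometers

def required_tape_speed(max_freq_hz: float) -> float:
    """Head-to-tape speed (m/s) needed to record max_freq_hz."""
    return max_freq_hz * SHORTEST_WAVELENGTH_M

audio = required_tape_speed(20_000)      # audio tops out near 20 kHz
video = required_tape_speed(4_200_000)   # NTSC video is roughly 4.2 MHz

print(f"audio: {audio:.2f} m/s")   # 0.10 m/s -- easy with a moving tape
print(f"video: {video:.2f} m/s")   # 21.00 m/s -- hence very fast tape or a spinning drum
```

Under these assumptions, longitudinal scanning would need roughly two hundred times the tape speed of an audio recorder, which is why one reel held so little recording time.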
Still, within the limited application area of commercial television,<br />
videotape recording had its uses. It made it possible to play<br />
back recorded material immediately rather than having to wait for<br />
film to be processed in a laboratory. As television became more popular<br />
<strong>and</strong> production schedules became more hectic, with more material<br />
being produced in shorter <strong>and</strong> shorter periods of time, videotape<br />
solved some significant problems.
Helical Scanning Breakthrough<br />
Engineers in the television industry continued to search for innovations<br />
<strong>and</strong> improvements in videotape recording following<br />
Ampex’s marketing of the first practical videotape recorder in the<br />
1950’s. It took more than ten years, however, for the next major<br />
breakthrough to occur. The innovation that proved to be the key to<br />
reducing the size <strong>and</strong> awkwardness of video recording equipment<br />
came in 1967 with the invention by the Philips Corporation of helical<br />
scanning.<br />
All videocassette recorders eventually employed multiple-head<br />
helical scanning systems. In a helical scanning system, the record<br />
<strong>and</strong> playback heads are attached to a spinning drum or head that rotates<br />
at exactly 1,800 revolutions per minute, or 30 revolutions per<br />
second. This is the number of video frames per second used in the<br />
NTSC-TV broadcasts in the United States <strong>and</strong> Canada. The heads<br />
are mounted in pairs 180 degrees apart on the drum. Two fields on<br />
the tape are scanned for each revolution of the drum. Perhaps the<br />
easiest way to underst<strong>and</strong> the helical scanning system is to visualize<br />
the spiral path followed by the stripes on a barber’s pole.<br />
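The drum timing described above can be checked with simple arithmetic. The following Python sketch uses only the figures given in the text plus the standard NTSC assumption of two interlaced fields per frame:

```python
# Back-of-the-envelope check of the helical-scan timing described
# above: a two-head drum at 1,800 rpm scans two fields per revolution.

DRUM_RPM = 1_800   # drum speed given in the text
HEADS = 2          # heads mounted 180 degrees apart

revs_per_second = DRUM_RPM / 60               # 30 revolutions per second
fields_per_second = revs_per_second * HEADS   # one field per head pass
frames_per_second = fields_per_second / 2     # two interlaced fields = one frame

print(revs_per_second)    # 30.0
print(fields_per_second)  # 60.0 -- the NTSC field rate
print(frames_per_second)  # 30.0 -- the NTSC frame rate
```

The arithmetic confirms the text: locking the drum to 1,800 rpm makes the head passes line up exactly with the 60 fields (30 frames) per second of an NTSC broadcast.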
Helical scanning deviated sharply from designs based on audio<br />
recording systems. In an audiotape recorder, the tape passes over<br />
stationary playback <strong>and</strong> record heads; in a videocassette recorder,<br />
both the heads <strong>and</strong> the tape move. Helical scanning is, however, one<br />
of the few things that competing models <strong>and</strong> formats of videocassette<br />
recorders have in common. Different models employ different<br />
tape delivery systems <strong>and</strong>, in the case of competing formats such as<br />
Beta <strong>and</strong> VHS, there may be differences in the composition of the<br />
video signal to be recorded. Beta uses a 688-kilohertz (kHz) frequency,<br />
while VHS employs a frequency of 629 kHz. This difference<br />
in frequency is what allows Beta videocassette recorders (VCRs) to<br />
provide more lines of resolution <strong>and</strong> thus a superior picture quality;<br />
VHS provides 240 lines of resolution, while Beta has 250. (For this<br />
reason, it is perhaps unfortunate that the VHS format eventually<br />
dominated the market.)<br />
In addition to helical scanning, Philips introduced another innovation:<br />
the videocassette. Existing videotape recorders employed a<br />
reel-to-reel tape drive, as do videocassettes, but videocassettes en-
close the tape reels in a protective case. The case prevents the tape<br />
from being damaged in h<strong>and</strong>ling.<br />
The first VCRs were large <strong>and</strong> awkward compared to later models.<br />
Industry analysts still thought that the commercial television<br />
<strong>and</strong> film industries would be the primary markets for VCRs. The<br />
first videocassettes employed wide (¾-inch or 1-inch) videotapes,<br />
<strong>and</strong> the machines themselves were cumbersome. Although Philips<br />
introduced a VCR in 1970, it took until 1972 before the machines actually<br />
became available for purchase, <strong>and</strong> it would be another ten<br />
years before VCRs became common appliances in homes.<br />
Consequences<br />
Following the introduction of the VCR in 1970, the home entertainment<br />
industry changed radically. Although the industry did not<br />
originally anticipate that the VCR would have great commercial potential<br />
as a home entertainment device, it quickly became obvious<br />
that it did. By the late 1970’s, the size of the cassette had been reduced<br />
<strong>and</strong> the length of recording time available per cassette had<br />
been increased from one hour to six. VCRs became so widespread<br />
that advertisers on television became concerned with a phenomenon<br />
known as “timeshifting,” which refers to viewers setting the<br />
VCR to record programs for later viewing. Jokes about the complexity<br />
of programming VCRs appeared in the popular culture, <strong>and</strong> an<br />
inability to cope with the VCR came to be seen as evidence of technological<br />
illiteracy.<br />
Consumer dem<strong>and</strong> for VCRs was so great that, by the late 1980’s,<br />
compact portable video cameras became widely available. The same<br />
technology—helical scanning with multiple heads—was successfully<br />
miniaturized, <strong>and</strong> “camcorders” were developed that were not much<br />
larger than a paperback book. By the early 1990’s, “reality television”—that<br />
is, television shows based on actual events—began relying<br />
on video recordings supplied by viewers rather than material<br />
produced by professionals. The video recorder had completed a circle:<br />
It began as a tool intended for use in the television studio, <strong>and</strong> it<br />
returned there four decades later. Along the way, it had an effect no<br />
one could have predicted; passive viewers in the audience had evolved<br />
into active participants in the production process.
See also Cassette recording; Color television; Compact disc;<br />
Dolby noise reduction; Television; Walkman cassette player.<br />
Further Reading<br />
Gilder, George. Life After Television. New York: W. W. Norton, 1992.<br />
Lardner, James. Fast Forward: Hollywood, the Japanese, <strong>and</strong> the Onslaught<br />
of the VCR. New York: Norton, 1987.<br />
Luther, Arch C. Digital Video in the PC Environment. New York:<br />
McGraw-Hill, 1989.<br />
Wasser, Frederick. Veni, Vidi, Video: The Hollywood Empire and the<br />
VCR. Austin: University of Texas Press, 2001.
Virtual machine<br />
The invention: The first computer to swap storage space between<br />
its r<strong>and</strong>om access memory (RAM) <strong>and</strong> hard disk to create a<br />
larger “virtual” memory that enabled it to increase its power.<br />
The people behind the invention:<br />
International Business Machines (IBM) Corporation, an<br />
American data processing firm<br />
Massachusetts Institute of Technology (MIT), an American<br />
university<br />
Bell Labs, the research <strong>and</strong> development arm of the American<br />
Telephone <strong>and</strong> Telegraph Company<br />
A Shortage of Memory<br />
During the late 1950’s <strong>and</strong> the 1960’s, computers generally used<br />
two types of data storage areas. The first type, called “magnetic<br />
disk storage,” was slow <strong>and</strong> large, but its storage space was relatively<br />
cheap <strong>and</strong> abundant. The second type, called “main memory”<br />
(also often called “r<strong>and</strong>om access memory,” or RAM), was<br />
much faster. Computation <strong>and</strong> program execution occurred primarily<br />
in the “central processing unit” (CPU), which is the “brain”<br />
of the computer. The CPU accessed RAM as an area in which to<br />
perform intermediate computations, store data, <strong>and</strong> store program<br />
instructions.<br />
To run programs, users went through a lengthy process. At that<br />
time, keyboards with monitors that allowed on-line editing <strong>and</strong><br />
program storage were very rare. Instead, most users used typewriter-like<br />
devices to type their programs or text on paper cards.<br />
Holding decks of such cards, users waited in lines to use card readers.<br />
The cards were read <strong>and</strong> returned to the user, <strong>and</strong> the programs<br />
were scheduled to run later. Hours later or even overnight,<br />
the output of each program was printed in some predetermined<br />
order, after which all the outputs were placed in user bins. It might<br />
take as long as several days to make any program corrections that<br />
were necessary.
Because CPUs were expensive, many users had to share a single<br />
CPU. If a computer had a monitor that could be used for editing or<br />
could run more than one program at a time, more memory was required.<br />
RAM was extremely expensive, and even multimillion-dollar<br />
computers had small memories. In addition, this primitive<br />
RAM was extremely bulky.<br />
Virtually Unlimited Memory<br />
The solution to the problem of creating affordable, convenient<br />
memory came in a revolutionary reformulation of the relationship<br />
between main memory <strong>and</strong> disk space. Since disk space was large<br />
<strong>and</strong> cheap, it could be treated as an extended “scratch pad,” or temporary-use<br />
area, for main memory. While a program ran, only small<br />
parts of it (called pages or segments), normally the parts in use at<br />
that moment, would be kept in the main memory. If only a few<br />
pages of each program were kept in memory at any time, more programs<br />
could coexist in memory. When pages lay idle, they would be<br />
sent from RAM to the disk, as newly requested pages were loaded<br />
from the disk to the RAM. Each user <strong>and</strong> program “thought” it had<br />
essentially unlimited memory (limited only by disk space), hence<br />
the term “virtual memory.”<br />
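The swapping scheme described above can be sketched as a toy demand-paging simulator. The class name, the least-recently-used eviction policy, and the sizes are illustrative choices, not details of any historical system:

```python
# Minimal sketch of demand paging: RAM holds only a few pages; a miss
# ("page fault") evicts the least-recently-used page to disk and loads
# the requested page in its place.

from collections import OrderedDict

class VirtualMemory:
    def __init__(self, ram_frames: int):
        self.ram = OrderedDict()   # page -> contents, kept in LRU order
        self.disk = {}             # backing store (the "scratch pad")
        self.ram_frames = ram_frames
        self.page_faults = 0

    def access(self, page: int):
        if page in self.ram:               # hit: just refresh LRU order
            self.ram.move_to_end(page)
            return
        self.page_faults += 1              # miss: page fault
        if len(self.ram) >= self.ram_frames:
            victim, data = self.ram.popitem(last=False)   # evict LRU page
            self.disk[victim] = data                       # swap out
        self.ram[page] = self.disk.pop(page, f"page-{page}")  # swap in

vm = VirtualMemory(ram_frames=3)
for p in [1, 2, 3, 1, 4, 2]:   # program touches more pages than RAM holds
    vm.access(p)
print(vm.page_faults)   # 5 -- every access faults except the re-use of page 1
```

The program "sees" five pages even though only three frames exist; the cost, as the text notes, is the extra time spent swapping.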
The system did, however, have its drawbacks. The swapping <strong>and</strong><br />
paging processes reduced the speed at which the computer could<br />
process information. Coordinating these activities also required<br />
more circuitry. Integrating each program <strong>and</strong> the amount of virtual<br />
memory space it required was critical. To keep the system operating<br />
accurately, stably, <strong>and</strong> fairly among users, all computers have an<br />
“operating system.” Operating systems that support virtual memory<br />
are more complex than the older varieties.<br />
Many years of research, design, simulations, <strong>and</strong> prototype testing<br />
were required to develop virtual memory. CPUs <strong>and</strong> operating<br />
systems were designed by large teams, not individuals. Therefore,<br />
the exact original discovery of virtual memory is difficult to trace.<br />
Many people contributed at each stage.<br />
The first rudimentary implementation of virtual memory concepts<br />
was on the Atlas computer, which was constructed in the early<br />
1960’s in Engl<strong>and</strong>, at the University of Manchester. It coupled RAM
with a device that read a magnetizable cylinder, or drum, which<br />
meant that it was a two-part storage system.<br />
In the late 1960’s, the Massachusetts Institute of Technology<br />
(MIT), Bell Telephone Labs, <strong>and</strong> the General Electric Company<br />
(later Honeywell) jointly designed a high-level operating system<br />
called MULTICS, which had virtual memory.<br />
During the 1960’s, IBM worked on virtual memory, and one model<br />
of the IBM 360 series, the Model 67, supported it. With the evolution<br />
of engineering concepts such as circuit integration, IBM produced<br />
a new line of computers called the IBM 370 series. The IBM 370<br />
supported several advances in hardware (equipment) <strong>and</strong> software<br />
(program instructions), including full virtual memory capabilities.<br />
It was a platform for a new <strong>and</strong> powerful “environment,”<br />
or set of conditions, in which software could be run; IBM called<br />
this environment the VM/370. The VM/370 went far beyond virtual<br />
memory, using virtual memory to create virtual machines. In a<br />
virtual machine environment, each user can select a separate <strong>and</strong><br />
complete operating system. This means that separate copies of operating<br />
systems such as OS/360, CMS, DOS/360, <strong>and</strong> UNIX can all<br />
run in separate “compartments” on a single computer. In effect,<br />
each operating system has its own machine. Reliability <strong>and</strong> security<br />
were also increased. This was a major breakthrough, a second<br />
computer revolution.<br />
Another measure of the significance of the IBM 370 was the commercial<br />
success <strong>and</strong> rapid, widespread distribution of the system.<br />
The large customer base for the older IBM 360 also appreciated the<br />
IBM 370’s compatibility with that machine. The essentials of the<br />
IBM 370 virtual memory model were retained even in the 1990’s<br />
generation of large, powerful mainframe computers. Furthermore,<br />
its success carried over to the design decisions of other computers in<br />
the 1970’s.<br />
The second-largest computer manufacturer, Digital Equipment<br />
Corporation (DEC), followed suit; its popular VAX minicomputers<br />
had virtual memory in the late 1970’s. The celebrated UNIX operating<br />
system also added virtual memory. IBM’s success had led to industry-wide<br />
acceptance.
Consequences<br />
The impact of virtual memory extends beyond large computers<br />
<strong>and</strong> the 1970’s. During the late 1970’s <strong>and</strong> early 1980’s, the computer<br />
world took a giant step backward. Small, single-user computers<br />
called personal computers (PCs) became very popular. Because<br />
they were single-user models <strong>and</strong> were relatively cheap,<br />
they were sold with weak CPUs <strong>and</strong> deplorable operating systems<br />
that did not support virtual memory. Only one program could run<br />
at a time. Larger <strong>and</strong> more powerful programs required more<br />
memory than was physically installed. These computers crashed<br />
often.<br />
Virtual memory raises PC user productivity. With virtual memory,<br />
users can continue editing files during data transmissions or long<br />
calculations, even when physical memory runs out. Most major<br />
PCs now have improved CPUs <strong>and</strong> operating systems, <strong>and</strong> these<br />
advances support virtual memory. Popular virtual memory systems<br />
such as OS/2, Windows/DOS, <strong>and</strong> MAC-OS are available. Even old<br />
virtual memory UNIX has been used in PCs.<br />
The concept of a virtual machine has been revived, in a weak<br />
form, on PCs that have dual operating systems (such as UNIX <strong>and</strong><br />
DOS, OS/2 <strong>and</strong> DOS, MAC <strong>and</strong> DOS, <strong>and</strong> UNIX <strong>and</strong> DOS combinations).<br />
Most powerful programs benefit from virtual memory. Many<br />
dazzling graphics programs require massive RAM but run safely in<br />
virtual memory. Scientific visualization, high-speed animation, <strong>and</strong><br />
virtual reality all benefit from it. Artificial intelligence <strong>and</strong> computer<br />
reasoning are also part of a “virtual” future.<br />
See also Colossus computer; Differential analyzer; ENIAC computer;<br />
IBM Model 1401 computer; Personal computer; Robot (industrial);<br />
SAINT; Virtual reality.<br />
Further Reading<br />
Bashe, Charles J. IBM’s Early Computers. Cambridge, Mass.: MIT<br />
Press, 1986.<br />
Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.:<br />
MIT Press, 2000.
Chposky, James, <strong>and</strong> Ted Leonsis. Blue Magic: The People, Power, <strong>and</strong><br />
Politics Behind the IBM Personal Computer. New York: Facts on File,<br />
1988.<br />
Seitz, Frederick, <strong>and</strong> Norman G. Einspruch. Electronic Genie: The<br />
Tangled History of Silicon. Urbana: University of Illinois Press,<br />
1998.
Virtual reality<br />
The invention: The creation of highly interactive, computer-based<br />
multimedia environments in which the user becomes a participant<br />
with the computer in a “virtually real” world.<br />
The people behind the invention:<br />
Ivan Sutherl<strong>and</strong> (1938- ), an American computer scientist<br />
Myron W. Krueger (1942- ), an American computer scientist<br />
Fred P. Brooks (1931- ), an American computer scientist<br />
Human/Computer Interface<br />
In the early 1960’s, the encounter between humans <strong>and</strong> computers<br />
was considered to be the central event of the time. The computer<br />
was evolving more rapidly than any technology in history; humans<br />
seemed not to be evolving at all. The “user interface” (the devices<br />
<strong>and</strong> language with which a person communicates with a computer)<br />
was a veneer that had been applied to the computer to make it<br />
slightly easier to use, but it seemed obvious that the ultimate interface<br />
would be connecting the human body <strong>and</strong> senses directly to the<br />
computer.<br />
Against this background, Ivan Sutherl<strong>and</strong> of the University of<br />
Utah identified the next logical step in the development of computer<br />
graphics. He implemented a head-mounted display that allowed<br />
a person to look around in a graphically created “room” simply<br />
by turning his or her head. Two small cathode-ray tubes, or<br />
CRTs (which are the basis of television screens <strong>and</strong> computer monitors),<br />
driven by vector graphics generators (mathematical image-creating<br />
devices) provided the appropriate view for each eye, <strong>and</strong><br />
thus, stereo vision.<br />
In the early 1970’s, Fred P. Brooks of the University of North Carolina<br />
created a system that allowed a person to h<strong>and</strong>le graphic objects<br />
by using a mechanical manipulator. When the user moved the<br />
physical manipulator, a graphic manipulator moved accordingly. If<br />
a graphic block was picked up, the user felt its weight <strong>and</strong> its resistance<br />
to his or her fingers closing around it.
A New Reality<br />
Beginning in 1969, Myron W. Krueger of the University of Wisconsin<br />
created a series of interactive environments that emphasized<br />
unencumbered, full-body, multisensory participation in computer<br />
events. In one demonstration, a sensory floor detected participants’<br />
movements around a room. A symbol representing each participant<br />
moved through a projected graphic maze that changed in playful<br />
ways if participants tried to cheat. In another demonstration, participants<br />
could use the image of a finger to draw on the projection<br />
screen. In yet another, participants’ views of a projected three-dimensional<br />
room changed appropriately as they moved around<br />
the physical space.<br />
It was interesting that people naturally accepted these projected<br />
experiences as reality. They expected their bodies to influence graphic<br />
objects <strong>and</strong> were delighted when they did. They regarded their electronic<br />
images as extensions of themselves. What happened to their<br />
images also happened to them; they felt what touched their images.<br />
These observations led to the creation of the Videoplace, a graphic<br />
world that people could enter from different places to interact with<br />
each other <strong>and</strong> with graphic creatures. Videoplace is an installation<br />
at the Connecticut Museum of Natural History in Storrs, Connecticut.<br />
Videoplace visitors in separate rooms can fingerpaint together,<br />
perform free-fall gymnastics, tickle each other, <strong>and</strong> experience additional<br />
interactive events.<br />
The computer combines <strong>and</strong> alters inputs from separate cameras<br />
trained on each person, each of whom responds in turn to the computer’s<br />
output, playing games in the world created by Videoplace<br />
software. Since participants’ live video images can be manipulated<br />
(moved, scaled, or rotated) in real time, the world that is created is<br />
not bound by the laws of physics. In fact, the result is a virtual reality<br />
in which new laws of cause <strong>and</strong> effect are created, <strong>and</strong> can be<br />
changed, from moment to moment. Indeed, the term “virtual reality”<br />
describes the type of experience that can be created with Videoplace<br />
or with the technology invented by Ivan Sutherl<strong>and</strong>.<br />
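The real-time manipulation described above (moving, scaling, or rotating a live image) reduces to applying a coordinate transform to image points on every frame. The following Python sketch illustrates the idea; the function and its parameters are illustrative, not Videoplace's actual code:

```python
# A 2-D transform of the kind a system like Videoplace applies to each
# point of a participant's silhouette every frame: rotate about the
# origin, then scale, then translate.

import math

def transform(point, angle_deg=0.0, scale=1.0, dx=0.0, dy=0.0):
    """Rotate a point about the origin, then scale, then translate."""
    x, y = point
    a = math.radians(angle_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr * scale + dx, yr * scale + dy)

# Rotate a silhouette point 90 degrees and double its size:
print(transform((1.0, 0.0), angle_deg=90, scale=2.0))   # approximately (0.0, 2.0)
```

Because each frame's transform can be chosen freely, the projected world need not obey physics, which is exactly the "new laws of cause and effect" the text describes.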
Virtual realities are part of certain ongoing trends. Most obvious<br />
are the trend from interaction to participation in computer events<br />
<strong>and</strong> the trend from passive to active art forms. In addition, artificial
Ivan Sutherl<strong>and</strong><br />
Ivan Sutherl<strong>and</strong> was born in Hastings, Nebraska, in 1938.<br />
His father was an engineer, <strong>and</strong> from an early age Sutherl<strong>and</strong><br />
considered engineering his own destiny, too. He earned a<br />
bachelor’s degree from the Carnegie Institute of Technology in<br />
1959, a master’s degree from the California Institute of Technology<br />
in 1960, <strong>and</strong> a doctorate from the Massachusetts Institute<br />
of Technology (MIT) in 1963.<br />
His adviser at MIT was Claude Shannon, creator of information<br />
theory, who directed Sutherl<strong>and</strong> to find ways to simplify<br />
the interface between people <strong>and</strong> computers. Out of this<br />
research came Sketchpad. It was software that allowed people<br />
to draw designs on a computer terminal with a light pen,<br />
an early form of computer-assisted design (CAD). The U.S.<br />
Defense Department’s Advanced Research Projects Agency<br />
became interested in Sutherl<strong>and</strong>’s work <strong>and</strong> hired him to direct<br />
its Information Processing Techniques Office in 1964. In<br />
1966 he left to become an associate professor of electrical engineering<br />
at Harvard University, moving to the University of<br />
Utah in 1968, <strong>and</strong> then to Caltech in 1975. During his academic<br />
career he developed the graphic interface for virtual<br />
reality, first announced in his ground-breaking 1968 article<br />
“A Head-Mounted Three-Dimensional Display.”<br />
In 1980 Sutherl<strong>and</strong> left academia for industry. He already<br />
had business experience as cofounder of Evans & Sutherl<strong>and</strong><br />
in Salt Lake City. The new firm, Sutherl<strong>and</strong>, Sproull, <strong>and</strong> Associates,<br />
which provided consulting services <strong>and</strong> venture capital,<br />
later became part of Sun Microsystems, Inc. Sutherl<strong>and</strong> remained<br />
as a research fellow <strong>and</strong> vice president. A member of<br />
the National Academy of Engineering <strong>and</strong> National Academy<br />
of Sciences, in 1988 Sutherland was awarded the A. M. Turing<br />
Award, the highest honor in information technology.<br />
experiences are taking on increasing significance. Businesspersons<br />
like to talk about “doing it right the first time.” This can now be<br />
done in many cases, not because fewer mistakes are being made by<br />
people but because those mistakes are being made in simulated environments.<br />
Most important is that virtual realities provide means of express-
ing <strong>and</strong> experiencing, as well as new ways for people to interact. Entertainment<br />
uses of virtual reality will be as economically significant<br />
as more practical uses, since entertainment is the United States’<br />
number-two export. Vicarious experience through theater, novels,<br />
movies, <strong>and</strong> television represents a significant percentage of people’s<br />
lives in developed countries. The addition of a radically new<br />
form of physically involving, interactive experience is a major cultural<br />
event that may shape human consciousness as much as earlier<br />
forms of experience have.<br />
Consequences<br />
Most religions offer their believers an escape from this world,<br />
but few technologies have been able to do likewise. Not so with<br />
virtual reality, the fledgling technology in which people explore a<br />
simulated three-dimensional environment generated by a computer.<br />
Using this technology, people can not only escape from this<br />
world but also design the world in which they want to live.<br />
In most virtual reality systems, many of which are still experimental,<br />
one watches the scene, or alternative reality, through three-dimensional<br />
goggles in a headset. Sound <strong>and</strong> tactile sensations enhance<br />
the illusion of reality. Because of the wide variety of actual<br />
<strong>and</strong> potential applications of virtual reality, from three-dimensional<br />
video games <strong>and</strong> simulators to remotely operated “telepresence”<br />
systems for the nuclear <strong>and</strong> undersea industries, interest in the field<br />
is intense.<br />
The term “virtual reality” describes the computer-generated<br />
simulation of reality with physical, tactile, <strong>and</strong> visual dimensions.<br />
The interactive technology is used by science <strong>and</strong> engineering researchers<br />
as well as by the entertainment industry, especially in<br />
the form of video games. Virtual reality systems can, for example,<br />
simulate a walk-through of a building in an architectural graphics<br />
program. Virtual reality technology in which the artificial world<br />
overlaps with reality will have major social <strong>and</strong> psychological implications.<br />
See also Personal computer; Virtual machine.<br />
Further Reading<br />
Earnshaw, Rae A., M. A. Gigante, <strong>and</strong> H. Jones. Virtual Reality Systems.<br />
San Diego: Academic Press, 1993.<br />
Moody, Fred. The Visionary Position: The Inside Story of the Digital<br />
Dreamers Who Are Making Virtual Reality a Reality. New York:<br />
Times Business, 1999.<br />
Sutherl<strong>and</strong>, Ivan Edward. Sketchpad: A Man-Machine Graphical Communication<br />
System. New York: Garl<strong>and</strong>, 1980.
V-2 rocket<br />
The invention: The first long-range, liquid-fueled rocket, the<br />
V-2 was developed by Germany to carry bombs during World<br />
War II.<br />
The people behind the invention:<br />
Wernher von Braun (1912-1977), the chief engineer <strong>and</strong> prime<br />
motivator of rocket research in Germany during the 1930’s<br />
<strong>and</strong> 1940’s<br />
Walter Robert Dornberger (1895-1980), the former comm<strong>and</strong>er<br />
of the Peenemünde Rocket Research Institute<br />
Fritz Gosslau, the head of the V-1 development team<br />
Paul Schmidt, the designer of the impulse jet motor<br />
The “Buzz Bomb”<br />
On May 26, 1943, in the middle of World War II, key German military<br />
officials were briefed by two teams of scientists, one representing<br />
the air force <strong>and</strong> the other representing the army. Each team had<br />
launched its own experimental aerial war craft. The military chiefs<br />
were to decide which project merited further funding <strong>and</strong> development.<br />
Each experimental craft had both advantages <strong>and</strong> disadvantages,<br />
<strong>and</strong> each counterbalanced the other. Therefore, it was decided<br />
that both craft were to be developed. They were to become the V-1<br />
<strong>and</strong> the V-2 aircraft.<br />
The impulse jet motor used in the V-1 craft was designed by Munich<br />
engineer Paul Schmidt. On April 30, 1941, the motor had been<br />
used to assist power on a biplane trainer. The development team for<br />
the V-1 was headed by Fritz Gosslau; the aircraft was designed<br />
by Robert Lusser.<br />
The V-1, or “buzz bomb,” was capable of delivering a one-ton warhead<br />
payload. While still in a late developmental stage, it was<br />
launched, under Adolf Hitler’s orders, to terrorize inhabited areas of<br />
London in retaliation for the damage that had been wreaked on Germany<br />
during the war. More than one hundred V-1’s were launched<br />
daily between June 13 <strong>and</strong> early September, 1944. Because the V-1
flew in a straight line <strong>and</strong> at a constant speed, Allied aircraft were<br />
able to intercept it more easily than they could the V-2.<br />
Two innovative systems made the V-1 unique: the drive operation<br />
and the guidance system. In the motor, air entered the<br />
grid valves through many small flaps. Fuel oil was introduced <strong>and</strong><br />
the mixture of fuel <strong>and</strong> oxygen was ignited. After ignition, the exp<strong>and</strong>ed<br />
gases produced the reaction propulsion. When the exp<strong>and</strong>ed<br />
gases had vacated, the reduced internal pressure allowed the valve<br />
flaps to reopen, admitting more air for the next cycle.<br />
The guidance system included a small propeller connected to a<br />
revolution counter that was preset based on the distance to the target.<br />
The number of propeller revolutions that it would take to reach<br />
the target was calculated before launch <strong>and</strong> punched into the counter.<br />
During flight, after the counter had measured off the selected<br />
number of revolutions, the aircraft’s elevator flaps became activated,<br />
causing the craft to dive at the target. Underst<strong>and</strong>ably, the accuracy<br />
was not what the engineers had hoped.<br />
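The preset counter described above amounts to a single pre-launch division. A Python sketch with an assumed calibration figure (the metres-per-revolution value is hypothetical, not a documented V-1 parameter):

```python
# Sketch of the V-1 preset-counter idea: the number of nose-propeller
# revolutions to the target is computed before launch from the range
# and an (assumed) calibration of distance covered per revolution.

DISTANCE_PER_REV_M = 30.0   # hypothetical calibration, metres per revolution

def preset_count(range_to_target_m: float) -> int:
    """Revolutions to punch into the counter before launch."""
    return round(range_to_target_m / DISTANCE_PER_REV_M)

# A roughly 240 km flight to London at this calibration:
print(preset_count(240_000))   # 8000
```

Any error in the calibration, headwind, or airspeed changes the distance covered per revolution, which is why, as the text notes, the accuracy disappointed the engineers.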
Vengeance Weapon 2<br />
According to the Treaty of Versailles (1919), world military forces<br />
were restricted to 100,000 men <strong>and</strong> a certain level of weaponry. The<br />
German military powers realized very early, however, that the<br />
treaty had neglected to restrict rocket-powered weaponry, which<br />
did not exist at the end of World War I (1914-1918). Wernher von<br />
Braun was hired as chief engineer for developing the V-2 rocket.<br />
The V-2 had a lift-off thrust of 11,550.5 newtons <strong>and</strong> was propelled<br />
by the combustion of liquid oxygen <strong>and</strong> alcohol. The propellants<br />
were pumped into the combustion chamber by a steampowered<br />
turboprop. The steam was generated by the decomposition<br />
of hydrogen peroxide, using sodium permanganate as a catalyst.<br />
One innovative feature of the V-2 that is still used was regenerative<br />
cooling, which used alcohol to cool the double-walled<br />
combustion chamber.<br />
The guidance system included two phases: powered <strong>and</strong> ballistic.<br />
Four seconds after launch, a preprogrammed tilt to 17 degrees<br />
was begun, then acceleration was continued to achieve the desired<br />
trajectory. At the desired velocity, the engine power was cut off via
one of two systems. In the automatic system, a device shut off the<br />
engine at the velocity desired; this method, however, was inaccurate.<br />
The second system sent a radio signal to the rocket’s receiver,<br />
which cut off the power. This was a far more accurate method, but<br />
the extra equipment required at the launch site was an attractive target<br />
for Allied bombers. This system was more often employed toward<br />
the end of the war.<br />
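The importance of an accurate cutoff can be illustrated with elementary ballistics. None of the numbers below come from the text: the 45-degree trajectory angle and 1,600 m/s cutoff velocity are assumed round figures, and the model ignores drag and Earth’s curvature, so it is a sketch rather than a reconstruction of the V-2’s flight.

```python
import math

G = 9.81  # m/s^2, sea-level gravity

def ballistic_range(v_cutoff: float, angle_deg: float) -> float:
    """Flat-Earth, drag-free range for a given engine-cutoff velocity."""
    return v_cutoff ** 2 * math.sin(2 * math.radians(angle_deg)) / G

# A 1 percent error in cutoff velocity becomes roughly a 2 percent
# error in range, because range grows with the square of velocity.
r0 = ballistic_range(1600.0, 45.0)   # nominal cutoff
r1 = ballistic_range(1616.0, 45.0)   # cutoff 1 percent too fast
print(round(r0 / 1000))              # about 261 km
print(round(r1 / r0 - 1, 4))         # 0.0201
```

Because the range error doubles the velocity error, even small inaccuracies in the automatic cutoff device translated into large misses, which is why the radio-controlled cutoff was worth the vulnerable ground equipment.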
Even the 907-kilogram warhead of the V-2 was a carefully tested<br />
device. The detonators had to be able to withst<strong>and</strong> 6 g’s of force during<br />
lift-off <strong>and</strong> reentry, as well as the vibrations inherent in a rocket<br />
flight. Yet they also had to be sensitive enough to ignite the bomb<br />
upon impact <strong>and</strong> before the explosive became buried in the target<br />
<strong>and</strong> lost power through diffusion of force.<br />
The V-2’s first successful test was in October of 1942, but it continued<br />
to be developed until August of 1944. During the next eight<br />
months, more than three thous<strong>and</strong> V-2’s were launched against Engl<strong>and</strong><br />
<strong>and</strong> the Continent, causing immense devastation <strong>and</strong> living<br />
up to the weapon’s name: Vergeltungswaffe zwei (vengeance weapon 2). Unfortunately<br />
for Hitler’s regime, the weapon that took fourteen years of<br />
research <strong>and</strong> testing to perfect entered the war too late to make an<br />
impact upon the outcome.<br />
Impact<br />
The V-1 <strong>and</strong> V-2 had a tremendous impact on the history <strong>and</strong> development<br />
of space technology. Even during the war, captured V-2’s<br />
were studied by Allied scientists. American rocket scientists were<br />
especially interested in the technology, since they too were working<br />
to develop liquid-fueled rockets.<br />
After the war, German military personnel were sent to the United<br />
States, where they signed contracts to work with the U.S. Army in a<br />
program known as “Operation Paperclip.” Testing of the captured<br />
V-2’s was undertaken at White S<strong>and</strong>s Missile Range near Alamogordo,<br />
New Mexico. The Navy’s JB-2 Loon jet-propelled bomb was<br />
developed from the study of the captured German V-1.<br />
The Soviet Union also benefited from captured V-2’s <strong>and</strong> from the<br />
German V-2 factories that were dismantled following the war. With<br />
these resources, the Soviet Union developed its own rocket technol-
ogy, which culminated in the launch of Sputnik 1, the world’s first artificial<br />
satellite, on October 4, 1957. The United States was not far behind.<br />
It launched its first satellite, Explorer 1, on January 31, 1958. On<br />
April 12, 1961, the world’s first human space traveler, Soviet cosmonaut<br />
Yuri A. Gagarin, was launched into Earth orbit.<br />
See also Airplane; Cruise missile; Hydrogen bomb; Radar; Rocket;<br />
Stealth aircraft.<br />
Further Reading<br />
Bergaust, Erik. Wernher von Braun: The Authoritative <strong>and</strong> Definitive<br />
Biographical Profile of the Father of Modern Space Flight. Washington:<br />
National Space Institute, 1976.<br />
De Maeseneer, Guido. Peenemünde: The Extraordinary Story of Hitler’s<br />
Secret Weapons V-1 <strong>and</strong> V-2. Vancouver: AJ Publishing, 2001.<br />
Piszkiewicz, Dennis. Wernher von Braun: The Man Who Sold the Moon.<br />
Westport, Conn.: Praeger, 1998.
Walkman cassette player<br />
The invention: Inexpensive portable device for listening to stereo<br />
cassettes that was the most successful audio product of the 1980’s<br />
<strong>and</strong> the forerunner of other portable electronic devices.<br />
The people behind the invention:<br />
Masaru Ibuka (1908-1997), a Japanese engineer who cofounded<br />
Sony<br />
Akio Morita (1921-1999), a Japanese physicist <strong>and</strong> engineer,<br />
cofounder of Sony<br />
Norio Ohga (1930- ), a Japanese opera singer <strong>and</strong><br />
businessman who ran Sony’s tape recorder division before<br />
becoming president of the company in 1982<br />
Convergence of Two Technologies<br />
The Sony Walkman was the result of the convergence of two<br />
technologies: the transistor, which enabled miniaturization of electronic<br />
components, <strong>and</strong> the compact cassette, a worldwide st<strong>and</strong>ard<br />
for magnetic recording tape. As the smallest tape player devised,<br />
the Walkman was based on a systems approach that made use of advances<br />
in several unrelated areas, including improved loudspeaker<br />
design <strong>and</strong> reduced battery size. The Sony company brought them<br />
together in an innovative product that found a mass market in a remarkably<br />
short time.<br />
Tokyo Telecommunications Engineering, which became Sony,<br />
was one of many small entrepreneurial companies that made audio<br />
products in the years following World War II. It was formed in the<br />
ruins of Tokyo, Japan, in 1946, <strong>and</strong> got its start manufacturing components<br />
for inexpensive radios <strong>and</strong> record players. They were the<br />
ideal products for a company with some expertise in electrical engineering<br />
<strong>and</strong> a limited manufacturing capability.<br />
Akio Morita <strong>and</strong> Masaru Ibuka formed Tokyo Telecommunications<br />
Engineering to make a variety of electrical testing devices <strong>and</strong><br />
instruments, but their real interests were in sound, <strong>and</strong> they decided<br />
to concentrate on audio products. They introduced a reel-to-reel
tape recorder in 1950. Its success ensured that the company would<br />
remain in the audio field. The trade name of the magnetic tape they<br />
manufactured was “Soni”; this was the origin of the company’s new<br />
name, adopted in 1957. The 1953 acquisition of a license to use Bell<br />
Laboratories’ transistor technology was a turning point in the fortunes<br />
of Sony, for it led the company to the highly popular transistor<br />
radio <strong>and</strong> started it along the path to reducing the size of consumer<br />
products. In the 1960’s, Sony led the way to smaller <strong>and</strong> cheaper radios,<br />
tape recorders, <strong>and</strong> television sets, all using transistors instead<br />
of vacuum tubes.<br />
The Consumer Market<br />
The original marketing strategy for manufacturers of mechanical<br />
entertainment devices had been to put one into every home. This<br />
was the goal for Edison’s phonograph, the player piano, the Victrola,<br />
<strong>and</strong> the radio receiver. Sony <strong>and</strong> other Japanese manufacturers<br />
found out that if a product were small enough <strong>and</strong> cheap enough,<br />
two or three might be purchased for home use, or even for outdoor<br />
use. This was the marketing lesson of the transistor radio.<br />
The unparalleled sales of transistor radios indicated that consumer<br />
durables intended for entertainment were not exclusively<br />
used in the home. The appeal of the transistor radio was that it made<br />
entertainment portable. Sony applied this concept to televisions<br />
<strong>and</strong> tape recorders, developing small portable units powered by<br />
batteries. Sony was first to produce a “personal” television set, with<br />
a five-inch screen. To the surprise of many manufacturers who said<br />
there would never be a market for such a novelty item, it sold well.<br />
It was impossible to reduce tape recorders to the size of transistor<br />
radios because of the problems of h<strong>and</strong>ling very small reels of tape<br />
<strong>and</strong> the high power required to turn them. Portable tape recorders required<br />
several large flashlight batteries. Although tape had the advantage<br />
of recording capability, it could not challenge the popularity<br />
of the microgroove 45 revolution-per-minute (rpm) disc because the<br />
tape player was much more difficult to operate. In the 1960’s, several<br />
types of tape cartridge were introduced to overcome this problem, including<br />
the eight-track tape cartridge <strong>and</strong> the Philips compact cassette.<br />
Sony <strong>and</strong> Matsushita were two of the leading Japanese manu-
facturers that quickly incorporated the compact cassette into their<br />
audio products, producing the first cassette players available in the<br />
United States.<br />
The portable cassette players of the 1960’s <strong>and</strong> 1970’s were based<br />
on the transistor radio concept: small loudspeaker, transistorized<br />
amplifier, <strong>and</strong> flashlight batteries all enclosed in a plastic case. The<br />
size of transistorized components was being reduced constantly,<br />
<strong>and</strong> new types of batteries, notably the nickel cadmium combination,<br />
offered higher power output in smaller sizes. The problem of<br />
reducing the size of the loudspeaker without serious deterioration<br />
of sound quality blocked the path to very small cassette players.<br />
Sony’s engineers solved the problem with a very small loudspeaker<br />
device using plastic diaphragms <strong>and</strong> new, lighter materials for the<br />
magnets. These devices were incorporated into tiny stereo headphones<br />
that set new st<strong>and</strong>ards of fidelity.<br />
The first “walkman” was made by Sony engineers for the personal<br />
use of Masaru Ibuka. He wanted to be able to listen to high-fidelity<br />
recorded sound wherever he went, <strong>and</strong> the tiny player was small<br />
enough to fit inside a pocket. Sony was experienced in reducing the<br />
size of machines. At the same time the walkman was being built,<br />
Sony engineers were struggling to produce a video recording cassette<br />
that was also small enough to fit into Ibuka’s pocket.<br />
Although the portable stereo was part of a long line of successful<br />
miniaturized consumer products, it was not immediately recognized<br />
as a commercial technology. There were already plenty of cassette<br />
players in home units, in automobiles, <strong>and</strong> in portable players.<br />
Marketing experts questioned the need for a tiny version. The board<br />
of directors of Sony had to be convinced by Morita that the new<br />
product had commercial potential. The Sony Soundabout portable<br />
cassette player was introduced to the market in 1979.<br />
Impact<br />
The Soundabout was initially treated as a novelty in the audio<br />
equipment industry. At a price of $200, it could not be considered<br />
a product for the mass market. Although it sold well in Japan, where<br />
people were used to listening to music on headphones, sales in the<br />
United States were not encouraging. Sony’s engineers, working un-
der the direction of Kozo Ohsone, reduced the size <strong>and</strong> cost of the<br />
machine. In 1981, the Walkman II was introduced. It was 25 percent<br />
smaller than the original version <strong>and</strong> had 50 percent fewer moving<br />
parts. Its price was considerably lower <strong>and</strong> continued to fall.<br />
The Walkman opened a huge market for audio equipment that<br />
nobody knew existed. Sony had again confounded the marketing<br />
experts who doubted the appeal of a new consumer electronics<br />
product. It took about two years for Sony’s Japanese competitors,<br />
including Matsushita, Toshiba, <strong>and</strong> Aiwa, to bring out portable personal<br />
stereos. Such was the popularity of the device that any miniature<br />
cassette player was called a “walkman,” irrespective of the<br />
manufacturer. Sony kept ahead of the competition by constant innovation:<br />
Dolby noise reduction circuits were added in 1982, <strong>and</strong> a rechargeable<br />
battery feature was introduced in 1985. The machine became<br />
smaller, until it was barely larger than the audio cassette it<br />
played.<br />
Sony developed a whole line of personal stereos. Waterproofed<br />
Walkmans were marketed to customers who wanted musical accompaniment<br />
to water sports. There were special models for tennis<br />
players <strong>and</strong> joggers. The line grew to encompass about forty different<br />
types of portable cassette players, priced from about $30 to $500<br />
for a high-fidelity model.<br />
In the ten years following the introduction of the Walkman,<br />
Sony sold fifty million units, including twenty-five million in the<br />
United States. Its competitors sold millions more. They were manufactured<br />
all over the Far East <strong>and</strong> came in a broad range of sizes<br />
<strong>and</strong> prices, with the cheapest models about $20. Increased competition<br />
in the portable tape player market continually forced down<br />
prices. Sony had to respond to the huge numbers of cheap copies<br />
by redesigning the Walkman to bring down its cost <strong>and</strong> by automating<br />
its production. The playing mechanism was combined with the<br />
integrated circuit that provided amplification, allowing the two to be<br />
manufactured as one unit.<br />
The Walkman did more than revive sales of audio equipment in<br />
the sagging market of the late 1970’s. It stimulated dem<strong>and</strong> for cassette<br />
tapes <strong>and</strong> helped make the compact cassette the worldwide<br />
st<strong>and</strong>ard for magnetic tape. At the time the Walkman was introduced,<br />
the major form of prerecorded sound was the vinyl micro-
Masaru Ibuka<br />
Nicknamed “genius inventor” in college, Masaru Ibuka developed<br />
into a visionary corporate leader <strong>and</strong> business philosopher.<br />
Born in Nikko City, Japan, in 1908, he took a degree in engineering<br />
from Waseda University in 1933 <strong>and</strong> went to work<br />
at Photo-Chemical Laboratory, which developed movie film.<br />
Changing to naval research during World War II, he met Akio<br />
Morita, another engineer. After the war they opened an electronics<br />
shop together, calling it the Tokyo Telecommunications<br />
Engineering Corporation, <strong>and</strong> began experimenting with tape<br />
recorders.<br />
Their first model was a modest success, <strong>and</strong> the business<br />
grew under Ibuka, who was president <strong>and</strong> later chairman. He<br />
thought up a new, less daunting name for his company, Sony, in<br />
the 1950’s, when it rapidly became a leader in consumer electronics.<br />
His goal was to make existing technology useful to people<br />
in everyday life. “He sowed the seeds of a deep conviction<br />
that our products must bring joy <strong>and</strong> fun to users,” one of his<br />
successors as president, Nobuyuki Idei, said in 1997.<br />
While American companies were studying military applications<br />
for the newly developed transistor in the 1950’s, Ibuka<br />
<strong>and</strong> Morita put it to use in an affordable transistor radio <strong>and</strong><br />
then found ways to shrink its size <strong>and</strong> power it with batteries so<br />
that it could be taken anywhere. In a similar fashion, they made<br />
tape recorders <strong>and</strong> players (such as the Walkman), video players,<br />
compact disc players, <strong>and</strong> televisions ever cheaper, more reliable,<br />
<strong>and</strong> more efficiently designed.<br />
A hero in the Japanese business world, Ibuka retired as Sony<br />
chairman in 1976 but continued to help out as a consultant until<br />
his death in 1997.<br />
groove record. In 1983, the ratio of vinyl to cassette sales was 3:2. By<br />
the end of the decade, the audio cassette was the bestselling format<br />
for recorded sound, outselling vinyl records <strong>and</strong> compact discs<br />
combined by a ratio of 2:1. The compatibility of the audio cassette<br />
used in personal players with the home stereo ensured that it would<br />
be the most popular tape recording medium.<br />
The market for portable personal players in the United States<br />
during the decade of the 1990’s was estimated to be more than
twenty million units each year. Sony accounted for half of the 1991<br />
American market of fifteen million units selling at an average price<br />
of $50. It appeared that there would be more than one in every<br />
home. In some parts of Western Europe, there were more cassette<br />
players than people, reflecting the level of market penetration<br />
achieved by the Walkman.<br />
The ubiquitous Walkman had a noticeable effect on the way<br />
that people listen to music. The sound from the headphones of a<br />
portable player is more intimate <strong>and</strong> immediate than the sound<br />
coming from the loudspeakers of a home stereo. The listener can<br />
hear a wider range of frequencies <strong>and</strong> more of the lower amplitudes<br />
of music, while the reverberation caused by sound bouncing<br />
off walls is reduced. The listening public has become accustomed<br />
to the Walkman sound <strong>and</strong> expects it to be duplicated on<br />
commercial recordings. Recording studios that once mixed their<br />
master recordings to suit the reproduction characteristics of car<br />
or transistor radios began to mix them for Walkman headphones.<br />
Personal stereos also enable the listener to experience more of the<br />
volume of recorded sound because it is injected directly into the<br />
ear.<br />
The Walkman established a market for portable tape players that<br />
exerted an influence on all subsequent audio products. The introduction<br />
of the compact disc (CD) in 1983 marked a completely new<br />
technology of recording based on digital transformation of sound. It<br />
was jointly developed by the Sony <strong>and</strong> Philips companies. Despite<br />
the enormous technical difficulties of reducing the size of the laser<br />
reader <strong>and</strong> making it portable, Sony’s engineers devised the Discman<br />
portable compact disc player, which was unveiled in 1984. It<br />
followed the Walkman concept exactly <strong>and</strong> offered higher fidelity<br />
than the cassette tape version. The Discman sold for about $300<br />
when it was introduced, but its price soon dropped to less than<br />
$100. It did not achieve the volume of sales of the audio cassette version<br />
because fewer CDs than audio cassettes were in use. The slow<br />
acceptance of the compact disc hindered sales growth. The Discman<br />
could not match the portability of the Walkman because vibrations<br />
caused the laser reader to skip tracks.<br />
In the competitive market for consumer electronics products, a<br />
company must innovate to survive. Sony had watched cheap compe-
tition erode the sales of many of its most successful products, particularly<br />
the transistor radio <strong>and</strong> personal television, <strong>and</strong> was committed<br />
to both product improvement <strong>and</strong> new entertainment technologies.<br />
It knew that the personal cassette player had a limited sales potential<br />
in the advanced industrial countries, especially after the introduction<br />
of digital recording in the 1980’s. It therefore sought new technology<br />
to apply to the Walkman concept. Throughout the 1980’s, Sony <strong>and</strong><br />
its many competitors searched for a new version of the Walkman.<br />
The next generation of personal players was likely to be based on<br />
digital recording. Sony introduced its digital audio tape (DAT) system<br />
in 1990. This used the same digital technology as the compact<br />
disc but came in tape form. It was incorporated into expensive<br />
home players; naturally, Sony engineered a portable version. The<br />
tiny DAT Walkman offered unsurpassed fidelity of reproduction,<br />
but its incompatibility with any other tape format <strong>and</strong> its high price<br />
limited its sales to professional musicians <strong>and</strong> recording engineers.<br />
After the failure of DAT, Sony refocused its digital technology<br />
into a format more similar to the Walkman. Its Mini Disc (MD) used<br />
the same technology as the compact disc but had the advantage of a<br />
recording capability. The 2.5-inch disc was smaller than the CD, <strong>and</strong><br />
the player was smaller than the Walkman. The play-only version fit<br />
in the palm of a h<strong>and</strong>. A special feature prevented the skipping of<br />
tracks that caused problems with the Discman. The Mini Disc followed<br />
the path blazed by the Walkman <strong>and</strong> represented the most<br />
advanced technology applied to personal stereo players. At a price<br />
of about $500 in 1993, it was still too expensive to compete in the<br />
audio cassette Walkman market, but the history of similar products<br />
illustrates that rapid reduction of price could be achieved even with<br />
a complex technology.<br />
The Walkman had a powerful influence on the development of<br />
other digital <strong>and</strong> optical technologies. The laser readers of compact<br />
disc players can access visual <strong>and</strong> textual information in addition to<br />
sound. Sony introduced the Data Discman, a h<strong>and</strong>held device that<br />
displayed text <strong>and</strong> pictures on a tiny screen. Several other manufacturers<br />
marketed electronic books. Whatever the shape of future entertainment<br />
<strong>and</strong> information technologies, the legacy of the Walkman<br />
will put a high premium on portability, small size, <strong>and</strong> the<br />
interaction of machine <strong>and</strong> user.
See also Cassette recording; Compact disc; Dolby noise reduction;<br />
Electronic synthesizer; Laser; Transistor; Videocassette recorder.<br />
Further Reading<br />
Bull, Michael. Sounding Out the City: Personal Stereos <strong>and</strong> the Management<br />
of Everyday Life. New York: Berg, 2000.<br />
Lyons, Nick. The Sony Vision. New York: Crown Publishers, 1976.<br />
Morita, Akio, with Edwin M. Reingold <strong>and</strong> Mitsuko Shimomura.<br />
Made in Japan: Akio Morita <strong>and</strong> Sony. London: HarperCollins, 1994.<br />
Nathan, John. Sony: The Private Life. London: HarperCollins Business,<br />
2001.<br />
Schlender, Brenton R. “How Sony Keeps the Magic Going.” Fortune<br />
125 (February 24, 1992).
Washing machine<br />
The invention: Electrically powered machines that replaced h<strong>and</strong>-operated<br />
washing tubs <strong>and</strong> wringers, making the job of washing<br />
clothes much easier.<br />
The people behind the invention:<br />
O. B. Woodrow, a bank clerk who claimed to be the first to<br />
adapt electricity to a remodeled h<strong>and</strong>-operated washing<br />
machine<br />
Alva J. Fisher (1862-1947), the founder of the Hurley Machine<br />
Company, who designed the Thor electric washing machine,<br />
claiming that it was the first successful electric washer<br />
Howard Snyder, the mechanical genius of the Maytag<br />
Company<br />
H<strong>and</strong> Washing<br />
Until the development of the electric washing machine in the<br />
twentieth century, washing clothes was a tiring <strong>and</strong> time-consuming<br />
process. With the development of the washboard, dirt was loosened<br />
by rubbing. Clothes <strong>and</strong> tubs had to be carried to the water, or the<br />
water had to be carried to the tubs <strong>and</strong> clothes. After washing <strong>and</strong><br />
rinsing, clothes were h<strong>and</strong>-wrung, hang-dried, <strong>and</strong> ironed with<br />
heavy, heated irons. In nineteenth century America, the laundering<br />
process became more arduous with the greater use of cotton fabrics.<br />
In addition, the invention of the sewing machine resulted in the mass<br />
production of inexpensive ready-to-wear cotton clothing. With more<br />
clothing, there was more washing.<br />
One solution was h<strong>and</strong>-operated washing machines. The first<br />
American patent for a h<strong>and</strong>-operated washing machine was issued<br />
in 1805. By 1857, more than 140 patents had been issued; by 1880, between<br />
4,000 <strong>and</strong> 5,000 patents had been granted. While most of<br />
these machines were never produced, they show how much the<br />
public wanted to find a mechanical means of washing clothes.<br />
Nearly all the early types prior to the Civil War (1861-1865) were<br />
modeled after the washboard.
Washing machines based upon the rubbing principle had two<br />
limitations: They washed only one item at a time, <strong>and</strong> the rubbing<br />
was hard on clothes. The major conceptual breakthrough was to<br />
move away from rubbing <strong>and</strong> to design machines that would clean<br />
by forcing water through a number of clothes at the same time.<br />
An early suction machine used a plunger to force water through<br />
clothes. Later electric machines would have between two <strong>and</strong> four<br />
suction cups, similar to plungers, attached to arms that went up <strong>and</strong><br />
down <strong>and</strong> rotated on a vertical shaft. Another h<strong>and</strong>-operated washing<br />
machine rocked a tub back <strong>and</strong> forth on a frame. An<br />
electric motor was later substituted for the h<strong>and</strong> lever that rocked<br />
the tub. A third h<strong>and</strong>-operated washing machine was the dolly<br />
type. The dolly, which looked like an upside-down three-legged<br />
milking stool, was attached to the inside of the tub cover <strong>and</strong> was<br />
turned by a two-h<strong>and</strong>led lever on top of the enclosed tub.<br />
Machine Washing<br />
The h<strong>and</strong>-operated machines that would later dominate the<br />
market as electric machines were the horizontal rotary cylinder<br />
<strong>and</strong> the underwater agitator types. In 1851, James King patented<br />
a machine of the first type that utilized two concentric half-full<br />
cylinders. Water in the outer cylinder was heated by a fire beneath<br />
it; a h<strong>and</strong> crank turned the perforated inner cylinder that<br />
contained clothing <strong>and</strong> soap. The inner-ribbed design of the rotating<br />
cylinder raised the clothes as the cylinder turned. Once the<br />
clothes reached the top of the cylinder, they dropped back down<br />
into the soapy water.<br />
The first underwater agitator-type machine, the second type,<br />
was patented in 1869. In this machine, four blades at the bottom of<br />
the tub were attached to a central vertical shaft that was turned by<br />
a h<strong>and</strong> crank on the outside. The agitation created by the blades<br />
washed the clothes by driving water through the fabric. It was not<br />
until 1922, when Howard Snyder of the Maytag Company developed<br />
an underwater agitator with reversible motion, that this type<br />
of machine was able to compete with the other machines. Without<br />
reversible action, clothes would soon wrap around the blades <strong>and</strong><br />
not be washed.
Claims for inventing the first electric washing machine came<br />
from O. B. Woodrow, who founded the Automatic Electric Washer<br />
Company, <strong>and</strong> Alva J. Fisher, who developed the Thor electric<br />
washing machine for the Hurley Machine Company. Both Woodrow<br />
<strong>and</strong> Fisher made their innovations in 1907 by adapting electric<br />
power to modified h<strong>and</strong>-operated, dolly-type machines. Since only<br />
8 percent of American homes were wired for electricity in 1907, the<br />
early machines were advertised as adaptable to electric or gasoline<br />
power but could be h<strong>and</strong>-operated if the power source failed. Soon,<br />
electric power was being applied to the rotary cylinder, oscillating,<br />
<strong>and</strong> suction-type machines. In 1910, a number of companies introduced<br />
washing machines with attached wringers that could be operated<br />
by electricity. The introduction of automatic washers in 1937<br />
meant that washing machines could change phases without the action<br />
of the operator.<br />
Impact<br />
By 1907 (the year electricity was adapted to washing machines),<br />
electric power was already being used to operate fans, ranges, coffee<br />
percolators, flatirons, <strong>and</strong> sewing machines. By 1920, nearly 35<br />
percent of American residences had been wired for electricity; by<br />
1941, nearly 80 percent had been wired. The majority of American<br />
homes had washing machines by 1941; by 1958, this had risen to an<br />
estimated 90 percent.<br />
The growth of electric appliances, especially washing machines,<br />
is directly related to the decline in the number of domestic servants<br />
in the United States. The development of the electric washing machine<br />
was, in part, a response to a decline in servants, especially<br />
laundresses. Also, rather than easing the work of laundresses with<br />
technology, American families replaced their laundresses with washing<br />
machines.<br />
Commercial laundries were also affected by the growth of electric<br />
washing machines. At the end of the nineteenth century, they<br />
were in every major city <strong>and</strong> were used widely. Observers noted<br />
that just as spinning, weaving, <strong>and</strong> baking had once been done in<br />
the home but now were done in commercial establishments, laundry<br />
work had now begun its move out of the home. After World
War II (1939-1945), however, although commercial laundries continued<br />
to grow, their business was centered more <strong>and</strong> more on institutional<br />
laundry, rather than residential laundry, which they had lost<br />
to the home washing machine.<br />
Some scholars have argued that, on one h<strong>and</strong>, the return of laundry<br />
to the home resulted from marketing strategies that developed<br />
the image of the American woman as one who is home operating<br />
her appliances. On the other h<strong>and</strong>, it was probably because the electric<br />
washing machine made the task much easier that American<br />
women, still primarily responsible for the family laundry, were able<br />
to pursue careers outside the home.<br />
See also Electric refrigerator; Microwave cooking; Robot (household);<br />
Vacuum cleaner; Vending machine slug rejector.<br />
Further Reading<br />
Ierley, Merritt. Comforts of Home: The American House <strong>and</strong> the Evolution<br />
of Modern Convenience. New York: C. Potter, 1999.<br />
“Maytag Heritage Embraces Innovation, Dependable Products.”<br />
Machine Design 71, no. 18 (September, 1999).<br />
Shapiro, Laura. “Household Appliances.” Newsweek 130, no. 24A<br />
(Winter, 1997/1998).
Weather satellite<br />
The invention: A series of cloud-cover meteorological satellites<br />
that pioneered the reconnaissance of large-scale weather systems<br />
<strong>and</strong> led to vast improvements in weather forecasting.<br />
The person behind the invention:<br />
Harry Wexler (1911-1962), director of National Weather Bureau<br />
meteorological research<br />
Cameras in Space<br />
887<br />
The first experimental weather satellite, Tiros 1, was launched from Cape Canaveral on April 1, 1960. Tiros's orbit was angled to cover the area from Montreal, Canada, to Santa Cruz, Argentina, in the Western Hemisphere. Tiros completed an orbit every ninety-nine minutes and, when launched, was expected to survive at least three months in space, returning thousands of images of large-scale weather systems.

Tiros 1 was equipped with a pair of vidicon scanner television cameras, one equipped with a wide-angle lens and the other with a narrow-angle lens. Both cameras created pictures with five hundred lines per frame at a shutter speed of 1.5 milliseconds. Each television camera's imaging data were stored on magnetic tape for downloading to ground stations when Tiros 1 was in range. The wide-angle lens provided a low-resolution view of an area covering 2,048 square kilometers. The narrow-angle lens had a resolution of half a kilometer within a viewing area of 205 square kilometers.

Tiros transmitted its data to ground stations, which displayed the data on television screens. Photographs of these displays were then made for permanent records. Tiros weather data were sent to the Naval Photographic Interpretation Center for detailed meteorological analysis. Next, the photographs were passed along to the National Weather Bureau for further study.
Tiros caused some controversy because it was able to image large areas of the communist world: the Soviet Union, Cuba, and Mongolia. The weather satellite's imaging system was not, however, particularly useful as a spy satellite, and only large-scale surface features were visible in the images. Nevertheless, the National Aeronautics and Space Administration (NASA) skirted adverse international reactions by carefully scrutinizing Tiros's images for evidence of sensitive surface features before releasing them publicly.

Hurricane off the coast of Florida photographed from space. (PhotoDisc)
A Startling Discovery

Tiros 1 was not in orbit very long before it made a significant and startling discovery. It was the first satellite to document that large storms have vortex patterns that resemble whirling pinwheels. Within its lifetime, Tiros photographed more than forty northern mid-latitude storm systems, and each one had a vortex at its center. These storms were in various stages of development and were between 800 and 1,600 kilometers in diameter. The storm vortex in most of these was located inside a 560-kilometer-diameter circle around the center of the storm's low-pressure zone. Nevertheless, Tiros's images did not reveal at what stage in a storm's development the vortex pattern formed.

This was typical of Tiros's data. The satellite was truly an experiment, and, as is the case with most initial experiments, various new phenomena were uncovered but were not fully understood. The data showed clearly that weather systems could be investigated from orbit and that future weather satellites could be outfitted with sensors that would lead to better understanding of meteorology on a global scale.

Tiros 1 did suffer from a few difficulties during its lifetime in orbit. Low contrast in the television imaging system often made it difficult to distinguish between cloud cover and snow cover. The magnetic tape system for the high-resolution camera failed at an early stage. Also, Earth's magnetic field tended to move Tiros 1 away from an advantageous Earth observation attitude. Experience with Tiros 1 led to improvements in later Tiros satellites and many other weather-related satellites.
Consequences

Prior to Tiros 1, weather monitoring required networks of ground-based instrumentation centers, airborne balloons, and instrumented aircraft. Brief high-altitude rocket flights provided limited coverage of cloud systems from above. Tiros 1 was the first step in the development of the permanent monitoring of weather systems. The resulting early detection and accurate tracking of hurricanes alone have resulted in savings in both property and human life.

As a result of the Tiros 1 experiment, meteorologists were not ready to discard ground-based and airborne weather systems in favor of satellites alone. Such systems could not provide data about pressure, humidity, and temperature, for example. Tiros 1 did, however, introduce weather satellites as a necessary supplement to ground-based and airborne systems for large-scale monitoring of weather systems and storms. Satellites could provide more reliable and expansive coverage at a far lower cost than a large contingent of aircraft. Tiros 1, which was followed by nine similar spacecraft, paved the way for modern weather satellite systems.

See also Artificial satellite; Communications satellite; Cruise missile; Radio interferometer; Rocket.
Further Reading

Fishman, Jack, and Robert Kalish. The Weather Revolution: Innovations and Imminent Breakthroughs in Accurate Forecasting. New York: Plenum Press, 1994.

Kahl, Jonathan D. Weather Watch: Forecasting the Weather. Minneapolis, Minn.: Lerner, 1996.

Rao, Krishna P. Weather Satellites: Systems, Data, and Environmental Applications. Boston: American Meteorological Society, 1990.

Artist's depiction of a weather satellite. (PhotoDisc)
Xerography

The invention: Process that makes identical copies of documents with a system of lenses, mirrors, electricity, chemicals that conduct electricity in bright light, and dry inks (toners) that fuse to paper by means of heat.

The people behind the invention:
Chester F. Carlson (1906-1968), an American inventor
Otto Kornei (1903- ), a German physicist and engineer
Xerography, Xerography, Everywhere

The term xerography is derived from the Greek for "dry writing." The process of xerography was invented by an American, Chester F. Carlson, who made the first xerographic copy of a document in 1938. Before the development of xerography, the preparation of copies of documents was often difficult and tedious. Most often, unclear carbon copies of typed documents were the only available medium of information transfer.

The development of xerography led to the birth of the giant Xerox Corporation, and the term xerographic was soon shortened to Xerox. The process of xerography makes identical copies of a document by using lens systems, mirrors, electricity, chemicals that conduct electricity in bright light ("semiconductors"), and dry inks called "toners" that are fused to copy paper by means of heat. The process makes it easy to produce identical copies of a document quickly and cheaply. In addition, xerography has led to huge advances in information transfer, the increased use of written documents, and rapid decision-making in all areas of society. Xeroxing can produce both color and black-and-white copies.
From the First Xerox Copy to Modern Photocopies

On October 22, 1938, after years of effort, Chester F. Carlson produced the first Xerox copy. Reportedly, his efforts grew out of his 1930's job in the patent department of the New York firm P. R. Mallory and Company. He was looking for a quick, inexpensive method for making copies of patent diagrams and other patent specifications. Much of Carlson's original work was conducted in the kitchen of his New York City apartment or in a room behind a beauty parlor in Astoria, Long Island. It was in Astoria that Carlson, with the help of Otto Kornei, produced the first Xerox copy (of the inscription "10-22-38 Astoria") on waxed paper.

The first practical method of xerography used the element selenium, a substance that conducts electricity only when it is exposed to light. The prototype Xerox copying machines were developed as a result of the often frustrating, nerve-wracking, fifteen-year collaboration of Carlson, scientists and engineers at the Battelle Memorial Institute in Columbus, Ohio, and the Haloid Company of Rochester, New York. The Haloid Company financed the effort after 1947, based on an evaluation made by an executive, John H. Dessauer. In return, the company obtained the right to manufacture and market Xerox machines. The company, which was originally a manufacturer of photographic paper, evolved into the giant Xerox Corporation. Carlson became very wealthy as a result of the royalties and dividends paid to him by the company.
Early xerographic machines operated in several stages. First, the document to be copied was positioned above a mirror so that its image, lit by a flash lamp and projected by a lens, was reflected onto a drum coated with electrically charged selenium. Wherever dark sections of the document's image were reflected, the selenium coating retained its positive charge. Where the image was light, the charge of the selenium was lost, because of the photoactive properties of the selenium.

Next, the drum was dusted with a thin layer of a negatively charged black powder called a "toner." Toner particles stuck to positively charged dark areas of the drum and produced a visible image on the drum. Then, Xerox copy paper, itself positively charged, was put in contact with the drum, where it picked up negatively charged toner. Finally, an infrared lamp heated the paper and the toner, fusing the toner to the paper and completing the copying process.
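The charge, exposure, toning, and transfer stages described above can be sketched as a toy simulation. The function name and the one-line binary "document" are illustrative assumptions for this sketch only, not actual copier software:

```python
# Toy model of the xerographic copying cycle: a "document" is a row of
# pixels, where 1 = dark ink and 0 = white paper.

def xerox_copy(document):
    """Simulate charge, exposure, toning, and transfer for one scan line."""
    # 1. The selenium drum starts uniformly positively charged.
    drum_charge = [1 for _ in document]

    # 2. Exposure: light reflected from white areas discharges the drum;
    #    dark areas reflect no light, so their charge is retained.
    for i, pixel in enumerate(document):
        if pixel == 0:          # white area -> light reaches drum -> charge lost
            drum_charge[i] = 0

    # 3. Toning: negatively charged toner sticks only where charge remains.
    toner = [1 if charge else 0 for charge in drum_charge]

    # 4. Transfer and fusing: the toner pattern is pulled onto the
    #    positively charged paper and fused by heat.
    return toner

original = [0, 1, 1, 0, 1, 0]    # a tiny scan line of white and dark pixels
print(xerox_copy(original))      # prints [0, 1, 1, 0, 1, 0], an exact replica
```

Because every stage preserves the dark/light pattern exactly, the output equals the input, which is the point of the process: each copy is a faithful replica of the original.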
In ensuing years, the Xerox Corporation engineered many changes in the materials and mechanics of Xerox copiers. For example, the semiconductors and toners were changed, which increased both the quality of copies and the safety of the copying process. In addition, auxiliary lenses of varying focal length were added, along with other features, which made it possible to produce enlarged or reduced copies. Furthermore, modification of the mechanical and chemical properties of the components of the system made it possible to produce thousands of copies per hour, sort them, and staple them.

Chester F. Carlson

The copying machine changed Chester Floyd Carlson's life even before he invented it. While he was experimenting with photochemicals in his apartment, the building owner's daughter came by to complain about the stench Carlson was creating. However, she found Carlson himself more compelling than her complaints and married him not long afterward. Soon Carlson transferred his laboratory to a room behind his mother-in-law's beauty parlor, where he devoted ten dollars a month from his meager wages to spend on research.

Born in Seattle, Washington, in 1906, Carlson learned early to husband his resources, set his goals high, and never give up. Both his father and mother were sickly, and so after he was fourteen, Carlson was the family's main breadwinner. His relentless drive and native intelligence got him through high school and into a community college, where an impressed teacher inspired him to go even further—into the California Institute of Technology. After he graduated, he worked for General Electric but lost his job during the layoffs caused by the Great Depression. In 1933 he hired on with P. R. Mallory Company, an electrical component manufacturer, which, although not interested in his invention, at least paid him enough in wages to keep going.

His thirteen-year crusade to invent a copier and then find a manufacturer to build it ended just as Carlson was nearly broke. In 1946 Haloid Corporation licensed the rights to Carlson's copying machine, but even then the invention did not become an important part of American communications culture until the company marketed the Xerox 914 in 1960. The earnings for Xerox Corporation (as it was called after 1961) leapt from $33 million to more than $500 million in the next six years, and Carlson became enormously wealthy. He won the Inventor of the Year Award in 1964 and the Horatio Alger Award in 1966. Before he died in 1968, he remembered the hardships of his youth by donating $100 million to research organizations and charitable foundations.
The next development was color Xerox copying. Color systems use the same process steps that the black-and-white systems use, but the document exposure and toning operations are repeated three times to yield the three overlaid colored layers (yellow, magenta, and cyan) that are used to produce multicolored images in any color printing process. To accomplish this, blue, green, and red filters are rotated in front of the copier's lens system. This action produces three different semiconductor images on three separate rollers. Next, yellow, magenta, and cyan toners are used—each on its own roller—to yield three images. Finally, all three images are transferred to one sheet of paper, which is heated to produce the multicolored copy. The complex color procedure is slower and much more expensive than the black-and-white process.
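The three-pass scheme works because each filter records one additive primary, and the complementary toner absorbs that primary on paper: the blue-filtered exposure governs the yellow layer, green governs magenta, and red governs cyan. The sketch below illustrates that standard subtractive-color relationship for a single pixel; the function name is a hypothetical convenience for this illustration, not part of any copier's actual control logic:

```python
def toner_layers(rgb):
    """Decompose an RGB pixel (each channel 0-255) into the yellow,
    magenta, and cyan toner amounts a three-pass color copier deposits."""
    r, g, b = rgb
    # Each toner absorbs its complementary additive primary, so the
    # amount of toner needed is the amount of that primary missing
    # from the original color.
    yellow = 255 - b    # recorded through the blue filter
    magenta = 255 - g   # recorded through the green filter
    cyan = 255 - r      # recorded through the red filter
    return yellow, magenta, cyan

# A pure-red pixel needs full yellow and magenta toner but no cyan;
# a white pixel needs no toner at all.
print(toner_layers((255, 0, 0)))      # prints (255, 255, 0)
print(toner_layers((255, 255, 255)))  # prints (0, 0, 0)
```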
Impact

The quick, inexpensive copying of documents is commonly performed worldwide. Memoranda that must be distributed to hundreds of business employees can now be copied in moments, whereas in the past such a process might have occupied typists for days and cost hundreds of dollars. Xerox copying also has the advantage that each copy is an exact replica of the original; no new errors can be introduced, as was the case when documents had to be retyped. Xerographic techniques are also used to reproduce X rays and many other types of medical and scientific data, and the facsimile (fax) machines that are now used to send documents from one place to another over telephone lines are a variation of the Xerox process.
All this convenience is not without some problems: The ease of photocopying has made it possible to reproduce copyrighted publications. Few students at libraries, for example, think twice about copying portions of books, since it is easy and inexpensive to do so. However, doing so can be similar to stealing, according to the law. With the advent of color photocopying, an even more alarming problem has arisen: Thieves are now able to use this technology to create counterfeit money and checks. Researchers have therefore been seeking ways to make such important documents impossible to copy.
See also Fax machine; Instant photography; Laser-diode recording process.

Further Reading

Kelley, Neil D. "Xerography: The Greeks Had a Word for It." Infosystems 24, no. 1 (January, 1977).

McClain, Dylan L. "Duplicate Efforts." New York Times (November 30, 1998).

Mort, J. The Anatomy of Xerography: Its Invention and Evolution. Jefferson, N.C.: McFarland, 1989.
X-ray crystallography

The invention: Technique for using X rays to determine the crystal structures of many substances.

The people behind the invention:
Sir William Lawrence Bragg (1890-1971), the son of Sir William Henry Bragg and cowinner of the 1915 Nobel Prize in Physics
Sir William Henry Bragg (1862-1942), an English mathematician and physicist and cowinner of the 1915 Nobel Prize in Physics
Max von Laue (1879-1960), a German physicist who won the 1914 Nobel Prize in Physics
Wilhelm Conrad Röntgen (1845-1923), a German physicist who won the 1901 Nobel Prize in Physics
René-Just Haüy (1743-1822), a French mathematician and mineralogist
Auguste Bravais (1811-1863), a French physicist
The Elusive Crystal

A crystal is a body that is formed once a chemical substance has solidified. It is uniformly shaped, with angles and flat surfaces that form a network based on the internal structure of the crystal's atoms. Determining what these internal crystal structures look like is the goal of the science of X-ray crystallography. To do this, it studies the precise arrangements into which the atoms are assembled.

Central to this study is the principle of X-ray diffraction. This technique involves the deliberate scattering of X rays as they are shot through a crystal, an act that interferes with their normal path of movement. The way in which the atoms are spaced and arranged in the crystal determines how these X rays are reflected off them while passing through the material. The X rays thus scattered form a telltale interference pattern. By studying this pattern, scientists can discover variations in the crystal structure.
The development of X-ray crystallography in the early twentieth century helped to answer two major scientific questions: What are X rays? and What are crystals? It gave birth to a new technology for the identification and classification of crystalline substances.

From studies of large, natural crystals, chemists and geologists had established the elements of symmetry through which one could classify, describe, and distinguish various crystal shapes. René-Just Haüy, about a century before, had demonstrated that diverse shapes of crystals could be produced by the repetitive stacking of tiny solid cubes.

Auguste Bravais later showed, through mathematics, that all crystal forms could be built from a repetitive stacking of three-dimensional arrangements of points (lattice points) into "space lattices," but no one had ever been able to prove that matter really was arranged in space lattices. Scientists did not know if the tiny building blocks modeled by space lattices actually were solid matter throughout, like Haüy's cubes, or if they were mostly empty space, with solid matter located only at the lattice points described by Bravais.

With the disclosure of the atomic model of Danish physicist Niels Bohr in 1913, determining the nature of the building blocks of crystals took on a special importance. If crystal structure could be shown to consist of atoms at lattice points, then the Bohr model would be supported, and science then could abandon the theory that matter was totally solid.
X Rays Explain Crystal Structure

In 1912, Max von Laue first used X rays to study crystalline matter. Laue had the idea that irradiating a crystal with X rays might cause diffraction. He tested this idea and found that X rays were scattered by the crystals in various directions, revealing on a photographic plate a pattern of spots that depended on the orientation and the symmetry of the crystal.

The experiment confirmed in one stroke that crystals were not solid and that their matter consisted of atoms occupying lattice sites with substantial space in between. Further, the atomic arrangements of crystals could serve to diffract X rays. Laue received the 1914 Nobel Prize in Physics for his discovery of the diffraction of X rays in crystals.
Sir William Henry Bragg and Sir William Lawrence Bragg

William Henry Bragg, senior member of one of the most illustrious father-son scientific teams in history, was born in Cumberland, England, in 1862. Talented at mathematics, he studied that field at Trinity College, Cambridge, and physics at the Cavendish Laboratory, then moved into a professorship at the University of Adelaide in Australia. Despite an underequipped laboratory, he proved that the atom is not a solid body, and his work with X rays attracted the attention of Ernest Rutherford in England, who helped him win a professorship at the University of Leeds in 1908.

William Henry Bragg. (Library of Congress)

By then his eldest son, William Lawrence Bragg, was showing considerable scientific abilities of his own. Born in Adelaide in 1890, he also attended Trinity College, Cambridge, and performed research at the Cavendish. It was while there that father and son worked together to establish the specialty of X-ray crystallography. When they shared the 1915 Nobel Prize in Physics for their work, the son was only twenty-five years old—the youngest person ever to receive a Nobel Prize in any field.

The younger Bragg was also an artillery officer in France during World War I. Meanwhile, his father worked for the Royal Admiralty. The hydrophone he invented to detect submarines underwater earned him a knighthood in 1920. The father moved to University College, London, and became director of the Royal Institution. His popular lectures about the latest scientific developments made him famous among the public, while his elevation to president of the Royal Society in 1935 placed him among the most influential scientists in the world. He died in 1942.

The son taught at the University of Manchester in 1919 and then in 1938 became director of the National Physical Laboratory and professor of physics at the Cavendish. Following the father's example, he became an administrator and professor at the Royal Institution, where he also distinguished himself with his popular lectures. He encouraged research using X-ray crystallography, including the work that unlocked the structure of deoxyribonucleic acid (DNA). Knighted in 1941, he became a royal Companion of Honor in 1967. He died in 1971.
Still, the diffraction of X rays was not yet a proved scientific fact. Sir William Henry Bragg contributed the final proof by passing one of the diffracted beams through a gas and achieving ionization of the gas, the same effect that true X rays would have caused. He also used the spectrometer he built for this purpose to detect and measure specific wavelengths of X rays and to note which orientations of crystals produced the strongest reflections. He noted that X rays, like visible light, occupy a definite part of the electromagnetic spectrum. Yet most of Bragg's work focused on actually using X rays to deduce crystal structures.

Sir Lawrence Bragg was also deeply interested in this new phenomenon. In 1912, he had the idea that the pattern of spots was an indication that the X rays were being reflected from the planes of atoms in the crystal. If that were true, Laue pictures could be used to obtain information about the structures of crystals. Bragg developed an equation that described the angles at which X rays would be most effectively diffracted by a crystal. This was the start of the X-ray analysis of crystals.
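The equation referred to is now known as Bragg's law, n λ = 2d sin θ: X rays of wavelength λ reflect strongly from parallel atomic planes spaced a distance d apart only at angles θ satisfying this relation. A minimal numeric sketch follows; the particular wavelength and spacing values are standard modern figures chosen for illustration, not measurements from the Braggs' own experiments:

```python
import math

def bragg_angle(wavelength, d_spacing, order=1):
    """Return the Bragg angle theta in degrees satisfying
    n * wavelength = 2 * d * sin(theta)."""
    sin_theta = order * wavelength / (2 * d_spacing)
    if sin_theta > 1:
        # No angle can satisfy the condition: n*wavelength exceeds 2*d.
        raise ValueError("no diffraction possible for this order")
    return math.degrees(math.asin(sin_theta))

# Copper K-alpha X rays (0.154 nm) striking crystal planes spaced
# 0.282 nm apart, the spacing of sodium chloride, a crystal the
# Braggs analyzed:
theta = bragg_angle(0.154, 0.282)
print(round(theta, 1))   # prints 15.8 (first-order reflection angle)
```

Measuring such strong-reflection angles for known wavelengths, and inverting the formula to solve for d, is precisely how the Braggs deduced the spacings and arrangements of atomic planes in a crystal.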
Henry Bragg had at first used his spectrometer to try to determine whether X rays had a particulate nature. It soon became evident, however, that the device was a far more powerful way of analyzing crystals than the Laue photograph method had been. Not long afterward, father and son joined forces and founded the new science of X-ray crystallography. By experimenting with this technique, Lawrence Bragg came to believe that if the lattice models of Bravais applied to actual crystals, a crystal structure could be viewed as being composed of atoms arranged in a pattern consisting of a few sets of flat, regularly spaced, parallel planes.

Diffraction became the means by which the Braggs deduced the detailed structures of many crystals. Based on these findings, they built three-dimensional scale models out of wire and spheres that made it possible for the nature of crystal structures to be visualized clearly even by nonscientists. Their results were published in the book X-Rays and Crystal Structure (1915).
Impact

The Braggs founded an entirely new discipline, X-ray crystallography, which continues to grow in scope and application. Of particular importance was the early discovery that atoms, rather than molecules, determine the nature of crystals. X-ray spectrometers of the type developed by the Braggs were used by other scientists to gain insights into the nature of the atom, particularly the innermost electron shells. The tool made possible the timely validation of some of Bohr's major concepts about the atom.

X-ray diffraction became a cornerstone of the science of mineralogy. The Braggs, chemists such as Linus Pauling, and a number of mineralogists used the tool to do pioneering work in deducing the structures of all major mineral groups. X-ray diffraction became the definitive method of identifying crystalline materials.
Metallurgy progressed from a technology to a science as metallurgists became able, for the first time, to deduce the structural order of various alloys at the atomic level. Diffracted X rays were applied in the field of biology, particularly at the Cavendish Laboratory under the direction of Lawrence Bragg. The tool proved to be essential for deducing the structures of hemoglobin, proteins, viruses, and eventually the double-helix structure of deoxyribonucleic acid (DNA).

See also Field ion microscope; Geiger counter; Holography; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Thermal cracking process; Ultramicroscope.

Further Reading

Achilladelis, Basil, and Mary Ellen Bowden. Structures of Life. Philadelphia: The Center, 1989.

Bragg, William Lawrence. The Development of X-Ray Analysis. New York: Hafner Press, 1975.

Thomas, John Meurig. "Architecture of the Invisible." Nature 364 (August 5, 1993).
X-ray image intensifier

The invention: A complex electronic device that increases the intensity of the light in X-ray beams exiting patients, thereby making it possible to read finer details.

The people behind the invention:
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Thomas Alva Edison (1847-1931), an American inventor
W. Edward Chamberlain, an American physician
Thomson Electron Tubes, a French company

Radiologists Need Dark Adaptation

Thomas Alva Edison invented the fluoroscope in 1896, only one year after Wilhelm Conrad Röntgen's discovery of X rays. The primary function of the fluoroscope is to create images of the internal structures and fluids in the human body. During fluoroscopy, the radiologist who performs the procedure views a continuous image of the motion of the internal structures.
Although much progress was made during the first half of the twentieth century in recording X-ray images on plates and film, fluoroscopy lagged behind. In conventional fluoroscopy, a radiologist observed an image on a dim fluoroscopic screen. In the same way that it is more difficult to read a telephone book in dim illumination than in bright light, it is much harder to interpret a dim fluoroscopic image than a bright one. In the early years of fluoroscopy, the radiologist's eyes had to be accustomed to dim illumination for at least fifteen minutes before performing fluoroscopy. "Dark adaptation" was achieved by wearing red goggles under normal illumination, which reduced the amount of light entering the eye and kept the eyes adjusted to the dark.
The human retina contains two kinds of light-sensitive elements: rods and cones. The dim light emitted by the screen of the fluoroscope, even under the best conditions, required the radiologist to see only with the rods, and vision is much less accurate in such circumstances. For normal rod-and-cone vision, the brightness of the screen might have to be increased a thousandfold. Such an increase was impossible; even if an X-ray tube could have been built that was capable of emitting a beam of sufficient intensity, its rays would have been fatal to the patient in less than a minute.
Fluoroscopy in an Undarkened Room<br />
In a classic paper delivered at the December, 1941, meeting of<br />
the Radiological Society of North America, Dr. W. Edward Chamberlain<br />
of Temple University Medical School proposed applying to<br />
fluoroscopy the techniques of image amplification (also known as<br />
image intensification) that had already been adapted for use in the<br />
electron microscope and in television. The idea was not original<br />
with him. Four or five years earlier, Irving Langmuir of General<br />
Electric Company had applied for a patent for a device that would<br />
intensify a fluoroscopic image. “It is a little hard to understand the<br />
delay in the creation of a practical device,” Chamberlain noted.<br />
“Perhaps what is needed is a realization by the physicists and the<br />
engineers of the great need for brighter fluoroscopic images and<br />
the great advantage to humanity which their arrival would entail.”<br />
Chamberlain’s brilliant analysis provided precisely that awareness.<br />
World War II delayed the introduction of fluoroscopic image<br />
intensification, but during the 1950’s, a number of image intensifiers<br />
based on the principles Chamberlain had outlined came on the<br />
market.<br />
The image-intensifier tube is a complex electronic device that receives<br />
the X-ray beam exiting the patient, converts it into light, and<br />
increases the intensity of that light. The tube is usually contained in<br />
a glass envelope that provides some structural support <strong>and</strong> maintains<br />
a vacuum. The X rays, after passing through the patient, impinge<br />
on the face of a screen and trigger the ejection of electrons,<br />
which are then speeded up and focused within the tube by means of<br />
electrical fields. When the speeded-up electrons strike the phosphor<br />
at the output end of the tube, they trigger the emission of light photons<br />
that re-create the desired image, which is several thousand<br />
times brighter than the image on a conventional fluoroscopic<br />
screen. The output of the image intensifier can be viewed in an
undarkened room without prior dark adaptation, thus saving the<br />
radiologist much valuable time.<br />
Moving pictures can be taken of the output phosphor of the intensifying<br />
tube or of the television receiver image, and they can be<br />
stored on motion picture film or on magnetic tape. This permanently<br />
records the changing image and makes it possible to reduce<br />
further the dose of radiation that a patient must receive. Instead of<br />
prolonging the radiation exposure while examining various parts of<br />
the image or checking for various factors, the radiologist can record<br />
a relatively short exposure <strong>and</strong> then rerun the motion picture film or<br />
tape as often as necessary to analyze the information that it contains.<br />
The radiation dosage that is administered to the patient can be<br />
reduced to a tenth or even a hundredth of what it had been previously,<br />
and the same amount of diagnostic information or more can<br />
be obtained. The radiation dose that the radiologist receives is reduced<br />
to zero or almost zero. In addition, the combination of the<br />
brighter image <strong>and</strong> the lower radiation dosage administered to the<br />
patient has made it possible for radiologists to develop a number of<br />
important new diagnostic procedures that could not have been accomplished<br />
at all without image intensification.<br />
Impact<br />
The image intensifier that was developed by the French company<br />
Thomson Electron Tubes in 1959 had an input-phosphor diameter,<br />
or field, of four inches. Later on, image intensifiers with<br />
field sizes of up to twenty-two inches became available, making it<br />
possible to create images of much larger portions of the human<br />
anatomy.<br />
The most important contribution made by image intensifiers was<br />
to increase fluoroscopic screen illumination to the level required for<br />
cone vision. These devices have made dark adaptation a thing of the<br />
past. They have also brought the television camera into the fluoroscopic<br />
room <strong>and</strong> opened up a whole new world of fluoroscopy.<br />
See also Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram;<br />
Mammography; Nuclear magnetic resonance;<br />
Ultrasound.
Further Reading<br />
Glasser, Otto. Dr. W. C. Röntgen. 2d ed. Springfield, Ill.: Charles C.<br />
Thomas, 1972.<br />
Isherwood, Ian, Adrian Thomas, and Peter Neil Temple Wells. The<br />
Invisible Light: One Hundred Years of Medical Radiology. Cambridge,<br />
Mass.: Blackwell Science, 1995.<br />
Lewis, Ricki. “Radiation Continuing Concern with Fluoroscopy.”<br />
FDA Consumer 27 (November, 1993).
Yellow fever vaccine<br />
The invention: The first safe vaccine against the virulent yellow fever<br />
virus, which caused some of the deadliest epidemics of the<br />
nineteenth <strong>and</strong> early twentieth centuries.<br />
The people behind the invention:<br />
Max Theiler (1899-1972), a South African microbiologist<br />
Wilbur Augustus Sawyer (1879-1951), an American physician<br />
Hugh Smith (1902-1995), an American physician<br />
A Yellow Flag<br />
Yellow fever, caused by a virus and transmitted by mosquitoes,<br />
infects humans and monkeys. After the bite of the infecting mosquito,<br />
it takes several days before symptoms appear. The onset of<br />
symptoms is abrupt, with headache, nausea, and vomiting. Because<br />
the virus destroys liver cells, yellowing of the skin and eyes is common.<br />
Approximately 10 to 15 percent of patients die after exhibiting<br />
terrifying signs and symptoms. Death usually occurs from liver necrosis<br />
(decay) and liver failure. Those who survive recover completely<br />
and acquire lasting immunity.<br />
At the beginning of the twentieth century, there was no cure for<br />
yellow fever. The best that medical authorities could do was to quarantine<br />
the afflicted. Those quarantined usually flew the warning yellow<br />
flag, which gave the disease its colloquial name, “yellow jack.”<br />
After the Aëdes aegypti mosquito was clearly identified as the carrier<br />
of the disease in 1900, efforts were made to combat the disease<br />
by wiping out the mosquito. Most famous in these efforts were the<br />
American army surgeon Walter Reed and the Cuban physician<br />
Carlos J. Finlay. This strategy was successful in Panama and Cuba<br />
and made possible the construction of the Panama Canal. Still, the<br />
yellow fever virus persisted in the tropics, and the opening of the<br />
Panama Canal increased the danger of its spreading aboard the<br />
ships using this new route.<br />
Moreover, the disease, which was thought to be limited to the<br />
jungles of South and Central America, had begun to spread around
the world to wherever the mosquito Aëdes aegypti could carry the<br />
virus. Mosquito larvae traveled well in casks of water aboard<br />
trading vessels and spread the disease to North America and Europe.<br />
Immunization by Mutation<br />
Max Theiler received his medical education in London. Following<br />
that, he completed a four-month course at the London School of<br />
Hygiene and Tropical Medicine, after which he was invited to come<br />
to the United States to work in the department of tropical medicine<br />
at Harvard University.<br />
While there, Theiler started working to identify the yellow fever<br />
organism. The first problem he faced was finding a suitable<br />
laboratory animal that could be infected with yellow fever. Until<br />
that time, the only animal successfully infected with yellow fever<br />
was the rhesus monkey, which was expensive and difficult to care<br />
for under laboratory conditions. Theiler succeeded in infecting<br />
laboratory mice with the disease by injecting the virus directly into<br />
their brains.<br />
Laboratory work for investigators and assistants coming in contact<br />
with the yellow fever virus was extremely dangerous. At least<br />
six of the scientists at the Yellow Fever Laboratory at the Rockefeller<br />
Institute died of the disease, and many other workers were<br />
infected. In 1929, Theiler was infected with yellow fever; fortunately,<br />
the attack was so mild that he recovered quickly and resumed<br />
his work.<br />
During one set of experiments, Theiler produced successive generations<br />
of the virus. First, he took virus from a monkey that had died<br />
of yellow fever and used it to infect a mouse. Next, he extracted the<br />
virus from that mouse and injected it into a second mouse, repeating<br />
the same procedure using a third mouse. All of them died of encephalitis<br />
(inflammation of the brain). The virus from the third mouse was<br />
then used to infect a monkey. Although the monkey showed signs of<br />
yellow fever, it recovered completely. When Theiler passed the virus<br />
through more mice and then into the abdomen of another monkey,<br />
the monkey showed no symptoms of the disease. The results of these<br />
experiments were published by Theiler in the journal Science.
This article caught the attention of Wilbur Augustus Sawyer, director<br />
of the Yellow Fever Laboratory at the Rockefeller Foundation<br />
International Health Division in New York. Sawyer, who was working<br />
on a yellow fever vaccine, offered Theiler a job at the Rockefeller<br />
Foundation, which Theiler accepted. Theiler’s mouse-adapted, “attenuated”<br />
virus was given to the laboratory workers, along with human<br />
immune serum, to protect them against the yellow fever virus.<br />
This type of vaccination, however, carried the risk of transferring<br />
other diseases, such as hepatitis, in the human serum.<br />
In 1930, Theiler worked with Eugen Haagen, a German bacteriologist,<br />
at the Rockefeller Foundation. The strategy of the Rockefeller<br />
laboratory was a cautious, slow, and steady effort to culture a strain<br />
of the virus so mild as to be harmless to a human but strong enough<br />
to confer a long-lasting immunity. (To “culture” something—tissue<br />
cells, microorganisms, or other living matter—is to grow it in a specially<br />
prepared medium under laboratory conditions.) They started<br />
with a new strain of yellow fever harvested from a twenty-eight-year-old<br />
West African named Asibi; it was later known as the “Asibi<br />
strain.” It was a highly virulent strain that in four to seven days<br />
killed almost all the monkeys that were infected with it. From time<br />
to time, Theiler or his assistant would test the culture on a monkey<br />
and note the speed with which it died.<br />
It was not until April, 1936, that Hugh Smith, Theiler’s assistant,<br />
called to his attention an odd development as noted in the laboratory<br />
records of strain 17D. In its 176th culture, 17D had failed to kill<br />
the test mice. Some had been paralyzed, but even these eventually<br />
recovered. Two monkeys that had received a dose of 17D in their<br />
brains survived a mild attack of encephalitis, but those that had<br />
taken the infection in the abdomen showed no ill effects whatever.<br />
Oddly, subsequent subcultures of the strain killed monkeys and<br />
mice at the usual rate. The only explanation possible was that a mutation<br />
had occurred unnoticed.<br />
The batch of strain 17D was tried over and over again on monkeys<br />
with no harmful effects. Instead, the animals were immunized<br />
effectively. Then it was tried on the laboratory staff, including<br />
Theiler and his wife, Lillian. The batch injected into humans had the<br />
same immunizing effect. Neither Theiler nor anyone else could explain<br />
how the mutation of the virus had occurred. Attempts to duplicate<br />
the experiment, using the same Asibi virus, failed. Still, this was<br />
the first safe vaccine for yellow fever. In June, 1937, Theiler reported<br />
this crucial finding in the Journal of Experimental Medicine.<br />
Impact<br />
Following the discovery of the vaccine, Theiler’s laboratory became<br />
a production plant for the 17D virus. Before World War II<br />
(1939-1945), more than one million vaccination doses were sent to<br />
Brazil and other South American countries. After the United States<br />
entered the war, eight million soldiers were given the vaccine before<br />
being shipped to tropical war zones. In all, approximately fifty million<br />
people were vaccinated in the war years.<br />
Yet although the vaccine, combined with effective mosquito control,<br />
eradicated the disease from urban centers, yellow fever is still<br />
present in large regions of South and Central America and of Africa.<br />
The most severe outbreak of yellow fever ever known occurred<br />
from 1960 to 1962 in Ethiopia; out of one hundred thousand people<br />
infected, thirty thousand died.<br />
The 17D yellow fever vaccine prepared by Theiler in 1937 continues<br />
to be the only vaccine used by the World Health Organization,<br />
more than fifty years after its discovery. There is a continuous effort<br />
by that organization to prevent infection by immunizing the people<br />
living in tropical zones.<br />
See also Antibacterial drugs; Penicillin; Polio vaccine (Sabin); Polio<br />
vaccine (Salk); Salvarsan; Tuberculosis vaccine; Typhus vaccine.<br />
Further Reading<br />
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World<br />
History. San Mateo, Calif.: Bluewood Books, 1998.<br />
Delaporte, François. The History of Yellow Fever: An Essay on the Birth<br />
of Tropical Medicine. Cambridge, Mass.: MIT Press, 1991.<br />
Theiler, Max, and Wilbur G. Downs. The Arthropod-borne Viruses of<br />
Vertebrates: An Account of the Rockefeller Foundation Virus Program,<br />
1951-1970. New Haven, Conn.: Yale University Press, 1973.<br />
Williams, Greer. Virus Hunters. London: Hutchinson, 1960.
Time Line<br />
Date Invention<br />
c. 1900 Electrocardiogram<br />
1900 Brownie camera<br />
1900 Dirigible<br />
1901 Artificial insemination<br />
1901 Vat dye<br />
1901-1904 Silicones<br />
1902 Ultramicroscope<br />
1903 Airplane<br />
1903 Disposable razor<br />
1903-1909 Laminated glass<br />
1904 Alkaline storage battery<br />
1904 Photoelectric cell<br />
1904 Vacuum tube<br />
1905 Blood transfusion<br />
1905-1907 Plastic<br />
1906 Gyrocompass<br />
1906 Radio<br />
1906-1911 Tungsten filament<br />
1907 Autochrome plate<br />
1908 Ammonia<br />
1908 Geiger counter<br />
1908 Interchangeable parts<br />
1908 Oil-well drill bit<br />
1908 Vacuum cleaner<br />
1910 Radio crystal sets<br />
1910 Salvarsan<br />
1910 Washing machine<br />
1910-1939 Electric refrigerator<br />
1912 Color film<br />
1912 Diesel locomotive<br />
1912-1913 Solar thermal engine<br />
1912-1914 Artificial kidney<br />
1912-1915 X-ray crystallography<br />
1913 Assembly line<br />
1913 Geothermal power<br />
1913 Mammography<br />
1913 Thermal cracking process<br />
1915 Long-distance telephone<br />
1915 Propeller-coordinated machine gun<br />
1915 Pyrex glass<br />
1915 Long-distance radiotelephony<br />
1916-1922 Internal combustion engine<br />
1917 Food freezing<br />
1917 Sonar<br />
1919 Mass spectrograph<br />
1921 Tuberculosis vaccine<br />
1923 Rotary dial telephone<br />
1923 Television<br />
1923 and 1951 Syphilis test<br />
1924 Ultracentrifuge<br />
1925-1930 Differential analyzer<br />
1926 Buna rubber<br />
1926 Rocket<br />
1926 Talking motion pictures<br />
1927 Heat pump<br />
1928 Pap test<br />
1929 Electric clock<br />
1929 Electroencephalogram<br />
1929 Iron lung<br />
1930’s Contact lenses<br />
1930’s Vending machine slug rejector<br />
1930 Refrigerant gas<br />
1930 Typhus vaccine<br />
1930-1935 FM radio<br />
1931 Cyclotron<br />
1931 Electron microscope<br />
1931 Neoprene<br />
1932 Fuel cell<br />
1932-1935 Antibacterial drugs
1933-1954 Freeze-drying<br />
1934 Bathysphere<br />
1935 Nylon<br />
1935 Radar<br />
1935 Richter scale<br />
1936 Fluorescent lighting<br />
1937 Yellow fever vaccine<br />
1938 Polystyrene<br />
1938 Teflon<br />
1938 Xerography<br />
1940’s Carbon dating<br />
1940 Color television<br />
1940 Penicillin<br />
1940-1955 Microwave cooking<br />
1941 Polyester<br />
1941 Touch-tone telephone<br />
1941 Turbojet<br />
1942 Infrared photography<br />
1942-1950 Orlon<br />
1943 Aqualung<br />
1943 Colossus computer<br />
1943 Nuclear reactor<br />
1943-1946 ENIAC computer<br />
1944 Mark I calculator<br />
1944 V-2 rocket<br />
1945 Atomic bomb<br />
1945 Tupperware<br />
1946 Cloud seeding<br />
1946 Synchrocyclotron<br />
1947 Holography<br />
1948 Atomic clock<br />
1948 Broadcaster guitar<br />
1948 Instant photography<br />
1948-1960 Bathyscaphe<br />
1949 BINAC computer<br />
1949 Community antenna television<br />
1950 Cyclamate<br />
1950-1964 In vitro plant culture<br />
1951 Breeder reactor<br />
1951 UNIVAC computer<br />
1951-1952 Hydrogen bomb<br />
1952 Amniocentesis<br />
1952 Hearing aid<br />
1952 Polio vaccine (Salk)<br />
1952 Reserpine<br />
1952 Steelmaking process<br />
1952-1956 Field ion microscope<br />
1953 Artificial hormone<br />
1953 Heart-lung machine<br />
1953 Polyethylene<br />
1953 Synthetic amino acid<br />
1953 Transistor<br />
1953-1959 Hovercraft<br />
mid-1950’s Synthetic RNA<br />
1954 Photovoltaic cell<br />
1955 Radio interferometer<br />
1955-1957 FORTRAN programming language<br />
1956 Birth control pill<br />
1957 Artificial satellite<br />
1957 Nuclear power plant<br />
1957 Polio vaccine (Sabin)<br />
1957 Transistor radio<br />
1957 Velcro<br />
1957-1972 Pacemaker<br />
1958 Ultrasound<br />
1959 Atomic-powered ship<br />
1959 COBOL computer language<br />
1959 IBM Model 1401 computer<br />
1959 X-ray image intensifier<br />
1960’s Rice and wheat strains<br />
1960’s Virtual machine<br />
1960 Laser
1960 Memory metal<br />
1960 Telephone switching<br />
1960 Weather satellite<br />
1961 SAINT<br />
1962 Communications satellite<br />
1962 Laser eye surgery<br />
1962 Robot (industrial)<br />
1963 Cassette recording<br />
1964 Bullet train<br />
1964 Electronic synthesizer<br />
1964-1965 BASIC programming language<br />
1966 Tidal power plant<br />
1967 Coronary artery bypass surgery<br />
1967 Dolby noise reduction<br />
1967 Neutrino detector<br />
1967 Synthetic DNA<br />
1969 Bubble memory<br />
1969 The Internet<br />
1969-1983 Optical disk<br />
1970 Floppy disk<br />
1970 Videocassette recorder<br />
1970-1980 Virtual reality<br />
1972 CAT scanner<br />
1972 Pocket calculator<br />
1975-1979 Laser-diode recording process<br />
1975-1990 Fax machine<br />
1976 Supercomputer<br />
1976 Supersonic passenger plane<br />
1976-1988 Stealth aircraft<br />
1977 Apple II computer<br />
1977 Fiber-optics<br />
1977-1985 Cruise missile<br />
1978 Cell phone<br />
1978 Compressed-air-accumulating power plant<br />
1978 Nuclear magnetic resonance<br />
1978-1981 Scanning tunneling microscope
1979 Artificial blood<br />
1979 Walkman cassette player<br />
1980’s CAD/CAM<br />
1981 Personal computer<br />
1982 Abortion pill<br />
1982 Artificial heart<br />
1982 Genetically engineered insulin<br />
1982 Robot (household)<br />
1983 Artificial chromosome<br />
1983 Aspartame<br />
1983 Compact disc<br />
1983 Hard disk<br />
1983 Laser vaporization<br />
1985 Genetic “fingerprinting”<br />
1985 Tevatron accelerator<br />
1997 Cloning<br />
2000 Gas-electric car
Topics by Category<br />
Agriculture<br />
Artificial insemination<br />
Cloning<br />
Cloud seeding<br />
In vitro plant culture<br />
Rice and wheat strains<br />
Astronomy<br />
Artificial satellite<br />
Communications satellite<br />
Neutrino detector<br />
Radio interferometer<br />
Weather satellite<br />
Aviation and space<br />
Airplane<br />
Artificial satellite<br />
Communications satellite<br />
Dirigible<br />
Radio interferometer<br />
Rocket<br />
Stealth aircraft<br />
Turbojet<br />
V-2 rocket<br />
Weather satellite<br />
Biology<br />
Artificial chromosome<br />
Artificial insemination<br />
Cloning<br />
Genetic “fingerprinting”<br />
In vitro plant culture<br />
Synthetic amino acid<br />
Synthetic DNA<br />
Synthetic RNA<br />
Ultracentrifuge<br />
Chemistry<br />
Ammonia<br />
Fuel cell<br />
Refrigerant gas<br />
Silicones<br />
Thermal cracking process<br />
Ultracentrifuge<br />
Ultramicroscope<br />
Vat dye<br />
X-ray crystallography<br />
Communications<br />
Cassette recording<br />
Cell phone<br />
Color television<br />
Communications satellite<br />
Community antenna television<br />
Dolby noise reduction<br />
Electronic synthesizer<br />
Fax machine<br />
Fiber-optics<br />
FM radio<br />
Hearing aid<br />
Laser-diode recording process<br />
Long-distance radiotelephony<br />
Long-distance telephone
Radar<br />
Radio<br />
Radio crystal sets<br />
Rotary dial telephone<br />
Sonar<br />
Talking motion pictures<br />
Telephone switching<br />
Television<br />
Touch-tone telephone<br />
Transistor radio<br />
Vacuum tube<br />
Videocassette recorder<br />
Xerography<br />
Computer science<br />
Apple II computer<br />
BASIC programming language<br />
BINAC computer<br />
Bubble memory<br />
COBOL computer language<br />
Colossus computer<br />
Computer chips<br />
Differential analyzer<br />
ENIAC computer<br />
Floppy disk<br />
FORTRAN programming<br />
language<br />
Hard disk<br />
IBM Model 1401 computer<br />
Internet<br />
Mark I calculator<br />
Optical disk<br />
Personal computer<br />
Pocket calculator<br />
SAINT<br />
Supercomputer<br />
UNIVAC computer<br />
Virtual machine<br />
Virtual reality<br />
Consumer products<br />
Apple II computer<br />
Aspartame<br />
Birth control pill<br />
Broadcaster guitar<br />
Brownie camera<br />
Cassette recording<br />
Cell phone<br />
Color film<br />
Color television<br />
Compact disc<br />
Cyclamate<br />
Disposable razor<br />
Electric refrigerator<br />
FM radio<br />
Gas-electric car<br />
Hearing aid<br />
Instant photography<br />
Internet<br />
Nylon<br />
Orlon<br />
Personal computer<br />
Pocket calculator<br />
Polyester<br />
Pyrex glass<br />
Radio<br />
Rotary dial telephone<br />
Teflon<br />
Television<br />
Touch-tone telephone<br />
Transistor radio<br />
Tupperware<br />
Vacuum cleaner<br />
Velcro
Videocassette recorder<br />
Walkman cassette player<br />
Washing machine<br />
Drugs and vaccines<br />
Abortion pill<br />
Antibacterial drugs<br />
Artificial hormone<br />
Birth control pill<br />
Genetically engineered insulin<br />
Penicillin<br />
Polio vaccine (Sabin)<br />
Polio vaccine (Salk)<br />
Reserpine<br />
Salvarsan<br />
Tuberculosis vaccine<br />
Typhus vaccine<br />
Yellow fever vaccine<br />
Earth science<br />
Aqualung<br />
Bathyscaphe<br />
Bathysphere<br />
Cloud seeding<br />
Richter scale<br />
X-ray crystallography<br />
Electronics<br />
Cassette recording<br />
Cell phone<br />
Color television<br />
Communications satellite<br />
Compact disc<br />
Dolby noise reduction<br />
Electronic synthesizer<br />
Fax machine<br />
Fiber-optics<br />
FM radio<br />
Hearing aid<br />
Laser-diode recording process<br />
Long-distance radiotelephony<br />
Long-distance telephone<br />
Radar<br />
Radio<br />
Radio crystal sets<br />
Rotary dial telephone<br />
Sonar<br />
Telephone switching<br />
Television<br />
Touch-tone telephone<br />
Transistor<br />
Transistor radio<br />
Vacuum tube<br />
Videocassette recorder<br />
Walkman cassette player<br />
Xerography<br />
Energy<br />
Alkaline storage battery<br />
Breeder reactor<br />
Compressed-air-accumulating<br />
power plant<br />
Fluorescent lighting<br />
Fuel cell<br />
Gas-electric car<br />
Geothermal power<br />
Heat pump<br />
Nuclear power plant<br />
Nuclear reactor<br />
Oil-well drill bit<br />
Photoelectric cell<br />
Photovoltaic cell
Solar thermal engine<br />
Tidal power plant<br />
Vacuum tube<br />
Engineering<br />
Airplane<br />
Assembly line<br />
Bullet train<br />
CAD/CAM<br />
Differential analyzer<br />
Dirigible<br />
ENIAC computer<br />
Gas-electric car<br />
Internal combustion engine<br />
Oil-well drill bit<br />
Robot (household)<br />
Robot (industrial)<br />
Steelmaking process<br />
Tidal power plant<br />
Vacuum cleaner<br />
Washing machine<br />
Exploration<br />
Aqualung<br />
Bathyscaphe<br />
Bathysphere<br />
Carbon dating<br />
Neutrino detector<br />
Radar<br />
Radio interferometer<br />
Sonar<br />
Food science<br />
Aspartame<br />
Cyclamate<br />
Electric refrigerator<br />
Food freezing<br />
Freeze-drying<br />
Genetically engineered insulin<br />
In vitro plant culture<br />
Microwave cooking<br />
Polystyrene<br />
Refrigerant gas<br />
Rice and wheat strains<br />
Teflon<br />
Tupperware<br />
Genetic engineering<br />
Amniocentesis<br />
Artificial chromosome<br />
Artificial insemination<br />
Cloning<br />
Genetic “fingerprinting”<br />
Genetically engineered insulin<br />
In vitro plant culture<br />
Rice and wheat strains<br />
Synthetic amino acid<br />
Synthetic DNA<br />
Synthetic RNA<br />
Home products<br />
Cell phone<br />
Color television<br />
Community antenna television<br />
Disposable razor<br />
Electric refrigerator<br />
Fluorescent lighting<br />
FM radio<br />
Microwave cooking<br />
Radio<br />
Refrigerant gas
Robot (household)<br />
Rotary dial telephone<br />
Television<br />
Touch-tone telephone<br />
Transistor radio<br />
Tungsten filament<br />
Tupperware<br />
Vacuum cleaner<br />
Videocassette recorder<br />
Washing machine<br />
Manufacturing<br />
Assembly line<br />
CAD/CAM<br />
Interchangeable parts<br />
Memory metal<br />
Polystyrene<br />
Steelmaking process<br />
Materials<br />
Buna rubber<br />
Contact lenses<br />
Disposable razor<br />
Laminated glass<br />
Memory metal<br />
Neoprene<br />
Nylon<br />
Orlon<br />
Plastic<br />
Polyester<br />
Polyethylene<br />
Polystyrene<br />
Pyrex glass<br />
Silicones<br />
Steelmaking process<br />
Teflon<br />
Tungsten filament<br />
Velcro<br />
Measurement and detection<br />
Amniocentesis<br />
Atomic clock<br />
Carbon dating<br />
CAT scanner<br />
Cyclotron<br />
Electric clock<br />
Electrocardiogram<br />
Electroencephalogram<br />
Electron microscope<br />
Geiger counter<br />
Gyrocompass<br />
Mass spectrograph<br />
Neutrino detector<br />
Radar<br />
Radio interferometer<br />
Richter scale<br />
Scanning tunneling microscope<br />
Sonar<br />
Synchrocyclotron<br />
Tevatron accelerator<br />
Ultracentrifuge<br />
Ultramicroscope<br />
Vending machine slug rejector<br />
X-ray crystallography<br />
Medical procedures<br />
Amniocentesis<br />
Blood transfusion<br />
CAT scanner<br />
Cloning<br />
Coronary artery bypass surgery<br />
Electrocardiogram
Electroencephalogram<br />
Heart-lung machine<br />
Iron lung<br />
Laser eye surgery<br />
Laser vaporization<br />
Mammography<br />
Nuclear magnetic resonance<br />
Pap test<br />
Syphilis test<br />
Ultrasound<br />
X-ray image intensifier<br />
Medicine<br />
Abortion pill<br />
Amniocentesis<br />
Antibacterial drugs<br />
Artificial blood<br />
Artificial heart<br />
Artificial hormone<br />
Artificial kidney<br />
Birth control pill<br />
Blood transfusion<br />
CAT scanner<br />
Contact lenses<br />
Coronary artery bypass surgery<br />
Electrocardiogram<br />
Electroencephalogram<br />
Genetically engineered insulin<br />
Hearing aid<br />
Heart-lung machine<br />
Iron lung<br />
Laser eye surgery<br />
Laser vaporization<br />
Mammography<br />
Nuclear magnetic resonance<br />
Pacemaker<br />
Pap test<br />
Penicillin<br />
Polio vaccine (Sabin)<br />
Polio vaccine (Salk)<br />
Reserpine<br />
Salvarsan<br />
Syphilis test<br />
Tuberculosis vaccine<br />
Typhus vaccine<br />
Ultrasound<br />
X-ray image intensifier<br />
Yellow fever vaccine<br />
Music<br />
Broadcaster guitar<br />
Cassette recording<br />
Dolby noise reduction<br />
Electronic synthesizer<br />
FM radio<br />
Radio<br />
Transistor radio<br />
Photography<br />
Autochrome plate<br />
Brownie camera<br />
Color film<br />
Electrocardiogram<br />
Electron microscope<br />
Fax machine<br />
Holography<br />
Infrared photography<br />
Instant photography<br />
Mammography<br />
Mass spectrograph<br />
Optical disk<br />
Talking motion pictures<br />
Weather satellite
Xerography<br />
X-ray crystallography<br />
Physics<br />
Atomic bomb<br />
Cyclotron<br />
Electron microscope<br />
Field ion microscope<br />
Geiger counter<br />
Hydrogen bomb<br />
Holography<br />
Laser<br />
Mass spectrograph<br />
Scanning tunneling microscope<br />
Synchrocyclotron<br />
Tevatron accelerator<br />
X-ray crystallography<br />
Synthetics<br />
Artificial blood<br />
Artificial chromosome<br />
Artificial heart<br />
Artificial hormone<br />
Artificial insemination<br />
Artificial kidney<br />
Artificial satellite<br />
Aspartame<br />
Buna rubber<br />
Cyclamate<br />
Electronic synthesizer<br />
Genetically engineered insulin<br />
Neoprene<br />
Synthetic amino acid<br />
Synthetic DNA<br />
Synthetic RNA<br />
Vat dye<br />
Transportation<br />
Airplane<br />
Atomic-powered ship<br />
Bullet train<br />
Diesel locomotive<br />
Dirigible<br />
Gas-electric car<br />
Gyrocompass<br />
Hovercraft<br />
Internal combustion engine<br />
Supersonic passenger plane<br />
Turbojet<br />
Weapons technology<br />
Airplane<br />
Atomic bomb<br />
Cruise missile<br />
Dirigible<br />
Hydrogen bomb<br />
Propeller-coordinated machine<br />
gun<br />
Radar<br />
Rocket<br />
Sonar<br />
Stealth aircraft<br />
V-2 rocket
Index<br />
Abbe, Ernst, 678<br />
ABC. See American Broadcasting<br />
Company<br />
Abel, John Jacob, 50, 58, 60<br />
Abortion pill, 1-5<br />
Adams, Ansel, 430<br />
Adams, Thomas, 850<br />
Advanced Research Projects Agency,<br />
446-447<br />
AHD. See Audio high density disc<br />
Aiken, Howard H., 187, 417, 490, 828<br />
Airplane, 6-10<br />
Aldrin, Edwin, 8<br />
Alferov, Zhores I., 320-321<br />
Alkaline storage battery, 11-15<br />
Ambrose, James, 167<br />
American Broadcasting Company, 215<br />
American Telephone and Telegraph<br />
Company, 741<br />
Amery, Julian, 714<br />
Amino acid, synthetic, 724-728<br />
Ammonia, 16-19; and atomic clock, 81-<br />
82; as a refrigerant, 290-291, 345,<br />
631, 746<br />
Amniocentesis, 20-23<br />
Anable, Gloria Hollister, 100<br />
Anschütz-Kaempfe, Hermann, 382<br />
Antibacterial drugs, 24-27<br />
Antibiotics, 24-27, 47, 813; penicillin,<br />
553-557, 676, 738<br />
Apple II computer, 28-32<br />
Appliances. See Electric clock; Electric<br />
refrigerator; Microwave cooking;<br />
Refrigerant gas; Robot (household);<br />
Vacuum cleaner; Washing machine<br />
Aqualung, 33-37<br />
Archaeology, 158-162<br />
Archimedes, 687<br />
Armstrong, Edwin H., 339<br />
Armstrong, Neil, 8<br />
Arnold, Harold D., 477<br />
Arnold, Henry Harley, 807<br />
ARPAnet, 447-448<br />
Arsonval, Jacques Arsène d’, 351<br />
Arteries and laser vaporization, 472-476<br />
Artificial blood, 38-40<br />
Artificial chromosome, 41-44<br />
Artificial heart, 45-49<br />
Artificial hormone, 50-53<br />
Artificial insemination, 54-57<br />
Artificial intelligence, 668, 671, 864<br />
Artificial kidney, 58-62<br />
Artificial satellite, 63-66<br />
Artificial sweeteners, 67-70;<br />
aspartame, 67-70; cyclamates, 248-<br />
251<br />
ASCC. See Automatic Sequence<br />
Controlled Calculator<br />
Aspartame, 67-70<br />
Assembly line, 71-75, 197, 434, 436, 439<br />
Aston, Francis William, 494, 496<br />
Astronauts, 749, 848<br />
AT&T. See American Telephone and<br />
Telegraph Company<br />
Atanasoff, John Vincent, 312<br />
Atomic bomb, 76-79, 84, 118-119, 255,<br />
412, 414, 521, 525, 697, 721<br />
Atomic clock, 80-83<br />
Atomic Energy Commission, 119, 521,<br />
523<br />
Atomic force microscope, 681<br />
Atomic mass, 494-497<br />
Atomic-powered ship, 84, 86-87<br />
Audiffren, Marcel, 289<br />
Audio high density disc, 220<br />
Audrieth, Ludwig Frederick, 67<br />
Autochrome plate, 88-91<br />
Automatic Sequence Controlled<br />
Calculator, 187<br />
Automobiles; and assembly lines, 71,<br />
75; and interchangeable parts, 434-<br />
441; and internal combustion<br />
engine, 442-445<br />
Avery, Oswald T., 733<br />
Aviation. See Airplane; Dirigible;<br />
Rockets; Stealth aircraft; Supersonic<br />
passenger plane; Turbojet<br />
Babbage, Charles, 417<br />
Backus, John, 347<br />
923
Bacon, Francis Thomas, 355, 358<br />
Baekeland, Leo Hendrik, 571<br />
Baeyer, Adolf von, 571<br />
Bahcall, John Norris, 511<br />
Bain, Alexander, 316<br />
Baker, William Oliver, 172, 174<br />
Banting, Frederick G., 375<br />
Baran, Paul, 446<br />
Bardeen, John, 782, 786, 789<br />
Barnay, Antoine, 663<br />
Barton, Otis, 95, 100<br />
BASIC computer language, 29-30, 92-<br />
94, 559<br />
Bathyscaphe, 95-99<br />
Bathysphere, 100-103<br />
Batteries, 11, 227; alkaline storage, 11-<br />
15; and electric cars, 360, 363; and<br />
fuel cells, 356; and hearing aids, 390,<br />
392; and pacemakers, 547; silicon<br />
solar, 569; and transistor radios, 780,<br />
875, 878<br />
Battery jars, 454, 607<br />
Baulieu, Étienne-Émile, 1-2<br />
Bavolek, Cecelia, 394<br />
Bazooka, 659<br />
BCS theory, 789<br />
Beams, Jesse W., 815<br />
Becquerel, Alexandre-Edmond, 562<br />
Becquerel, Antoine-Henri, 365<br />
Beebe, William, 95, 100<br />
Bélanger, Alain, 1<br />
Bell, Alexander Graham, 320-322, 390,<br />
482-483, 663-665<br />
Bell Telephone Laboratories, 101, 138-<br />
140, 172-173, 204-205, 217, 229-230,<br />
323, 390-391, 482, 567, 614, 625, 678,<br />
744, 752, 774-775, 778-779, 786, 829,<br />
840, 861, 863, 876<br />
Belzel, George, 558<br />
Benedictus, Edouard, 454<br />
Bennett, Frederick, 434<br />
Bennett, W. R., 217<br />
Berger, Hans, 298<br />
Bergeron, Tor, 183<br />
Berliner, Emil, 279<br />
Berthelot, Marcellin Pierre, 597<br />
Bessemer, Henry, 701, 704<br />
Bessemer converter, 701-702, 704<br />
Best, Charles H., 375<br />
Bethe, Hans, 412, 720<br />
Bevis, Douglas, 20<br />
Billiard balls, 572-573<br />
BINAC. See Binary Automatic<br />
Computer<br />
Binary Automatic Computer, 104-107,<br />
315, 330, 348<br />
Binnig, Gerd, 678, 680<br />
Birdseye, Clarence, 343<br />
Birth control pill, 108-112<br />
Bissell, Melville R., 832<br />
Blodgett, Katherine Ann, 454<br />
Blood plasma, 38<br />
Blood transfusion, 113-117<br />
Bobeck, Andrew H., 138-139<br />
Bohn, René, 842<br />
Bohr, Niels, 76, 520, 695<br />
Bolton, Elmer Keiser, 507, 529<br />
Booth, Andrew D., 330<br />
Booth, H. Cecil, 832, 835<br />
Borlaug, Norman E., 638, 643<br />
Borsini, Fred, 151<br />
Bothe, Walter, 367<br />
Bragg, Lawrence, 896, 898<br />
Bragg, William Henry, 896, 898<br />
Brain, and nuclear magnetic<br />
resonance, 516, 519<br />
Brattain, Walter H., 782, 786, 789<br />
Braun, Wernher von, 63, 871<br />
Bravais, Auguste, 896<br />
Breast cancer, 486, 489<br />
Breeder reactor, 118-121<br />
Broadcaster guitar, 122-129<br />
Broadcasting. See FM radio; Radio;<br />
Radio crystal sets; Television;<br />
Transistor radio<br />
Broglie, Louis de, 302, 678<br />
Brooks, Fred P., 866<br />
Brownell, Frank A., 130<br />
Brownie camera, 130-137<br />
Bubble memory, 138-141<br />
Buehler, William, 498<br />
Bullet train, 142-145<br />
Buna rubber, 146-150<br />
Burks, Arthur Walter, 312<br />
Burton, William M., 765<br />
Busch, Adolphus, 259<br />
Busch, Hans, 302, 679<br />
Bush, Vannevar, 262, 264
CAD. See Computer-Aided Design<br />
CAD/CAM, 151-157<br />
Calculators; desktop, 232; digital, 490-<br />
493; electromechanical, 313;<br />
mechanical, 104; pocket, 576-580;<br />
punched-card, 104<br />
California Institute of Technology, 646,<br />
731, 782<br />
Callus tissue, 421<br />
Calmette, Albert, 791<br />
Cameras; Brownie, 130-137; and film,<br />
88-91, 192-195; and infrared film,<br />
426; instant, 430-433; in space, 887-<br />
889; video, 165, 859; and virtual<br />
reality, 867; and X rays, 901-904. See<br />
also Photography<br />
Campbell, Charles J., 468<br />
Campbell, Keith H. S., 177<br />
Cancer, 4, 324, 376; and cyclamates, 69,<br />
249-250; and infrared photography,<br />
428; and mammography, 486-489;<br />
therapy, 40; uterine, 549-552<br />
Capek, Karel, 650, 654<br />
Carbohydrates, 374<br />
Carbon dating, 158-162<br />
Carlson, Chester F., 891, 893<br />
Carnot, Sadi, 398<br />
Carothers, Wallace H., 507, 510, 529,<br />
574, 589<br />
Carrel, Alexis, 113<br />
Carty, John J., 477, 484<br />
Cary, Frank, 558<br />
Cascariolo, Vincenzo, 335<br />
Cassette recording, 163-166, 221, 223,<br />
279, 538; and Dolby noise reduction,<br />
282; and microcomputers, 386; and<br />
Sony Walkman, 788; and transistors,<br />
784<br />
CAT scanner, 167-171<br />
Cathode-ray tubes, 170, 303, 315, 326,<br />
564, 611; and television, 757-758,<br />
760, 837<br />
Caton, Richard, 298<br />
CBS. See Columbia Broadcasting<br />
System<br />
CD. See Compact disc<br />
CDC. See Control Data Corporation<br />
Cell phone, 172-176<br />
Celluloid, 454, 571-573<br />
Centrifuge, 815-818<br />
Cerf, Vinton G., 446, 448<br />
Chadwick, James, 367<br />
Chain, Ernst Boris, 553<br />
Chamberlain, W. Edward, 901<br />
Chance, Ronald E., 374<br />
Chandler, Robert F., Jr., 638<br />
Chang, Min-Chueh, 108<br />
Chanute, Octave, 6<br />
Chapin, Daryl M., 567<br />
Chardonnet, Hilaire de, 589<br />
Chemotherapy, 24, 40, 676<br />
Cho, Fujio, 360<br />
Christian, Charlie, 122, 126<br />
Chromosomes. See Artificial<br />
chromosome<br />
Clark, Barney, 45<br />
Clark, Dugald, 257<br />
Clarke, Arthur C., 63, 204<br />
Cloning, 177-182<br />
Cloud seeding, 183-186<br />
Coal tars, 593, 843<br />
COBOL computer language, 92, 187-<br />
191, 350<br />
Cockerell, Christopher Sydney, 407<br />
Cohen, Robert Waley, 442<br />
Coleman, William T., Jr., 714<br />
Collins, Arnold Miller, 507<br />
Color film, 192-195<br />
Color photography, 88-91<br />
Color television, 196-199<br />
Colossus computer, 200-203<br />
Columbia Broadcasting System, 196,<br />
215, 830<br />
Communications satellite, 204-207<br />
Community antenna television, 208-216<br />
Compaan, Klaas, 537<br />
Compact disc, 217-224<br />
Compressed-air-accumulating power<br />
plant, 225-228<br />
Computer-Aided Design (CAD), 151-<br />
157<br />
Computer chips, 140, 229-234<br />
Computer languages, 154; ALGOL, 92-<br />
93; BASIC, 29-30, 92-94, 559;<br />
COBOL, 92, 187-191, 350;<br />
FORTRAN, 92-93, 189, 347-350<br />
Computerized axial tomography, 167-<br />
171<br />
Computers; <strong>and</strong> information storage,
104-107, 138-141, 165, 330-334, 386-<br />
389, 537-540; <strong>and</strong> Internet, 446-450.<br />
See also Apple II computer; Personal<br />
computers<br />
Concorde, 714-719<br />
Condamine, Charles de la, 146<br />
Contact lenses, 235-239<br />
Conti, Piero Ginori, 378<br />
Contraception, 1-5, 108-112<br />
Control Data Corporation, 709, 711<br />
Cooking; microwave, 502-506; and<br />
Pyrex glass, 607, 609; and Teflon<br />
coating, 748-749<br />
Coolidge, William David, 795<br />
Cormack, Allan M., 167<br />
Corning Glass Works, 323, 606-610<br />
Coronary artery bypass surgery, 240-<br />
243<br />
Cource, Geoffroy de, 714, 716<br />
Cousins, Morison, 799<br />
Cousteau, Jacques-Yves, 33, 35, 102<br />
Cray, Seymour R., 709, 711-712<br />
Crick, Francis, 41, 177, 729, 733<br />
Crile, George Washington, 113<br />
Critical mass, 77, 119, 521<br />
Crookes, William, 365<br />
CRT. See Cathode-ray tubes<br />
Cruise missile, 244-247<br />
Curie, Jacques, 692<br />
Curie, Marie, 823<br />
Curie, Pierre, 692, 695<br />
Curtis, William C., 611-612<br />
Cyclamate, 248-251<br />
Cyclotron, 252-256<br />
DAD. See Digital audio disc<br />
Daimler, Gottlieb, 257<br />
Dale, Henry Hallett, 50<br />
Damadian, Raymond, 516<br />
Datamath, 579<br />
Davis, Raymond, Jr., 511<br />
Deep-sea diving, 95-103<br />
De Forest, Lee, 477-478, 480, 483, 837-<br />
838<br />
Dekker, Wisse, 217<br />
Deoxyribonucleic acid; characteristics,<br />
733. See also DNA<br />
Depp, Wallace Andrew, 751<br />
Desert Storm, 699<br />
Devol, George C., Jr., 654<br />
DeVries, William Castle, 45-46<br />
Diabetes, 51-52, 374-377<br />
Dichlorodifluoromethane, 630-633<br />
Diesel, Rudolf, 257-258<br />
Diesel locomotive, 257-261<br />
Differential analyzer, 262-266<br />
Digital audio disc, 219<br />
Dirigible, 267-271<br />
Disposable razor, 272-278<br />
Diving. See Aqualung<br />
DNA, 41, 177; and artificial<br />
chromosomes, 41-44; and cloning,<br />
177; and genetic “fingerprinting,”<br />
370-373; recombinant, 41; synthetic,<br />
729-732; and X-ray crystallography,<br />
900. See also Deoxyribonucleic acid;<br />
Synthetic DNA<br />
Dolby, Ray Milton, 279, 281<br />
Dolby noise reduction, 279-283<br />
Domagk, Gerhard, 24<br />
Donald, Ian T., 823-824<br />
Dornberger, Walter Robert, 871<br />
Drew, Charles, 113, 115<br />
Drinker, Philip, 451<br />
Dulbecco, Renato, 581<br />
Dunwoody, H. H., 621<br />
Du Pont. See Du Pont de Nemours and<br />
Company<br />
Du Pont de Nemours and Company,<br />
77, 149, 248, 508-509, 529, 531, 542,<br />
589-590, 746-748, 799, 803<br />
Durfee, Benjamin M., 490<br />
Durham, Eddie, 126<br />
Durrer, Robert, 701<br />
Dyes, 593; and acrylics, 543; and<br />
infrared radiation, 425, 428; and<br />
microorganism staining, 24-25; and<br />
photographic film, 192-194; poison,<br />
674; and polyesters, 591; vat, 842-<br />
845<br />
Earthquakes, measuring of, 645-649<br />
Eastman, George, 130, 135<br />
Eckert, John Presper, 104, 312, 828<br />
Edison, Thomas Alva, 11, 335, 479, 616,<br />
744, 839; and batteries, 12-14; and<br />
Edison effect, 837; and electric light,<br />
795, 832; and fluoroscope, 901; and<br />
phonograph, 217, 279
Edison effect, 837-838<br />
Edlefsen, Niels, 252<br />
EDVAC. See Electronic Discrete<br />
Variable Automatic Computer<br />
Effler, Donald B., 240<br />
Ehrlich, Paul, 24, 673<br />
Einstein, Albert, 82, 472, 497, 563, 695,<br />
721<br />
Einthoven, Willem, 293, 295<br />
Eisenhower, Dwight D., 84, 415<br />
Elastomers, 148, 507-510, 598<br />
Electric clock, 284-288<br />
Electric refrigerator, 289-292<br />
Electricity, generation of, 79, 378, 569<br />
Electrocardiogram, 293-297<br />
Electroencephalogram, 298-301<br />
Electrolyte detector, 479<br />
Electron microscope, 302-306, 403, 902<br />
Electron theory, 562-565<br />
Electronic Discrete Variable Automatic<br />
Computer, 105-107, 314, 829<br />
Electronic Numerical Integrator and<br />
Calculator, 105-106, 312-315, 347,<br />
668, 829<br />
Electronic synthesizer, 307-311<br />
Eli Lilly Research Laboratories, 374-377<br />
Elliott, Tom, 360<br />
Elmquist, Rune, 545<br />
Elster, Julius, 562, 564<br />
Engelberger, Joseph F., 654<br />
ENIAC. See Electronic Numerical<br />
Integrator and Calculator<br />
Ericsson, John, 687<br />
Espinosa, Chris, 28<br />
Estridge, Philip D., 386<br />
Evans, Oliver, 71<br />
Ewan, Harold Irving, 625<br />
“Excalibur,” 416<br />
Eyeglasses; and contact lenses, 235-239;<br />
frames, 498, 500; and hearing aids,<br />
391, 787<br />
Fabrics; and dyes, 842-845; Orlon, 541-<br />
544; polyester, 589-592; and washing<br />
machines, 883-886<br />
Fahlberg, Constantin, 67<br />
Fasteners, velcro, 846-849<br />
Favaloro, Rene, 240<br />
Fax machine, 316-319<br />
FCC. See Federal Communications<br />
Commission<br />
Federal Communications Commission;<br />
and cell phones, 173, 175; and<br />
communication satellites, 204; and<br />
FM radio, 341; and microwave<br />
cooking, 505; and television, 196-<br />
197, 208-210<br />
Fefrenkiel, Richard H., 172<br />
Feinbloom, William, 235, 237<br />
Fender, Leo, 122<br />
Ferguson, Charles Wesley, 158<br />
Fermi, Enrico, 76, 84, 412, 520, 525<br />
Fessenden, Reginald, 13, 477-480, 616-<br />
618<br />
Fiber-optics, 320-324<br />
Fick, Adolf Eugen, 235<br />
Field ion microscope, 325-329, 679<br />
FIM. See Field ion microscope<br />
Finlay, Carlos J., 905<br />
Fischer, Rudolf, 192<br />
Fisher, Alva J., 883<br />
Fleming, Alexander, 553, 555<br />
Fleming, John Ambrose, 478, 621, 837,<br />
839<br />
Flick, J. B., 394<br />
Floppy disk, 330-334<br />
Florey, Baron, 553<br />
Flosdorf, Earl W., 351<br />
Flowers, Thomas H., 200<br />
FLOW-MATIC, 187<br />
Fluorescent lighting, 335-338<br />
FM radio, 339-342<br />
Fokker, Anthony Herman Gerard, 601,<br />
603<br />
Food; artificial sweeteners, 67-70, 248-<br />
251; freeze-drying, 351-354;<br />
freezing, 343-346; microwave<br />
cooking, 502-506; packaging, 598-<br />
599; and refrigeration, 289-292, 343-<br />
346, 630, 632; rice and wheat, 638-<br />
644; storage, 799-806<br />
Food and Drug Administration, 45,<br />
111, 375<br />
Ford, Henry, 11, 71, 74, 257, 434<br />
Forel, François-Alphonse, 645<br />
Forest de Bélidor, Bernard, 770<br />
FORTRAN programming language,<br />
92-93, 189, 347-350<br />
Foucault, Jean-Bernard-Léon, 382
Fox Network, 215<br />
Francis, Thomas, Jr., 585<br />
Freeze-drying, 351-354<br />
Frerichs, Friedrick von, 673<br />
Frisch, Otto Robert, 76, 520<br />
Fuchs, Klaus Emil Julius, 412<br />
Fuel cell, 355-359<br />
Fuller, Calvin S., 567<br />
Fulton, Robert, 335<br />
Gabor, Dennis, 402, 404<br />
Gagarin, Yuri A., 874<br />
Gagnan, Émile, 33, 102<br />
Gamow, George, 325, 412, 414, 720-721<br />
Garcia, Celso-Ramon, 108<br />
Garros, Roland, 601<br />
Garwin, Richard L., 414<br />
Gas-electric car, 360-364<br />
Gates, Bill, 92, 94<br />
Gaud, William S., 638<br />
Gautheret, Roger, 421<br />
GE. See General Electric Company<br />
Geiger, Hans, 365, 367<br />
Geiger counter, 365-369<br />
Geissler, Heinrich, 335<br />
Geitel, Hans Friedrich, 562, 564<br />
General Electric Company, 101, 183-<br />
185, 219, 264, 290, 341, 356, 384, 440,<br />
455, 477, 617, 683, 685, 795-796, 809,<br />
840, 863, 893, 902<br />
Genetic “fingerprinting,” 370-373<br />
Genetically engineered insulin, 374-377<br />
Geothermal power, 378-381<br />
Gerhardt, Charles, 597<br />
Gershon-Cohen, Jacob, 486<br />
Gibbon, John H., Jr., 394<br />
Gibbon, Mary Hopkinson, 394<br />
Gillette, George, 272<br />
Gillette, King Camp, 272, 276<br />
Glass; coloring of, 819-820; fibers, 322-<br />
323, 591; food containers, 800;<br />
gold-ruby, 819; high-purity, 322;<br />
laminated, 454-458; Pyrex, 606-610<br />
Glass fiber. See Fiber-optics<br />
Goddard, Robert H., 63, 65, 658, 660,<br />
662<br />
Goldmark, Peter Carl, 196<br />
Goldstine, Herman Heine, 312, 347<br />
Goodman, Benny, 126<br />
Goodyear, Charles, 146-147, 335<br />
Gosslau, Ing Fritz, 871<br />
Gould, R. Gordon, 472<br />
Goulian, Mehran, 729<br />
Graf Zeppelin, 271<br />
Gray, Elisha, 663<br />
Greaves, Ronald I. N., 351<br />
Green Revolution, 638-639, 641-644<br />
Grove, William Robert, 355<br />
Groves, Leslie R., 76, 747<br />
Grunberg-Manago, Marianne, 733<br />
Guérin, Camille, 791<br />
Guitar, electric, 122-129<br />
Gutenberg, Beno, 645<br />
Gyrocompass, 382-385<br />
Haas, Georg, 59<br />
Haber, Fritz, 16-19<br />
Haberl<strong>and</strong>t, Gottlieb, 421<br />
Hahn, Otto, 84, 520<br />
Haldane, John Burdon Sanderson, 724<br />
Haldane, T. G. N., 398<br />
Hall, Charles, 335<br />
Halliday, Don, 151<br />
Hallwachs, Wilhelm, 562<br />
Hamilton, Francis E., 490<br />
Hammond, John, 126<br />
Hanratty, Patrick, 151<br />
Hard disk, 386-389<br />
Hata, Sahachiro, 673<br />
Haüy, René-Just, 896<br />
Hayato, Ikeda, 142<br />
Hayes, Arthur H., Jr., 67<br />
Hazen, Harold L., 262<br />
Heath Company, 650<br />
Hearing aid, 390-393<br />
Heart; and pacemakers, 545-548. See<br />
also Artificial heart<br />
Heart-lung machine, 394-397<br />
Heat pump, 398-401<br />
Heilborn, Jacob, 272<br />
Henne, Albert, 630, 746<br />
Hero (Greek mathematician), 851<br />
Hero 1 robot, 650-653<br />
Herschel, William, 425, 427<br />
Hertz, Heinrich, 502, 621<br />
Heumann, Karl, 842<br />
Hewitt, Peter Cooper, 335
Hindenburg, 271<br />
Hitler, Adolf, 414, 509, 807, 871<br />
Hoff, Marcian Edward, Jr., 229<br />
Hoffman, Frederick de, 413<br />
Hoffmann, Erich, 676<br />
Hofmann, August Wilhelm von, 593,<br />
842, 844<br />
Hollerith, Herman, 417<br />
Holography, 402-406, 537<br />
Homolka, Benno, 192<br />
Honda Insight, 360<br />
Hoover, Charles Wilson, Jr., 751<br />
Hoover, William Henry, 832<br />
Hopper, Grace Murray, 187-188<br />
Hormones. See Artificial hormone<br />
Hounsfield, Godfrey Newbold, 167,<br />
169<br />
House appliances. See Appliances<br />
Houtz, Ray C., 541<br />
Hovercraft, 407-411<br />
Howe, Elias, 335<br />
Hughes, Howard R., 533, 535<br />
Hulst, Hendrik Christoffel van de, 625<br />
Humphreys, Robert E., 765<br />
Humulin, 374, 377<br />
Hyatt, John Wesley, 571, 573<br />
Hyde, James Franklin, 683<br />
Hydrofoil, 665<br />
Hydrogen bomb, 412-416<br />
IBM. See International Business Machines<br />
IBM Model 1401 computer, 417-420<br />
Ibuka, Masaru, 778, 786, 875, 879<br />
ICBM. See Intercontinental ballistic<br />
missiles<br />
Idaho National Engineering<br />
Laboratory, 119, 521<br />
Immelmann, Max, 601<br />
Immunology. See Polio vaccine;<br />
Tuberculosis vaccine; Typhus<br />
vaccine; Yellow fever vaccine<br />
In vitro plant culture, 108, 421-424<br />
INEL. See Idaho National Engineering<br />
Laboratory<br />
Infantile paralysis. See Polio<br />
Infrared photography, 425-429<br />
Instant photography, 430-433<br />
Insulin, genetically engineered, 374-377<br />
Intel Corporation, 153, 232, 234, 559<br />
Interchangeable parts, 434-441<br />
Intercontinental ballistic missiles, 63-64<br />
Internal combustion engine, 442-445<br />
International Business Machines, 31,<br />
140, 187, 189, 313, 330-331, 333, 347-<br />
350, 386, 388, 395, 420, 490-493, 680-<br />
681, 830, 861-865; Model 1401<br />
computer, 417-420; personal<br />
computers, 558-561<br />
Internet, 446-450<br />
Iron lung, 451-453<br />
Isotopes, and atomic mass, 494<br />
Ivanov, Ilya Ivanovich, 54<br />
Ives, Frederick E., 90<br />
Jansky, Karl, 614, 625<br />
Jarvik, Robert, 45<br />
Jarvik-7, 45, 49<br />
The Jazz Singer, 742<br />
Jeffreys, Alec, 370<br />
Jenkins, Charles F., 756<br />
Jet engines; and hovercraft, 408, 410;<br />
impulse, 871; and missiles, 244;<br />
supersonic, 714-719; turbo, 807-810<br />
Jobs, Steven, 28, 30<br />
Johnson, Irving S., 374<br />
Johnson, Lyndon B., 206<br />
Johnson, Reynold B., 330<br />
Joliot, Frédéric, 76<br />
Jolson, Al, 742<br />
Jones, Amanda Theodosia, 343, 345<br />
Joyce, John, 272<br />
Judson, Walter E., 634<br />
Judson, Whitcomb L., 847<br />
Kahn, Reuben Leon, 737<br />
Kamm, Oliver, 50<br />
Kao, Charles K., 320<br />
Kelvin, Lord, 398<br />
Kemeny, John G., 92<br />
Kettering, Charles F., 11, 630<br />
Kidneys, 58, 62, 374; and blood, 39;<br />
and cyclamate, 248; problems, 634<br />
Kilby, Jack St. Clair, 151, 229, 231, 576,<br />
578<br />
Kipping, Frederic Stanley, 683<br />
Kitchenware. See Polystyrene; Pyrex<br />
glass; Teflon; Tupperware
Knoll, Max, 302<br />
Kober, Theodor, 267<br />
Koch, Robert, 791<br />
Kolff, Willem Johan, 58<br />
Kornberg, Arthur, 729<br />
Kornei, Otto, 891<br />
Korolev, Sergei P., 63-64<br />
Kramer, Piet, 537<br />
Krueger, Myron W., 866<br />
Kruiff, George T. de, 537<br />
Kunitsky, R. W., 54<br />
Kurtz, Thomas E., 92<br />
Lake, Clair D., 490<br />
Laminated glass, 454-458<br />
Land, Edwin Herbert, 430, 432<br />
Langévin, Paul, 692, 695, 823<br />
Langmuir, Irving, 183<br />
Laser, 459-463<br />
Laser-diode recording process, 464-467<br />
Laser eye surgery, 468-472<br />
Laser vaporization, 472-476<br />
Laservision, 219, 465<br />
Laue, Max von, 896<br />
Lauterbur, Paul C., 516<br />
Lawrence, Ernest Orl<strong>and</strong>o, 252, 254,<br />
720<br />
Lawrence Livermore National<br />
Laboratory, 416, 671<br />
Leclanché, Georges, 355<br />
Leeuwenhoek, Antoni van, 678<br />
Leith, Emmett, 402<br />
Leland, Henry M., 434, 437<br />
Lengyel, Peter, 733<br />
Lenses; camera, 130, 132-134;<br />
electromagnetic, 303; electron, 302-<br />
303; and fax machines, 317; and<br />
laser diodes, 465; microscope, 678;<br />
and optical disks, 539; Pyrex, 609;<br />
railroad lantern, 606; scleral, 235;<br />
television camera, 887; and<br />
xerography, 891-894. See also<br />
Contact lenses<br />
Leonardo da Vinci, 235<br />
Leverone, Louis E., 850<br />
Leverone, Nathaniel, 850<br />
Lewis, Thomas, 293<br />
ALGOL computer language, 92-93<br />
Libby, Willard Frank, 158, 160<br />
Lidwell, Mark, 545<br />
Lincoln, Abraham, 320, 439<br />
Lindbergh, Charles A., 661<br />
Littleton, Jesse T., 606, 608<br />
Livestock, artificial insemination of,<br />
54-57<br />
Livingston, M. Stanley, 252<br />
Locke, Walter M., 244<br />
Lockheed Corporation, 697<br />
Long-distance radiotelephony, 477-<br />
481<br />
Long-distance telephone, 482-485<br />
Loosley, F. A., 701<br />
Lumière, Auguste, 88-89<br />
Lumière, Louis, 88-89<br />
Lynde, Frederick C., 850-851<br />
Lyons, Harold, 80<br />
McCabe, B. C., 378<br />
McCormick, Cyrus Hall, 335<br />
McCormick, Katherine Dexter, 108<br />
McCune, William J., 430<br />
Machine guns, 601-605<br />
McKay, Dean, 558<br />
McKenna, Regis, 28<br />
McKhann, Charles F., III, 451<br />
McMillan, Edwin Mattison, 720<br />
McWhir, J., 177<br />
Magnetron, 504<br />
Maiman, Theodore Harold, 320, 459,<br />
468, 472<br />
Mallory, Joseph, 432<br />
Mammography, 486-489<br />
Manhattan Project, 77-78, 412, 414, 525,<br />
747-748<br />
Mansfield, Peter, 516<br />
Marconi, Guglielmo, 477, 616, 619, 621,<br />
839<br />
Mariano di Jacopo detto Taccola, 770<br />
Mark I calculator, 490-493<br />
Marrison, Warren Alvin, 284, 286<br />
Marsden, Ernest, 367<br />
Mass spectrograph, 494-497<br />
Massachusetts Institute of Technology,<br />
861<br />
Mauchly, John W., 104, 312, 347, 828<br />
Maxwell, James Clerk, 88, 502, 621<br />
Meitner, Lise, 76, 520<br />
Memory metal, 498-501
Mercalli, Giuseppe, 645<br />
Merrill, John P., 61<br />
Merryman, Jerry D., 576, 578<br />
Mestral, Georges de, 846, 848<br />
Metchnikoff, Élie, 673-674<br />
Microprocessors, 94, 229-234, 287, 419,<br />
538<br />
Microscopes; atomic force, 681;<br />
electron, 302-306, 403, 902; field ion,<br />
325-329, 679; scanning tunneling,<br />
678-682; ultra-, 819-822<br />
Microvelcro, 847<br />
Microwave cooking, 502-506<br />
Midgley, Thomas, Jr., 444, 630, 746<br />
Miller, Bernard J., 394<br />
Miller, Stanley Lloyd, 724<br />
Millikan, Robert A., 646, 722<br />
Milunsky, Aubrey, 20<br />
Missiles; cruise, 244-247; guided, 385;<br />
intercontinental, 63-64; Sidewinder,<br />
698; Snark, 106. See also Rockets; V-2<br />
rocket<br />
Mixter, Samuel Jason, 113<br />
Mobile Telephone Service, 172<br />
Model T, 14, 71, 75, 439-440<br />
Monitor, 687<br />
Monocot plants, 422<br />
Monomers, 148, 541, 590-591<br />
Moog, Robert A., 307, 309<br />
Moon; distance to, 462; and lasers, 459;<br />
and radar, 614; and radio signals,<br />
614<br />
Morel, Georges Michel, 421-423<br />
Morganthaler, Ottmar, 335<br />
Morita, Akio, 217, 222, 778, 786, 875<br />
Morse, Samuel F. B., 320, 335<br />
Morse code, 477, 616, 621<br />
Motion picture sound, 741-745<br />
Mouchout, Augustin, 687<br />
Movies. See Talking motion pictures<br />
Müller, Erwin Wilhelm, 325, 327, 679<br />
Murray, Andrew W., 41<br />
Murrow, Edward R., 830<br />
Naito, Ryoichi, 38<br />
National Broadcasting Company, 198,<br />
215<br />
National Geographic, 665<br />
National Geographic Society, 665<br />
National Radio Astronomy<br />
Observatory, 628<br />
Natta, Giulio, 593<br />
Nautilus, 84, 521<br />
NBC. See National Broadcasting<br />
Company<br />
Neoprene, 507-510<br />
Neumann, John von, 92, 104, 312, 347,<br />
710, 828<br />
Neurophysiology, 298, 300<br />
Neutrino detector, 511-515<br />
Newman, Max H. A., 200<br />
Newton, Isaac, 659<br />
Nickerson, William Emery, 272<br />
Nieuwland, Julius Arthur, 507<br />
Nipkow, Paul Gottlieb, 756<br />
Nirenberg, Marshall W., 733<br />
Nitinol, 498-501<br />
Nitrogen, 16<br />
Nobécourt, P., 421<br />
Nobel Prize winners, 174; Chemistry,<br />
16, 18-19, 50, 52, 158, 160, 183, 455,<br />
494, 496, 595, 720, 724, 819, 821-822;<br />
Physics, 229, 231, 252, 254-255, 302,<br />
304, 321, 402, 404, 459, 520, 619, 678,<br />
680, 782, 789, 896-898; Physiology<br />
or Medicine, 24, 41, 167, 169, 293,<br />
295, 375, 553, 555, 581, 674, 676, 730,<br />
733<br />
Nordwestdeutsche Kraftwerke, 225<br />
Northrop Corporation, 106, 697<br />
Noyce, Robert, 151, 229<br />
NSFnet, 447<br />
Nuclear fission, 76, 84, 118-121, 185,<br />
412, 520-528<br />
Nuclear fusion, 78<br />
Nuclear magnetic resonance, 516-519<br />
Nuclear power plant, 520-524<br />
Nuclear reactor, 118-121, 520-528<br />
Nylon, 510, 529-532, 541, 574, 590;<br />
Helance, 591; and velcro, 846-847<br />
Oak Ridge National Laboratory, 77,<br />
525-528<br />
Ochoa, Severo, 733<br />
Ohain, Hans Pabst von, 807<br />
Ohga, Norio, 875<br />
Oil-well drilling, 345, 533-536<br />
Oparin, Aleksandr Ivanovich, 724<br />
Opel, John, 558<br />
Ophthalmology, 468<br />
Oppenheimer, J. Robert, 76, 325<br />
Optical disk, 537-540<br />
Orlon, 541-544<br />
Ottens, Lou F., 537<br />
Otto, Nikolaus, 257<br />
Oxytocin, 50<br />
Pacemaker, 545-548<br />
Painter, William, 272<br />
Paley, William S., 196<br />
Pap test, 549-552<br />
Papanicolaou, George N., 549<br />
Parsons, Charles, 378<br />
Parsons, Ed, 208<br />
Particle accelerators, 252, 256, 720-723,<br />
761-764<br />
Paul, Les, 122<br />
Pauli, Wolfgang, 511<br />
PC. See Personal computers<br />
PCM. See Pulse code modulation<br />
Pearson, Gerald L., 567<br />
Penicillin, 553-557<br />
Peoples, John, 761<br />
Perkin, William Henry, 842, 844<br />
Perrin, Jean, 695<br />
Persian Gulf War, 246, 698-699<br />
Personal computers, 153, 558-561, 864;<br />
Apple, 28-32; and floppy disks, 332-<br />
333; and hard disks, 389; and<br />
Internet, 447, 449<br />
Pfleumer, Fritz, 163<br />
Philibert, Daniel, 1<br />
Philips Corporation, 464, 857<br />
Photocopying. See Xerography<br />
Photoelectric cell, 562-566<br />
Photography; film, 88-91, 192-195, 430-<br />
433. See also Cameras<br />
Photovoltaic cell, 567-570<br />
Piccard, Auguste, 36, 95, 97, 103<br />
Piccard, Jacques, 95<br />
Piccard, Jean-Félix, 97<br />
Pickard, Greenleaf W., 621<br />
Pierce, John R., 204<br />
Pincus, Gregory, 108<br />
Planck, Max, 563<br />
Plastic, 571-575; Tupperware, 799-806<br />
Plunkett, Roy J., 746, 748<br />
Pocket calculator, 576-580<br />
Polaroid camera, 170, 430-433<br />
Polio, 451-453<br />
Polio vaccine, 581-588<br />
Polyacrylonitrile, 541-543<br />
Polyester, 589-592<br />
Polyethylene, 593-596<br />
Polystyrene, 597-600<br />
Porter, Steven, 272<br />
Powers, Gary, 245<br />
Pregnancy. See Abortion pill;<br />
Amniocentesis; Birth control pill;<br />
Ultrasound<br />
Priestley, Joseph, 146<br />
Propeller-coordinated machine gun,<br />
601-605<br />
Protein synthesis, 735<br />
Prout, William, 494<br />
Pulse code modulation, 217-220<br />
Purcell, Edward Mills, 625<br />
Purvis, Merton Brown, 751<br />
Pye, David R<strong>and</strong>all, 442<br />
Pyrex glass, 606-610<br />
Quadrophonic sound, 221<br />
Quantum theory, 563<br />
Quartz crystals, 81, 284-288<br />
Radar, 229, 265, 314, 391, 504, 611-612,<br />
614-615; and sonar, 693, 824; and<br />
bathyscaphe, 96; and laser<br />
holography, 405; and stealth aircraft,<br />
697-699<br />
Radio, 616-620; FM, 339-342<br />
Radio Corporation of America, 196-<br />
199, 210, 213, 219, 340-341, 464, 537,<br />
618-619, 741, 758-759, 787<br />
Radio crystal sets, 621-624<br />
Radio frequency, 616; and cell phones,<br />
172, 175; and crystal radio, 622; and<br />
microwave heating, 505<br />
Radio interferometer, 625-629<br />
Radioactivity, 720, 734; and barium, 76,<br />
520; carbon dating, 158-162; and<br />
DNA, 371; and isotopes, 494, 497;<br />
measuring, 365-369; and neutrinos,<br />
511-512<br />
Radiotelephony, 477-481<br />
Rainfall, induced, 183-186<br />
RAM. See Random access memory<br />
Random access memory, 140, 387, 559,<br />
861-862, 864<br />
Raytheon Company, 503, 505, 786<br />
Razors, 272-278<br />
RCA. See Radio Corporation of<br />
America<br />
Reagan, Ronald, 415<br />
Reber, Grote, 625<br />
Recombinant DNA, 41<br />
Recording; cassettes, 163-166, 538, 784,<br />
788, 875-882; compact discs, 217-224;<br />
Dolby noise reduction, 279-283;<br />
laser-diodes, 464-467; sound, 741-<br />
742; video, 857-860<br />
Reed, Walter, 905<br />
Refrigerant gas, 630-633<br />
Reichenbach, Henry M., 130<br />
Rein, Herbert, 541<br />
Remsen, Ira, 67<br />
Reserpine, 634-637<br />
Ribonucleic acid, 734. See also Synthetic<br />
RNA<br />
Ricardo, Harry Ralph, 442<br />
Rice and wheat strains, 638-644<br />
Rice-Wray, Edris, 108<br />
Richter, Charles F., 645-646<br />
Richter scale, 645-649<br />
Rickover, Hyman G., 520<br />
Riffolt, Nils, 661<br />
Ritchie, W. A., 177<br />
Rizzo, Paul, 558<br />
RNA, synthetic, 733-736<br />
Robot, household, 650-653<br />
Robot, industrial, 654-657<br />
Rochow, Eugene G., 683, 685<br />
Rock, John, 108<br />
Rockets; and satellites, 63-66; design,<br />
712; liquid-fuel-propelled, 658-662,<br />
871-874. See also Missiles<br />
Rogers, Howard G., 430<br />
Rohrer, Heinrich, 678, 680<br />
Röntgen, Wilhelm Conrad, 167, 365,<br />
896, 901<br />
Roosevelt, Franklin D., 264, 588, 770-771<br />
Root, Elisha King, 71<br />
Rosen, Charles, 362<br />
Rosing, Boris von, 758<br />
Rossi, Michele Stefano de, 645<br />
Rotary cone drill bit, 533, 536<br />
Rotary dial telephone, 663-667, 751,<br />
774-776<br />
Rotary engine, 362<br />
Roux, Pierre-Paul-Émile, 673<br />
Rubber, synthetic, 146-150, 507-510,<br />
530, 593, 595<br />
Ruska, Ernst, 302, 304, 678, 680<br />
Russell, Archibald, 714<br />
Rutherford, Ernest, 252, 365-368, 455,<br />
494, 564, 720-721, 898<br />
Ryle, Martin, 625<br />
Sabin, Albert Bruce, 581, 583<br />
Saccharin, 248<br />
Sachs, Henry, 272<br />
SAINT, 668-672<br />
Salk, Jonas Edward, 581, 585-586<br />
Salomon, Albert, 486<br />
Salvarsan, 673-674, 676-677<br />
Sanger, Margaret, 108, 110, 112<br />
Sarnoff, David, 196-197, 210, 339-340,<br />
758<br />
Satellite, artificial, 63-66<br />
Satre, Pierre, 714<br />
Saulnier, Raymond, 601<br />
Savannah, 85<br />
Sawyer, Wilbur Augustus, 905<br />
Sayer, Gerry, 807<br />
Scanning tunneling microscope, 678-682<br />
Schaefer, Vincent Joseph, 183<br />
Schaudinn, Fritz, 673<br />
Schawlow, Arthur L., 459<br />
Schlatter, James M., 67<br />
Schmidt, Paul, 871<br />
Scholl, Roland, 842<br />
Schönbein, Christian Friedrich, 571<br />
Schrieffer, J. Robert, 789<br />
SDI. See Strategic Defense Initiative<br />
Selectavision, 219<br />
Semiconductors, 139-140, 218, 229-234,<br />
317, 464-466, 568, 786-787, 892; <strong>and</strong><br />
calculators, 577, 579; defined, 229,<br />
232, 891<br />
Senning, Ake, 545<br />
Serviss, Garrett P., 659<br />
Seyewetz, Alphonse, 88<br />
Shannon, Claude, 868<br />
Sharp, Walter B., 533, 535<br />
Sharpey-Schafer, Edward Albert, 50
Shaw, Louis, 451<br />
Shaw, Ronald A., 407<br />
Sheep, cloning of, 177-182<br />
Shellac, 572<br />
Shockley, William B., 229, 778, 782, 786,<br />
789<br />
Shroud of Turin, 161<br />
Shugart, Alan, 330, 386<br />
Shuman, Frank, 687<br />
Sidewinder missile, 698<br />
Siedentopf, H. F. W., 819<br />
Siegrist, H., 192<br />
Silicones, 683-686<br />
Simon, Edward, 597<br />
The Singing Fool, 742<br />
Sinjou, Joop, 537<br />
Sinsheimer, Robert L., 729<br />
Sketchpad, 868<br />
Slagle, James R., 668, 671<br />
Sloan, David, 252<br />
Smith, Hugh, 905<br />
Smith, Robert, 427<br />
Smoluchowski, Max von, 819<br />
Snark missile, 106<br />
Snyder, Howard, 883<br />
Sogo, Shinji, 142<br />
Solar energy, 567-568, 687-688, 690<br />
Solar thermal engine, 687-691<br />
Sonar, 692-696; and radar, 823<br />
Sones, F. Mason, 240<br />
Sony Corporation, 165, 218-224, 539,<br />
778, 781, 783-785, 788-789, 875-881<br />
Spaeth, Mary, 459, 461<br />
Spallanzani, Lazzaro, 54<br />
Spangler, James Murray, 832<br />
Spencer, Percy L., 502, 504<br />
Sperry, Elmer Ambrose, 382, 384<br />
Sputnik, 63-66, 446, 874<br />
“Star Wars” (Strategic Defense<br />
Initiative), 699<br />
Staudinger, Hermann, 530<br />
Stealth aircraft, 697-700<br />
Steelmaking process, 701-708<br />
Steenstrup, Christian, 289<br />
Stewart, Alice, 823<br />
Stewart, Edward J., 272<br />
Stibitz, George, 828<br />
Stine, Charles M. A., 529<br />
STM. See Scanning tunneling microscope<br />
Stockard, Charles, 549<br />
Stokes, T. L., 394<br />
Storax, 597<br />
Strassmann, Fritz, 76<br />
Strategic Defense Initiative, 416, 699<br />
Strowger, Almon B., 751, 753<br />
Styrene, 148-149, 597-598<br />
Submarines; detection of, 692, 695, 823;<br />
navigation, 382-385; nuclear, 98, 521;<br />
weapons, 245-246<br />
Sucaryl, 248<br />
Suess, Theodor, 701<br />
Sulfonamides, 24<br />
Sullivan, Eugene G., 606<br />
Sun, 514, 725; energy, 725; and nuclear<br />
fusion, 511, 515, 567, 687; and<br />
timekeeping, 80<br />
Sun Power Company, 688<br />
Supercomputer, 709-713<br />
Supersonic passenger plane, 714-719<br />
Surgery; and artificial blood, 39; and<br />
artificial heart, 46, 48; and blood<br />
transfusion, 113-117; <strong>and</strong> breast<br />
cancer, 486-487, 489; cardiac, 499;<br />
coronary artery bypass, 240-243;<br />
and heart-lung machine, 394-397;<br />
kidney-transplant, 61; laser eye,<br />
468-472; laser vaporization, 472-476;<br />
transplantation, 61<br />
Sutherland, Ivan, 866, 868<br />
Sveda, Michael, 67, 248<br />
Svedberg, Theodor, 815<br />
Swarts, Frédéric, 630<br />
Swinton, Alan A. Campbell, 756<br />
Sydnes, William L., 558<br />
Synchrocyclotron, 720-723<br />
Synthetic amino acid, 724-728<br />
Synthetic DNA, 729-732<br />
Synthetic RNA, 733-736<br />
Syphilis, 24, 554-556, 673, 676, 744; test,<br />
737-740; treatment of, 673-674, 676-<br />
677<br />
Szostak, Jack W., 41<br />
Talking motion pictures, 741-745<br />
Tarlton, Robert J., 208-209<br />
Tassel, James Van, 576<br />
Taylor, Frederick Winslow, 71<br />
Taylor, William C., 606
Tee-Van, John, 100<br />
Teflon, 746-750<br />
Telecommunications Research<br />
Establishment, 201<br />
Telegraphy, radio, 616<br />
Telephone; cellular, 172-176; long-distance,<br />
482-485; rotary dial, 663-<br />
667, 751, 774-776; touch-tone, 667,<br />
774-777<br />
Telephone switching, 751-755<br />
Television, 756-760<br />
Teller, Edward, 78, 412, 414, 416<br />
Tesla, Nikola, 13, 832<br />
Teutsch, Georges, 1<br />
Tevatron accelerator, 761-764<br />
Texas Instruments, 140, 153, 232-233,<br />
419, 577-579, 787-788<br />
Theiler, Max, 905<br />
Thein, Swee Lay, 370<br />
Thermal cracking process, 765-769<br />
Thermionic valve, 564<br />
Thomson Electron Tubes, 901<br />
Thomson, Joseph John, 494, 496, 563-<br />
564, 838<br />
Thornycroft, John Isaac, 407, 409<br />
Thornycroft, Oliver, 442<br />
Tidal power plant, 770-773<br />
Tiros 1, 887-890<br />
Tiselius, Arne, 815<br />
Tokyo Telecommunications<br />
Engineering Company, 778, 780,<br />
787, 875. See also Sony Corporation<br />
Tomography, 168, 170<br />
Topografiner, 679<br />
Torpedo boat, 409<br />
Touch-tone telephone, 667, 774-777<br />
Townes, Charles Hard, 459<br />
Townsend, John Sealy Edward, 365<br />
Toyota Prius, 363<br />
Transistor radio, 786-790<br />
Transistors, 172, 229, 232, 390-391, 418-<br />
419, 778-788, 875-876; invention of,<br />
840<br />
Traut, Herbert, 549<br />
Tressler, Donald K., 343<br />
Truman, Harry S., 78<br />
Tsiolkovsky, Konstantin, 63, 65<br />
Tuberculosis vaccine, 791-794<br />
Tungsten filament, 795-798<br />
Tuohy, Kevin, 235<br />
Tupper, Earl S., 799, 803<br />
Tupperware, 799-806<br />
Turbojet, 807-810<br />
Turing, Alan Mathison, 104, 200, 668<br />
Turner, Ted, 208, 211<br />
Tuskegee Airmen, 612<br />
Typhus vaccine, 811-814<br />
U-2 spy plane, 245, 432<br />
U-boats. See Submarines<br />
Ulam, Stanislaw, 412, 414<br />
Ultracentrifuge, 815-818<br />
Ultramicroscope, 819-822<br />
Ultrasound, 823-827<br />
Unimate robots, 654-656<br />
UNIVAC. See Universal Automatic<br />
Computer<br />
Universal Automatic Computer, 106,<br />
315, 331, 348, 711, 828-831<br />
Upatnieks, Juris, 402<br />
Urey, Harold Clayton, 724<br />
Uterine cancer, 549, 552<br />
V-2 rocket, 65, 244, 659, 662, 871-874<br />
Vaccines. See Polio vaccine;<br />
Tuberculosis vaccine; Typhus<br />
vaccine; Yellow fever vaccine<br />
Vacuum cleaner, 832-836<br />
Vacuum tubes, 339, 837-841; and<br />
computers, 106, 201-202, 313-314;<br />
and radar, 391; and radio, 478, 623;<br />
and television, 783; thermionic<br />
valve, 564; and transistors, 229, 391,<br />
778-780, 786-787, 876. See also<br />
Cathode-ray tubes<br />
Vat dye, 842-845<br />
VCR. See Videocassette recorder<br />
Vectograph, 432<br />
Veksler, Vladimir Iosifovich, 720<br />
Velcro, 846-849<br />
Vending machine slug rejector, 850-856<br />
Videocassette recorder, 214, 218, 857-<br />
860; and LaserVision, 465<br />
Videodisc, 219<br />
Vigneaud, Vincent du, 50<br />
Virtual machine, 861-865<br />
Virtual reality, 866-870<br />
Vitaphone, 742<br />
Vogel, Orville A., 638, 643
Volta, Alessandro, 355<br />
Vonnegut, Bernard, 183<br />
Vulcanization of rubber, 146, 149<br />
Wadati, Kiyoo, 645<br />
Waldeyer, Wilhelm von, 673<br />
Walker, William H., 130<br />
Walkman cassette player, 165, 784, 788,<br />
875-882<br />
Waller, Augustus D., 293<br />
Warner Bros., 741-745<br />
Warner, Albert, 741, 744<br />
Warner, Harry, 741, 744<br />
Warner, Jack, 741, 744<br />
Warner, Samuel, 741, 744<br />
Warren, Stafford L., 487<br />
Washing machine, electric, 883-886<br />
Washington, George, 289<br />
Wassermann, August von, 676, 737<br />
Watson, James D., 41, 177, 729, 733<br />
Watson, Thomas A., 482<br />
Watson, Thomas J., 394, 558<br />
Watson, Thomas J., Jr., 386<br />
Watson-Watt, Robert, 611<br />
Weather; and astronomy, 609; cloud<br />
seeding, 183-186; and rockets, 712<br />
Weather satellite, 887-890<br />
Wehnelt, Arthur, 837<br />
Wells, H. G., 659<br />
Westinghouse, George, 335<br />
Westinghouse Company, 101, 440, 758-<br />
759, 832<br />
Wexler, Harry, 887<br />
Whinfield, John R., 589<br />
Whitaker, Martin D., 525<br />
White, Philip Cleaver, 421<br />
White Sands Missile Range, 873<br />
Whitney, Eli, 71, 335<br />
Whittle, Frank, 807<br />
Wichterle, Otto, 235<br />
Wigginton, Randy, 28<br />
Wigner, Eugene, 525, 789<br />
Wilkins, Arnold F., 611<br />
Wilkins, Maurice H. F., 733<br />
Wilkins, Robert Wallace, 634<br />
Williams, Charles Greville, 146<br />
Wilmut, Ian, 177-178<br />
Wilson, Robert Rathbun, 761<br />
Wilson, Victoria, 370<br />
Wise, Brownie, 799<br />
Wolf, Fred, 289-290<br />
Woodrow, O. B., 883<br />
World War I, and nitrates, 18<br />
World War II; and Aqualung, 36;<br />
atomic bomb, 84, 118, 521, 525-527,<br />
697, 721; and computers, 92; spying,<br />
34, 200, 202, 668; V-2 rocket, 65, 244,<br />
659, 662, 871-874<br />
Wouk, Victor, 360, 362<br />
Wozniak, Stephen, 28, 30<br />
Wright, Almroth, 555<br />
Wright, Orville, 6-10, 658<br />
Wright, Wilbur, 6-10, 335, 658<br />
Wynn-Williams, C. E., 200<br />
Xerography, 891-895<br />
Xerox Corporation, 891-894<br />
X-ray crystallography, 896-900<br />
X-ray image intensifier, 901-904<br />
X-ray mammography, 486-489<br />
Yellow fever vaccine, 905-908<br />
Yoshino, Hiroyuki, 360<br />
Zaret, Milton M., 468<br />
Zenith Radio Corporation, 209, 341<br />
Zeppelin, Ferdinand von, 267-270<br />
Ziegler, Karl, 593<br />
Zinn, Walter Henry, 118<br />
Zinsser, Hans, 811<br />
Zippers, 846-847<br />
Zoll, Paul Maurice, 545<br />
Zsigmondy, Richard, 819, 821<br />
Zweng, H. Christian, 468<br />
Zworykin, Vladimir, 756, 758