YSM Issue 86.1


Yale Scientific

Established in 1894


January 2013

Vol. 86 No. 1



The Elixir of Life
How water is pouring new power into
PAGES 14-16

ecology · microbiology · bioethics

Mollusk Mystery
New fossil evidence sheds light on the evolutionary history of mollusks
PAGE 11

Secrets of Bee Bacteria
Studies on gut microbiota may yield promising clues to honey bee health
PAGES 17-19

Vaccination Decisions
The emerging influence of altruism on vaccine coverage rates
PAGES 22-23

everyday Q&A


What Happens During a Nuclear Meltdown?

The science behind core melt accidents.

Few news headlines sound as frightening as those containing the words “nuclear meltdown.” But despite the flurry of fear that this term can generate, what exactly does it mean?

The core of a nuclear reactor contains water and fuel rods, which are usually clad in zirconium and contain nuclear fuel such as uranium. Inside the rods, heavy atomic nuclei split into smaller fragments, a process called nuclear fission, in a self-sustaining chain reaction, and the energy released by fission heats the water. This produces steam as hot as 550°F, which drives turbines to generate electricity.
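The scale of this chain reaction can be sketched with a back-of-envelope calculation. The figures below (about 200 MeV released per uranium fission) are textbook values, not from the article:

```python
# Back-of-envelope sketch (textbook figures, not from the article):
# how many fission events per second a reactor core needs to sustain
# a given thermal power output.
MEV_TO_JOULES = 1.602e-13          # 1 MeV expressed in joules
ENERGY_PER_FISSION_MEV = 200       # typical energy released per U-235 fission

def fissions_per_second(thermal_watts):
    """Fissions per second required to sustain a given thermal power."""
    return thermal_watts / (ENERGY_PER_FISSION_MEV * MEV_TO_JOULES)

# A 1 GW (thermal) core requires on the order of 10^19 fissions per second.
rate = fissions_per_second(1e9)
print(f"{rate:.2e}")  # → 3.12e+19
```

Even a single gram of fuel undergoing fission releases roughly a million times more energy than burning a gram of coal, which is why cooling must never be interrupted.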


A nuclear meltdown results from overheating of the fuel rods in the core of the nuclear reactor. Usually, water is circulated through the reactor to maintain a stable temperature. If water circulation ceases, the fuel rods can melt, and the molten nuclear material falls to the bottom of the reactor, further clogging cooling channels. This is known as a partial meltdown. The more serious concern is if the temperature inside the reactor cannot be lowered, leading to displacement of the reactor lid and the escape of radioactive particles into the environment, a phenomenon known as a full meltdown.

Nuclear reactors rely on radioactive energy to generate electricity. Courtesy of High Tech.

Both the 1979 Three Mile Island accident in Pennsylvania and the 2011 Fukushima nuclear accident were partial meltdowns. On the other hand, the 1986 Chernobyl disaster was a full meltdown, and the long-term effects of its huge release of radiation, such as cancers and birth defects, are still being tallied. The takeaway lesson? Although nuclear power is seen as more environmentally friendly than other energy resources, it is a force to keep under close control.


Death by Super Solar Flare?

The possibility of an Armageddon from our Sun.

As the year draws to a close, the alleged Mayan doomsday date of December 21, 2012 has led many to wonder whether natural disasters such as earthquakes and tsunamis will shake the Earth before the next year arrives. According to some rumors, we may soon be due for one of these purported natural disasters: giant solar storms. But could a solar flare actually lead to the end of the world? NASA says not likely.

A solar flare is characterized by sudden and intense changes in brightness, resulting from the abrupt release of huge amounts of accumulated magnetic energy in the solar atmosphere. This process liberates significant quantities of electromagnetic radiation and high-energy particles that can be harmful to the Earth. Such energy can affect the upper regions of Earth’s atmosphere and disrupt satellite signals, interrupting GPS systems, the Internet, and other modern technological essentials. More intense explosions on the Sun’s surface that send larger amounts of energetic particles into Earth’s atmosphere may even be powerful enough to disturb power grids at ground level.

Solar activity peaks about every 11 years, which has fed the hype that an intense solar flare will coincide with the Mayan doomsday. Although solar activity is currently reaching a high point in its cycle, scientists are quite certain that the heat from a solar flare cannot directly harm Earth or its inhabitants because of the planet’s distance from the Sun. So rest assured: fears of a real-life reenactment of the film 2012 can be quelled.


A solar maximum is the period of greatest activity in the sun’s cycle, during which super storms can release large numbers of high-energy particles that can damage power systems on Earth. The sun is expected to reach its next peak in 2013. Courtesy of Renewable Power News.

2 Yale Scientific Magazine | January 2013 www.yalescientific.org



Letter from the Editor


January 2013 / Vol. 86 / Issue No. 1




Quantum Teleportation

Spielman 2012 MacArthur Fellow







Smoking in Bars and Drinking

Zeldovich Medal

Studying Bats to Improve Sensors

Ketamine Depression Treatment

Tweezing Out the SNARE Complex

Shucking the Mollusk Mystery













The Splitting of the Indo-Australian Tectonic Plate

Global Health

Test Tube Meat


Biological Warfare


The Worst Epidemics in History

Undergraduate Profile

Aaron Feuer, ES '14

Alumni Profile

Jonathan Rothberg, '91

Science Essay Contest

Improving Science Education

Book Reviews

-The Doomsday Handbook: 50 Ways the World Could End
-The End of the Line: A Timely Wake-Up Call
-The Human Quest: Prospering Within Planetary Boundaries


Frankenstein Jellyfish


The End of the World?



Fighting the Freeze: How Antarctica’s Shifting Landscape Shaped Notothenioid

Yale researcher Dr. Thomas Near uncovers the extent to which the changing landscape of Antarctica has shaped the evolution of its icefish.


The Secret Life of Bee Bacteria: Gut Microbiota May Yield Clues to Honey Bee Health

Honey bees, nature’s primary pollinators, have been plagued in recent years by unexplained disappearances characteristic of Colony Collapse Disorder. Research by Yale Professor of Ecology and Evolutionary Biology Nancy Moran into the gut microbiota of these bees may yield clues to understanding this mysterious disorder.


The Elixir of Life: Generating Electricity from Water

With the growing global demand for electricity, the need for viable alternative energy sources is ever-present. Dr. Menachem Elimelech, Professor of Chemical and Environmental Engineering at Yale University, studies how water can be used as a sustainable and cost-effective energy source.

Vaccination Decisions: Selfish, Selfless, or Both? Marketing to Mixed Motives

While game theory implies that individual vaccination decisions would be driven solely by self-interest, new research by former and incoming Yale faculty members suggests that non-selfish motivations such as altruism and cooperation may have a significant influence on vaccine coverage rates.

Volcanic Eruptions: Modeling Magma Wagging to Anticipate Volcanic Behavior

Microbots: Using Nanotechnology in Medicine






This is the way the world ends.

Not with a bang but a whimper.

T.S. Eliot








World demand for fresh water has jumped 30 percent over the past two decades, with agriculture comprising 70 percent of the increased demand. Close to four billion people, half the world’s population, are living in areas of high water stress.

The ecosystem is also a very complex matrix of interdependent relationships; if mankind continues to destroy the planetary ecosystem, eventually a destructive critical mass will be reached. Crops will not grow because the insects that pollinate them have died off.

In brief, a super volcano is a giant volcano that generates an eruption in the VEI 7 to VEI 9 category. The largest explosive super eruption identified, a VEI 9, ejected 5,000 km³ of material. In deep geological time, that event happened yesterday, and we are already overdue for the next one.










The world’s farmers are fighting a losing battle. Agricultural productivity began leveling off several years ago as water scarcity and the effects of climate change became more pronounced. With the worst effects of climate change yet to be felt and energy prices still rising, the long-term picture is not promising.

If nuclear powers start using weapons on each other, mutually assured destruction follows, and radiation would engulf Earth on an unimaginable scale.





January 2013 Volume 86 No. 1



Managing Editors

Articles Editors

News Editor

Features Editor

Copy Editors

Production Manager

Layout Editors

Arts Editor

Online Editor

Multimedia Editor

Advertising Manager

Distribution Manager

Subscriptions Manager

Outreach Chair

Special Events Coordinator


Daniel Arias

Andrew Deveau

Andrew Goldstein

Walter Hsiang

Bridget Kiely

Katie Leiby

Kaitlin McLean


Shaunak Bakshi

Grace Cao

Kirsten Dowling

Selin Isguvin

Sophie Janaskie

Savina Kim

Jennifer Ky

Yale Scientific


Established 1894

William Zhang

Elizabeth Asai

Jonathan Hwang

Robyn Shaffer

Nancy Huynh

Shirlee Wohl

Mansur Ghani

Renee Wu

Ike Lee

Jessica Hahne

Li Boynton

Jessica Schmerler

John Urwin

Jeremy Puthumana

Jonathan Liang

Chukwuma Onyebeke

Stella Cao

Naaman Mehta

Karthikeyan Ardhanareeswaran

Lara Boyle

Mary Labowsky

Theresa Oei

Terin Patel-Wilson

Rebecca Su

Nicole Tsai

Elisa Visher

Dennis Wang

Jason Young

Jared Milford

Meredith Redick

Josephine Smit

Ike Swetlitz

Nicole Tsai

Elisa Visher

Joyce Xi

Advisory Board

Sean Barrett, Chair


Priyamvada Natarajan


Kurt Zilm


Fred Volkmar

Child Study Center

Stanley Eisenstat

Computer Science

James Duncan

Diagnostic Radiology

Melinda Smith

Ecology & Evolutionary Biology

Peter Kindlmann

Electrical Engineering

Werner Wolf


John Wettlaufer

Geology & Geophysics

William Summers

History of Science & History of Medicine

Jeremiah Quinlan

Undergraduate Admissions

Carl Seefried

Yale Science & Engineering Association


Science and the End of the World

Despite the flurry of apprehension, the world did not end on December 21, 2012. As the winter solstice passed and midnight crept by on the 21st, there was no onset of natural disasters, no planetary collision, no apocalyptic catastrophe. Just as a classic automobile’s odometer resets to zero after reaching 99,999.9 miles, and just as our calendars restarted in the year 2000 after the conclusion of 1999, the supposed doomsday only brought about the beginning of the next day. And while the Mayan calendar may have ended on this day, that culmination likewise signified only the end of a cycle — not the end of the world. As the New Year was ushered in, credence in Mayan doomsday theories largely dissipated; however, new doomsday theories will likely take their place, nestled again in popular culture.

It would seem wise to learn from these false alarms, but tales of brimstone and fire have spread throughout the course of history. For example, the Millerites believed the world was ending in 1843; an ancient Sumerian culture is claimed to have predicted the encounter of Earth with another celestial body in 2003; and the evangelist radio broadcaster Harold Camping forecast dates of supposed rapture in both 1994 and 2011. Clearly the world did not end in any of these instances, and experts assured that there was no reason to buy into the hype of the Mayan doomsday — there was no scientific basis for these predictions, no hard evidence, but still, many entrenched themselves in the phenomenon.

Although these cycles of doomsday frenzy will likely continue, this is not to say that the world will never end. According to scientific data, Earth has an expiration date of approximately four to five billion years from now, when the sun’s supply of hydrogen dwindles. Scientists also speculate about the possibility of a catastrophic collision with a meteor or comet wiping out all life before that biological expiration — though the estimated timeline is still far in the future. Until then, scholars suggest that humans are accelerating our own demise through our inability to resolve problems such as diminishing natural resources, the thinning ozone layer, increasingly pervasive natural disasters, and emerging epidemics. Though some of the rhetoric in these arguments may be exaggerated, the issues shine light on arguably more realistic threats to our lives, those that have grounding in actual evidence, as opposed to doomsday theories that are generally based on superstition and speculative rumors. In this issue of the Yale Scientific, we found it apt to explore some potentially disastrous threats and the scientific developments in these fields, ranging from the mysterious phenomenon of honey bee colony collapse, with potential ripple effects in the greater ecosystem, to the perils of biological warfare and research at Yale on predicting the theoretically catastrophic events of volcanic eruptions.

As the 2012 Masthead concludes its tenure, we thank you all for your readership and support as we welcome in the new year, the new Mayan era, and the scientific advancements that will, we hope, keep the world from ending anytime soon.

William Zhang


The Yale Scientific Magazine (YSM) is published four times a year by Yale Scientific Publications, Inc. Third class postage paid in New Haven, CT 06520. Non-profit postage permit number 01106 paid for May 19, 1927 under the act of August 1912. ISSN: 0091-287.

We reserve the right to edit any submissions, solicited or unsolicited, for publication. This magazine is published by Yale College students, and Yale University is not responsible for its contents. Perspectives expressed by authors do not necessarily reflect the opinions of YSM. We retain the right to reprint contributions, both text and graphics, in future issues, as well as a non-exclusive right to reproduce these in electronic form. The YSM welcomes comments and feedback. Letters to the editor should be under 200 words and should include the author’s name and contact information. We reserve the right to edit letters before publication. Please send questions and comments to ysm@yale.edu.


About the Art

The cover, designed by Contributing Artist Chanthia Ma, depicts a glass of water — the elixir of life — transformed into electricity to power a city (image adapted from work by Ferdi Rizkiyanto). With the growing global demand for electricity, the need for viable alternative energy sources is ever-present. Dr. Menachem Elimelech, Professor of Chemical and Environmental Engineering at Yale University, studies how water can be used as a sustainable and cost-effective energy source. The theme page and headers on pages 12 and 14 were designed by Production Manager Li Boynton. The header on page 20 was designed by Arts Editor Jeremy Puthumana.


Quantum Teleportation

Current developments in quantum technology may provide a faster and safer method of information processing. Quantum teleportation is a crucial building block for many quantum information processing schemes. It does not physically transport any objects but relies on the states of quantum particles, which can be viewed as information catalogs fully characterizing those particles. Quantum teleportation uses both quantum and classical communication channels. The quantum channel comprises pairs of entangled photons; entangled particles exhibit correlations that are stronger and more intricate than the laws of classical physics allow.

An illustration of a quantum state being transmitted between the two Canary Islands, La Palma and Tenerife. Courtesy of Educational Foundation.


In a recent article in the September issue of Nature, Xiao-Song Ma, together with his colleagues from Austria and Canada, achieved the teleportation of a quantum state from La Palma to Tenerife, two Canary Islands separated by a striking distance of 143 kilometers. The goal for future projects is to keep increasing the distance over which quantum states can be delivered, which may best be achieved from ground to satellite. Ma says, “to develop [quantum technology] in full scale and allow it to be usable by everyone, one needs to make the information processing platform smaller.” After completing the work in the Canary Islands, he joined Professor Tang’s group (Yale Nanodevices Laboratory) as a Postdoctoral Fellow this July. His current work focuses on integrated quantum optics. According to Ma, with the fast-paced work at Tanglab and other facilities around the world, there is tremendous potential for quantum technology to usher in a new era of information processing built on semiconductor chips.

Professor Spielman Receives “Genius Grant”

On October 2, 2012, Yale Computer Science Professor Daniel Spielman was awarded the MacArthur Fellowship. Spielman received the $500,000 no-strings-attached award, often referred to as the “genius grant,” to support the next five years of his research in theoretical computer science and applied math.

Spielman’s research focuses on the design of efficient algorithms, a field that has fascinated him since he was a teenager developing algorithms to solve mathematical puzzles. Some of his most important work deals with the development of new error-correcting codes, which have critical applications in the field of electronic communication. For example, when two people exchange messages by cellphone, some of the information is lost because of interference. Spielman’s algorithms can be used to reconstruct messages despite this loss of information. “The main limiting factor in the speed at which your cell phone can communicate is the amount of interference,” Spielman says. Creating better error-correcting codes is one of the most important ways to increase how quickly data can be sent to cell phones across the world.
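Spielman’s codes are far more sophisticated, but the core idea of error correction (adding redundancy so a receiver can recover corrupted bits) can be sketched with a toy repetition code; this is an illustration only, not Spielman’s construction:

```python
# Toy repetition code -- much simpler than Spielman's codes, but it
# shows the basic idea: add redundancy so the receiver can recover
# the original message even after interference flips some bits.

def encode(bits, r=3):
    """Repeat each bit r times."""
    return [b for bit in bits for b in [bit] * r]

def decode(codeword, r=3):
    """Majority-vote each group of r received bits."""
    return [int(sum(codeword[i:i + r]) > r // 2)
            for i in range(0, len(codeword), r)]

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[1] = 0                     # interference flips one bit in transit
assert decode(sent) == message  # the original message still survives
```

Repetition is wasteful (three bits sent per bit of message); practical codes achieve the same protection with far less redundancy, which is what makes them worth designing.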

Spielman also developed a revolutionary method of analyzing how quickly algorithms run, called “smoothed analysis,” for which he won two of computer science’s most prestigious prizes. With this new prize, Spielman says he will be able to spend more time on his research, and he hopes it will attract more attention and students to his field in the next half-decade.

Professor Daniel Spielman was awarded a MacArthur Fellowship. Courtesy of Professor Daniel Spielman.



Curtailing Dangerous Habits



Can public policy curb two dangerous habits at once? Dr. Sherry McKee and her colleagues from the Roswell Park Cancer Institute believe so. In a recent study documented in Drug and Alcohol Dependence, McKee investigated the results of an international tobacco survey to study the correlation between smoking bans and reduced alcohol use. In bars that had recently enacted or had preexisting smoking bans, researchers found a significant reduction in the frequency of alcohol consumption among customers classified as heavy smokers and drinkers.

McKee’s study built upon prior findings, which demonstrated that the codependency between smoking and alcohol use partially stems from a pharmacological effect. “I believe that alcohol and tobacco interactions involve potentiated reinforcement,” McKee said. “This study demonstrates that policies designed to reduce tobacco use also reduce alcohol use among certain segments of the population, which has important public health implications.”

Research suggests that smoking bans will reduce alcohol consumption. Courtesy of chuckography.

With alcohol abuse the third leading cause of preventable death in the U.S., these implications are tremendously important. In addition, a reduction in public smoking protects both smokers and nonsmokers from the risks of tobacco-related illnesses, such as cardiovascular disease and cancer. Although these health threats are well documented, during the study period (2005-2008) only 59.9 percent of American bars were reported to be smoke-free, while closer to 100 percent of bars in the U.K., Australia, and Canada were smoke-free. Continuing this line of work, McKee is now studying the effects of tobacco taxes on alcohol consumption and alcohol abuse disorders.


Professor Smooke Awarded Zeldovich Medal

Mitchell Smooke, Professor of Mechanical Engineering and Materials Science and Applied Physics at Yale University, was awarded this year’s Zeldovich Gold Medal by the Combustion Institute for his work in combustion theory.

The Combustion Institute awards three Gold Medals every two years. One of them, the Zeldovich Medal, is named in honor of the Russian scientist Yakov B. Zeldovich, the father of combustion theory. The honor was recently presented to Smooke in Warsaw, Poland.

“I was thrilled about it,” said Smooke, who has been a key contributor to developing numerical and computational procedures. His work has helped to solve problems relating to chemically reacting flows, most notably flame structures.

When asked what made him unique among the other candidates across the world, Smooke replied, “I think part of it was that we had developed software that could be distributed, and we put this general area on the map when there were not a lot of people doing [computational combustion] many years ago.” Indeed, the number of researchers in this field has expanded exponentially within the last few years, and Smooke’s work has been critical to expanding the discipline.

Currently, Smooke and his colleagues are developing solutions to problems that involve surrogate fuels, mixtures of chemical components that mimic the behavior of transportation fuels. Funded by NASA, Smooke and his colleague Marshall Long are performing one of the five modeling projects at the International Space Station, known as the ACME project. Smooke continues to contribute to solving problems of computational combustion and to improving our knowledge of molecular diffusion and chemical kinetics today.

Professor Smooke was awarded the 2012 Zeldovich Medal for his contributions to combustion theory. Courtesy of Professor Mitchell Smooke.



A Bat’s World

BY Achutha Raman

Yale Professor of Electrical Engineering Roman Kuc has answered an important question regarding the ability of a bat to detect motionless prey. For more than 25 years, Kuc has been exploring the world of sonar at the Intelligent Sensors Laboratory, pioneering the field even as many other researchers, disgruntled with early failures of sonar technology for robot motion, have turned to focus on camera vision. He has taken a keen interest in investigating bats, visiting several of the hundred or so bat laboratories around the world. During the 2000s, he oversaw the progress of the EU-sanctioned project ChiRoPing (so named for the bat order Chiroptera, the intended application to robots, and the audible “ping” of sonar scans), which analyzed bat motion and perception with the goal of creating more effective perception systems. Inspired by an observation from this project, he published a paper this September, co-authored with his son Victor, that sheds light on a pivotal question of bat perception.

In bat echolocation, a bat emits either a continuous sonar signal of constant frequency, a swept signal that falls from a higher to a lower frequency, or a mixture of the two. This signal departs from the area of the bat’s mouth, bounces off the surroundings, and returns to the often-graded pinna of the bat’s ears to be instantaneously processed. Since the mid-twentieth century, bats have been understood to make use of Doppler shifting to identify prey. For moving objects, the Doppler shift of the returning signal can be used to identify the direction in which the body is moving: a lower frequency than emitted signifies a fleeing body, and a higher frequency signifies motion toward the bat. Prey that is motionless, however, would not Doppler-shift sonar echoes and, if silent, could not be separately heard via chirps or squeaks. Such prey would then be lost in the background clutter. In observing a video released by the ChiRoPing project, however, Kuc speculated that bats do in fact identify such uncooperative prey by sonar signals through induced motion.
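The Doppler logic described above can be illustrated with a small calculation; the emitted frequency and target speed below are hypothetical, and the formula is the standard low-speed approximation for a two-way sonar echo:

```python
# Illustrative two-way Doppler calculation (numbers are hypothetical,
# not from the article): an echo from a target moving toward the sonar
# source returns at a higher frequency, a fleeing target returns a
# lower one, and a motionless target returns no shift at all.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_frequency(emitted_hz, target_speed_ms):
    """Approximate echo frequency for target speeds much smaller than
    the speed of sound; positive speed means motion toward the source."""
    return emitted_hz * (1 + 2 * target_speed_ms / SPEED_OF_SOUND)

print(round(echo_frequency(50_000, 2.0)))   # approaching insect: 50583 Hz
print(round(echo_frequency(50_000, -2.0)))  # fleeing insect:     49417 Hz
print(round(echo_frequency(50_000, 0.0)))   # motionless prey:    50000 Hz
```

The zero-shift case is exactly the problem Kuc set out to explain: with no frequency change to listen for, a still, silent insect is indistinguishable from the leaf it sits on.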


In a slow-motion video released by ChiRoPing, a bat approaches a silent and motionless dragonfly, hovers above it to cause the dragonfly’s wings to flutter, and then grabs and eats its prey. Kuc and Victor suspected that air from the bat’s movements induced the dragonfly’s wing motion, which subsequently alerted the bat to the presence of food. They built a biomimetic model to test whether enough information was provided by the bat’s hovering to reasonably identify the dragonfly’s presence. Their experimental model used a modified sonar sensor that could transmit and receive sonar, a sample dragonfly from the Peabody Museum placed upon a plastic leaf, and a second leaf surface positioned slightly closer to the sonar source as a potential deterrent. An automated airbrush simulated air pressure from the bat’s moving wings while hovering above the insect. Kuc found that from a significant distance, it was possible to identify the general location of something unusual, such as a protrusion on an otherwise flat surface, via echo intensity and return time. Moreover, when the air pressure was calibrated to correspond with the bat’s extended hovering, the induced dragonfly wing motion resulted in an increasing echo return time and diminished intensity relative to the earlier data reading. Processing these observations would hence alert the nocturnal animal to unique and potentially tasty winged prey hiding in the clutter.

A common North American big-eared bat. Courtesy of Wikipedia Commons.

The experimental set-up used by Professor Kuc and his son: a sonar system (far right) emits pulses and detects echoes while a computer-controlled airbrush (center) provides air puffs that deflect the dragonfly wings (far left). The response of the more massive leaves to the air puffs is negligible. Courtesy of Roman Kuc, Yale University Department of Electrical Engineering.

Current studies are investigating whether experiments with wingless dragonflies will have similar results and whether moveable structures are necessary to identify silent and soundless objects otherwise hidden in the background. These studies have important implications for sonar technology. Heightened sonar clarity would be very important for underwater mine hunting in the ocean, which the military has been attempting to do with dolphins. It would also prove useful for automated navigation of the ocean, deep ravines, and other unlit areas of the earth.

At the same time, there is also something to be said for simply understanding the bounty of riches provided through the senses. As Kuc exclaims, “All our senses are amazing!” In studying acoustics, he says, we have “just another view of the world that might be useful to sonar and might also be helpful in understanding perception in





Special K: Leader of the Antidepressant Revolution

Ronald Duman, Professor of Psychiatry at the Yale School of Medicine, has made an influential discovery: ketamine may become a revolutionary new medicine for the treatment of depression due to its ability to restore and promote synaptic connections.

Clinical studies conducted by John Krystal, Chair of the Yale Department of Psychiatry, initially revealed ketamine’s rapid treatment of depression. Krystal and his colleagues found that ketamine promoted an elevation of mood within four to six hours. Currently available antidepressants, by contrast, require several weeks or months before showing any therapeutic response. Furthermore, patients who are considered treatment-resistant and fail to respond to conventional medicine have shown immediate improvements when treated with ketamine. Common treatment options such as Prozac and related antidepressants have improved only minimally since the initial drugs were discovered over 50 years ago, except for a reduction in side effects. These agents act by “blocking the reuptake of serotonin by increasing the synaptic levels of serotonin over the course of several weeks; [conventional drugs] are able to produce a slow anti-depressant response but not as effectively as ketamine,” Duman said. Concerning ketamine, he believes “this was the biggest finding in the field of depression in terms of treatment in over 50 years.”

Ketamine is a dissociative anesthetic used in pediatric and veterinary medicine, and it acts as an NMDA-glutamate receptor antagonist in the brain. Ketamine causes a burst of glutamate transmission, promoting new synaptic connections through a mechanism similar to models of learning and memory. What makes ketamine a very safe anesthetic for pediatric medicine is that it does not bring about a decline in respiration or heart activity, unlike other types of anesthetics. However, at lower doses it produces some mild psychotomimetic side effects, creating hallucinations and dissociative effects that make it problematic as a ubiquitous treatment for depression. Because of the pleasure associated with such side effects, ketamine has become a drug of abuse, known by the street name “Special K.” There is no endogenous substance like ketamine, and there is no known way of inducing the production of ketamine in the body.

Ketamine reinstates the control of mood and anxiety by restoring synaptic connections in some brain regions.

The human body does not produce ketamine endogenously. Courtesy of BBC.

The question of what causes depression has still not been answered thoroughly and is an ongoing topic of research. “It is a very heterogeneous illness and is probably caused by a number of different abnormalities,” says Duman. He believes that repeated stress and unbalanced production of hormones are among the main factors. Studies conducted on both animals and humans to identify the roots of depression suggest that chronic stress can actually cause atrophy and loss of connections between neurons in certain regions of the brain. Such loss of connections can bring a detectable reduction in the volume of these regions, including the hippocampus and prefrontal cortex of some depressed patients. In animals, this effect was attributed to a decreased number of synapses between neurons. Some of these regions, such as the prefrontal cortex, are responsible for the control of mood, emotion, and anxiety, and this control deteriorates with the loss of synaptic connections. Duman has discovered that ketamine not only fixes some of these deficits in synaptic connections very rapidly but also creates even more synaptic connections in these brain regions, reestablishing proper control of anxiety, mood, and behavior in depressed patients. This effect can last for seven to ten days before the patient starts relapsing.

Research is ongoing to discover ketamine modifications that have longer-lasting positive effects and allow repeated use without negative side effects. Other potential ways of enhancing ketamine as a revolutionary antidepressant are to develop oral or intranasal administration methods and to add chemical compounds that sustain the effect. The ongoing research in this field indicates great potential for the development of a powerful defense against depression.

www.yalescientific.org January 2013 | Yale Scientific Magazine 9
Tweezing out the SNARE Complex


Using new technology dubbed “optical tweezers,” Yale researchers,

led by Professors of Cell Biology Yongli Zhang and Jim Rothman, have

discovered intricate details about the workings of a protein complex

that is the engine of membrane fusion in mammals and yeast.

SNARE proteins, or Soluble NSF Attachment Protein Receptors,

fuse membranes through a process known to scientists as the “zippering

model.” A SNARE complex consists of two sets of proteins,

T-SNARE and V-SNARE, which are located on two membranes and

join to “zipper” the membranes together. SNARE protein complexes

are involved in all intracellular trafficking processes and are also key

players in many diseases, such as those in which pathogens take advantage

of the SNARE mechanism to infect cells.

In the late 1990s, the crystal structure of this important protein

complex was solved, largely due to the contributions of A.T. Brunger

and colleagues at the Yale Department of Molecular Biophysics and

Biochemistry. At that time, Zhang was a rotation graduate student in

Brunger’s lab. Years later, after completing his postdoctoral work and

starting his own lab, Zhang realized that the SNARE complex could

be a prime target for optical tweezer technology.

“An optical tweezer basically extends our hand such that we can

grab a polystyrene bead attached to a single molecule and move the

bead,” said Zhang. “I decided that the SNARE protein would be the

perfect subject to study with optical tweezers because in this protein,

mechanical force is very significant. It takes a lot of force to draw two

membranes together for fusion, and using optical tweezers, we can

directly measure this force.”

Using optical tweezers, Zhang and his team pull on a single SNARE

complex to measure the force it takes to unzip the complex and the

extension change to study how it re-zippers. The force is related to the

strength of the complex and the folding energy of the protein, and the extension is related to its structure. From these data, the researchers can deduce the amount of energy produced by the SNARE zippering process and the process’s intermediate states.

An example of a cellular process involving SNARE membrane fusion. Courtesy of Nature Reviews: Molecular Cell Biology.

Their single-molecule experiments led Zhang and his team to confirm

that the SNARE engine is indeed a powerful molecular machine that

is well-suited for membrane fusion. They found that a single SNARE

complex can generate up to 65 kBT of free energy, which is likely the most energy generated by the folding of any single protein complex.
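To give a sense of scale for 65 kBT, a quick conversion into joules and electronvolts is sketched below. The ~310 K temperature is an assumption for illustration; the article does not state the temperature used in the experiments.

```python
# Convert the ~65 kBT folding free energy of a single SNARE complex
# into joules and electronvolts. The 310 K (body temperature) figure
# is an assumption, not a value from the article.
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # kelvin (assumed)
N_KBT = 65          # free energy in units of kBT, from the article

energy_J = N_KBT * K_B * T                  # joules per complex
energy_eV = energy_J / 1.602176634e-19      # electronvolts per complex

print(f"{energy_J:.2e} J per complex")      # ~2.8e-19 J
print(f"{energy_eV:.2f} eV per complex")    # ~1.7 eV
```

A couple of electronvolts per folding event is enormous for a single protein complex, which is the sense in which SNARE zippering is called a powerful molecular machine.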

Zhang and Rothman also identified several intermediate states of

the SNARE zippering process and published their results in the September

issue of Science. The half-zippered state they identified has a

particularly significant biological role because the SNARE complex is

only partially zippered at first in a primed state. Only once a membrane

fusion-triggering action potential arrives can the half-zippered state

continue to fold rapidly and finish fusing the membranes.

“The fully-folded SNARE complex has a very beautiful four-helix

structure,” said Zhang. “What was missing was how it folds, how it

assembles, and using optical tweezers, we investigated how a single

SNARE complex assembles and folds in real time.”

Applying optical tweezer technology to the SNARE complex did

not come without its challenges, however. At first, the researchers had

trouble forming the complex and attaching it to the polystyrene beads.

“There were a lot of challenges in the molecular biology of forming

these kinds of linkages before we got to the optical tweezers,” said

Zhang. “After we found a way to attach a single SNARE complex

between the two beads, the rest was quite straightforward.”

Now that details of the energetics and kinetics of the SNARE zippering

process have been elucidated, a large focus of SNARE research

is to understand how SNARE zippering is regulated. Synaptic membrane fusion, an important role of SNARE zippering, occurs only after an action potential arrives — thus, there are many proteins regulating calcium signals sent to the SNARE complex.


Although throughout the course

of his research, Zhang used optical

tweezers primarily on the SNARE

protein complex, he stresses what

he believes is the wide applicability

of this technology. “Many

people don’t know about optical

tweezers or consider them to be a

very specialized tool,” said Zhang.

“However, I think one can find a

wide application for them in biology

and especially in molecular

biology. At Yale I have been trying

to let more and more people know

about optical tweezers — they are

one of the few tools that allow us

to catch a single biomolecule and

play with it.”




A Hard Case: Shucking the Mollusk Mystery

Specimen of Kulindroplax perissokomos,

the shelled aplacophoran

from the Silurian of Herefordshire,

as seen in the split

concretion before reconstruction

by grinding. Courtesy of Derek Briggs.


New findings suggest that one version of the proverbial chicken-or-egg

dilemma has been resolved. The answer, however, applies not

to poultry but to mollusks.

A team of researchers led by Mark Sutton of Imperial College

London and Derek E. G. Briggs, Director of the Yale Peabody

Museum of Natural History,

recently discovered a new fossil

which has helped elucidate the

evolutionary history of mollusks,

a category of invertebrate

species that includes octopuses,

chitons, snails, and oysters. The

find represents the only known

sample of the ancestral species

Kulindroplax perissokomos. The

living examples of the Aplacophora

class of mollusks, characterized

as worm-like creatures,

are believed to have evolved

from an animal like Kulindroplax.

Before the discovery, scientists

had long debated the origins of

the Aplacophora class of mollusks,

whose modern-day members

are a collection of shell-less

species. One hypothesis posited

that the aplacophorans deviated

early on in the evolution

of the mollusks, representing a

“primitive” line of organisms.

The other theory, known as the

Aculifera hypothesis, argued

that Aplacophora evolved from shelled ancestors and are closely related

to other shelled species in the Polyplacophora group, such as chitons.

Ultimately, these two hypotheses touch upon much deeper questions

regarding the origin of shells in mollusks and the evolutionary relationship

between shelled and unshelled species.

Though recent molecular studies corroborated the Aculifera model,

researchers lacked concrete fossil evidence of an ancestral species

common to both shelled and shell-less mollusks that would help verify

this hypothesized evolutionary relationship.

“The prediction that [this] model makes is that we should find some

kind of intermediate morphology between chitons (Polyplacophora)

and worm-like mollusks (Aplacophora),” Briggs said.

The fossil discovery of the Kulindroplax specimen at a deposit on

the English-Welsh border known as the Herefordshire Lagerstätte

was exactly the kind of physical evidence needed to corroborate the

Aculifera hypothesis. Briggs calls this “the missing link.” Using a

grinding apparatus that removed layers of rock only a few microns

thick, the researchers were slowly able to reveal the fossil and produce

a three-dimensional digital reconstruction. According to Briggs, the

fossil was “so remarkably preserved” in the concretion that the form

of Kulindroplax could be accurately described, almost as if it were a

living animal.

The formal report was published in Nature by Sutton, Briggs, and

fellow researchers David Siveter, Derek Siveter, and Julia Sigwart. The

study presents a morphological analysis demonstrating that Kulindroplax

perissokomos is an ancestral species that possesses both aplacophoran

and polyplacophoran characteristics. The most striking physical trait

of the four centimeter-long specimen is a series of seven valves or

shells coating the exterior of the organism, which suggests that Aplacophora

and Polyplacophora evolved from a common, shelled ancestor

and that modern-day shell-less aplacophorans arose after losing

their shells in the course of evolution. In fact, the round body shape

of Kulindroplax resembled that of existing aplacophorans, while the

valve morphology resembled that of modern-day polyplacophoran

chitons. The specimen was also covered in spicules or short spines; the

researchers hypothesized that these were used for movement through

thick sea-floor sediment.

As a consequence of the find, researchers now believe that aplacophorans

emerged more than 50 million years after the Cambrian

Explosion — a period of time approximately 540 million years ago

that witnessed a sudden increase in animal diversity.

According to Briggs, the finding not only allows researchers to better

understand the evolutionary history of mollusks, but also highlights

how the fossil record continues to provide new insights into the evolution

of living groups.

Virtual reconstruction of Kulindroplax perissokomos (upper and

side views). The specimen is about 4 centimeters long and there

has been some loss of detail due to decay at the front and rear.

Courtesy of Mark Sutton.


Thousands of feet underneath the

shifting ice sheets of Antarctica lurks

one of the world’s greatest evolutionary

success stories. A variety of distinct fish

species, collectively called the notothenioids,

have developed the ability to avoid freezing

under extreme conditions through the evolution

of antifreeze glycoproteins. Today, these

fish make up approximately 75% of the fish species diversity and 95% of the fish biomass in Antarctic waters.

Their story not only illustrates how relatively

small changes in temperature have led to

major differences in species survival but also foreshadows

the consequences that global warming may

have on this intrepid family of fish.

Antarctica’s Changing Landscape Shaped the Evolution

of the Notothenioids

The story of the notothenioids begins

roughly 40 million years ago with the separation

of Antarctica from the supercontinent

Gondwana. The split led to the creation of

an isolated continent and the Antarctic Circumpolar

Current. Driven by strong westerly

winds found in the latitudes of the Southern

Ocean, the Antarctic Circumpolar Current

blocked warm water from reaching Antarctica’s

shores. Antarctica’s formerly tropical

climate shifted dramatically and the temperature

plummeted. These changes exerted great

evolutionary pressure on the endemic species

of the region and, for many species, resulted

in mass extinction.

The notothenioids would have likely suffered

the same fate as their relatives had it not

been for the evolution of antifreeze glycoproteins

approximately 35 million years ago. Antarctic

marine fish drink water that contains

small ice crystals. Antifreeze proteins bind to

the crystals and prevent their growth, which

otherwise would lead to complete freezing of

the organism. However, the exact mechanism

of action for these antifreeze proteins is still

poorly understood.

The Diversification of Notothenioids Occurred Long

After Antifreeze Glycoprotein Evolution

For many years, researchers argued that

the evolution of antifreeze glycoproteins was

the driver of the diversification of Antarctic

notothenioids. In a recent paper, Yale University

researcher Dr. Thomas Near instead suggests

that the spread and diversification of the

notothenioids is due to climate change events

occurring at least 10 million years following

the evolution of these proteins. Near states

that while antifreeze glycoproteins are critical

for notothenioid survival, the morphological

and ecological diversity in Antarctic notothenioids

is correlated with events of the Late

Miocene, a time period approximately 11.6 to

5.3 million years ago.

During the Middle Miocene, approximately

20 to 15 million years ago, a warming

occurred in Antarctica that resulted in

temperatures significantly higher than those

today, causing the melting and shifting of ice

sheets in Antarctica. “This ice destruction,”

reports Near, “may have led to the extirpation

of [many Antarctic species] and created

all these open niches for notothenioids to

occupy and subsequently diversify.” The subsequent

Middle Miocene Climatic Transition

led to the polar conditions that exist today

in Antarctica.

This period of climatic turmoil resulted in

the extinction of many of the notothenioids’

competitors and a changed geographic environment.

The notothenioids expanded into

open niches and became physically and thermally

isolated by the cooling temperatures of

the Middle Miocene Climatic Transition. Near

states, “It is thought that dynamic history of ice, the climatic history of Antarctica, has also led to a very dynamic physical environment of Antarctica.” These conditions may account for the evolution of the hundreds of different species of notothenioids that exist today.

Vomeridens infuscipinnis, a semi-pelagic “dragonfish” species. This specimen was captured at 410 m near the South Orkney Islands. The specimen is approximately 19 cm in length. Courtesy of Dr. Thomas Near, Yale Department of Ecology and Evolutionary Biology.

Chionodraco myersi, a semi-pelagic “icefish” that lacks hemoglobin. This specimen was captured at 250 m near the South Orkney Islands. The specimen is approximately 40 cm in length. Courtesy of Dr. Thomas Near, Yale Department of Ecology and Evolutionary Biology.

The Effects of Global Warming on Notothenioid Survival


Near’s research indicates that the development

of polar climatic conditions created a

number of open niches, allowing notothenioids

to diversify. However, global warming

and the increasing temperature of the Southern

Ocean are currently reversing the polar

climatic conditions. Notothenioids have

reached a critical stage in their evolutionary

history. While humans and other animals have

heat-shock proteins that allow cells to respond

to increases of temperature, notothenioids

have lost the ability to express these proteins.

Studies show that taking notothenioids from

their ambient temperature of -1.5 degrees

Celsius into a water temperature of 4 degrees

is fatal. Even if the fish are acclimated slowly,

they cannot survive beyond 10 to 15 degrees

Celsius. Yet according to Near, “It is not clear

given the precipitous yet gradual change we

are seeing in the Southern Ocean, and, keep

in mind, the Antarctic Peninsula has been

documented as the fastest warming part of

the planet right now. It’s not clear how these

fish will acclimate.”

Regardless of the thermal consequences,

there will be other new pressures on notothenioid

survival. As temperatures rise in

Antarctica, invasive species, likely from

South America, may move into the territory

of notothenioids. King crabs that eat hard-shelled

organisms have already begun their

march into the Southern Ocean. Whether

these invasive species directly target the notothenioids

or target the krill that notothenioids

rely on for food, they may fundamentally

disrupt Antarctica’s food webs. Realistically,

a combination of the physiological stress

imposed by changing climate conditions and

the ecological impacts of altering the overall

compositions of these communities will

severely threaten notothenioid survival.

What the Extinction of the Notothenioids Means

for Humans

Fishermen in Chile, the Falkland Islands,

and South Georgia may be the first to recognize

the consequences that notothenioid

extinction would bring. In these regions, the

Antarctic notothenioid called the Chilean

Sea Bass is actively fished and considered an

economically viable resource. Further, the

antifreeze glycoproteins found in these fish

have countless applications, ranging from

increasing the shelf life of frozen foods to

enhancing preservation of tissues for transplant

or transfusion in medicine. In fact, these

proteins can already be found in certain types

of popsicles and ice cream bars.

Perhaps more devastating than the economic

consequences caused by the extinction

of the notothenioids would be the devastation

to an ecological community that has a rich

35-million-year-old history. According to

Near, the notothenioids “represent a canary

in a coal-mine…when you have a continental

ecosystem potentially taking a dive, that just

raises a lot of warnings for unanticipated

consequences that we would see elsewhere.”

If the notothenioids begin to disappear,

humans may see what happens when an entire

ecosystem is turned inside out.

About the Author

Lara Boyle is a senior in Branford majoring in Biology with a focus in neurobiology.

She is the Outreach Chair for the Yale Scientific Magazine and works in Professor

Schafe’s lab studying the changes in gene expression and synapse regulation that

appear with Post Traumatic Stress Disorder.


The author would like to thank Dr. Thomas Near for the interesting and enlightening

conversation about his work.

Further Reading

• Sidell, B., and O’Brien, K. When bad things happen to good fish: the loss of hemoglobin expression in Antarctic icefishes. The Journal of Experimental Biology 209, 1791-1802 (2006).

• Near, T., Parker, S., and Detrich III, H. A Genomic Fossil Reveals Key Steps in Hemoglobin Loss by the Antarctic Icefishes. Molecular Biology and Evolution 23 (11), 2008-2016 (2006).


H2O: as the most abundant molecule on Earth, this simple yet unique

substance exhibits remarkable

properties that make it irreplaceable. It is the

universal solvent, the major component of the

human body, the molecule that sustains life on

our planet. And now, with the development

of three water-based methods for electricity

generation, it may become the newest form

of green energy.

Renewable and emission-free, water

provides an appealing alternative to fossil

fuels. Learning to harness its power-generating

potential could help reduce mankind’s

carbon footprint and limit global warming.

Moreover, water covers 71% of the Earth’s

surface, making it cheap and readily accessible.

Obtaining electricity from water could

significantly alleviate the global energy crisis.

Menachem Elimelech, Professor of Chemical

and Environmental Engineering at Yale

University, is a leading expert on producing

electricity using water. Specifically, he studies

how energy can be captured from differences

in the salinity, or salt concentration, of water.

“When you separate fresh water from salt

water, you need energy to do it,” Elimelech

explains. “Similarly, from a thermodynamic

point of view, when you have two streams

mixing together, there is energy that is released.”


So far, Elimelech and his collaborators have

developed two techniques to harness energy

from water: pressure-retarded osmosis (PRO)

and reverse electrodialysis (RED). Another

water-based technology, microbial fuel cells

(MFC), uses a different principle to generate

electricity from wastewater.

Pressure-Retarded Osmosis (PRO)

From the earliest water wheels in ancient

Greece to modern-day hydroelectric dams,

humans have historically relied on the motion

of water to produce energy. Pressure-retarded

osmosis relies on a similar principle, generating

electricity through the diffusion of water

across a membrane. Osmosis occurs when a

water-permeable membrane separates two

solutions of unequal salinity, and the pure

water diffuses from the less concentrated to

the more concentrated region. The energy of

this motion can be captured and converted

to electricity.

PRO exploits the concentration difference

between two water sources: one with a high

salinity (generally seawater) and another more

dilute freshwater source (river, brackish, or

waste water). The two types of water are

placed in adjacent chambers separated by a

special membrane. Due to the resulting concentration

gradient, water diffuses across the

membrane from the freshwater chamber to

the seawater chamber. The buildup of water

volume in the more concentrated region creates

pressure that spins a turbine, generating

electricity. Meanwhile, a second channel

recycles the freshwater by returning it to its

original chamber.
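A back-of-the-envelope van ’t Hoff calculation shows how strong the osmotic driving force across a PRO membrane can be. The 0.6 M seawater concentration and 298 K temperature below are typical textbook values, not figures from the article:

```python
# Rough van 't Hoff estimate of the osmotic pressure between seawater
# and fresh water, the driving force exploited by PRO. The 0.6 M NaCl
# seawater figure and 298 K are illustrative assumptions.
R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # kelvin (assumed)
I_NACL = 2     # van 't Hoff factor for NaCl (dissociates to Na+ and Cl-)
C = 600.0      # seawater NaCl concentration, mol/m^3 (~0.6 M)

pi_pa = I_NACL * C * R * T       # osmotic pressure, pascals
head_m = pi_pa / (1000 * 9.81)   # equivalent height of a water column

print(f"osmotic pressure ~{pi_pa/1e5:.0f} bar")   # ~30 bar
print(f"equivalent water column ~{head_m:.0f} m")  # ~300 m
```

An ideal pressure of roughly 30 bar corresponds to a water column around 300 meters tall, which is why salinity gradients are often compared to very tall hydroelectric dams.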

The major challenge in producing low-cost energy with PRO lies in designing an appropriate semi-permeable membrane. The material must allow water to diffuse freely across it but simultaneously block the passage of salts and other dissolved substances.

A schematic of PRO. When the dilute and concentrated solutions mix, they generate osmotic pressure that spins a turbine to produce electricity. Courtesy of Menachem Elimelech.

These

ideal conditions are difficult to attain: small

amounts of salt from the concentrated chamber

are able to pass through the membrane

into the dilute chamber, and water from the

dilute stream may flow into the concentrated

stream as well. The net effect of these factors

is to increase the salt concentration of the

dilute stream and reduce the overall driving

force for osmosis, a phenomenon known as

internal concentration polarization.

An additional challenge with membrane-based

electricity production is membrane

fouling: natural water sources contain organic

material, bacteria, and other contaminants

that can become trapped in the pores of

the membrane and lower its efficacy over

time. Since water treatment is an energy-consuming

process, the Elimelech group is

working to find fouling-resistant materials.

“Current membranes that produce a very

high water flux have some inherent surface

roughness…and organic matter likes to stick

to it,” says Elimelech. “The key is to make

more smooth membranes that organic matter

will not attach to.”

Reverse Electrodialysis (RED)

Unlike PRO, which relies on water transport,

reverse electrodialysis captures energy

from the movement of ions. Ions, charged

particles formed when a salt dissolves in

water, are abundant in seawater. When seawater

mixes with freshwater, ions naturally

diffuse into the less concentrated freshwater

releasing energy. Just as the name implies, this

is the opposite of electrodialysis, which uses

energy to force ions against their concentration gradients.


RED uses two types of semi-permeable

membranes: anion-exchange membranes,

which only allow the passage of negatively-charged

ions, and cation-exchange membranes,

which only allow the passage of

positively-charged ions. The RED system is

set up with alternating salt water and fresh

water channels separated by membranes.

Each salt water channel lies between two

fresh water channels, bounded by an anion-exchange membrane on one side and a cation-exchange

membrane on the other. A typical

RED apparatus consists

of many stacks

of these alternating

membrane pairs. As

salt water and fresh

water mix, anions

and cations diffuse

in opposite directions

toward two

electrodes on either end of the RED apparatus. The electrodes receive the ions and

convert this energy

into an electrical

current carried by a

connecting wire.
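The voltage one membrane pair can develop follows from the Nernst equation. The sketch below uses illustrative concentrations for seawater and river water (not figures from the article) and assumes ideally selective membranes:

```python
# Back-of-the-envelope Nernst estimate of the voltage developed by one
# RED membrane pair. Concentrations are typical seawater and river
# water values, chosen for illustration; membranes are assumed ideal.
import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # kelvin (assumed)
F = 96485.0      # Faraday constant, C/mol
C_SEA = 0.5      # mol/L, seawater (assumed)
C_FRESH = 0.017  # mol/L, river water (assumed)

# Ideal potential across one perfectly selective ion-exchange membrane
e_membrane = (R * T / F) * math.log(C_SEA / C_FRESH)
e_pair = 2 * e_membrane  # one anion- plus one cation-exchange membrane

print(f"~{e_membrane*1000:.0f} mV per membrane")   # ~87 mV
print(f"~{e_pair*1000:.0f} mV per membrane pair")  # ~174 mV
```

A fraction of a volt per pair is why practical RED devices stack many alternating membrane pairs, and why a source like the Dead Sea, with a far larger concentration ratio, can deliver much more energy.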

Research has

revealed that the

maximum amount

of energy that RED

can theoretically

produce depends on

the salinity, or salt

concentration, of the water source. Whereas

typical seawater can produce just under 1

kilowatt-hour of energy, highly concentrated

salt water sources like the Dead Sea can generate

over 14 times that amount. To optimize

power density, researchers are also working

to redesign spacers, structures that provide

mechanical support between membranes.

Conventional spacers interfere with ion transport,

but newly-developed conductive spacers

are permeable to ion flow.

A major challenge to implementing widespread

RED systems is the cost of the ion-exchange

membranes, but researchers hope

that prices will go down as global demand

increases. Better production technologies and

more efficient membranes will also contribute

to a lower cost.

Microbial Fuel-Cells (MFC)

Thanks to the advent of new technologies,

modern methods of acquiring energy have

become remarkably diverse. In addition to

the water from oceans and rivers, scientists

have found that wastewater can be a valuable

resource. In particular, this energy can be

converted into electricity using bacteria. Since

current wastewater treatment plants already

use bacteria to remove organic material from

the water, microbial-fuel cell (MFC) technology

can transform these treatment plants into

the power plants of the future.

Elimelech suggests that the energy created by PRO and RED

could be reused to desalinate water, creating a closed-loop system.

Courtesy of Menachem Elimelech.




MFC relies on the natural metabolic processes

of living bacteria; virtually any type

of organic matter can be consumed and

converted into electricity, including algae and

cellulose products from plants.

A typical microbial fuel cell consists of two

compartments, each containing an electrode.

In one chamber, bacteria consume organic

material in the wastewater and release electrons.

These electrons are transferred from

the anode to the cathode through a connecting

wire, which creates an electrical current.

In the other compartment, the electrons

combine with protons and oxygen to create

water as a by-product.
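To get a sense of scale for the electron flow an MFC can harvest, one can count the charge available from a model substrate. Acetate is a common test substrate in the MFC literature; the 1-gram amount below is an arbitrary illustration, not a figure from the article:

```python
# Rough estimate of the electrical charge available from oxidizing
# acetate at an MFC anode. Standard stoichiometry releases 8 electrons
# per acetate ion (CH3COO- + 2H2O -> 2CO2 + 7H+ + 8e-). The 1 gram
# amount is an arbitrary illustration; 100% efficiency is assumed.
F = 96485.0       # Faraday constant, C/mol
M_ACETATE = 59.0  # approximate molar mass of the acetate ion, g/mol
ELECTRONS = 8     # electrons released per acetate ion

grams = 1.0
moles = grams / M_ACETATE
charge_C = moles * ELECTRONS * F  # coulombs, ideal case

print(f"~{charge_C:.0f} C per gram of acetate")       # ~13,000 C
print(f"~{charge_C/3600:.1f} ampere-hours per gram")  # ~3.6 Ah
```

Real cells recover only a fraction of this charge, which is one reason current MFC power densities remain too low for commercial generation.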

Due to lower power densities and the high

cost of cathodes made from precious metals,

current MFC models are not yet viable for

commercial-scale energy production. In the

meantime, they may reduce the power consumption

of wastewater treatment plants.

What’s next?

By using the salinity gradients between

fresh water and seawater, PRO and RED can

take advantage of numerous water resources

worldwide. Any site at which oceans and rivers

meet is a theoretical energy supply: in fact, the

energy stored by the concentration gradient

between sea water and fresh water is as much

as 0.8 kilowatts per cubic meter — equivalent

to the energy produced by water falling over a

280 meter high dam (by comparison, the largest

dam in the world is only 185 meters high).
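The dam analogy is easy to check with the potential-energy formula E = ρgh, using the 280-meter height from the text and standard values for water density and gravity:

```python
# Check the dam analogy: potential energy of one cubic meter of water
# falling from a 280 m dam, expressed in kilowatt-hours.
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2
HEIGHT = 280  # dam height from the article, m

energy_J = RHO * G * HEIGHT    # joules per cubic meter
energy_kwh = energy_J / 3.6e6  # 1 kWh = 3.6e6 J

print(f"{energy_kwh:.2f} kWh per cubic meter")  # ~0.76 kWh
```

The result, about 0.76 kWh per cubic meter, matches the salinity-gradient figure quoted in the text to within rounding.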

According to Elimelech, not all of this is

extractable due to

technical issues in the

energy conversion

process, but water

technologies can still

contribute to alternative

energy sources.

Current efforts focus

on making water

technology viable

for large-scale energy production.


Meanwhile, in the

midst of ongoing

research, early prototypes

of water-based

power plants

have already been

put to the test. In

2009, Norway started

operating one of the

world’s first osmotic

power plants, which

generates electricity

using PRO. Located

in the Oslo Fjord, the

plant could generate

enough to power a coffee-maker, around

one watt per square meter. If scientists can

achieve a five-fold increase in power density,

a larger plant will be built in 2015. “If you

have something around ten watts per square

meter, which is very doable technologically,

then it would be economical and compete

with renewable energy,” says Elimelech.

While production challenges remain, breakthroughs

in engineering offer a variety of innovative solutions. According to Elimelech, these technologies, especially PRO and RED, are closely linked with existing desalination technology.

A schematic of RED. As anions and cations flow in opposite directions through selectively-permeable membranes, they generate electricity. Courtesy of Menachem Elimelech.


“We don’t need to come up with completely

new concepts or modules,” he says. “We can

use the same systems used in desalination and

just operate them in a way that can produce

energy… All of these processes are moving

in the right direction.”

About the Author

Rebecca Su is a freshman in Silliman College studying biomedical engineering and



The author would like to thank Professor Elimelech for his time and devotion to his research.


Menachem Elimelech, professor of

chemical and environmental engineering

at Yale, studies pressure-retarded

osmosis and other membrane-based

methods for electricity generation. Courtesy

of Yale University.


Further Reading

• Logan, Bruce E. and Elimelech, Menachem. Membrane-based processes for sustainable

power generation using water. Nature 488, 313–319 (2012).

• Ramon, Guy Z., Feinberg, Benjamin J. and Hoek, Eric M.V. Membrane-based

production of salinity-gradient power. Energy & Environmental Science 4, 4423-

4434 (2011).



What if the most important cells

in your body were not your own

but rather those of bacteria?

Researchers continue to uncover evidence

that communities of bacterial symbionts

within the gut play important roles in the

health of their hosts. Nancy Moran, the

William H. Fleming, M.D. ’57 Professor of

Biology in Yale’s Department of Ecology

and Evolutionary Biology, studies these

bacteria and their influence not on humans but on the health of honey bees.

Until now, few have taken an interest

in understanding the nature of the honey

bee gut microbiota. A strong link exists

between honey bees and the bacteria in their

guts; each organism depends on the other,

yet the precise nature of this relationship

is still largely uncharted. The remarkable

community consists predominantly of eight

bacterial species found only in honey bees

and some bumblebees. Moran says, “I feel

like we’re really studying something that is

part of the bee.”

An Important Pollinator on the Decline

Honey bees are nature’s primary pollinators:

at least 80% of agricultural crops, from

almonds to avocados, soybeans, and

sunflowers, depend on them for growth.

Approximately one-third of everything we

eat has in some way benefited from honey

bee pollination. Honey bee-pollinated crops

are worth $15 billion in the United States

annually, in addition to the value of the honey

produced. Increasingly, the bees we depend on

for pollination are managed by beekeepers and

transported across the country to pollinate

specific crops. The number of wild bees is

decreasing: fewer than 20% of honey bees

today are feral, with almost none in heavily-farmed regions.


The number of managed honey bee

colonies has decreased from 4 million in the

1940s to 2.5 million today. In recent years, this

decline is due in part to a phenomenon termed

Colony Collapse Disorder (CCD). During

the winter of 2006–2007, beekeepers began

experiencing unexplained losses of 30–90%

of their hives. Adult worker bees would suddenly

disappear, with no dead bee bodies in

or near the hive. Although the queen would

still be alive, colonies were often reduced to

fewer than the 10,000 bees needed for colony survival.


Each winter between 2006 and 2010, honey

bee populations fell 33% with one-third of

those losses attributable to CCD. CCD is

thought to result from a multitude of interacting

factors, including habitat loss, insecticide

use, parasites, stress due to managed pollination

migrations, and the quality of available

food and water. Managed bee losses during

the winter of 2011–2012 were only 22%; whether this represents a decline in CCD or merely the consequence of changing environmental conditions is unclear.

Beneficial Gut Communities Revealed

The great diversity of bacteria that make

up the gut community of host organisms,




The number of managed honey bee colonies in the United States has declined from over 4 million in the 1940s to less than 2.5

million today. Courtesy of Mrymecos.

whether humans or honey bees, was for

a long time inaccessible to scientists. Due

to the environmental conditions bacteria

normally habitat— many cannot live in the

presence of atmospheric oxygen levels, for

example— they are not easily cultured, or

grown, in a lab, making their study very


In more recent years, the development of

DNA sequencing and other approaches that

provide insight into an organism’s genome

has allowed Moran to elucidate the essential

role of gut bacterial species in bees.

Moran describes bacteria as largely beneficial

in honey bees and “incredibly important

in basic functioning,” particularly in metabolism.

Animals lack the ability to synthesize

vitamins and ten of the twenty amino acids

necessary to make proteins. These nutrients

must instead be ingested in food or produced

by bacterial symbionts in the gut. In bees, gut

microbiota also play an important role in

metabolizing sugars from nectar and pollen

into energy as well as in breaking down

pectin in pollen cell walls that can be toxic

to honey bees. In one European bumblebee

species, gut microbiota have been shown to

protect against an intestinal pathogen, and

bacteria likely have the same protective role

in honey bees.

The Consistent Simplicity of the Honey Bee


Moran analyzed 16S ribosomal RNA

(rRNA) in order to characterize honey bee

microbiota. 16S rRNA, found in the ribosomes

used for protein synthesis in bacteria,

is like a “barcode for bacteria” with variations

in sequence specific to different organisms.

By comparing 16S rRNA

sequences to a database of known species,

Moran found eight clusters of related bacteria

that make up between 95 and 99% of

honey bee microbiota.
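As a rough illustration of how such a 16S "barcode" comparison works (a sketch, not Moran's actual pipeline), the snippet below assigns a read to its best-matching reference sequence using a 97 percent identity cutoff. The sequences are invented toy strings; the genus names serve only as labels.

```python
# Toy 16S rRNA classification: match a read against reference "barcodes"
# and assign it to a species only if identity clears a 97% threshold.

def percent_identity(a: str, b: str) -> float:
    """Fraction of positions that match between two aligned sequences."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def classify(read: str, reference: dict, threshold: float = 0.97) -> str:
    """Return the best-matching species name, or 'unclassified'."""
    best_name, best_id = "unclassified", 0.0
    for name, seq in reference.items():
        ident = percent_identity(read, seq)
        if ident > best_id:
            best_name, best_id = name, ident
    return best_name if best_id >= threshold else "unclassified"

reference = {
    "Snodgrassella": "ACGT" * 10,            # 40-base toy sequence
    "Gilliamella": "ACGT" * 5 + "TGCA" * 5,  # diverges in the second half
}

read = "ACGT" * 9 + "ACGA"  # one mismatch vs. the Snodgrassella toy sequence
print(classify(read, reference))  # -> Snodgrassella (39/40 = 97.5% identity)
```

Real surveys align reads against curated 16S databases rather than raw string comparison, but the clustering-by-similarity idea is the same.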

This simple gut community reflects the

nature of these bees as social insects. Honey

bees have a highly structured society with

shared resources in which the adult worker

bees divide up labor: the younger bees

nurse the larvae, and the older bees forage

for pollen and nectar. Whereas non-social

insects pick up their gut microbiota from the

environment, for example from food, honey

bees obtain and transfer their gut microbiota

through social interactions. When a honey

bee emerges from its pupa as an adult, its gut

contains no bacteria. The bee gains its gut

microbiota through grooming, contact with

the honey comb, and trophallaxis, the sharing

of nectar between bees.

This is not to say that the bees are isolated

from other bacteria in their environment.

Moran says that when bees fly around to

pollinate, “they encounter everything…there

is an immense possibility of exposure” to

other bacterial species. Yet the honey bee

gut community is remarkably well-conserved

over the lifespan of the bee and appears

to be largely independent of diet and age.

Moran speculates that all of the niches

within the honey bee gut are filled by the

eight main species, preventing other bacteria

from colonizing.

Limited Species, Great Diversity

Although up to 99% of the honey bee

microbiota is composed of just eight “species”

that each share 97% of their 16S

rRNA, calling the gut community simple




would be misleading. Even within a single

bacterial species, the bacteria of the gut

microbiota can be divided into distinct

strains that vary widely in their metabolic

behavior. Different strains colonize the gut

in distinct patterns, with each strain likely

filling a unique niche. This large diversity

may confer an advantage to the host in

responding to a wide variety of environmental

conditions, fighting off pathogens

and metabolizing a range of toxins.

A Possible Link to Bee Deaths

One of Moran’s recent discoveries suggests

a possible link between the health

of the bee microbiota and CCD. Since

the 1950s, honey bees in the United States

have been treated with an antibiotic called

tetracycline to combat American foulbrood

(AFB), a disease characterized by infection

by spores. Once a colony gets AFB, the entire colony needs to be destroyed, resulting in large costs for the beekeeper. By the 1990s, the bacteria that cause AFB had become resistant to tetracycline, and efforts were initiated to find a new antibiotic. Moran found that the honey bee gut microbiota likewise developed resistance to tetracycline in the form of eight different resistance genes. As the introduction of the new drug tylosin to target AFB coincided with the outbreak of CCD in 2006, Moran suggests the possibility that the drug was disruptive to the gut community. Other countries that restrict the use of antibiotics in beekeeping did not experience a disappearance of bees as large and as abrupt as that in the United States in 2006.

Beekeepers began reporting losses of 30-90% of their honey bee colonies in the fall of 2006. Courtesy of Waldan Kwong.

The antibiotics used to combat American foulbrood may be linked to Colony Collapse Disorder. Courtesy of beeinformed.org.

Although this link between antibiotic use and CCD has not been proven, it nevertheless suggests the importance of considering the vital role that the gut microbiota of honey bees play in keeping bees and their colonies healthy. The gut community, well-defined in the honey bee, may be inseparable from its host. These symbionts are crucial for the honey bees in their absorption of food, removal of toxins, and potentially in their defense against pathogens. What other benefits these bacteria confer is unknown, but Moran’s research is helping us take steps toward understanding what may lie at the very core of honey bee health.

About the Author

Katie Leiby is a junior in Silliman College majoring in biomedical engineering. She works in Dr. Laura Niklason’s lab characterizing the extracellular matrix of decellularized tissues.

The author would like to thank Professor Moran for her time and her enthusiasm in sharing her research.

Further Reading

• Nancy Moran et al., “Distinctive Gut Microbiota of Honey Bees Assessed Using Deep Sampling from Individual Worker Bees,” PLOS ONE, April 27, 2012; 7(4): e36393. doi: 10.1371/journal.pone.0036393.

• Engel, Philipp, Vincent G. Martinson, and Nancy A. Moran, “Functional diversity within the simple gut microbiota of the honey bee,” PNAS, June 18, 2012; 109(43). doi: 10.1073/pnas.1202970109.



Predicting Volcanic Eruptions

Modeling Magma Wagging to Anticipate Volcanic Behavior


“On Mount Vesuvius, broad sheets

of fire and leaping flames blazed

at several points, their bright glare

emphasized by the darkness of night.” –

Pliny the Younger’s account of Mount Vesuvius

and the destruction of Pompeii, 79 AD.

For centuries, volcanoes have been an

incomprehensible natural phenomenon. To

further our understanding of unexplained

eruptions, Professor David Bercovici of

Yale’s Geology and Geophysics Department

studies volcano behavior and recently proposed

a new predictive model that describes

a mechanism called magma wagging that

occurs just before a volcano erupts.

Understanding Volcanoes

There are three major types of volcanoes:

scoria cone, shield volcano, and stratovolcano,

which have eruption types of Strombolian,

Hawaiian, and Plinian, respectively.

The most dangerous of these classified

volcanoes is the Plinian, the type of volcano

that famously buried Pompeii. These volcanoes produce an explosive column of gas and ash that can reach heights of 45 km.

For each of these volcano types, plate tectonics

are an integral part of why volcanoes

erupt. The majority of volcanoes occur in

subduction zones, areas where the sea floor

sinks into the Earth’s mantle. The slabs of

sea floor drag down wet crust and sediment

into the hot mantle, which cooks out their

water into the surrounding mantle rock. The

wetted mantle melts and the resulting magma

rises into the overlying crust. As it passes

through the crust, it melts other rocks and

creates a viscous paste. This paste gets stuck

in the column and causes a buildup of pressure.

When the pressure reaches a certain

threshold, the volcano erupts and breaks

the rock surrounding the column. Ash and

sulfate are blasted from the volcano and enter the surrounding air, creating an atmosphere that may endanger the health of nearby plants and animals.

Photograph of the Santiaguito Volcano Magma Ring. Courtesy of Professor David Bercovici.

Volcanic Tremors and Magma Wagging

Volcanic tremors occur before the eruption

of a volcano and may last for a few

minutes or several days. The tremors are

defined by a low frequency shaking, about

0.5 to 5 hertz, that is approximately the same

regardless of the size or location of the volcano.

The universality of this tremor frequency was

puzzling to geologists, who sought to create

mathematical models that could predict

volcanic eruptions.

The model created by Bercovici explains

low frequency tremors by studying magma

wagging in the magma-conduit system. This

system encompasses a column of magma and

a surrounding area of gas. Magma consists

of a mixture of viscous rock melt, crystals

and bubbles. As it rises within the walls of

the conduit, the edges of the column develop

into a thin, permeable annulus that surrounds

the central magma column. The permeability

of the annulus allows it to transport gas into

the surrounding space.

Magma wagging occurs when magma is



pushed up through the conduit

in the center of the volcano. A layer of gas

develops between the conduit and the rock

of the volcano, and magma comes out of

the conduit, causing the magma column to

shake within the gas layer. The shaking and

lateral movement of the column is countered

by the spongy, springy foam of the annulus,

which returns the column to its original

position, causing oscillation described as

“wagging.” The pressure changes within the

annulus are transmitted to the walls of the

magma-conduit system, which leads to an

observable tremor.

Bercovici developed a mathematical

equation to describe the oscillatory magma

wagging phenomenon. Bercovici’s equation

shows that the frequency of the magma

wagging is only weakly dependent on the

size of the volcano. This explains why all

volcanoes have frequencies close to 1 hertz

and little frequency variation is observed

across volcanoes.
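One way to see how a frequency can be nearly independent of size is a toy mass-on-a-spring analogy (an illustrative sketch, not Bercovici's actual equation): if both the restoring stiffness supplied by the annulus and the mass of the magma column scale with the column's cross-sectional area, the radius cancels out of f = (1/2π)√(k/m). The stiffness and density values below are invented purely for illustration.

```python
import math

# Toy scale-invariance argument: stiffness k and mass m both grow with
# the column's cross-sectional area (~R^2), so R cancels in sqrt(k/m)
# and the oscillation frequency is the same for small and large conduits.

def wag_frequency(radius_m, stiffness_per_area=2.5e5, density=2500.0, length=1.0):
    area = math.pi * radius_m ** 2
    k = stiffness_per_area * area      # effective stiffness, scales with R^2
    m = density * area * length        # column mass, also scales with R^2
    return math.sqrt(k / m) / (2 * math.pi)

for r in (5.0, 25.0, 50.0):            # small to large conduit radii, meters
    print(r, wag_frequency(r))         # same frequency for every radius
```

With the toy numbers chosen here the frequency lands near the observed 0.5-5 hertz band for every radius, echoing the model's weak size dependence.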

Before a volcanic eruption, the frequency

of these seismic tremors increases.

Sketch of the magma wagging model. Courtesy of David Bercovici.

Mathematical model for magma wagging. Courtesy of David Bercovici.

In general, the magma wagging frequency is

low and constant but closer to eruption

the frequency increases, as does the range

of the frequencies. This increase can be

explained by two different factors. First, the

gas layer surrounding the column or conduit

of magma becomes narrower. Second, the

walls of the volcano start to fall apart and

collapse inwards, increasing the pressure on

the column of magma. As these two events

occur, the frequency of magma wagging increases.


Applying the Model

The model designed by Bercovici explains

the mechanism for volcanic tremors, but it is

difficult to apply this model to accurately predict

eruptions. Eruption depends on many

properties and environmental circumstances,

complicated by our inability to observe the

inside of volcanoes. However, it is possible to

observe the change in tremor frequencies as

volcanoes approach eruption, which allows

some forecasting of volcanic behavior.

To determine their predictive accuracy,

volcano models are tested using supercomputers

that simulate volcanic eruptions. Ultimately,

scientists hope that the magma wagging

model performs well in supercomputer

modeling trials. Bercovici notes, however,

that while theoretical modeling can be helpful,

“nothing can compare to empirical data.”

Data from past volcanic eruptions may serve as guidelines for judging the impact of volcanoes in the future. For example, the tremor frequency and amplitude may reveal the amount of damage a volcano’s eruption will cause. Although predicting volcano behavior remains imperfect, models such as Bercovici’s bring us closer to understanding the incredibly powerful natural phenomenon of volcanic eruptions.

About the Author

Theresa Oei is a sophomore Molecular Biophysics and Biochemistry major in

Pierson college. She is on the board of Synapse, Yale Scientific Magazine’s Outreach

Program, and works in Professor Steitz’s lab studying target genes of the viral miRNAs

HSUR4 and HSUR5 for their role in tumorigenesis.


The author would like to thank Professor Bercovici for his time and consideration in

describing his research in geological development.

Further Reading

• Jellinek, Mark A., and Bercovici, David. “Seismic Tremors and Magma Wagging

During Explosive Volcanism,” Nature, Vol. 470 (2011): 522-525.


Vaccination Decisions: Selfish, Selfless, or Both?

Marketing to Mixed Motives


Many medical decisions appear to

be isolated events affecting only

the health of the individuals seeking

treatment. However, popular perception

influences individual decisions, which in

turn determine public health outcomes. This

interplay becomes particularly apparent in the

prevention of contagious disease — while

vaccination is a decision individuals make,

cumulative choices influence disease incidence.

The question that agencies such as the Centers

for Disease Control and Prevention (CDC)

must ask in promoting vaccination is whether

awareness of this influence on the health of

whole communities drives individual choices

to undergo or abstain from vaccination. While

epidemiological game theory would predict

self-interest as the primary motivator, new

research by former and incoming Yale faculty

members suggests that non-selfish motivations

such as altruism and cooperation may have a

significant influence on individual vaccination decisions.


The Altruistic Anomaly

Assistant Professor of Epidemiology at

the University of Pittsburgh (and former Yale

Research Associate of Epidemiology) Eunha

Shim’s research focuses on the computation

and modeling of infectious diseases and vaccination.

In line with a recent shift in epidemiological

research toward analysis through game

theoretic modeling, Shim used a modified

formula to examine the self-interested and

altruistic motivations of individuals to receive

or refuse influenza vaccinations.

In the context of Shim’s study, self-interested

motivations consider the “direct

protection” of vaccinated individuals against

disease. Altruistic motivations consider the

“indirect benefits” or increased protection

for unvaccinated individuals as those around

them become vaccinated, the concept of herd

immunity. Traditionally, game theory assumes

that all “players” in a situation will act selfishly

and try to maximize personal payoffs, thus

suggesting sole reliance on self-interested motivations.


Challenging this assumption, Shim and her

colleagues developed a vaccination model to

incorporate consideration of altruism, noting

that in similar behavioral game theory studies,

“behavior frequently deviates from the

predictions of game theory because players

care about the outcomes of other players.”

The team conducted a survey study among

427 university employees that included questions

gauging subjects’ worry about catching

the flu themselves (“self-interest”), as well as

about spreading the flu to others (“altruism”)

with and without vaccination.

After analyzing the survey results with the

modified game theoretic formula, the study

concluded that self-interest is more influential,

accounting for 75 percent of vaccination

decisions, but that a significant 25 percent

of motivations for vaccination decisions are

altruistic. An outcome almost counterintuitive

to the assumed selfishness of game theory,

these results suggest that altruism can effect a

“deviation from self-interest.” Shim suggests

that altruism, though frequently overlooked,

is a significant factor to be considered by the

CDC in framing vaccination campaigns. She

concludes that “promoting altruistic vaccination

can be an effective strategy to promote

optimal vaccination.”
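A toy way to see how an altruism term changes the calculus (a sketch, not Shim's actual model): blend a self-interested payoff with a community payoff using a weight loosely inspired by the 25 percent figure above. The payoff values themselves are hypothetical.

```python
# Toy blended utility: a player weighs personal benefit against the
# herd-immunity benefit to the community, with `altruism` in [0, 1].

def vaccination_utility(personal_benefit, community_benefit, altruism=0.25):
    """Blend self-interested and altruistic payoffs into one utility."""
    return (1 - altruism) * personal_benefit + altruism * community_benefit

# A purely selfish player (altruism=0) ignores the community benefit...
selfish = vaccination_utility(personal_benefit=1.0, community_benefit=4.0, altruism=0.0)
# ...while a partly altruistic player values it, raising the utility of
# vaccinating even when the personal benefit alone is modest.
mixed = vaccination_utility(personal_benefit=1.0, community_benefit=4.0, altruism=0.25)
print(selfish, mixed)  # 1.0 vs 1.75
```

Even a modest altruism weight can tip the decision toward vaccinating, which is why campaigns that highlight community benefit can raise coverage.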

Intuitive Cooperation, Rational Self-interest

The effect of altruism in Shim’s study can

be linked to another game-theoretic study

conducted by incoming Yale Assistant Professor

of Psychology David Rand. Rand’s study

examined the “cognitive mechanisms” of

human cooperation on a broader, behavioral

level. Through the lens of the dual-processing

model of cognition, in which the conflict

between “automatic, intuitive processes and

controlled, reflective, rational processes”

drives decision-making, the study sought to

distinguish whether intuitive behavior is selfish

or cooperative.

Rand and his colleagues conducted a series

of economic games using online participants

from around the world. In each game, individuals

were given a certain amount of money

and asked how much they would contribute

toward a common pool with three other

players. Contributions toward the common

pool were doubled and split evenly among the

four group members. The research team then

calculated the correlation between

question-answer reaction time and degree of

cooperation in each group.
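The payoff rule of the public goods game described above can be sketched directly; the endowment of 1.0 is an assumed unit amount.

```python
# Public goods game: contributions are pooled, doubled, and split
# evenly among all four players, contributors or not.

def payoffs(contributions, endowment=1.0, multiplier=2.0):
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# If everyone contributes fully, everyone doubles their money...
print(payoffs([1.0, 1.0, 1.0, 1.0]))  # [2.0, 2.0, 2.0, 2.0]
# ...but a lone free-rider does even better, which is why classical
# game theory predicts zero contribution from purely selfish players.
print(payoffs([0.0, 1.0, 1.0, 1.0]))  # free-rider gets 2.5, others 1.5
```

The tension is visible in the numbers: the group is best off with full contribution, yet each individual is best off contributing nothing.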

Results demonstrated a negative correlation between

cooperation and reaction time. Confirming this

general trend, certain games forcing subjects

to decide more quickly increased cooperative

contributions, while games forcing them to



decide more slowly decreased contributions.

Further variations of the game began with

writing prompts asking subjects to recall situations

when either intuitive or reflective thinking

had been beneficial. Whereas inducing an

intuitive mindset in subjects increased cooperation,

priming a reflective mindset decreased

contributions. Together, these results suggest

that cooperation in social situations is intuitive,

but self-interest is rational.

In the context of his study, Rand describes

cooperation as being “willing to make sacrifices

for the common good,” a definition

similar to Shim’s description of altruism as “deviation from self-interest in the direction towards the community optimum.”

Assistant Professor David Rand’s study showed a negative correlation between average cooperative contribution and decision time. Courtesy of David Rand.

Rand

considers the specific case of vaccination a

cooperative action: “Vaccination is a public

good. If everyone gets vaccinated, everyone is

better off.” According to the results of Rand’s

study, patients’ intuitive urge to cooperate

through vaccination competes with rational

aversion to sacrificing self-interest.

Marketing against Misconception

Shim states that current vaccination campaigns

are beginning to address altruistic motivations

but adds, “I think they can do better.”

She suggests the campaigns can become more

persuasive by appealing to targeted age groups.

For example, because children have high

susceptibility and can easily spread disease,

the CDC should aim to protect the overall

population by focusing on persuading parents

that the benefits of vaccination outweigh the

risks. As children tend to

increase infection rates

among parents, vaccinating

both achieves a “dual

impact” of increased protection.

The most common

obstacle to this approach,

however, is popular misconception

of the risks of vaccination.


In a follow-up research

article, Shim incorporated

patients’ perceptions

regarding vaccination and

the reasoning behind their

decisions. She comments,

“a lot of people choose

not to get the influenza vaccine because it is

inconvenient for them.” Additionally, “some

parents think it’s not safe to vaccinate their

kids.” Citing the perceived link between autism

and vaccines as an example, she explains that

while such claims are not supported scientifically,

they can shape public perception and

stimulate fear.

Rand expresses a similar opinion on popular

misconceptions regarding the costs of vaccination.

He asserts, “It’s not like you are paying an

individual cost for the great good; it is in your

own interest to get vaccinated,” and proposes

that the CDC focus on communicating the

faultiness of misconceptions about personal

cost. If individuals gain a more accurate

understanding of the costs and benefits of

vaccination, the conflict between intuitive

cooperation and rational self-interest will be

minimized and vaccination will be viewed as

a gain for self-interest rather than a sacrifice.

Shim’s approach to increasing the effectiveness

of current vaccination campaigns

Assistant Professor David Rand researches the cognitive

mechanisms of human behavior. Courtesy of David Rand.

through altruism challenges the fundamental

assumption of epidemiological game theory

by suggesting potential for deviation from

self-interest. However, both she and Rand

also recognize the importance of appealing to

self-interest by correcting misconceptions. The

findings of each study, as well as the persuasive

strategies Shim and Rand propose, suggest that

vaccination campaigns should take both altruistic

cooperation and self-interest into account.

One of this year’s CDC publications urges

the reader to “get a flu vaccine to protect me,

my family, and my coworkers!” and promotes

the slogan “the FLU ends with U.” Another

brochure reads, “Flu viruses are unpredictable,

and every season puts you at risk. Besides…

You don’t want to be the one spreading flu, do you?”

Current vaccination campaigns appear to

recognize the importance of appealing both

to patients’ self-interest as well as their concern

for the common good — the issue that

remains is how to prioritize these persuasive

approaches most effectively.

About the Author

Jessica Hahne is a sophomore English major in Silliman College. She is a copy

editor and an online articles editor for the Yale Scientific Magazine.


The author would like to thank Dr. Eunha Shim and Dr. David Rand for their time

and enthusiasm about their research.

Further Reading

• Shim, Eunha, Gretchen B. Chapman, Jeffrey P. Townsend, and Alison P. Galvani.

“The influence of altruism on influenza vaccination decisions.” Journal of the Royal Society Interface 9, no. 74 (2012): 2234-2243.




Microbots: Using Nanotechnology in Medicine

by Jenna Kainic

The human body houses a complex of twisted pathways, labyrinths

of tunnels, unimaginably small. The biological systems responsible for

the flow of the blood, oxygen, and electrical impulses that sustain us

are intricate and delicately coordinated. And so, when these systems go

wrong, when our bodies are vulnerable to cancers and diseases, it seems

at first ideal to have medicine that can perform on a scale as small and

complex as the circuitry on which it acts. Rather than exposing the whole

body to toxic chemotherapy drugs, imagine

cancer treatments that could deliver drugs

directly to malignant cells. Consider swallowing

a device that could travel through

your body, looking for signs of irritation

and illness.

Such a world seems surreal and evokes

images of science fiction stories and children’s

books. However, the possibility of

having tiny robots navigate the smallest

passages of the human body is not far from

being a reality. In fact, important steps have

already been taken towards the creation and

use of such nanotechnologies. When perfected,

these microbots will enable doctors

to explore and mend patients’ ailments with

greater insight and precision.

First Steps

The first step toward using nanotechnology

in medicine occurred in 2001, when

Given Imaging introduced the PillCam.

The PillCam is a capsule containing a light

and camera that a patient swallows. Images

beamed wirelessly from the capsule can be

analyzed and used for diagnostic purposes, thus replacing procedures like the traditional endoscopy, in which a flexible tube containing a flashlight and camera is inserted into the digestive tract.

The PillCam is about the size of a large pill and can be swallowed. Courtesy of afutured.

The PillCam, at about the size of a normal pill, is ideal for use in

the passageways of the gastrointestinal system since it can be swallowed.

However, the digestive system is composed of relatively large pathways

compared to those of the arteries and capillaries, which can be as small

as a few micrometers in diameter. The PillCam is thus still too large to

travel through the entire circulatory system. Additionally, the device lacks

a means of navigating itself through the body; it merely travels passively

along the natural course of the digestive system.

Thus, in order to explore passageways like those in the circulatory

system, scientists needed to find a means

of creating a smaller device that would be

able to propel itself against the flow of the

bloodstream. The difficulty of this task was

largely in the size of the technology needed.

Any traditionally built battery-powered motor

would be far too large to fit through passages

only micrometers thick.

Drug-Delivering Devices

Scientists have managed to overcome this

obstacle by using magnets instead of motors

to propel the devices. Dr. Sylvain Martel, the

founder and director of the NanoRobotics

Laboratory at the École Polytechnique de

Montréal, and his team have developed microcarriers

that are able to pass through the larger

arteries. These microcarriers are navigated by

the magnetic coils of an MRI machine and have

successfully delivered drugs to rabbits’ livers.

Similarly, a team in Dresden has created

microtubes made of titanium, iron, and platinum.

According to a paper written by this team

on their research, these rolled-up microbots are

capable of “the selective loading, transportation,

and delivery of microscale objects in a

fluid.” Like Martel’s technology, external magnets control the motion

of these tubes. However, these microbots are also propelled by microbubbles.

The tubes are partially filled with hydrogen peroxide, which, in

a reaction catalyzed by the platinum, decomposes into oxygen and water.




The force of the bubbles ejected from the tube during this reaction

propels the microtubes through the body’s passageways. Additionally, the

diameter of these tubes is around five micrometers, about one-tenth

the size of the microcarriers utilized by Martel’s team, thus enabling them

to traverse much smaller arteries.
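The propulsion chemistry is the platinum-catalyzed decomposition of hydrogen peroxide:

```latex
2\,\mathrm{H_2O_2} \;\xrightarrow{\ \mathrm{Pt}\ }\; 2\,\mathrm{H_2O} + \mathrm{O_2}
```

The oxygen gas forms the microbubbles whose ejection drives the tube forward; the platinum lining is a catalyst and is not consumed.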

According to an article written by Martel, “the [technology’s] first real

application will be in treating cancers.” These drug-delivering microbots

are preferable to current means of fighting cancer because they can bring

the medicine directly to the tumor, helping to avoid killing healthy cells

along with the cancerous ones.

Bacterial Microcarriers

Another solution to the problem of size can be found in nature. The

MC-1 strain of bacteria, discovered in 1993, is magnetic and propels

itself with spinning tails. This strain is ideal for use as a microcarrier

of drugs because, at 2 micrometers in diameter, it is small enough to

navigate even our bodies’ tiniest capillaries and can be controlled by use

of a relatively weak magnetic field. Martel’s team has already tested this

system on mice, guiding a swarm of drug-carrying bacteria to tumors in

the animals’ bodies. A team at Purdue University has performed similar

experiments in mice, adhering genes to the surface of bacteria to alter

gene expression in cells. In test cases, mice were injected with bacteria

carrying genes for luminescence. The scientists found that certain organs

in the animals’ bodies successfully expressed the luminescent genes, suggesting

that these bacteria could be used for altering the gene expression

in diseased cells.

Though this bacteria-driven technique does navigate through more

of the human body than its man-made counterpart, this method is not

without faults. In Martel’s experiments, for example, most of the bacteria

never reached the tumors. The bacteria have a half-life of about 30

or 40 minutes, so many of the bacteria died before reaching the tumor.

Additionally, many were misdirected by the strong currents in the animals’

larger vessels. Martel offers a hybrid solution to this problem. His

team is currently pursuing the possibility of using man-made microbots

to transport the drug-carrying bacteria through the larger vessels to get

closer to the tumors. Then, when the microbots are unable to go farther

through the small vessels, they would release the bacteria. Since, in this

scenario, the bacteria would be dispatched closer to the target, the hope

is that there would be a greater likelihood of reaching the tumor.
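A back-of-the-envelope calculation shows why the release point matters: with a half-life near the 30 to 40 minute range cited above, the surviving fraction halves for every additional half-life of travel time. The travel times below are hypothetical, chosen only to illustrate the benefit of a shorter trip.

```python
# Exponential decay of a bacterial swarm during transit: the fraction
# still alive after t minutes is 0.5 ** (t / half_life).

def surviving_fraction(minutes, half_life=35.0):
    return 0.5 ** (minutes / half_life)

# A long, unassisted journey vs. a short trip after a microbot drop-off:
print(round(surviving_fraction(70.0), 3))   # two half-lives -> 0.25
print(round(surviving_fraction(17.5), 3))   # half a half-life -> ~0.707
```

Halving the travel time in this toy example nearly triples the fraction of bacteria that arrive alive, which is the intuition behind Martel's hybrid delivery scheme.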

Diabetes Regulation

In addition to cancer treatment, microbots are also being considered

potentially useful for other medical purposes. For example, a team in Australia

has proposed a concept and simulation for using nanobots to regulate

diabetes. Diabetes patients have to test their blood multiple times daily to

ensure that their glucose levels are stable. The Australian team proposes

using nanorobots to travel through patients’ bloodstreams and send data

about glucose levels to external electronic sources. Using nanorobots

would enable doctors to receive data from many different locations

simultaneously throughout the body and allow for a more continuous

monitoring of blood sugar levels without the pain and inconvenience of

self-testing. Additionally, unlike the technologies explored for drug delivery,

these nanobots would not require active motion. Rather, they could

travel with the natural flow of the bloodstream, sensing blood sugar levels

along the way. Their passive movement makes it much easier for scientists

to design these structures, which do not have to include any means of

propelling or navigating themselves through the circulatory system. Like

the proposals for cancer treatment, these nanotechnologies afford a more

convenient and precise methodology for diabetes regulation.

Looking Ahead

Though these microscopic methods of treatment and diagnostic tests

are not yet perfected, scientists are getting closer to realizing a world of

medicine that is able to navigate at the minute scale of our bodies’ smallest

pathways. Concepts of and experiments on medical nanotechnology

are presenting doctors with new potential for treating their patients precisely

and conveniently. As Martel notes in an article in IEEE Spectrum

Magazine, “There is no shortage of possibilities… a real-life fantastic

voyage is just beginning.”

An artist’s representation of a theoretical nanobot treating a

blood cell. Courtesy of Nanotechnology 4 Today.

An artist’s depiction of a nanobot performing cell surgery.

Courtesy of Nanotechnology News Network.



Tearing At the Seams:

The Splitting of the Indo-Australian Tectonic Plate


On April 11, 2012 a giant earthquake and a massive aftershock rocked

the seafloor of the Indian Ocean off the coast of Indonesia. Not only

were the earthquakes some of the most powerful ever recorded, but they

also puzzled scientists. Massive slabs of crust slid as far as thirty meters,

creating tremors that could be felt in India, and twisted the bedrock with

such intensity that several new fault lines formed. But these earthquakes

were centered in the middle of a tectonic plate, far from any established

fault lines, which was highly unusual. Recent studies of the earthquakes

suggest an uncommon and significant explanation for this incident: the

splitting of one of Earth’s tectonic plates.

The vast majority of earthquakes are caused by the movement of

tectonic plates, pieces of Earth’s crust and upper mantle that fit together

like the pieces of a jigsaw puzzle. The plates are not static, and build

up immense amounts of strain at their borders as the mantle flowing

underneath propels them forward, making them catch on other plates.

When the plates finally break free, they release this energy in a matter of

minutes as powerful waves, generating an earthquake. Since the plates

only interact with each other at the borders between them, this is where

the vast majority of earthquakes occur.

However, the 8.6 and 8.2 magnitude April earthquakes were focused

in the center of the enormous Indo-Australian Plate, a point hundreds

of kilometers from the closest plate boundaries.
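For a sense of scale, seismologists relate magnitude to energy with the standard Gutenberg-Richter scaling, in which energy grows as 10^(1.5·M). This relation is textbook seismology rather than a figure from this article, but a quick sketch puts the two April shocks in perspective:

```python
# Relative seismic energy of two earthquakes from their moment magnitudes,
# using the standard scaling E ∝ 10^(1.5 * M). This relation is textbook
# seismology, not a figure reported in the article.

def relative_energy(m1: float, m2: float) -> float:
    """Ratio of energy released by a magnitude-m1 quake to a magnitude-m2 quake."""
    return 10 ** (1.5 * (m1 - m2))

mainshock, aftershock = 8.6, 8.2

print(f"M{mainshock} vs M{aftershock}: "
      f"{relative_energy(mainshock, aftershock):.1f}x the energy")  # ~4.0x
print(f"One full magnitude unit: "
      f"{relative_energy(1.0, 0.0):.1f}x the energy")  # ~31.6x
```

Even the "smaller" 8.2 aftershock, in other words, released an enormous amount of energy in its own right.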

A schematic of a strike-slip fault similar to those observed in the

Indian Ocean earthquakes last April. Large slabs of crust slipped over

20 meters along several such faults, unleashing massive amounts of

energy. Courtesy of the Southern California Earthquake Center at USC.

“These were the kind of events that made seismologists do a double-take,” said Maureen Long, an assistant professor in the Geology and

Geophysics Department at Yale University, in reference to the April

earthquakes. “If you had taken a poll of seismologists before these events

and said ‘Is this possible?,’ most seismologists would have told you no,

including me.” For incredibly powerful earthquakes like these to occur

so far out of the way, an unconventional explanation was necessary.

Scientists now agree that the unusual earthquakes are matched by

an equally unusual cause. In 1986, an article published in the journal

Tectonophysics observed “intense intraplate deformation” on the Indo-Australian Plate.

The tectonic plates of Earth. These regions of the lithosphere fit

together like a jigsaw puzzle, constantly colliding and interacting

with each other as they drift. Courtesy of the University of

Wisconsin-Eau Claire.

The lithosphere in this region was being warped like

modeling clay in the same location where the 2012 earthquakes would

later occur. According to seismologists today, the deformation mentioned

by the paper’s authors is an active process that is gradually ripping

the tectonic plate in two. Eventually, this may create a localized boundary

on the Indo-Australian Plate. Creating two new plates would certainly

involve magnitudes of energy on par with those seen in April, but the

necessary strain has to come from somewhere.

A recent study by a team at the University of California, Santa Cruz

concluded that the internal strain on the plate is the result of differences

in the movement of the Indian and Australian regions of the plate.

While the northwestern Indian section collides with the Eurasian plate,

the Australian border shoves into Sumatra to the northeast. Like pulling

on the opposite ends of a wishbone, these opposing forces subject the

center of the plate to great internal stress. When the crust reaches its

breaking point, an event like the one in April this year occurs.

“Given the way these things were moving, something had to be happening

between them,” said Long. “The earthquakes in April provided

some direct data confirming that this diffuse zone of deformation is in

the process of localizing into a new plate boundary.”

The severe strain in this diffuse deformation zone led to a peculiar

series of earthquakes. The Santa Cruz team found that the earthquakes

were caused by ruptures along four distinct faults whereas most earthquakes

involve just one. The faults were also incredibly deep, extending

far into the mantle, and were reported to slip 20–30 meters in just a

few minutes. Such a massive release of energy is seldom seen, even on

the most active plate boundaries. Only the enormous internal strain

caused by the splitting of a major plate could create extreme lithospheric

deformation and earthquakes unheard of so far from the nearest plate boundary.


The earthquakes of April 2012 have been some of the most intensively

studied tectonic events, and their significance has yet to be fully understood.

Such powerful ruptures far from plate boundaries have altered

the way seismologists think about the causes of earthquakes. Uncommon

earthquakes like these provide scientists with significant insights

into how our planet is shifting and changing, right underneath our feet.




Match, Manipulate, Medicate:

Old Drugs Targeted for New Use


In 1993, scientists working for pharmaceutical giant Pfizer faced

a conundrum. Their new wonder drug, the product of years of

intense and costly research, failed to show any effectiveness in treating

patients with angina pectoris, a common cause of severe chest

pain. Then named UK-92,480, the drug would have been shelved

if not for the keen observation that some patients reported

sustained erections as a side effect of their treatment. The researchers

were baffled at first but then realized the opportunity on their

hands. Five years later, UK-92,480 gained FDA approval as an oral

treatment for erectile dysfunction and was released to the market.

Today the drug, sildenafil, is marketed as Viagra and has

become a windfall for Pfizer.

Could other failed drugs find their own stories of serendipity?

Certainly, finding novel uses for failed drugs is not a new idea. Aside

from Viagra, a number of well-known drugs had originally been

developed for other purposes, such as Rogaine, which had started as

a drug for high blood pressure, and AZT, the anti-HIV drug that was

originally supposed to be a cancer drug. In each case, advances in our

understanding of diseases and human biology led researchers back

to the past, repurposing old drugs based on a better understanding

of their mechanisms of action.

Sildenafil, better known as Viagra: the little blue pill that

began its career as heart medication. Courtesy of The New

York Times.

Pharmaceutical companies have taken an interest in reviving their

failed drugs. From their perspective, drug development is a risky

business. Bringing a drug from the lab to the clinic typically takes 13

years and an investment of around $1 billion, with a 95 percent risk

of failure. Some drugs may not be structurally suitable for efficient

mass production, some show dangerous side effects, and some simply

do not work against the target disease. In total, around 30,000 drugs

have been shelved by pharmaceutical companies over the past three

decades, and some of these failed drugs have shown new promise

for treating other diseases. Because they have already been tested in

humans, details about their production, dosage, and toxicity are readily

available, which can expedite the process of developing new disease

treatments. Instead of starting from scratch, successful repurposing

of even a few drugs could save companies substantial costs and time.

The National Institutes of Health is soliciting applications to

investigate new applications for drugs shelved by pharmaceutical

corporations. Courtesy of the Medical College of Wisconsin.

A new development in drug retargeting strategies has been the

creation of drug libraries that allow receptor sites to be matched up

with pre-existing chemical compounds. Last year, Dr. Elias Lolis,

Professor of Pharmacology at the Yale School of Medicine, and Dr.

Michael Cappello, Professor of Pediatrics, Microbial Pathogenesis,

and Public Health at the Yale School of Medicine, jointly published

a paper detailing how this approach can be applied to treating hookworm

infestations. Previous research had suggested that hookworms

manipulate the human immune system by mimicking a key human

regulator with their own protein, AceMIF. Together, Lolis and Cappello’s

research teams screened a chemical library of almost a thousand

FDA-approved compounds for possible drugs that could inhibit

AceMIF activity, effectively preventing a hookworm from shutting

down the human immune response. From this study, they were able

to identify two potential anti-hookworm drugs previously tested for

other purposes: sodium meclofenamate, an anti-inflammatory drug,

and furosemide, a diuretic.

Recently, the National Institutes of Health, through its National

Center for Advancing Translational Sciences (NCATS), launched a

massive $20 million program to reopen research into 58 drugs shelved

by various pharmaceutical companies. Worth up to $2 million each,

these grants will be awarded to proposals from academics, non-profit

groups, and biotechnology corporations investigating novel applications

for these failed drugs. Even this effort, however, has not been

without controversy. Some, like former Pfizer President John LaMattina,

have criticized the NCATS undertaking, claiming that companies

themselves have already taken similar rediscovery initiatives.

Others worry about potential intellectual property issues that may

impede the effort to push a repurposed drug through to the

clinic. Companies may be hesitant to sacrifice any measure of intellectual

property rights of their compounds, which are central to their

value. On the other hand, without patent protection, researchers will

have a difficult time convincing companies to continue developing

off-patent drugs and bring them to market. Although advances in

both biological understanding and computational technology offer

exciting possibilities for old drugs, the road to the clinic remains

long and treacherous.




Test Tube Meat

It’s What’s for Dinner


Imagine sitting down at the dinner table and staring at a green

algae sludge soup, a grilled grasshopper appetizer, and an entrée

consisting of thin turkey strips grown in a large glass vat. It may

not be an enticing image, but one or more of these dishes may

grace your family meals sooner than you think.

The issues of food security and of meeting the nutritional

demands of a growing world population are constant challenges that

many scientists and policy makers are trying to address today. But it

is certainly no simple task: at current rates, the global population is

projected to reach 9 billion people by 2050. In other words, science

must act fast if we expect to maintain the health and nutrition of

an already burgeoning society.

No More Room

According to the estimates of the Food and Agriculture Organization of

the United Nations, current food production must be doubled by 2050 in

order to keep up with future demands from population growth and dietary

changes. But how exactly can farmers and other food manufacturers

double production? Agriculture, which includes croplands and pastures,

already occupies nearly 40 percent of the earth’s terrestrial surface.

The rest of the planet is covered by deserts, mountains, and other

lands unsuited for additional cultivation. Radical climate changes

and expanding water shortages further impede agricultural developments.

Simply put, there is not enough space or resources to readily

expand food production in the traditional way.

Samples of lab-grown meat in media. Courtesy of Reuters.

Algae, Insects, and Genetically Modified Foods

Several creative techniques are being developed to find alternatives

for sustainable food production. One idea has been to harness

the simplicity, diversity, and robustness of algae. Commercial algae

farms can be constructed in places unsuitable for conventional

agriculture and can yield enormous amounts of algae with minimal

cost of resources. In addition to providing a source of nutrition

for humans, algae can also act as animal feed, fertilizer, and biofuel.

“Micro-livestock farming,” which involves large-scale insect-rearing

farms, is another potential avenue to tackle the food production

problem. Insects are high in protein, calcium, and iron, while low

in cholesterol and fat. In comparison to conventional livestock like

cows and pigs, insects require much less land and resources to grow

and reproduce, emit many fewer greenhouse gases, and can more

efficiently convert biomass into protein. However, it is difficult to

appease the Western palate with beetles, spiders, and worms, even

though hundreds of species of insects are eaten in Asia, Africa,

and South America.

The advent of genetic engineering technologies about two

decades ago has also restructured approaches to food production

and security. Scientists currently use these novel techniques such as

microinjection and bioballistics to modify the genes of crops like

corn, soybeans, and tomatoes. These genetically modified foods

have enhanced traits, whether it is improved crop yield, increased

resistance to plant diseases and pests, increased shelf life, or

enhanced nutritional value. While genetic engineering approaches have

permeated most areas of agriculture to improve crops (around 90 percent

of soybeans, corn, and canola grown in America are genetically

modified), these technologies still do not solve one of the most

pressing nutritional concerns today.

The Protein Problem

“The challenge in feeding a future 9 billion is not so much that we

lack the land to feed people in general, but that we lack land to feed

people meat specifically,” says Mark Bomford, director of the Yale

Sustainable Food Project. Bomford has

been involved in creating and managing sustainable food systems

for the last 15 years, specializing in urban agriculture, community

food security, and food systems modeling and research.

“The issue is a growing demand for animal protein,” says Bomford.

Bomford explains that as populations and incomes increase,

especially in cities, the demand for animal protein increases. Currently,

more than one in seven people do not have enough protein

in their diet. The reasons for lack of protein are manifold, from

deficiency of local productive capability to political turmoil, and

can vary from region to region.

Simply increasing the production of livestock will not resolve

the lack of protein in today’s diet. The reason is that the current

meat production business is already one of the leading causes of environmental

degradation. Ranches require large open spaces to raise

livestock, which means clearing many acres of land and causing

severe deforestation. This is compounded by the fact that current

livestock populations emit one-fifth of the world’s greenhouse gas

emissions, further exacerbating global warming.

Current meat production cannot satisfy the world’s demand for protein.

One in seven people do not have sufficient protein in their diets

today. Courtesy of The American Interest blog.

Providing fresh

water to billions of animals also becomes a critical issue, as one in

nine people in the world already lack access to clean water. Expanding

the livestock base will only further the pressure on current

demands for land, resources, and the environment.

Test Tube Meat

One potential solution to the protein problem? Test tube meat.

Better known as cultured or in vitro meat, it is the muscle or tissue

of an animal that is biosynthetically grown from a sample of a living

animal. To be clear, test tube meat should not be confused with

imitation meat, which is produced from vegetable proteins. Rather,

this synthetic meat is more or less “real” meat containing genuine

animal proteins. The difference is that no animals are harmed in

the biosynthetic process of producing the meat.

Cultured meat relies on stem cell technology, which until now has

found most of its applications in medicine. Stem cells are isolated from

a chicken, pig, or cow and subsequently converted to muscle cells.

These cells are then grown on a scaffold surrounded by a serum

of nutrients, growth supplements, and important vitamins. While

the muscle cells are growing, they are electrically and mechanically

stimulated to closely resemble the cells inside a real animal. Together,

the scaffold and stimulation provide the synthetic meats with similar

structure and texture to real meats.

The ability to produce synthetic meats on a massive scale would

address many of the limitations meat producers face today. According

to a report led by scientists from Oxford University and the

University of Amsterdam, cultured meat could generate 96 percent

lower greenhouse gas emissions, 99 percent lower land use, and

96 percent lower water use than conventionally produced meat.

Cultured meat may also prove to be much more efficient than conventional

meat. Currently, 100 grams of vegetable protein must be

fed to livestock to produce 15 grams of animal protein, a 15 percent

efficiency rate. Cultured meat could be produced with a 50 percent

equivalent energy efficiency rate. With these numbers in mind, test

tube meat almost seems like a dream too good to be true.
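The feed-conversion arithmetic above is easy to make explicit. A back-of-the-envelope sketch using only the efficiency figures quoted in this article (note that the 50 percent figure for cultured meat is a projection, not a measured value):

```python
# Compare grams of edible protein recovered per gram of feed protein,
# using the conversion efficiencies quoted above. Illustrative only; the
# cultured-meat figure is a projection, not a measurement.

def protein_out(feed_protein_g: float, efficiency: float) -> float:
    """Grams of edible protein produced from a given mass of feed protein."""
    return feed_protein_g * efficiency

FEED = 100.0  # grams of vegetable protein fed in

conventional = protein_out(FEED, 0.15)  # livestock: ~15% conversion
cultured = protein_out(FEED, 0.50)      # cultured meat: ~50% (projected)

print(f"Conventional meat: {conventional:.0f} g protein out")  # 15 g
print(f"Cultured meat:     {cultured:.0f} g protein out")      # 50 g
print(f"Ratio: {cultured / conventional:.1f}x protein per unit feed")
```

By this crude measure, cultured meat would return roughly three times the protein per unit of feed; a real comparison would also have to account for the energy and nutrient serum used to grow the cells.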

And maybe it is. Current research and technology are still far

from synthetically manufacturing meat in an economical fashion. Mark

Post, a professor at Maastricht University, is currently developing

the world’s first in vitro hamburger, but its cost will be well over


Other more fundamental factors may complicate the idea of test

tube meat.

“It may be a feasible technology, but it does not mean it is an

appropriate technology. You have to bear in mind that people do

not demand meat solely on the basis of its nutritional value. There

are complex social, cultural, economic and personal considerations

at play,” says Bomford.

While there are a few methods, including commercial algae farming,

micro-livestock farming, and genetic engineering, to free up land

and increase agricultural yield, they still do not fully address the

protein problem, the chief concern in maintaining the future’s diet

and nutrition. In the next 50 years, the demand for familiar meats

like beef, chicken, and pork will increase substantially. Yet, we do not

have the resources to continue rearing livestock with conventional

methods. Test tube meat has the potential to tackle this challenge,

but several big questions remain. Would people really be willing to

eat lab-grown meat? Would synthetic meats change the arena of

religious and ethical restrictions towards meat consumption? And

what about you? Would you eat something grown artificially in a

vat if it tasted the same as an animal raised on a farm?





Science as a Double-Edged Sword


“A little knowledge is a dangerous thing. So is a lot.” -Albert Einstein

Spreading Disease, Stirring Fear

Last year, scientists identified five mutations of the bird flu virus,

H5N1, that make the disease highly transmissible among ferrets. The

scary part? Humans have similar immune vulnerability. The controversial

research sparked a debate over whether potentially dangerous

information should be public knowledge. Although some believed

that the world needed the biomedical research to determine proper

public health measures, critics argued that releasing the information would

potentially allow rogue scientists to concoct deadly H5N1 strains for

bioterrorism. Although research on using biology as warfare has a

relatively short history in comparison to the history of traditional

weaponry, biological warfare has proven to be just as powerful.

What are biological weapons?

Biological warfare is the use of infectious agents or artificially made

toxic substances that kill or weaken humans, animals, or plants. Unlike

chemical weapons, biological weapons are living organisms or

replicating agents that reproduce within their victims. These microbes

may be deadly or incapacitating and may affect specific individuals, a

community, or an entire population.

Given their lower costs of production and storage, biological weapons

can destroy populations at much higher rates than their nuclear,

chemical, or conventional counterparts. Relative to nuclear weapons,

biological weapons are easily produced and obtained by non-experts.

Development, however, is much easier than deployment. Aerosols and

bombs are the most common methods of releasing these infectious agents.

Yet the technology is imperfect, as microbes can be fragile and

difficult to control once released. For some plotters, the

unpredictable and uncontrollable nature of infectious diseases actually

makes bioweapons that much more useful and terrifying.

Biological weapons can also target non-humans. Anti-agriculture

weapons, such as “rice blast” fungus, can wipe out food supplies of

an enemy nation. Anti-livestock biological warfare similarly targets

animal resources of transportation and food.

Yersinia pestis, or the black plague, is history’s deadliest

epidemic disease. In the 14th century, the Black Death killed

one-third of Europe’s population. The plague remains one of

the world’s most threatening bioweapons today. Courtesy of

the National Institutes of Health.

Infectious disease is an effective weapon for two reasons: it kills,

and it stirs fear even when not deadly. In other words, it is not only

the disease the bioweapon may spread that is contagious: fear is even

more easily transmitted. Fear of catching the disease, of dehumanizing

symptoms, and of permanent physical or emotional damage

can destabilize entire societies.

Some choose non-contagious infectious agents for biological

warfare, as these bioweapons can be powerful in other ways. In

2001, letters laced with anthrax spores infected 22 people and killed

five. Anthrax is caused by a spore-forming bacterium that naturally occurs in the soil,

where animals usually encounter the bacteria while grazing. When

inhaled by humans, the bacteria

travel to the lungs and eventually

to the lymph nodes, where spores

multiply and release toxins that

cause fever, respiratory failure,

fatigue, muscle aches, nausea,

vomiting, diarrhea, and black

ulcers. The bacteria are extremely

deadly. Inhalation of anthrax kills

100 percent of the time when

untreated and 75 percent of

the time even with medical aid.

Anthrax is also highly storable,

with a shelf life of over 40 years.

Given the low risk to the population

overall, only high-risk people

such as health workers, military

workers, and veterinarians receive

the vaccine. But low risk is not

equivalent to “no risk,” and so we

still have reason to fear anthrax.

Biology as an Offense

In the twentieth century, scientists

and governments actively

developed biological weapons.

In 1934, Japanese military physician

General Shiro Ishii created

a biological warfare program

that would eventually weaponize

infectious disease for the first

time in the century. The scientists

at “Unit 731,” a large-scale facility

of laboratories, detention camps,

insect houses, animal houses,

airfields, and barracks in occupied Manchuria, experimented with

anthrax, typhoid, dysentery, and plague on their prisoners to find

the most effective biological weapons. Using the research of Unit

731, the Japanese military dropped porcelain bombs of fleas infected




with plague on the Chinese cities of Ningbo and Changde.

Other nations also saw biological weapons as a means of national

self-defense, though this was often based on a misunderstanding that

the enemy had more advanced biological warfare programs. As early

as the end of WWI, France began developing biological weapons

programs due to fear that Germany had biological weapons. Between

the world wars, biological warfare took a backseat to chemical warfare

for the Western powers, while Stalin politically repressed biological

research. With the onset of the Cold War, however, the U.S. and

U.S.S.R. began to research biological warfare once again, sparking a

rivalry in scientific knowledge that would last throughout the Cold

War. From the Potsdam conference until the fall of the Berlin Wall,

the U.S. and U.S.S.R. engaged in an “arms race” of biological weapons

with constant fear that the enemy was getting ahead.

As it turned out, scientists are not immune to politics. Unbeknownst

to the general public, American scientists and physicians

experimented with various infectious diseases as possible bioweapons.

In the 1940s, scientists at Fort Detrick experimented with anticrop

agents, anthrax, and brucellosis. With virtually no oversight

from government agencies, the military, or Congress from the 1960s

into the Vietnam War, scientists and physicians conducted tularaemia

research on volunteer servicemen with the goal of creating

an aerosol weapon for anti-civilian attacks in Vietnam. Only when

President Nixon faced pressure from civilian scientists to reform

the government’s chemical and biological warfare research policy

did the U.S. begin to move away from offensive biological weapons.

Bioweapons are infectious agents or artificially made toxic substances

that can kill or weaken humans. State or non-state actors have used

bioweapons to gain tactical advantage over their enemies. Courtesy of

X Comp Online Products.

The terrorist attacks in 2001 prompted a new age in biological warfare

in the U.S. Since 2001, we have invested more than $60 billion in

developing air sensors, educating doctors about symptoms of bioweapons,

and distributing biodefense materials. Biodefense experts have

identified smallpox as the most threatening biological weapon because

it is as contagious as the flu and kills 33 percent of its victims. By

2011, the U.S. had grown its stockpile of antidotes to more than 300

million treatment courses in the event of a smallpox outbreak, and

biodefense research is now working on an anthrax vaccine.

A Double-Edged Sword

Scientists can research biological weapons for defensive, but not

offensive purposes. Proponents of this policy argue that understanding

diseases will better prepare us in the event of a biological attack.

Courtesy of the High Scope Program.

The mass production and build-up of bioweapons has been illegal

since the 1972 Biological Weapons Convention. But only 165

countries have signed the treaty, which means that other nations and

non-state actors may still choose to use bioweapons. Thus, many

countries conduct defensive bioweapon research to learn more about

dangerous biological agents. Research in general supports public

health efforts against naturally occurring outbreaks and potential

bioweapon attacks.

Supporters of the controversial H5N1 research argued exactly

this to justify publishing their findings. In the study

published in Science, scientists at Erasmus Medical Center in the

Netherlands identified five mutations that make bird flu very contagious

among ferrets, which catch the same flus as humans do.

They argued that the benefits of allowing the world to design the

most effective strategies to defend against the disease outweigh the

potential risks of bioterror.

Censorship would have set a potentially harmful precedent. The

National Science Advisory Board for Biosecurity, a U.S. government

agency, was the leading critic of publishing the papers. Although

the U.S. eventually reversed its stance under international pressure,

censorship would have signaled a charged message about the role of

politics and government in what is supposed to be an unbiased field.

The H5N1 controversy reminds us that scientific knowledge is a

double-edged sword — it can empower those who want to conquer

disease as well as those who wish to exploit it for sinister purposes.




Ruthless Microbes:

The Worst Epidemics in History


Plagues are perhaps the most relentless,

egalitarian killers that humanity has ever feared;

in fact, many of the worst not only killed, but

also brutally sculpted the history of whole generations

and regimes. Breaking through power structures,

destroying entire populations and often even ushering

in their own virulent successors, diseases on a mass

scale have truly painted the violent history of our

planet. Below are history’s most notorious diseases,

in roughly increasing order of the chaos they caused:

1. The Archetypal Plague: The Black Death

Jump back to mid-14th century Europe. As small towns consolidate, a

pandemic approaches from the east. Caused by the Yersinia pestis

bacterium, the Black Death arrived via the Silk Road, carried by fleas

living on the black rats of merchant ships. Victims grew buboes (black

swellings in the armpits, legs, and groin), which filled with blackened

blood tinged with greenish pus.

The plague reduced the world population of roughly 450 million by 75

million before rats were identified as the vectors and were heavily

exterminated. In England, people grew disillusioned with the Church

and, with the scarcity of labor brought on by the Black Death, gained

a deeper sense of self-worth, ultimately leading to the English

Reformation.

Courtesy of the International World History Project.

2. The Contagious Conqueror: Smallpox

Historians believe smallpox was first seen in the mummy of Ramses V

and later in Indian records from 400 AD. The virus enters the

respiratory tract and passes to the liver through the blood before

reaching skin cells, but it can also be passed through direct

skin-to-skin contact. After two weeks, patients experience delirium

and diarrhea before severe fever and a raised pink rash that turns

into crusty, bumpy sores that hemorrhage. It yields a roughly 30

percent mortality rate.

Historically, smallpox was the Spanish conquest’s greatest ally in the

15th and 16th centuries, wiping out over 57 percent of the native

population of Santo Domingo. It went on to crush half of the Cherokee

Indian population by 1738. After the WHO’s mass vaccination campaign

began in 1967, scientists isolated the last case in Somalia in 1977.


Courtesy of the New York State Department of Health.



3. The Greater War: The Flu of 1918

Courtesy of the U.S. Naval Center.

4. The Malady of the Americas: Yellow


As Europeans continued to colonize North

America, epidemics from Africa took even

deeper root in new, damp environments. Yellow

fever, caused by a variant of the Flavivirus family

and spread by the Aedes Aegypti mosquito,

entered through the African slave trade. Victims

developed muscle aches leading to liver failure

with jaundice and bled profusely from the eyes

and mouth.

Napoleon attempted to wrest control of

French colonies from rebelling slaves — until

over 80 percent of his troops sent to certain

North American territories perished to the fever,

allowing Toussaint L’Ouverture to liberate Haiti

and persuade Napoleon to sell the Louisiana territory.

Although controlled by removing stagnant

mosquito breeding water and a mandated vaccine,

yellow fever paved the road for regimes in

the New World.

Courtesy of www2.cedarcrest.edu.

1918 saw the end of grueling World War I.

As civilians rejoiced, a new, biologically deadly

war began to brew. As soldiers returned home,

an H1N1 avian influenza virus entered fresh

populations, spreading through bodily fluids.

Bizarrely targeting healthy young adults, the

virus caused fever, nausea, and hemorrhagic

diarrhea, followed by dark lesions upon the

skin. The dark lesions eventually turned blue

due to lack of oxygen as the lungs filled with a

bloody froth.

European businesses suffered heavy losses

following the wartime struggle. By the time

the virus evolved into less virulent strains, the

flu had taken more lives than all of World
War I, with an estimated global death toll of

50 million.

Courtesy of GAVI Alliance.

5. The Waterborne Killer: Cholera

Cholera had already plagued India’s contaminated

sewage and water systems for millennia

before cramped European cities of the Industrial

Revolution allowed the disease to travel. Spread
through contaminated water and food,
Vibrio cholerae caused severe vomiting

and diarrhea, which led to extreme dehydration.

Oftentimes, given continued exposure, entire

populations succumbed.

Better sanitation curbed the disease until 1961,

when a new Indonesian strain (the Ogawa strain)

spread rapidly through Bangladesh, India, the

USSR, Iran, and Iraq. The same strain would

later shatter Haiti after the 2010 earthquake,

with 530,000 cases stemming from the crippled

water infrastructure following the cataclysm.

Although later controlled through chlorination

of water, cholera limited metropolitan growth

for centuries.




Educational Emissary: Aaron Feuer, Ezra Stiles ’13


“There’s a rare moment when you realize you have something to

contribute, you hope, and I think that’s what gets me excited. It’s the

scale and importance and scope of the problem and the fact that I can

actually do something about it,” says Aaron Feuer, a senior in Ezra

Stiles. By combining his interests in programming and education reform,

Feuer is working to improve the educational system through Panorama

Education, the startup company he co-founded with Xan Tanner PC

’13, David Carel PC ’13, and John Gerlach TC ’14.

Feuer attended a large inner-city high school in Los Angeles. As

president of the California Association of Student Councils, he already

had experience with finding ways to engage students with their schools.

At the end of his senior year, he was astounded that of a freshman

class of 1600 students, only about

800 graduated. Describing the high

school dropout rate as “ridiculous,”

Feuer was driven to pinpoint the

reasons for such scholastic problems.

“We want to offer intelligence,

the same way the military gathers

intelligence. We need to diagnose

what’s wrong. I’m really excited that,

working with the Los Angeles school

system now, we can go back and

figure out what’s wrong with schools

like mine,” says Feuer.

To accomplish this, Feuer turned

to his interest in coding and programming.

He began coding in fourth

grade and has worked as a freelance

web developer and programmer

in projects ranging from a desktop

widget app for Google to album

promotion for Madonna to a Facebook

app for National Public Radio.

Using these skills, he set out to offer

information derived from analyzing

surveys to help schools achieve their

goals in solving their most pressing

problems, including improving the

graduation rate, stopping bullying,

and retaining excellent teachers. This

is critical because many problems,

Feuer notes, are not immediately


Aaron Feuer, ES ’13, is the CEO and lead developer of

his startup, Panorama, an education analytics company

that seeks to uncover and solve the most pressing

problems in our nation’s schools. Photo by Chukwuma


The summer after his freshman year at Yale University, Feuer applied

for Yale fellowship funding with the intent of building a system for

student council members in public schools to conduct teacher-feedback

surveys using paper forms. The project was called Classroom Compass,

but by the fall semester of his junior year, only a few schools had adopted

the technology.

He decided that the project was not moving quickly enough, and so in

the following January, he recruited three other Yale students to found a

new system. This time, Feuer and his group were fueled by a total of
$50,000 in grants: $25,000 from the Yale Entrepreneurial Institute (YEI)
and other Yale fellowships, and another $25,000 won from a YEI

pitch competition. That summer, Feuer and his team worked on coding

a new analytical system almost completely from scratch, using only the

skeleton from Feuer’s original code for Classroom Compass. The project

effectively transformed from the one-dimensional Classroom Compass,

a tool for teachers to collect feedback from their students, into Panorama

Education, a 360-degree feedback system that was more comprehensive

in the design, analysis, and presentation of survey information.

As CEO and lead developer for Panorama Education, Feuer wrote

the basic algorithms that underlie Panorama’s technology. One innovation

is the use of plain white paper for its surveys instead of traditional
Scantron forms, which are much more expensive to print. After having

students fill out surveys sent by Panorama,

the papers are then returned

and scanned, and afterwards, computers

collect information about the students’

choices. Feuer designed the algorithms

necessary to analyze the scanned surveys

and extract the appropriate information,

rotating the scanned piece of paper into

the right configuration and overlaying

a grid structure to find filled multiple

choice answers and record them.
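The grid-overlay step described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not Panorama's actual code: it assumes the scan has already been deskewed (rotated upright) and models the page as a grayscale NumPy array, picking the darkest cell in each question row as the filled bubble. The `read_bubbles` helper is hypothetical.

```python
import numpy as np

def read_bubbles(page, n_questions, n_choices):
    """Overlay an n_questions x n_choices grid on a grayscale page
    (0 = black ink, 255 = white paper) and pick, for each question,
    the choice cell containing the most ink."""
    answers = []
    for row in np.array_split(page, n_questions, axis=0):
        cells = np.array_split(row, n_choices, axis=1)
        # Darkest cell (lowest mean intensity) = most-filled bubble.
        answers.append(int(np.argmin([c.mean() for c in cells])))
    return answers

# Synthetic 2-question, 4-choice sheet: fill choice 1 and choice 3.
page = np.full((40, 40), 255.0)
page[5:15, 12:18] = 0.0    # question 0, choice 1
page[25:35, 32:38] = 0.0   # question 1, choice 3
print(read_bubbles(page, n_questions=2, n_choices=4))  # → [1, 3]
```

A production pipeline would also have to threshold ambiguous or double-marked rows; this sketch shows only the core grid idea.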

But the “game-changing technology”

that distinguishes his company is the

algorithms that Panorama is developing

to make the information it collects

more accessible to educators. A lot

of information about schools is collected

— things like student attendance

records, student discipline records, and

previous student survey results. Given

the vast quantity of data, most of it is

hard to interpret and visualize. For Panorama,

Feuer wants to model the
process by which a human would think
about this data, automate that thinking
through algorithms, and output
the results in a clear and coherent way.

Some of the factors involved in these

analysis algorithms include different

groupings of students (such as by gender

and race), how questions in surveys are

linked, and the relative deviations in the student’s hand as he fills out a

form. The result is a comprehensive analysis, taking into account a complex

interplay of factors and presented in a rational, beautiful way. Feuer

hopes that these analyses will reveal important insights for schools in

ways that basic human observations cannot determine without hard data.

Panorama is snowballing. In the past week alone, it added 380 additional

schools for “a really good week,” as Feuer says. The company is currently

working in six states, collaborating with organizations including

two state governments, major school districts, and Teach for America-

Connecticut. Feuer hopes Panorama will expand even further. “Every

school in America should be using this,” Feuer remarks with a wry smile.




Dr. Jonathan Rothberg’s journey in next-generation personal genome

sequencing began in the neonatal intensive care unit. His newborn son

Noah was completely blue from the inability to breathe properly, and

Rothberg and the doctors did not know why. Noah would be okay, but

little did Rothberg know that this heart-pounding experience, combined

with his passion for engineering, would lead to one of the most widely

used technologies for genomic sequencing and one of the most important

recent inventions in medicine.

As Rothberg’s father was a chemical engineer, Rothberg spent much

of his childhood exploring and tinkering around in his father’s lab in the

basement of their home. He remembers stumbling upon a collection of

various scientific books, conducting chemical experiments, and programming

his own computer. In some ways, Rothberg began his engineering

career in the lab, building his own communication systems and
experimenting with pyrotechnics as a personal hobby.

After majoring in chemical engineering with an option in bioengineering

at Carnegie Mellon University in 1985, Rothberg went to Yale

University to earn his M.S., M.Phil., and Ph.D. in biology. With the new

molecular biology program, he was able to delve into genomic sequencing

and began to think of a more systematic way for automating highly

manual, costly DNA sequencing used at the time. Eventually, his project

led to one of the earliest understandings of genes involved in the wiring

of the nervous system.

The silicon semiconductor chip used for DNA sequencing

measures the charge of the ions released during DNA replication.

The chip allows the DNA sequence to be read directly

without the optical detection systems that other

sequencing machines require. Courtesy of MIT Technology


Just two years after finishing his doctorate program, Rothberg launched

his first startup company, CuraGen, which developed drugs by using

information from the human genome. It was a huge success. In 1998,

CuraGen went public and, in subsequent years, raised over $600 million

from public markets — and the company was worth more than either

American Airlines or U.S. Steel.

Although he had a secure career with CuraGen, Rothberg’s interest

shifted to personal genomics. It was amidst this success that his son was

born with breathing troubles. He realized then that he did not want to

mine the consensus human genome but rather to understand “what

made Noah unique and why he wasn’t breathing.” Thus, despite much

criticism from his peers who said the human genome had already been

sequenced, he shifted his efforts from drug development to personal

From Engineering to Entrepreneurship:

Jonathan Rothberg, Ph.D. ’91


Forbes praised the Ion Torrent and Rothberg’s personal genomic

techniques as the “Next $100 Billion Technology Business,”

for its potential to sequence the entire human genome in a few

hours and make the sequencing viable for the public. Courtesy

of Kris Krug on Flickr.

genomics, starting another company, 454 Life Sciences.

Rothberg brought about a scientific breakthrough in personal genomics.

He developed a parallel sequencing technique to produce millions of

DNA sequences at once, selling more than $100 million worth of new

sequencing machines in the first year on the market. He was shocked

to find one morning that his company had been sold to another company,

but the event could not stop him from pursuing further research

innovation. He founded Ion Torrent, a new company that would invent

semiconductor sequencing, enabling sequencing machinery to exist on

a tiny disposable chip. His technique leveraged an innovative approach

to sequencing; it directly translated chemical information to digital

information by detecting the number of hydrogen ions released with the

addition of each nucleotide during DNA replication. More importantly,

it had the potential to decode the genome in a few hours for less than

a thousand dollars.

One can only imagine the excitement this semiconductor sequencing

discovery would have brought to molecular biologists. The World

Economic Forum called it a “pioneer of new approach to genetic

sequencing,” the CT Medal of Technology praised it as the “first personal

genome machine,” and Rothberg was featured on the covers of leading

science journals such as Nature, Cell, and Science. Early on, Life

Technologies offered more than $375 million for the technology and

eventually bought the company for a total of $725 million.

It is not an easy task to build three companies in a lifetime and nurture

all three to success. Rothberg attributes much of his achievement to his

training as a scientist. “Researchers know that you have to be smart; you

have to go through the ups and downs,” he says. “And you can’t quit

until you have solved the problems.” Taking such a progressive attitude

into entrepreneurship has been one of his greatest assets. He believes

that the key to his entrepreneurial success has been meeting the bright

people determined to achieve their goals.

Just last summer, Rothberg’s sequencing machines rapidly analyzed E.

coli that caused foodborne illnesses in Germany, allowing prompt treatment

of patients in hospitals. His techniques have been useful in efficient

agriculture and fuel production, directly affecting the lives of millions.

With his sequencing techniques, Rothberg is, as he describes it, “feeding

the world, fueling the world, and healing the world.”





Improving Science Education: An Inquiry-Based Approach

This year, we hosted the inaugural Yale Scientific Magazine National Essay Competition and asked high school students

about the importance of science education. We received entries from many talented writers across the country and after

careful consideration, selected this essay as our 1st place winner. Our editorial board and contest judges were particularly

impressed with the tangible solution and concrete examples that this piece presented to support the inquiry-based
approach, especially as this approach has recently gained wide recognition. In fact, the Yale Rain Forest Expedition

and Laboratory course has recently been awarded the Science Prize for Inquiry-Based Discovery by the journal Science.

By Jimmy Zhang

1st Place Winner, Yale Scientific Magazine National Essay Competition

Lawton Chiles High School

Tallahassee, Florida

Science has recently taken on an increasingly important role in the

continual advancement of American society. As citizens of the United

States of America, one of the most influential countries of the world,

we must lead the development of new scientific discoveries and their

accompanying breakthroughs in technology. Improving the scientific

aspect of our nation involves making sure that science education in

United States schools is as effective and thorough as possible. After

all, the young people and students of today will be the leaders of

tomorrow. Various programs encouraging greater emphasis on studies

and careers in the Science, Technology, Engineering, and Mathematics

(STEM) fields have already been developed, but these programs may

not be entirely effective in achieving their goals. Children as young as

elementary school age need to be exposed to the world of science, but

this exposure should be done in a manner that is relevant to the issues

and struggles we are presented with in the 21st century. Inquiry-based

learning is an approach to teaching science that has much to offer,

potentially transforming American society completely in a positive way.

The best way to offer science education to children in the United

States is through an inquiry-based method of learning. Just as young

children all over the world like to experiment with different varieties of

toys and games, people learn most efficiently when given something to

manipulate on their own. This is in contrast to the rigid benchmarks

and textbook studying that are prevalent in the United States today.

Of course, it would still be necessary to instill basic scientific concepts

and principles in the students, but the curriculum should then add an

additional aspect to the course that asks students to think of responses

to different situations that apply to what they are currently studying. For

instance, instead of simply teaching the process of cellular respiration, a

student might be confronted with a situation in which some variable in

the cycle was altered and then asked to determine how that might affect

various levels of biological organization. Or, better yet, the students

might be asked to change some relevant variable themselves and predict

how the end result might differ from the norm.

In a simple classroom setting, the furthest that students would

typically be able to delve into a topic would be to predict changes due to

variable alteration. But what good is prediction and speculation without

“In a typical classroom setting,

students do not usually have the

opportunity to delve into topics

using experimentation.”

The AP program is evolving to include more inquiry-based

approaches into its curriculum. Courtesy of Pinelands

Regional School District.

subsequent experimentation? That is where the idea of the laboratory

comes in. Currently, many schools in the United States provide a

laboratory component as part of their science curriculum. However,

these laboratory procedures generally provide inflexible step-by-step
instructions for students to follow. In the end, all students and

lab groups conducting the same experiment are expected to obtain the

same data and results. While this approach facilitates understanding

of scientific concepts, it does not really allow for further study of

the subject by the students themselves; they are simply following a

procedure drafted by some science professional
in an area likely far from their own. Children need to be pushed to
put their naturally inquisitive minds to work, rather than being told to
complete prescribed tasks and restate already-established conclusions.

In an inquiry-based laboratory experience, students would work, usually

in small lab groups, on an experiment pertaining to the topic they

are currently studying. In contrast to the traditional approach, these

students would decide, collaboratively, on a variable to manipulate in

their own unique experiment. After predicting the effect of the change,

the students would carry out their designed experiment and collect data,

analyze it, and draw conclusions as they would in a traditional laboratory setting.





Below are excerpts from the essays of our other contest winners:

Zachary Miller

2nd Place Winner

Brandywine Heights High School

Mertztown, Pennsylvania

“The same fundamental stuff comprises both the stars and

the seas. The same basic rules apply to Voyager 1 as well as to

Earth-bound beings like ourselves. Even across those billions

of kilometers, the laws of physics hold constant. So far as we

can tell, they hold over much greater distances. Astronomers,

armed with telescopes capable of peering across light-years,

observe the universe far beyond our own solar system. Everywhere

they look, quasars radiate light no different than light

from our own sun. Stars in distant galaxies churn with the

same elements found in our own cells.”

Elementary school students take part in inquiry-based learning.

Courtesy of Wayne Elementary School.

To catch a glimpse of what this future world of science education

would look like, we can turn to the College Board’s Advanced Placement

(AP) program, which has revamped and is continuing to revamp its science
exams, thus affecting the corresponding courses as well. The AP

program is geared towards offering challenging courses, designed to

emulate the difficulty of college classes, to high school students in

the United States and around the world. Recently, the leaders of the

program have felt a need to implement more inquiry-based learning

into their AP Sciences (AP Biology, AP Chemistry, AP Physics B and

C, and AP Environmental Science). AP Biology has been redesigned

for the current 2012-2013 school year, and AP Chemistry and AP

Physics will both be seeing changes take place in the following two

years. Edits will allow for deeper coverage of a narrower spectrum

of topics, highlighting those that are considered most applicable to

real-world science. These changes will better prepare AP students for

college and scientific careers that may follow.

Ideally, this makeover of the scientific education process in the United

States would make young people in this nation more capable once they

reach the workplace. With prior knowledge and application of science,

students would be able to handle real-life situations more adequately

than if they did not have any inquiry-based scientific education. From

engineers to architects, from mathematicians to biologists, the world of

scientific careers demands skills in critical thinking, as well as creative

minds. Inquiry-based science, involving both textbook studying and

laboratory exercises, is the best way to foster this essential creativity.

Replacing traditional science education with a more inquiry-based
approach would certainly be a feasible solution to the issue of

improving science education. The basic curriculum, involving the

establishing and learning of scientific principles, would not have to be

significantly altered. In many instances, this could be kept in place, but

students would be forced to think deeper in response to more analytical

situations. The cost of this transformation would be insignificant

relative to the enormous benefit it would provide to the future leaders
of America. Science education has come a long way in the United

States, but can still be improved further through the introduction of

more inquiry-based methods of learning and discovery.

Samantha Holmes

3rd Place Winner

Ridgefield High School

Ridgefield, Connecticut

“With a biology degree from the Massachusetts Institute

of Technology under her belt, [Katharine Dexter McCormick]

became the Vice President of the National American

Woman Suffrage Movement and championed women’s

suffrage. She examined the misogyny that permeated the

20th century and then proceeded to question the status quo.

When she was leading demonstrations against misogyny,

her background in biology was of use to her. She attributed

her conviction and motivation to her college years. Science

provided the foundation for a life of activism.”

Nike Cosmides

Honorable Mention Winner

Pueblo Vista Academy

Santa Barbara, California

“How does our common sense about the human mind,

responsibility, and legal culpability fare when looked at from

the platform of neuroscience? Can our criminal justice

system survive the encounter with a science that views human

behavior as physically caused? We live within an active legal

system, so the ongoing dialog between judicial decisions

and a scientific approach to the brain has the potential to

transform the way we live and our ideas about what is just.”

Maria Grishanina


Hill Regional Career High School

New Haven, Connecticut

“So go on and ask: what is it? Why did it happen? How did

it happen? Science is not merely the answer but the question.

This inquiry is the true starting place of invention and

innovation, where the scientific process begins in attempt to

reveal a puzzle to which we are constantly adding.”




The Human Quest: Prospering within Planetary Boundaries


Rating: &&&&&

In The Human Quest: Prospering within Planetary Boundaries by scientist Johan Rockström and photographer

Mattias Klum, we learn that Earth is entering a new age: the Anthropocene, an era in which

humanity recognizes that it is a force capable of significantly changing the Earth. By acknowledging

the global interconnectivity of humans and natural systems, the authors argue that humans must

respect scientifically defined planetary boundaries in order to maintain environmental conditions in

which humans can thrive.

The book accomplishes this task with detailed anecdotes, sweeping generalizations, stunning photography,

and artistic maps and graphs. To a certain extent, these elements are effective: the photographs

are emotionally moving, the anecdotes compelling. However, it is difficult to determine the

target audience, as different elements appeal to people with different levels of background knowledge.

In addition, the author argues that we should change our practices to support a growing population. I question this — might we

not accept that our population growth will be limited by our practices? Though this view is morally objectionable, it is more feasible

than many of the sweeping changes the book advocates.

Overall, The Human Quest presents a compelling case for humans to better their environmental stewardship and lays out a framework

for us to do so. However, the book is not without its flaws, as its stunning images do not quite compensate for the lack of a feasible

plan. Hopefully it will appeal to our reason and our emotions enough to force us to craft such a plan ourselves.

The End of the Line:

A Timely Wake-Up Call

The Doomsday Handbook:

50 Ways the World Could End


Rating: &&&&&

Charles Clover’s The End of the Line:

How Overfishing is Changing the World and

What We Eat takes the form of a tour

around the world, exploring the fishing

practices and realities in diverse oceans,

the problems involved and the potential

solutions. The outlook it provides is

bleak: from the collapse of cod populations

in New England’s waters to the

contamination of the North Sea, and from the shutting out of developing

countries’ fishermen by large-scale European trawlers off the

coast of Africa to the threats to wild fish populations brought about

by farmed salmon, we are clearly in the midst of an overfishing crisis.

Taking such an approach to explaining overfishing could leave

the reader overwhelmed by the sheer amount of information and

seemingly insurmountable nature of the problem. Yet as Clover so

clearly shows, unless large-scale changes are made in the procurement

and consumption of seafood, we are looking at a very dark future

indeed for both the world’s seas and humans’ diets. The problems,

he says, are big, but not unsolvable.

The End of the Line is readable and well-researched, and Clover’s

goal is not to depress his readers but to incite them, hopefully inspiring

action for the necessary solutions. Overfishing is a global issue

with many complex and entangled contributing factors, but Clover

does an admirable job balancing and presenting the information in

a way that serves as an effective warning to industry leaders, governments,

and the greater public.


Rating: &&&&&

Written in clear, concise prose,

The Doomsday Handbook by Alok Jha

describes the many doomsday scenarios

currently theorized by leading scientists.

In just under three hundred pages, Jha

comprehensively covers virtually every

possible end to the world, drawing from

the influences of Stephen Hawking and

Ronald Reagan. To provide a crystal clear idea of each doomsday

scenario, Jha elegantly fuses history and speculation, seamlessly

bringing us up to speed in each of the many relevant fields of

science. Well-researched and sourced, the book is a quick read,

perfect for a commute or an idle afternoon. For the data junkie,

Jha includes just enough numbers and figures to keep us on our toes

without leading us into a jungle of convoluted numbers.

Although Jha’s words are clear, his organization of the doomsday

scenarios leaves something to be desired. While the descriptions

of potential scenarios are illuminating, Jha fails to provide a sense

of comparison between each situation. For example, the extinction

of the honeybees and an invasion of extra-terrestrials clearly differ

in likelihood and their impact on humans, but the author does not

acknowledge how much the two scenarios differ. It may have been

helpful if Jha included a chart at the beginning of each section

displaying the likelihood, potential impact, and time frame of each

doomsday scenario. Overall, The Doomsday Handbook presents
an excellent overview of current doomsday scenarios but falls short in
organization and clarity.




Frankenstein Jellyfish:

the Surprising Link between Jellyfish and the Human Heart


Science fiction just became reality. After four years of research,

bioengineers at Harvard and Caltech have created an artificial jellyfish

from the muscle cells of a rat. With the help of an electric shock

to jumpstart the cells, the contraption, nicknamed the Medusoid,

“swims” just like its real-life counterpart. Researchers hope that it

can provide a better understanding of other muscular pumps in

nature — most notably, the human heart.

Dr. Kevin Kit Parker, professor of bioengineering and applied

physics at Harvard University, found his inspiration for the Medusoid

at the New England Aquarium. As a scientist involved in cardiovascular

drug development, he was frustrated by how little the

field actually knew about the heart. Thus, when he saw a jellyfish

pumping water to propel itself forward, expanding and contracting

in rhythmic, fluid motions, its resemblance to a human heart was


By engineering a very simple biological pump like a jellyfish, Parker

sought to model the fundamental pumping mechanism behind a

complex organ like the heart. In partnership with Dr. John Dabiri,

Professor of Aeronautics and Bioengineering at Caltech, his team

began studying factors that affect the motion of real jellyfish: the

shape and thickness of the bell, the speed of each contraction, and

the arrangement of muscle tissue. Their final product was simple but

astoundingly true-to-life: muscle cells arranged in a circular pattern,

held together by a silicone membrane. When submerged in saltwater

and shocked with an electric current, the muscle cells began moving

in synchronized contractions, bringing the Medusoid to “life.”

By studying the motion of the Medusoid, researchers can further

investigate how a beating heart regulates blood flow. This is

crucial to diagnosing heart failures and designing cardiovascular

drugs more effectively. “[The Medusoid] might be a good way to

study how the heart works and how the heart responds to different

environments,” says Dr. Paul Van Tassel, Professor of Chemical

and Environmental Engineering at Yale University. The model can

also be applied to study how the heart responds to disease in a

controlled laboratory setting.

In addition to serving as a modeling tool, the prospect of a

bioengineered muscular pump has significant implications for

cardiac patients. Current pacemakers and artificial heart valves

are problematic because they are made of plastic and aluminum.

“The natural response of a living tissue to a synthetic object is

to avoid or actively reject it,” says Van Tassel. “Now what people

try to do is make material that either mimics biology or actively

engages biology.” Thus, the Medusoid project is an early step in

integrating biological cells with these synthetic devices, resulting in

a more durable implant that the body is less likely to reject. Like the

Medusoid, future bioengineered medical devices can benefit from

a hybrid of natural and synthetic materials.

Looking ahead, the team plans to add features to the Medusoid

that make it more lifelike. A future model may be able to change

direction while swimming, and it could also include a primitive

“brain” that makes it respond to stimuli such as light and food.

Ultimately, a self-sustaining Medusoid with features like these

would better represent an organ like the heart, which independently

responds to various signals in the body.

Meanwhile, researchers remain optimistic about future Medusoid-inspired
projects. According to Parker, the Medusoid provides an

ideal “design algorithm” for reverse-engineered organs: rather than

blindly mimicking an organ in nature, scientists should first isolate

the exact factors that contribute to its function, and then recreate

that function. Additionally, the Medusoid adds an entirely new

dimension to bioengineering: while previous research has primarily

focused on manipulating cells and molecules, an artificial jellyfish

is a step towards engineering whole organisms. “We’re reimagining

how much we can do in terms of synthetic biology,” says Dabiri.

The End of the World?






