
Yale Scientific

Established in 1894

THE NATION’S OLDEST COLLEGE SCIENCE PUBLICATION

April 2012

Vol. 85 No. 3

Morality

Computing right and wrong

in robotics PAGES 20-23

BIOETHICS

Placebo Pill-popping: The ethics of using placebos in research PAGES 12-13

COGNITIVE SCIENCE

The Chase... Is On: Detection of chasing by low-level cognitive processes PAGES 18-19

RELIGION

A Synergy of Science and Religion: Bridging the gap between these disparate trades PAGES 23-24


everyday Q&A

Q&A How Does Sunscreen Protect You?

Sunscreen is a ubiquitous part of our culture, but how does it actually work?

BY SELIN ISGUVEN

With spring in full force and summer just around the corner, we

will all doubtlessly be getting more sun. But while basking in the

warmth, you won’t want to worry about sunburn. As we all have

been endlessly reminded by our favorite television shows and

perhaps our nagging mothers, sunscreen is essential for protecting

our skin from UV light — but how exactly does this lotion work?

Sunscreen works by blocking and absorbing UV rays through

a combination of physical and chemical particles. Physical particles,

such as zinc oxide and titanium dioxide, are used to reflect

UV radiation from the skin. At the same time, complex chemical

ingredients in sunscreen react with radiation before it penetrates

the skin, absorbing the rays and releasing the energy as heat.

A combination of blocking and absorbing UV radiation is

especially important to combat both UVB and UVA rays. UVB

radiation is the main cause of sunburn and skin cancer. UVA rays,

on the other hand, penetrate more deeply into the skin and were

once thought to only cause skin aging and wrinkling. However,

recent research has confirmed that UVA rays also play a significant

role in the development of skin cancer. Still, many sunscreens on

the market contain ingredients that only block UVB rays, thus

providing insufficient protection against harmful UVA radiation.

Another factor to consider in sunscreen is the sun protection factor, or SPF, which is commonly misunderstood as the strength of protection. It actually refers to how much longer it takes for UVB rays to redden the skin with sunscreen compared to without sunscreen. For example, an SPF of 15 means it will take 15 times longer for skin to burn while using the product compared to without the product.

Sunscreen is composed of particles that physically bounce radiation off the skin and react with UV rays. Courtesy of Korean Beacon

Therefore, look for a sunscreen that offers both UVA and UVB

protection with an SPF of 15 or higher so that you can better

enjoy fun in the sun.
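The SPF arithmetic above can be sketched in a few lines. This is a minimal illustration of the definition given in the article, not dermatological advice; the 10-minute unprotected baseline is an invented example value, since actual burn time varies by skin type and UV intensity.

```python
# SPF multiplies the time it takes UVB rays to redden unprotected skin.
def protected_burn_time(baseline_minutes, spf):
    """Minutes until skin reddens with sunscreen, per the SPF definition."""
    return baseline_minutes * spf

# Illustrative only: assume skin that burns in 10 minutes without protection.
print(protected_burn_time(10, 15))  # SPF 15 -> 150 minutes
```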

Q&A

Will a Magnetic Pole Shift Mean the End of the World?

A sudden reversal of your compass to point south would be part of a natural phenomenon.

BY JOSEPHINE SMIT

Some say the world will end in fire

and some say in ice; yet, other doomsday-sayers

predict our demise in massive

earthquakes and bombardment by

solar radiation resulting from a reversal

of the Earth’s magnetic poles. But rest

assured — there is no scientific basis

for such a cataclysmic scenario.

Reversals of the Earth’s magnetic

field are natural and regular phenomena

that occur approximately

every 250,000 years. The last reversal

occurred 720,000 years ago, and evidence

of a ten percent decline in the

Earth’s magnetic field since the nineteenth

century has led some scientists

to speculate that we may currently be heading into a reversal. Because Earth’s magnetic field has always been dynamic, though, it is difficult to know for sure.

A reversal of the magnetic field would make a compass point south. Courtesy of lifeslittlemysteries.com

We

can speculate that because the Earth’s

magnetic field acts as a shield against

solar radiation, a weakening of the

magnetic field may result in an increase

in solar radiation. Although this would

not be harmful to humans, technology

such as satellites and power grids could

suffer more outages because of their

vulnerability to solar storms. There

are also concerns about how a reversal

might affect migrating species that use

Earth’s magnetic field for navigation.

As there is no convincing evidence that

magnetic field reversals have triggered

species extinctions in the past, it seems

that species are capable of adjusting to

a pole reversal.

So, if you see your compass needle starting to point south, no

need to worry — it is not the end of the world.

2 Yale Scientific Magazine | April 2012 www.yalescientific.org


contents

April 2012 / Vol. 85 / Issue No. 3

NEWS

4 Letter from the Editor
6 Amasia: The Next Supercontinent
6 Supercomputing
7 Intelligent Buildings
7 "Kracking" the KRAS Code
8 Making Your Own Luck
9 New Definition of Autism: From Flexible to Few
10 Cryogenic IR Spectroscopy
11 Elegant and Simple: Understanding the Interaction Between MDA5 and Viral RNA

FEATURES

2 Q&A
26 Pharmacology: Study Drugs and Neural Enhancers
27 Biochemistry: The Marijuana Receptors
28 Health: What am I Eating? The Infiltration of Genetically Modified Foods
30 Medicine: Limiting Embryo Implantation
32 Bioethics: Selling Sex... Cells
33 Biology: Let Them Eat Cake: More to Obesity than Just the Extra Calories
34 Science Education, Part II: Sizing Up Science at Yale
35 Book Review: The Aha! Moment: A Scientist's Take on Creativity
36 Undergraduate Profile: Darren Zhu, B.S. '13
37 Alumni Profile: Dr. Edward Cheung
38 Neurology: Left Brain, Right Brain: An Outdated Argument

20 ON THE COVER
Machine Morality: Computing Right and Wrong
Yale researcher Wendell Wallach considers the ethical, technical, and legal difficulties of creating machines that are capable of moral decision-making.

12 Using Placebos in Research: Ethical or Not?
The use of placebos in clinical research trials has been a hot-button issue in bioethics for decades. Bioethicist and Yale Professor Dr. Robert Levine believes that placebo use in research is ethical, but only under the right circumstances.

14 The Healing Art of Meditation
Meditation has been believed for thousands of years to have healing powers. Now we are beginning to understand how meditation affects the brain. Dr. Judson Brewer of the Yale Medical School has shown meditation decreases activity of the parts of the mind associated with self-referential processing and mind-wandering.

16 One of the Many Quirks of Quantum: Perpetual Motion and Never-Ending Currents
Yale Professors Jack Harris and Leonid Glazman put to rest quandaries surrounding the existence and nature of non-dissipating quantum currents. Rooted firmly in quantum theory and zero-point motion, these tiny currents are on the vanguard of our knowledge of perpetual motion.

18 Wolves and Wolfpacks: The Chase is On

24 Science and Religion: Reality's Toolkit


THEME

Controversies & Ethics in Science

Compiled by Katie Leiby and Terin Patel-Wilson

13th Century: The manuscript of a French romance titled Silence, by Heldriss of Cornwall, first uses the terms “nature” and “nurture” to describe the debate over human development.

1859: Charles Darwin publishes his On the Origin of Species, in which he contradicts the Church’s dogma of creationism and puts forth the theory of evolution and natural selection.

1930: Georges Lemaître writes a report in Nature, proposing that the Universe expanded from a single “Primeval Atom.” This idea would later become known as the Big Bang theory.

1945: The Atomic Age begins with the detonation of the first nuclear bomb test: “the gadget.” The device was detonated in New Mexico, despite fears by some that the test would ignite Earth’s atmosphere and destroy all life.

1975: Geologist Wallace Broecker introduces the term “global warming” for the first time in a paper published in Science.

1996: Dolly the sheep becomes the first cloned mammal via somatic cell nuclear transfer. Her premature death in 2003 from progressive lung disease revived debate over the ethics of cloning.

2006: The International Astronomical Union officially declares eight planets in the solar system. Pluto, because it does not “clear the neighborhood around its orbit,” is demoted from a full-fledged planet to the status of a dwarf planet.



April 2012 Volume 85 No. 3

Editor-in-Chief

Publisher

Managing Editors

Articles Editors

News Editor

Features Editor

Copy Editors

Production Manager

Layout Editors

Arts Editor

Online Editor

Multimedia Editor

Advertising Manager

Distribution Manager

Subscriptions Manager

Outreach Chair

Special Events Coordinator

Staff

Kara Brower

Matthew Chalkley

Sunny Chung

Andrew Goldstein

Spencer Katz

Sunny Kumar

Katie Leiby

Contributors

Kevin Bohem

Arash Fereydooni

Walter Hsiang

Selin Isguvin

Sophie Janaski

Bridget Kiely

Achyut Patil

Yale Scientific Magazine

Established 1894

William Zhang

Elizabeth Asai

Jonathan Hwang

Robyn Shaffer

Nancy Huynh

Shirlee Wohl

Mansur Ghani

Renee Wu

Ike Lee

Jessica Hahne

Li Boynton

Somin Lee

Jessica Schmerler

Jeremy Puthumana

Jonathan Liang

Chukwuma Onyebeke

Stella Cao

Naaman Mehta

Karthikeyan Ardhanareeswaran

Lara Boyle

Mary Labowsky

Kaitlin McLean

Jonathan Setiabrata

Dennis Wang

Jonathan Greco

John Urwin

Dennis Wang

Katherine Zhou

Josephine Smit

Nicole Tsai

Selen Uman

Andrea White

Joyce Xi

Jason Young

Zoe Kitchel

Advisory Board

Sean Barrett, Chair

Physics

Priyamvada Natarajan

Astronomy

Kurt Zilm

Chemistry

Fred Volkmar

Child Study Center

Stanley Eisenstat

Computer Science

James Duncan

Diagnostic Radiology

Melinda Smith

Ecology & Evolutionary Biology

Peter Kindlmann

Electrical Engineering

Werner Wolf

Emeritus

John Wettlaufer

Geology & Geophysics

William Summers

History of Science & History of Medicine

Jeremiah Quinlan

Undergraduate Admissions

Carl Seefried

Yale Science & Engineering Association

FROM THE EDITOR

Controversies & Ethics in Science

As a kid, I was always fascinated by science for holding the explanations to the seemingly inexplicable, for providing the “why” behind life and the natural world. And while science certainly does continue to offer this same appeal, I have come to find that many answers are more convoluted than direct, with numerous contentious theories, competing interpretations, and widespread uncertainties, all nestled within each other like Russian dolls. Our established dogma in any given field seems to be a moving target, with new data able to decisively shift the current system. Inevitably, closer inspection of any subject will reveal the cracks, but it can be staggering to hear about the vast number of intense scientific debates, from the technicalities of the most efficient assays to the heated disputes over the impact of climate change and genetically modified foods.

Controversy permeates all scientific disciplines — and with the aid of modern media, it frequently becomes infused with ethical concerns. Compounding this contention, these issues thus grapple not solely with the scientific sphere but also with the perception of this work in ‘tomorrow’s rear-view mirror’ and the fundamental principles of such endeavors.

While it is easy to become caught up in the inherently loaded and innately interesting arguments, what is important to recognize is what scientists do agree on: that the topic at hand is worth their investment and dedication, and that they feel compelled to search for the truth and support their claims. Perhaps a unique feature of scientific controversy is that it does not necessarily prevent progress but rather can create it, spurring research and galvanizing new discoveries. In fact, some may even use controversy as a measure of the health of a scientific field.

With this in mind, we delve into our theme of controversy and ethics in science, viewing the

field of science as a growing process and a work in progress. For example, Yale Geology and

Geophysics doctoral student Ross Mitchell has recently proposed in Nature a new high resolution

model of supercontinent formation, called orthoversion, as just one of the many paths of investigation

stemming from Alfred Wegener’s initial — and extremely controversial — 1912 idea of

continental drift. The concepts of disease have also been contentious and consequently evolving

and expanding: upon proposed changes to the official definition of autism in the Diagnostic and

Statistical Manual of Mental Disorders, Yale Child Study Center Director Fred Volkmar and his

team are now actively working to demonstrate the oversimplification of the new definition. And

among the flurry of bioethical debates, Wendell Wallach, a leading researcher at Yale’s Interdisciplinary

Center for Bioethics, explores the ethical, technical, and legal difficulties of creating

machines that are capable of moral decision-making, which is especially relevant as Yale Associate Professor of Computer Science Brian Scassellati is now leading a federally funded, $10 million multi-university initiative to build a breed of “socially assistive” robots for assisting young children.

Despite the nature of these contentious arguments, they have produced tremendous leaps in

scientific knowledge and are far from solely unproductive squabbling. We hope that this issue of

the Yale Scientific will provide a glimpse of the controversy and ethics in modern science, along

with a certain perspective to appreciate the relentless search for advancement and truth that will

ultimately benefit the entire scientific community.

William Zhang

Editor-in-Chief

The Yale Scientific Magazine (YSM) is published four times a year by Yale Scientific Publications, Inc. Third class postage paid in New Haven, CT 06520. Non-profit postage permit number 01106 paid for May 19, 1927 under the act of August 1912. ISSN: 0091-287. We reserve the right to edit any submissions, solicited or unsolicited, for publication. This magazine is published by Yale College students, and Yale University is not responsible for its contents. Perspectives expressed by authors do not necessarily reflect the opinions of YSM. We retain the right to reprint contributions, both text and graphics, in future issues, as well as a non-exclusive right to reproduce these in electronic form. The YSM welcomes comments and feedback. Letters to the editor should be under 200 words and should include the author’s name and contact information. We reserve the right to edit letters before publication. Please send questions and comments to ysm@yale.edu.

About the Art

The cover, designed by Arts Editor Jeremy Puthumana and Editor-in-Chief William Zhang, depicts a dominant, overpowering robot figure standing against a molecular background, representing the struggle between man, responsible robotics, and the potential consequences we need to evaluate. Wendell Wallach, a leading researcher at Yale’s Interdisciplinary Center for Bioethics, investigates the ethical challenges of future technologies and how to implement moral decisions, particularly in robots and computers. Production Manager Li Boynton designed the headers on pages 16, 18, and 24.


GEOLOGY

Amasia: The Next Supercontinent

BY ANDREA WHITE

Yale Geology and Geophysics Ph.D. candidate Ross Mitchell and other Yale researchers have developed a new model of how Earth’s continents will shift over the next several million years, and the result is a supercontinent they are naming Amasia.

Their model comes from a new dominant theory, called orthoversion, about how these supercontinents form. It predicts that continents will shift 90 latitudinal degrees away from their original continental ancestors.

Researchers predict the formation of a new supercontinent that will form as America and Asia fuse. Courtesy of Nature

Amasia will form when America and

Asia fuse at the North Pole, located

approximately 90 degrees north of

where the previous supercontinent,

Pangaea, formed around 300 million

years ago. Pangaea itself was 90 degrees away from

its predecessor, Rodinia, which was preceded by Nuna

2 billion years ago.

The orthoversion pattern was discovered by analyzing

ancient paleomagnetic data from over the past few

billion years to understand how the Earth’s crust and

mantle shift around the liquid outer core

— a process distinct from plate tectonics,

which analyzes how individual plates shift

and interact with each other. Orthoversion

replaces the previously held theories

of introversion, where supercontinents

form in the same place, and extroversion,

where supercontinents shift 180 degrees

to the opposite side of the globe.

Mitchell, the primary author of the

research published in Nature, said the

theory has great implications for the

future of geology. “Now that we have a

clear picture of what the supercontinent

cycle actually looks like, we can begin to

answer the questions of why the supercontinent cycle


operates as it does,” said Mitchell. The research team

predicts Amasia will form within the next 50 to 200

million years.

COMPUTER SCIENCE

Supercomputing Reveals Universe Secrets

Over 1.5 million luminous galaxies are currently

being measured by astronomers to better understand

the expansion of the Universe, and a third of them may

already hold the key to revealing what it is really made of. A Yale research group

led by Professor of Physics

Nikhil Padmanabhan

hopes to change this by

using supercomputers to

analyze data from the Sloan

Digital Sky Survey (SDSS).

BY NICOLE TSAI

Artist’s depiction of an expanding Universe. Courtesy of Chris Blake and Sam Moorfield

Working with researchers from dozens of other major institutions, Padmanabhan and his team are working to create a 3-D map of the luminous galaxies in the universe. The

SDSS Collaboration began gathering data in 2008 and

will continue through the year 2014. The full-color map

is composed of more than one trillion pixels and is so

detailed that viewing it entirely would require approximately

500,000 high-definition televisions. Padmanabhan

and Antonio J. Cuesta, a postdoctoral student in the

group, currently use the BulldogM supercomputer in the

Yale High Performance Computing Group to interpret

the cosmological parameters

obtained from SDSS measurements.

“We need to explore the

effect of each and every value of

our 10 parameters simultaneously,

and the supercomputer is needed

to sample the huge number of

data points,” says Cuesta.

Ultimately, the resulting map

can be used to gain insight into

the spatial distribution and clustering

of galaxies while allowing

for a better understanding of the

relative quantities of ordinary

matter, dark matter, dark energy, and neutrinos in the

universe. This data contributes to the search for an

accurate theoretical model of the evolution of the

universe. “The map will bring the universe into sharp

focus,” Cuesta believes, and there is no telling what new

understanding of the universe such clarity will bring.
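The article's trillion-pixel figure can be sanity-checked with simple arithmetic: a 1920x1080 high-definition screen shows about 2.07 million pixels, so displaying the full map at native resolution really does take roughly half a million such screens. A quick check:

```python
# Verify the magazine's estimate that viewing the one-trillion-pixel SDSS map
# would require approximately 500,000 high-definition televisions.
map_pixels = 1_000_000_000_000   # "more than one trillion pixels"
hd_pixels = 1920 * 1080          # pixels per 1080p HD screen

print(round(map_pixels / hd_pixels))  # roughly 480,000 screens
```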



ENGINEERING

New Grant for Intelligent Buildings Project

BY JOYCE XI

Online classification of building electricity consumption based on building usage patterns. Courtesy of Professor Savvides

Yale’s Intelligent Buildings Project has

received a $200,000 grant from the Wells Fargo

Foundation to conduct research on building

energy consumption. The grant will aid the

project’s efforts to lay the foundations for

significant energy conservation in existing and

future building systems.

Founded in September 2010 as a collaborative

effort between Yale’s School of

Engineering and Applied Sciences, School of

Architecture, and Climate and Energy Institute,

the Intelligent Building Project aims to create

methods that more precisely monitor and

handle energy usage in buildings, ultimately

allowing for systematic reduction of waste.

The project integrates engineering research

on advanced analytical and sensing capabilities

with architectural characterization of building

functionality to create more specific and targeted

measurements of consumption.

The project stresses the importance of avoiding

inefficiency and prioritizing energy distribution

within buildings. According to Dr.

Andreas Savvides, Professor of Electrical

Engineering and Computer Science and a

head of the project, “Just as important as

the efficient production of energy, is the

intelligent management of it.”

With the grant, the project hopes to

investigate new technologies that take

into consideration the behavior of

building occupants to achieve high levels

of energy efficiency. Researchers have

been developing and testing intelligent

sensor prototypes that analyze building

performance across many subsystems

and energy usage areas in buildings. With

this sensing, they hope to replace coarse

existing metrics used to analyze building

energy performance with new ones that

also capture occupant related factors, and

also identify the the relative contribution

of building features and systems in the

performance numbers.

MEDICINE

“Kracking” the KRAS Code

A recent study led by Professor of Obstetrics,

Gynecology, and Reproductive Science, Hugh Taylor,

at the Yale School of Medicine has

identified the first genetic marker

for endometriosis, an invasive disease

characterized by infertility and

chronic pelvic pain that afflicts 70

million women worldwide.

While mouse models have demonstrated

a causal link between

the overexpression of KRAS, a

proto-oncogene involved in tissue

signaling, and the development of

endometriosis, a genetic basis had

never been implicated in humans

before this discovery.

With his team, Taylor, an infertility

specialist who has studied the

development and function of the uterus for twenty

years, identified how a variant in a regulatory region

of KRAS leads to the gene’s overexpression and an

increased likelihood of developing a genetic form of

the disease.

BY JASON YOUNG

Pictures of human endometrial tissue with the KRAS variant (right) and the wild-type (left). Courtesy of Professor Taylor

The researchers screened 132 women with endometriosis and found that 31 percent carried the

variant as opposed to 5.8 percent in the general population.

Additionally, endometriosis

patients with this marker tended

to develop an invasive type of the

disease that was more likely to lead

to infertility.

The team’s discovery will allow

physicians to develop novel treatments

to target the inhibition of this

genetic pathway. Additionally, genetic

testing may become an effective tool

to prevent development of the disease

in individuals with a family history of

the condition.

Taylor attributes his success to the

collaborative environment that fostered

discussions with fellow professors

Joanne Weidhass and Frank Slack, who discovered

the KRAS variant and its effects on gene expression in

ovarian cancer. “This cooperation between physicians,

physician-scientists, and scientists,” Taylor states, “is the

type of interaction that leads to success.”



MATH

Making Your Own Luck

BY ACHYUT PATIL

When an NBA basketball player seems to be making shot after shot

in a game, we naturally expect that the probability of his next shot being

successful will be higher than normal. However, in 1985, Cornell

professor Thomas Gilovich conducted analyses of basketball free-throw

statistics to report that the idea of a player being “on fire” is

nothing more than a misperception of random sequences. Yet, 27

years later, the work of Dr. Gur Yaari, a postdoctoral associate in the Department of Pathology at the Yale School of Medicine, and Dr. Gil David, a postdoctoral associate in Applied Mathematics, has shown that the

“hot hand” may not just be luck after all.

In 2011, Yaari and his colleague Shmuel Eisenmann took another look

at free-throw statistics and published an article suggesting that the

“hot hand” was not simply a matter of chance, as previously supposed.

Noting that free-throw shooting is too rare in basketball to

draw conclusions about causality or correlation — a main problem with

the 1985 study — he and David turned instead to statistics on strikes

in bowling. Bowling, Yaari observed, offered “much more data, up to

100 frames per day,” for analysis.

The pair analyzed nine seasons of Professional Bowling Association

data at the game, season, tournament, and career levels. There was a

strong correlation between successes (strikes) at all levels, except for

at the game level. Throughout a season or career, a player’s successes

were concentrated into distinct periods of time, but within a single

game, successes seemed to be randomly distributed. However, while

the bowler’s performance in a prior frame had no causal effect on his

next frame, his performance in the prior few frames could be used as

a predictor. That is, a strike in the fourth frame alone did not mean a

higher chance of a strike in the fifth frame, but if the bowler had been

successful for eight frames, it indicated a “good game” and predicted

a higher chance of success in the ninth frame. Thus, Yaari and David

concluded that the “hot hand” does exist as something more than luck,

but in a correlative, not causative, role. Hot streaks are not a matter

of “success breeding success,” but rather one of having a “good” or

“bad” day.
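The "good day / bad day" reading described above can be illustrated with a toy simulation. This is not Yaari and David's actual analysis of Professional Bowling Association data; the two strike rates (0.45 and 0.60) are invented values standing in for hidden "bad day" and "good day" skill levels. Each frame is generated independently, yet early strikes still predict the ninth frame, because both reflect the same hidden state — correlation without frame-to-frame causation.

```python
import random

random.seed(0)

def simulate_games(n_games=100_000):
    """Each game draws a hidden per-day strike rate; frames are independent."""
    games = []
    for _ in range(n_games):
        p_day = random.choice([0.45, 0.60])  # hidden "bad day" or "good day" rate
        frames = [random.random() < p_day for _ in range(9)]
        games.append(frames)
    return games

def p_ninth_given_first_eight(games, k):
    """P(strike in frame 9 | exactly k strikes in frames 1-8)."""
    hits = [g for g in games if sum(g[:8]) == k]
    return sum(g[8] for g in hits) / len(hits)

games = simulate_games()
# The ninth-frame strike probability rises with the number of early strikes,
# even though no frame causally influences the next.
print(round(p_ninth_given_first_eight(games, 2), 2))
print(round(p_ninth_given_first_eight(games, 7), 2))
```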

While a 1985 study attempted to refute the “hot hand” phenomenon citing basketball statistics, Yaari claims that it lacked an adequate sample size. Courtesy of Dr. Gur Yaari

The significance of these results extends well beyond the sporting world. “While the examples we studied came from sports,” said Yaari, “the implications are much more far-reaching.” The results indicate

that there is some component of the “hot hand” that could be under

our control. If one can isolate the conditions that lead to better performance,

sequences of higher success may be achieved. While it may

seem like common sense, this radically contradicts the previously held

statistical belief from Gilovich’s 1985 study.

Yaari plans to expand his studies with computer games, testing the

existence of the “hot hand” in the mental realm of concentration.

In addition, Yaari plans to use similar techniques to model a number

of financial trends. Such work could better predict the time scales of

certain events and determine, for example, whether a certain stock’s

value fluctuates monthly or yearly. If Yaari’s hypotheses prove correct,

it seems there may be no end to human endeavors in which “luck”

can be made.

Yaari’s finding that early success in a bowling game predicts

later success indicates that the “hot hand” is not simply a lucky

sequence of events. Courtesy of Photobucket

Bowling data for Michael Machuga. The more strikes he has

had over the first 8 frames, the higher the probability of a strike

in the ninth. Courtesy of Dr. Gur Yaari



PSYCHOLOGY

New Definition of Autism: From Flexible to Few

Proposed changes to the definition of autism might make

it much less likely for a person to be diagnosed with the

disorder.

A panel of experts from the American Psychiatric Association

(APA) is reevaluating the definition of autism currently

published in the Diagnostic and Statistical Manual of Mental

Disorders (DSM). The DSM is the standard reference to

determine treatment, insurance coverage, and access to services

for a variety of mental illnesses. Although no current

patient will be affected by the new rules, research by Yale

Child Study Center Director Fred Volkmar suggests that the

revision may disqualify a large number of intellectually disabled

patients from receiving a diagnosis of autism spectrum

disorder in the future.

Currently, a person would qualify for a diagnosis of

autism spectrum disorder by exhibiting six of the twelve

behaviors on the criteria list, which include social troubles,

communication and play troubles, and restrictive repetitive

interests. However, the potential change would eliminate

Asperger syndrome and pervasive development disorder.

“They are lumping things together so that there is going to

be less flexibility,” Volkmar said. “They are going from 2000

combinations of the spectrum to a handful.”

Volkmar formerly served on the APA’s expert panel to

update the manual, but he resigned early on. Volkmar and

colleagues conducted their own study to see the revision’s

impact by analyzing data from a 1994 study that served as a basis for the

current autism criteria included in DSM-IV, which is the most current

edition of the DSM. The researchers found that under the new definition, 65 percent of children and adults with high-functioning forms of autism would no longer meet the criteria for a diagnosis of autism. “This is

a concern,” Volkmar explains, “because getting the label means they

BY SELEN UMAN

Cases of diagnosed autism spectrum disorders in the U.S. have increased

dramatically. Today, scientists estimate that 1 in 150 children have an autism

spectrum disorder. Courtesy of crisisboom.com

Close networks of parents have bonded over common experiences with

children diagnosed with autism spectrum disorders. The children, too,

may grow to find a sense of their own identity in their struggle with the

disorder. Courtesy of The Wall Street Journal

are getting services.”

Experts currently working on the revised definition of autism

disagree with Volkmar’s early findings. The study’s focus on a high-functioning

group may have slightly exaggerated the estimated impact,

the authors acknowledge. “Valid criticism would be that it’s not perfect.

We don’t have the exact same definition for the disorder that we had

in 1994. However, there should be studies independent of what the

APA is doing for all the obvious reasons,” Volkmar said.

He stressed that studies in Finland and Louisiana are

showing similar results and in the next few months more

studies will be available. Volkmar stated that he hopes the

APA is going to be responsive to the data.

While the APA is struggling to draw the line between

usual and abnormal, parents and loved ones of those

with disorders are getting worried. Tens of thousands of

people are receiving state-backed services to help offset

the disorders’ disabling effects, which sometimes include

severe learning and social problems, and the diagnosis is,

in many ways, central to their lives.

Volkmar first presented his preliminary research results

in September at Yale and in October at the Institute on

Autism at the American Academy of Child and Adolescent Psychiatry

Meeting in Toronto. Hoping to have a voice in the

ongoing debate over definitions, Volkmar and colleagues

are publishing full study results in the April print edition

of Journal of the American Academy of Child and Adolescent

Psychiatry.



CHEMISTRY

Yale Professor of Chemistry Mark Johnson and his group have

recently developed a new technique called cryogenic infrared (IR)

spectroscopy. “This technique can provide a stop-action picture of

what is happening through a series of stages,” Johnson explains. The

lab has demonstrated the power of the technique in a recent Science

paper by elucidating hydrogen-bonding interactions in a supramolecular

complex held together by hydrogen bonds. Understanding the

pattern of hydrogen bonding is exceedingly important in the contexts

of enzyme-substrate and drug-protein interactions.

According to Johnson, the new technique provides the resolution

and sensitivity of mass spectrometry coupled with structural characterization

by FT-infrared spectroscopy. The molecules of interest

are ionized using electrospray ionization and then frozen into stable

configurations by cooling to around 10 kelvins. The cryogenic temperature

traps some of these molecules while they are in otherwise

short-lived states. An atmosphere enriched in D₂ is introduced, causing a D₂ molecule to associate with each of the trapped species. The desired D₂ adduct can then be separated by its heavier mass and exposed to the IR radiation. The temperature increase caused by absorption of IR energy causes the D₂ molecule to evaporate from the adduct. By detecting the lighter fragment arising from D₂ loss as the frequency is scanned, the vibrational IR spectrum is obtained with great sensitivity.
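The tag-and-dissociate sequence can be sketched as a toy simulation (all masses, band positions, and linewidths below are invented for illustration, not values from the Johnson lab):

```python
# Toy model of D2 "messenger tagging" action spectroscopy: an ion-D2
# adduct absorbs IR light only at its vibrational resonances; absorption
# heats the adduct and evaporates the weakly bound D2, so counting the
# lighter (bare-ion) fragment versus laser frequency traces out the
# IR spectrum.

ION_MASS = 146                      # hypothetical bare-ion mass (Da)
D2_MASS = 4                         # mass of the weakly bound D2 tag (Da)
ADDUCT_MASS = ION_MASS + D2_MASS    # the mass-selected species
RESONANCES = [1680, 3350]           # hypothetical vibrational bands (cm^-1)
LINEWIDTH = 10                      # cm^-1; narrow because the ions are cold (~10 K)

def fragment_signal(freq_cm):
    """Fraction of adducts that lose D2 at a given laser frequency."""
    return sum(1.0 / (1.0 + ((freq_cm - r) / LINEWIDTH) ** 2)
               for r in RESONANCES)

def scan(freqs):
    """Return (frequency, detected mass, signal) for each laser step."""
    spectrum = []
    for f in freqs:
        # the detector is set to the fragment mass, lighter by one D2
        spectrum.append((f, ADDUCT_MASS - D2_MASS, fragment_signal(f)))
    return spectrum

points = scan(range(1500, 3600, 5))
peak = max(points, key=lambda p: p[2])
print(f"strongest D2-loss signal near {peak[0]} cm^-1")
```

Away from the resonances almost no D₂ is lost, so the baseline stays near zero; the peaks in the fragment signal map out the vibrational bands.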

This technique was successfully used to investigate hydrogen bonding

in a simple peptide catalyst-biaryl substrate system developed by

Professor Scott Miller of the Yale Chemistry Department. Hydrogen

bonding between carbonyl (C=O) and amine (N-H) moieties is very

important in biochemistry. The pattern of these linkages effectively

determines the complex formed by two molecules, but there are typically a wide variety of different possible conformations.

Cryogenic IR Spectroscopy

BY MATTHEW J. CHALKLEY

An image highlighting a single hydrogen bond and how its surroundings can be determined. Courtesy of Professor Johnson

Sample difference spectrum showing how ¹³C substitution shifts a single oscillator. Courtesy of Professor Johnson

Information

provided by the cryogenic spectroscopy experiment

dramatically narrows the computational search for

the correct complex by indicating which C=O and

N-H groups are active in a particular binding pair.

The critical information was obtained using single-site isotopic substitution. The relevant isotopes (¹³C and ¹⁵N) increase the mass of the oscillator, causing

a reduction in its resonant frequency. Often, careful

comparison will show that only a single peak is

significantly shifted in the isotope-labeled spectrum.

This peak can then be unambiguously identified as

representing the frequency of the bond where the

substitution has occurred. The value of this frequency

is strongly dependent on the local hydrogen-bonding

environment of the oscillator. The formation of a hydrogen bond results in a slight red shift of the resonance to a lower frequency.
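The magnitude of such an isotope shift can be estimated from the harmonic-oscillator relation ν ∝ 1/√μ, where μ is the reduced mass of the two bonded atoms. A rough sketch, treating the C=O stretch as an isolated diatomic (an approximation; the 1700 cm⁻¹ band position is a typical textbook value, not data from this study):

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a diatomic oscillator."""
    return m1 * m2 / (m1 + m2)

def shifted_frequency(nu, m1, m2, m1_new):
    """Scale a stretch frequency nu (cm^-1) for an isotope change
    m1 -> m1_new, using nu proportional to 1/sqrt(reduced mass)."""
    return nu * math.sqrt(reduced_mass(m1, m2) / reduced_mass(m1_new, m2))

# carbonyl stretch near 1700 cm^-1: replace 12C with 13C
nu_12 = 1700.0
nu_13 = shifted_frequency(nu_12, 12.0, 16.0, 13.0)
print(f"13C=O stretch: {nu_13:.1f} cm^-1 "
      f"(red shift of {nu_12 - nu_13:.1f} cm^-1)")
```

The predicted shift of a few tens of wavenumbers is large compared with the narrow linewidths of the cryogenically cooled ions, which is why the substituted peak stands out so clearly in the difference spectrum.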

According to Johnson, “This technique is powerful

because it provides a way to gain access to chemical

events in the midst of bond rearrangement, which is

the essence of chemistry.” He believes that this new

technology has great potential to become a commonly

used spectroscopic technique. The Johnson lab plans

to extend the work to areas of interest in energy

chemistry such as water splitting and CO₂ activation.



Though the innate immune system can appear at first glance to be

an incomprehensible array of biological interactions, researchers are

constantly uncovering nature’s simple solutions to complex immunobiological

problems. Two such researchers at Yale, Dr. Ian Berke and

Associate Professor of Molecular Biophysics and Biochemistry Yorgo

Modis, focus on understanding the mechanisms by which viruses

enter and infect the cell, as well as how host cells detect and mount an

immune response to the presence of such viruses. Their most recent

work, published in The EMBO Journal, has provided insight into the

role of the MDA5 protein in these immune responses.

MDA5 is a cytosolic protein that detects viral RNA upon infection

by a virus. By binding to RNA, MDA5 initiates a cascade of

reactions, including the activation of MAVS, a protein that prompts

the release of infection-fighting interferon. However, the exact

mechanism by which MDA5 recognizes segments of viral RNA has

long been unclear.

A unique characteristic of MDA5 caught the interest of Berke and

Modis: it only signals effectively with double-stranded RNA (dsRNA)

of at least two kilobases. Whereas most protein binding is dependent

on specific chemical structures, MDA5-RNA signaling seems largely

to be governed by the physical property of RNA length. This length-sensing

property allows the cell to detect the strands of RNA that are

often part of a viral genome or a byproduct of infection. By solving

the crystal structure of MDA5, Berke and Modis hoped to uncover

the biophysical mechanism behind this phenomenon. According to

Berke, “It’s like being able to look at the blueprint of an engine to

understand how it works — you can design hypotheses based on its

structure and test them.”

Originally the team hypothesized a number of complex conformational

solutions to the length-sensing problem, but after employing

a battery of low-resolution methods, such as electron microscopy

and small angle X-ray scattering, what they discovered caught them

completely by surprise. “It turns out that nature has thought of a much more elegant and simple solution,” said Modis.

BIOLOGY

Elegant and Simple: Understanding the Interaction between MDA5 and Viral RNA

BY SOPHIE JANASKIE

In the proposed model of MDA5 signaling, MDA5 binds cooperatively to dsRNA and forms filaments along it. If the filaments reach a certain length, they can interact with MAVS (red), which triggers the interferon response through the formation of signaling complexes. Courtesy of Dr. Berke

Upon placing an

MDA5-RNA mixture under a microscope, they observed formation

of MDA5 filaments along the dsRNA. The filament formation is a

result of cooperative binding, which Berke and Modis discovered in

MDA5. When one protein binds, it favors binding of the next. They

hypothesize that the assembly dynamics of these filaments allow

MDA5 to discriminate between long and short RNA; when RNA is

too short the filaments are not stable enough to remain intact. This

filament formation may also allow MDA5 to interact with MAVS and

initiate the interferon response.
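The proposed length discrimination can be caricatured with a toy growth-versus-fraying model (all rates and thresholds below are invented for illustration; they are not measured parameters from Berke and Modis):

```python
# Toy model of length discrimination by filament dynamics: MDA5
# monomers bind cooperatively along dsRNA, while filaments
# continuously fray from their ends. A filament "signals" only if
# it stays above a threshold length for long enough.

NUCLEATION_STEPS = 5     # time steps spent nucleating a filament
GROWTH_PER_STEP = 40     # monomers added per step while RNA remains
DECAY_PER_STEP = 25      # monomers lost from the ends per step
SIGNAL_THRESHOLD = 100   # minimum filament length to engage MAVS
SIGNAL_DURATION = 10     # steps the filament must persist

def filament_signals(rna_length, steps=200):
    """Return True if a filament on RNA of this length (in monomer
    binding sites) persists long enough to trigger signaling."""
    size, persistent = 0, 0
    for t in range(steps):
        if t >= NUCLEATION_STEPS:
            size = min(rna_length, size + GROWTH_PER_STEP)  # growth is capped by the RNA
        size = max(0, size - DECAY_PER_STEP)                # ends fray every step
        persistent = persistent + 1 if size >= SIGNAL_THRESHOLD else 0
        if persistent >= SIGNAL_DURATION:
            return True
    return False

print("short RNA signals:", filament_signals(80))    # shorter than the threshold
print("long RNA signals:", filament_signals(2000))   # long viral dsRNA
```

On short RNA the filament's size is capped below the signaling threshold, so fraying always wins; on long RNA growth outpaces fraying and the filament persists, mirroring the stability argument in the text.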

Although their hypothesis is consistent with in vivo data, Berke and

Modis say their model needs to be tested further. Berke is currently

studying filament dynamics to determine the interplay of RNA length

and MAVS response levels. Furthermore, this research has a number

of public health applications. MDA5 may be a target for future vaccines

that take advantage of its ability to prime the adaptive immune

system for an effective response.

Negative stain electron microscopy shows MDA5 filament

formation when MDA5 is mixed with dsRNA. The longest

observed filament lengths were comparable to the length of the

largest 6.4 kb genomic segment. Courtesy of Dr. Berke

This model, developed through the use of crystal structures and

small angle X-ray scattering data, shows the structure of MDA5

when it is not bound to dsRNA. Courtesy of Dr. Berke



Using Placebos in Research

Ethical or Not?

BY BRIDGET KIELY

In 1996, George Doeschner, a 53-year-old

electrician with Parkinson’s disease,

flew to the University of Colorado to

enroll in a clinical trial. Researchers there

were testing an experimental treatment

that involved injecting embryonic cells into

the brains of patients with Parkinson’s, in

the hopes of providing them with some

symptom relief. On the day of Doeschner’s

procedure, his surgeon put him under anesthesia,

drilled several holes into his skull,

and then closed him up again — without

injecting the cells.

This was no surgical error. Doeschner had

been randomly assigned to the control group

in the clinical trial. Of the trial’s 40 participants,

only 20 received the actual treatment;

the rest, including Doeschner, were given a

“sham” surgery in which their skulls were

opened up but no cells were injected. Going

into the operation, Doeschner knew that

there was a chance that he would be assigned

to the control group, but it was not until a

year later — after the clinical trial was over

— that he discovered for certain that he had

not received the treatment.

Doeschner’s surgery took place more than

15 years ago, but the questions it inspires

about the ethics of placebo use in medical

research are still pertinent today. When is it

acceptable to use placebos in clinical trials?

Are doctors, in giving a group of patients

a sugar pill or a sham procedure, violating

their duty to protect their patients’ interests?

Is it ever acceptable to give clinical trial participants

a placebo in the process of testing

a new treatment for a medical condition?

Dr. Robert J. Levine, a professor at the

Yale School of Medicine and internationally

renowned expert on bioethics, has written

extensively on this subject. The ethics of

placebo use, like many issues in human

research, are highly complicated and fraught

with questions about the responsibilities

physicians have to their patients. There are

no easy answers, but the medical community,

working together with bioethicists, has

reached some conclusions about how to deal

with these issues.

From Therapy to Research

In the absence of effective medications,

placebos were a first-line treatment for many

diseases prior to the 20th century. Physicians

assigned long, unpronounceable names,

such as the “Tincture of Condurango,” to

remedies that were of little therapeutic value, relying primarily on the power of positive suggestion to treat their patients.

In some situations, the use of placebos in clinical trials may conflict with the responsibility of physicians to show undivided loyalty to the interests of their patients. Courtesy of DrexelMed.edu

Despite their widespread use in the 19th

century as a medical therapy, it wasn’t until

after World War II that placebos became an

important tool in clinical research. In his

landmark 1955 paper The Powerful Placebo,

anesthesiologist Henry K. Beecher became

the first doctor to quantify the “placebo

effect,” claiming that nearly a third of

patients across fifteen trials for different

diseases experienced some improvement

from placebos alone. Although his data has

since been criticized, Beecher’s paper had a

revolutionary effect. Scientists realized that,

when testing new therapies, they needed

to control for the powerful psychological

effects of taking a pill — even if that

pill contained nothing more than sugar.

Whereas the FDA had previously approved

new drugs as long as they were shown to be

safe, in 1962 it began granting approval only

after they were also shown to be effective

through controlled clinical trials. Double-blind,

placebo-controlled trials became the

gold standard in clinical research. In these

sorts of trials, neither the patients nor the

researchers know who is getting a placebo

until the end of the study.
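Operationally, double-blinding amounts to a coded randomization list: assignments are generated up front, participants and clinicians see only neutral kit codes, and the key stays sealed until the trial ends. A minimal sketch (real trials use validated randomization systems, not a few lines of Python):

```python
import random

def randomize(participant_ids, seed=0):
    """Assign each participant to 'treatment' or 'placebo' in a 1:1
    ratio and return (blinded_codes, sealed_key). Everyone running
    the trial sees only the blinded kit codes; the key is opened
    only at the end of the study."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)                      # random order decides the arms
    half = len(ids) // 2
    key = {pid: ("treatment" if i < half else "placebo")
           for i, pid in enumerate(ids)}
    # kit codes are assigned by participant, not by arm, so the
    # code itself reveals nothing about the assignment
    codes = {pid: f"KIT-{1000 + n}" for n, pid in enumerate(sorted(ids))}
    return codes, key

codes, key = randomize(range(40))         # 40 participants, as in the trial
print(sum(1 for arm in key.values() if arm == "placebo"), "receive placebo")
```

Deriving the kit codes from the participant list rather than from the shuffled order is the detail that keeps the labels uninformative: a code carries no trace of which arm it belongs to.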

Today, most bioethicists agree that prescribing

placebos as a therapeutic treatment,

as practiced prior to the 20th century,

is deceptive and unacceptable. If placebos

are used in research, patients should be

informed before giving consent that they

might receive a placebo treatment. The

tricky part is determining when it is acceptable for a clinical trial to use a placebo as

a control — a question that has sparked

explosive debate within the bioethics community

for decades.

“Undivided Loyalty”

The relationship between the physician

and the patient is at the heart of the placebo-use

debate. Physicians are bound by a legal

concept called the fiduciary obligation,

which calls for the physician’s undivided

loyalty to the interests of the patient. Therefore,

is it possible for a physician to give a

patient a placebo while still upholding this

obligation? If a physician conducts a clinical

trial knowing that half of the patients will

be given nothing more than a sugar pill, is

the physician truly fulfilling his or her duty?

The short answer, most bioethicists say, is

that it depends on the situation.

Use of placebo control groups has been

the gold standard of clinical research

since the 1960s. Courtesy of Tulane.edu

According to Levine, the use of a placebo

as a control is unquestionably ethical under

certain circumstances. One circumstance

arises when testing a therapy for a condition

for which there is no proven treatment. “If

there’s no medication out there that anyone

thinks is any good, then it is considered ethically

acceptable to give half of the patients

a placebo,” Levine said.

On the other hand, there are some circumstances

under which it is clearly unacceptable

to use placebos. If researchers are

testing a new therapy for a serious condition

that has a known effective treatment, it is

not considered ethical to give placebos to

any of the patients. “This would be considered

a serious violation of the fiduciary

obligation,” Levine said. If, for example, a

researcher were conducting a clinical trial on

a new medication that might prevent heart

attacks, they should not give any of the

trial participants a placebo if there already

existed a reasonably effective medication for

that purpose. Because use of a placebo in

such a situation could cause death or permanent

harm to participants, the researcher

would be obligated to test the performance

of the new drug by comparing it to that of

the known treatment.

Other examples, however, are not so

straightforward. Levine points to hypertension,

commonly known as high blood

pressure, as an example of a condition for

which the acceptability of placebo use is less

clear. Over time, untreated hypertension can

be deadly, increasing the risk for conditions

such as heart attack, stroke, and kidney

failure. Yet in the short-term, it is unlikely

to be fatal for individuals who are otherwise

healthy and are under the observation of

experienced doctors. Therefore, is it ethical

to use placebo controls when testing new

medications for hypertension?

Levine claims that, in these situations, it is

ethical to use placebos, provided that a few

conditions are met. First, the placebos must

only be used in individuals with a condition

so mild that they are unlikely to develop

serious complications. Second, all trial participants

must be monitored carefully, and

withdrawn immediately from the trial at the

first sign of adverse effects. Finally, the trial

must only be conducted over a short period

of time — on the order of months, rather than years.

Dr. Levine believes that placebo use in research on conditions like hypertension is acceptable if certain precautions are taken. Courtesy of TheSurvivorsClub.org

“We do these sorts of trials

under circumstances where nobody’s going

to get into serious trouble,” Levine said.

Above all, Levine stresses the importance

of acknowledging the complexities

of human research ethics. “Almost everything

you read in the papers is sound-bite

ethics,” he says. “Things are virtually never

as simple as they appear to be in the media.”

The complicated issues raised by the ethics

of placebo use force physicians to tackle

difficult questions about their responsibilities

to their patients, the place of scientific

research in society, and the moral duties that

bind us together as human beings.
About the Author

Bridget Kiely is a freshman in Calhoun College planning to major in Molecular,

Cellular, and Developmental Biology.

Acknowledgements

The author would like to thank Dr. Levine for sharing his expertise concerning

placebo use in bioethics.

Further Reading

• Levine, Robert. “Placebo controls in clinical trials of new therapies for conditions

for which there are known effective treatments.” The Science of the Placebo: Toward

an Interdisciplinary Research Agenda. Ed. H. Guess, A. Kleinman, J. Kusek

and L. Engel. London: BMJ Books, 2002.



The Healing Art of Meditation

How Learning to Stop Mind-Wandering Can Improve More Than Productivity

For thousands of years, Buddhist monks

have used meditation to obtain a transcendental

experience on the path to

enlightenment. More recently, physicians have

clinically employed meditation to successfully

help treat disorders like depression, anxiety,

addictions, and chronic pain. However, until

recently, the effects of meditation on the brain

were largely unknown. Dr. Judson Brewer of

the Yale School of Medicine has identified

functional changes in the brains of experienced

meditators in an exciting new fMRI brain imaging

study, one of the first to show the impact of

meditation on brain function and connectivity.

Brewer started meditating during medical

school to help cope with stress. He found it

very helpful, and about 10 years later, began

studying it clinically using functional magnetic

resonance imaging (fMRI). fMRI is a safe,

non-invasive technique that measures

oxygen levels in the brain,

correlating oxygen concentration

to brain activity; more active

regions require more oxygen.
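The logic of an fMRI comparison can be illustrated with made-up numbers: average the region's oxygen-dependent (BOLD) signal during meditation, average it at rest, and take the difference. A negative contrast, as Brewer found for the DMN, indicates deactivation:

```python
# Toy illustration of the fMRI logic described above: activity in a
# region is inferred by comparing its BOLD signal during a condition
# against rest. All values are invented for illustration.

rest =       [100, 101,  99, 100, 102, 100]   # baseline scans
meditation = [ 97,  96,  98,  95,  97,  96]   # scans during meditation

def mean(xs):
    return sum(xs) / len(xs)

contrast = mean(meditation) - mean(rest)
print(f"DMN contrast (meditation - rest): {contrast:.2f}")
if contrast < 0:
    print("signal drop -> region is deactivated during meditation")
```

Real analyses model the sluggish hemodynamic response and correct for comparing thousands of brain regions at once, but the underlying subtraction is this simple.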

In his study, Brewer performed

fMRI brain scans on experienced

meditators and controls (inexperienced

meditators) both at

rest and while using mindfulness

meditation, a meditation that

encourages acute awareness of

physical or spiritual realities. The

subjects in the study used three

types of mindfulness meditation

techniques: concentration,

in which the subject focuses on

breathing; loving-kindness, where

the subject focuses on a feeling

By Kaitlin McLean

of good will toward oneself and others that is

supported by silently repeating phrases such as

“may X be happy”; and choiceless awareness,

where the subject can focus on whatever object

comes to them.

Brewer and his team found two notable

trends in the results of the study. First, experienced

meditators showed deactivation of the

part of the brain known as the default mode

network (DMN), a region involved in self-referential

processing, including daydreaming.

All three forms of meditation showed similar

results. This discovery is interesting because

one of the goals of meditation is to remain

focused, and deactivation of the DMN seems to

show that meditation is functionally doing just

that in the brain. As meditators self-reported

significantly less mind-wandering, these results

support the hypothesis that deactivation of

the DMN is related to a reduction in mind-wandering.

Second, the brains of experienced meditators

showed different connectivity patterns,

i.e. different networks of the brains talking to

each other, which had not been seen before

in this context. They found that experienced

meditators showed co-activation of the posterior

cingulate cortex (PCC), dorsal anterior

cingulate cortex (dACC), and dorsolateral prefrontal

cortex (dlPFC) at baseline and during

meditation. These altered connectivity patterns

were consistent both during rest and during

meditation. The PCC is an important part of

the default mode network, and the dACC and

dlPFC are both crucial regions for cognitive

control. Exactly how these connectivity changes

translate into functional changes is currently

unclear, but the fact that the changes were seen during both meditation and resting periods suggests that, with practice, meditation may transform the normal resting functioning of the brain into one that more closely resembles a meditative state. In other words, the default state could shift from mind-wandering to being centered in the present.

The study shows that experienced meditators demonstrate co-activation of the medial prefrontal cortex (mPFC), insula, and temporal lobes during meditation. This figure shows the differential functional connectivity with the mPFC region and left posterior insula in meditators vs. controls: (A) at baseline and (B) during meditation. Courtesy of Dr. Brewer (Fig. 3 of Brewer et al. (2011), “Meditation experience is associated with differences in default mode network activity and connectivity,” PNAS 108:20254-20259)

Why a decrease in activity in the part of the

brain involved in self-referential processing and

daydreaming is beneficial may not be immediately

intuitive. Most people spend a great deal

of time allowing their minds to wander since

it usually seems more enjoyable than writing

papers, crunching numbers, or listening during

class. However, a 2010 study conducted by

researchers from Harvard published in Science,

entitled “A Wandering Mind is an Unhappy

Mind,” found that subjects daydreamed during

nearly 50 percent of their waking hours, and that overall, doing so made them unhappy.

The fMRI machine uses magnetic resonance to determine the amount of oxygen usage in the brain. Higher oxygen levels correlate with higher activity in that area of the brain. Courtesy of Wikipedia

Not

surprisingly, when their minds wandered toward

negative things, they were less happy, but when

daydreaming about positive things, they were

actually no happier than when thinking about

their current task. Given that on average, “a

wandering mind is an unhappy mind,” it is easy

to understand the overall benefits of reducing

daydreaming on emotional health.

Meditation, however, does not only have

emotional benefits. A growing body of evidence

suggests that mindfulness training can

help anxiety, chronic pain, addictions, and other

disorders, but exactly how meditation affects

these conditions is still unknown. As Brewer

asserts, what is exciting about his research is that

it “might bring in some of the neurobiological

evidence as to what’s actually happening, and

how the brain might be changing with practice.”

He also points out that his study is a preliminary

cross-sectional study that only examines one

time point, but the evidence is quite persuasive.

This research supports the role that meditation

can play in the clinic. According to Brewer,

“It [meditation] could certainly be used to help

people work through frustration and anxiety

so that they don’t move into a clinical depression

or clinical anxiety disorder, or start using

drugs. But at the same time, it can be used

when people already have these disorders.” In

a previous study, Brewer taught mindfulness

training to people who wanted to quit smoking

who had tried unsuccessfully an average of five

to six times before. Meditation was able to help

these individuals quit smoking when all other

methods failed.

His recent discovery of functional changes in

the brains of experienced meditators is a starting

point down a longer road of determining

the biological changes associated with meditation;

Brewer says:

My hope is that we can start to marry some

of these ancient techniques that have been

around for 2,500 years with some of the

modern technology that might help actually

synergize with these such that we actually help

people. We’re excited that this could lead to

some clinical benefit tangibly down the road,

and not just in reaffirming what’s going on in

the brain, but bringing what we learn from these

neurobiological studies to actually augment the

clinical practice.

The future for clinical meditation seems

bright. Indeed, it is remarkable to observe that

cutting-edge techniques are confirming what

certain cultures have practiced for thousands

of years.

The art of meditation has been practiced

for thousands of years. Buddhist

monks in particular have long used this

technique to obtain a transcendental

experience on the path to achieving enlightenment.

Courtesy of Fotopedia

If you are interested in getting involved with

meditation, there are many easy ways to get

started. Mindfulness in Plain English is a book on

mindfulness training that can be downloaded

for free online, and a free app called “Get

Some Headspace” gives ten days of free guided

meditations. Joining a local meditation group

is also a common way to begin. Brewer leads a

drop-in meditation group at Yale’s Dwight Hall

Chapel on Mondays and Thursdays at 8:00 that

is open to all who are interested. Beginners are

given instruction and guided meditation, and the

session ends with a discussion about meditative

practices. Brewer urges, “Don’t get discouraged

if at first seeing how much your mind wanders

when trying to meditate is distressing. Stick

with it. Find a good group and a good teacher.”

Research shows that, with practice, meditation

can be helpful in reaching a happier, healthier

state of mind.

About the Author

Kaitlin McLean is a junior in Jonathan Edwards College majoring in Molecular,

Cellular and Developmental Biology with a concentration in Neurobiology. She

works in Professor Crews’ lab studying the molecular mechanisms of limb regeneration

in axolotls.

Acknowledgements

The author would like to thank Dr. Judson Brewer for his time, enthusiasm in his

research, and contributions to the article.

Further Reading

• Brewer, JA, Worhunsky, PD et al. “Meditation Experience is Associated with Differences

in Default Mode Network Activity and Connectivity.” Proc. Natl. Acad. Sci.

108 (2011) 20254-9.



The world of quantum mechanics is,

to the average person, a world of

science fiction. This quirky corner

of physics tends to be far removed from our

everyday lives. However, the phenomena

that emerge from the equations of quantum

mechanics are very real. One of the most

intriguing facets of quantum mechanics — to

scientists and science fiction fans alike — is

that of persistent current: a natural electric

current that flows perpetually without a

power source.

Though the theoretical physics behind this

phenomenon is relatively well-established,

only recently have these currents been measured

flowing through large resistive metal

wires. On the vanguard of this research are

Yale Professors of Physics and Applied Physics

Jack Harris and Leonid Glazman, whose

recent work measuring quantum currents in

metal wires has reinvigorated the field. At

first seeming to defy conventional notions

of energy conservation, these perpetual currents

are quite possibly as intriguing as they

are revolutionary.

Quantum in a Nutshell (The ABCs of E = hν)

The core tenet of quantum mechanics is

the postulate that all particles exhibit both

wave and particle properties. Strange, yes,

but true according to theory and experiment.

While we may not be able to completely

understand exactly what this means, we

can appreciate the apparently paradoxical

conclusions that emerge from the equations

of quantum theory. For instance, we cannot

precisely know both the position and speed

of a particle at one moment in time. Furthermore,

we know that energy is quantized,

meaning that particles can only absorb and

emit energy in definite amounts.

Quantization emerges naturally from the

fundamental wave character of matter. Just

as waves on a string can only possess certain

energy levels, so too do three-dimensional

standing waves. Because particles behave as

both particles and waves in quantum mechanics,

particles are treated as standing waves

and thus have wave properties, including

quantized energy levels.
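The string analogy carries over quantitatively. For a particle confined to a one-dimensional box of length L, the allowed energies are E_n = n²h²/(8mL²), growing as n² just like the string's harmonics. A quick calculation for an electron confined to 1 nm:

```python
# Quantized energy levels for a particle in a 1-D infinite box:
# E_n = n^2 * h^2 / (8 * m * L^2)

H = 6.626e-34      # Planck's constant (J*s)
M_E = 9.11e-31     # electron mass (kg)
L = 1e-9           # box length: 1 nm (m)

def box_energy(n, m=M_E, length=L):
    """Energy of level n for a particle in a 1-D infinite box (joules)."""
    return n**2 * H**2 / (8 * m * length**2)

J_PER_EV = 1.602e-19   # joules per electronvolt
levels = [box_energy(n) / J_PER_EV for n in (1, 2, 3)]
print("E_n (eV):", [round(e, 3) for e in levels])
```

The ground level comes out near 0.4 eV, a chemically significant energy; scale the box up to everyday sizes and the spacing between levels shrinks to immeasurably small values, which is why quantization goes unnoticed for large objects.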

Larger masses can be accurately described

by classical mechanics since the effects of

quantization are not noticeable because they

are so small—think pixels on an HD-TV.

At small masses, however, the wave properties

become crucial for accurately modeling

particle behavior. For instance, instead of

treating electrons as point particles with a

set mass of 9.11 × 10⁻³¹ kg, they are treated

as three-dimensional standing waves that

occupy “clouds.” The exact shapes of these

“clouds” are dictated by wave equations,

mathematical formulas whose solutions are

known as atomic orbitals. Described by four

quantum numbers, atomic orbitals are the

basis for modern atomic theory.

Upon further analysis, another interesting

phenomenon emerges: zero-point energy.

Unlike waves on a string, whose lowest

energy state is motionless, the lowest energy

state for a quantum wave is both fluctuating

and dispersed. In other words, no quantum

equivalent exists for a flat string. Rather,

the lowest energy state for a quantum wave

involves motion, meaning that every electron

around every nucleus is constantly moving.

Theoretically, if one were able to orient the

electrons in a metal ring, a current could be

produced that would never dissipate!
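For a sense of scale, a single electron circulating a ring of radius r at speed v passes any point v/(2πr) times per second, giving an average current I = ev/(2πr). With illustrative numbers (a 1 µm ring and a typical metallic electron speed, not values from any particular experiment):

```python
import math

# Average current carried by one electron circulating a small ring:
# I = e * f, where f = v / (2 * pi * r) is the orbit frequency.
E_CHARGE = 1.602e-19   # electron charge (C)
V_FERMI = 1.0e6        # illustrative electron speed (m/s)
RADIUS = 1.0e-6        # ring radius (m)

frequency = V_FERMI / (2 * math.pi * RADIUS)   # orbits per second
current = E_CHARGE * frequency                 # average current (A)
print(f"orbits per second: {frequency:.3e}, current: {current:.3e} A")
```

The estimate lands in the nanoampere range: tiny, but within reach of sufficiently sensitive detection schemes.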

Standing waves on a string come in

discrete energy levels (n = 1,2,3,4…)

in the same way that standing waves

in three dimensions are only allowed

certain energy levels. The top string

has energy n=1 and contains one node

at each end. The next allowed energy

level is n = 2, with an additional node in

the center. As can be seen, quantization

emerges naturally from the boundary

conditions of standing waves. Courtesy

of UCLA

From e to I: Orienting zero-point motion

Just as a ball falls down when released in

the air, electrons too fall down their potential

energy gradient to the lowest possible energy



PHYSICS

level. However, unlike a ball, whose lowest

energy level is the ground, an electron’s

lowest state is an orbital. As discussed above,

these orbitals contain energy, even at their

lowest level. For a single electron orbiting a

single atom, this ground state is the 1s orbital,

a sphere without any nodes. Unlike the ball,

however, electrons cannot be “lifted” by any

arbitrary amount. Because of energy quantization,

electrons can only occupy discrete

orbitals with unique energy levels. The next

level up from the 1s orbital is the 2s orbital,

which is also spherical but more dispersed.

One more level up, and the electron enters

the 2p orbital, which resembles a dumbbell.

With benzene, a hexagonal ring of six carbons,

the first electron added would occupy

a hybrid orbital that permeates the entire

ring. The second does the same. And the

third. All the way until all 42 electrons have

been added. Again, applying a magnetic field

perpendicular to the face of this molecule

will orient the charge and cause an internal

rotation. However, because benzene is a ring,

the electrons will circulate around the entire

ring, not just around a single atom, causing a

current to be produced. Moreover, when the

field is held at a constant value, the oriented

electrons will continue to circulate and form

a small current around the ring.

Going one step further: take a metal ring a few micrometers in diameter, comprising millions

upon millions of electrons and protons.

Just as with benzene, applying a magnetic

field will induce a tiny current around the ring

and, again, after the induced current fades

away, a persistent current should remain. In

this way, a magnetic field theoretically should

be able to induce a persistent current in small

metal rings, one that should never dissipate.

Putting Theory to Rest

While scientists theorized and debated these small currents throughout the 1980s, it wasn’t until recent years that researchers were able to devise instrumentation sensitive enough to measure them accurately. Currents can be measured both directly and indirectly. The direct method, used in everyday multimeters, opens the circuit and redirects the current through a device that measures it directly. Even though the persistent currents are small, they are well within the detectable range of commercially available household multimeters. However, this method breaks the circuit, which destroys the persistent current.

Diagram of the cantilever instrument used to measure the magnitude of the persistent currents. Courtesy of Professor Harris

In contrast, the indirect approach infers the current by measuring the magnetic fields the circulating charge produces. While straightforward in principle, the magnetometers of the 1980s were neither sensitive nor reliable enough to give accurate data. Moreover, they often gave contradictory data, which only fueled the debates over the nature of these persistent currents.

Harris and Glazman recently devised a simple yet elegant solution. By placing aluminum rings on micromechanical cantilevers in a magnetic field, Harris and his students could accurately measure the current produced by measuring the deflection of the cantilever supporting the rings. An electrical current in a magnetic field experiences a force, which ultimately causes the displacement observed in the cantilever. Remarkably, this technique yielded data with sensitivity orders of magnitude greater than what was previously achievable.
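The principle behind the measurement can be sketched numerically. A ring of radius r carrying current I has magnetic moment μ = Iπr²; a field B applied in the plane of the ring then exerts a torque τ = μB, which twists the cantilever. The toy calculation below simply inverts that relation to recover I from a measured torque. All numbers are illustrative assumptions, not the actual parameters of the Harris experiment.

```python
import math

def ring_current_from_torque(torque, radius, field):
    """Invert tau = mu * B, with mu = I * pi * r**2, to recover the
    circulating current I from the torque twisting the cantilever."""
    moment = torque / field                  # mu = tau / B
    return moment / (math.pi * radius ** 2)  # I = mu / (pi * r^2)

# Illustrative values: a ring ~1 micrometer across in a strong (8 T) field
current = ring_current_from_torque(torque=2.5e-20, radius=0.5e-6, field=8.0)
print(f"inferred current: {current * 1e9:.2f} nA")  # a nanoamp-scale current
```

In the real experiment the deflection angle, not the torque, is what is read out, and the torque follows from the cantilever’s known spring constant; the algebraic inversion above is the same either way.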

Harris and Glazman also tested a wide range of variables theorized to affect the strength and character of the currents produced, including temperature, ring size, and the orientation of the applied magnetic field. Most remarkable is how consistent their data were with the theoretical predictions. For instance, as expected, the current dwindles as the temperature increases, leaving only a minuscule current at ordinary temperatures.

While we will not be using persistent currents to power our cell phones any time soon, the ramifications of this research are wide-reaching. Besides putting to rest a decades-old debate, these elegant experiments once again confirm the accuracy of quantum mechanics and reveal the strange and quirky beauty of our universe.

About the Author
John Urwin is a sophomore Molecular Biophysics and Biochemistry major in Jonathan Edwards College. He has previously worked in Professor Colón-Ramos’ lab studying nervous system development in C. elegans.

Acknowledgements
The author would like to thank Professor Jack Harris for his time and expertise.

Further Reading
• L.P. Lévy et al., “Magnetization of mesoscopic copper rings: Evidence for persistent currents,” Physical Review Letters 64, 2074 (April 1990).
• Will Shanks, “Persistent Current in Normal Metal Rings,” Ph.D. thesis, Yale University, 2011.



You’re out of breath. Your heart is pounding as you make your way through the crowd. You’re being chased, and you know it. But you’re surrounded by hundreds of bystanders and not sure where your pursuer is. Then suddenly, you notice someone and know, without a second thought, that he is the one chasing you. How did your brain determine that?

Brian Scholl, Professor of Psychology at Yale University, is attempting to answer that question by studying the cognitive mechanisms responsible for the detection of chasing. This research is part of the Yale Perception and Cognition Lab’s broader investigation of how humans perceive animacy — the ability of objects to have motivations or goals and to act accordingly.

Psychologists first recognized animacy as a distinct property of visual experience during the early 20th century. An important early development in the field came in 1944, when Fritz Heider and Marianne Simmel showed that observers attributed animacy to simple geometric figures. Since then, multiple research groups have found that animacy perception persists even when observers know explicitly that the objects are not animate, and that it occurs across cultures and even in infants.

Scholl became interested in animacy research when he asked himself the simple question, “What is it that I see, and what is it that I’m thinking about?” He realized that objects’ animacy stood out with as much immediacy as their color or shape, which led him to wonder whether animacy might be processed at a fundamental level in the brain, rather than at the higher levels on which most previous research had focused.

Quantifying Animacy

When Scholl and graduate student Tao Gao began to study the perception of animacy several years ago, they faced a lack of quantitative methods for measuring animacy perception. As Scholl puts it, animacy perception has been “fascinating psychologists … for decades as demonstration, and we’ve been in search of a way to turn it into rigorous science.”

The lack of quantitative methods stemmed from two main methodological issues. First, most of the animations used in animacy studies were scripted manually and included multiple types of implied behavior, making the influence of any single feature difficult to isolate. Second, the most common measure of animacy perception was a subjective questionnaire. Together, these two issues made it difficult for researchers to distinguish animacy perception in the visual system from higher-level inferences.

To overcome these challenges, Scholl and Gao developed two tasks to measure one kind of animacy perception: chasing detection. Both involve three types of simple shapes moving on a two-dimensional screen: one “sheep,” one “wolf,” and multiple “distractors” identical to the wolf in appearance, but not in behavior. The behaviors of both the distractors and the wolf are generated by mathematical algorithms, allowing systematic control of the differences between them.

The first task (“Find the Chase”) generates the sheep’s movements algorithmically and asks observers to identify whether any chasing behavior is present and, if so, to identify the sheep and the wolf. The second (“Don’t Get Caught”) requires the observer to control the sheep and attempt to avoid the wolf for a fixed duration. Observer performance can thus be objectively quantified: by the number of correct detections in the first task and the number of escapes in the second.

Cues for Chasing

Using these new methods, Scholl and Gao examined different features of wolf motion to determine which were important for chase detection. One important cue they identified was the maximum deviation of the wolf’s heading from the direct line between it and the sheep, which they called “chasing subtlety.” At a chasing subtlety of zero degrees, detecting a chase initially seemed very difficult, but the wolf and sheep quickly became obvious, allowing observers to detect chases in nearly 90 percent of “Find the Chase” trials and to escape in 60 percent of “Don’t Get Caught” trials. In contrast, at a chasing subtlety of 60 degrees, the wolf and sheep failed to stand out and performance decreased drastically, with chase detection falling to 60 percent and the escape rate to 25 percent.
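Chasing subtlety is easy to state algorithmically: at each time step the wolf heads toward the sheep, but its heading may deviate from the direct line by up to the subtlety angle. The sketch below is a toy reconstruction of that stimulus logic, not the authors’ actual display code; the speed and step count are arbitrary.

```python
import math
import random

def wolf_step(wolf, sheep, subtlety_deg, speed=1.0):
    """Advance the wolf one step. Its heading is the direct line to the
    sheep, perturbed by a random angle within +/- subtlety_deg. At zero
    subtlety this is a perfect heat-seeking chase; larger subtlety makes
    the pursuit harder to detect."""
    dx, dy = sheep[0] - wolf[0], sheep[1] - wolf[1]
    heading = math.atan2(dy, dx)
    heading += math.radians(random.uniform(-subtlety_deg, subtlety_deg))
    return (wolf[0] + speed * math.cos(heading),
            wolf[1] + speed * math.sin(heading))

random.seed(0)
pos = (0.0, 0.0)
for _ in range(50):
    pos = wolf_step(pos, (100.0, 0.0), subtlety_deg=30)
print(pos)  # the wolf drifts toward the sheep, but along a noisy path
```

Note that for any subtlety below 90 degrees the wolf still makes net progress toward the sheep on every step, which is why subtle chases remain effective even as they become hard to see.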

Scholl and Gao then studied the impact of object orientation on chasing detection. By switching the shapes used to represent wolves and distractors from circles to arrowheads, they found that escaping the wolf was significantly easier when the darts were oriented in the direction of motion rather than in a perpendicular or random direction. Critically, these results demonstrate that a very specific correlation between the motion and orientation of the perceived chasers enhances the perception of chasing.

Based on the subtlety and orientation results, Scholl and Gao investigated whether the brain detects chasing only by accumulating positive evidence for it, or by weighing both positive and negative evidence. In a variation on the “Don’t Get Caught” task, they introduced interruptions into the movement of the wolf, during which it moved randomly, stayed stationary, or oscillated around a single point. Since the three conditions contain identical chasing behaviors, any difference between them must arise from the different types of negative evidence, i.e., non-chasing motion, present in each.

When the interrupting motion was random, the escape rate was high for both very low and very high proportions of interruption, but low when interruptions and chasing motion were present in a 1:1 ratio. However, when the wolf was stationary or oscillating, the drop in escape rate was much less pronounced, a finding that only a model incorporating both positive and negative evidence can explain.
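One way to see why stationary interruptions behave differently from random ones is a toy evidence accumulator: frames consistent with pursuit add evidence, frames of overtly non-chasing motion subtract it, and motionless frames contribute nothing. The weights below are invented for illustration; the sketch merely mimics the qualitative pattern Scholl and Gao report, not their actual model.

```python
def chase_evidence(headings_deg, subtlety_deg=30, pos_w=1.0, neg_w=1.5):
    """Accumulate evidence for a chase. `headings_deg` holds, per frame,
    the absolute angle between the wolf's motion and the direct line to
    the sheep; None marks a stationary frame (no evidence either way)."""
    score = 0.0
    for h in headings_deg:
        if h is None:
            continue                 # stationary: neither confirms nor refutes
        elif h <= subtlety_deg:
            score += pos_w           # heading consistent with pursuit
        else:
            score -= neg_w           # overtly non-chasing motion counts against
    return score

# Same amount of chasing in all three sequences, different interruptions:
print(chase_evidence([10, 20, 10, 15]))              # uninterrupted chase
print(chase_evidence([10, 20, 120, 150, 10, 15]))    # random interruptions hurt
print(chase_evidence([10, 20, None, None, 10, 15]))  # stationary ones do not
```

A detector that only counted positive evidence would score the second and third sequences identically, which is exactly what the behavioral data rule out.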

Behavioral and Neural Effects of Chasing

Building on this new knowledge of the visual features that contribute to chasing perception, Scholl and Gao began to investigate how the perception of being chased affects behavior. Instead of using an actual chase scenario, they focused on the related “wolfpack” phenomenon, in which darts oriented toward the sheep create the impression of being chased. In this task, subjects controlled the sheep and were presented with a screen divided into regions in which darts were oriented either toward or perpendicular to the sheep. Even though the motions of the darts are identical, the two conditions look and feel very different, and they affect behavior in striking ways. When asked to avoid all darts, subjects spent significantly less time in the wolfpack regions. In other words, the mere perception of being chased by objects causes people to avoid those objects.
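The wolfpack display involves no change in motion at all, only in drawn orientation, which is what makes it such a clean manipulation. The few lines below are a hypothetical reconstruction of that display logic, not the lab’s stimulus code.

```python
import math

def dart_angle(dart, sheep, wolfpack):
    """Orientation (radians) for one dart: aimed at the sheep in the
    'wolfpack' condition, or rotated 90 degrees from that line in the
    control condition. The dart's motion is identical either way --
    only its drawn orientation differs."""
    toward = math.atan2(sheep[1] - dart[1], sheep[0] - dart[0])
    return toward if wolfpack else toward + math.pi / 2

# A dart due west of the sheep points straight at it in wolfpack mode...
print(dart_angle((0, 0), (5, 0), wolfpack=True))   # 0.0
# ...and perpendicular to that line in the control condition.
print(dart_angle((0, 0), (5, 0), wolfpack=False))  # pi/2, about 1.5708
```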

Having observed such a pronounced effect of perceived chasing on behavior, Scholl and Gao formed a collaboration with Gregory McCarthy, Professor of Psychology in Yale’s Human Neuroscience Lab. Using functional magnetic resonance imaging (fMRI), they monitored the brain activity of subjects completing dart-avoidance tasks in the wolfpack or perpendicular orientations. These scans revealed three regions whose activity increased in the wolfpack condition relative to the perpendicular condition. The posterior superior temporal sulcus (pSTS) had previously been shown to relate to animacy perception, but the middle temporal (MT) area and the fusiform face area (FFA) had previously been thought to function only in lower-level visual processing. Scholl calls these results “incredibly interesting and even surprising,” suggesting that animacy processing “seems to pervade many more regions of the brain than we might have thought.”

(Left): As chasing subtlety increases, observers’ ability to detect a chase falls dramatically, becoming no better than chance by 120 degrees of subtlety. (Right): To test the behavioral effects of perceived chasing, Scholl and Gao used this experimental setup, in which half the screen contained “wolfpack” darts (red regions) and the other half contained perpendicular darts (blue regions). Courtesy of Professor Scholl

These discoveries point the way toward further investigations of animacy perception in the Perception and Cognition Lab. They demonstrate that seemingly inefficient behaviors, such as not moving directly toward the target or interrupting chasing motion with random motion, strongly disrupt chasing detection. Based on this evidence, Scholl suggests that a “rationality principle,” which states that animate actors will behave in ways that most efficiently accomplish their goals, may form the basis of animacy perception. Future investigation of these principles could reveal fundamentals underlying our perceptions of the world — and certainly make for an exciting scholastic chase.

About the Author
Jonathan Liang is a junior Molecular Biophysics & Biochemistry major in Ezra Stiles College and the Online Editor of the Yale Scientific Magazine. He works in the laboratory of Professor Ronald Breaker, studying the natural diversity of riboswitches.

Acknowledgements
The author would like to thank Professor Scholl for his valuable time and insight into this area of research.

Further Reading
• Gao T, Newman GE, Scholl BJ. (2009). The psychophysics of chasing: A case study in the perception of animacy. Cognitive Psychology 59:154-179.
• Videos used in these experiments: www.yale.edu/perception/Brian/bjs-demos.html



MACHINE MORALITY
Computing Right and Wrong
BY SHERWIN YU
Yale researcher Wendell Wallach grapples with the ethical, technical, and legal difficulties of creating machines that are capable of moral decision-making.


ROBOTICS

Imagine a future in which artificial intelligence can match human intelligence and advanced robotics is commonplace: robotic police guards patrol the streets, smart cars yield to one another, and robotic babysitters care for children. Such a world may appear to lie in the realm of science fiction, but many of its features are increasingly realistic. While the benefits of such a world are enticing and fascinating, advanced artificial intelligence and robots bring a whole new set of ethical challenges.

Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics, researches the potential ethical challenges of future technologies and how to accommodate the societal changes they may bring. Wallach is a leading researcher in the field of machine ethics, also known as robot ethics, machine morality, or friendly AI. The central question of machine ethics is: “How can we implement moral decision-making in computers and robots?” This inherently interdisciplinary field sits at the interface of philosophy, cognitive science, psychology, computer science, and robotics.

Different Levels of Moral Agency

As artificial intelligence and robotics continue to advance, we approach the possibility of computer systems making potentially moral decisions by themselves — artificial moral agents (AMAs). Wallach proposes a continuum of moral agency for all technology, from everyday objects completely lacking agency to full-fledged sentient robots with full moral agency. The continuum exists along two dimensions: autonomy, which indicates what the technology has the power to do, and ethical sensitivity, which reflects what inputs the technology can use to make decisions. For example, a hammer has no autonomy and no sensitivity, while a thermostat has sensitivity to temperature and the autonomy to turn on a furnace or a fan.

As robots gain increasing autonomy and sensitivity, so too do they gain greater moral agency. Wallach explains that the most advanced machines today have only operational morality — the moral significance of their actions lies entirely with the humans involved in their design and use, far from full moral agency. The scientists and software architects designing today’s robots and software can generally anticipate all the possible scenarios the robot will encounter. Consider a robot caregiver for the elderly. The designers of the robot can anticipate ethically charged situations such as a patient refusing to take medication. Because the robot’s autonomy and sensitivity are limited, the designers can feasibly account for all possible situations, and desired behavior in expected situations can be programmed directly.

The creators of ASIMO hope that their robot can assist people in the home, even those confined to a bed or wheelchair. Courtesy of Honda

But what happens when the designers can no longer predict the outcomes? As both autonomy and sensitivity increase, greater moral agency and more complex systems arise. Functional morality refers to the ability of an AMA to make moral judgments when deciding a course of action without direct instructions from humans.

Wallach explains that implementing machine morality has two basic approaches — top-down and bottom-up — as well as a hybrid of the two. In a top-down approach, a limited number of rules or principles governing moral behavior are prescribed and implemented. The top-down approach characterizes most moral frameworks in philosophy, such as Kant’s Categorical Imperative, utilitarianism, the Ten Commandments, or Isaac Asimov’s Three Laws.

In bottom-up approaches, which take their inspiration from evolutionary and developmental psychology as well as game theory, the system attempts to learn appropriate responses to moral considerations. Instead of selecting a specific moral framework, the objective is to provide an environment in which appropriately moral behavior develops, which is roughly analogous to how most humans “learn” morality: growing children gain a sense of what is right and wrong from social context and experience. Techniques such as evolutionary algorithms, machine learning, or direct manipulation to optimize a particular outcome can all be applied to help a machine achieve a goal.

Wallach notes that both approaches have weaknesses. The broad principles in top-down approaches may be flexible, but they can also be too broad or abstract, making them less applicable to specific scenarios. Bottom-up approaches are good at combining different inputs, but they can be difficult to guide toward an explicitly ethical goal. Ultimately, AMAs will need both top-down principles as an overall guide and the flexible, dynamic morality of bottom-up approaches.

Challenges in Machine Morality

Two main challenges stand in the way of implementing moral decision-making. The first is implementing the chosen approach computationally. For example, utilitarianism might look attractive because it is inherently computational: choose the action that produces the result with the highest utility. But where is the stopping point for what counts as a result of an action? How far into the future is an AMA expected to calculate? Furthermore, how does one computationally define utility, and how does an AMA evaluate the utility of different outcomes?
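The horizon question can be made concrete in a few lines. In this toy utilitarian chooser, the “right” action flips depending solely on how far ahead the agent sums utility. The actions, outcomes, and identity utility function are all invented for illustration; each is a design choice a real AMA’s builders would have to defend.

```python
def choose(actions, utility, horizon):
    """Pick the action whose summed utility over `horizon` future steps
    is largest -- utilitarianism reduced to an argmax."""
    def total(action):
        return sum(utility(outcome) for outcome in action["outcomes"][:horizon])
    return max(actions, key=total)

actions = [
    # immediate relief, then lingering side effects
    {"name": "administer drug", "outcomes": [+5, -1, -1, -1]},
    # nothing at first, then a slower but cleaner recovery
    {"name": "wait",            "outcomes": [0, 0, +2, +2]},
]
u = lambda x: x  # identity utility -- itself a contestable design choice

print(choose(actions, u, horizon=1)["name"])  # a short horizon favors the drug
print(choose(actions, u, horizon=4)["name"])  # a longer horizon favors waiting
```

The argmax itself is trivial; everything contentious lives in the utility function and the horizon, which is precisely Wallach’s point.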

The da Vinci Surgical System, a robotic surgical system and an example of robotics performing a highly critical role. Courtesy of Wikipedia

The difficulty of computationally instantiating decision-making is also showcased in the short stories of Isaac Asimov, in which robots obey three laws in strict order: 1) do not harm humans, 2) obey humans, and 3) protect their own existence. Asimov wrote more than 80 short stories exploring how many unexpected and potentially dangerous situations arise from the combination of these rules. Furthermore, to function properly in a society of humans, AMAs may require the computational instantiation of human capabilities beyond reason, many of which we take for granted, such as emotions, social intelligence, empathy, and consciousness.
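A top-down rule set like Asimov’s can be sketched as a lexicographic filter: each law prunes the candidate actions before any lower-priority law is consulted, and an empty result signals exactly the kind of dilemma Asimov’s plots turn on. The laws, action encoding, and scenario below are simplified stand-ins, not a serious ethics engine.

```python
# Each law is a predicate over a candidate action (a plain dict);
# earlier laws take absolute priority over later ones.
LAWS = [
    lambda a: not a.get("harms_human", False),       # 1) do not harm humans
    lambda a: a.get("ordered", False),               # 2) obey human orders
    lambda a: not a.get("self_destructive", False),  # 3) protect itself
]

def permitted(actions):
    """Filter actions through the laws in priority order. An empty
    result means no action satisfies every law -- a dilemma."""
    for law in LAWS:
        actions = [a for a in actions if law(a)]
        if not actions:
            return []
    return actions

candidates = [
    {"name": "fetch medicine", "ordered": True},
    {"name": "restrain patient", "ordered": True, "harms_human": True},
]
print([a["name"] for a in permitted(candidates)])  # ['fetch medicine']
```

Even this tiny sketch exposes the brittleness Wallach describes: the predicates presuppose that the robot can already label an action as “harmful,” which is itself the hard problem.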

The second problem in implementing moral decision-making is what Wallach calls the “frames problem.” How does a system even know it is in a morally significant situation? How does it determine which information is morally relevant for making a decision, and whether sufficient information is available? How does the system know it has applied all considerations appropriate to the situation?

Practical Difficulties

With all of these complicated questions, one might wonder just how far along modern technology is. Wallach explains that while we are far from any machine with full moral agency, it is not too early to give serious consideration to these ethical questions. “We are beginning to have driverless cars on the road, and soon there will be surveillance drones in domestic airspace. We are not far away from introducing a robot to take care of the elderly at home. We already have low-budget robots that entertain: robot nannies and robopets.”

Moral agency increases as autonomy and ethical sensitivity increase. Courtesy of Moral Machines (Oxford University Press)

With the advent of robots in daily life, many security, privacy, and legal quagmires remain unresolved. Robots placed in domestic environments pose privacy concerns: to perform their jobs, they will likely need to record and process private information, and if they are connected to the Internet, they can potentially be hacked. Security is even more paramount for robotic systems performing critical roles, such as pacemakers, cars, or planes, where failure could be catastrophic and directly result in deaths.

Google’s self-driving cars, which are being piloted in Nevada, pose legal issues as well. How do we legally resolve a complicated accident involving a self-driving car? What should a self-driving car do if a situation forces it to choose between two options that both might cause loss of human life? Wallach poses a question: suppose self-driving cars are found to cause 50 percent fewer accidents than human drivers. Should we reward the robot companies for reducing deaths, or will we sue them for the accidents in which robot cars are involved? Wallach says, “If you can’t solve these ethical problems of who’s culpable and who’s liable, you’ll have public concern about letting robots into the commerce of daily life. If you can, new markets open up.”

The Future of Machine Morality

Wallach ultimately tries to anticipate what sort of frameworks could be put in place to minimize the risks and maximize the benefits of a robot-pervasive society. He points out that considering the ethical implications of AMAs falls within the broader discipline of engineering ethics and safety, and that engineers need to be sensitive to these ideas when they think about the safety of their systems. Balancing safety and societal benefit has always been a core responsibility of engineering; today’s systems, however, are rapidly approaching a complexity at which the systems themselves will need to make moral decisions. Thus, Wallach explains, “moral decision making can be thought of as a natural extension to engineering safety for systems with more autonomy and intelligence.”

When asked whether ethics should be a priority, Wallach responds with fervor: “I think it will have to be. There remain some technical challenges, but we will have to think through these problems eventually for our society to be comfortable accepting robots and AI as part of everyday life.”

As robots become more advanced and involved in our lives, tackling potential ethical issues also becomes more important. Courtesy of NASA

About the Author
Sherwin Yu is a senior in Morse College studying Computer Science and Molecular Biophysics & Biochemistry.

Acknowledgements
The author would like to thank Wendell Wallach for his help in writing this article.

Further Reading
• Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2009.
• Wendell Wallach. “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies.” Law, Innovation and Technology (2011) 3:185-207.


Science and religion are often regarded as opposites; even today, many people may believe that they must compromise one to accept the other. Dr. Nihal de Lanerolle, a neurobiologist at the Yale School of Medicine who also serves as chaplain of the Episcopal Church at Yale, believes otherwise.

De Lanerolle recalls a story about Niels Bohr — the father of quantum mechanics — and his first inklings about the nature of the universe. As a child, Bohr would spend hours gazing into a fishpond, contemplating the fish’s unawareness that they were being watched, and that any reality outside the pond, such as the source of the sunlight or rain that penetrated the surface, existed at all. Bohr wondered whether humans were like these fish: acted upon by multiple dimensions of reality, but aware of only a limited frame of reference. Following this reasoning, de Lanerolle explains that “Religion and philosophy are tools for acquiring knowledge of the other dimensions of reality that cannot yet be explored with the tools of science.”

The Religious Scientist

In every area of his life, de Lanerolle exhibits a passion for solving problems, whether the problem happens to be the decline of brain function or the decline of a church congregation. This mentality allows him to see his religion and his scientific career as equal and intertwined parts of his life: “It’s all one to me; I don’t really consciously think of myself in these separate dimensions.”

De Lanerolle has been working in the Neurosurgery Department at the Yale School of Medicine, where he researches epilepsy and brain trauma, since 1979. Motivated by the death of a friend from an epileptic seizure, he began researching the causes and effects of temporal lobe epilepsy in 1985 and has since linked the condition to the hippocampus and to helper cells in the brain called astrocytes. Three years ago, his research took on a second focus: brain trauma induced by proximity to bomb explosions. Beginning with animal subjects and expanding to studies of humans, he and his colleagues have confirmed that there is a biological basis for the deficits in memory, cognitive function, and emotional response exhibited by soldiers returning from war.

Expanding his problem-solving skills beyond the medical realm, de Lanerolle also “provide[s] spiritual resources” as a chaplain. He has been involved with the Episcopal Church since first coming to Yale, serving on the Board of Governors for many years and eventually becoming its vice president, a role in which he ran the church’s ministry for nine years. After leaving briefly to serve as College Chaplain at Trinity College in Hartford, he returned to Yale in 2002 to find the Episcopal Church struggling with membership and finances. It was then that he took up his post as chaplain of the Episcopal Church at Yale, determined to “put the house back in order.” Since then, he has recruited an organist and choir singers, reached out to increase membership, and arranged for regular dinners to follow services. Despite the time and effort it requires, de Lanerolle does not consider his religious work his profession: “I do it as a sort of avocation, rather than a vocation.”

Dr. de Lanerolle’s research suggests that astrocytes, glial cells that support neurons, may play a role in temporal lobe epilepsy. In this image, astrocytes (red and yellow) surround neurons (green). Courtesy of wikimedia.org

A Rational Faith

De Lanerolle describes the typical dialogue between science and religion as having two sides: scientists who have a “simplistic understanding of theological thinking” and theologians who “haven’t taken the trouble to understand the entire breadth of a scientific field.” While he does not claim expertise in both, de Lanerolle submits these as the major shortcomings of each side.

These shortcomings are particularly evident, de Lanerolle says, when he acts as a counselor to Yale students who have questions about their faith. He comments that when students come to Yale with “fundamentalist” beliefs that take every word of the Bible literally, they begin to see contradictions between their faith and their academics, particularly in the sciences. Students often arrive with the notion that faith and reason must be treated separately. As a result, when they are forced to confront conflicts such as “creation versus evolution” and “randomness versus divine intervention” at the university level, some students become distraught and may give up their faith entirely. To prevent this distress, he encourages them to see the Bible not as a rulebook, but as an exemplary story about people in the past interacting with God. True to his tradition, this view allows him to integrate faith and reason in his own daily life.

Though many people consider evolution to be contradictory to Christianity, Dr. de Lanerolle suggests that individuals can believe in both. Courtesy of wikimedia.org

De Lanerolle describes his faith as rational: “Some think I am a heretic, but I challenge them to think in ways that are truthful.” In other words, he asks others to question what is generally accepted. He does not see religion and science as dead-end contradictions, but as more of a “two-way street,” with scientific discovery sometimes causing him to “rethink [his] religious understanding.” He adds, “A lot of our religious understanding has been passed down, and they didn’t have the scientific information I have.” De Lanerolle believes that just as science consists of both verified laws and theories that are continually proposed, tested, and adjusted, religion should maintain its basic framework over time while remaining open to new developments.

Revising the Blueprints

When asked about the debate over evolution and creation, de Lanerolle answers that the writers of the Old Testament formulated an explanation for how the world came about using the limited knowledge available to them. He suggests that the creation story at its core implies the same general theme as evolution, with order and complexity developing from formless chaos. Holding this theme, rather than the detail of creation in seven days, as the central significance of the story, he describes the Old Testament as a collection of “stories told in a simplified form so we can grapple with the big questions of life.” He also explains that while scripture and tradition provide a strong foundation for faith, natural reason and experience play an integral role as well — without them, “God would cease to exist for each individual.”

Evolution aside, one of the most common questions people ask is, “How do you know there is a God?” De Lanerolle’s first answer is simply “I don’t know” — but he goes on to explain that exploring the character of God is not unlike exploring a scientific question; both call upon the scientific method. “So, let’s assume God is like my big daddy,” he poses as an example. “If I pray in a certain way, such-and-such should happen, right? If I ask my daddy for something I really want desperately, then maybe I should get that. And so, I might pray that way. Then I probably don’t get what I want, right? So does that mean God doesn’t exist? Well, it could mean God doesn’t exist; that’s one possibility. Or it could be that my notion of God is not what it should be. Or it could be my experiment was wrong … I constantly keep evolving my understanding about God.” De Lanerolle would investigate a neuroscience question, such as how thoughts are formed, in the same manner: hypothesizing, experimenting, analyzing, revising, and concluding. The scientific method is universal in that “every time, even in science, an experiment fails, you learn something from it. You grow from it. The same thing happens with religion — you grow from grappling with this notion of God.”

Even more pressing than the debate over

God’s existence, is the search for a purpose

in life. Whether you tackle this quest from

a scientific perspective or from a religious

one, it is evident that each individual must

discover the answer for him or herself. De

Lanerolle encourages seeking an answer

through both perspectives, using faith as a

starting point and science as a guide, so as

to evaluate life’s purpose not just through a

single dimension of reality — the world of

our senses and conscious experience — but

also through other dimensions of reality of

which we are less aware. While many people

see science and religion as tools of different

trades, he sees them as tools of the same kit

for constructing reality. “Whether that reality

is true or not, we don’t know,” he concludes.

“But it’s the impact [it] has on your life that

I think matters.”

About the Author
Jessica Hahne is a freshman English major in Silliman College. She works as a copy editor for the Yale Scientific Magazine.

Acknowledgements
The author would like to thank Dr. Nihal de Lanerolle for taking the time to explain his work and share his perspective.

Further Reading
• de Lanerolle, N.C., Lee, T.S. and Spencer, D.D. (2010) Astrocytes and Epilepsy. Neurotherapeutics 7: 424–438. PMID: 20880506.

www.yalescientific.org April 2012 | Yale Scientific Magazine 25


FEATURE
PHARMACOLOGY

Study Drugs and Neural Enhancers
Science and Controversy

BY CHUKWUMA ONYEBEKE

Imagine a world in which you could increase your IQ by taking a pill. With the help of neuroenhancing drugs, such a world may already exist on college campuses across the country. According to a 2005 study published in the journal Addiction, seven percent of college students have admitted to using some kind of neuroenhancing drug for nonmedical purposes. Although taking these drugs without a prescription is illegal, not all scientists agree that consuming neuroenhancing drugs is a bad thing. Are drugs like Ritalin and Adderall a glimpse into a smarter future, or are we toying with health risks we do not yet fully understand?

The two favorite neuroenhancing drugs used on college campuses are methylphenidate and amphetamine salts — or, as we more commonly refer to them, Ritalin and Adderall. These drugs are typically prescribed to assist children and adults with attention deficit hyperactivity disorder (ADHD). As for the efficacy of these drugs in adults without ADHD, Hedy Kober, Assistant Professor of Psychology and Psychiatry at Yale University, states, “we have scientifically a lot of reason to believe that these drugs can in fact enhance the performance of healthy adults as measured by, for example, reaction time, focus, and some forms of memory.” Kober explains that by blocking or otherwise altering the function of the transporters in the synapse that reuptake monoamine neurotransmitters, Ritalin and Adderall increase monoamine levels, especially dopamine, in synapses. The resulting high levels of dopamine are able to enhance signals between the hippocampus, which is involved in memory, and the prefrontal cortex, which is involved in decision-making. These enhanced signals are thought to be responsible for the increase in cognitive abilities and working memory upon taking these drugs.

Although she acknowledges the value of neuroenhancing drugs, Kober cautions students about taking drugs that are not prescribed to them. “There is always a concern of drug interaction when you don’t know what you are taking and you haven’t consulted a doctor about issues like dose and other existing medical conditions,” she warns. Furthermore, she explains that both Adderall and Ritalin are Schedule II drugs, meaning that they have an “abuse potential” and can lead to dependence and withdrawal. Though this may seem like an obvious caveat, many students nonetheless fail to understand these risks. A survey conducted at the University of Kentucky shows that many students underestimate the dangers of Adderall and Ritalin, even ranking them as less dangerous than beer and cigarettes.

Despite the potential health risks of such drugs, not all scientists are on the same side of the argument. A recently published article in Nature extols the benefits of neuroenhancing drugs. Contending that humans should be able to use drugs to improve brain function in the same way that they use food, sleep, and exercise, these scientists and ethicists argue that as long as such drugs are proven to be safe, they should be embraced and not stigmatized. They also reject the notion that humans should not use mind-enhancing drugs because they are “unnatural,” stating that much of our lives today are already unnatural. Furthermore, the article urges scientists to start researching the benefits and risks of using these drugs on healthy adults, and the government to alter laws surrounding drugs so that those attempting to enhance their cognitive abilities will not be punished.

Dr. Nora Volkow, the director of the National Institute on Drug Abuse, has called the article “irresponsible” and states that she strongly disagrees with its authors. She adds that Adderall and Ritalin are stimulants that can lead to severe addiction and psychosis. Furthermore, Volkow argues, because there have been no long-term studies on the effects of these drugs on young brains, they could result in long-term, adverse side effects.

On the other hand, Dr. Kober agrees with many of the claims made in the Nature article. “In a world where healthy people could get these drugs through a prescription,” she states, “this kind of drug use could be very helpful and practical.” She cites studies showing that regular smokers perform better cognitively when they are given nicotine. “We all intake things like food, sugar, and caffeine in order to alter our current state and enhance performance,” Kober argues. As long as neuroenhancing drugs are safe and legal, she contends that people should be allowed to use them.

As Adderall and Ritalin consumption among college students remains prevalent, the fierce debate about the efficacy and safety of these neuroenhancing drugs will continue. The future implications of such a trend are much bigger than simply earning an “A” on an exam, though: although many health risks need to be evaluated, the use of cognitive-enhancing drugs may not only improve the performance of college students but also increase the cognitive abilities of humanity as a whole.

Ritalin blocks dopamine reuptake in the same way that cocaine does. Courtesy of canadian-seeker.com


BIOCHEMISTRY
FEATURE

The Marijuana Receptors
A New Medical Target?

BY ARASH FEREYDOONI

Banned by the U.S. government and legalized by other nations, praised by some college students and disdained by others, marijuana is an issue of contention not only in policy writing but also in the medical sphere. The endocannabinoid system — which regulates the psychoactive effects of marijuana — is emerging as a new medical target in pain research and many other areas, such as weight loss and neurological disorders. However, the stigma associated with cannabis and cannabinoids leaves scientists and doctors in a controversial conundrum.

Although the endocannabinoid system (ECS) is a relatively “young” discovery in the world of signaling systems (the term “endocannabinoid” was coined in the mid-1990s), it is surprisingly heavily involved with a number of our bodily functions and pathological conditions. This elaborate network of receptors and proteins — which includes endocannabinoids, enzymes, and the cannabinoid receptors CB1 and CB2 — plays major roles in our immune function, autonomic nerve function, memory, stress response, and appetite. Research even demonstrates a clear relationship between altered endocannabinoid signaling and cancer, cardiovascular disorders, eating disorders, and psychiatric and neurological disorders.

Enticed by the system’s powerful role in regulating cravings, mood, pain, and memory, drug designers have endeavored, despite much controversy, to develop drugs that can improve one’s physical and mental health. While some researchers have been successful, others point out gaps in our knowledge and understanding of the system. In 2006, the pharmaceutical company Sanofi-Aventis began selling a new weight-loss drug, Rimonabant, that worked by targeting CB1 receptors and reducing patients’ appetites. Within six months, however, the company had received more than 900 reports of side effects, such as depression and nausea, and the drug was then pulled from the market. Studies later linked the existence of a previously undiscovered second receptor, CB2, to these effects.

“It’s against the propagation of life in the long run to interfere with the central components of appetite,” asserts Dr. Tamas Horvath, who serves as Chair of the Section of Comparative Medicine and works as a Professor of Neurobiology and Obstetrics, Gynecology, and Reproductive Sciences at the Yale School of Medicine. For this reason, the development of drugs that target ECS receptors has raised ethical controversies; when we try to manipulate a system as widespread and effective as the ECS without sufficient understanding, we can end up with a broad spectrum of unintended effects. “It is virtually impossible to selectively interfere with either a brain region or a peripheral organ such as the liver, without having any other impact on other tissues,” Horvath adds.

Other pharmaceutical companies have tried to develop powerful painkillers that imitate how the active ingredient in marijuana, delta-9-tetrahydrocannabinol (THC), reacts with receptors in the body’s endocannabinoid system. The British company GW Pharmaceuticals is one such company, currently seeking FDA approval for its marijuana mouth spray, Sativex. GW Pharmaceuticals is attempting to quell concerns of recreational drug abuse by utilizing only two cannabinoid compounds that could counterbalance each other: THC and CBD (cannabidiol). While THC would activate CB1 receptors for pain relief, CBD is believed to suppress the “high” feeling that comes as a side effect of THC. The company also advertises that Sativex, as a mouth spray, takes at least an hour to start producing an effect — a time period too long to be easily abused by drug users.

Although the FDA has set strict guidelines regarding cannabinoid pharmaceutical drugs, there is evidence that a policy shift may soon allow more of these drugs to appear on the market. In the 1980s, Marinol and Cesamet, two drugs based on synthetic cannabinoids, were approved for alleviating the nausea and vomiting of patients undergoing chemotherapy and for stimulating the appetite of patients with AIDS. Further trials for these drugs have concentrated on movement disorders such as Parkinson’s syndrome, chronic pain, dystonia, and multiple sclerosis. Despite claims that these drugs are based on synthetic compounds and carry low risk of physical or mental dependence, concerns still exist over whether such cannabinoid-based drugs could or would still be abused for recreational use.

While developing new drugs targeting the ECS can bring about groundbreaking medical treatments, the extensive and highly integrated nature of the ECS has made it very difficult to avoid negatively affecting other parts of the body. The challenge of better understanding the endocannabinoid system needs to be tackled before we can begin to see the ECS as a common medical target.

The ECS is responsible for appetite and reward-seeking behavior. Courtesy of BBC Science Features


FEATURE
HEALTH

What Am I Eating?
The Infiltration of Genetically Modified Foods

BY WALTER HSIANG

Though the thought of glow-in-the-dark cigarettes may seem bizarre, they are actually scientifically feasible to produce. The glowing tobacco plant, which made its debut in 1986 when scientists at the University of California, San Diego inserted the genes of a firefly into the tobacco genome, was one of the first genetic engineering experiments and introduced the world to the concept of genetic manipulation of crops. Since then, genetically modified plants have become ubiquitous in society. In fact, most products — the foods you eat and even the clothes you wear — contain a genetically modified plant or organism.

Why Produce Genetically Modified Foods?

Organisms whose genes have been artificially engineered are called genetically modified (GM) organisms. A number of methods exist to insert new genes for the modification of the original organism, including biological vector techniques, in which a specific gene is inserted into a plasmid DNA that is introduced to the organism, and microinjection, in which genetic material is inserted directly into a single living cell.

Since these changes occur at the molecular level, it may seem difficult to distinguish genetically modified foods from regular foods at the macroscopic level. In today’s society, however, it is valid to assume that most of the produce you see at the supermarket is genetically modified, as GM foods tend to have a more appetizing appearance. For example, GM strawberries are bigger, sweeter, and more enticing when compared to their organic counterparts, which are likely to contain a bug bite or two.

GM foods are, from a superficial standpoint, incredibly appealing. Their enhanced traits can include increased crop yield, better resistance against pests, increased shelf life, higher nutritional value, and better taste. Genetic engineering techniques produce these alluring qualities more rapidly and efficiently than selective breeding, the traditional approach for breeding organisms with specific traits. In fact, if it were not for these methods, the seedless grape might still be just an idea of science fiction.

The majority of common vegetables and fruits are now genetically enhanced to improve traits such as increased shelf life and increased crop yield. Courtesy of Dr. Fans

GM Foods: Here, There, Everywhere

Whether to grow vitamin-enriched rice or pest-resistant corn, a significant portion of the agriculture industry has adopted GM foods to provide consumers with “better” products. In fact, over eight out of ten packaged foods in the United States contain some type of GM product. So what foods are not genetically engineered? Honestly, not many. Among the major agricultural crops grown in America, around 90 percent of all canola, cotton, corn, and soybeans grown on U.S. soil have been genetically modified.

And it does not stop there. GM foods have also made their way into many processed foods, such as oils and sweeteners, and other products like sugar substitutes and vitamin C tablets. Therefore, whether you shop at a supermarket or eat at a restaurant, you are much more likely to consume a GM food than a non-GM, or “natural,” food.

Potential Health Risks

A common genetic engineering technique called microinjection. This technique does not rely on biological vectors such as viruses to insert DNA into a new genome; it simply injects genetic material containing the new gene into the recipient cell. Courtesy of GenerationGreen

Even though GM foods constitute a majority of American agriculture, scientists have performed shockingly few studies investigating their safety. Overall, there have not been many definitive findings exposing the dangers of GM foods, though recent studies implicate GM foods in potential health concerns.

For most of us without food allergies, we would not think twice about, for example, chomping down on a sweet, unblemished ear of corn. However, perhaps we should. One study performed in 2003 disturbingly confirmed that farm laborers exposed to GM cotton and corn developed allergies that irritated the skin, eyes, and upper respiratory tract. Another study, conducted in 2005, showed that consumption of GM field peas elicited an allergic inflammatory response in mice. Because foods produced through biotechnology may result in the introduction of proteins new to the human diet, these new proteins can sometimes induce an allergic response in sensitive members of the population.

It is also imperative to examine the heavy use of chemicals in the farming of GM foods. Monsanto, one of the world’s largest agricultural biotechnology corporations, manufactures specific herbicides and crops. These GM crops can survive the harsh herbicides, which kill weeds. Although this agricultural method is efficient, a troubling study performed in 2010 linked two Monsanto products — Roundup® herbicide and Roundup Ready® soy — with birth defects in animals and humans living nearby. Malformations affecting the skull, face, and developing brain and spinal cord in frog and chicken embryos were traced back to Roundup® herbicide. And these birth defects in animals were frighteningly similar to the birth defects appearing in humans in parts of Argentina growing genetically modified Roundup Ready® soy.

Despite the shocking results of these preliminary studies, it is important to note that they are still not definitive. Dr. Kelly Brownell, Director of the Rudd Center for Food Policy and Obesity and Professor of Epidemiology and Public Health, comments, “The potential downsides [of GM foods] have not been fully tested and are not discussed much by the public.” Overall, the process of fully evaluating the safety of a single GM food is extremely long and convoluted, and many health consequences cannot be predicted, or can only be measured in hindsight.

GM Foods and the Environment

Just as it can be difficult for drivers to fully realize their contributions to air pollution, the environmental consequences of GM foods are not always immediately apparent. One such consequence is that the use of GM foods increases the resistance of pests to pesticides, making them harder to kill. In 2008, the first case of a pest resistant to a GM crop was documented. A certain type of bollworm, which was supposed to be eradicated by the toxin of a genetically engineered cotton crop, had developed resistance and began to spread through many parts of the U.S. In order to combat this pesticide-resistant insect, farmers utilized more potent toxins — an environmentally detrimental practice.

A similar situation is currently playing out among the weeds of the Midwest. Through cross-pollination, herbicide-resistant genes in GM food crops are spreading to wild weed populations, resulting in new “superweeds.” These superweeds are resistant to most herbicides and have been infesting both human and natural ecosystems.

Fueling this vicious circle, scientists plan to develop GM food crops resistant to an even deadlier poison called dicamba, with the hope of spraying crops with this herbicide to eliminate the encroaching superweeds. However, dicamba is classified by the Environmental Protection Agency as a developmental and reproductive toxin. Thus, the end result of these GM foods is a dangerous cycle of increased resistance and increased use of potent chemicals. This not only places all members of society at risk of potentially life-threatening health complications but also creates the possibility of destroying natural ecosystems.

The Benefits of GM Foods

Despite the many concerns about GM foods, there is no doubt that the rise of these products has significantly contributed to the development of the world’s economies and societies. Some say that GM foods may even be the answer to feeding a booming global population and satisfying world hunger. Gale Buchanan, a former U.S. Department of Agriculture Chief Scientist, is a proponent of GM foods and argues that they can address climate change, improve agricultural productivity, and extend food security to developing countries.

Regardless of their risks and benefits, however, GM foods are here to stay. Concerning the future of GM foods, Brownell notes, “There might be tighter government regulation, increased evaluations, and better labeling so that consumers understand when they are consuming [products] that contain genetically modified crops.” These actions could minimize the majority of risks posed by GM foods, and in this sense, GM foods could potentially tackle some of the world’s most fundamental problems.

Yet GM food companies are becoming so powerful in the U.S. that increased government restrictions and labeling of GM foods are not likely to emerge. In the last decade, the GM food industry has spent nearly $600 million to lobby in favor of relaxed regulation. Thus, while there are laws that require the labeling of GM products in many countries, the U.S. is one of the only developed countries in the world that does not require the food industry to disclose whether or not its ingredients are genetically engineered.

(Left): 93 percent of the cotton crop grown in the United States is genetically modified. The majority of other major crops grown in the United States, including corn, soybeans, and canola, are also genetically modified. Courtesy of Adidas Group

(Right): Genetically modified foods have the potential to tackle world hunger issues. However, world hunger problems mostly lie not in food production, but in the infrastructure for and distribution of food. Courtesy of Cross Catholic Field Blog



FEATURE
MEDICINE

The More The Merrier: Limiting the Number of Embryo Implantations

BY JESSICA SCHMERLER

For the hundreds of thousands of couples facing infertility in the United States, the practice of in vitro fertilization (IVF) provides a promising alternative for having children. The practice common in American medicine of implanting multiple embryos all at one time, however, can lead to a long list of complications for both mothers and babies, most of which are related to an increased likelihood of multiple gestations. Many other countries, by contrast, have implemented limitations on the number of embryos allowed to be transferred in one IVF cycle, and clear medical evidence supports the implementation of such a policy. Why, then, do we not follow their lead toward single-embryo transfer IVF?

What is in vitro fertilization?

Under normal circumstances, an egg is fertilized inside a woman’s body and develops into an embryo, which then implants in the wall of the womb. When a woman cannot conceive naturally, she and her partner can turn to a number of assisted reproductive technologies (ARTs), the most popular of which is in vitro fertilization. IVF involves five steps: (1) ovarian stimulation, in which the woman is given drugs to boost egg production; (2) egg retrieval, in which she undergoes a minor surgery to remove eggs from her ovaries; (3) insemination and fertilization, in which her eggs are exposed to her partner’s sperm in a controlled environment and fertilized; (4) embryo culture, in which the fertilized eggs are grown in petri dishes; and (5) embryo transfer, in which the embryos are transferred to the womb via a catheter, hopefully resulting in implantation of embryo(s), i.e., pregnancy and live birth. Through this process, couples who have been told they cannot have children are provided with hope.

Advantages of Multiple Embryo Transfer

Despite our advances in IVF technology, however, we still cannot gauge the likelihood of a successful embryo transfer. As many as 80 percent of embryos transferred into the uterus fail to implant, mostly due to chromosomal abnormalities, and only about three out of ten IVF procedures lead to a successful live birth. Consequently, doctors frequently opt to transfer multiple embryos in one cycle to increase the chance of pregnancy. For example, a study performed in Québec demonstrated that, following the implementation of a single-embryo transfer (SET) policy in August 2010, the pregnancy success rate decreased from 42 percent to 32 percent.

Due to this increased likelihood of pregnancy associated with multiple embryo transfers, transferring more than one embryo per cycle is also much more financially practical. In the United States, insurance companies in only eight states cover IVF; non-insured IVF cycles cost on average $9,500. In other countries, SET policies have been more successful because national health insurance covers IVF, so there is no penalty for transferring embryos one at a time until pregnancy is achieved. In the United States, however, it is not financially desirable — or possible — for many couples to pay out-of-pocket medical fees multiple times.

Disadvantages of Multiple Embryo Transfer

Some clinics choose to culture embryos slightly longer, to the blastocyst stage, before reintroducing them in order to increase the likelihood of implantation. Courtesy of nfcares.com

Although multiple embryo transfers may be easier on the wallet, this practice is certainly not easier on mothers or babies. By transferring more than one embryo at a time, the likelihood of multiple gestation is greatly increased, and, medically speaking, this in turn increases the dangers posed to both mother and baby. In the United States, twins account for 35 percent of IVF pregnancies and triplets account for seven to eight percent; between 1980 and 2009, the twin birth rate increased by 76 percent. This increase was coupled with the transfer of an average of between two and 3.2 embryos at a time (depending on age, with younger women receiving fewer on average than older women) in 2009.

Multiple pregnancies are associated with increased likelihood of neonatal complications. Nearly all triplets and 70 percent of twins are born prematurely, and these babies are thus at greater risk for low birth weight, mental and physical handicaps, and even death. The mother, meanwhile, is at risk for hypertension, gestational diabetes, heart stress, and placental problems. In Canada, IVF accounts for only one percent of all births but 17 percent of neonatal intensive care unit admissions. Also, the cost of caring for twins, triplets, or higher-order multiples is four, 11, and 18 times higher, respectively, than caring for a single child.

Many women choose to have multiple embryos transferred anyway, as they believe that the importance of getting pregnant outweighs the risk of multiple pregnancies. In the case of multiple pregnancies, women often choose to terminate one of the pregnancies, creating the risk of unintentional termination of all pregnancies. Anti-abortion ethical arguments also arise when terminating pregnancies.

Often cited as an example of the downside of multiple embryo transfers, Nadya Suleman had twelve embryos transferred and gave birth to octuplets in 2009. Courtesy of news.softpedia.com

How Could a Single-Embryo Transfer Policy Work?

According to Dr. Pasquale Patrizio, Director of the Yale Fertility Center, a SET policy should be favored, but several obstacles impede the implementation of such an umbrella policy. The first main hurdle is the difficulty of efficiently and safely assessing the developmental competence of embryos. In other words, when it becomes possible to ascertain an embryo’s likelihood of implantation, it will be simpler to implement such a policy. Second, it is important to note that while twin pregnancies may not be a medically favorable outcome, they are the desired outcome for many women, and a SET policy would restrict that freedom of choice. Third, IVF is a costly procedure, involving out-of-pocket expenses, time off from work, and the physical taxation of the procedure itself. To force women to undergo this procedure more than once with a decreasing likelihood of successful pregnancy seems somewhat unethical. Patrizio also points out that although the reduction from three to two embryos transferred has decreased the rate of triplet or higher-order pregnancies without decreasing the pregnancy success rate, an additional reduction of that number to one embryo — in the absence of markers predictive of successful implantation — would likely decrease the pregnancy success rate as well. Finally, what Patrizio considers the most pressing issue is that of record-keeping. While the number of multiple pregnancies resulting from IVF is well documented, those from all other forms of ART are not. According to Patrizio, non-IVF forms of ART, such as ovarian stimulation, generate equally high risks for multiple pregnancies, but “unlike IVF,” he notes, “these outcomes are not recorded in a national registry and data on births resulting from its use are not easily available.” A recent survey of women who had multiple pregnancies showed that only 0.7 percent underwent IVF, while four percent used ovarian stimulation drugs, and 12 percent used other ARTs. While a SET policy might have some effect, there are still many other sources of multiple pregnancies.

To address these challenges facing the proposal of a SET policy, Patrizio, his colleagues, and the March of Dimes Foundation have organized a summit to take place this June. The summit will bring together experts in the field of ART, insurance managers, lawyers, patient advocacy representatives, and obstetricians, who will “spend two days in June trying to discuss all the possible components that go into the decision of transferring more than one embryo [and] all the possible factors that play a role.”

Multiple embryo transfers during in vitro fertilization can often lead to the undesirable consequence of multiple gestations, as shown in this ultrasound of triplets. Courtesy of Raphael Gonzalez & Richard Woods

The Current State of the Embryo Transfer Debate

While implementing an umbrella policy regarding the number of embryos transferred per cycle is not currently feasible, much has been proposed in terms of how to maximize pregnancy success rates while minimizing medical complications. In 2011, Congress introduced the Family Act, which proposes a progressive tax credit on out-of-pocket expenses for IVF, with a 50/50 cost-share and a maximum lifetime credit of $13,360. The American Society for Reproductive Medicine and the Society for Assisted Reproductive Technologies created general guidelines for IVF, recommending a maximum number of embryos transferred depending on the age and medical condition of the woman. Many clinics culture embryos for five days instead of only three in order to maximize the likelihood of transferring only embryos with high potential for implantation, and research is currently being conducted on how to predetermine this potential. Taking all this into consideration, a policy regarding embryo transfers in IVF would need to consider the differing medical, financial, and ethical circumstances of each individual case. IVF is a biomedical engineering feat that has brought much joy to many couples who would previously have suffered under the dark cloud of infertility. In light of this huge contribution, it is our responsibility, as a nation, to find the balance between hope and medical safety.


FEATURE

BIOETHICS

Selling Sex... Cells

Navigating the Ethics of The Sex Cell Market

BY MEREDITH REDICK

For several decades, assisted reproduction technology has helped

infertile couples realize their dreams of having children. However, this

process requires gamete donors — men and women willing to give

up their own sex cells. Thus, while the emerging market for sex cells

has ushered in hope for many couples, this promise comes at a price..

While sperm donation can hardly be called invasive, egg donation poses many risks that are not yet fully understood.

To donate, a woman must take hormone injections for several weeks.

Typically, a gonadotropin-releasing hormone agonist is injected to

prevent normal pituitary stimulation of egg development. After one

to two weeks, follicle-stimulating hormones are injected to promote

artificial maturation of multiple eggs. At the end of the cycle, an

injection of human chorionic gonadotropin stimulates egg release.

Retrieval requires surgical insertion of an ultrasound-guided needle

through the vaginal wall into the ovary.

For some women, maturation of multiple eggs results in severe abdominal cramping. Women preparing to donate might have as many as 40 follicles ripening in their ovaries as a result of the hormone treatments. For some, the problems exceed discomfort. In her book Sex Cells: The Medical Market for Eggs and Sperm, Dr. Rene Almeling, Assistant Professor of Sociology, reports estimates from the American Society for Reproductive Medicine that in one to two percent of egg donors, too many eggs mature simultaneously and produce serious complications. In women with severe ovarian hyperstimulation syndrome (OHSS), fluid can leak out of blood vessels and into the abdomen, causing bloating and, in rare cases, kidney failure or death.

In addition to the risk of OHSS, egg donation may pose long-term risks. Unfortunately, the seriousness of these risks remains fairly nebulous. Several cases of cancer have been linked to hormone treatments, but thorough research has not been conducted. “We do not have good long-term follow-up clinical studies about what happens to young women who are repeatedly exposed to fertility medications,” Almeling explains. “We don’t even really have short-term studies. I think when a woman goes in and is asked to give informed consent, it can’t be true informed consent. They need to be advised that we don’t have good data.”

Three days after conception, this embryo consists of 8 cells. Courtesy of Wikipedia

A basic overview of how eggs are transferred in assisted reproductive technologies. Courtesy of the Fertility Institute of New Jersey & New York

The increased risk associated with egg donation means that women

are typically paid much more to donate sex cells than men. Men are

typically paid about $100 per sperm sample, whereas women can be

paid more than $5,000 for a single cycle. In other words, egg donation

can be quite financially rewarding, which leads to concerns about

exploitation of the sex cell market. In response, the American Society

of Reproductive Medicine has issued guidelines recommending against

remuneration exceeding $10,000. Furthermore, Almeling writes that

egg donation agencies tend to use altruistic and “gift of life” language

to persuade women to donate. She suggests that this advertising

method might be partially due to the need to screen out women who

are interested only in making money from donation, since women in

dire financial straits are more vulnerable to exploitation.

The effects of such ethical concerns about egg cell donation are

present not only in fertility clinics but also in research laboratories.

Researchers have established that immature egg cells are excellent

candidates for stem cell research. Instead of using embryonic stem

cells, scientists could grow their stem cells in the lab from somatic cell

DNA and egg cells. This practice would be a win-win for both the

public and scientists, eliminating the current controversy concerning

the destruction of human embryos. However, the federal government

does not currently compensate women who donate eggs to stem cell

research, which has resulted in an egg cell shortage in laboratories and

a continuation of the stem cell debate.

For the time being, standing ethical and biological issues shroud

the sex cell donation market, limiting the potential benefits that a

well-regulated market could provide. Although the procedure of

obtaining egg cells has been standardized and proven successful for

many couples, nuanced questions of how to compensate donors and

to prevent incest between children from a single sperm donor have yet

to be resolved. But for now, egg cells are a coveted yet controversial

commodity for infertile couples.



BIOLOGY

FEATURE

Let Them Eat Cake

More To Obesity Than Just The Extra Calories

BY KATHERINE LEIBY

With a quick glance around, we can tell that obesity is becoming more and more prevalent. And the greater our body weights

become, the more we see the whole host of diseases that come along

with obesity as well, including diabetes, heart disease, and even some

types of cancer.

We like to think that preventing obesity is a simple matter of personal

responsibility, and that if we only used our willpower, we could right the

imbalance between our caloric intake and expenditure. But it is becoming

increasingly clear that solving the obesity problem will require more than

the customary fix of a little less chocolate cake and a little more time on

the treadmill.

There are at least 32 gene variants that have been implicated as links

to obesity, including one that decreases our propensity to choose healthy,

low-fat foods. At various times, hormones in our gut, like glucagon-like

peptide 1 and oxyntomodulin, are sending satiety signals to our brain,

reducing our food intake, while other hormones, like ghrelin, are stimulating

our appetites. The peptide hormone leptin, which is secreted by fat

tissue and suppresses hunger, was once even a prime candidate for drug

therapies — until we realized that blood concentrations of leptin actually

increase with body fat and that most obese individuals have developed a

resistance to the hormone.

Researchers used mice like these in order to better understand

the role of the peptide hormone leptin. Courtesy of

Proceedings of the National Academy of Sciences

Ashley Gearhardt, a clinical psychology doctoral student at Yale, also

discovered that some people show both the neurological and behavioral

signs of substance dependence in response to certain craveable types

of food, such as ice cream, french fries, and candy, much like a smoker

responds to cigarettes. What is intriguing, Gearhardt says, is that not all

obese individuals exhibit “food addiction” and that not all lean individuals

are free from these signs of dependence. But those who are addicted

to food demonstrate the same neural response patterns to food cues as

would be predicted for those with substance dependence.

Food addiction brings into question the role of our food environment,

in which highly processed foods are advertised through “guerrilla

marketing,” particularly to children. We do not yet understand all of the

potential consequences that these highly marketed, rewarding foods may

have, though Gearhardt suggests that they offer “the potential of hijacking

control for at-risk individuals.” When faced with such an intense food

environment, efforts at self-control may not be enough.

Striking as well are findings that obesity may in fact have infectious

origins, an idea encompassed in the term “infectobesity.” Already,

six different pathogens have been implicated in causing obesity in animals,

two of which are also associated with human obesity. Many more obese

individuals than lean individuals harbor antibodies in their blood to the human adenovirus Ad-36, a pathogen responsible for causing respiratory

tract infections, suggesting a link between human obesity and Ad-36.

The obesity epidemic is on the rise worldwide. Courtesy of

The Nutrition Post

An even more startling discovery supporting the idea of infectobesity

was made in a recent study led by Dr. Jorge Henao-Mejia, Dr. Eran

Elinav, and Mrs. Chengcheng Jin in the laboratory of Dr. Richard Flavell,

Sterling Professor of Immunobiology at the Yale School of Medicine

and a Howard Hughes Medical Institute Investigator. Their study suggests

yet another way in which obesity could be infectious: through gut

microbiota, bacteria that live in our intestines. These bacteria populations

are regulated by proteins called inflammasomes that are responsible for

sensing particular stimuli and triggering an inflammatory response. The

team observed that mice that were deficient in certain inflammasomes

— and therefore had altered gut bacteria populations — exhibited a

higher incidence of fatty liver disease and obesity when fed a high-fat diet.

The same severity of disease was reproduced in healthy mice merely by

cohousing the two populations, suggesting that the altered gut bacteria

condition was infectious.

Elinav is quick to emphasize that this study does not suggest that gut

microbiota cause obesity, but rather that it is the combination of diet,

genetics, and microflora that promoted obesity in the mice. “This is

another contributing mechanism [to obesity],” he explains, “[but] the

dietary component is of no less importance.” Both the high-fat diet and the

genetic mutations in inflammasomes, by altering the gut microbiota, were

necessary driving factors for the development of liver disease and obesity.

All of these findings add yet more threads to what is becoming

the more and more complex web of obesity, one in which we as humans

are becoming increasingly trapped. Obesity is not a simple problem with

a straightforward solution, and it does not look like we will be stumbling

upon a quick fix anytime soon. But with the whole multitude of contributing

factors — genetics, hormones, addiction, viruses, gut microbiota

— we know that there is much more to be blamed in this epidemic than

just that extra slice of chocolate cake.



FEATURE EDUCATION

Sizing Up Science at Yale

part ii of science education at yale

BY DENNIS WANG

Much has changed since the Sheffield Scientific School, separate

from Yale College, was founded in 1854. Today, science is an integral

part of the liberal arts education at Yale and is a required component of

the distributional requirements. Science education at Yale has evolved,

and quantitative measures of the present state of science at Yale are the

subject of constant scrutiny.

Quantifying the Experience

According to a recent Yale Daily News article, 30 percent of freshmen at

Yale start out in science, technology, engineering, or mathematics (STEM),

but only 19.9 percent of seniors graduate with STEM degrees, compared

to 28.5 percent of Harvard graduates and 36 percent of Stanford graduates.

Retention in the sciences is low at Yale, which may be partly due

to the strength of its Humanities and Social Science Departments. The

Yale Scientific Magazine conducted a survey which found that 56 percent

of science majors had seriously considered non-science majors, and that

64 percent of science majors were pre-medical students. Matriculation in

the sciences, however, is lower than at Harvard, Stanford, or MIT, which partly explains the lower percentage of science graduates at Yale.
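The retention figures above can be combined into a rough back-of-the-envelope estimate. This assumes entering and graduating class sizes are comparable, which the article does not state, so treat it as illustrative arithmetic only:

```python
# Rough implied retention: fraction of seniors with STEM degrees
# divided by fraction of freshmen who start in STEM. Assumes
# comparable class sizes year to year (an assumption, not a source claim).
entering_stem = 0.30     # 30 percent of freshmen start in STEM
graduating_stem = 0.199  # 19.9 percent of seniors graduate with STEM degrees

retention = graduating_stem / entering_stem
print(f"implied STEM retention: {retention:.0%}")  # roughly 66%
```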

The numbers alone may not be reflective of the quality of Yale’s science

programs. As noted by William Segraves, Associate Dean for Science

Education, science at Yale is “an enormous enterprise with many dimensions.”

Its merits and shortcomings deserve to be assessed independently

by the students and faculty that they most affect.

Qualifying the Experience

The Yale Scientific Magazine asked 103 Yale science and non-science

majors whether they agreed or disagreed with the following statements

on a ten-point Likert scale, where “1” represented strong disagreement

and “10” represented strong agreement. The results are arranged in the

following table:

Claim (Likert Scale)

Science at Yale is stronger than social sciences or humanities at Yale. 2.6
Science at Yale is stronger than science at Harvard or other comparable schools. 4.8
Science at Yale has a strong reputation. 6.0
Science at Yale has a strong program. 7.0
Yale is doing enough to promote science education. 5.6
The overall science experience at Yale is good. 6.3

Sterling Professor of Molecular Biophysics and Biochemistry Thomas Steitz won the 2009 Nobel Prize in Chemistry for solving the structure and function of the ribosome. Like many Yale professors, Steitz has undergraduates working in his laboratory. Courtesy of Yale University

Additionally, survey respondents were asked what they considered to be the greatest strengths and weaknesses of science at Yale. Students believe that the sciences at Yale are stronger than their reputations imply, citing the low student-faculty ratio in advanced classes and peers with interests not limited to science. While science students at Yale represent a smaller share of the student population than at Harvard and Stanford, Michael McBride, Colgate Professor of Chemistry, notes that the focus

from comparable institutions with larger programs. Other students

note the abundance of research opportunities and funding through

the Science, Technology and Research Scholars (STARS) program and

the freshman Perspectives on Science and Engineering (PSE) program.

Segraves, who organizes PSE, notes that “research is deeply embedded

in the culture here.”

However, science at Yale still has room for improvement. Science

majors complain about the inconvenient location of Science Hill,

ambiguity in grading, lack of course variety, poorly-taught introductory

classes, and the lack of a full credit for labs. Science majors and non-majors

identify a rift between the sciences and other disciplines, though

student opinions vary on science distributional requirements and courses

for non-majors. One student, in naming the best and worst things about

science at Yale, quipped “being at a liberal arts school.”

Looking Ahead

In 2000, President Richard Levin announced a $1 billion commitment

to the sciences, expressing his hope that “our billion dollar investment

means that the University’s image will change somewhat and that we’ll be

recognized for our excellence in all fields.” Levin, who is a member of the

President’s Council of Advisors on Science and Technology, called for the

STEM Teaching Transformation Committee to improve the quality of

science facilities and education. Dean Mary Miller indicated in the 2011

Yale College Curricular Review that “Yale should make a commitment to

do more to prevent [attrition from STEM majors] and potentially even

attract more of our students to science and engineering,” noting that “the

deferral of improvements to science and engineering facilities with the

economic downturn has squeezed teaching and learning opportunities.”

It is not yet certain how science at Yale will change over the next five,

ten, or 20 years as facilities, programs, and students continue to develop

and evolve, but as Segraves says, “The only thing that’s certain is that it

will be different!”



BOOK REVIEW FEATURE

The Aha! Moment: A Scientist’s Take on Creativity

BY RENEE WU


We all dream of being the next Steve Jobs of our field, the one

who revolutionizes the industry with a brilliant idea that changes the

world. Musicians compose in hopes of becoming as commonly known

a name as Beethoven; authors look

to Hemingway and Fitzgerald for

inspiration for their own works.

Regardless of whether we aspire to

be the next Picasso or Armani, we

all look to “the Greats” with slack-jawed

wonder, each of us longing to

achieve the crown jewel of human

thought: a creative idea. So where

do these ideas come from?

A retired chemist best known

for his columns as the comical

Daedalus in the New Scientist and

Nature, David Jones describes his

theory of creativity in his recent

book The Aha! Moment: A Scientist’s

Take on Creativity. Writing about his

own creative journeys in science as

well as offering dozens of other

scientific examples, Jones shows

how we may change our ways of

thinking to enhance our creative

thoughts.

Jones simplifies the human

mental structure into three parts:

the Observer-Reasoner in the

conscious mind, the Censor in

the subconscious mind, and the

Random Ideas-Generator (RIG)

in the unconscious mind. We have different levels of access to each, with full awareness of the Observer-Reasoner, some control over the filter of the Censor, and no controllable contact with the RIG. All of our thoughts, decisions, and ideas depend on the interaction of the three layers, but we can change how strong each layer is to enhance our creativity.

David Jones describes how we can stimulate our creative muscles in his new book. Courtesy of Idaho Falls Public Library

Most of The Aha! Moment is spent describing the importance of the

RIG, with Jones comparing it to “a pet animal … you can be fond

of and pleased with what it gives you.” The RIG is described as a

vast, swirling pool of information that is collected and enhanced as

you learn, read, and experience more. Like a “pet,” the RIG is always

active and plays with this information to combine or generalize into a

package that is sent to our conscious minds, the Observer-Reasoner.

Before reaching the Observer-Reasoner, the idea must first pass

through the Censor, which strictly tries to oppose whatever the RIG

sends “up.” Although the Censor is important to filter out “all sorts

of dangerous absurdities and mad possibilities [from] being tossed

around” in the RIG, it can also impede creativity. In the case that

an idea is able to break through the barrier of the Censor and is not

discarded as too illogical by the Observer-Reasoner, Jones writes that this is the “aha!” moment that we all crave.

We are able to enhance our creativity by flexing and playing with our

RIG and “outwitting” our Censor, though

most of his book elaborates on the

former. According to Jones, being creative involves two components in the RIG:

(1) amassing a large pool of knowledge

and experience through life experiences,

reading articles, vigilant curiosity, and

constant observation; and (2) encouraging

the RIG to play with information

and stimulating it to pass up ideas to the

Observer-Reasoner. The book perfunctorily

encourages us to always carry a

paper and pencil to jot down ideas the

RIG may send at any moment, to try our

hand at different academic fields to give

our RIG more challenges to play with, to

keep a personal database of documents

we find intriguing, and to recognize the

power of emotional feelings, sexual tension,

and intuition.

In the second half of the book, readers

are treated to a long parade of the

triumphs and experiments in Jones’

career that he attributes to this mode of

creative thinking. Describing the thought

processes that led to his accomplishments

working with NASA, his study of

bicycle stability (which he said he was first

inspired to study because of his amazement

that he could ride a bicycle even

when drunk), and dozens more, Jones

hammers home the same message with

his many examples: Creativity requires a cross-training kind of thinking

in which novel ideas are based on pattern recognition and looking at

things from fresh perspectives, not inspired guesswork.

Despite the many original descriptions of the thought processes that

led to novel discoveries, The Aha! Moment itself does not provide any

extraordinarily novel perspectives on creativity. Although Jones writes

about his examples in a playful and light-hearted manner, the sheer

number of examples he includes to emphasize his message feels somewhat

tiring by the end of the book. Additionally, at some points, the book

seems to take the idea of the RIG too far, such as when Jones suggests

that a woman’s unconscious mind selects eggs it believes are well-suited

for the men in her life.

Nonetheless, The Aha! Moment is an entertaining book filled with

intriguing reflections, sincere advice, and playful ideas. Although we may

not become the next Steve Jobs or Picasso, Jones lets us enjoy a journey

inside his creative mind in hopes that we may branch off onto our own

innovative paths.



FEATURE

UNDERGRADUATE PROFILE

The Road Less Traveled: Darren Zhu, B.S. ’13

BY ANDREW GOLDSTEIN

Last year, Darren Zhu was in his sophomore year

as a chemistry major; this year, the former member

of the Yale Guild of Carillonneurs is marching to

a different tune as he launches a biotech start-up

company in Palo Alto, California. The 2,600-mile

transition was facilitated by Zhu’s selection as one

of the recipients of the 20 Under 20 Thiel Fellowship,

sponsored by Peter Thiel, a prominent Silicon

Valley venture capitalist who is perhaps best known

as the co-founder of PayPal and the first investor in

Facebook. The son of a professor and an engineer,

Zhu cultivated his interest in science and technology

through science fairs, classes at the North Carolina

School of Science and Math, and research experience

during his senior year of high school at the University

of North Carolina at Chapel Hill. At Yale, he dabbled

in chemistry, mathematics, economics, computer science,

and biology. However, he realized that his true

interests lay at the intersection of science, technology,

and business after helping start and run Yale’s

team in the International Genetically Engineered

Machines (iGEM) Competition. While working with

this group of students to engineer E. coli to churn out

a new anti-freeze protein, Zhu became immersed in

the field of synthetic biology, an area with a diverse

array of applications in biotechnology. “Founding a

biotech start-up became a dream of mine, but I didn’t

think that it would be accessible until much later in

my career,” he explains.

Yet less than a year later, Zhu won a $100,000 Thiel Fellowship to

pursue his interest in biotech. He started by teaming up with David

Luan, another Yale student and Thiel Fellow, to start a biotech robotics


company, Dextro. Zhu and Luan sought to build a company that provided

end-to-end automation of large-scale experimental procedures to

remove the tedium and error that frequently plague scientific research.

This process involved making connections with various professionals,

including pharmaceutical/biotech industry officials, business advisors,

lawyers, and various science/technical experts because, as Zhu explains,

“you truly have to be a jack of all trades to found a start-up.” Recently,

Zhu and Luan decided to abandon the project: high system infrastructure costs limited the product’s market to big biotech companies, and many of those potential customers indicated that

they were planning to significantly reduce research and development

spending. However, Zhu is far from discouraged. In fact, he found

the process to be a fascinating learning experience, adding that “the best way to learn how to run a start-up is to jump right in there and try your hand at it: try to make something out of nothing.”

Darren Zhu ’13, speaking at the 2009 Davidson Fellows Ceremony in Washington, D.C. Courtesy of Darren Zhu

Zhu recently received a new $100,000 grant from the Bill and

Melinda Gates Foundation and plans to shift his focus for at least the

next 12 to 18 months back towards synthetic biology to develop an in

vivo diagnostic biosensor. He is currently fine-tuning the parameters

of this project, working through logistics to acquire lab space to

carry out his research and exploring a host of foundational synthetic

biology techniques, such as next-generation DNA sequencing and

synthesis that will support the research. His current project aims to

supplant chemical- and microscopy-based diagnostic methods by

developing an organism capable of detecting disease using engineered microbes coated with surface receptors specific to

various antigens.

While Zhu is doubtlessly excited about his work in Silicon Valley, it

is also clear that a part of him misses Yale. His suite at Yale has been

replaced by a house in Palo Alto with seven other start-up entrepreneurs,

working on everything from electric motors to an e-commerce

website and online educational tools. “Being away from Yale makes

you appreciate various things that you don’t really see while you are

immersed in it,” he said. “It is rare to be surrounded by constant collaboration,

constant social engagement, and literally thousands of high

caliber people with a diverse array of interests.” Would Zhu consider

returning to Yale eventually? “Of course,” he says, “but only if I run

out of money or ideas.”



Like many children, five-year-old Edward Cheung wanted to know

how everything on Earth worked. He recalls passing a day in his

grandfather’s shop when a transistor radio fell to the ground and broke

open so that its inner components were exposed. “I felt I was looking

at magic,” he recounts. Since then, Cheung has transformed his boyhood

curiosity into an extensive career that goes beyond planet Earth.

Although Cheung grew up in Aruba, a small island in the Caribbean,

he attended college at the Worcester Polytechnic Institute in Massachusetts.

After graduating with a Bachelor of Science in Electrical

Engineering, he continued his academic career in a microelectronics

program at the Yale School of Engineering and Applied Science, where

his career-defining interest in

robotics took off. Under the

guidance of Vladimir Lumelsky,

Cheung worked on the

development of robotic arms.

“Before, a robot’s purpose was

to be the object that puts [its

components] all together,”

he explains, “but Lumelsky

had a new take.” Instead of

programming specific directions

and defined skills into

a robot, this team focused on

designing robots that could

deal with unknown stimuli in

novel environments. Cheung

developed an array of sensors

that essentially served

as skin, covering the arm

in a thin and flexible circuit

board that allowed it to move

around. “Initially, I did not

see it [as] much of a research area,” Cheung explains, but he would soon

change his mind.

Because the intensity of graduate school was much greater than

he expected, Cheung made considerable efforts to engage himself

outside of his research, particularly by bonding with undergraduates.

“You have an active social life [to find fun]. I had to connect with the

younger students to find mine at Yale,” he recalls. During his years at

Yale, he greatly enjoyed his work as a teaching assistant, as well as his

involvement with the residential college system. One of his favorite

memories was his Taekwondo classes in the tower of Payne Whitney

Gymnasium. “In fact, in 1987, I was Connecticut state champion of

my weight and belt division,” he chuckles.

After receiving his doctorate in Electrical Engineering in 1990,

Cheung’s career took an exciting leap when he was recruited by the

Kennedy Space Center in Florida and eventually offered a permanent

position by the National Aeronautics and Space Administration

(NASA) at the Goddard Space Flight Center, where he has remained

for the duration of his career. Although he has worked at the Center for

over 20 years, his research focus has nonetheless changed throughout

his time there. When he first began working, the robotic arms on the

International Space Station were manufactured by Canada. Concerned

ALUMNI PROFILE

FEATURE

An Engineer’s Journey, Building Gadgets from Aruba to NASA

Dr. Edward Cheung, School of Engineering and Applied Science, Ph.D. ’90

Cheung worked to construct the cryogenic cooler, the current main

instrument on the Hubble Space Telescope Wide Field Camera 3.

Courtesy of Dr. Cheung

BY ZOE KITCHEL

about competition, Congress mandated that NASA develop the Flight

Telerobotic Servicer, a space telerobot that would serve as a safe

replacement of human crew in space. Due to Cheung’s experience

with robotic arms, he was assigned to the project for five years until

“passing political winds” terminated it. Afterward, Cheung served as

principal engineer of the Hubble Space Telescope Service Project,

developing “several new components for the telescope, including the

cryogenic cooler, the current main instrument (Wide Field Camera

3), and portions of the power control system.”

Following the culmination of the Space Shuttle Program last

summer, Cheung shifted to his current focus on the maintenance

of geo-synchronous communication

satellites. “These

satellites sit in a very special

orbit … because it takes these

satellites 24 hours to travel to

their original spots. It turns

out that the Earth also rotates

in this way,” he explains, “and

as a result, they are stationary in

the sky in respect to the Earth.”
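Cheung's description of the 24-hour orbit can be checked with Kepler's third law, which fixes the one radius at which a satellite's period matches Earth's rotation. The constants below are standard reference values; the calculation is a sketch added for illustration, not part of the original article:

```python
import math

# Kepler's third law for a circular orbit: r^3 = GM * T^2 / (4 * pi^2).
# A satellite with a period of one sidereal day sits at a unique radius,
# making it appear stationary over one spot on the equator.
GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86_164.1      # one sidereal day (Earth's rotation period), s
R_EARTH = 6.378e6          # Earth's equatorial radius, m

r = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1_000

print(f"orbital radius:   {r / 1_000:,.0f} km")   # about 42,164 km from Earth's center
print(f"orbital altitude: {altitude_km:,.0f} km")  # about 35,786 km above the surface
```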

For each satellite to stay in its

unique orbit, a small amount

of rocket fuel is required that

keeps the satellite in place for

five to seven years, at which

point it is released out of orbit

and destroyed. To Cheung and

his team, though, this practice

seemed impractical: “Why can’t

we fill up the fuel tank with an

external satellite in order to

prolong the life, saving NASA

$500 million per satellite?” he asked. NASA agreed. Now Cheung

serves as the electrical lead of his team, designing, constructing, and

testing robots to repair and refuel these satellites.

Although content with his current position, Cheung worries about

the future of space exploration in the U.S. The end of the Space Shuttle

Program leaves the United States dependent on other countries for

space travel for at least another decade until the commencement of

NASA’s Space Launch System. Nonetheless, he feels fortunate that

he has found an area of this field that not only piques his interest in

electrical engineering but also sparks his creativity. “There are many

ways to solve any problem,” he explains, “but the way in which you

choose to solve it is a reflection of you.” Cheung’s achievements have

been recognized not only in the United States, but also in his home

island of Aruba, the Netherlands, and throughout the world of robotics

and space exploration. In 2010, he was knighted by the Queen

of the Netherlands and also received NASA’s Medal for Exceptional

Engineering Achievement. Even though he has come a long way from

his roots, he has never lost his childhood delight for understanding

how things work. Today, he collects and refurbishes pinball machines

for the joy of taking things apart and putting them back together, “just

like in my grandfather’s shop.”



FEATURE

NEUROLOGY

Left Brain, Right Brain

An Outdated Argument

BY KEVIN BOEHM

“I am definitely a left-brained person — I am not very artistic.” How

many times have we characterized ourselves as either left-brained and

logical people or right-brained and creative people? This popular myth,

which conjures up an image of one side of our brains crackling with

activity while the other lies dormant, has its roots in outdated findings

from the 1970s, and it seems to imply that humans strongly favor using

one hemisphere over the other. More recent findings have shown that

although there are indeed differences between the hemispheres, they

may not be as clear-cut as we once thought.

Our personalities and abilities are not determined by favoring one hemisphere over the other; that much is certain. Many other functions, however, such as response to danger and language generation, are lateralized in the brain. Researchers hypothesize that these differences arose in early vertebrates. Originally, it seems, the right hemisphere began to respond more quickly to danger. In fact, when we are suddenly confronted by a dangerous stimulus, we respond more quickly with our left hand, which is controlled by the right hemisphere. The left hemisphere, on the other hand, developed to handle more common, routine tasks, such as feeding and hand control. Since this hemisphere controls the right hand, a strong right-handed preference has arisen, offering one explanation of why most people are right-hand dominant.

Language is another process that is lateralized in the brain, though a study conducted by researchers at Ghent University has shown that the asymmetry differs when generating versus receiving language. When children were shown images and asked to tell a story about them, function was lateralized strongly in the left hemisphere for over 90 percent of participating children, as planning and articulation require processing that draws on left-hemisphere regions. However, when the children were asked to listen to an emotional story, both hemispheres of the brain were activated to a similar degree. The stories the children listened to, unlike the pictures, were emotional, which may indicate that the observed involvement of the right hemisphere is linked to emotional regulation.

Olivia Farr, a neuroscience Ph.D. candidate at the Yale School of Medicine, explains that this language lateralization is the source of many generalizations. "In some of the first studies conducted on hemispheric lateralization, split-brained patients without an intact corpus callosum, or bridge between the two hemispheres, were examined," says Farr. Because visual information from the right visual field is processed in the left hemisphere, when split-brained patients saw a word with their right eye, they could speak it but not draw it; when they saw a word with the left eye, they could draw it but not speak it. These results contributed to the belief that the hemispheres operate independently of each other for most tasks, which then developed into the myth of being exclusively left-brained or right-brained. So little was known about the brain that it was convenient to attribute poorly understood traits, such as personality or thinking habits, to a clear-cut difference in lateralization. However, "we now know that hemispheres are always communicating, and that even these lateralization rules don't always apply," Farr affirms.

[Image caption: The right-handed bias has always been evident in humans, but scientists are now discovering that it is not uniquely human; monkeys and other primates prefer to hold food with their right hands. Courtesy of 123rf.com]

[Image caption: The two hemispheres are separated except for the relatively narrow corpus callosum connecting them. This is the link between the processing centers and the source of many of our advanced abilities. Courtesy of Gray's Anatomy, 1918]

Hemispheres sometimes do perform tasks nearly independently, but the integration of the two yields some of our most uniquely human characteristics. For example, when we make errors, our ability to recognize and correct them is a result of the synergy of the two halves of our brain. In fact, patients with damage to the corpus callosum have more difficulty correcting their errors than patients with intact corpora callosa, further suggesting that both halves of the brain are involved in processing the error.

Even though some tasks usually occur preferentially in one half of the brain, it is possible for the region directly opposite to take control of the process. Such a takeover takes time, but after damage to the left inferior frontal gyrus (referred to as Broca's area), a region of the brain linked to speech production, researchers have found that activity in the right inferior frontal gyrus begins to increase during language generation. Our brains have enough plasticity to adapt to damage and change conformations, even in adulthood.

Knowing that language processing usually occurs on the left side of the brain and that response to danger generally occurs on the right does not comprehensively summarize who we are. Lateralization of the brain is still not well understood, and there are very few, if any, hard-and-fast rules of lateralization that actually affect our behavior. We are still every bit as human and unpredictable as before, but we now understand a bit more of what makes us that way.



CARTOON

FEATURE

Aliens

BY SPENCER KATZ

