Catalyst

VOLUME 11, 2018

WRITTEN IN THE (DEAD) STARS: The Black Hole Information Paradox

+ Bioengineering Methods to Foster Neonatal Care

+ Mechanobiology of Lung Cancer Cells During Metastasis

Dear Reader,

Welcome to the eleventh annual edition of Catalyst,

Rice’s premier Undergraduate Science Research Journal.

We are a peer-edited publication founded to showcase

student perspectives on popular science topics and

scientific research. For the past eleven years, we have

been committed to fostering interdisciplinary interest

in scientific writing and dialogue at Rice and beyond.

We are passionate about making science accessible and

engaging, whether you’re a Ph.D. research scientist or

a casual reader. In this year’s publication, you can find

articles about fields as diverse as quantum computing,

cancer therapy, and marine forensics.


Rishi Sinha

Ruchi Gupta

Shashank Mahesh

Rachita Pandya



Elaine Hu, Activities Chair

Sanket Mehta, Editor-in-Chief


Lin Guo

Juliana Wang [ASSISTANT]


Jacob Mattia


Kseniya Anishchenko

This year, Catalyst has grown tremendously, both

inside and outside of the Rice hedges. We began

the year with several goals in mind: to improve our

internal organization in response to our growing

member body and growing reach, to better assess

writer development, and to develop and extend our

non-print platforms, such as podcasts and blogs. This

year was about achieving those goals, but it was also

about strengthening old connections and forging new

ones, both among Catalyst members and with our

partners. It was about creating several initiatives to

increase cohesiveness within the organization, including

study breaks and social events all throughout the

year. The year also marked the launch of the Catalyst

Research Fair, a well-attended university-wide event

that connected students interested in undergraduate

research directly with lab positions advertised by

graduate students and postdocs. And it was about

continued leadership at the university level, as Catalyst

worked alongside the Center for Civic Leadership and

the Dean’s Office to help organize Rice Inquiry Week, a

celebration of inquiry-based pedagogy and research at

the university, particularly by undergraduates and their

faculty mentors.



Jack Trouvé


Brianna Garcia, Meredith

Brown, Jason Lee,

Mahesh Krishna, Kelsey

Sanders, Anna Croyle


Albert Truong, Kalia

Pannell, Vatsala

Mundra, Roma Nyaar,

Pujita Munnangi, Axel

Ntamatungiro, Deepu




Ajay Subramanian


Olivia Zhang, Rishab Ramapriyan,

Jenny Wang, Shrey Agarwal


Nigel Edward, Avanthika

Mahendrababu, Shikha Avancha, Tom


The progress and expansion we have undergone this

year would not have been possible without the support

of the Rice community, our partners, our mentors, and

our absolutely amazing staff. In particular, we would

like to thank the Rice Center for Civic Leadership,

the Rich Endowment, the Program in Writing and

Communication, and the Student Activities President’s

Programming Fund for their continued generous

support of Rice Catalyst’s endeavors. Of course, we also

want to especially thank Dr. Dan Wagner, our faculty

sponsor who has provided us with invaluable advice

and guidance throughout this entire process.

We are proud of how far Catalyst has come this year

and we are excited for our growth in the years to come.

From the entire Catalyst staff, we hope you enjoy our

latest issue as much as we enjoyed making it!

Sanket Mehta




Katrina Cherk, Christina Tan, Kaitlyn Xiong, Evelyn Syau


Sara Ho, Nancy Cui, NamTip Phongmekhin, Maddy Tadros, J. Riley

Holmes, Sahana Prabhu, Priscilla Li, Jenny Wang, Anna Croyle,

Jessica Lee


Natasha Mehta, Kaitlyn Xiong, Krithika Kumar, Pujita Munnangi,

Priyansh Lunia, Emre Yurtbay, Shruti Shah, Evelyn Syau


Dr. Daniel Wagner

Cover Image by Ute Kraus from Space Time Travel Gallery





























SCI-FI TO DIY: The Evolution of Genetic Engineering // Dora Huang

WRITTEN IN THE (DEAD) STARS: The Black Hole Information Paradox // Jenny L. Wang

HIGH-ALTITUDE SULFUR INJECTION: Insane or Insanely Genius? // Meredith Brown

PHOTOSYNTHETIC BACTERIA: Shining Light on Heart Disease // Swathi Rayasam

WHO’S SAVING LIVES? Robots // Jenny S. Wang

MIRROR NEURONS: Unlocking the Mind // Samantha Chao

CHAGAS DISEASE: A Silent Killer // Maishara Muquith

CPR For the Vaquita // Celina Tran

QUANTUM COMPUTING: A Leap Forward in Processing Power // Valerie Hellmer


ROBOTS AND MEDICINE: A Connection of Many Degrees // Alan Ji

AMPing Up the Defense System // Preetham Bachina

CREATING GLOBAL CHANGE: Bioengineering Methods to Foster Neonatal Care // Pujita Munnangi

MITOCHONDRIAL HEALTH: Implications for Breakthrough Cancer Treatment // Sarah Kim

THE WATERWORKS: Drinking Water for All // Andrew Mu

LIGHT SHOW: Using Light-Activated Metal Complexes to Combat Alzheimer’s // Oliver Zhou

Sumo Wrestling With Heart Diseases // Amna Ali

SONGBIRDS, AGING, AND AUTISM: The Exciting New Field of Neurogenesis // Christine Tang

Water Security in the Middle East // Sree Yeluri


Methods of Mosquito Vector Surveillance and Population Control // Owais Fazal

The Emergence of Number Theoretic Questions from a Geometric Investigation // Jacob Kesten

The Effect of Dasatinib on Mechanobiology of Lung Cancer Cells During Metastasis // Shaurey

Vetsa et al.

BIODIVERSITY IN A DROP OF WATER: A Glance into Marine Forensics // Elaine Shen

Reviewing the Relationship Between Inflammatory Bowel Disease and Primary Sclerosing

Cholangitis // Mahesh Krishna


Science of Beauty // Krithika Kumar

THE TICKING TIME BOMB: Hereditary Cancer Syndromes // Shruti Shah





Genetic engineering is an area of

science that never fails to intrigue

people, mainly because the field

seems like something directly

out of a sci-fi flick or a superhero comic.

Although the practice has been around

since the 1970s, the intricacy involved in

genetic engineering has recently made a

splash in the world of science

and technology, as well as in our daily lives.

Through the mediums of bio-art, biohacking,

human genetic engineering, and GMOs,

genetic engineering is paving its way towards

becoming a staple within our culture, and we

may not be far from a world where this “scifi”

becomes a scientific standard.

In order to trace the history of genetic

engineering, we must examine its origins:

GMOs. GMO stands for genetically modified

organism; GMOs are commonly seen in

the form of produce at local supermarkets

or in angry online posts lamenting the

downfall of health standards. These GMOs

were created by removing DNA from one

plant and inserting it into a separate plant,

giving rise to new abilities, such as herbicide

tolerance and self-sustaining insecticide.

Despite their polarizing connotation, GMOs

are relevant to industry and to our own

consumption, as much of the produce we

purchase, including corn, soybeans, and

cotton, is genetically modified. 1

GMOs can prove to be beneficial for

generations to come, as seen in a study

at the University of Washington, where

researchers have been working since the

early ‘90s to develop poplar plants that

can clean up pollutants found in both the

ground and the air. 2,3 Their genetically

engineered poplar plants can take in 91% of

trichloroethylene, which is the most common

groundwater contaminant in the U.S. 2,3

In Japan, another team of researchers

is working with mammalian cytochrome

P450, which is a gene found in mammal

livers. They are implementing this gene into

rice plants, allowing them to degrade and

detoxify herbicides. 2 These detoxifying poplar

and rice plants provide evidence of the use

of genetic engineering in creating GMOs for

the environmentalist movement, yielding an

interesting solution to a pressing issue.

Moving beyond studying plants, genetic

engineers began to experiment on animals.

Though this move sparked controversy in

the scientific community, researchers were

able to create unique organisms in a new

artistic field known as Bio-Art. Brazilian artist

Eduardo Kac sparked the movement of Bio-

Art, a new brand of innovation that combines

the skills of scientists and engineers with

the creativity of artists. Kac rose to fame

due to his project “Alba,” a bunny that glows

green in the dark. 5 Collaborating with a team

of scientists in France, Kac created Alba’s

glow through a feat of genetic engineering:

synthetic mutation of the green fluorescent

protein (GFP) gene from the jellyfish species

Aequorea victoria. 4,5 In Aequorea victoria, a

protein releases a blue light when it bonds

with calcium. 4 The GFP gene then absorbs

this blue light, and green light is emitted. 4

An enhanced version of the GFP gene was

inserted into Alba, amplifying fluorescence in

mammalian cells. 5 Alba and other transgenic

animals provoke feelings of astonishment,

indignation, and curiosity, and it will be

interesting to see what technology the world

of bio-art will embrace next.

Along with being an integral part of the

Bio-Art movement, genetic engineering was

also used in more environmental studies,

such as with the invention of Enviropigs in

Canada. 6 As with all living organisms, pigs

require phosphorus in their food, but they

lack phytase, an enzyme needed to digest

the phosphorus found in the grains and

seeds they consume. 6,7 Therefore, they must

be given supplements of these phytase

enzymes, which have been found to be

ineffective, causing phosphorus to get

flushed out as waste. 6,7

However, with Enviropig, the need for

ingesting phytase supplements is eliminated,

because the pig would ideally generate its

own phosphorus-dissolving enzyme. 6,7 This

genetically engineered pig has urine and

feces that contain 40-65% less phosphorus,

which is beneficial for cutting maintenance

and cleaning costs for pig farmers, as well

as “[complying] with the “zero discharge”

rules...that allow no nitrogen or phosphorus

runoffs from animal operations,” as they can

cause dead zones in nearby water sources. 6,7

Though the operation was terminated

after just two years due to loss of funding,

Enviropigs remain a unique solution to an

environmentally threatening problem and

a useful product of genetic engineering.


Scientists have modified plants and animals,

and it was only a matter of time before

humans started genetically modifying

themselves. Just this last summer, the first

human embryos edited in the U.S. were

modified in Portland, Oregon, where researchers

changed the DNA of one-cell embryos using

the gene-editing technique CRISPR. 9,10 CRISPR is a system that

“target[s] specific stretches of genetic code

and... edit[s] DNA at precise locations.” 8

Researchers can use CRISPR to modify the

genes in living organisms and precisely

correct mutations in the human genome. 8

In using CRISPR on editing human embryos,

scientists can change the genetics of a

family for generations to come, as the

genetically modified child would pass down

the modifications to their future offspring. 9,10

Although these “designer babies” have faced

intense backlash due to concerns that this is

a new form of eugenics, CRISPR and human

genetic engineering are relevant because

they have the potential to eliminate fatal

illnesses and genetic mutations before

birth. 9,10 In such cases, it is important to

weigh the costs and benefits of performing

this research, as well as the ethics behind the

entire operation.

In a realm of pure recreational use,

biohackers have begun taking over the

genetic engineering industry using the

CRISPR technique in their own homes and

communities. 11, 12 Biohackers are a group of

amateur and largely untrained biologists that

work together in community laboratories on

“DIY” genetic engineering projects,

such as growing organs, fiddling with

yeast, and creating vegan cheese. 11 Most of

these experiments are innocuous and help

cultivate and promote scientific research

for those who never believed they would be

scientists, expanding science as a fun leisure

activity. These clever inventions have largely

been tame and innocently curious, but there

is always a chance for biohacking to be taken

to the next level as the knowledge and use of

CRISPR expands.

Genetic engineering is a controversial topic,

and it can be uncomfortable to think about.

However, with its growing emergence into

our everyday lives, genetic engineering is a

topic that is influencing our future and will

continue to grow in prominence as we look

towards genetic engineering for the answers

to our medical and environmental issues.

Works Cited

[1] Benbrook, C. Summary of Major Findings and Definitions

of Important Terms. http://news.cahnrs.wsu.edu/blog/


(accessed Oct. 20, 2017).

[2] Hines, S. Scientists ramp up ability of poplar plants

to disarm toxic pollutants. http://www.washington.edu/


(accessed Oct. 20, 2017).

[3] Choi, C. Genetically Engineered Plants Could Clean

Humanity’s Messes. https://www.livescience.com/1959-


(accessed Oct. 20, 2017).

[4] Green Fluorescent Protein. https://www.conncoll.edu/

ccacad/zimmer/GFP-ww/GFP-1.htm (accessed Oct. 7, 2017).

[5] Slawson, K. Eduardo Kac’s GFP Bunny, a Work of

Transgenic Art, or, It’s not Easy Being Green. http://www.

ekac.org/slawson%203.html (accessed Oct. 7, 2017).

[6] Minard, A. Gene-Altered “Enviropig” to Reduce Dead

Zones? https://news.


(accessed Oct. 20, 2017).

[7] Maglona, M. Enviropig: A Genetically Engineered Pig.



(accessed Oct. 20, 2017).

[8] Questions and Answers about CRISPR. https://www.



(accessed Oct. 20, 2017).

[9] Connor, S. First Human Embryos Edited in U.S. https://


(accessed Oct. 20, 2017).

[10] Belluck, P. In Breakthrough, Scientists Edit a

Dangerous Mutation From Genes in Human Embryos. The

New York Times [Online], Aug. 2, 2017. https://www.nytimes.


gene-editing-human-embryos.html?_r=0 (accessed Oct. 20, 2017).


[11] Ledford, H. Biohackers gear up for genome editing.


/biohackers-gear-up-for-genome-editing-1.18236 (accessed

Oct. 20, 2017).

[12] Ossola, A. Biohackers are Now Using CRISPR. Popular

Science [Online], Aug. 26, 2015. https://www.popsci.com/

biohackers-are-now-using-crispr (accessed Oct. 20, 2017).

Design By Kaitlyn Xiong

Edited by Jason Lee


written in the (dead) stars:

the black hole

information paradox

by Jenny L. Wang

From Nolan’s Interstellar to Muse’s

2006 hit single “Supermassive Black Hole,” black holes serve

as some of the most widely fantasized

plot devices and symbols in popular

culture. Perhaps our fixation with black

holes stems from our lack of scientific

understanding about them. After

all, black holes pose one of the most

contentious dilemmas facing modern

physicists: the black hole information

paradox. Arising from a contradiction

between several fundamental physical

principles, this paradox continues to

challenge not only astrophysicists, but

also our tenuous assumptions and

ideas about how the universe really


Let’s begin by examining black

holes and identifying several of their

properties. In simple terms, a black

hole is an enormous amount of

matter concentrated within a relatively

small volume. Most black holes are

classified as “stellar” and form when

a massive star dies and collapses

upon itself, resulting in a supernova

that ejects most of the star’s mass yet

mysteriously leaves behind a black hole at the

center. 1 In addition to stellar black holes, mini and

supermassive black holes also exist.

While their origins are more ambiguous,

scientists hypothesize that their

formation is tied to the very beginning

of the universe. 1


Although NASA estimates that as many

as ten million to one billion stellar black

holes populate the Milky Way alone, 1

the vast majority remain difficult to

observe with current technology due in

part to a black hole’s incredibly strong

gravitational field. As predicted by

general relativity, the center of a black

hole collapses into a “gravitational

singularity” around which space-time

curves. This forms a sort of sphere

around a black hole with an outer

boundary known as the black hole’s

event horizon. 2 The event horizon of

a black hole represents the “point of

no return” beyond which nothing, not

even light, can escape. 2 Past the event

horizon, the black hole’s gravity is so

strong that it renders direct observation

of a black hole using telescopes and

other electromagnetic radiation-measurement

devices useless. Instead,

scientists must infer the position and

behavior of black holes by observing the

ways a black hole impacts surrounding

stars and gases. 1

Another interesting phenomenon

that results from a black hole’s

incredible gravity is time dilation. To

an observer outside the event horizon,

time for an object falling toward the black

hole appears to slow down. The Stanford

Encyclopedia of Philosophy illustrates

time dilation with the example of

watching someone who, “while falling

[into a black hole]...flashes a light

signal to us every time her watch hand


ticks.” 2 While the falling person would

not feel as if time slows down as she

approaches the event horizon, to us

(outside observers), the time interval

between each successive light would

appear to increase. When the falling

person finally crosses the event horizon,

light no longer reaches our eyes and

the person would suddenly appear to

freeze, seemingly stuck on the black

hole’s surface. 2
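The slowdown described above can be made quantitative. For a non-rotating (Schwarzschild) black hole, standard general relativity (a textbook result, not derived in the article) relates the time dτ on the falling watch to the time dt measured by a distant observer:

```latex
% Gravitational time dilation outside a Schwarzschild black hole
% (standard result; G = Newton's constant, M = black hole mass,
%  r = distance from the center, c = speed of light)
\frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}}
```

As r approaches the Schwarzschild radius r_s = 2GM/c² (the event horizon), this factor goes to zero, so the intervals between the falling watch’s flashes appear to stretch without bound — which is why the faller seems to freeze at the horizon.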


Prior to the 1970s, black hole theory

assumed that when something was

pulled into a black hole, all of the

object’s features (every particle, the

quality of every particle, and the

probability of future behavior of

every particle) became inaccessible to

anything outside the event horizon. 3

According to quantum mechanics,

information must be preserved in

much the same way as energy. Thus,

scientists hypothesized that as objects

fall into a black hole, their quantum

information is simply stored away

somewhere inside, unreachable yet still


However, in 1975, Stephen Hawking

observed that black holes evaporate

over time through a process called

Hawking radiation. 3 Hawking proposed

that the universe is filled with “virtual

particles”: particle-antiparticle pairs

which constantly pop in and out of

existence, appearing and rapidly

cancelling one another out. 4 However,

near the event horizon of a black hole,

instead of being annihilated by its

particle or antiparticle counterpart,

one virtual particle could fall into the

black hole while the other, along with

a miniscule fraction of the black hole’s

mass, escapes as thermal energy known

as Hawking radiation.
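The temperature of this radiation, in Hawking’s standard result (a textbook formula, not quoted in the article), is inversely proportional to the black hole’s mass:

```latex
% Hawking temperature of a black hole of mass M
% (hbar = reduced Planck constant, k_B = Boltzmann constant)
T_{H} = \frac{\hbar c^{3}}{8 \pi G M k_{B}}
```

A more massive black hole is therefore colder and evaporates more slowly; for a stellar-mass black hole, T_H lies far below the temperature of the cosmic microwave background, which is one reason Hawking radiation has never been measured directly.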

The discovery of Hawking radiation

introduced one of the most urgent

and contentious issues that continues

to puzzle theoretical physicists today:

the black hole information paradox.

In simplified terms, the black hole

information paradox arises from a

contradiction between two foundational

physical theories: quantum mechanics

and general relativity, proposed by

Einstein in 1915. 6 As Clara Moskowitz

summarized on behalf of Scientific

American in 2015, black holes “invoke

two different theories of nature—

quantum mechanics, which governs the

subatomic world, and general relativity,

which describes gravity and reigns on

large cosmic scales.” 7 While quantum

mechanics asserts that information

cannot be created or destroyed, classical

general relativity states that information

cannot escape a black hole. If Hawking

radiation follows principles of general

relativity, then black holes do not

emit information while evaporating.

However, this implies that after a black

hole dissipates completely into random

radiation, the information of everything

that it swallowed vanishes as well: a

contradiction of quantum mechanics. 8

At best, this incongruity causes a

headache for a few star-crossed

astrophysicists. At worst, the paradox

compromises decades of progress

in quantum mechanics and general relativity.


Since the discovery of Hawking

radiation, scientists continue to debate

and propose solutions to the black

hole information paradox. Theoretical

physicist Sabine Hossenfelder sorts

tentative solutions into four general

categories: 9

1. Information is somehow encoded in Hawking radiation and escapes as the black hole dissolves. This statement contradicts Hawking’s original conclusions about the radiation being random and introduces yet another paradox called the “firewall” paradox.

2. Information is stored in or on the surface of a black hole. The formation of baby universes, the holographic principle, and Hawking’s most recent “soft hair” explanation fall into this category.

3. Information actually is destroyed, and quantum mechanics requires serious modification.

4. There is no black hole, and information never crosses its boundary. General relativity, which predicts the existence and behavior of the black hole, requires serious modification.


Of course, many other proposed

solutions fall outside of these mentioned

categories. Each comes with its own

associated strengths and shortcomings,

but all challenge previously-held

assumptions about the field and

contribute to a growing body of literature

about the black hole information paradox.

However, in spite of this progress, it will

likely take many more years to observe

what actually happens to information in a

black hole, since current technology limits

our ability to measure Hawking radiation

and quantitatively observe the behavior

of black holes. 8


Despite these challenges, the implications

of the black hole information paradox

remain profound. Instead of viewing the

paradox as a conflict between quantum

mechanics and general relativity,

many regard it as an opportunity to

reconcile the two foundational theories.

As Moskowitz writes, the black hole

information paradox points to a deeper

need to “describe gravity according to

quantum rules” and perhaps suggests the

existence of another theory which unites

the other two. Most scientists agree that

discovering such a unifying theory of

quantum gravity could marry quantum

mechanics and general relativity, offering

a satisfying resolution to the information

paradox and inspiring a “conceptually

new understanding of nature.” 3

In the meantime, by improving scientific

instruments, conducting research, and

reexamining our underlying assumptions

about the physical world, we can refine

our existing theories about the black

hole information paradox and develop

new ones. Although the final answer

remains ambiguous, we advance with

tireless inquiry and curiosity about the

mystique of space. In the process, our

understanding—like our universe—

inevitably continues to grow.

Works Cited


[1] Nagaraja, M. Black Holes. [Online] n.d. NASA. https://


/focus-areas/black-holes (accessed Oct. 8, 2017).

[2] Curiel, E.; Bokulich, P. Singularities and Black

Holes. [Online] 2012, Fall 2012 Edition, n.p. Stanford

Encyclopedia of Philosophy. https://plato.stanford.

edu/entries/spacetime-singularities/ (accessed Oct. 28,


[3] Hossenfelder, S. Forbes. [Online] 2017. https://www.



(accessed Nov. 11,


[4] Baez, J.; Schmelzer, I. UCR Mathematics. [Online]

1997. http://math.ucr.edu/home


(accessed Oct. 28, 2017).

[5] Cain, F. Universe Today. [Online] 2015. https://www.


(accessed Oct. 29, 2017).

[6] Toth, V. Forbes. [Online] 2017. https://www.forbes.


(accessed Nov. 8, 2017).

[7] Moskowitz, C. Scientific American. [Online] 2015.



(accessed Nov. 8, 2017).

[8] Strassler, M. Of Particular Significance. [Online] 2014.



(accessed Nov. 12, 2017).

DESIGN BY Katrina Cherk

EDITED BY Kelsey Sanders





high-altitude sulfur injection:

insane or insanely genius?

by Meredith Brown


If global warming increases the temperature

of the Earth by more than two degrees

Celsius, there will be catastrophic

consequences: major flooding will wipe out

homes, businesses, and ecosystems. 1 Sulfur

dioxide may hold the answer to combating the

threat of global warming. Although aerosols

and sulfur dioxide are detrimental to the

ozone layer and human health, studies have

shown that sulfur dioxide increases both plant

growth and the overall reflectivity of solar

radiation of the Earth’s atmosphere, known as

the Earth’s albedo. 2 Scientists hypothesize that

by injecting sulfur dioxide into the atmosphere,

we could combat the gradual temperature

increase without harming human and plant life.


The negative consequences of aerosols

and sulfur dioxide are well documented.

Both are banned in large quantities by the

MARPOL regulations and by the National

Ambient Air Quality Standards enforced by

the Environmental Protection Agency. 3 The

most harmful aerosols found in hairspray

cans and other products were made illegal in

the United States in the 1970s. 2 Additionally,

atmospheric sulfur dioxide precipitates as

acid rain, negatively affecting plants and

ecosystems. Furthermore, sulfur emissions

are responsible for a large percentage of

particulate matter, a class of air pollutants that

is responsible for approximately two million

deaths per year globally due to respiratory

problems. 4 Sulfur dioxide and aerosols also

play key roles in the destruction of the ozone

layer, the main component of the atmosphere

that protects humans from harmful UV rays. 3

In order to utilize sulfur dioxide to counteract

greenhouse gases, these negative impacts

must be mitigated.

Despite these numerous risks, aerosols

and sulfur dioxide have potential that

encourage scientists to look beyond their

negative qualities. Although sulfur dioxide is

a particulate matter pollutant, it also acts as a

cloud catalyst in certain layers of

the atmosphere.

In this

phenomenon, water vapor in the air is able

to cling to sulfur dioxide molecules more

easily than to salt crystals or to other water

molecules, resulting in a larger and brighter

cloud cover. This is desirable because larger

and brighter clouds reflect more solar

radiation than smaller and dimmer clouds that

lack sulfur. More solar radiation is reflected

back into space, and thus, less solar energy is


available for greenhouse gases to trap near

the Earth’s surface. Also, sulfur dioxide plays a

critical role in plant growth. When a solar ray

cuts through Earth’s atmosphere, it generally

travels in a straight line. But if the ray hits the

large sulfur dioxide particles, the light scatters.

This scattered light is able to reach more plants

on the ground, allowing plants to grow larger.

Larger plants are desirable because the leaves

have a greater surface area and are thus able

to absorb more carbon dioxide, removing this

potent greenhouse gas from the atmosphere.

Dr. David Keith, a professor of physics and

public policy at Harvard University, has

outlined one cheap method of using sulfur

dioxide to combat global temperature

increases: high altitude injection. 3 He proposes

that specialized planes could fly annually and

inject one million tons of sulfur dioxide to

create a thin atmospheric layer intended to

reflect one percent of solar rays. These planes

would fly 20 kilometers high, beyond the cloud

formation point, to avoid contributing to acid

rain, while still reflecting solar radiation before

it ever hits Earth’s surface. Therefore, this

solar radiation is unable to become trapped

by greenhouse gases, thereby slowing Earth’s

temperature increase. In fact, this concept was

underscored by a mega-volcanic eruption in

1991, where a major increase in sulfur caused

a decrease in global temperature by half a

degree Celsius for two years. 5
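A rough energy-balance estimate (our back-of-envelope sketch, not a calculation from the article) shows why reflecting one percent of sunlight matters. Treating Earth as a body in radiative equilibrium, its effective temperature satisfies:

```latex
% Radiative equilibrium: absorbed solar flux = emitted thermal flux
% S = solar constant, alpha = planetary albedo, sigma = Stefan-Boltzmann constant
\sigma T_{e}^{4} = \frac{S(1-\alpha)}{4}
\quad\Rightarrow\quad
\frac{\Delta T_{e}}{T_{e}} = \frac{1}{4}\,\frac{\Delta\!\left[S(1-\alpha)\right]}{S(1-\alpha)}
```

With T_e ≈ 255 K, reflecting an extra one percent of incoming sunlight lowers T_e by roughly 255/400 ≈ 0.6 K — the same order as the half-degree volcanic cooling cited above.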

However, this solution still has some

flaws. Scientists fear disruptions in

precipitation patterns,

reductions in

atmospheric ozone, and obstacles in logistics

regarding “who will inject the sulfur, where

will they inject the sulfur, and who will pay

for this?” Furthermore, the molecular size

of sulfur dioxide causes the particle to

have a short atmospheric life, so gas would

have to be continuously pumped into the

atmosphere, costing billions of dollars and

acting as a mere band-aid for our emissions

problems. 6 Scientists argue against this

tactic by suggesting that if the world were to

develop low-emissions energy systems and

transportation methods, such steps would fix

the greenhouse gas problem in a cheaper and

more sustainable manner in the long-run. Al

Gore denounced this geo-engineering plan,

calling it “insane, utterly mad and delusional in

the extreme.” 7

While high altitude sulfur dioxide injections

are possible with current technology, there

is not enough information to determine the

full scope of the environmental impact. Many

scientists have simply suggested decreasing

emissions to combat the source of the

problem instead of putting on the “bandaid”

of high-altitude sulfur dioxide injection.

However, most scientists can agree that

something must be done to mitigate our air

pollution problem, as global warming is a

challenge that we must tackle immediately.

Works Cited
[1] Aton, A. (ClimateWire). “Earth Almost Certain to Warm

by 2 Degrees Celsius.” Scientific American, 1 Aug. 2017, www.


[2] “Sulfur Dioxide: Its Role in Climate Change.” Institute for

Global Environmental Strategies >> Sulfur Dioxide: Its Role

in Climate Change, esseacourses.strategies.org/module.


[3] “Setting and Reviewing Standards to Control SO2

Pollution.” EPA, Environmental Protection Agency, 13 Oct.

2017, www.epa.gov/so2-pollution/setting-and-reviewingstandards-control-so2-pollution#standards.

[4] Institute of Physics. Researchers Estimate over Two Million

Deaths Annually from Air Pollution, www.iop.org/news/13/jul/


[5] Rotman, D. A Cheap and Easy Plan to Stop Global Warming

. MIT Technology Review. https://www.technologyreview.


(accessed Nov 7, 2017).

[6] Self, S., et al. pubs.usgs.gov/pinatubo/self/.

[7] Hansman, Heather. “Is This Plan to Combat Climate

Change Insane or Insanely Genius?”Smithsonian.com,

Smithsonian Institution, 14 May 2015, www.smithsonianmag.


DESIGN BY Katrina Cherk

EDITED BY Anna Croyle




PHOTOSYNTHETIC BACTERIA: Shining Light on Heart Disease

by Swathi Rayasam


Flashing lights. Chest compressions.

A cry of “clear!” We commonly

associate heart attacks with this

frantic mental picture. However, a heart

attack will only develop into fatal heart

rhythms through a cascade of cardiac

events. Before we explore the evolution of

a basic heart attack into cardiac death, we

need a broad overview of heart disease.

The leading cause of death worldwide,

cardiovascular disease is costly – both in

terms of money and in terms of years of

life. 1 Although several recent advances

have explored repair and regeneration

after significant cardiac trauma, 2-3

tackling cardiac injury closer to its onset

would minimize serious damage due to

treatment delay.

A heart attack is characterized by an

obstruction that prevents proper blood


flow to the heart. 4 While this is certainly

serious on its own, the cascade of

events that are triggered more largely

contributes to patient death. First, the

blockage of circulation can cause a lack of

oxygen in the heart, known as ischemia. 5

This condition can lead to cardiac cell

death if present for an extended period

of time, as in the case of delayed CPR and

transport to the hospital. If the human

body had an alternate route to bypass the

roadblock and deliver oxygen to the heart,

then there would likely be less heart

tissue injury and improved survival.

The fact that trees generate oxygen

may imply that introducing plant cells

internally might prevent and resolve

ischemia. Photosynthetic processes in

such a situation would merely rely on a

light source and chemical compounds

abundant in the human body to engineer

oxygen production in ischemic areas. 6

This new oxygen source would also

lessen the immediate need for proper

blood flow. However, while maintaining

internal plant cells is not exactly feasible,

scientists at Stanford University have

explored the possibility of introducing

other photosynthetic agents into the body

to provide a source of oxygen for cardiac cells.


Stanford cardiovascular surgeon Dr.

Joseph Woo recently began a research

study to bring this fantasy to fruition.

He initially limited his efforts to plants,

by grinding kale and spinach to obtain

chloroplasts, plant organelles that

perform photosynthesis. 6 When these

structures did not survive outside of the

plant cell itself, Dr. Woo and his colleagues

found an alternate option. They identified

Synechococcus elongatus, originally

used to study circadian rhythms, 7-8 as

a viable photosynthetic cyanobacterium

for introduction into the body. The

team considered S. elongatus an ideal

candidate because they could easily

engineer it to produce more metabolites,

such as oxygen or glucose. 9-10

To test whether this cyanobacterium

could immediately deliver oxygen to a

tissue, the researchers induced ischemia

in several rodents. They then randomly

grouped these rodents and injected

their hearts with S. elongatus in the

light, S. elongatus in the dark, or saline.

The researchers prevented any light

exposure in the dark group but exposed

the other two groups directly to light to

examine any differences in oxygenation

levels due to photosynthesis. Originally,

baseline oxygen levels were comparable

between the groups and dropped close

to zero when ischemia was induced. At

reassessment 10 and 20 minutes after

injection, S. elongatus caused a 25-fold

increase in oxygen from the onset of

ischemia in the light group. This was

astounding when compared to the merely

3-fold increase in oxygen levels in the

other two treatment groups. This finding

supports the idea that injection with

S. elongatus in light leads to enhanced

oxygenation in ischemic conditions,

suggesting improved metabolism and

cardiac function. 11

Dr. Woo and his team next aimed to

evaluate the metabolic state of the

heart in the living rodent body, using

temperature as an indicator of activity.

Using a form of videography, they found



no difference in left ventricular surface

temperature between the groups at

baseline and at 10 minutes. However,

at 20 minutes, the S. elongatus-treated

group exhibited improved preservation of

surface temperature in the area lacking

oxygen. Furthermore, the saline-treated

group demonstrated a decrease in

surface temperature in this area over

time, while the S. elongatus-treated group

had an increase in surface temperature

over time. This finding indicates that S.

elongatus enhances metabolic activity in

ischemic areas, unlike in the unassisted


heart. Furthermore, cardiac output – the

amount of blood that the heart pumps

in a minute – had a greater increase

from time of ischemia in the light group

than the dark or the saline group. This

provides further support that only

actively photosynthesizing S. elongatus

bacteria offer these benefits. The team

conducted further testing to explore

long-term effects of S. elongatus injection.

Magnetic resonance imaging (MRI) of

the heart four weeks after therapy in

the light group demonstrated enhanced

cardiac performance associated with

photosynthetic therapy. 11

The study’s findings present S. elongatus

as an ideal agent for mitigating

cardiac injury in patients experiencing

ischemia. However, other effects of the

cyanobacterial species on the human

body should also be considered; it would

not be sensible to assume only health

benefits by introducing an unknown

agent. Dr. Woo and his team conducted

tests with the rodent models and found

no sign of infection, unintended bacterial

growth, or immune response in the

organisms after introduction of S. elongatus.

By the 24-hour point after injection, only a few

injected bacterial cells remained. Four weeks

after therapy, the team euthanized the rodents

to examine their hearts and found no abscess

formation or residual S. elongatus. Overall,

these findings suggest that S. elongatus is nontoxic

when injected. 11

Though the findings are promising,

researchers have yet to conduct human trials,

and the bacteria could potentially exhibit

a different effect in the human body. Such

differences may be due to varying physiology,

such as the thicker cardiac muscle of humans

in comparison to rats. 12 Other obstacles could

also block widespread use of the therapy,

due to issues with availability in the absence

of a medical professional. In this case, a few

factors to consider are the safety of personal

access to S. elongatus injections, the feasibility

of injection maintenance and self-use in the

home, and the criteria to qualify for injection

access. Nonetheless, current research

indicates S. elongatus therapy has immense

potential. The cyanobacteria could even be

useful in ischemic tissues outside of the heart

and in procedures such as cardiopulmonary

bypass surgery. Overall, this therapy aims

to tackle the root of sudden cardiac death;

while S. elongatus may not be a plant, it could

blossom into a solution all the same.


[1] Heart Disease. Centers for Disease Control and Prevention,

Aug. 24, 2017.

www.cdc.gov/heartdisease/facts.htm (accessed Oct. 15, 2017).

[2] Tian, Y. et al. Science Translational Medicine 2015, 7 (279).

[3] Polizzotti, B. D. et al. Science Translational Medicine 2015,

7 (281).

[4] What Is a Heart Attack? National Heart Lung and Blood

Institute, U.S. Department of Health and Human Services,

Jan. 27, 2015. www.nhlbi.nih.gov/health/health-topics/topics/

heartattack (accessed Oct. 15, 2017).

[5] Silent Ischemia and Ischemic Heart Disease. www.heart.



UCM_434092_Article.jsp#.Wfy8GrpFyuU (accessed Oct. 18, 2017).


[6] White, T. Scope Blog, June 14, 2017. scopeblog.stanford.edu/2017/06/14/solar-powered-heart-stanford-scientists-explore-using-photosynthesis-to-help-damaged-hearts/

(accessed Oct. 19, 2017).

[7] Espinosa, J. et al. Proceedings of the National Academy of

Sciences 2015, 112 (7), 2198–2203.

[8] Kondo, T. et al. Science 1997, 275 (5297), 224–227.

[9] Shih, P. M. et. al. J. Biol. Chem. 2014, 289 (14), 9493–9500.

[10] Niederholtmeyer, H. et al. Appl. Environ. Microbiol. 2010,

76 (11), 3462–3466.

[11] Cohen, J. E. et al. Science Advances 2017, 3 (6).

[12] Price, M. Light-activated bacteria protect rats from

heart attacks. Science, June 14, 2017. http://www.sciencemag.org/news/2017/06/light-activated-bacteria-protect-rats-heart-attacks

(accessed Oct. 19, 2017).

Image from freepik.com.


EDITED BY Brianna Garcia


who’s saving lives?


by jenny s wang


Robot assistance has been on the

rise in the medical field in recent

years and its applications in surgical

procedures are showing significant

advantages compared to the standard

of practice. In particular, this technology

has been integrated with retroperitoneal

oncological surgery because it overcomes

some of the shortcomings of traditional

surgical practices, which can be imprecise,

more painful, and slower to recover from,

potentially improving postoperative

outcomes.

The retroperitoneal space, or

retroperitoneum, is the anatomical

space in the abdominal cavity behind the

peritoneum, which is the tissue that lines

the abdominal wall. In other words, the

space contains the organs related to the

urinary system that urological surgeons

operate on. These organs include the

kidneys, ureters, and adrenal glands. 1 Due

to the limitations of operative space in the

retroperitoneum, traditional procedures

usually required a substantial incision, a

disruption of soft internal organs, and a

lengthy recovery period. 2 Due to this, renal

surgeons have since adopted a minimally

invasive technique known as laparoscopic

technology, which overcomes the space

limitations that hindered early surgeons. 2

The technology has evolved quickly and

has proven more beneficial for patients

than previous approaches.

The introduction of robot-assisted

surgery was one such turning point in

laparoscopic technology. Progress in the

field of robotics has allowed for evolution

from the first ‘master-slave’ robotic

systems, which allowed surgeon control

of the robot from afar but had limited

functionality, to systems capable of more

precision and complex procedures. In

2000, the first robot-assisted laparoscopic

surgery was performed and the practice

has become increasingly popular over

the past two decades in renal surgery. 1

Today, robot assistance is favored over

pure laparoscopy due to technical ease

and comfort for the surgeon because

it provides clearer 3D visualization and

magnified imaging compared to the

human eye. 1 Additional advantages


include a greater range of motion and a

shorter training time compared to pure

laparoscopy, allowing for greater precision

and ease of use. 1

Robot-assisted approaches have been

developed for multiple surgical procedures

involved in removal of a tumor mass in

the retroperitoneal space. For example,

in stage 1 of kidney cancer when the

tumor mass is small, the procedures

intend to remove the tumor with the

goal of preserving the rest of the kidney

and surrounding tissue. 1 This technique,

a partial nephrectomy, is the standard

approach in removal of small renal

tumors. It has similar survival outcomes

as produced by radical nephrectomy, in

which the entire kidney associated with

the tumor is removed during surgery.






However, renal functionality is generally

improved for patients undergoing partial

nephrectomy as compared to those

undergoing radical nephrectomy. The

introduction of robot-assisted partial

nephrectomy (RAPN), which has allowed

for a shorter operative time, faster return

to normal function, and less invasion, has

increased in popularity as even larger,

more complex renal tumors have been

successfully treated. 1 Moreover, the 5-year

survival rates of patients who underwent

the procedure have been outstanding and

have thus made this approach a standard-of-care

therapy. 1

For later stages that involve larger and

more complex tumors, other procedures

utilizing robot assistance have also

been compared to typical approaches.

A prospective study compared the

laparoscopic form and the robot-assisted

form of radical nephrectomy, a procedure

typically used for stage 2 kidney cancer

tumors. The study demonstrated that the

operative time and survival outcomes

are similar. However, the robot-assisted

procedure is more expensive. 1 For

treatment of especially invasive tumors

that have not yet spread from the upper

urinary tract, radical nephroureterectomy

(RNU) is favored. The benefits of robot-assisted

RNU are decreased operative time

and postoperative complications, but the

disadvantage is that the cost is greater

than laparoscopic nephroureterectomy

operations alone. Overall, survival

outcomes are similar in both open and

robot-assisted approaches in RNU, but

advancement of the robot-assisted

technique over time will reveal whether there are

additional benefits and risks.

The comparisons of robot-assisted

and standard laparoscopic approaches

demonstrate that the robot-assisted

approach has proven useful in reducing

operation time, discomfort to the patient,

and post-surgery complications. Though

this technology is still facing barriers

of high cost and associated risks, the

continual advancement and improvements

for the surgeon and patients have

demonstrated that we should be optimistic

about what lies in the future for the

growing application of robot-assisted

oncological surgery in the retroperitoneal

space as well as other sites.


[1] Ludwig, W. W. et al. Frontiers in robot-assisted

retroperitoneal oncological surgery. Nature Reviews

Urology [Online]. September 12, 2017, p 1-11. https://


[2] Velanovich, V. et al. Laparoscopic vs open surgery:

a preliminary comparison of quality-of-life outcomes.

National Center for Biotechnology Information [Online],

January 14, 2000, p 16-21. https://www.ncbi.nlm.nih.gov/


Design by Nancy Cui

edited by Kelsey Sanders


Mirror Neurons: Unlocking the Mind


Film, music, and literature capitalize on

conveying emotion to an audience. These

forms of art create emotions by crafting

narratives that their audiences connect with

and believe in. Most people would place the

agency of this emotional transaction on the

movie, song, or book. However, the audience

must also be receptive to the story. In other

words, the audience must have the capacity

to empathize with the emotions portrayed by

the art. This fascinating ability for audience

members to ‘feel’ an external stimulus as

if they were experiencing it themselves is

mediated by a newly discovered type of

neuron, the mirror neuron.

Mirror neurons were first discovered in

the 1980s in macaque monkeys when

neurophysiologists at the University of

Parma, Italy, studied neurons specialized for

hand and mouth control. 1 They discovered

that certain neurons in the monkeys’

premotor cortices lit up in the same pattern

both when monkeys picked up food for

themselves and when they merely observed

other monkeys pick up food. This fascinating

discovery led to a new area of study that

focuses on the brain’s ability to adapt and

respond to its environment. Using functional

neuroimaging, which attempts to associate

cortices of the brain to certain functions,

other scientists have discovered similar

mirror neurons in the human

somatosensory cortex

that allow people to

vicariously experience

emotions when

others perform or

experience different

actions. 4


Mirror neurons

constantly dictate

our social responses,

even without our

knowledge. For instance,

many moviegoers cry at

the heartbreaking ending of

the movie ‘Titanic’, which exhibits the power

of mirror neurons to create a false sense of

personal reality. A more concrete example

of the power of mirror neurons is when Rose

attempts to free Jack from a pair of handcuffs

with an axe. The audience cringes as Rose

swings the axe at Jack’s hand, as if their own

hands are in danger.

These examples fall in line with a classic

experiment performed in the study of mirror

neurons. 2 During the study, participants hid

their hands behind a divide, then watched

as a fake, rubber hand was simultaneously

stroked along with their own, real hands.

Halfway through the experiment, without

warning, the rubber hand was smashed

with a hammer. Nearly all participants

recoiled in surprise and fear because they

established “a feeling of ownership of [a]

fake [rubber] hand”. 2 Functional magnetic

resonance imaging (fMRI) scanned

each participant’s brain activity and

revealed “evidence that cells in

the premotor cortex respond

both when a specific area of

the body is touched and when

an object is seen approaching

that area”. 3

It is easy to see the benefits

that mirror neurons

provide. New models of

human perception are

currently forming around

the concept of mirror neurons,

which may have broad implications in

the field of psychology and the study of

the human mind. This in turn may lead

to the development of novel methods of

interpersonal interaction, which has a heavy

hand in the business and marketing world.

Though the world currently knows little

about these fascinating neurons and their

mechanism of action, their far-reaching

significance may have much to contribute to

future human development and innovation.

The mirror neuron is a relatively new

finding and requires much more testing and

experimentation before it can be labeled as

an official hallmark of human physiology.

However, even with limited information, it

is abundantly clear that the neuron sparks

a rich debate regarding the world’s current

understanding of psychological study and

evaluation. Through further study, we may

be able to better understand the social

component of human nature. Perhaps

mirror neurons can even answer the age-old

question of why people love the humanities:

people enjoy being spectators.

Works Cited

[1] Winerman, L. American Psychological Association. 2005,

36, 48.

[2] Ehrsson, H. H. et al. J. Neurophysiol. 2000, 83, 528–536.

[3] Acharya, S. et al. J Nat Sci Biol Med. 2012, 3, 118-124.

[4] Kilner, J. M. et al. Curr Biol. 2013, 23, 1057-1062.

Design by: Anna Croyle

Edited by: Meredith Brown





by Maishara Muquith

Silent. Deadly. With an estimated 6 to

7 million people affected worldwide,

Chagas disease is a neglected tropical

disease found mainly in Latin American

countries¹. Classified as a disease of

poverty, Chagas perpetuates inequality by

creating a disease burden in the poorest

economies². The total annual cost to

society stemming from healthcare and

lost productivity due to the disease is

$4,059 in Latin America for each

individual afflicted². Since Chagas causes

such a massive economic burden, more

attention needs to be paid to this life-threatening disease.


Chagas disease is transmitted by the blood

sucking triatomine bug, also known as the

kissing bug due to bites near the mouth or

the eyes³. As the bug bites and defecates

near the wound site, the protozoan

parasite, Trypanosoma cruzi (T. cruzi),

enters the body from the infected feces or

urine. In Latin American towns, the kissing

bug is found mostly in adobe houses or

enters through cracks of poorly constructed

houses¹. Other modes of infection include

mother to child transmission, consumption

of contaminated food, and blood


Chagas disease has two phases: acute and

chronic. The acute phase is characterized

by muscle pain, fever, enlarged lymph

nodes, and headaches⁴. Despite this, some

people are asymptomatic. In all acute

phases, however, there is a large

amount of parasite circulating

in the blood. As the

disease develops

into the

chronic stage, the parasites hide in the

heart and the digestive tract and, as a

result, are hard to detect. Thus, people

who progress to chronic Chagas may not

show any symptoms, and this may result

in sudden death due to inflammation and

cardiac arrest years later¹.

Current treatments for Chagas include only

two drugs: benznidazole and nifurtimox.

While benznidazole is FDA approved,

nifurtimox is not. Although these drugs

are most effective in young patients and

patients with acute Chagas, the drugs have

shown a low cure rate in the chronic phase⁴.

Furthermore, both drugs have adverse

side effects including anorexia, headache,

nausea, and neuropathy⁵. They also pose a

barrier to vulnerable populations who have

limited access to healthcare since these

drugs have a long regimen (benznidazole is

taken for 60 days and nifurtimox for 60-90

days) and require constant blood work

to ensure that there are no adverse side

effects. Currently, there are no vaccines

against Chagas. Thus, the need for novel

therapeutic treatments and vaccines is urgent.


One of the leading research institutes for

Chagas, the Sabin Vaccine Institute, is

currently working hard to develop a human

vaccine against Chagas⁶. With this vaccine,

researchers hope to improve Chagas

prognosis, lower treatment cost, decrease

treatment duration, and slow disease

progression. They have already developed

a successful vaccine candidate which

can either act as a vaccine alone or be

combined with benznidazole as a form of

chemotherapy, demonstrating potential as

both a preventive and a therapeutic agent.


Early animal studies of these vaccines have

shown promising results: mouse models

treated with the vaccine have shown potent

immune response, increased host survival,

diminished cardiac fibrosis and pathology,

and reduced cardiac parasite loads⁶.

Another potential vaccine uses recombinant

adenovirus carrying sequences of the

T. cruzi parasite’s proteins. Essentially, these

viruses cannot replicate in the body, but

they still produce viral proteins. Since these

viruses have T. cruzi DNA integrated into

them, they produce the T. cruzi proteins as

well. The body recognizes these elements

as foreign and mounts an immune

response against all parasite remnants.

Results from this study show that these

vaccines are effective in inciting an immune

response and decreasing parasitic load in

both the acute and the chronic phase of

Chagas⁷. The study also demonstrates that

this vaccine can be used therapeutically

to delay disease progression and reverse

heart tissue damage.

Chagas disease is a serious condition

with little attention given to the disease.

Moreover, current treatments for this

disease are few in number and pose various

adverse side effects. Current research

focuses on novel vaccine candidates aimed

at both preventing and treating the disease.

As scientists continue to focus more on this

neglected disease, a possible cure might be

on the horizon.

Works Cited

[1] Chagas disease (American trypanosomiasis). (2017). In

WHO. Retrieved from http://www.who.int/mediacentre/


[2] Lee BY et al. Lancet. 2013, 13, 342-348

[3] Montgomery, S. P. et al. Am. J. Trop. Med. 2014, 90, 814-


[4] Sales, P. A., Jr. et al. Am. J. Trop. Med. 2014, 97, 1289-


[5] Castro, J. A., et al. Human Exp.Toxicol. 2006, 25, 471-479.

[6] Barry, M. A. et al. Human Vac. Immunotherapeutics. 2016,

12, 976-987.

[7] Pereira, I. R. et al. (2015). PLoS Pathogens, 11(1),

e1004594. http://doi.org/10.1371/journal.ppat.1004594

design by Madeleine Tadros

edited by Mahesh Krishna



Celina Tran


Andrea, Fathom, Katrina, and Splash.

These were the names of the four U.S.

Navy-trained dolphins which bore

the responsibility of saving their distant

porpoise relatives, the vaquita, from

extinction. 1

The vaquita are the porpoises that live in

the Gulf of California, the little pocket of

sea neighbored on the west by the Baja

California peninsula and on the east by the

Mexican mainland. At their largest, they are

five feet long and have characteristic dark

“eyeliner” rings around each of their eyes.

There are an estimated fewer than thirty of

them left, 2 largely due to totoaba fishing.

The totoaba is a species of fish that shares

the same habitat as the vaquita - in the

Gulf of California. It is highly coveted for its

bladder, similar to how a rhino is coveted

for its horn. The bladder is used

by the Chinese in a soup called fish maw,

which is believed to boost fertility. The

bladders are smuggled into the United

States, and then shipped to China to be

sold. Even though totoaba fishing has been

banned by the Mexican government, the

activity is still worth the risk because each

bladder can yield up to ten thousand dollars

or more. 3 The fishermen use gill nets to

catch the totoaba, unintentionally also

trapping the vaquita as bycatch. Tangled in

the nets and unable to resurface, the

oxygen-deprived vaquita drown.

On April 3, 2017, the government of Mexico

made an announcement: $3 million would

be given to the VaquitaCPR initiative; CPR

stands for conservation, protection and

recovery. 4 This ambitious, high-risk plan

aims to save the last few remaining vaquita.

As a part of this project, dolphins Andrea,

Fathom, Katrina, and Splash would use

echolocation to seek out the shy vaquita

so that they can be captured. They would

then be gathered and transported to a

holding pen on the west side of the gulf,

which would keep the vaquita away from

the gill nets that have been otherwise lethal

to them. 4 It was hoped that the vaquita

would adapt to captivity, breed, and their

population would grow large enough to be

eventually released back into the wild.

At least, that was the plan. Because the

vaquita had never been captured live

before, little could be done to predict

how they might handle the stress, which

scientists learned the hard way. The first

animal that was successfully captured was

a juvenile female. She had to be quickly

released because she was under too much

stress. The second, a female of reproductive

age, died in captivity. Just a few hours after

being placed in the pen, she most likely

suffered from a heart attack. 5 For the

already-tiny population, the death was a

huge loss. As a consequence, VaquitaCPR

was suspended, and the decision to cease

the capture portion of the operation was

unanimous. It was just November.



Now what? The fate of the vaquita is in

the hands of law enforcement. Poachers

must be stopped, even if they do not

target the vaquita. At the same time, any

solution must be tailored to benefit both

humans and vaquitas, as conservation

biology is intertwined with local economies

and communities. Illegal fishing must be

replaced with an alternative source of

livelihood for the fishermen. 5 Currently,

VaquitaCPR is working on identifying

the last few vaquita, using the markings

on their fins that are unique to each

individual. Acoustic recording devices

track and monitor the ranges of these

unique porpoises. Veterinarians assess

blood samples from the lab, to determine

what part of the plan - capture, transport,

or enclosure - made the vaquita panic. 5

Hopefully, the “panda of the sea” can be

saved from extinction. It serves as a story

that has greater implications, highlighting

how conservation is about balancing the

ecological concerns and the needs of the local communities.


Works Cited

[1] Joyce, C. Chinese Taste for Fish Bladder Threatens Rare

Porpoise in Mexico. NPR, Feb. 9, 2016. https://www.npr.

org/466185043/ (accessed Jan. 17, 2018).

[2] Grens, K. US Navy Dolphins to Capture Vaquitas to Save

Them from Extinction. The Scientist, Oct. 6, 2017. https://



(accessed Jan. 17, 2018).

[3] Albeck-Ripka, L. 30 Vaquita Porpoises Are Left. One

Died in a Rescue Mission. The New York Times, Nov. 11,

2017. https://nyti.ms/2hsMV5j (accessed Jan. 17, 2018).

[4] Nicholls, H. A Last-Ditch Attempt to Save the World’s

Most Endangered Porpoise. Nature, Apr. 7, 2017. http://

dx.doi.org/10.1038/nature.2017.21791 (accessed Jan. 17,


[5] Brulliard, K. A final bid to save the world’s rarest

porpoise ends in heartbreak. Is extinction next?

The Washington Post, Nov. 9, 2017. https://www.



term=.79abaa181c86 (accessed Jan. 17, 2018).

Images courtesy of Pixabay

Vectors courtesy of chiccabubble and Xihn


Designed By J. Riley Holmes

Edited By Anna Croyle


Quantum Computing

A Leap Forward in Processing Power


We live in the information age, defined

by the computers and technology that

reign over modern society. Computer

technology progresses rapidly every year,

enabling modern day computers to process

data using smaller and faster components

than ever before. However, we are quickly

approaching the limits of traditional

computing technology.

Typical computers process

data with transistors. 1

Transistors act as tiny

switches in one of

two definite states:

ON or OFF. 2

These states are represented


by binary digits

known as “bits,”

1 for ON and 0 for

OFF. 2 Combinations

of bits let us describe

more complex data,

which ultimately becomes

the basis for a computer.

For instance, a 2-bit computer

has four possible bit combinations at

any given time: 11, 10, 01, and 00. Every

additional bit doubles the number of

possible combinations and increases the

computer’s ability to store and process

data. 3 Shrinking the size of transistors

allows more transistors to fit on a single

chip, giving us greater processing power

per chip. However, modern transistors

are reaching the size of only a few atoms. 4

We will soon reach the physical limit to

how small and fast a transistor can be.
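The counting described above can be sketched in a few lines of Python; this is an illustration of the arithmetic, not anything from the article itself, and the function name `bit_combinations` is invented for the example:

```python
from itertools import product

# List every combination an n-bit computer can be in at a given time.
def bit_combinations(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

# A 2-bit computer has four possible combinations.
print(bit_combinations(2))  # ['00', '01', '10', '11']

# Every additional bit doubles the count: 2**n combinations for n bits.
for n in range(1, 5):
    print(n, len(bit_combinations(n)))
```

Running the loop prints 2, 4, 8, 16 combinations for 1 through 4 bits, showing the doubling that drives a computer’s capacity to store and process data.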

Since 1975, the computer chip industry

has followed Moore’s Law, the notion

that the number of transistors on a chip

will double every two years, but recently,

delays in advancements have caused some

to announce the death of Moore’s Law. 5

Though we may not be capable of making

transistors much smaller, we can push past

their limits with a new type of computer:

the quantum computer.

by Valerie Hellmer

Quantum computers use quantum bits,

or “qubits,” rather than bits. Qubits are

incredibly tiny particles that experience

quantum effects like superposition and

entanglement due to their small size. 2 A

qubit is in superposition when it is in a

combination of two states simultaneously.

So while a normal bit must be either 1 or

0, a qubit can be both 1 and 0. 2 This can

be difficult to imagine since it goes against

everything we encounter throughout our

lives; a flipped coin can either land on

heads or tails, not both sides

at once. Yet qubits seemingly

defy our reality and do just

that. A 2-qubit computer still

possesses the four original

bit combinations, only

now the qubits can

simultaneously hold

all four combinations. 6

The qubits have a

probability of being in

each of the four states,

but the qubits’ actual

combination is revealed only

after being observed, which

collapses the superposition. 6
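One way to picture this is a toy classical simulation, not real quantum mechanics or any quantum library: a 2-qubit register in an equal superposition assigns each of the four combinations an amplitude, the squared amplitudes give the probabilities, and observing the register picks exactly one outcome. All names below are invented for the illustration:

```python
import random

# Toy model of a 2-qubit equal superposition: each combination holds an
# amplitude of 0.5, and its probability is the amplitude squared (0.25).
amplitudes = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}
probabilities = {state: amp ** 2 for state, amp in amplitudes.items()}

# Observation "collapses" the register to a single definite combination,
# chosen at random according to the probabilities.
def observe(probs):
    states = list(probs)
    weights = [probs[s] for s in states]
    return random.choices(states, weights=weights)[0]

print(observe(probabilities))  # one of '00', '01', '10', '11'
```

Before observation the model carries all four combinations at once; each call to `observe` yields one definite result, mirroring how measurement collapses a superposition.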

Even stranger than superposition,

quantum entanglement is when

the state of one qubit affects the state of

another instantaneously over any distance. 7

For instance, if two qubits are entangled in

opposites states, then when

one qubit changes from 1 to

0, the other changes from

0 to 1 without any delay.

This switch allows

information to travel

incredibly quickly

in a quantum

computer. But

that is not all

entanglement

has to offer; it

also lets you receive

information on a group

of entangled qubits by

checking the state of only

one qubit. 3 Superposition

and entanglement allow quantum

computers to process data faster than

traditional computers such that a 56-qubit

computer would contain more processing

power than any traditional computer



ever built. 8 This achievement is known

as quantum supremacy over traditional

computing--an impressive feat considering

modern supercomputers can perform

93,000 trillion calculations per second. 8-9

Today’s top tech companies are getting

incredibly close to the quantum supremacy


IBM has been researching quantum

computers for over 35 years, and has

recently shown rapid progress. 2 In May

2016, IBM released access to a 5-qubit

quantum computer online, where anyone

can develop and run their own quantum

algorithms. 1 This quantum computer has

created opportunities for both scientists

and enthusiasts to interact with qubits and

has already been used for over 1.7 million

public experiments. 10 IBM then established

a new quantum computing division called

“IBM Q” in March 2017. 1 And more recently,

in January 2018, the company developed a

50-qubit quantum computer prototype. 10

While these computers cannot beat any

classical machine at present, IBM has its

vision set on a future powered through

quantum computing. 1

Another computer company, D-Wave

Systems, is known for advancing

quantum computing with

a different approach. The

company made headlines in

2013 by selling a 512-qubit

computer called the

D-Wave Two to NASA

and Google. 11 And in

2017, the company

released a 2000-qubit

computer called

the D-Wave 2000Q,

which can run certain

algorithms 100 million

times faster than an average

classical computer. 11-12 While

these numbers make it sound

as if D-Wave Systems has easily

achieved quantum supremacy, this is not

necessarily the case. D-Wave Systems’

computers have faced a lot of controversy

since they use a “quantum annealing”

approach, which has minimal control

of its qubits. 12-13 As a result, D-Wave

Systems’ “quantum annealing” systems can

only be applied to optimization problems,

as compared to IBM’s “gate-based” systems,

which have a much wider

scope of applications. 14-15

Despite the fact D-Wave

Systems is not

considered to have

achieved quantum

supremacy, the

company has

certainly pushed

forward the

boundaries of

computing. 16



While quantum

computing promises

to dramatically increase

processing power, there

are still limitations to its potential. For one,

a quantum computer cannot replace your

laptop anytime soon. Quantum computers

are extremely task specific, meaning that

even with IBM’s gate-based approach, those

ultra-high processing speeds will only work

best on certain types of problems. 16 In

addition, qubits are very unstable particles;

to be used in a computer, they have to be

kept at a temperature just a fraction of a

degree above absolute zero while shielded

from nearly all light and Earth’s magnetic

field. 16 Even the smallest vibration could

disrupt their state. Consequently, quantum

computers’ extreme operating conditions

currently confine their use to research

companies or data centers instead of

households. Likewise, the fragility of qubits

means they produce a lot of error. 16 Some

experts estimate that checking the results

of one qubit will require 100 additional

qubits, meaning that even when quantum

supremacy is reached, more qubits will

be necessary to make the computers

practical. 16 More realistically, quantum

computers could be used in conjunction

with traditional computers to yield

interesting results.

Despite its limitations, quantum computing

has the potential to transform the world.

One potential application is improving

chemical and biological models, which

need a lot of processing power to fully

represent the complex characteristics

of molecules. 16 Driven by quantum

computers, improvements in these models

could revolutionize our understanding

of chemistry, physics, and medicine. 16

Furthermore, experts believe that quantum

computers may revolutionize the fields of

machine learning and artificial intelligence,

which in turn could impact just about

every aspect of society. 16 However, some

companies and governments are also

preparing their defenses for “Y2Q,” the

year when a large-scale quantum computer

could bring down our current system of

encryption, which protects everything from

credit-card numbers to nationally guarded

secrets. 16 While some experts predict Y2Q

could occur as soon as 2026, it is difficult

to predict how security systems will change

alongside developments in computing, and

what the true impact of quantum

computers will be.

Works Cited

Much like how no one could

have predicted the future

of computers in the

1960s, we cannot tell

how much quantum

computing will shape

our lives in the coming

decades. However, with

every technology come

risks and benefits, and no

matter what, researchers

will continue to push the

boundaries of human capability.

[1] Murphy, M. IBM Thinks It’s Ready to Turn Quantum

Computing into an Actual Business. Quartz, Mar. 05,

2017. https://qz.com/924433/ibm-thinks-its-ready-to-turnquantum-computing-into-an-actual-business/


(accessed Oct. 20, 2017).

[2] Castelvecchi, D. Quantum computers ready to leap

out of the lab in 2017. Nature News [Online], Jan. 03,

2017. Nature Publishing Group. http://www.nature.com/


(accessed Nov. 17, 2017).

[3] Steane, A. Quantum computing. Rep. Prog. Phys. 1998,

61, 117-173.

[4] Markoff, J. Smaller, Faster, Cheaper, Over: The Future

of Computer Chips. The New York Times [Online], Sept. 26,

2015. https://www.nytimes.com/2015/09/


(accessed Oct. 20, 2017).

[5] Novet, J. Intel Shows off Its Latest Chip for Quantum

Computing as It Looks past Moore’s Law. CNBC, Oct. 10,

2017. https://www.cnbc.com/2017/10/10/intel- delivers-


html (accessed Oct. 20, 2017).

[6] Lochan, K.; Singh, T. Nonlinear quantum

mechanics, the superposition principle,

and the quantum measurement

problem, 2010, arXiv:0912.2845 [quantph]

(accessed Jan. 19, 2018).

[7] Raimond, J. et al. Manipulating

quantum entanglement with atoms

and photons in a cavity. Rev. Mod.

Phys. 2001, 73, 565-582.

[8] Kim, M. Google’s quantum

computing plans threatened by

IBM curveball. New Scientist,

Oct. 20 2017. https://



(accessed Jan. 19, 2018).

[9] Dongarra, J. China

builds world’s most

powerful computer.

BBC News [Online],

June 20 2016.



technology-36575947 (accessed Jan. 19, 2018).

[10] Morse, J. IBM’s quantum computer could change the

game, and not just because you can play Battleship on it.

Mashable, Jan. 08, 2018. http://mashable.com/2018/01/08/


(accessed Jan. 19, 2018).

[11] Jones, N. Google and NASA Snap Up Quantum

Computer D-Wave Two. Scientific American [Online], May

17, 2013. https://www.scientificamerican.com/article/


(accessed Nov. 17, 2017).

[12] Gibney, E. D-Wave upgrade: How scientists are using

the world’s most controversial quantum computer.

Nature News [Online], Jan. 24, 2017. Nature Publishing

Group https://www.nature.com/news/d-wave-upgradehow-scientists-are-using-the-world-s-most-controversialquantum-computer-1.21353

(accessed Nov. 17, 2017).

[13] Denchev, V. et al. Phys. Rev. X. 2016, 6, 031015.

[14] Michielsen, K. et al. Benchmarking gate-based

quantum computers, 2017. arXiv:1706.04341 [quant-ph]

(accessed Nov. 17, 2017).

[15] Marchenkova, A. What’s the difference between

quantum annealing and universal gate quantum

computers?. Medium, Feb. 28, 2017. https://medium.com/


(accessed Jan. 19, 2018).

[16] Nicas, J. How Google’s Quantum Computer Could

Change the World. The Wall Street Journal [Online], Oct.

16, 2017. Dow Jones & Company. https://www.wsj.com/


(accessed Oct. 20, 2017).

design by Namtip Phongmekhin

edited by Brianna Garcia




From completing common household

activities to performing dangerous tasks

on the International Space Station,

the modern day robot has become more

notably human-centered, picking up where

humanity’s physical capabilities have left off.

As they continue to become more applicable

to human life, the functionalities of robots

have expanded to cover many different fields

of study, including biomedicine, where a

robot’s practicality has been scaled down to

the molecular level.

Dr. Lydia Kavraki, a professor and

researcher at Rice University’s School of

Engineering, is one of the pioneers pursuing

the connections between the two domains

of robotics and bioinformatics. As a young

graduate student, Dr. Kavraki became

interested in robotics when she realized

how helpful robots can be for humans. She

remembers, “there were robots in these labs

and they didn’t do anything! They were just

static! And I was fascinated and said, okay,

these robots should do something and I

should work on motion planning.” As it is

defined today, the motion planning problem

is the problem of moving our “robot,” no

matter how small the size, from one place to

another while avoiding obstacles and walls.

Dr. Kavraki’s motion-planning algorithm

was developed many years ago, but it has

provided a foundation for modern-day

projects and research to expand upon. To

understand the algorithm, the concept of

“degrees of freedom” must be introduced.

These degrees can be thought of as the range

or mobility of certain mechanical systems,

such as the arm of a robot. More complex

robots would require systems with a greater

number of degrees of freedom and would be

much more difficult to plan a path for. A valid

motion can only be achieved by performing

actions that are feasible, or in other words,

within the specified degrees of freedom and

the robot’s physical constraints. 1

The complexity of this problem involves

determining the solution space, which is

defined as the set of all solutions that satisfy

the problem’s constraints. Previous methods

used in the robotics community have tried to

plan a path by partitioning the solution space

into its free parts, where the robots could

move, and forbidden parts, areas that were

obstacles and infeasible for motion. However,

Dr. Kavraki’s algorithm, called the Probabilistic

Roadmap Method (PRM), actually takes

samples of the space to test the mobility

of the robot. After sampling the solution

space, the algorithm then finds connections

among the different configurations of the

robots, and captures this connectivity in

a graph, forming approximate

maps of the free space. 2 The effects of Dr.

Kavraki’s research are prominent throughout

the robotics and biological industries, and

these sampling-based motion planners have

since been implemented in a variety of her

projects. Dr. Kavraki’s Open Motion Planning

Library (OMPL) has not only been used in

60 different robotics systems, but has also

become a fundamental component of the

Robot Operating System, the conventional

framework for writing robot software. 3
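The sampling idea behind the Probabilistic Roadmap Method can be sketched compactly. The snippet below is a minimal 2-D illustration, not the Kavraki Lab's implementation: the unit-square world, single circular obstacle, sample count, and connection radius are all our own illustrative choices, and the real method operates in configuration spaces with many more degrees of freedom.

```python
import math
import random

# A toy 2-D world: one circular obstacle in the unit square.
OBSTACLE_CENTER, OBSTACLE_RADIUS = (0.5, 0.5), 0.2
CONNECT_RADIUS = 0.25  # only try to link configurations this close

def is_free(p):
    """A configuration is collision-free if it lies outside the obstacle."""
    return math.dist(p, OBSTACLE_CENTER) > OBSTACLE_RADIUS

def edge_is_free(p, q, steps=20):
    """Check a straight segment by sampling intermediate points."""
    return all(
        is_free((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        for t in (i / steps for i in range(steps + 1))
    )

def build_roadmap(n_samples=200, seed=1):
    random.seed(seed)
    # 1) Sample random collision-free configurations.
    nodes = []
    while len(nodes) < n_samples:
        p = (random.random(), random.random())
        if is_free(p):
            nodes.append(p)
    # 2) Connect nearby pairs whose joining segment is also free,
    #    capturing the connectivity of the free space in a graph.
    edges = {i: set() for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (math.dist(nodes[i], nodes[j]) < CONNECT_RADIUS
                    and edge_is_free(nodes[i], nodes[j])):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges

nodes, edges = build_roadmap()
```

A query then snaps the start and goal onto this roadmap and runs an ordinary graph search, which is why the same machinery scales to high-dimensional problems.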

Ultimately, this computational approach

has become the basis for more complex

problems brought up within the robotics

and bioinformatics communities, regardless

of the dimensions of the solution space. In

the words of Dr. Kavraki, “We [now] have

efficient techniques for a very large number

of robots that [were] just impossible before

the invention of the sampling space and the

motion planning algorithms.” Furthermore,

these techniques can be scaled down to the

molecular level, where even motions of 1000

degrees of freedom can be measured. 1


While working on her research, Dr. Kavraki

discovered a remarkable relationship

between robots and another kind of machine

billions of times smaller that resides in

our bodies - the protein. “It turns out that

some of the algorithmics that we use for

modeling robots and modeling motion can

be adapted to reason about shape and

function of molecules,” Dr. Kavraki explains.

She also expresses the idea that there is

an “underlying theoretical component” that

connects these two domains, which prompted

her to work on projects integrating both

domains.
Perhaps the most significant application in

the robotics domain is with the NASA Johnson

Space Center. The Robonaut 2 (R2), which

is currently up in the International Space

Station, was designed and created by NASA

in collaboration with Dr. Kavraki’s team.



R2 is incredibly dexterous, having a total

of 34 degrees of freedom in its main body,

and is able to perform simple tasks such

as using human tools, fetching cargo bags

and measuring the quality of the air tank.

Furthermore, this humanoid robot takes over

repetitive and potentially life threatening

procedures that the astronauts on board can

avoid. 3 For example, it inspects the outside

of the space station and aids the docking of a

shuttle to a space station that is in continual

orbit. 4 This difficult procedure relies on

identifying and calculating the degrees of

freedom of both the shuttle and docking

platform. “You really want a robot to do this

job. You don’t want humans out to get all

the radiation as they inspect the outside of

the space station,” Dr. Kavraki comments.

Looking forward, she says, “NASA is interested

in robotics applications for caretaking

operations for astronauts, for future missions

to Mars, for establishing habitats that would

later be populated by humans.”

In the biomolecular domain, Dr. Kavraki’s

algorithm has enabled researchers to

determine the structures and motions of

molecules. It has also helped develop hands-on

techniques that have become integral

in medical procedures, such as the closing

of wounds by tying knots used in surgical

suturing. 3 In the past, suturing techniques

were extremely challenging and required

precise measurements to perform, but

by applying Dr. Kavraki’s motion planning

research, robotic surgical systems were able

to determine a motion pathway for directing

the surgical needle to specific locations that

minimize the interaction forces between the

needle and tissue. This has allowed suturing

to become more automated and precise.

In addition, her work has led to an efficient

way of analyzing molecular binding

conformations, another form of docking,

where interactions between molecules such

as those between a ligand and receptor can

be modeled to design new therapeutics.

Ligand docking can be determined in a

manner similar to shuttle docking. A motion

space is first explored. Then, depending on

the size of the ligand--the larger and more

flexible the ligand is, the more degrees of

freedom these large molecules have--docking

can be modeled computationally. These

interactions are often more challenging,

as the motion space for a large ligand has

increased dimensionality. 5 By undergoing

dimension-reducing techniques that use the

PRM as a foundation, Dr. Kavraki’s research

is able to simplify the process of obtaining a

valid motion space.
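The effect of that added dimensionality can be seen with simple arithmetic: discretizing each degree of freedom into even a handful of values makes exhaustive search grow exponentially, which is why sampling-based and dimension-reducing methods matter. The resolution and degree-of-freedom counts below are illustrative numbers, not figures from the Kavraki Lab's models.

```python
# Illustrative arithmetic: discretizing each degree of freedom (DOF)
# into 10 values gives a naive configuration grid of 10**dof cells,
# so enumeration becomes hopeless long before the ~1000 DOF of
# large molecules mentioned above.
RESOLUTION = 10

def grid_size(dof):
    """Number of cells in a naive grid over `dof` degrees of freedom."""
    return RESOLUTION ** dof

print(grid_size(2))   # planar robot: 100 cells
print(grid_size(6))   # rigid body in 3-D: 1,000,000 cells
print(grid_size(30))  # flexible ligand: 10**30 cells
```

Sampling-based planners sidestep this blow-up by probing the space at random rather than enumerating it.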

Dr. Kavraki’s research in the molecular

domain has led to applications in the

medical field, specifically the Kavraki Lab’s

immunotherapy project in collaboration

with MD Anderson. While the medical labs

perform the actual experiments, the Kavraki

Lab helps design the receptors involved

in immunotherapy and does modeling to

narrow down the many possible receptor

conformations for them to test. In other

words, they perform the experiments from a

computational perspective. 3 “We try to show

them how different proteins and receptors

and peptides dock, that is, what is the relative

position when they interact,” Dr. Kavraki says.

Currently, Kavraki and her lab are trying to

understand the mechanics of the motions of

these large molecules using techniques that

have already been established by the PRM.

In the future, Kavraki hopes to see her work

being implemented for constructing robots

that aid people with special needs and

accomplish tasks that are beyond physical

reach. She also looks forward to progress

on the immunotherapy project, where the

applications of her research can help patients

with severe medical conditions. “To improve

the quality of life, “ Dr. Kavraki says, is the

main objective of her passion and dedication

to these studies.


[1] Teodoro, M. L. et al. IEEE International

Conference on Robotics and Automation

2001, 1, 960-966.

[2] Kavraki, L. et al. IEEE Transactions on

Robotics and Automation 1998, 14, 166-171.

[3] Kavraki Lab. kavrakilab.org (accessed Nov.

05, 2017)

[4] Robonaut. robonaut.jsc.nasa.gov/R2

(accessed Nov. 05, 2017)

[5] Dhanik, A. et al. BMC Structural

Biology 2012, 13, 48-55

DESIGN BY Christina Tan

EDITED BY Albert Truong

by Alan Ji

Dr. Lydia Kavraki

Lydia E. Kavraki is the

Noah Harding Professor of

Computer Science, professor

of Bioengineering, professor

of Electrical and Computer

Engineering, and professor

of Mechanical Engineering at

Rice University. She received

her B.A. in Computer Science

from the University of Crete

in Greece and her Ph.D.

in Computer Science from

Stanford University. Her

research contributions are in

physical algorithms and their

applications in robotics (robot

motion planning, hybrid

systems, formal methods in

robotics, assembly planning,

micromanipulation, and

flexible object manipulation),

as well as in computational



translational bioinformatics,

and biomedical informatics

(modeling of proteins and

biomolecular interactions,

large-scale functional

annotation of proteins,

computer-assisted drug

design, and the integration

of biological and biomedical

data for improving human health).




AMPing up the
defense system



Arguably one of the greatest achievements

in modern medicine has been the

discovery and creation of antibiotics to help

the human body combat bacterial infections.

However, the overuse of antibiotics has led

to the development of another crisis: growing

bacterial resistance to antibiotics as evidenced

by the growing strains of Methicillin-resistant

Staphylococcus aureus, or MRSA, and other

similar organisms. This phenomenon of

antibiotic resistance is an unfortunate result

of the natural process of evolution. As more

antibiotics are used, the nonresistant strains

of bacteria will succumb to the antibiotic

while the resistant strains will proliferate

at a high rate. There is a dire need for the

development of novel antibiotics and other

similar treatments. 4 Fortunately, a new class

of antibiotics has recently been discovered

that may serve as a potential solution to

antibiotic resistance. Because they were

just recently discovered and have not been

used extensively in modern medicine, most

bacterial strains have not developed any form

of resistance to these new antibiotics, making

them very promising therapeutic agents in the
future.

This new class of antibiotics, called antimicrobial

peptides (AMPs), works to kill

bacteria by destroying their lipid membranes. 2

A cell is defined by its ability to maintain a


separate environment inside from the outside

via a lipid membrane; when the membrane

is structurally attacked and subsequently

compromised, the cell becomes dysfunctional

and dies. A large subset of AMPs work by

creating holes in the bacteria’s membrane

while other AMPs work by leaking selective

ions to kill the bacteria. On the other hand,


conventional antibiotics, like penicillin, act on bacterial proteins in order to hijack the bacteria’s internal machinery to prevent cell wall formation and mitigate bacterial growth. 9 While conventional antibiotics have been researched extensively, AMPs still hold many mysteries that researchers are still trying to solve.

Dr. Huey Huang, a professor of physics at Rice University, has spent much of his career studying membrane biophysics and the effect of drugs like AMPs on the membrane. Dr. Huang was drawn to the membrane because, unlike other biological systems, the membrane is relatively physical in nature. Moreover, membranes are extremely difficult to study due to their small size, as evidenced by their thickness of just 40 Angstroms (4 × 10⁻⁹ m), as well as the fact that membranes must be studied in an aqueous setting. Dr. Huang was attracted to the challenges studying cellular membranes posed and has spent much of his career developing tools and methods to do so. These challenges are best exemplified by the amount of scientific progress that has occurred in the study of proteins. While over 12,000 soluble proteins have been crystallized, only a little over 500 membrane proteins have been crystallized to have their structure revealed, despite representing somewhere between 20 and 30% of all proteins in an organism. 1,3

Dr. Huang and his team developed and modified methods to study such a unique system. “It’s so thin, just 40 Å thick. How do you study this structure? It cannot be optical; it’s far too thin for that … For the detail, we end up relying on x-rays and neutron in-plane scattering. These methods use x-rays or neutrons and send them as a beam and see how they are scattered. From these scatter patterns we can extrapolate structure,” Dr. Huang explains. In order to study the effect of AMPs on a cell membrane without having to deal with living bacterial cells every time, Dr. Huang uses giant unilamellar vesicles (GUVs), which are large cell membranes without cell organelles inside. According to Dr. Huang, using GUVs with a composition similar to bacterial cell membranes to study the effects of AMPs is far easier than using actual living bacteria, since the bacteria will inevitably die during the course of the study.

Using these methods, Dr. Huang has been able to elucidate the mechanism behind the membrane-active variety of AMPs. By placing AMPs into a giant unilamellar vesicle and using neutron scattering, Dr. Huang found that for the AMP to successfully attack the membrane and create pores, the AMP must be on both sides of the GUV membrane. To enter the interior of the vesicle, the AMP binds to the GUV membrane from the outside, which allows a minuscule amount of AMPs to diffuse into the interior. Once a particular ratio of AMP to phospholipids in the membrane is exceeded, pores are formed and the membrane is destroyed. 3

Giant unilamellar vesicle

AMPs bind and diffuse inside the vesicle, destroying the vesicle’s membrane

Recently, Dr. Huang was part of a team that discovered the mechanism of action behind daptomycin, an AMP drug that had been approved by the Food and Drug Administration despite its mode of action

being unknown. Using a combination of

imaging techniques, Dr. Huang and his team

found that daptomycin binds to a certain

component of the GUV membrane called

phosphatidylglycerol (PG). 4 He and his team

also found that daptomycin’s mechanism

of action depends on the concentration

of extracellular calcium, an important ion

that the cell regulates carefully. 4 After a

certain calcium concentration is exceeded,

daptomycin begins to bind to the membrane

and makes it permeable to potassium. 4 This

permeability to potassium prevents the

bacteria from maintaining the proper ion

concentrations it needs to survive. The exact

moment in which this permeability is induced

appears to be when there is a daptomycin/

calcium ratio of 2:3 and a daptomycin/PG

ratio close to 1:1. 4
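The two conditions reported above can be restated as a simple check. The function below is our own hedged illustration of the article's numbers — treating the ~2:3 daptomycin/calcium and ~1:1 daptomycin/PG ratios as thresholds — not a model taken from Dr. Huang's study.

```python
# Hedged restatement of the thresholds described above: potassium
# permeability appears once the daptomycin/calcium ratio reaches ~2:3
# and the daptomycin/PG ratio approaches 1:1. The function and its
# cutoffs are illustrative, not from the published analysis.
def permeability_induced(daptomycin, calcium, pg):
    """Return True when both reported ratio thresholds are met."""
    return (daptomycin / calcium >= 2 / 3) and (daptomycin / pg >= 1.0)

print(permeability_induced(daptomycin=2.0, calcium=3.0, pg=2.0))  # True
print(permeability_induced(daptomycin=1.0, calcium=3.0, pg=2.0))  # False
```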

According to Dr. Huang, much work in

the study of AMPs remains to be done, as

“[it is] a young field and [is] growing in

importance.” One of Dr. Huang’s main goals

is to find general groups of AMPs that have

similar mechanisms of actions. Finding these

general groups would make development

of newer AMP-based antibiotics a much

easier task and allow for greater variety in

therapeutic products. Another major aim is

to find classes of AMPs that are effective at

differentiating between human and bacterial

cells; this is a difficult process because AMPs

target the cell membrane, which is shared

by both organisms. However, these cell

membranes have some minor compositional

differences, and drugs like daptomycin can

bind to components unique to bacteria, like

PG. 2 In order for AMPs to be more effective

in a clinical context, Dr. Huang hopes to find

other AMPs like daptomycin and study them

to determine how this specificity is achieved.

Dr. Huang is not alone in his research

on AMPs. Many other researchers and

corporations around the world are trying

to discover or create new AMPs in order to

combat the growing problem of antibiotic

resistance. According to Dr. Huang, “Lots of

people are trying to develop AMPs for clinical

use; the money incentive is always there.

Some have succeeded, like daptomycin has.

Relatively fewer people are trying to figure out

the exact mechanism.” However, collaboration

with the pharmaceutical companies that

are trying to develop AMP drugs hasn’t


always been easy. “I have talked to many

biotechnology companies,” says Dr. Huang.

“They are sort of interested in mechanisms,

that’s why they talk to me but they don’t

give me much detail of their clinical trials or

anything. They consider that to be business

secrets.” This lack of collaboration hampers

progress in the field of AMPs, since

information sharing allows researchers to

build upon the work of others and inspire new

experiments. 5

As the nuances of AMPs are further

researched and discovered, this class of drug

may soon become a viable alternative to

antibiotics. With continued innovation and

advances in technology, AMPs may help

mitigate the ever-evolving problem of

antibacterial resistance. By harnessing their

capacity to kill bacteria in ways that most

pathogens cannot yet resist, researchers hope

to gain lasting ground against this

debilitating issue.


[1] Carpenter, E. P., et al. Curr Opin Struct Biol. 2008, 5,


[2] Epand, R. M., et al. Mol Biosyst. 2009, 1788, 289-294

[3] Huang, H. W., et al. Phys Rev Lett. [Online] 2004, 92,

198304. http://hwhuang.rice.edu/pdfs/Huang2004.pdf

(accessed Feb. 21, 2018)

[4] Lee, MT., et al. Biophys J. 2017, 113, 82-90

[5] Mahlapuu, M., et al. Front Cell Infect Microbiol. [Online]

2016, 6, 194. https://www.frontiersin.org/articles/10.3389/

fcimb.2016.00194/full (accessed

Feb. 21, 2018).

[6] Parker, Joanne L., et al. Adv Exp Med Biol. 2016, 922, 61-72.

[7] World Health Organization. http://www.who.int/


2017/running-out-antibiotics/en/ (accessed Dec. 22, 2017).
[8] USGS. https://www2.usgs.gov/datamanagement/share/

guidance.php (accessed Dec. 22, 2017).

[9] Yocum, R. R., et al. J Biol Chem. 1980, 255, 3977-3986



Evelyn Syau

Vatsala Mundra


Creating Global

Bioengineering Methods to Foster Neonatal Care


Around the world, more than 3

million children under the age

of 5 die every year from acute

respiratory infections, 1 most of which are

in the developing world. However, the

development of accessible and low-cost

technology in low-resource settings can

easily save these children’s lives. Recently,

a team of engineers at Rice University

led by Dr. Maria Oden has developed the

Pumani CPAP (Continuous Positive Airway

Pressure) machine, a novel mechanism

featuring cost-effective methods to solve

the problem of acute respiratory infections

in infants in low-resource settings.

Dr. Maria Oden, the director of the

Oshman Engineering Design Kitchen

(OEDK) and a professor in the department

of bioengineering at Rice University, guides

undergraduate students to create devices

solving global health issues. Dr. Oden

was inspired to research these types of

technologies to provide interesting design

projects for bioengineering students,

especially since “babies around the world

are dying needlessly from conditions that

we treat easily in the developed setting,

[and] something [can be done] to prevent

these types of deaths.”

The CPAP machine solves one of these

problems, helping newborn infants breathe

by treating acute respiratory infections.

By continuously pumping pressurized

flow into the lungs, the machine ensures

air sacs do not deflate, making it easier

to breathe for the infant. According to Dr.

Oden, the CPAP system works by keeping

an infant’s lungs inflated, since premature

infants often have lungs that collapse or

are too small. By blowing pressurized air

into lungs, the machine has helped bring

newborn survival up from 24% previously

to 65% currently.

The design process necessary to create the

CPAP machine was extensive, requiring

Dr. Oden and her colleague Dr. Richards-

Kortum to travel to Malawi to talk to

physicians about their technological needs.

Through interviews and observation, they

learned that almost 50% of the babies who

are prematurely born have respiratory


distress. But there was an even bigger

problem. The systems available at the time

cost $8,000 each, making them difficult to

access in places that needed them most,

low-resource settings.

Dr. Oden and her colleagues saw this

as an opportunity to engage senior

bioengineering design students. According

to Dr. Oden, “the student team worked

over the course of the year and came up

with a really amazing design.” Despite

its lack of aesthetic appeal (Dr.

Oden says it looked like a

plastic shoebox from

Target), the students

were able to achieve

pressures and flows

similar to that of

the expensive

system used at

Texas Children’s

Hospital. The

student project

also cost about

$140 - less than

a fiftieth of the

industry cost. Upon

testing, the student

project matched the flows

and pressures of existing CPAP

machines. With this promising data,

the system was taken to Malawi, where it

received user feedback and suggestions

for improvement, all culminating in a unit

ready for clinical trial.

Looking back at the experience, Dr. Oden’s

favorite recollection was during the start of

clinical trials. “We were training the nurses

on how to use the system, and we got a

call from one of our physician-colleagues

who was on call at the emergency room

in the hospital, and he said: “I have a

baby here that really needs CPAP. Are

you guys ready?” Dr. Oden and her team

were excited to put their machine to the

test, finally being able to use it to help the

sick baby who had a low oxygen

saturation and whose eyes

were rolled up in her

head. After putting

the baby on CPAP,

the baby became

alert within the

hour; she was

able to nurse

and breathe.


Dr. Oden could

see the impact

this design had

on its users: “I

felt directly the

work was impacting

this baby’s life.” But

she also realized the

impact of her guidance and

mentorship. She saw one

of the student designers of the CPAP,

who had taken a job with the Rice

bioengineering team that ran the clinical

trial, watch her device save a baby. For Dr.




by Pujita Munnagi

Oden, “the product of [her] work is the

baby that was saved but also this former

student who had a growth experience and

is capable of doing many amazing things

beyond that. And we’ve seen her do that

now. So for me, that one moment was just



The Pumani CPAP project is one of several completed by the Rice 360° Global Health program. In her time in the program, Dr. Oden has worked with colleagues and students to fulfill most needs of neonates and infants. Even though the CPAP machine will help premature infants breathe, they cannot survive without the right amount of heat, the right fluids, or the right nutrition. Since they are prone to so many issues, they need a comprehensive care unit that addresses all of their needs. According to Dr. Oden, “That’s really our focus now: how do you implement an entire neonatal nursery? What we’re trying to do is create that same kind of neonatal intensive care unit, but that is appropriate for the world’s poorest places.”


For Dr. Oden, a big part of her work is collaborating with colleagues of different disciplines and cultures and making sure engineers use their skills to help the world at large. Dr. Oden hopes the future also involves creating neonatal intensive care units (also known as NEST systems) and implementing them in hospitals all around the world. If successful, at least half a million lives a year would be saved, according to Dr. Oden. Another future area of focus is maternal health: care throughout pregnancy, gestation, and the delivery period. In the words of Dr. Oden, such care is critical because “if the babies have a safer delivery and birth, they’re [more likely] to survive. And if the mom has a safer delivery, she’s more likely to survive to be able to raise that baby well.” In addition to creating NEST systems and teaching students how to solve problems that impact global health, she enjoys knowing the students she instructs will go on to have influence in several ways. According to her, it’s about enabling engineers to see that they can and should make a difference.

Works Cited

[1] Denny FW, Loda FA. Acute respiratory infections are the leading cause of death in children in developing countries. The American Journal of Tropical Medicine and Hygiene 1986, 35, 1-2. https://www.ncbi.nlm.nih.gov/pubmed/3946732 (accessed November 15, 2017).

[2] Rice 360° Institute for Global Health. bCPAP Continuous Positive Airway Pressure. http://www.rice360.rice.edu/bcpap (accessed November 15, 2017).

Design By Madeleine Tadros

Edited By Deepu Karri


tubing port



total flow


O2 flow

o2 inlet


port CATALYST | 23

Mitochondrial Health




Beginning in the late 1980s, health disorders and genetic diseases have become increasingly attributed to the mitochondria. Current research projects use model organisms to understand the implications of mitochondrial health for the whole organism. Some of the most fruitful research has been performed using the model organism Caenorhabditis elegans, or C. elegans, a type of nematode (roundworm) that is only 1 millimeter in length. Most viewers peer into a microscope at these nematodes and see modest, squirmy lines. Dr. Natasha Kirienko peers into the microscope and sees limitless potential for discovery. In an attempt to redefine disease treatment at a global level, Dr. Kirienko studies mitochondrial surveillance pathways and their implications for genetics and cancer.


Dr. Kirienko was brought to Rice University by a $2 million grant from the Cancer Prevention and Research Institute of Texas (CPRIT). As an undergraduate and graduate student in Russia, Dr. Kirienko didn’t have the opportunities or equipment to pursue the research she was interested in. Coming to America, however, she found herself with access to advanced lab equipment and a relatively enormous stipend (compared to her maximum stipend of $12/month in Russia) with which she could do whatever she wanted. “Suddenly, the sky is your limit,” she declared, as a sure smile reached her eyes. “I didn’t need any encouragement to work hard.” Dr. Kirienko’s mindset has been infectious; it has clearly motivated Elissa Tjahjono, a graduate student currently working in the Kirienko Lab. Dr. Kirienko describes how “hardworking and motivated” Ms. Tjahjono was during her studies and how her determination carried her into graduate school, allowing her “to do [a] substantial amount of work in a year or so.” Ms. Tjahjono was the first author of a recent, monumental paper from Dr. Kirienko’s lab on a mitochondrial surveillance pathway important in the pathogenesis of Pseudomonas aeruginosa, a bacterium that affects cell iron availability and causes organism death.[1]

Dr. Kirienko’s fascination with the mitochondria began in graduate school. She had read about a particular gene motif, or distinct sequence of DNA, called the Ethanol Stress Response Element (ESRE). This motif had been independently identified seven times by different scientists, and it was shown to be upregulated (expressed more strongly) by ethanol-induced heat shock (heat shock occurs when a cell is subjected to a higher temperature than ideal).[1]

During her PhD studies, Dr. Kirienko discovered an anomaly: a genetic mutant that was supposed to reduce expression of the ESRE gene instead caused upregulation. Further, she found that the mutant was sensitive to not just one, but multiple stressors. So, “there was this puzzle [relating to ESRE] that [involved] multiple conditions that were different from each other.” She began asking: what is the underlying mechanism? What triggers ESRE activation? This led into her postdoctoral studies, during which she researched interactions between C. elegans and its accompanying pathogen, Pseudomonas aeruginosa. During that time, Dr. Kirienko and her colleagues found that a siderophore (iron carrier) called pyoverdine, produced by P. aeruginosa, kills C. elegans by causing severe mitochondrial damage. Almost all living organisms require iron for their survival, but it is difficult to acquire iron from the environment. Animals have complicated immune systems that limit the ability of pathogens to acquire iron during infection.[2] Pyoverdine has evolved to surmount this difficulty: it is capable of getting inside host cells, taking away iron, and bringing it back to the bacteria. Pyoverdine, she found, can remove up to a third of the host’s iron(III), which is about 20-25% of the total iron within the host. This results in organismal death.[1]

How are the two distinct concepts of ESRE and pyoverdine related, one may ask? At Rice, Dr. Kirienko found that the ESRE pathway was also upregulated after exposure to pyoverdine, even though pyoverdine exposure and heat shock are two very different stressors. This led her to draw a connection between ESRE and mitochondrial damage: ESRE is upregulated in mutants that are affected by a variety of stressors, and pyoverdine is a direct stressor that upregulates ESRE by damaging the mitochondria. Taken together, these observations pointed to an association between ESRE and mitochondrial damage. With this juncture acting both as a conclusion and a foundation, the Kirienko lab took the next step. They used small-molecule drugs such as rotenone and antimycin (known mitochondrial poisons) to test whether ESRE responds to mitochondrial damage. After much experimentation in the lab, they “were able to link this presence of [ESRE] in the promoter of effector genes of mitochondrial damage.”

Now, the Kirienko lab is working on understanding how pyoverdine is produced in bacteria. Testing for drugs that may inhibit pyoverdine, the lab recently found small molecules that can prevent pyoverdine synthesis or function. The tests are on a path to success, and a collaborator is currently testing the drugs in mice. Beyond these projects, Dr. Kirienko has a broad vision for the future implications of her lab’s work. One goal is developing future medicine for patients with cystic fibrosis, a disease tied deeply to excessive inflammation influenced by the presence of pyoverdine.[3] Another goal is to understand how to “leverage mitochondrial dysfunction [in] cancer.” Dr. Kirienko knows that cancers, in general, tend to accumulate many mutations, and the mutation rate in mitochondria is noticeably higher due to fewer checkpoint mechanisms. This means that mitochondria become dysfunctional much faster than, for example, nuclear DNA.[4] Accordingly, the lab was able to identify a subset of cancer cells that are most sensitive to mitochondria-damaging chemotherapeutics. They are now using C. elegans to find exactly which mutations cause the sensitivity. If this research succeeds, the lab “will be able to get a step closer to personalized cancer [treatment].”

“Mitochondrial diseases [have] now [become the] number one genetic disorder,” Dr. Kirienko states. Mitochondrial diseases have a variety of phenotypes, spanning from subtle muscle fatigue to completely nonfunctional muscles and neurons. The severity, she hypothesizes, depends on how well pathways mitigate mitochondrial damage. If these pathways, like the ESRE pathway, are performing well, then patients with affected mitochondria can have a much better healthspan, she extrapolates. Thus, if her lab can find the “members of the pathway,” they can find drugs that work with molecules activated within dysfunctional mitochondria. “Being able to transfer that to human health is a big thing,” she commented. The Kirienko lab is in the process of finding the specific transcription factors that bind to ESRE, carrying the hope of developing future medicine that will specifically target mitochondria-related cancers.

Works Cited

[1] Tjahjono E, Kirienko NV. PLoS Genetics 2017, 6. …article?id=10.1371/journal.pgen.1006876 (accessed Jan. 28, 2018).

[2] Our Need for Iron. http://www.irondisorders.org/our-need-for-iron/ (accessed February 20, 2018), Iron Disorders Institute.

[3] National Cancer Institute Dictionary of Cancer Terms. …dictionaries/cancer-terms (accessed January 28, 2018), National Institutes of Health.

[4] All About Mitochondria. http://www.lhsc.on.ca/…/Inherited_Metabolic/Mitochondria/ (accessed February 20, 2018), London Health Sciences Centre.







[Figure: The ESRE motif is activated by mitochondrial damage. Sequence of the Ethanol and Stress Response Element (top). Activation of ESRE-controlled GFP expression after exposure to the mitochondria-damaging drug rotenone (bottom).]

Images by Photoroylaty at Freepik

Design By J. Riley Holmes

Edited By Kalia Pannell


The Waterworks


by Andrew Mu


Water is the most abundant natural resource on Earth, yet clean drinking water remains inaccessible for almost one billion people across the globe.[1] This is because drinking water requires the right infrastructure, which isn’t always available. Refugee camps, small islands, and developing countries are just a few examples of settings where obtaining clean water is a daily struggle. Though improving water accessibility is a hefty task, Dr. Qilin Li, a professor of civil and environmental engineering, chemical and biomolecular engineering, and materials science and nanoengineering at Rice University, isn’t one to shy away from the challenge. Dr. Li and her team are addressing issues of water accessibility by developing technologies that harness the power of the sun to purify water.

One in nine people worldwide do not have access to clean, safe drinking water, and these people are closer to home than you would imagine.[2] Some residents along the Texas-Mexico border, for example, don’t have access to municipal water, or any easily obtained source of water for that matter. Locals obtain their water from a large tank that must be transported in and treated on-site to ensure safety. This water treatment system is inconvenient. More established water treatment sites, such as those serving municipal water sources, experience a different problem called biological fouling, a phenomenon in which bacteria grow on and damage the membranes used to filter water. Dr. Li explains, “This is a very big problem in water treatment. So, for example, a big seawater desalination plant in Tampa Bay, Florida was built many years ago, but when it was built and designed, they did not consider fouling of this membrane material. Eventually fouling got completely out of control, so they had to shut down and completely redesign the plant.” The lack of portable water treatment systems and fouling-resistant membrane materials can make treating water prohibitively expensive. According to Dr. Li, “We can treat any water, it's not [an] exaggeration, any water, to drinking water quality, but ultimately the problem is the cost.” Dr. Li seeks to provide clean water to all, utilizing nanophotonics-enabled solar membrane distillation and biological fouling-resistant membranes.

Nanophotonics-enabled solar membrane distillation is a solar desalination technology in which the energy captured by a solar cell is used to heat water into vapor. As sunlight hits the solar cell, a photothermal process converts photon energy into heat, which is then concentrated and used to boil salty, impure water. The resultant water vapor is transported through a thin, porous membrane. Salts and other contaminants cannot pass through and are left on one side of the membrane, while on the other side, the vapor condenses into pure water.[1] Dr. Li explains that this water purification method is advantageous because you can “easily scale it up, and it's modular, so by adding more solar cells you can enlarge the capacity of the plant.” This is the first approach to solar distillation that is scalable, which makes it suitable for communities in off-grid locations and allows for greatly reduced energy costs in desalination.[1] In addition, because this technology is so portable, it can be used in military applications. According to Dr. Li, a surprisingly large number of military fatalities result from transporting water to the frontlines of a battlefield, so a portable water purification system could save lives. “In the past what they used is reverse osmosis, but then they have to carry heavy pumps and big batteries. With this, hopefully all they need to do is unfold the membrane and put it under the sun and produce clean water wherever you have any source of water,” Dr. Li explains.
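Dr. Li’s point about modularity can be illustrated with a toy calculation. The per-module output below is a made-up placeholder, not a measured figure from her lab; the sketch only shows why capacity scales linearly with module count:

```python
# Toy illustration of a modular plant: if each distillation module treats a
# fixed volume per day, total capacity grows linearly as modules are added.
def plant_capacity(n_modules: int, liters_per_module_per_day: float = 20.0) -> float:
    """Total daily output of a modular solar-distillation plant."""
    return n_modules * liters_per_module_per_day

# Doubling the module count doubles capacity; no plant redesign needed.
small = plant_capacity(10)   # 200.0 liters/day
large = plant_capacity(20)   # 400.0 liters/day
```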

While solar distillation is intended for communities and military applications, biological fouling-resistant membranes are targeted at large water treatment plants. Dr. Li tackles biological fouling by creating membranes with embedded nanoparticles. These nanoparticles contain antimicrobials that are slowly released and interfere with biofilm formation. Once the chemicals are depleted, nanoparticles can be easily reloaded onto the membrane.[3] Dr. Li believes that approaching the problem with slow-release mechanisms will provide a longer-term control strategy than previous methods. By using fouling-resistant membranes, large amounts of money and energy can be saved that would otherwise be spent pre-treating the water to prevent fouling. The development of these fouling-resistant membranes would have a huge impact on industrial water treatment. This is particularly significant for Houston because of the applications of fouling-resistant membranes in the oil and gas industry. When drilling for oil and gas, large amounts of wastewater are produced, which are normally injected back into the ground. With more efficient large-scale water treatment, this wastewater can be repurposed to address water shortages and supply irrigation systems in agriculture.

Currently, Dr. Li is working to disseminate the solar purification and fouling-resistant membrane technologies developed in her laboratory. NEWT, the Nanosystems Engineering Research Center for Nanotechnology-Enabled Water Treatment at Rice, is teaming up with industry partners to commercialize water purification technologies. One such partner, Localized Water Solutions Inc., an Austin-based company committed to reducing water shortages by building smarter water systems, has licensed solar membrane distillation for emergency response and military applications. Dr. Li is also part of a startup company called SOLMEM, which is looking to apply the technologies to bigger markets, such as seawater desalination and industrial wastewater treatment in the form of zero liquid discharge, a wastewater management strategy that eliminates liquid waste. By studying and applying novel water treatment technologies, Dr. Li hopes to create a sustainable water supply for those in need.


Works Cited

[1] Dongare, P. T. et al. Proc. Natl. Acad. Sci. U.S.A. 2017, 114.

[2] The Water Project. https://thewaterproject.org/waterscarcity/water_stats (accessed Nov. 9, 2017).

[3] Wu, J. et al. J. Membr. Sci. 2017, 531, 68-76.

DESIGN BY Juliana Wang

EDITED BY Albert Truong



Using Light-Activated Metal Complexes to Combat Alzheimer’s

By Oliver Zhou


Imagine slowly losing your memory, motivation, and communication skills: the things that make you who you are. These are just a few of the effects of Alzheimer’s disease, and they occur over the course of only a few years. When someone has Alzheimer’s, amyloid beta proteins, which are products of a normal protein recycling process in the brain, improperly join together to form long strands called fibrils.[1] Even though the individual protein monomers are harmless, these fibrils can stick together to become toxic plaques that inhibit neuron function and cause cell death, leading to the debilitating effects of Alzheimer’s on the brain.[1] Although the molecular mechanisms of Alzheimer’s are understood, the causes of these microscopic failures are unclear, and no treatment to stop or reverse its progression currently exists. In addition, there are no definitive practices or measures known to significantly decrease the risk of developing Alzheimer’s.[2,3] As Alzheimer’s patients deteriorate over the course of several years, supportive and palliative care are the only means of assistance, leading to ever higher costs of patient care. Altogether, Alzheimer’s disease is one of the most costly and serious diseases plaguing the world.[4]


Enter Dr. Angel Martí. An inorganic chemist from Puerto Rico, Dr. Martí has spent his whole life dreaming of being a scientist. He began his career studying the photophysical properties of metal complexes at the University of Puerto Rico. In 2004, he joined a research group at Columbia University, where he contributed to the development of fluorescent probes formed from metal complexes for the detection of DNA and RNA. In 2008, he joined Rice University’s Department of Chemistry, where he now combines his knowledge of metal complexes and biological proteins to study neurodegenerative diseases. This combination of the building blocks of inorganic chemistry with the fundamentally biological issues of proteins and diseases is what makes his research exciting. As Dr. Martí put it, since “people don’t tend to study amyloid beta through the eyes of an inorganic chemist…being an inorganic chemist allows me to bring something new to the table, and that something new is the use of metal complexes.”


Metal complexes have special photophysical properties that allow Dr. Martí to study the amyloid beta buildup of Alzheimer’s in new ways. A metal complex is essentially a metal atom, such as iron, ruthenium, or rhenium, surrounded by and bound to organic molecules: a “hybrid of organic and inorganic materials,” as Dr. Martí describes. An earlier project of his involved ruthenium metal complexes, which increase in fluorescence over 100-fold when bound to amyloid beta aggregates. This complex is useful for detecting and assessing the extent of amyloid beta protein aggregation in the brain, and with its higher visibility and long lifetime, it holds numerous advantages over the more commonly used indicator, thioflavin T.

Ruthenium metal complexes have many great uses, but Dr. Martí discovered something even greater. “[When] we changed that metal [in the metal complex] to rhenium,” Dr. Martí describes, “very strange and wonderful things started happening.” Once irradiated with blue light, rather than merely fluorescing, the rhenium metal complex could actually oxidize the parts of the amyloid beta aggregate that it bound to. This new discovery is called footprinting, and it can reveal exactly where hydrophobic compounds like the rhenium metal complex bind, making it easier to engineer drugs to bind to those sites and combat Alzheimer’s. However, this is not all the rhenium metal complex can do. When it binds to the large amyloid beta aggregates, the oxidizing effects of the metal complex are insignificant for purposes other than signaling where it can bind. However, when it binds to the harmless, monomeric form of amyloid beta, the oxidation can significantly change the monomers’ individual shapes. This is enough to prevent them from forming fibrils and aggregating altogether. If there were some way to preemptively insert and activate this rhenium metal complex in the brain before symptoms of Alzheimer’s began to show, the amyloid beta monomers would never begin to aggregate at all. This technique could lead to a potential “vaccine” to prevent Alzheimer’s.

This Alzheimer’s vaccine is the end goal of Dr. Martí’s research, but the lab still faces many challenges. Currently, the only way to activate the metal complexes after they bind to amyloid beta is by irradiating them with blue light. This would be impossible in a human, since our tissues are not transparent to blue light, only to red light. Think about what happens when you shine a light through your hand: the light appears red. If the rhenium metal complex could be altered so that it could be activated by red light, it would allow for potential use inside humans. Another challenge is ensuring that the rhenium metal complexes are not toxic to humans. This also raises the question of how the rhenium metal complexes would make it to the brain. Most drugs are delivered via the bloodstream, and the brain is separated from the blood by the highly selective blood-brain barrier. Currently, the metal complex would not be able to get through this barrier to the brain.

Despite these challenges, Dr. Martí’s research presents a beacon of hope in the fight against Alzheimer’s. Knowledge of the binding sites of hydrophobic substances on amyloid beta aggregates is critical to the design of future drugs that may be able to neutralize or disintegrate them. And if a rhenium metal complex that can absorb red light and make it into the brain could be synthesized, the progression of Alzheimer’s could be stopped, or the disease even prevented altogether.


Works Cited

[1] What Happens to the Brain in Alzheimer’s Disease? National Institute on Aging [Online], May 16, 2017. https://www.… (accessed Dec 16, 2017).

[2] More research needed on ways to prevent Alzheimer’s, panel finds. National Institute on Aging [Online], June 15, 2010. https://www.nia.nih.gov/news/more-research-neededways-prevent-alzheimers-panel-finds (accessed Jan 7, 2018).

[3] Alzheimer’s & Dementia Prevention and Risk. Alzheimer’s Association Research Center [Online]. https://www.alz.… (accessed Jan 7, 2018).

[4] Latest Alzheimer’s Facts and Figures. Alzheimer’s Association [Online]. https://www.alz.org/facts/ (accessed Jan 7, 2018).


Image from freepik.com


EDITED BY Roma Nayyar




by amna ali


Heart failure. Congenital heart defects. Each year, both of these diseases take millions of lives, yet much remains to be learned about their mechanisms and their treatment. According to the Centers for Disease Control and Prevention (CDC), 6.5 million Americans have heart failure, a condition in which the patient’s heart cannot effectively pump blood throughout the body, and the number of individuals with heart failure is expected to increase by 46 percent by the year 2030.[1] In addition, among people who develop heart failure, half die within five years of the diagnosis.[1] This can be attributed to both an aging population and the rise of risk factors such as hypertension and diabetes.[2] Though congenital heart defects do not affect nearly as many individuals as heart failure, they still pose problems in terms of treatment cost and infant mortality, especially in underdeveloped countries.[3] As of 2017, the average cost to treat a congenital heart defect from infancy to age 21 is anywhere between $47,500 and $73,600, an astronomical cost in many healthcare settings.[3] In addition, hospitalizations for adult patients with congenital heart defects have doubled over the past 20 years, with congenital heart surgery accounting for nearly 20% of these admissions.[4] Although both heart failure and congenital heart defects have become more manageable, we have yet to find a way to prevent these diseases from developing in the first place.
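The CDC figures above combine into a simple projection. The arithmetic below is ours, not the article’s, and uses only the two numbers cited in the text:

```python
# Rough projection: 6.5 million Americans with heart failure today,
# grown by the cited 46 percent increase expected by 2030.
current_millions = 6.5
projected_millions = current_millions * 1.46
print(round(projected_millions, 2))  # 9.49 -- roughly 9.5 million people
```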

Dr. Jun Wang, a principal investigator at the Texas Heart Institute, seeks to discover the underlying causes behind cardiovascular abnormalities. His research team is currently “investigating the mechanisms of congenital heart defects and heart failure…[by] examining how cardiomyocyte survival and death is controlled.” Cardiomyocytes, or heart cells, undergo cell death when a defect causes excess production of apoptosis-inducing factor (AIF), a protein that sits in the cell and waits for a “death” signal from the nucleus.[5] In a typical heart cell, the nucleus releases this “death” signal when the cell is somehow injured or too old.[5] Once AIF receives this signal, it causes the DNA of the cell to condense and fragment in order to prepare for apoptosis, or cell death.[5] In the heart cells of an individual with heart failure, too much of the “death” signal is released, causing a release of too much AIF and leading to excessive, unnecessary cell death. Fortunately, certain modification processes can be used to control levels of AIF.

Dr. Wang’s research team is specifically examining how a process known as sumo conjugation, a modification process in which a small protein known as sumo is attached to and detached from other proteins in the cell, can regulate AIF. Through sumo conjugation, various processes in the cell, including those that control cell death, can be turned on and off. Dr. Wang is exploring how sumo conjugation can be used to regulate the production of AIF in the cells of a person with heart failure. As heart failure involves random apoptosis of cardiomyocytes, understanding the signaling process associated with cell death and its regulation is a crucial step toward curing heart failure. Heart failure, however, is not the only disease that could potentially be managed through sumo conjugation.

Dr. Wang’s team is also researching how the processes associated with congenital heart defects can be regulated via sumo conjugation. Congenital heart defects, problems in the structure of the heart present at birth, often involve cells that have grown either too much or too little. This results in defects such as holes in the wall separating two chambers when the cells don’t grow enough, or a heart valve that is too small when the cells grow too much. Congenital heart defects originate in the fetus, where stem cells that have not yet committed to a specific type of organ or tissue undergo a process known as the hippo pathway, a type of sumo conjugation. If the hippo pathway is turned on, the stem cells stop growing; if the hippo pathway is turned off, the stem cells continue to differentiate into heart cells.[6] Dr. Wang’s research team is attempting to determine precisely which genes are responsible for controlling the on and off switch of the hippo pathway. This is tested by “knocking out,” or making ineffective, certain genes in stem cells and determining the effects of the knockout on the hippo pathway.[6] Thus far, in its preliminary research, Dr. Wang’s team has found that a gene known as ari3b may play a crucial role in the hippo pathway; cells with ari3b “knocked out” stopped differentiating. In other words, although further research is required, Dr. Wang’s team has possibly found a way to control the incorrect or excessive heart cell growth that contributes to congenital heart defects, a potentially crucial step towards ending the disease once and for all.

[Diagram: stem cells either stop growing (hippo pathway on) or continue to differentiate (hippo pathway off)]

Dr. Wang hopes that with further research on sumo conjugation and the hippo pathway, his work will one day help other researchers better understand cardiomyocyte death in heart failure and random, uncontrolled growth in congenital heart disease. By understanding the mechanisms that cause these diseases, Dr. Wang states, rather than merely treating the symptoms through surgeries, transplants, and drugs, we may be on the path to finding a cure for diseases that have afflicted humanity since the beginning of time.


Works Cited

[1] Thacker, C.; Del Barto, J. Latest statistics show heart failure on the rise; cardiovascular diseases remain leading killer. American Heart Association, Jan. 26, 2017. http://newsroom.… (accessed Nov. 3, 2017).

[2] Heart Failure to Increase by Nearly 40 Percent in Next 15 Years. American Heart Association News, Sept. 29, 2015. … (accessed Jan. 8, 2018).

[3] Congenital Heart Defects (CHD): Data and Statistics. Division of Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, Jan. 8, 2018. … (accessed Feb. 7, 2018).

[4] Dmyterko, Kaitlyn. Circ: Pediatric congenital heart surgery costs more than adult surgery. Cardiovascular Business, Oct. 4, 2011. http://www.cardiovascularbusiness.com/topics/… (accessed Feb. 8, 2018).

[5] Cande, C.; Vahsen, N.; Garrido, C.; Kroemer, G. Cell Death and Differentiation 2004, 11, 591-595.

[6] Yu, F.; Guan, K. Cold Spring Harbor Laboratory Press [Online] 2013, 27, 355-371. http://genesdev.cshlp.org/content/27/4/355 (accessed November 2017).

Icon from pngtree

DESIGN BY Evelyn Syau

EDITED BY Roma Nayyar


1. Dr. Wang’s favorite procedure to conduct in the lab is injecting genes, proteins, or viruses with green fluorescent protein (GFP), which has a fluorescent emission wavelength in the green region of the visible light spectrum. This means that when GFP is exposed to certain wavelengths of light, it turns a bright green color. Watching the specimen “light up like stars in the night sky, all under the microscope” is beautiful, Dr. Wang says.

2. GFP has existed in jellyfish for millions of years, but it has only recently begun to be used in research labs. Dr. Wang uses GFP to tag “knock out” genes in his samples so that he can determine which samples can be used in his study.



the exciting

new field of neurogenesis

aging & autism

by Christine Tang


For a long time, scientists believed that

humans are born with all the neurons

that they will ever have, and that these

neurons can only die. However, scientific

evidence from the 1960s started to show

that neurogenesis, the production of

neurons from neural stem cells, occurs in a

portion of the brain called the

hippocampus, which is the

center of learning and

memory in the brain.

Though this finding was received with skepticism in the scientific community, it was further confirmed when researchers found in the 1980s that adult songbirds produced new neurons when they learned new songs. 1 Today, we know that each of the two human hippocampi makes about 700 new neurons each day. 2

Neurogenesis is the process by which new

neurons form from primary neural stem

cells (NSCs) in the brain. 3-4 This process starts

very early in development in the womb, but

also occurs in the hippocampus throughout

the lifetime. However, neurogenesis is much

more prominent in a young brain than in an

older brain, and a decrease in neurogenesis

is associated with memory decline and

mood changes. Dr. Mirjana Maletic-Savatic,

an assistant professor and child neurologist

at Baylor College of Medicine and Texas

Children’s Hospital, is investigating this

process of neurogenesis in the hopes of

making the old hippocampus young, so that

older people will still be able to learn and

remember at the same capacity as a young

child. Development of new neurons is not

a simple process, and involves a series of

events called the neurogenesis cascade.

In the subgranular zone (SGZ) of the hippocampus, an NSC divides into a daughter

cell. Then, the daughter cell produces many

clones, which each have three fates: they

can divide more, differentiate, or die. Dying

neurons are part of a natural process

called pruning, which is the reason

why adults have fewer neurons

than infants do. The cells that are

stimulated to differentiate will

gradually progress into immature

and then mature neurons,

which are integrated into the existing hippocampal circuitry.


The mother NSC does not die,

but it does have a big limitation. It

can only divide into daughter cells

a few times, which means that the

number of daughter cells that can originate

from one primary NSC is limited.5 After

dividing several times, the primary

NSC transforms itself into an

astrocyte, which is a type of

glial cell (non-neuronal cell)

that cannot differentiate

into new neurons. Even

though daughter cells can

reproduce, the high death

rate of differentiated cells

and immature neurons

limits the production of new

mature neurons. Therefore,

during aging, there is a decrease

in the number of stem cells, a

decrease in the number of newborn

neurons, and an increase of astrocyte

density in the hippocampus. These factors

contribute to cognitive decline and diseases

as we age. Dr. Maletic-Savatic and Dr. Fatih Semerci, a postdoctoral researcher in her lab, identified a gene called Lunatic fringe that is

a selective marker of NSCs. 6 Lunatic fringe

mediates Notch signaling, which allows for

communication between the mother and

daughter cells and is involved in quiescence

(dormancy) and differentiation of NSCs. This

is a unique mechanism in which progeny

of NSCs can send feedback signals to the

mother NSC to modify its fate. Therefore,

Lunatic fringe is a control step; it allows NSCs

to decide when to divide or stop, as NSCs

can only divide a limited number of times. In

addition, Dr. Maletic-Savatic and Dr. Semerci

developed a new mouse model with the

Lunatic fringe gene that traces the lineage of

NSCs. 2,6 This allows for future experiments

that investigate other unknown mechanisms

of NSC differentiation or effects of knockout

genes on neuron development and mouse

behavior and cognition.

Research in neurogenesis could lead to potential therapies for many neurological patterns or disorders

In addition, Dr. Maletic-Savatic has found

that epilepsy and impaired neurogenesis are

linked. In a joint study with Dr. Juan

Manuel Encinas, she injected

kainic acid, a neuroexcitatory

amino acid,

into the dentate gyrus

of the hippocampus.

They discovered

that high doses of

kainic acid resulted

in neural excitability

and terminal NSC

differentiation into

reactive astrocytes, 7

which are cells that

respond to pathological

conditions such as stroke and

epilepsy. 8 Unfortunately, this results in two

major problems: the NSC population will

decrease over time and reactive astrocytes

can increase inflammation, form scar tissue

and disrupt synaptic connectivity. 9-10 At this


[Figure: critical periods of the neurogenic cascade (1-4 days; 1-3 weeks), from stem cells and NSCs through ANPs and neuroblasts (NBs) to immature neurons, with apoptotic cells shown in the granule cell layer.]


Images of neural stem cells (NSCs)

differentiating into neurons. NSCs are vital for

learning, memory and mood.





The neurogenic cascade, which shows the differentiation of neural stem

cells (NSCs) into amplifying neural progenitor cells (ANPs) and eventually,

into immature neurons and granule cells. Throughout differentiation and

proliferation, cells die by a process called apoptosis and microglia clean up

the dead cells.

time, findings highlight new mechanisms

that may contribute to epilepsy but

do not propose new therapies. In the

future, researchers plan to prevent

epileptic activation of NSC conversion

into reactive astrocytes to stop rapid

depopulation of NSCs.

Dr. Maletic-Savatic is also interested in

early neuronal development and sees

prematurely born babies and autistic

children at a clinic. Neurogenesis in

early periods of life can modulate the

ability to learn, and she is focused on

developing an integrative approach

to the study of early developmental

disorders, such as autism. Behavioral

features of autism include repetitive

thinking and the inability to accept

new environments. Autistic children

like routine and have trouble in new

environments or with new objects in

their environment. Newborn neurons

are involved in pattern separation,

which is a process that helps people

distinguish new and old objects in their

environment. Pattern separation may be

disrupted in autistic children; perhaps

their hippocampal neurogenesis has

something to do with this disruption.

This is important because autism is typically not diagnosed until about 2-3 years of age, yet earlier intervention is critical: the human brain has the most plasticity in the first two years of life. Dr. Maletic-Savatic is trying to do that by developing an image-based model of autism that

could help diagnose autism earlier

through mapping different parts of the

brain using various techniques such

as magnetic resonance imaging (MRI)

and magnetic resonance spectroscopy

(MRS). 11-12

Neurogenesis is a hot field right now

because it is a potential target for new

therapies and drugs. Research in the field

could lead to potential therapies for many

neurological patterns or disorders, such as

age-related cognitive decline, epilepsy and

autism. These projects are interdisciplinary

and often involve collaborations around

the world. In the future, Dr. Maletic-Savatic

hopes to improve human neurological

function and health by elucidating other

mechanisms that are involved with Lunatic

fringe and Notch signaling as well as

epileptic and autistic processes that affect

NSC differentiation.

Works Cited

[1] Blakeslee, S. A Decade of Discovery Yields a Shock

About the Brain. The New York Times, Jan. 4, 2000. http://



Dec. 1, 2017).

[2] Gutierrez, G. Lunatic Fringe gene plays key role in

renewable brain. Baylor College of Medicine [Online], July

19, 2017. https://www.bcm.edu/news/brain/lunatic-fringegene-brain-renewal

(accessed Dec. 1, 2017).

[3] Gage, F. H. Science. 2000, 287, 1433-1438.

[4] Altman, J.; Das, G.D. J. Comp. Neurol. 1965, 124, 319-335.

[5] Encinas, J. M. et al. Cell Stem Cell. 2011, 8, 566-579.

[6] Semerci, F. et al. Elife. [Online] 2017, 6. https://


[7] Sierra, A. et al. Cell Stem Cell. 2015, 16, 488-503.

[8] Haim, L. B. et al. Front Cell Neurosci. 2015, 9, 278.

[9] Gutierrez, G. Neural stem cells massively turn into

astrocytes in a model of epilepsy. Baylor College of

Medicine [Online], May 7, 2015. https://www.bcm.edu/


(accessed Dec. 1, 2017).

[10] Sofroniew, M. V. Trends Neurosci. 2009, 32, 638-647.

[11] Ward, A. Solving the mystery of autism. The Houston

Chronicle, Apr. 29, 2013. http://www.chron.com/news/


(accessed Dec. 1, 2017).

[12] Nace, M. Autism Researchers at Texas Hospital Hunt

For Autism’s Roots. BioNews Texas, Apr. 24, 2013. https://


(accessed Dec.

1, 2017).

Designed by NamTip Phongmekhin

EDITED BY Roma Nayyar




Despite the Middle East’s abundance

of black gold, it severely lacks

two other resources needed for

stable socioeconomic development.

The limited supplies of and increasing demands on each of these vital, highly interdependent resources (food, water, and energy) are a main point of tension for

countries in this region. In fact, increased

desertification and loss of fertile land,

water scarcity, and climate variability

are all cited by the United Nations Environment Programme as

having been precursors for

past conflict (specifically in

Darfur, Sudan).¹ Balancing

the demands on these

resources is key to

achieving regional

stability. Looking

specifically at water

stress in the Middle

East, an analytical

lens that can be used

to understand the

challenges in alleviating

this stress and achieving water

security is the Food-Energy-Water (FEW)

Nexus. This FEW nexus can help us break

down the relationship and involvement

of food and energy production on water

security, which (defined by UN-Water)

is the ability to secure enough water, and the right quality of water, needed

for sustaining our well-being and socioeconomic development.²


Mr. Gabriel Collins, J.D., a Baker Botts

Fellow in Energy & Environmental

Regulatory Affairs at the James A. Baker

III Institute for Public Policy, is currently

exploring the environmental, legal, and

economic implications of the FEW Nexus.

His work is primarily water-related and

energy-related, and one of his recent










publications, titled “Carbohydrates, H2O,

and Hydrocarbons: Grain Supply Security

and the Food-Water-Energy Nexus

in the Arabian Gulf Region”, builds a

comprehensive review of the interactions

between each component of the nexus and

ultimately offers policy recommendations

to maximize resource security.

One aspect of the nexus that Mr. Collins explores in his paper is the fundamental

interaction between water usage and food

production: “[w]ater-thirsty staple food

grains must be irrigated in the region’s

arid climate”.³ Saudi Arabia, one

country Mr. Collins discusses in

his paper, had a challenging time

meeting both the hydrological

and agricultural demands

of its growing population

and “between 1980 and

1999 alone, Saudi Arabian

farms consumed more than

300 billion cubic meters of

water—most of which came

from deep aquifers that do

not recharge—and spent tens of

billions of dollars in a failed attempt

to cultivate wheat on an industrial scale

in its harsh desert climate”.⁴ In fact, the

high economic and environmental costs

of this process were so steep, that Saudi

Arabia completely abandoned wheat

production in 2016, and instead turned to

relying solely on imports.⁵ This situation

of unequal supply and demand is not

unique to Saudi Arabia. As the Middle East

is an arid region, lacking in rainfall, fertile

soil, and adequate humidity,⁶ many of the

countries here face similar challenges in

meeting the food and water demands of

their growing populations.

Another country that is facing a similar

challenge in balancing demands for

food and water is Iran. However, unlike Saudi Arabia, Iran has chosen to retain its mindset of self-sufficiency and has

reinvigorated efforts to increase domestic

crop production. This encouragement

of domestic wheat production by the

government is concerning because Iran’s

groundwater depletion rate heralds the

possibility that their aquifers (their primary

water source for agriculture) will be

depleted in the next 50 years.⁷ Moreover, Iran is currently going through severe water scarcity and droughts that may have an irreversible impact on current and future water availability, which Mr. Collins explains in an apt analogy.⁸



































Another major barrier to achieving water

security is meeting growing energy

demands. The Middle East’s wealth comes

from its disproportionate abundance of petroleum and natural gas; however, its high investments in energy production and exportation have severe implications for water usage. In Saudi Arabia, “energy

production accounts for the second largest

use of water behind agriculture and is

expected to continue rising over the next

15-20 years.”⁹

The connection between water and energy

perhaps isn’t as straightforward as that

between water and food. Water is used in

the generation of electricity, the extraction

and processing of fossil fuels, and in the

production of biofuel.¹⁰ Energy, in turn, is used in water extraction, desalination, and

transportation. From this interdependence

stems the difficulty in separating the

impact of the usage of one resource on

the availability of the other; this difficulty

makes it harder to develop deliberate

and sustainable practices that maximize

effective resource allocation.

Mr. Collins also introduces methods

that can be used to increase resource

security and facilitate the development

of sustainable practices. He discusses

the benefits of more solar and nuclear

powered infrastructure, especially for small

farmers and communities who aren’t using

established power grids. Particularly for

this subgroup of energy consumers, Mr.

Collins believes that there is a lot of merit

in installing solar-powered groundwater pumping infrastructure in wells. And using

nuclear power, rather than fossil fuels, also

has the added benefit of freeing up gas for

other uses, such as desalination.¹¹

However, questions regarding effective

allocation of resources cannot be

answered easily, and of course there are

differing viewpoints as to what the most

“effective” practice for each country is.

But placing resource conservation at the

forefront of the region’s environmental,

economic, and sociopolitical agendas will

help spark serious efforts to think about

the importance of water security and


Going forward, Mr. Collins plans to

continue to research the FEW nexus in

Sub-Saharan Africa and South America.

The main purpose of his research is to help

introduce sound environmental, economic,

and legal analysis to policy makers to

aid them in creating stronger policies.

His vision is that he and researchers like

him will be able to offer nonpartisan

policy recommendations to policymakers

in the US and abroad.¹² The potential

implications of being able to fully analyze

each component of the nexus will be to

create specific policies that will encourage

the use of clean energy, promote effective

economic/trade policies, and allow each

country to take advantage of the resources

they have while still making sure that

future demand for those resources will be met.



[1] Pedraza, L.E.; and Heinrich, M. Water Scarcity:

Cooperation or Conflict in the Middle East and North

Africa? Foreign Policy Journal. Sept. 2 2016. https://www.


(accessed Feb. 17 2018).

[2] What is Water Security? Infographic. UN Water

[Online]. May 8 2013. http://www.unwater.org/

publications/water-security-infographic/ (accessed Feb.

17 2018).

[3] Collins, G. Carbohydrates, H2O, and Hydrocarbons:

Grain Supply Security, and the Food-Water-Energy

Nexus in the Arabian Gulf Region. Center for Energy

Studies, Baker Institute. June 2017. https://www.


QLC_Nexus-061317.pdf (accessed Feb. 17 2018).

[4] Ibid.

[5] Ibid.

[6] Ibid.

[7] Senguptajan, S. Warming, Water Crisis, Then Unrest:

How Iran Fits an Alarming Pattern. The New York Times

[Online], Jan. 18 2018.

[8] Interview with Mr. Gabriel Collins.


html (accessed Feb. 17 2018).

[9] Rambo, K.A. et al. Water-Energy Nexus in Saudi Arabia.

Energy Procedia. 2016, 105, 3837- 3843

[10] World Energy Outlook; International Energy Agency,

2012. Ch. 17. http://www.worldenergyoutlook.org/


(accessed Feb. 17 2018).

[11] Interview with Mr. Gabriel Collins.

[12] Ibid.

DESIGN BY Jessica Lee

EDITED BY Pujita Munnangi


methods of




and population control

by Owais Fazal


A variety of mosquito vector surveillance

and control programs have been instituted

over the past few decades with the intention

of limiting the spread of infectious diseases

such as dengue, malaria, and the Zika virus.

With the major public health threat of

mosquito populations spanning across the

globe, it is imperative that we continue to

develop effective methods of controlling the

mosquito population as well as designing

and implementing novel solutions on an

international scale. While these programs

have displayed varying degrees of success, this

review analyzes various methods that seem

to be effective in combating vector incidence

and prevalence within endemic populations

worldwide. Specifically, we will analyze vector

control initiatives involving Aedes albopictus

populations in Yorke Island off the coast of

Australia as well as the control of Anopheles

gambiae populations in Brazil in order to

determine trends in effective mosquito vector

control systems.


A particularly successful mosquito vector

surveillance program was implemented in

the recent Yorke island mosquito control

initiative. Consistently low densities of Aedes

albopictus populations have been recorded

six years following the program’s inception in

2005. Following the success of the program,

project leaders have claimed that the use

of insecticides appeared to be the most

important component of their intervention

program, with inspection cycles and public

outreach also playing key roles in limiting

the prevalence of the endemic mosquito

population. 1


Successful vector surveillance programs

rely heavily on the utilization of a process

known as source reduction. 2 Source reduction

essentially involves the systematic removal of

potential mosquito breeding sites, effectively

diminishing the growth rates of endemic

mosquito populations significantly. This

process was heavily used in Yorke Island, as

any containers that could potentially hold

water and support larval development were

removed, destroyed, placed under cover,


or treated with pellets or briquettes of the

insect growth regulator s-methoprene.

The s-methoprene was applied to smaller

containers in the form of 15g pellets at a rate

of one pellet per liter of estimated container

volume. Larger containers, such as rainwater

tanks and wells, were treated with ProLink

XR Briquets applied at one briquet per

5000 liters of water. 3 Containers that could

not be removed also had their interior surfaces sprayed with the residual pyrethroid bifenthrin to kill adult mosquitoes that came into contact with them. 4 Samples of larvae were

collected from infested containers for species

identification on a weekly basis in order to help

monitor the efficacy of the insecticide usage.

Thus, the larval habitats of the local Aedes mosquito species were virtually eliminated in the region, and mosquito vector populations

declined by as much as 98% according to

recent estimates in 2016. 5
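The dosing arithmetic described above can be sketched as a small calculation (an illustration only, not an operational guideline; it simply applies the rates stated in this article, and the function names are our own, not part of the cited program):

```python
import math

# Illustrative sketch of the treatment rates stated in the article:
# s-methoprene pellets applied at one 15 g pellet per liter of estimated
# container volume, and extended-release briquets at one per 5000 liters
# for large containers such as rainwater tanks and wells.

def pellets_needed(container_liters):
    """One s-methoprene pellet per liter of estimated container volume."""
    return math.ceil(container_liters)

def briquets_needed(tank_liters):
    """One extended-release briquet per 5000 L of stored water."""
    return math.ceil(tank_liters / 5000.0)

print(pellets_needed(20))      # 20 — a 20 L bucket
print(briquets_needed(12000))  # 3 — a 12,000 L rainwater tank
```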


Our increasingly interconnected global climate is highly vulnerable to infectious disease pandemics spread through vectors such as mosquitoes, and we must continue to refine our mosquito population control methods to combat this threat.


Furthermore, it is also imperative to have

reliable methods of approximating the number

of vectors within specific regions of a target

area. In order to address this issue, the Yorke

Island initiative enlisted the support of local

public health officials in order to conduct active

surveillance of target areas and to obtain an

accurate count of mosquito prevalence in

select regions of the island. 6 For each round of

surveillance, larval densities were expressed as

number of positive containers per 100 houses

for the Aedes albopictus species. Moreover,

local populations in vector endemic regions

were surveyed at regular intervals in order to

corroborate results of any other independent,

ongoing vector density studies. 7 All in all, the

teams conducted sweep-net sampling on a

total of 230 different sites, providing data

on precise locations as well as population

densities of vector groups throughout the

vector endemic regions. 8



One of the most crucial qualities of a

successful mosquito vector surveillance and

control program is to be able to monitor

changes in mosquito vector populations

in response to the usage of specific vector

control tactics. 9 In conjunction with the above

methods, being able to accurately examine

vector trends over the period of time that a

vector control program is in place is key. A

strategy used by Yorke island public health

officials that combines monitoring with some

of the more direct methods of combating

vector populations is the cordon sanitaire

strategy, which is an integrated approach

composed of harborage spraying, source

reduction, insecticide treatment of containers,

lethal tire piles, mosquito population

monitoring and public awareness campaigns

supported by local authorities and local

media. 10


The eradication of the accidentally introduced

Anopheles gambiae mosquito species

from 54,000 km 2 of largely ideal habitat in

northeast Brazil is regarded as one of the

most effective mosquito control campaigns

in scientific history. 11 This successful program

was implemented in the 1930s and early

1940s through an integrated program that

relied overwhelmingly upon larval control

mechanisms. In the decades following the

implementation of the program, similar

initiatives utilized comparable strategies

in order to successfully combat vector

populations in Egypt as well as rural Zambia. 12


The total coverage of the A. gambiae mosquito

population was achieved primarily through

the combination of large numbers of field

workers with strictly enforced task-allocation

and supervision systems. Each individual field

worker, known as a larval inspector, was given

a fixed area in which to identify and treat

potential breeding sites and for which he or

she alone was responsible. 13 All inspectors

were allocated a reasonable area that was

carefully mapped and both the inspectors and

the mosquito populations they faced were

monitored regularly. One key method utilized

during this process involved the use of two flags to mark where each inspector had left the road and where he or she was located at any

time. 14 The campaign was designed to tackle

A. gambiae according to its ecological niche.

A. gambiae prefers sunlit and relatively small

bodies of water as larval habitats and these

had to be identified and monitored rigorously

throughout the countryside. 15



The core foundation underlying the success

of the Brazilian vector control program was in

the clearly defined and organized nature of its

activities. A cartographic unit was immediately

set up and the infested area was mapped

using aerial photographs. 16 A common

laboratory and epidemiological division

allowed centralized training, surveillance,

and decision-making. Adulticide and

medicinal measures were similarly organized,

in which researchers and local officials

clearly emphasized a central administrative

structure. 15

The work of inspectors in their allocated zones

was also closely scrutinized and managed by

district chief inspectors who were typically

allocated only five zones for which they were

held individually responsible. The zones

and districts were further aggregated and

managed through administrative units termed

posts and divisions, both of which were

headed by medical doctors who could deal

with the clinical aspects of the program in

addition to vector control. 18 A system of flags

and on-site field documentation ensured that

each inspector was monitored on an almost

hourly basis and could be held unambiguously

accountable for any lapses. 19 Notably, the

activities of the anti-larval and anti-adult

control teams were separately reported at

district level so that discrepancies could be

identified, and separate adult capture squads

conducted independent evaluations of all

vector-control activities by knockdown catches

in houses on a monthly basis. 20


Thus, we were able to analyze some of the defining characteristics of successful mosquito vector control programs in diverse ecological settings: Yorke Island off the coast of Australia, and northeast Brazil. Key themes

that were observed across both programs

included a heavy reliance on local public

health officials and epidemiologists to obtain

key information regarding the locations

and prevalence of target mosquito vector

populations as well as the employment of

insecticides to decimate these mosquito

populations once locations were precisely

determined. Modern mosquito control

programs based in Sub-Saharan Africa and

the Middle East have also expanded upon

several of the techniques mentioned in this

article to increase the efficacy and efficiency of

these programs. 4 Ultimately, our increasingly

interconnected global climate is highly

vulnerable to infectious disease pandemics

spread through vectors such as mosquitoes,

and we must continue to refine our mosquito

population control methods to combat this threat.



[1.] Worobey J, Fonseca DM, Espinosa C, Healy S, Gaugler R. J

Am Mosq Control Assoc. 2013; 29(1): 78–80.

[2.] Muzari MO, Devine G, Davis J, Crunkhorn B, van den Hurk

A, Whelan P PLOS Negl Trop Dis. 2017; 11(2); 433-9

[3.] Nelder M, Kesavaraju B, Farajollahi A, Healy S, Unlu I,

Crepeau T, et al. Am J Trop Med Hyg. 2010 ;82(5):831–7.

[4.] Benedict MQ, Levine RS, Hawley WA, Lounibos LP. Vector

Borne Zoonotic Dis. 2007;7(1):76–85.

[5.] Kuno G. J Med Entomol. 2012; 49(6):1163–76.

[6.] Lounibos LP, O'Meara GF, Juliano SA, Nishimura N, Escher

RL, Reiskind MH, et al. Ann Entomol Soc Am. 2010;103(5):757–


[7.] Sun D, Williges E, Unlu I, Healy S, Williams GM, Obenauer

P, et al. J Am Mosq Control Assoc. 2014; 30(2): 99–105.

[8.] Ritchie SA, Moore P, Morven C, Williams C. J Am Mosq

Control Assoc. 2006; 22(3): 358–65.

[9.] Rueda LM. Zootaxa. 2004; 589: 1–60.

[10.] Nguyen HT, Whelan PI, Shortus MS, Jacups SP. J Am Mosq

Control Assoc. 2009;25(1):74–82.

[11.] Killeen, G. F., Fillinger, U., Kiche, I., Gouagna, L. C., &

Knols, B. G. The Lancet infect. dis. 2002; 2(10), 618-627.

[12.] Garrett-Jones, C. Nature. 1964; 204: 1173–1175

[13.] Spielman, A, Pollack, RJ, Kiswewski, AE, and Telford III, SR.

Vector Borne Zoonotic Dis. 2001; 1: 3–19

[14.] Guyatt, HL, Gotink, MH, Ochola, SA, and Snow, RW. Trop Med Int Health. 2002; 7: 1–12

[15.] Guyatt, HL, Corlett, SK, Robinson, TP, Ochola, SA, and

Snow, RW. Trop Med Int Health. 2002; 7: 298–303

[16.] Molyneux, DH, Floyd, K, Barnish, G, and Fevre, EM.

Parasitol Today. 1999; 15: 238–240

[17.] Buckling, AG, Taylor, LH, Carlton, JM, and Read, AF. Proc

R Soc Lond B Biol Sci. 1997; 264: 553–559

[18.] Shiff, C. Clin Microbiol Rev. 2002; 15: 278–298

[19.] Gimnig, JE, Ombok, M, Otieno, S, Kaufman, MG, Vulule,

JM, and Walker, ED. J Med Entomol. 2002; 39: 162–172

[20.] Charlwood, JD and Edoh, D. J Med Entomol. 1996; 33:


Fonts from GoogleFonts and DaFont.com

Images from Wikimedia Commons and


DESIGN BY Sahana Prabhu

EDITED BY Rishab Ramapriyan


the emergence of NUMBER THEORY

from a geometric investigation


Taking any regular even-sided polygon, one can make crosscuts from each vertex, connecting each to the midpoint of one of the opposite sides, to create a smaller regular

polygon contained within the original. The

ratio between the area of this new polygon

and the original has been shown to have

some interesting values, specifically 1/5 for

squares and 1/13 for hexagons. Building

on this, a general formula that gives the

ratio for an arbitrary regular polygon can

be found. In this paper, we will explore

how to generalize this result even further

by allowing the crosscuts to connect to any

side of the polygon, rather than just to the

opposite side. We then explore the rationality

of this expression, and discover interesting

connections to number theoretic techniques

looking into the relationship between

Chebyshev Polynomials and the minimal

polynomials of cosine.


Recent results have shown that there is an

interesting relationship between the area of a

regular polygon and the area of the polygon formed through crosscuts. Essentially, a second regular polygon is created from the first by connecting each vertex to the midpoint of the opposite side. Because even-sided

polygons have two sides that can be opposite

to any vertex, this implies that one side

should be chosen and that direction should

be repeated for every other vertex (See

Figure 1). The area ratio has been calculated explicitly for squares and hexagons, and was found to be 1/5 and 1/13, respectively. 1,5,6

In addition, for an odd-sided polygon, the

crosscuts will connect in the center, so that

the ratio will always be 0. After these specific

results, the next question is whether a

generalized formula can be found that agrees

with these, as well as gives ratios for a regular

polygon with an arbitrary number of sides. It

is also important to explore the rationality of

this function, as it is rational ratios that have

motivated this research.

Fig. 1: The crosscut formation of regular polygons when n=3, 4, 6.


The following are some necessary formulas that will be used later in the paper. The first is a simplified expression for the ratio of the areas, R = a_S²/a_B², where a_S is the apothem of the smaller polygon and a_B is the apothem of the original polygon. The second is an explicit formula that gives the ratio in terms of only the number of sides of the polygon: R_n = 1/(1 + 4cot²(θ)), where θ = π/n. This is the starting point for this paper, as this result will be generalized using methods similar to those used in the proof of R_n.
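As a quick numerical sanity check (our own sketch, not part of the original paper), the closed form R_n = 1/(1 + 4cot²(π/n)) can be evaluated to recover the known ratios for the square and the hexagon:

```python
import math

# Evaluate R_n = 1/(1 + 4*cot^2(pi/n)), the crosscut area ratio for a
# regular n-gon with opposite-side crosscuts, and compare against the
# known values 1/5 (square) and 1/13 (hexagon).
def ratio(n):
    theta = math.pi / n
    cot = math.cos(theta) / math.sin(theta)
    return 1.0 / (1.0 + 4.0 * cot ** 2)

print(abs(ratio(4) - 1 / 5) < 1e-12)   # True: square gives 1/5
print(abs(ratio(6) - 1 / 13) < 1e-12)  # True: hexagon gives 1/13
```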



We want this new generalization to

allow our crosscut to be connected

to an arbitrary side of the polygon,

instead of requiring that it connect to the side

opposite of the vertex where it started. This

new formula should give the area ratio only in

terms of the number of sides of the polygon,

n, and the number of vertices skipped

before connecting the crosscut, denoted k.

To do this, we first define a coordinate axis

system that is

suitable for our

calculations. This

coordinate axis

can be created

by placing one

vertex and the

center of the

polygon on the

line y=C that

runs parallel to the x-axis. Then, the crosscut

originating from this vertex would create an

angle with this line. Connecting the other

endpoint of the crosscut to the center of the

polygon forms a triangle that has one known

angle, ψ = 2πk/n + (1/2)(2π/n) = (2k+1)π/n, at

the center. (We assume in this proof that

0< ψ≤π, but we will later show that this

formula actually holds for all values of ψ.) In

addition to this known angle, two of the sides

are known: one, a_B, is the apothem, and another, r, is the radius of the polygon. This

setup is detailed in Figure 2.

As only one angle in this triangle is known, it

is easiest to form two right triangles, so that

known trigonometric formulas more readily

apply. This is accomplished by taking a line from E to E′, where it connects to the line y=C at a right angle. In Figure 2, this new line is denoted b. Now using trigonometry, it is possible to find the slope of the crosscut by calculating the lengths of b and c. This gives

By Jacob Kesten

Fig. 2: A diagram depicting the two triangles used to compute the slope

of the specific crosscut. The first picture shows placement of the triangle

on the polygon and coordinate axes. The second shows how the 2 right

triangles were formed, and the third shows how this can be used to

compute the length of the smaller apothem.

the slope of the crosscut as m = (–a_B sin ψ)/(r – a_B cos ψ).

Knowing the slope makes it possible to write an equation for the line containing the crosscut, as well as the equation of the line perpendicular to the crosscut that contains a_S. Placing the center at x = 0 on the line y = C, so that the vertex sits at (r, C), these are given by y_1 = m(x – r) + C and y_2 = –x/m + C, respectively. Finding the intersection of these two lines allows us to find a_S² in terms of m and a_B². Dividing both sides by a_B², we get that a_S²/a_B² = R_{n,k} = (m⁴ + m²)/(cos²θ(1 + m²)²) = (1 – cos²ψ)/(1 – 2cos ψ cos θ + cos²θ), where θ = π/n and ψ = (2k+1)θ. Notice that this formula depends only on n and k, as desired.
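This coordinate construction can be checked numerically. The sketch below (ours, not from the paper) places the center at x = 0 on the line y = C with C = 0, computes the slope of the crosscut and the foot of the smaller apothem, and compares the resulting ratio against the closed form:

```python
import math

# Illustrative check of the crosscut construction; the placement of the
# center at the origin (C = 0) with the vertex at (r, 0) is our assumption.
n, k, r = 8, 2, 1.0
theta = math.pi / n
psi = (2 * k + 1) * theta

a_B = r * math.cos(theta)                         # apothem of the original polygon
E = (a_B * math.cos(psi), a_B * math.sin(psi))    # far endpoint of the crosscut
m = E[1] / (E[0] - r)                             # slope from vertex (r, 0) to E

# Intersection of y1 = m(x - r) and y2 = -x/m is the foot of the apothem a_S.
x = m * m * r / (m * m + 1)
y = -x / m
a_S_squared = x * x + y * y

ratio = a_S_squared / a_B ** 2
closed_form = (1 - math.cos(psi) ** 2) / (
    1 - 2 * math.cos(psi) * math.cos(theta) + math.cos(theta) ** 2
)
assert math.isclose(ratio, closed_form)
print(ratio)  # R_{8,2}, approximately 1/3
```

The assertion confirms that the intersection-of-lines construction reproduces the same ratio as the closed-form expression.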


We now check that this formula holds for values of π < ψ < 2π. This follows from the fact that for π < ψ < 2π, we can write ψ′ = 2π – ψ where ψ′ < π, and cos ψ = cos ψ′, so that the formula will be the same as the one for the corresponding reflected angle. Essentially, the case where π < ψ < 2π reduces to a case where ψ ≤ π by looking at the angle going in the opposite direction of the obtuse angle. An

example of this is given in Figure 3 using

different crosscut octagons. Flipping the

octagons on the bottom shows that they will

be identical to the ones in the first row, so

that they will produce identical ratios.

Now that we have found the formula R_{n,k}, we can check that it matches with the values already found for specific polygons. One such

example is when the crosscuts connect to the opposite side of an odd-sided polygon, where the ratio should be 0. In this case, we have that k = (n–1)/2, so that ψ = π and we get the following result: R_{n,k} = (1 – cos²π)/(1 – 2cos π cos θ + cos²θ) = 0/(1 + 2cos θ + cos²θ) = 0, as expected. In

addition, using n=6 and k=2, we get that R_{6,2} = (1 – cos²(5π/6))/(1 – 2cos(5π/6)cos(π/6) + cos²(π/6)) = (1 – (–√3/2)²)/(1 – 2(–√3/2)(√3/2) + (√3/2)²) = (1 – 3/4)/(1 + 3/2 + 3/4) = (1/4)/(13/4) = 1/13, which is exactly what

Fig. 3: Octagon crosscuts using k=1,2,3,4,5,6. Comparing the

2 lines shows that the octagons using k=1,2,3 are the same as

those using k=4,5,6.

was found in earlier papers. The other known

results can also be checked using a similar method.


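These checks are quick to automate. A short Python sketch (ours, not part of the original paper) evaluates the closed form for R_{n,k} at the known special cases:

```python
import math

def R(n, k):
    """Area ratio R_{n,k} from the closed-form expression in the text."""
    theta = math.pi / n
    psi = (2 * k + 1) * theta
    return (1 - math.cos(psi) ** 2) / (
        1 - 2 * math.cos(psi) * math.cos(theta) + math.cos(theta) ** 2
    )

# Hexagon, skipping two vertices: the classic one-thirteenth result.
assert math.isclose(R(6, 2), 1 / 13)

# Odd-sided polygons with k = (n - 1)/2: the crosscut passes through the
# center, so the inner polygon degenerates and the ratio is 0.
for n in (3, 5, 7, 9):
    assert abs(R(n, (n - 1) // 2)) < 1e-12

# Even n with k = (n - 2)/2 reproduces the earlier formula 1/(1 + 4cot²θ).
for n in (4, 6, 8, 10):
    theta = math.pi / n
    assert math.isclose(R(n, (n - 2) // 2), 1 / (1 + 4 / math.tan(theta) ** 2))
```

All three families of special cases agree with the generalized formula.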

Now we try to find which values of n and k

will produce a rational value for R n,k

. We can

easily check different values of n and k and

find that the following are rational: R_{n,0}, R_{4,1}, R_{6,2}, R_{6,1}, R_{8,2}, and R_{n,(n–1)/2} for odd n. However, there are an

infinite number of combinations for n and k,

and there is no straightforward way to prove

that these are or are not the only rational

values of R_{n,k}. One approach is to look at

different values for the possible parts of the

ratio that could be irrational. For example,

when both cos θ and cos ψ are rational, then R_{n,k} is rational. This approach works for some combinations, but a problem arises when both cos²θ and cos²ψ are irrational, as there is no way to tell what will happen with R_{n,k}.


This is because both the sum and product

of irrational numbers can be rational, thus

even though all components of R_{n,k} may be irrational, the final output could still be a

rational number. Because this is the majority

of the values that will occur, an approach that

directly applies to this case is needed.

One possible approach is to look at the

polynomial representation for R n,k

and the

minimal polynomials for cosθ. The minimal

polynomial of the value cosθ is the smallest

nonzero, monic polynomial with rational

coefficients that has cosθ as a solution. For

example, the minimal polynomial of √2 is

x² – 2 = 0. Note that this polynomial cannot

be reduced, and thus it is the smallest

polynomial with √2 as a solution. All algebraic

numbers have a minimal polynomial, by

definition, and thus for each value of n, cosθ

has a minimal polynomial. 2

First, we use R_{n,k} to create a polynomial expression of one variable. Using the multiple angle formula, we know that cos ψ = cos((2k+1)θ) = cos(sθ) = T_s(cos θ), where T_s is the s-th Chebyshev polynomial. For example, cos(4θ) = 8cos⁴(θ) – 8cos²(θ) + 1 because T_4(x) = 8x⁴ – 8x² + 1. 7 This allows us to express R_{n,k} as an expression of θ: R_{n,k} = (1 – T_s²(cos θ))/(1 – 2T_s(cos θ)cos θ + cos²θ), where s = 2k+1. Setting x = cos θ and R_{n,k} = r, where r is a rational value, we get the polynomial expression P(R_{n,k}) = 1 – T_s²(x) – r + 2rxT_s(x) – rx² = 0.
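Both the multiple-angle identity and the resulting polynomial expression can be verified numerically. A small Python sketch (illustrative; it builds T_s from the standard three-term recurrence):

```python
import math

def cheb_T(s, x):
    """Chebyshev polynomial T_s(x) via T_0 = 1, T_1 = x,
    T_s = 2x*T_{s-1} - T_{s-2}."""
    if s == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(s - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# T_4(x) = 8x^4 - 8x^2 + 1, as quoted in the text.
x = 0.3
assert math.isclose(cheb_T(4, x), 8 * x**4 - 8 * x**2 + 1)

# Multiple-angle identity: cos(s*theta) = T_s(cos(theta)).
n, k = 9, 2
s = 2 * k + 1
theta = math.pi / n
assert math.isclose(cheb_T(s, math.cos(theta)), math.cos(s * theta))

# P(R_{n,k}) = 1 - T_s^2(x) - r + 2*r*x*T_s(x) - r*x^2 vanishes at x = cos(theta).
x = math.cos(theta)
r = (1 - cheb_T(s, x) ** 2) / (1 - 2 * cheb_T(s, x) * x + x * x)
P = 1 - cheb_T(s, x) ** 2 - r + 2 * r * x * cheb_T(s, x) - r * x * x
assert abs(P) < 1e-12
```

The last assertion confirms that x = cos θ is a root of P, which is what lets the minimal polynomial machinery below apply.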

It is known that if a certain value solves a polynomial, then the minimal polynomial (minpoly) of that value has to divide the larger polynomial, with a remainder of 0. 4 Thus we want to see what happens when we use different combinations of n and k, and see if there is any way to find a pattern in the remainders of P(R_{n,k}).


In this case, a random collection of values from within the bounds n ≤ 100 and k ≤ n – 1 can be examined. A key identity here is T_n(x) – 1 = 2^(n–1)(x – 1) ∏_{d|n, d>2} ψ_d²(x) for n odd, with an additional factor of ψ_2(x) for n even, where ψ_n(x) is the minimal polynomial of cos(2π/n) multiplied by a constant and d|n means that d is a divisor of n, so that the product runs through all the divisors of n. 3
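As a concrete worked instance of this factorization (our example, using n = 6, where the relevant monic minimal polynomials are ψ_2(x) = x + 1, ψ_3(x) = x + 1/2, and ψ_6(x) = x – 1/2):

```python
import math

def cheb_T(s, x):
    # Chebyshev recurrence: T_0 = 1, T_1 = x, T_s = 2x*T_{s-1} - T_{s-2}
    if s == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(s - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# n = 6: T_6(x) - 1 = 2^5 (x - 1) psi_2(x) psi_3(x)^2 psi_6(x)^2, where
# psi_3(x) = x + 1/2 (root cos(2pi/3) = -1/2) and
# psi_6(x) = x - 1/2 (root cos(2pi/6) = 1/2).
def factored(x):
    return 2**5 * (x - 1) * (x + 1) * (x + 0.5) ** 2 * (x - 0.5) ** 2

for x in (-0.9, -0.2, 0.1, 0.7):
    assert math.isclose(cheb_T(6, x) - 1, factored(x), abs_tol=1e-9)
```

Both sides agree at every sample point, as expected for a polynomial identity of degree 6 checked at more than 6 points would guarantee exactly.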

Exploring this relationship in more detail is

an important and rapidly expanding area

of theoretical mathematics, and could be

very helpful in creating a final proof of the

rationality of R_{n,k}. In our exploration, we

came across a need to understand more

about the relationship between these two

concepts and have seen how a simple

question concerning area ratios can produce

complicated mathematical questions that

are still being explored purely for theory.

This is just one application of the possible

knowledge that can be gained by the abstract

investigation of these two concepts and the

inner workings of their natural connection.

A special thanks to Dr. Zsolt Lengvarszky of

the Louisiana State University, Shreveport,

Mathematics Department for his help and

mentorship in the project.


[1] Ash, J.M., et al., Constructing a Quadrilateral Inside

Another One, Mathematical Gazette, 2009, 528, 522–532.

[2] Calcut, J.S., Rationality and the Tangent Function, http://

www2.oberlin.edu/faculty/jcalcut/tanpap.pdf (accessed Feb.

8, 2018).

[3] Gürtaş, Yusuf Z., Chebyshev Polynomials and the Minimal

Polynomial of cos(2π/n), The American Mathematical Monthly,

2017, 124, 74-78.

[4] Leinster, T., Minimal Polynomial and Jordan Form, http://

www.maths.ed.ac.uk/~tl/minimal.pdf (accessed Feb. 8, 2018).

[5] Mabry, R., Crosscut Convex Quadrilaterals, Math Mag,

2011, 84, 16–25.

[6] Mabry, R., One-Thirteenth of a Hexagon, n.d. 1-11.

[7] Mason, J.C., Handscomb, D.C., Chebyshev Polynomials;

Chapman and Hall: Boca Raton, 2003; sec 1.2.1.

DESIGN BY Kaitlyn Xiong

EDITED BY Olivia Zhang






Shaurey Vetsa, Richard I. Han, Don L. Gibbons, K. Jane Grande-Allen

Department of Bioengineering, Rice University


My research has sought to further

characterize the effect of Dasatinib on the

focal adhesion pathway in cancer cells.

While plenty of research exists on how the

cell propagates tumor growth, the results

of this research show how tumor-matrix

interactions drive tumor metastasis. To

resemble the stress that the cell matrix

would face in a lung, we placed the cells in a

static tension environment and recorded the

changes in spatial distribution of the cells in

the presence of Dasatinib. It was found that

Dasatinib induced clustering in cancer cells;

however, further research is required to

show causation rather than correlation between the drug and the observed clustering.



Among all the cancers that afflict the

American population, lung cancer is

the second most widespread in men

and women. 1 Afflicted lung cancer cells

proliferate through a process called

epithelial to mesenchymal transition (EMT).

Although EMT in healthy cells is important for embryonic development and other vital processes, cancer cells become invasive after

EMT. The cells lose cell to cell adhesion and

increase the expression of mesenchymal

cell markers such as vimentin, fibronectin,

N-cadherin, and alpha-smooth muscle actin

(α-SMA). These changes allow cancer cells to

metastasize throughout the cell space. 2

The extracellular matrix (ECM) plays a

significant role in the occurrence of EMT. Two

families of regulatory molecules, microRNA-200 and the transcription factor ZEB1, regulate EMT.

These families form a double-negative

feedback loop in which ZEB1 activates a

collagen-producing gene to increase collagen

expression in the extracellular matrix.

Another family of enzymes, LOX, creates

greater organization of the collagen by increasing crosslinking.

Less randomness in the structure of the ECM

promotes metastasis as cancer cells can

traverse the matrix more easily. 3

The metastasis of cancer strongly depends

not only on processes occurring in the cell,

but also on the regulatory behavior between

the cell and the extracellular matrix. The

focal adhesion pathway is an important

macromolecular assembly that transfers

information from the extracellular matrix

to the cell. Its inhibition can create changes

in cell morphology and organization in lung

cancer cells. The transduction of the focal

adhesion pathway undergoes a secondary

messenger amplification that contains the

proto-oncogene tyrosine-protein kinase

(SRC). Bristol-Myers Squibb Inc. developed an

anti-cancer drug called Dasatinib that inhibits

the SRC protein which controls important cell

processes like movement, proliferation, and

survival. 4

We investigate Dasatinib’s effect on the cell

focal adhesion pathway by incubating 344 SQ

cancer cells with the drug and observing cell behavior.



The cancer cells that were analyzed are

from a mouse model (metastatic 344 SQ

cell line) donated by the Gibbons Lab at MD

Anderson. 5 These cells have mutations in the Kras oncogene and the p53 tumor-suppressor gene.

344 SQ cells were mixed with 2 mg/ml

collagen MasterMix and incubated at a

density of 10⁶ cells per ml of solution. 2 ml

aliquots of solution were poured in ten bone

shaped hollow molds, two in each of the five

trays. Prior to use, the trays were desiccated

using an oven. Metal pins extending outward

were inserted inside of the tray to stimulate

static tension. The contraction of the cell mix

caused the pins to be pulled inwards. Four

sawbone cylinders were placed on each of

the pins to increase the pin’s surface area to

prevent the collagen gel mix from tearing on

them during contraction. Media covered the gels, with 50 nM Dasatinib in 200 µl of the media.


Confocal Imaging

Each day for four days, two gels were

removed from the collagen gel molds and

immersed in 4% paraformaldehyde (PFA). Gel

sections were cut and placed in this buffer to

prevent photobleaching of the samples. Gels

were stained with DAPI for the cell nuclei

and Phalloidin for F-actin and were taken to

confocal microscopy.

Image Processing

DAPI stained images were analyzed using

ImageJ and Cell Profiler, image processing tools that removed noise and separated conjoined-looking cells. The end result was a binary

mask, an image representing cells in white

and everything else in black, of all the cells.

The NND ImageJ plugin was then used to

find the distance between the centroid of

each cell and the nearest centroid to it. Cell

Profiler additionally returned the x and y

coordinates of the centroids and object

numbers of the first and second closest cells

to each cell object.
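The nearest-neighbor-distance computation itself is straightforward; a brute-force sketch in Python (ours — the study used the NND ImageJ plugin and Cell Profiler) over centroid coordinates:

```python
import math

def nearest_neighbor_distances(centroids):
    """For each centroid (x, y), the distance to its closest neighbor."""
    nnd = []
    for i, (xi, yi) in enumerate(centroids):
        nnd.append(min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(centroids)
            if j != i
        ))
    return nnd

# Toy centroids in pixel coordinates; the tight pair yields small NNDs.
cells = [(0, 0), (3, 4), (10, 0), (11, 1)]
print(nearest_neighbor_distances(cells))  # [5.0, 5.0, 1.414..., 1.414...]
```

Clustering shows up directly as a drop in these values, which is the quantity plotted per day in Figure 2.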


Confocal Imaging

DAPI and Phalloidin staining revealed that

the cells cluster as time progresses (Figure 2).

The panel shows the progression of image

analysis that was undertaken to produce

binary masks for the image analysis software

(Figure 1).

The first column shows an overlay of the

DAPI and Phalloidin channels. The next

column shows the DAPI channel extracted

from the multichannel raw image. Column

three has processed binary masks of the


Figure 1: Panel of Image Processing: The boundaries of the nuclei are

separated from the raw image to conduct NND testing. The scale bar is 50

µm and is applicable for all images.

Figure 2: Nearest Neighbor Distance for Cells treated with 50 nM Dasatinib

over the course of four days. The green lines each indicate the median of

the distribution of distances for one day (Day 1: 46.36 µm, Day 2: 31.77 µm,

Day 3: 21.49 µm, and Day 4: 13.72 µm). The red horizontal lines indicate the

interquartile ranges for each distribution. Two stars indicate a significance

level of 0.01 and three stars indicate a significance level of 0.001.

DAPI channel, and the last column has

outlines of the binary masks in which each

object is numbered for reference. This

processing was conducted for all images in

Day 1, 2, 3, and 4 sets. The general qualitative

trend in DAPI from Day 1 to 4 is that there

are more cells and more of them are

clustered. Increased intensity in phalloidin

staining from Day 1 to Day 4 shows actin

expression increasing with time.

Each object’s NND was calculated and plotted

(Figure 2). Cell Profiler also returned NND

calculations identical to ImageJ plugin’s data.

The values for Day 1 have a statistically

significant difference from those of Day 2,

Day 3, and Day 4 (Figure 2). Day 2 and Day 4

also had a statistically significant difference.
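The per-day summary statistics reported in Figure 2 (medians and interquartile ranges) can be computed with Python's standard library; a sketch using made-up NND values, not the study's data:

```python
import statistics

def median_and_iqr(nnds):
    """Median and interquartile range of an NND distribution (in µm)."""
    q1, med, q3 = statistics.quantiles(nnds, n=4)
    return med, q3 - q1

day1 = [52.1, 46.4, 44.0, 61.3, 38.9, 47.2, 50.5]  # illustrative values only
med, iqr = median_and_iqr(day1)
print(med, iqr)  # 47.2 and roughly 8.1
```

A shrinking median and IQR from day to day is exactly the pattern the figure describes.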


Confocal Imaging

Image processing results show that the cells

appear clustered in response to Dasatinib.

This is seen in the statistically significant drop

in the mean NND over the course of the four days.


There is a possibility that the clustered

cells replicated in the same area. Dasatinib

inhibits the focal adhesion pathway, which

controls important processes such as actin

polymerization and filopodia formation

that are instrumental to the invasiveness of

cancer cells. 4 Thus, the newly divided cells

would have a diminished ability to move.

Even if Dasatinib does not cause old cells

to migrate closer to each other, it may slow

down cell movement that otherwise drives

new cells away from each other.

Furthermore, analysis of the binary masks

shows evidence of cell division. Each set of

binary masks had more cells than that of the

previous day. This phenomenon may have

occurred if cells that were not initially in the

viewing frame moved into view or if the cells

already in view divided in place. However,

tracking the movement of cells in real

time would be required to provide further

evidence for this inference.

One limitation of our experiment is that it is

difficult to image the same gel location for

each day. According to the current protocol,

the cells must be fixed before confocal imaging, so the experiment operates under the

assumption that the distribution of cells in

the gel at any time is the same. We conclude

that the clustering observed at a different

location on Day 3 suggests that the cells

viewed on Day 2 would have experienced the

same amount of clustering had they been

allowed to live to the next day. Moreover,

we had a sample size of four different gel

locations for each day and observed the

same general characteristics across locations,

providing evidence to assume that the spatial

distribution in one location is representative

enough of the whole cell distribution.

The NND statistical model provided robust

results. The Cell Profiler data and the NND

plugin in ImageJ were identical in terms of

the object number and the calculated NND,

but the Cell Profiler method required fewer

post-processing steps, allowing less room for error.


An area of improvement for our method of

statistical analysis would be to differentiate

between cells organized in a line and

cells clustered in a ball. Our traditional

understanding of clustering involves a

circular cluster but the images showed

clustering in the form of a slanted line.

Because the images are cross sections

in the z direction and several images in a

row showed a line of cells, this indicates

clustering in the form of a slanted plane in

the gel. Both circular and linear distributions

could have the same NND values, but the

present analysis fails to explain different

clustering behaviors. Regardless of spatial

distribution, smaller NNDs occur as an effect

of the treatment applied to the sample.


In the presence of Dasatinib there is a

significant decrease in NND over the course

of four days. All the methods used to analyze

the data suggest that cells cluster in the

presence of Dasatinib. This clustering may drive EMT-induced mesenchymal cells back toward a more epithelial phenotype. These cell clusters reduce the progress of metastasis of the cancer, potentially extending the patient's lifetime. To examine the possibility that

the cell’s decrease in movement causes

clustering in Dasatinib treated cells, scanning

electron microscopy (SEM) imaging can show

diseased cell mechanisms for movement

in the presence of the drug. To determine

whether the cell clustering occurs due

to migration or cell replication, molds

constructed using the engineering design

process will be used for live cell confocal

imaging. Current and ongoing evaluations

of Dasatinib’s properties will define its

effectiveness as a cancer drug.


[1] American Cancer Society, https://www.cancer.org/cancer/

small-cell-lung-cancer/about/key-statistics.html (accessed Jan.

4, 2018).

[2] Xiao, D., et al. J Thoracic Dis, 2010, 2, 154-159.

[3] Peng, D. H., et al. Oncogene, 2017, 36, 1925-1938.

[4] Sulzmaier, F. J, et al. Nat Rev Cancer, 2014, 14, 598-610.

[5] Gibbons, D. L., et al. Genes Dev, 2009, 23, 2140-2151.

DESIGN BY: Priscilla Li

EDITED BY: Olivia Zhang










Marine ecosystems are threatened

worldwide by a variety of anthropogenic

stressors which have severe implications

for global biodiversity, economy, and

human health. 1,2 Examining the abundance,

distribution, and diversity of marine

communities is vital to understanding

basic ecological questions pertaining to

conservation and resource management. 2,3

Traditionally, scientists monitor marine

organisms using techniques reliant on

in-situ visual identification and counting

of organisms. These methods, which

include roving diver surveys, trawls, netting,

tagging, electrofishing, and rotenone

poisoning, are expensive, time-consuming,

invasive, limited in scope, and reliant

on taxonomic experts. 3,4 In addition,

such traditional techniques are prone to

produce a significant number of false negatives

in which species actually present

in an environment are not seen in the

survey, generally because they are rare

and/or not easily seen. 2 Addressing these

limitations using a molecular approach

may limit the human biases associated with

these sampling methods.

Improvements to DNA sequencing

technology and decreased sequencing costs

have made sampling for environmental

DNA (eDNA) in marine environments

more feasible. eDNA refers to the genetic

material organisms shed as waste in the

environment (e.g. water, sediment, ice

cores, etc.) that can be directly sampled. 4

For the purpose of this review, only eDNA collected from liquid water samples was considered, because such samples detect organisms in recent time (on the order of hours to days) due to eDNA's faster degradation rate in water. 4 While

ecological applications of eDNA started in

microbial communities in marine sediments

and invasive species in aquatic habitats,

there have been a handful of studies in the

past few years that have sampled for larger

size classes, like macrofauna, from marine

communities. 5-7

Regardless of ecosystem, the “eDNA

metabarcoding” approach allows for

quantification of whole communities using

water sampling (Figure 1). Water samples

are collected, filtered, extracted for DNA,

amplified using specific PCR primers, and

analyzed using next-generation sequencing

for amplicons. Sequencing data is then

compared to a reference database of

partially or fully characterized genomes

of organisms present in that particular

region. Other further applications of DNA

extracts such as quantitative PCR (qPCR)

are also utilized in place of next-generation

sequencing for identification of organisms. 7


Identifying Presence/

Absence of Marine

Macrofauna Across

Spatial Scales

eDNA sampling has been utilized in a

variety of marine ecosystems to detect the

presence of marine macrofauna, including

marine fish and mammal species. In

one of the first studies to sample eDNA

from marine communities, Thomsen

et al. (2012a) found that in comparison

to nine different traditional surveying

techniques, eDNA performed as well or

better than traditional surveying techniques

at detecting commercially important yet

rarely-detected fish species in the Sound of

Elsinore. After multiple trials, the traditional surveying techniques were only able to detect between 4.3 species on average (using fish pots) and 14.7 species (using night-snorkeling).

In comparison, the eDNA samples detected

more, at 15 fish species. In addition, the

eDNA samples detected 4 bird species

that could not have been detected using

traditional surveying techniques. However,

the scope of this study was limited, as

only three half-liter water samples were

examined, and these findings cannot be

reasonably extrapolated to apply over

larger spatial scales.

Port et al. (2015) addressed some of

these limitations and found that eDNA

sampling still outperformed traditional

roving diver surveys in a 2.5 km transect

across kelp forests, rocky reefs, sand

patches, and seagrass habitats. While the

visual surveys identified 12 taxa, the eDNA

samples identified 11 of those and 18

additional taxa that were not detected by

the visual surveys. The sensitivity of eDNA

to detect species indicates that on larger

spatial scales, the false-negative rate of

traditional surveys far exceeds that of eDNA

sampling. 2 In addition to detecting elusive

fish species, they found that eDNA can be

used to determine the spatial patterns of

entire marine communities as well. Within

each habitat, the distribution of species

was more consistent, while across these

habitats, they were markedly different.

This indicates that while ocean water

movement could theoretically move eDNA

to vastly different locations from its point of origin, eDNA tends to stay localized within a particular habitat (in this case, within 60 meters). 2

However, for organisms with large

home ranges such as cetaceans, eDNA is

dispersed across a wide spatial scale and

becomes less useful in comparison to other

monitoring methods like bioacoustics. 8 For

the harbor porpoise (Phocoena phocoena),

for example, eDNA detection was only

reliable in controlled environments and

performed worse than acoustic detections

in natural environments. However, because

scientists were able to identify a long-finned

pilot whale (Globicephala melas), another

cetacean, in the natural environment

that is rarely sighted, the research group

concluded that further polishing of the

eDNA sampling method could prove to be

promising for monitoring efforts. 8

The relationship between eDNA

abundance, relative abundance, and

biomass has been shown in closed

marine and aquatic systems, but

has yet to be applied successfully in

natural marine habitats.

Current Limitations to

eDNA Applications

The success of eDNA detection is limited by site-specific environmental conditions, sample processing at the molecular level, and bioinformatics analyses.
















Figure 1. Simplified representation of a metabarcoding (next-generation sequencing) pipeline.


I. Relating Sequence Abundance to Relative Abundance


While studying the presence of marine

macrofauna is a novel approach to

understanding many basic ecological

questions, relating the sequence

abundance from a water sample to the

relative abundance of an organism in

a natural community is still difficult to

determine. The relationship between

eDNA abundance, relative abundance,

and biomass has been shown in closed

marine and aquatic systems, but has yet to

be applied successfully in natural marine

habitats. 9,10

II. Environmental Conditions


As with any field method, eDNA sampling

assumes that the contents of a water

sample are representative of the whole

community. To ensure that a water sample

is homogeneous throughout, scientists

often collect subsamples from larger water

samples in order to determine if there are

significant differences in the sequencing

data between subsamples. 4,9

The degradation rates of eDNA in marine

environments are also important for

determining how long the presence signal

of an organism persists in the environment

(and potentially moves to distant locations

through currents). 3,4,10 Since eDNA shedding

and degradation depends on a variety of

factors based on the environment, target

organisms’ physiology, abiotic and biotic

factors, DNA characteristics, population

size, and more, study-specific degradation

and shedding experiments should be

utilized to calibrate the eDNA signal. 3
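Degradation in such calibration experiments is commonly modeled as first-order exponential decay; a minimal sketch (the rate constant below is illustrative, not a measured value from any of the cited studies):

```python
import math

def edna_remaining(c0, k_per_hour, hours):
    """First-order decay model C(t) = C0 * exp(-k * t), as commonly fit
    in eDNA degradation experiments."""
    return c0 * math.exp(-k_per_hour * hours)

# With an assumed k = 0.05 per hour, the signal half-life is ln(2)/k.
k = 0.05
half_life = math.log(2) / k
print(round(half_life, 1))  # about 13.9 hours
assert math.isclose(edna_remaining(100.0, k, half_life), 50.0)
```

Fitting k from a study-specific decay experiment is what allows the presence signal to be placed on a timescale of hours to days.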


III. PCR Bias and Primer Choice


The primer choice during PCR is crucial

to determine the region of DNA that

will be amplified and sequenced from

an eDNA sample and must be validated

using experimental controls or online

programs. 4,7 There are tradeoffs in primer

choice and design: some primers have a

higher affinity to some sequences and not

others, affecting which marine organisms

transmit the largest detection signal. 4

Primer choice introduces amplification

bias, which is influenced by the types of

fishes surveyed and the environmental

conditions of the area. 9,10 For example,

when Kelly et al. (2014) used the 12S

mitochondrial DNA (mtDNA) primers, they

were able to detect bony fishes with a

low false-negative rate, but were not able

to detect the cartilaginous fishes or sea

turtle in the mesocosm tank. For these

organisms, species-specific primers were

needed for detection. Since the community

composition can vary so widely depending

on the primer choice and amplification bias,

researchers may use multiple universal and

species-specific primer sets on the same

DNA extracts to obtain higher resolution

and detection. 4,9

IV. Quality Controlling Reads and Reliable Reference Databases


After samples are sequenced, they come

back as raw material known as “reads”

and are processed for quality control and

validation. 7 In a process known as bioinformatic filtering, sequences are assigned to samples, then filtered, trimmed, or discarded based on a

strict set of criteria. 2,4,9,10 Although detecting

rare, cryptic, or low-abundance species is

desired, strict sequence and taxon filtering

ensures high-confidence reads that could

repeatedly and reliably show presence of

organisms while removing false positives,

or organisms artificially present in the data

(likely from contamination) but not present

in the actual environment. 2

After quality control and validation,

Operational Taxonomic Units (OTUs) are

assigned to sequences and referenced to

known genomic databases for taxonomic

identification. Generally, sequences are

clustered into OTUs and matched to known

genus or species sequences if they are

>99% similar to each other. 2,4,9,10 However,

the proper identification of these marine

organisms relies heavily on the quality of

the reference database. In systems that are

not well-studied, the reference database


may not be reliable.
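The clustering step can be illustrated with a toy greedy sketch (ours — real metabarcoding pipelines use dedicated tools and alignment-based sequence identity, not difflib):

```python
from difflib import SequenceMatcher

def cluster_otus(reads, threshold=0.99):
    """Greedy OTU clustering: each read joins the first cluster whose
    representative it matches at or above the similarity threshold."""
    representatives, clusters = [], []
    for read in reads:
        for idx, rep in enumerate(representatives):
            if SequenceMatcher(None, read, rep).ratio() >= threshold:
                clusters[idx].append(read)
                break
        else:
            representatives.append(read)  # read founds a new OTU
            clusters.append([read])
    return clusters

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAT", "TTTTGGGGCC"]
print(len(cluster_otus(reads, threshold=0.9)))  # 2 OTUs at 90% similarity
```

Note how the chosen similarity threshold directly controls how many OTUs emerge, which is why the >99% cutoff matters for species-level assignment.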


The future of eDNA as a monitoring tool

for marine macrofauna communities is

possible, but currently limited in scope

due to the methodological constraints that

need to be considered. Current research

indicates that while eDNA proves to be

highly sensitive and reliable in determining

the presence of marine macrofauna in

comparison to traditional visual/trap-based

methods, relating eDNA sequence

abundance to the relative abundance in the

community requires further investigation.

This is due to the fact that eDNA

sequence abundance varies depending

on the particular workflow and site-specific

environmental factors. However,

establishing multiple avenues to test the

importance of these limitations within an

experiment can help contextualize these

problems. For example, including an

eDNA degradation experiment, multiple

primer sets, and a custom database are all

different methods researchers mentioned

in this review used to address some of

these constraints. Although it is difficult

to control for all of the abiotic, biotic,

and molecular factors, the importance

of accurate monitoring efforts in these ecosystems cannot be overstated. Thus, further studies utilizing eDNA should proceed in three ways: 1) to take advantage of eDNA's benefits in presence/absence studies to detect rare and cryptic species; 2) to refine the methodological details, including primer design and bioinformatic analyses, for more reliability and efficiency; and 3) to continue building on known reference databases. Combined, these considerations will improve the eDNA sampling and metabarcoding method, promoting its eventual establishment as the gold standard for marine community monitoring, resource management, and conservation efforts.

[1] Halpern, B. S. et al. Science. 2008. 319 (948-952).
[2] Port, J. A. et al. Mol. Ecol. 2015. 25 (527-541).
[3] Sassoubre, L. M. et al. Env. Sci. and Tech. 2016. 50
[4] Thomsen, P. F. et al. PLoS ONE. [Online] 2012a. 7. https://doi.org/10.1371/journal.pone.0041732 (accessed Nov. 14, 2017)
[5] Díaz-Ferguson, E. E.; Moyer, G. R. Revista de biologia tropical. 2014. 62.4 (1273-1284).
[6] Thomsen, P. F. et al. Mol. Ecol. 2012b. 21 (2565-2573).
[7] Rees, H. C. et al. J. Applied Ecol. 2014. 51 (1450-1459).
[8] Foote, A. D. et al. PLoS ONE. [Online] 2012. 7. https://doi.org/10.1371/journal.pone.0041781 (accessed Nov. 14, 2017)
[9] Kelly, R. P. et al. PLoS ONE. [Online] 2014. 9. https://doi.org/10.1371/journal.pone.0086175 (accessed Nov. 14, 2017)
[10] Lacoursière-Roussel, A. et al. J. Applied Ecol. 2016. 53
[11] DiBattista, J. D. et al. Coral Reefs. 2017. 36.4 (1245-

Icons from AFY Studio, Aleksandr Vector, Jeffrey Qua, Luke Patrick, and Vega Asensio via the Noun Project

Design by Kaitlyn Xiong

Edited by Jenny Wang


Reviewing the relationship between




by Mahesh Krishna


Primary Sclerosing Cholangitis (PSC)

is a chronic and progressive disease

of unknown etiology that affects the

bile ducts in the liver. 1 Inflammatory

Bowel Disease (IBD) is a chronic disease

affecting the gut that comprises

two classifications: Crohn’s disease

(CD) and ulcerative colitis (UC). 2 The

exact mechanism for both diseases is

unknown, yet as high as 80% of patients

with PSC are also diagnosed with IBD. 3

This suggests that these two diseases

are closely related and may even have a

shared ‘trigger.’ This article provides an

overview of the current research carried

out on the relationship between IBD and

PSC by first focusing on theories of the

pathogenesis of PSC and then explaining

how these theories can explain the

association with IBD.




Although the exact pathogenesis of PSC is unknown, three main theories have been proposed: a) the ‘leaky gut’ theory, b) the ‘gut lymphocyte homing’ theory, and c) the ‘toxic bile’ theory. The earliest is the ‘leaky gut’ theory, which proposes that an injury to the mucosa in the gut causes a ‘leakage’ of bacteria into the body’s circulation, eventually reaching the liver and leading to PSC.4 The ‘leaky gut’ theory is concerned with


pro-inflammatory bacteria that escape the gut because of the increased permeability (‘leakiness’) of the intestinal walls.5 These bacteria are then able to reach the bile ducts in the liver and upregulate inflammation through lipopolysaccharides, leading to PSC.6 Since PSC does not seem to respond to immunosuppressants, research suggests that non-immune mechanisms such as bacterial infection, ischemia, and toxicity are also important; these mechanisms are explored in the ‘leaky gut’ theory.7 Genetically susceptible individuals exposed to bacteria may therefore have their hepatic macrophages produce pro-inflammatory cytokines such as TNF and chemokines, which in turn may attract and activate T cells, B cells, monocytes, and neutrophils to the liver and around the bile ducts, causing damage and thus leading to cholestasis and PSC.7 The ‘leaky gut’ theory is a compelling explanation for the pathogenesis of PSC, but it is not the only one.

As more research has been conducted through genome-wide association studies (GWAS), a strong connection to the human leukocyte antigen (HLA) complex was identified, suggesting an autoimmune component in susceptibility to PSC.5 Based on the results of these GWAS, other models, including the ‘gut lymphocyte homing’ theory and the ‘bile acid toxicity’ theory, were developed.1,5 It is also important to note that environmental risk factors likely account for about 50% of disease risk, further complicating the application of any one theory to explain PSC.5 A unified model of the various theories has not yet been proposed.

Figure 1. PSC pathogenesis theories (leaky gut, gut lymphocyte homing, and toxic bile): the complex interactions throughout the body between genetics, the immune system, and the microbiome that lead to PSC-IBD and harmful changes within the body. From Hirschfield et al.1

The ‘gut lymphocyte homing’ theory focuses on memory T cells from the gut moving into the liver via the chemokine receptor CCR9 and integrin α4β7, triggering inflammation once reactivated.9 Recent research has found evidence of memory T cells of common clonal origin in both the liver and gut of patients with PSC-IBD.10 CCR9+ α4β7+ T cells are recruited to the gut by binding to mucosal vascular addressin cell adhesion molecule 1 (MAdCAM-1) and the chemokine CCL25, which are usually expressed only in the gut.9 In patients with PSC, however, MAdCAM-1 and CCL25 are induced in the liver by hepatic inflammation, through pro-inflammatory cytokines (such as TNFα) and activation of VAP1 in the veins near the liver.1 The ‘gut lymphocyte homing’ theory thus attempts to explain the pathogenesis of PSC by incorporating an autoimmune component.

Finally, the ‘bile acid toxicity’ theory concerns the failure of the mechanisms that protect biliary epithelial cells against bile. While bile is an important fluid that aids in digestion, it is also toxic.11 Cholangiocytes, the epithelial cells of the bile ducts, are protected from bile acid toxicity by a variety of mechanisms, including a bicarbonate layer and a calcium-driven channel.5 A disturbance in bile homeostasis is therefore believed to contribute to the development of PSC.5 The regulation of bile acid homeostasis, however, is not completely understood.12 Currently, the recommended medication for someone diagnosed with PSC is ursodeoxycholic acid, although findings on whether it actually helps are controversial. Ursodeoxycholic acid may exploit these protective mechanisms by replacing toxic hydrophobic bile salts in serum, liver, and bile, shielding cholangiocytes against the cytotoxicity of bile acids.11 This theory, too, is vital to understanding the complete picture of the etiology of PSC.



Studies have identified IBD in patients with PSC (PSC-IBD) as a unique entity compared to IBD alone.3 This distinction rests on the different risk genes found between the two diseases and on unique clinical presentations.3,13 Additionally, PSC-IBD patients were found to have a higher risk of colorectal cancer, yet a less active form of IBD.14 About 75% of IBD cases in PSC-IBD are classified as UC.15 As with PSC, the exact mechanism causing IBD has not been found, nor has a cure. Important factors that may trigger an aberrant immune response and chronic gut inflammation include the gut microbiome, infectious agents, stress, and genetics.16

The gut microbiota, in particular, has been found to influence the pathogenesis of IBD.16 Clinical and experimental data show that dysbiosis, an imbalance in the gut microbiome, plays a pivotal role


in the pathogenesis of IBD.16 Therefore, the presence or absence of an organism in the gut microbiome may trigger IBD under certain environmental conditions.






Since PSC and IBD so frequently co-occur, a connection between the two diseases is likely. Observations of PSC occurring after colectomy and of IBD occurring after liver transplantation suggest that ‘gut lymphocyte homing’ might drive both PSC and IBD.17 Both diseases share common antibodies that have a characteristic perinuclear staining pattern.15 However, PSC and IBD have been found to have limited genetic overlap, suggesting that the diseases involve distinct genetic mechanisms.15 In addition to the ‘gut lymphocyte homing’ hypothesis, the ‘leaky gut’ hypothesis also connects the two diseases. The liver receives much of its blood supply through the gut and, as a result, is also exposed to molecules from the gut microbiome.15 The dysbiosis present in PSC-IBD patients may therefore alter the homing of gut-specific lymphocytes or cause the intestine to become more ‘leaky’ to pro-inflammatory bacteria.15 Together, the ‘leaky gut’ and ‘gut lymphocyte homing’ concepts attempt to explain the connection between PSC and IBD.


The pathogenesis of PSC-IBD is still unknown, but it is known to involve a complex mechanism that may overlap between the two diseases. Of the three common theories of the etiology of PSC, two (gut lymphocyte homing and leaky gut) also attempt to explain its interaction with the gut and with IBD. The ‘gut lymphocyte homing’ theory explains the connection through the aberrant expression of MAdCAM-1 and CCL25, which are usually present only in the gut. As a result of this unusual expression, memory T cells with the appropriate homing receptors (chemokine receptor CCR9 and integrin


α4β7) may attack liver cells and gut cells, triggering the inflammation typical of PSC and IBD. The ‘leaky gut’ hypothesis explains the connection between the two diseases through the portal system: the liver is constantly exposed to the intestinal microbiome through the circulation of blood. It is therefore thought that an imbalanced gut microbiome, which has been implicated in IBD, may also serve as a trigger in the development of PSC. The microbiome of PSC-IBD patients has accordingly been the subject of recent research.

The phenotype of PSC-IBD patients is distinct and requires further research to better meet the needs of this patient population. Diagnoses of PSC and UC in PSC-IBD patients also occur at a much earlier age than in the PSC-only and UC-only phenotypes.15 More resources are needed to address this health burden, as the costs for patients with a PSC-IBD phenotype will be much greater given the early age of diagnosis and the diseases’ debilitating effects over time. By elucidating the mechanisms of these diseases, a cure for both PSC and IBD may hopefully be developed in the near future.


[1] Hirschfield G.M. et al. Lancet. 2013, 382, 1587-
[2] Nemati S.; Teimourian S. Mid East J Dig Dis. 2017, 9, 69-80.
[3] Chung B.K.; Hirschfield G.M. Curr Opin Gastro. 2017, 33, 93-98.
[4] Karlsen T.H. Gut. 2016, 65, 1579-1581.
[5] Karlsen T.H. et al. J Hepatol. 2017, 67, 1298-1323.
[6] Pontecorvi V. et al. Ann Transl Med. 2016, 4, 512.
[7] Cullen S.; Chapman R. Best Pract Res Clin Gastro. 2001, 15, 577-589.
[8] Aron J.H.; Bowlus C.L. Semin Immunopathol. 2009, 31, 383-397.
[9] Trivedi P.J.; Adams D.H. J Autoimmun. 2013, 46,
[10] Henriksen E.K. et al. J Hepatol. 2017, 66, 116-122.
[11] Trauner M. et al. N Engl J Med. 1998, 339, 1217-
[12] Chiang J.Y. F1000Res. 2017, 6, 2029.
[13] Sano H. et al. J Hepatobiliary Pancreat Sci. 2011, 18, 154-161.
[14] Sokol H. et al. World J Gastro. 2008, 14, 3497-
[15] Palmela C. et al. Gut Liver. 2018, 12, 17-29.
[16] Nishida A. et al. Clin J Gastro. 2017, 11, 1-10.
[17] Nakazawa T. et al. World J Gastro. 2014, 20,
DESIGN BY: Katrina Cherk

EDITED BY: Shrey Agarwal


the science of beauty

by Krithika Kumar

Henry David Thoreau, in his book Walden, details his almost two-year excursion of simple living in the woods of Massachusetts. He viewed nature as a way to achieve a higher understanding of the universe, and he enjoyed being one with the solitude and beauty it has to offer. Nature, thus, has a way of connecting humans to their emotions and eliciting positive thoughts and feelings. A rainbow after a rainy day, for example, brings a smile to almost anyone’s face, and the aurora borealis, or Northern Lights, is regarded by many as breathtaking, a must-see on planet Earth. But how does nature capture our attention and scintillate our senses? And what are the long-term effects of spending time outdoors?

Beauty in the natural world affects humans subconsciously: spending time outdoors is connected to overall mental well-being. A simple stroll through a forest, for example, can allow us to distance ourselves from our otherwise chaotic thoughts. We are made to regard every stimulus around us, from the sun shining down upon us to the tall trees shrouding us to the small squirrels and insects we are careful not to harm. Compared to the contemporary world, which forces humans to live life in the fast lane through the influence of technology and commerce, nature is Earth at its most basic level. It allows humans to take a step back and a breath in, and it entices us with its many facets of simplicity and serenity. In this way, the environment melts stress away and releases endorphins that can decrease feelings of depression and anxiety.

Nature’s ability to distract us from the present also increases creativity and intelligence. David Strayer of the University of Utah showed that hikers were able to solve more complex puzzles after a four-day backpacking trip compared to a control group. The prefrontal cortex, which governs decision-making and social behavior, is strained by daily technology use and multitasking; this area of the brain can take a break when we respond to purely nature-driven stimuli. Nature allows the brain to reset so that it can perform tasks with renewed energy.

A change of environment can also make humans kinder and more generous. There is an out-of-body feeling associated with viewing an awe-inspiring landscape that makes one feel part of something bigger than the present. It can make day-to-day inconveniences seem inconsequential and remind us that there is more to the world than what goes on in our own lives. Humans are also more likely to act ethically when faced with moral dilemmas after spending time in nature. Experiments conducted at the University of California, Berkeley, found that participants playing the Dictator Game (which measures the degree to which individuals act out of self-interest) were more likely to be generous to their peers after being exposed to alluring nature scenes.

Planet Earth’s most primitive offerings actually present us with complex and diverse benefits. A quick breath of fresh air can melt away feelings of stress and anxiety while increasing cognitive focus and creativity. Perhaps we can create our own “Walden” and take a break from studying or working to simply enjoy the outdoors and spend time appreciating the many sides of our ever-changing planet.


Vector from Freepik





hereditary cancer syndromes


“Life comes with many challenges. The ones that should not scare us are the ones we can take on and take control of.” As hard as it is to believe, this is a quote from Angelina Jolie’s book about hereditary breast cancers, in which she encourages a more thorough integration of genomics into the field of oncology. Recently, celebrities such as Jolie have spoken out about the BRCA genes and their personal experiences with hereditary cancer syndromes. Jolie’s double mastectomy and the media’s portrayal of her treatment have helped to drastically increase awareness of genetic testing among the general population.

Hereditary cancer syndromes, and particularly hereditary breast cancers, are primarily associated with mutations in the BRCA1 and BRCA2 genes. An individual carrying a BRCA mutation can have over a 70% chance of developing cancer given the right combination of genetic and environmental factors. With the odds of developing cancer so high, it seems obvious that any measure we can take to lower this penetrance should be fervently supported by all medical professionals.


There is, however, an important ethical dilemma that arises whenever we consider using these new technologies. On the one hand, genomics, the technological aspect of genetics concerned with sequencing and analyzing an organism’s genome, has greatly improved the prognosis for cancer patients. Genetic profiling can help individuals with hereditary breast cancers through every stage of their disease, from diagnosis to treatment. One interesting use of genetic profiling is using the BRCA genes to help classify tumors: because patients with the same BRCA mutation most likely have the same type of tumor, classifying one individual’s tumor means you have classified the other’s! More importantly, by providing a means of pre-symptomatic testing, genetic profiling lets patients pursue precautionary measures such as estrogen-regulating drugs and preventative surgeries like mastectomies (removal of breast tissue).

On the other hand, it is simply not possible to test every individual for the BRCA genes. For one, the tests are extremely costly. A geneticist cannot indiscriminately recommend genetic testing to every patient, as DNA sequencing tests have yet to be covered by every health insurance plan; without insurance, one of these tests can cost from $475 to over $4,000. Furthermore, the results of such a test can put an individual at risk of genetic discrimination. Although GINA, the Genetic Information Nondiscrimination Act, protects against genetic discrimination, such as having to pay an inflated premium because genetic test results reveal a predisposition to a severe genetic disease, it applies only to health insurance, not life insurance. Having young individuals get tested for the BRCA genes thus comes with the possibility of hiking up their life insurance premiums later in life. Finally, an individual’s mental wellbeing is at stake, because fear surrounding one’s diagnosis can understandably cause anxiety and/or depression.

So the question remains: do we encourage members of the general public to get tested for the BRCA genes if they believe they have a strong family history of hereditary cancers? Although no answer to this question pleases all medical professionals, one thing is certain: an ordinary individual can possibly prevent cancer in his or her family with the help of genetic testing. When used cautiously, genetic testing is an invaluable tool in all stages of cancer treatment and prevention. It seems clear to me that advocating for widespread awareness of the advantages of genetic testing in reducing cancer penetrance is one of the most beneficial ways to prevent the growth of tumors in an individual and to control inheritance through generations. Why are we waiting, then? Let’s take control and not let cancer scare us.







Rice University Center for Civic Leadership

Program in Writing and Communication

Rich Family Endowment

Wiess School of Natural Sciences

George R. Brown School of Engineering

The Energy Institute High School

Young Women’s College Preparatory Academy
