Go ahead and eat high-fiber, bran-rich cereal if you like it—just don’t expect it to lower your risk of cancer.

Testing a Dietary System

Science has developed a rigorous process that can, in principle, determine whether a food contributes to (or helps prevent) a particular disease. In practice, this scientific process sometimes breaks down, largely because this kind of science, known as nutritional epidemiology, is a blend of human physiology and sociology, both of which are tremendously complex and difficult to control experimentally.

The first step in scientifically testing a dietary system is to express it in the form of a hypothesis, which is a statement about the relationship between measurable quantities whose veracity can be supported or, more important, contradicted by experimental evidence. Burkitt’s hypothesis, for example, was “A diet rich in fiber reduces the risk of colon cancer.”

Epidemiologists test their hypotheses in several ways (see Types of Nutritional Epidemiology Studies, page 221). The most rigorous is a prospective, randomized, controlled clinical trial. Burkitt was a surgeon, however, not an epidemiologist, and he based his enthusiasm for his high-fiber dietary system on anecdotal evidence from an ecological study. Many years after his idea caught on, large, randomized, controlled trials proved his hypothesis wrong, an unfortunately common fate for hypotheses about diet and health.

The first hurdle a new nutritional hypothesis must clear is usually a small-scale study. Relatively cheap, fast, and easy to run, small-scale studies are useful for selecting dietary systems that are worth testing in a more rigorous way.

Small studies do not usually settle a scientific question definitively, however, because they suffer from various kinds of errors and bias that undermine the reliability of the results. Sampling error is familiar from opinion polls: it reflects the fact that whenever you choose a subset of people to represent a larger group, or humanity as a whole, sheer coincidence might give you a group that produces a misleading answer.
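To see sampling error at work, here is a minimal Python sketch; the population size, the 10% true disease rate, and the sample sizes are illustrative assumptions, not figures from the text. Repeated small samples of the same population scatter widely around the true rate, while a large sample settles close to it.

import random

# Illustrative assumptions: a population of 100,000 people in which
# exactly 10% have the disease of interest.
random.seed(1)
population = [1] * 10_000 + [0] * 90_000   # 1 = has disease, 0 = does not

def estimated_rate(sample_size):
    """Estimate the disease rate from one randomly drawn subset."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

# Five repeated "studies" at each size: small samples miss the true 10%
# rate by sheer coincidence; the large sample barely moves.
for size in (50, 500, 50_000):
    estimates = [round(estimated_rate(size), 3) for _ in range(5)]
    print(f"sample size {size:>6}: {estimates}")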

Bias comes in several varieties. Recall bias often plagues nutritional studies when researchers ask people to remember how frequently they have eaten certain foods in recent months or to keep a food diary. In either case, people may subconsciously suppress memories of eating certain foods and exaggerate their consumption of others. Prospective clinical studies that actually measure or control subjects’ diets can eliminate recall bias, but these are relatively rare.

Observation bias occurs when the act of studying a person changes his or her behavior. Weight-loss intervention studies frequently overestimate the benefit of the proposed diet, for example, because participants stick to the diet only as long as the scientists track their progress. Once the study ends, the subjects tend to slip off the diet and regain their weight.

In another form of observation bias, researchers tend to treat patients receiving an intervention differently from the “control” patients, who receive only a placebo. Double-blinded trials, in which neither the doctor nor the patient knows who is getting the intervention, significantly reduce this bias. But they are hard to do when it comes to food.

Selection bias afflicts nearly every nutritional study because it is so hard to recruit a group of participants that mimics the composition of the population overall. Almost always, one arm of the study ends up with more men, for example, or fewer African-Americans, or more tall people than the other arm has. As a result, it is rarely possible to know whether the findings of the study will apply to groups that differ from the study cohorts.

Randomizing participants into different arms of the study helps reduce selection bias. But randomization cannot overcome the limits of a study that includes only men (as some have done) or only female nurses (such as the Nurses’ Health Study).
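As a rough illustration of how randomization works (the cohort below is hypothetical, and sex is the only trait tracked), this Python sketch shuffles a recruited group and splits it into two arms. On average the arms come out balanced even on traits nobody measured, but the procedure cannot conjure up participants the cohort never contained.

import random

# Hypothetical cohort: 200 recruits, each tagged only with an ID and sex.
random.seed(1)
participants = [{"id": i, "sex": random.choice(["F", "M"])} for i in range(1, 201)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(participants)
half = len(participants) // 2
intervention_arm, control_arm = participants[:half], participants[half:]

# The arms end up roughly balanced by sex without anyone planning it, and
# the same tends to hold for traits that were never recorded at all.
for name, arm in (("intervention", intervention_arm), ("control", control_arm)):
    women = sum(p["sex"] == "F" for p in arm)
    print(f"{name}: {len(arm)} participants, {women} women")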

Selection bias sometimes occurs in a more insidious form, when researchers deliberately try to skew the outcome. Ancel Keys, M.D., the initial champion of a link between dietary fat and heart disease, was accused of such intentional selection bias by other scientists.

Even if a study is large enough to reduce sampling error and careful enough to avoid significant bias, confounding effects can produce misleading results. Confounding occurs when two unrelated characteristics, such as gray hair and colon cancer, appear strongly connected because both are affected by the actual causal factor (age, in this example).
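A small simulation can make the gray-hair example concrete. Everything in this Python sketch is invented for illustration: age is assumed to drive both graying and colon cancer risk, and hair color is given no effect at all, yet the gray-haired group still shows the higher cancer rate.

import random

# Invented model: age raises the chance of gray hair and, separately,
# the chance of colon cancer; hair color itself does nothing.
random.seed(1)

def simulate_person():
    age = random.randint(30, 90)
    gray_hair = random.random() < (age - 30) / 60          # grows with age
    colon_cancer = random.random() < 0.001 * (age - 30)    # also grows with age
    return gray_hair, colon_cancer

people = [simulate_person() for _ in range(200_000)]

def cancer_rate(gray):
    group = [cancer for has_gray, cancer in people if has_gray == gray]
    return sum(group) / len(group)

# Gray hair "predicts" colon cancer only because age confounds the comparison.
print(f"gray hair:    {cancer_rate(True):.4f}")
print(f"no gray hair: {cancer_rate(False):.4f}")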

When the studies have been done and the papers have been written, publication bias can come into play. A recent study shows that clinical trials with positive results are more likely to be published in scientific journals than studies that show that a treatment did not work. This means that negative results often disappear unheeded.

Imagine if your local newspaper published only good news and never informed you about murders, break-ins, and assaults. You would imagine that your local police force was 100% effective. Publication bias similarly deprives doctors and their patients of all the relevant facts about the risk of disease as they consider the relative merits of a particular treatment or dietary system.
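The effect is easy to reproduce in a toy Python simulation; the trial count, arm sizes, event rate, and the rule for what counts as a “positive” result below are all invented for illustration. The diet does nothing in every trial, yet the subset of trials that happens to look favorable, and so gets published, paints a rosier picture than the full set.

import random

# Invented setup: 100 small trials of a diet that truly has no effect, so
# both arms share the same 20% underlying event rate.
random.seed(1)

def run_trial(arm_size=100, true_rate=0.20):
    control = sum(random.random() < true_rate for _ in range(arm_size)) / arm_size
    diet = sum(random.random() < true_rate for _ in range(arm_size)) / arm_size
    return control, diet

trials = [run_trial() for _ in range(100)]

# Pretend journals accept a trial only when the diet arm looks clearly better.
published = [(c, d) for c, d in trials if d <= c - 0.05]

def mean(values):
    return sum(values) / len(values)

print(f"diet-arm event rate, all {len(trials)} trials: {mean([d for _, d in trials]):.3f}")
if published:
    print(f"diet-arm event rate, {len(published)} published trials: {mean([d for _, d in published]):.3f}")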

When scientists compare the risks experienced in various arms of a study, they often use the terms hazard ratio, odds ratio, or relative risk. These numbers have a similar interpretation; namely, whether the risk of getting a disease was higher or lower in the intervention group than in the control group. A hazard ratio close to 1.0 tells us that there was little difference between the intervention and control groups, so the intervention did not work.

In the Women’s Health Initiative study, for example, the women who ate a low-fat, high-fiber diet were slightly more likely to get colorectal cancer than were the women who ate their normal diets. The hazard ratio was 1.08, meaning those in the low-fat, high fruit-and-grain group were 8% more likely to get cancer (see page 217).
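For readers who want to see the arithmetic behind a number like 1.08, here is a short Python sketch. The event counts are made up for illustration and are not taken from the Women’s Health Initiative; the point is only that such a ratio is the event rate in one arm divided by the event rate in the other.

# Made-up event counts, chosen only so the arithmetic is easy to follow.
intervention_events, intervention_total = 216, 20_000
control_events, control_total = 200, 20_000

intervention_rate = intervention_events / intervention_total   # 0.0108
control_rate = control_events / control_total                  # 0.0100

# Relative risk: rate in the intervention arm over rate in the control arm.
relative_risk = intervention_rate / control_rate

print(f"relative risk: {relative_risk:.2f}")                    # 1.08
print(f"{relative_risk - 1:.0%} higher risk in the intervention arm")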

Very strong statistical associations—such as the observation that smokers are about 10 times as likely to get lung cancer as nonsmokers are—can persuasively link a behavior to a disease even if scientists are uncertain of the detailed causal mechanisms at work. But in nutritional epidemiology, associations between diet and health outcomes are generally far weaker—closer to 10% than to 1,000%—so the links are much less clear.
