Encyclopedia of Evolution

The testing of hypotheses accumulates gradually, but changes in theory can be rapid.

Scientists accept the simplest hypothesis that will explain the observations. When given a choice between a simple, straightforward explanation and a complex one (especially one that requires numerous assumptions), scientists will choose the former. This is referred to as Occam's Razor, named after William of Occam (or Ockham), a medieval English philosopher and theologian.

Scientific research uses null hypotheses and statistical analysis to determine whether the results might have occurred by chance, and it accepts the results only if they are very unlikely to have occurred by chance. In everyday life, observers frequently notice patterns in events and objects. Millions of years of evolution have given human brains the habit of looking for patterns. However, these patterns may be the product of imagination rather than a component of reality. Scientists are no different from other people in having brains that can deceive them into believing false patterns, but they take special precautions to prevent this from happening: a set of precautions usually absent from nonscientific ways of knowing. For example, three good days on the stock exchange may look like a trend, and some investors would take it for one, yet a statistical analysis may show that such a three-day streak could readily occur by chance. The scientific method is, therefore, like a self-imposed yoke: it restricts scientists from wandering off in erroneous directions, as cows or humans are wont to do, and in the process of restricting them it allows scientists to do the useful work of pulling the cart of knowledge forward.
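The stock-market example can be made concrete with a little arithmetic. The following is a minimal sketch, not from the encyclopedia itself, which assumes (purely for illustration) that each trading day is an independent 50/50 event:

```python
# Probability of three consecutive "up" days on the stock exchange,
# under the simplifying null model that each day is an independent
# 50/50 coin flip.
p_up = 0.5
p_streak = p_up ** 3  # 0.5 * 0.5 * 0.5 = 0.125

print(f"P(three up days in a row) = {p_streak}")  # 0.125, i.e. 1 chance in 8
```

A probability of 1 in 8 is far above the 5 percent level that scientists conventionally treat as "unlikely to be chance," so under this model a three-day streak by itself is weak evidence of a trend.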

In order to test a hypothesis about a process, a scientist must specify what would happen if that process were not occurring. This null hypothesis is therefore the alternative to the hypothesis the scientist is investigating. When experiments are involved, the null hypothesis is usually investigated by a control, which is just like the experimental treatment in every way except for the factor being investigated. One of the earliest, and most famous, examples of a null hypothesis control was the Italian scholar Francesco Redi's 17th-century experiment that tested the hypothesis of biogenesis. Biogenesis asserts that life comes from preexisting life: if maggots appear in rotting meat, it must be because flies laid eggs there. The null hypothesis was that life need not come from preexisting life; that is, maggots can arise spontaneously from rotting meat, even in the absence of flies. Redi took two pieces of meat and put them in two jars, but covered one of the jars with a screen that excluded flies. Both pieces of meat rotted, but only the meat in the open jar produced maggots.

Almost anything can happen by chance, once in a while. In the case of the flies and the maggots, the results are pretty clear. But in many or most other scientific investigations, the results are far less clear. How can a scientist be reasonably sure that the results did not "just happen" by chance? The science of statistics allows the calculations of probability to be applied to hypothesis testing.

There are two kinds of error that a scientist can make regarding these probabilities. The first kind (called Type II error) occurs when the scientist concludes that the results were due to chance, when in reality the hypothesis was correct. The scientist failed to find something that was real. This is not considered a serious error, because later investigation may allow more chances to discover the truth. The second kind (Type I error) occurs when the scientist concludes that the hypothesis was correct ("Eureka!") when in reality the results were due to chance. This is a more serious kind of error, because the scientist and his or her peers around the world might conduct further investigations and waste time and effort under the misguided notion that the hypothesis was correct. Therefore scientists try their best to avoid Type I error. Since they can never be totally sure, scientists conventionally accept a probability of 5 percent as the acceptable risk of Type I error. If the probability is less than 1 in 20 (p < 0.05) that the results could have occurred by chance, then scientists generally believe the results. The calculations of probability are quite complex, and most scientists let computers do these calculations.
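To illustrate the p < 0.05 criterion, here is a minimal sketch of an exact one-sided binomial test, using only the Python standard library. The scenario (9 of 10 subjects improving under a treatment) and the function name are hypothetical, invented for this example:

```python
from math import comb

def binom_p_value(successes: int, n: int, p_null: float = 0.5) -> float:
    """One-sided exact binomial p-value: the probability of observing
    `successes` or more out of `n` trials if only chance (p_null) operates."""
    return sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical result: 9 of 10 subjects improve. The null hypothesis
# says each subject had a 50/50 chance of improving anyway.
p = binom_p_value(9, 10)
print(f"p = {p:.4f}")  # about 0.0107
if p < 0.05:
    print("Reject the null hypothesis: chance alone is an unlikely explanation.")
```

Because 0.0107 is below the 1-in-20 threshold, a scientist would reject the null hypothesis of pure chance; had only 6 of 10 subjects improved, the p-value would be about 0.38, a result that could readily occur by chance.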

Scientists take special precautions to avoid biased observations. Biased observations occur when the scientist expects certain results, even wants them to occur, and then tends to favor them when he or she sees them. To avoid this very human tendency to see what they want to see, scientists use objective measurements (temperature, weight, or voltage, for example) rather than subjective assessments, and they often design their studies to be blind. That is, the scientist may not even know the sources of some of the specimens that he or she measures. One scientist may gather specimens and label them simply with a number. Another scientist receives the specimens, identified only by their label numbers, and makes measurements on them. This is called a blind experiment because the scientist who is making the measurements cannot be biased by knowledge of where the specimens came from. Blinding is particularly important in drug tests, because patients may report feeling better, and may actually improve, if they think they have received the drug (this is called the placebo effect). Even if the scientist does not tell the patients which pills are real and which are sugar pills, the scientist can subconsciously communicate the information, for example by tone of voice. In such tests, therefore, a double-blind procedure is routinely used, in which neither the investigator nor the subjects know which pill is which.

Scientific research frequently involves experimentation, and when possible scientists conduct experiments. In an experiment, the scientist imposes conditions upon the phenomena being studied so that, to the greatest extent possible, only one factor is allowed to vary. In a laboratory, conditions such as lighting, temperature, and humidity can all be controlled. In the field, conditions may be quite variable, but if the experimental treatment and the control are side by side, the variability of all factors except the one being studied might be the same and therefore cancel out of the analysis.

Experiments are not always possible. Sometimes the arena of investigation is simply too big. How can one conduct an experiment with a whole mountain? Actually, some ecologists in the 1970s studied the effects of clear-cutting, strip cutting, and burning on the flow of nutrients in stream
