Educational Psychology—Limitations and Possibilities

B. F. Skinner

There are primarily four types of reinforcement schedules: (a) fixed-interval (FI), (b) fixed-ratio (FR), (c) variable-interval (VI), and (d) variable-ratio (VR). In FI reinforcement, organisms are presented with the reinforcing stimulus on a fixed time schedule. When an organism becomes conditioned to an FI schedule of reinforcement, its behavior becomes stable. The general rule with FI reinforcement is that an organism’s rate of responding is inversely proportional to the interval between reinforcements. Under this schedule, organisms learn that responses early in the interval are never reinforced, so they tend to pace their responses and “pile up” responses toward the end of the interval. In an FR schedule, the reinforcing stimulus is provided after the organism has exhibited a fixed number of responses. With this schedule, the organism learns that rapid responding is important: there is a direct correlation between the rate of responding and the rate of reinforcement, that is, the higher the rate of responding, the higher the rate of reinforcement. In a VI reinforcement schedule, time is again the critical factor, but the amount of time that must pass before the next reinforcing stimulus is presented keeps changing, so it is not possible for an organism to learn the interval accurately. Organisms tend to respond at an extremely stable rate under the VI schedule. In a VR reinforcement schedule, an organism is given the reinforcing stimulus after a varying number of responses have been exhibited; in short, a variable number of responses is required to produce successive reinforcers. Reinforcing well-learned behaviors on a VR schedule generates extraordinarily high rates of performance.
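The four schedule rules above can be sketched as simple decision predicates. This is an illustrative sketch, not Skinner's own formalism: the function names, parameters, and default values are all hypothetical, and each function only answers one question — given the organism's behavior so far, is the reinforcer delivered now?

```python
import random

def fixed_ratio(responses_since_last, ratio=10):
    # FR: reinforce after a fixed number of responses, so responding
    # faster directly raises the rate of reinforcement.
    return responses_since_last >= ratio

def variable_ratio(mean_ratio=10):
    # VR: the required count varies around a mean, so each response has
    # roughly a 1-in-mean_ratio chance of producing the reinforcer.
    return random.random() < 1.0 / mean_ratio

def fixed_interval(seconds_since_last, interval=60.0):
    # FI: the first response after a fixed interval is reinforced;
    # responses earlier in the interval are never reinforced.
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, current_requirement):
    # VI: the required interval is redrawn after every reinforcer, so
    # the organism cannot learn it and responds at a steady rate.
    return seconds_since_last >= current_requirement
```

Note how the two "ratio" rules depend only on the response count while the two "interval" rules depend only on elapsed time — this is the distinction the paragraph above draws between schedules that reward rapid responding and schedules under which pacing emerges.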

An overview of operant conditioning has been presented. Behavior, which is a series of stimulus–response connections, is followed by a consequence, and the nature of the consequence (e.g., presence or absence of a reinforcing stimulus) modifies the organism’s tendency to exhibit or inhibit the behavior in the future.
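The contingency just summarized can be sketched in a few lines, under the simplifying assumption that the organism's tendency to exhibit a behavior is a single probability nudged up or down by each consequence. The function name and step size are illustrative, not part of the theory:

```python
def update_tendency(p_respond, reinforced, step=0.1):
    # One behavior-consequence cycle: the consequence modifies the
    # organism's future tendency to exhibit the behavior.
    if reinforced:
        # a reinforcing consequence strengthens the behavior
        return min(1.0, p_respond + step)
    # absence of reinforcement weakens it (extinction)
    return max(0.0, p_respond - step)
```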

OPERANT CONDITIONING APPLIED TO EDUCATIONAL PSYCHOLOGY

Most biographical accounts of B. F. Skinner suggest that Skinner’s interest in educational psychology began on that fateful day of November 11, 1953, Father’s Day, when Skinner visited his daughter’s fourth-grade arithmetic class. While sitting at the back of his daughter’s classroom, Skinner observed that the students were not receiving prompt feedback or reinforcement from their teacher and were all moving at the same pace despite differences in ability and preparation. Skinner had researched delay of reinforcement and knew how it hampered performance. If mathematical-problem-solving behavior is perceived as a complex series of stimulus–response connections that have to be effectively established, then the teacher in Skinner’s daughter’s fourth-grade arithmetic class definitely needed help. It was simply impossible for a teacher with twenty or thirty children to shape mathematical-problem-solving behavior in each student. In operant conditioning theory, the concept of shaping requires that the best response of the organism be immediately reinforced. In the math class, however, some of the students had no idea how to solve the problems, while other students breezed through the exercise and learned nothing new. Furthermore, the children did not find out whether one problem was correct before doing the next problem. They had to answer a whole page before getting any feedback, and then probably not until the next day.

That afternoon, Skinner constructed his first teaching machine. This first machine was a device that presented problems to learners in random order. It simply practiced and rehearsed skills or behaviors already learned; learners did not acquire any new responses or new behaviors. A few years later Skinner developed and incorporated programmed instruction into the teaching machines. Learners would respond to content that was broken down into small steps. The first responses of each content sequence
