STUDENT EVALUATION OF CLINICAL EDUCATION ENVIRONMENT
STUDENT EVALUATION OF CLINICAL EDUCATION ENVIRONMENT (SECEE):<br />
INSTRUMENT DEVELOPMENT AND VALIDATION<br />
By<br />
Kari Sand-Jecklin<br />
A DISSERTATION<br />
Submitted to<br />
The College of Human Resources and Education<br />
at<br />
West Virginia University<br />
Doctoral Committee: Anne H. Nardi PhD, chair<br />
Julie S. Vargas, PhD<br />
Rayne S. Dennison, PhD<br />
Karen E. Miles, EdD<br />
Lynne Ostrow, EdD<br />
in partial fulfillment of the requirements<br />
for the degree of<br />
Doctor of Education<br />
in<br />
Educational Psychology<br />
Department of Advanced Educational Studies<br />
Morgantown, West Virginia<br />
September 29, 1998<br />
Keywords: Student Evaluation of Clinical Education Environment<br />
Copyright 1998, Kari Sand-Jecklin
ABSTRACT<br />
STUDENT EVALUATION OF CLINICAL EDUCATION ENVIRONMENT (SECEE):<br />
INSTRUMENT DEVELOPMENT AND VALIDATION<br />
By Kari Sand-Jecklin<br />
This paper describes the refinement and testing of the Student Evaluation of Clinical<br />
Education Environment (SECEE) inventory, an instrument designed to measure nursing student<br />
perceptions of the clinical learning environment. Although a quality student clinical experience<br />
is considered critical to nursing education, no comprehensive instruments measuring the<br />
clinical learning environment have been published.<br />
The SECEE inventory contains 29 forced-choice items divided among four scales:<br />
Communication and Feedback, Learning Opportunities, Learning Support, and Department<br />
Atmosphere. Items were included based on input from nursing students and faculty, a review of<br />
the literature and sample clinical agency contracts, and data from administration of the original<br />
version of the SECEE instrument. Items asking students to identify aspects of the clinical<br />
setting that helped or hindered learning were also included.<br />
A convenience sample of nursing students from two small liberal arts institutions (SMW<br />
and SMA) and one large university (LMA) completed the SECEE inventory at the end of the<br />
spring 1998 semester. A sub-sample of students completed the inventory twice during the<br />
semester, for test-retest reliability determination.<br />
Data analysis indicated that students responded consistently to the instrument as a whole<br />
and to the four scales. Rearrangement of four items within scales resulted in further<br />
improvements in reliability. Test-retest correlations for all scales were statistically significant.<br />
Correlations between individual student evaluations of two distinct clinical sites were not<br />
significant for any of the four scales, indicating that students responded differently to the<br />
instrument when evaluating different clinical sites.<br />
Significant differences between institutions were found for all scale scores.<br />
scale scores. Differences were also found between academic levels of students within each<br />
institution, and between clinical site groups within institutions SMW and SMA. Student<br />
narrative comments supported inclusion of the majority of forced-choice items and allowed<br />
identification of items that may be beneficial to include in future inventory revisions.<br />
Overall, the SECEE inventory appears to be a reasonably valid and reliable measure.<br />
Minor changes are suggested for future revisions of the instrument.<br />
ACKNOWLEDGEMENTS<br />
An endeavor such as this is not accomplished without the support and encouragement of<br />
others. I am grateful to have had the opportunity to study with faculty and students who so<br />
willingly have shared their knowledge and experience, and supported others in accomplishing<br />
their goals. I am also grateful to the Swiger Foundation for providing me the opportunity to<br />
focus solely on studies, without financial concerns.<br />
My sincere thanks to my doctoral committee who supported this research endeavor and<br />
made the process of dissertation completion as painless as possible. Thanks particularly to Dr.<br />
Anne Nardi who has provided me with unwavering support and encouragement throughout the<br />
last three years. Thanks to Dr. Rayne Sperling Dennison for hours of help with data analysis<br />
and to Dr. Julie Vargas for helping me maintain a “balanced perspective.” Also, my<br />
appreciation to Dr. Karen Miles and Dr. Lynne Ostrow, who helped me to focus the application<br />
of all that I have learned to the field of Nursing.<br />
Without the assistance of others, data collection would have been nearly impossible. I<br />
greatly appreciate the assistance of Dr. Donna Hartweg, Dr. Shari Metcalfe, Dr. Joan Clites and<br />
Dr. Karen Miles for their assistance with data collection. Thanks also to the nursing faculty at<br />
the three institutions used for data collection, for making time during class to gather data.<br />
I am also very thankful for the support of my husband, Dave, and children Kallie and<br />
Bryce during the past three years. The road has been long, but the end is in sight. Finally, I<br />
thank God for providing me the opportunity to accomplish this longstanding goal and for<br />
keeping my family intact during the process.<br />
Table of Contents<br />
List of Tables vii<br />
Chapter 1: Introduction 1<br />
Definition of Terms 4<br />
Chapter 2: Review of Literature 6<br />
Theoretical Perspectives on the Learning Environment 6<br />
Theoretical Perspectives on Evaluation of the Learning Environment 12<br />
Empirical Investigations of the Applied Learning Environment in Nursing 13<br />
Empirical Investigations of the Traditional Classroom Learning Environment 19<br />
Statement of the Problem 31<br />
Preliminary Investigation 32<br />
Research Questions 37<br />
Chapter 3: Methods 39<br />
Instrument Revisions 39<br />
Population 43<br />
Procedure 44<br />
Chapter 4: Results 46<br />
Descriptive Analysis 47<br />
Missing Data 48<br />
Descriptive Statistics 49<br />
Parametric Analyses 50<br />
Institutional Differences in Student Response to SECEE Instrument 54<br />
Instrument Reliability Determination 57<br />
Test-Retest Reliability 60<br />
Differences in Scale Scores According to Clinical Site Groups 67<br />
Analysis of Narrative Inventory Data 75<br />
Student Narrative Responses Reflecting SECEE Item Content 77<br />
Student Narrative Responses Not Addressed by Forced Choice Item Content 82<br />
Chapter 5: Discussion 85<br />
Research Question 1 85<br />
Brevity and Practicality of Use 85<br />
Content Validity 86<br />
Research Question 2 88<br />
Inventory Reliability 88<br />
Scale Reliability 88<br />
Research Question 3 91<br />
Construct Validity 91<br />
Research Question 4 94<br />
Removal of SECEE Items 94<br />
Table of Contents, Continued<br />
Demonstration of Instrument Validity 95<br />
Revisions to Improve Data Gathering and Analysis 96<br />
Summary of Proposed Changes in the SECEE Inventory 100<br />
Study Limitations 101<br />
Research Implications 104<br />
Related Research Issues 105<br />
Conclusion 107<br />
References 109<br />
Appendices<br />
A. Original SECEE Instrument 115<br />
B. SECEE Inventory Items Categorized by Scales 118<br />
C. Revised SECEE Instrument 121<br />
D. Table of Specifications for SECEE Inventory 126<br />
E. Statement to Participants 129<br />
F. IRB Approval Letter 131<br />
Curriculum Vita 133<br />
List of Tables<br />
1. Identification and Frequency of Factors (Scales) contained in the LEI, CES,<br />
ICEQ, MCI, CLUCEI and SLEI instruments 29<br />
2. SECEE Inventory Forced Choice Item Means and Standard Deviations 51<br />
3. Scale Means and Standard Deviations by Institutions 55<br />
4. Statistical Results for Institution by Scale Score ANOVA 56<br />
5. Significance Levels for Institutional Comparisons According to Scale Scores 58<br />
6. Reliability Coefficients for SECEE Instrument and Individual Scales by Institution 59<br />
7. Scale Score Correlations Between Pretest and End of Semester Inventories 62<br />
8. ANOVA Results, Student Level by Scale Score 63<br />
9. Significance Levels for Student Comparisons According to Scale Scores 64<br />
10. Scale Score Means and Standard Deviations by Student Level 65<br />
11. ANOVA Statistics, Clinical Site Group by Scale Score 70<br />
12. Scale Score Means and Standard Deviations by Clinical Site Groups 71<br />
13. Significance Levels for Post Hoc Differences Between Clinical Site Groups at SMW 74<br />
14. Comparison of Number of Student Narrative Comments on Pretest and End of Semester<br />
Inventories 76<br />
15. Category of Student Comments to SECEE Open-Ended Questions 78<br />
16. Frequencies of Student Responses to Request to Identify Aspects of the Clinical<br />
Setting that Helped Learning—Issues Addressed by Scaled SECEE Items 80<br />
17. Frequencies of Student Responses to Request to Identify Aspects of the Clinical<br />
Setting that Hindered Learning—Issues Addressed by Scaled SECEE Items 81<br />
18. Frequencies of Student Responses to Request to Identify Aspects of the Clinical<br />
Setting that Helped Learning—Issues Not Addressed by Scaled SECEE Items 83<br />
List of Tables, Continued<br />
19. Frequencies of Student Responses to Request to Identify Aspects of the Clinical<br />
Setting that Hindered Learning—Issues Not Addressed by Scaled SECEE Items 84<br />
20. Reliability Coefficients for Scales by Institution (Revised Item Placement) 90<br />
Student Evaluation of Clinical Education Environment (SECEE):<br />
Instrument Development and Validation<br />
Chapter 1<br />
Introduction<br />
According to Dunn and Burnett (1995), the learning environment consists of all the<br />
conditions and forces within a setting that impact learning. As Shuell (1996, p. 726) stated,<br />
“Teachers and students work together in a rich psychological soup of a classroom, a soup<br />
comprised of cognitive, social, cultural, affective, emotional, motivational and curricular<br />
factors.” The learning environment provides the setting for learning and at the same time acts as<br />
a “participant” in teaching and learning (Loughlin & Suina, 1982). It can support, impede, or<br />
limit learning opportunities for students (Reilly & Oermann, 1992).<br />
Several authors have identified the importance of the student learning environment and<br />
have discussed various aspects of this environment. Loughlin and Suina (1982) authored an<br />
entire text geared toward teacher construction of a physical environment conducive to student<br />
learning, using student behavior as a basis for arranging the environment. Shuell (1996)<br />
identified several characteristics of classroom learning environments: multidimensionality,<br />
simultaneity, immediacy, unpredictability, publicness, and history. Earlier, Moos (1979) had<br />
conceptualized the student learning environment in terms of three dimensions. The relationship<br />
dimension includes the extent to which people are involved in the environment, how much they<br />
help (support) each other, and the amount of open expression allowed. The personal growth<br />
dimension includes the degree of task orientation, competition, independence, social orientation,<br />
and intellectuality of the environment. The system maintenance and system change dimension<br />
includes organization, rule clarity, teacher control, and innovation in the environment.<br />
Most formal education at elementary and secondary school levels takes place within the<br />
traditional classroom setting. Aside from a few field trips and secondary school vocational<br />
training programs or science lab classes, the learning environment consists of a traditional<br />
classroom structure. In contrast, at the postsecondary level, the learning environment may<br />
include multiple experiences or even entire courses in applied learning settings. For instance, in<br />
the area of nursing, students must complete a number of courses that involve experiences in the<br />
clinical nursing environment. The same is true for most university students in professional or<br />
technical programs.<br />
Although the traditional classroom may include authentic activities that reflect activities<br />
of real life and practice in the context in which learning will be put to use, there are several<br />
differences between the applied or experiential learning environment and the traditional<br />
classroom setting (Resnick, 1987). Traditional school learning generally revolves around<br />
individual effort, pure thought activities (often without the support of references and tools),<br />
symbolic rather than tangible manipulation of objects, and generalized competencies. In<br />
contrast, learning outside the school setting involves shared cognition, tool use in cognition,<br />
contextual reasoning, and situation specific competencies (Resnick, 1987). The traditional<br />
classroom setting is self-contained and the instructor can monitor all students at one time.<br />
Students generally work on similar tasks and the teacher has a significant amount of control over<br />
the physical and social learning environment. Interactions occur between students and their<br />
peers as well as between students and teacher. In contrast, the instructor has much less control<br />
over the environment in the applied learning setting. Students may be working on the<br />
development of different skills and they are often not within direct view of the instructor. It is<br />
also the case that instructor assistance may not be immediately available to the students. In the<br />
applied learning setting, students interact with several other people in addition to their peers and<br />
the instructor. Professionals in the applied setting often serve as mentors for students, but may<br />
have little formal instruction in mentoring or in creating an environment conducive to student<br />
learning. These additional variables inherent in an applied learning setting further highlight the<br />
importance of assuring the adequacy of the environment in these learning situations.<br />
The clinical nursing education context is a prime example of an applied learning setting.<br />
Undergraduate education in nursing provides a foundation for nursing practice in a variety of<br />
settings with clients in all stages of the lifespan. The clinical component of nursing education is<br />
critical to the overall curriculum, as it provides realistic experiences and allows learners to<br />
“apply knowledge to practice, to develop problem solving and decision making skills, and to<br />
practice responsibility for their own actions” (Reilly & Oermann, 1992, p. 115). Although<br />
typical baccalaureate nursing students spend between eight and sixteen hours per week in a<br />
clinical learning environment (Polifroni, Packard, & Shah, 1995), the mere passage of time in the<br />
clinical environment does not itself ensure clinical competence (Scheetz, 1989) or a positive<br />
clinical experience. A number of variables, including student, faculty, resource staff, and clinical<br />
environment factors interact to contribute to student learning outcomes. In order to ensure that<br />
students experience a clinical education environment that supports learning, the factors<br />
impacting learning in that context must be identified and evaluated. Standard III of the<br />
American Association of Colleges of Nursing’s Standards for Accreditation of Baccalaureate and<br />
Graduate Nursing Programs (1997) states that the environment for teaching, learning, and<br />
evaluation of student performance should foster achievement of expected outcomes and that<br />
there should be clear congruence between the teaching-learning experience and expected results.<br />
One means to demonstrate the effectiveness of the teaching-learning experience is to<br />
evaluate the clinical educational environment through the students’ eyes—obtaining students’<br />
perceptions about the adequacy of the clinical education environments that they have<br />
experienced. Nursing instructors may informally request individual student feedback about the<br />
quality of the clinical learning environment, or may include one or two questions about the<br />
adequacy of this environment in the overall course evaluation tool. However, there is currently<br />
no comprehensive instrument that has been demonstrated to accurately measure student<br />
perceptions of their clinical education environment. The purpose of this investigation was to<br />
develop and test the accuracy and efficiency of such an instrument: the Student Evaluation of<br />
Clinical Education Environment (SECEE). Development of such an instrument is one of the first<br />
steps in the empirical process of assessing and improving the student clinical learning<br />
environment and positively impacting student learning in this environment.<br />
Definition of Terms<br />
Learning Environment: All the forces and conditions within a setting that impact student<br />
learning (Dunn & Burnett, 1995). The forces include but are not limited to the physical setting,<br />
instructional methods, interactions with others, and opportunities for growth.<br />
Applied Learning Environment: A setting in which students apply information and skills to an<br />
authentic situation, with exposure to all factors influencing performance in that setting.<br />
Examples of applied learning environments include laboratory settings, simulations, and practice<br />
settings.<br />
Traditional Classroom Environment: A setting composed of a traditional physical classroom,<br />
a general absence of application tools, and traditional instructor-student relationships (one<br />
instructor for a class of 20 or more students).<br />
Clinical Learning Environment: For the purpose of this investigation, the clinical learning<br />
environment includes any settings in which clinical nursing is practiced and nursing students<br />
have applied learning experiences.<br />
Preceptor: A staff nurse in the clinical setting who is generally paired with a nursing student for<br />
the duration of the student’s clinical rotation. The preceptor instructs and coaches the student,<br />
and may participate in student evaluation.<br />
Resource Nurse: A staff nurse in the clinical setting who is generally paired with a particular<br />
nursing student for only one or a few clinical experiences. The resource nurse may instruct and<br />
coach the student, but generally the nursing faculty member is considered the primary instructor.<br />
Chapter 2<br />
Review of the Literature<br />
The perceived significance and contribution of the learning environment to student<br />
learning is addressed by many learning theories and research investigations, although the<br />
majority focus on the traditional classroom environment (Fraser, 1991; McGraw et al., 1996;<br />
Resnick, 1987). Applied cognition and experiential learning theories, however, do focus on the<br />
applied environment setting and also highlight the importance of the learning environment’s<br />
influence on student learning. The behaviorological position also addresses the significance of<br />
the educational environment in relation to student learning. Although this review of literature<br />
focuses primarily on the applied cognition perspective, several aspects of the behaviorological<br />
position are also presented, as the author feels that a synthesis of the two positions most<br />
adequately describes the influence of the applied learning environment on student learning.<br />
Theoretical Perspectives on the Learning Environment<br />
According to the cognitive apprenticeship or situated cognition theory, learning and<br />
cognition are fundamentally situated (Brown, Collins & Duguid, 1989). They are a product of<br />
the learning activity, the context, and the culture in which they are developed and used (Brown,<br />
Collins & Duguid, 1989; Goodenow, 1992). Brown, Collins and Duguid suggested considering<br />
conceptual knowledge as a set of tools that can only be understood through use. Learning how to<br />
use the knowledge tools requires more than memorizing a set of rules—students must also<br />
understand the context for use and must practice using the tools (Greeno, Collins, & Resnick,<br />
1996). Behaviorologists also hold this perspective, stating that students must not only read and<br />
hear about a topic, but must contact the situation directly and receive feedback related to the<br />
behavior (Vargas, 1996). Knowing by acquaintance (through experience) generally results in<br />
more efficient behavior (Skinner, 1989). For instance, nursing student proficiency in performing<br />
an intravenous insertion generally does not occur as a result of only being told or shown the<br />
anatomical location of veins and the procedure for inserting an intravenous line. Rather, the<br />
student must practice the procedure, preferably in a simulated application setting first and then in<br />
a real setting. By doing the task, the student is able to contact natural reinforcers of the drag of<br />
the needle on the skin, the “pop” of the needle into the vein, and potentially the feedback of the<br />
patient undergoing the procedure.<br />
A cognitive apprenticeship can be described as a process by which a learner gradually<br />
acquires expertise through interaction with an expert who models appropriate behavior and<br />
provides coaching to the learner (Brown, Collins & Duguid, 1989; Slavin, 1997). Students learn<br />
by observation, imitation, and practice through both repetition and continued engagement in the<br />
learning process. The behaviorological perspective also supports modeling as an effective<br />
instructional method, but only when the student also performs the behavior and receives<br />
feedback (Skinner, 1989).<br />
Apprenticeship methods enculturate students into authentic practices through activities<br />
and social interactions, in a way similar to craft apprenticeships. The expert, who is either an<br />
adult or a more experienced peer, socializes the learner to the norms and expected behaviors for<br />
the particular setting. The expert instructs initially by scaffolding (coaching and cueing) students<br />
as they perform a task and then gradually fading (withdrawing) the supports (Slavin, 1997;<br />
Greeno, Collins, & Resnick, 1996). Two instructional methods used by behaviorologists that<br />
share characteristics with scaffolding are the shaping of responses by reinforcement of<br />
successive approximations to the desired behaviors, and prompting correct responses with fading<br />
of prompts as responses are strengthened. Prompting is most effective when the student is<br />
“primed” or cued just enough to go on, with prompts removed as soon as possible, while<br />
maintaining the desired behavior (Skinner, 1989). These instructional methods help students<br />
succeed by reducing errors during learning and increasing opportunities for obtaining the natural<br />
reinforcement of success.<br />
An apprenticeship begins with tasks embedded in a familiar activity, and moves to more<br />
complex tasks and concept applications, according to a progressive and sequenced curriculum<br />
(Resnick, 1987). Cognitive apprenticeships allow students to observe the whole of a trade or<br />
profession and to begin performing small pieces of it. The ultimate goal of situated cognition<br />
instruction is for the student to behave as a practitioner in the particular field. The paradigm of<br />
behaviorology also holds that sequencing of instruction is important. Students must first know<br />
the prerequisite skills for development of a new skill or behavior, and then instruction must be<br />
arranged in sequences of related successive steps (Greeno, Collins & Resnick, 1996; Skinner,<br />
1989). For instance, in the applied nursing education setting, students progress from observation<br />
of experts performing skills and fulfilling professional roles, to practicing particular skills, and<br />
finally, to fulfilling almost all professional nursing roles.<br />
Cognitive apprenticeship concepts follow Vygotsky’s emphasis on the social nature of<br />
learning, the zone of proximal development (the span between what a learner is able to do<br />
independently and what the learner is able to do with assistance from an expert), and scaffolding<br />
(Slavin, 1997). Instructors assign complex, realistic tasks and provide enough support/prompts<br />
to allow the students to complete the tasks or solve identified problems that they would be unable<br />
to complete alone at that time. Higher psychological processes are acquired through interaction<br />
with others and participation in a social or professional community (Goodenow, 1992). Learning<br />
is situated in both physical and social contexts, with the social contexts being constructed by the<br />
teachers and students through interactions and around lessons. According to Marshall (1990), the<br />
goals of the “learning centered” environment relate to the process of learning, with a focus on<br />
determining how students learn and on socializing students to the role of learner rather than of<br />
producer of a product.<br />
Experiential learning may be considered synonymous with situated cognition.<br />
Carver (1996) described experiential learning as education using a combination of the senses,<br />
emotions, physical conditions, and cognition that integrates conscious application of student<br />
experiences into the curriculum. She identified four pedagogical principles of experiential<br />
learning: (a) authenticity—instruction that is relevant to student lives, and that includes naturally<br />
occurring rewards; (b) active learning—experiences that are physically and/or mentally<br />
engaging; (c) drawing on student experience and encouraging students to reflect on their<br />
experiences; and (d) connecting experiences to future learning opportunities. Teachers facilitate<br />
processes in which students participate in the construction of knowledge and which lead to<br />
student achievement of the sub-goals of learning: personal agency, belonging and competence.<br />
Carver’s instructional principles may or may not be evident in an experiential setting, as they<br />
depend primarily on instructor and mentor/expert instructional focus. Addressing the presence<br />
or absence of the principles in a particular instructional setting may be beneficial in determining<br />
the quality of the applied learning environment.<br />
Carver (1996) also developed a general model for discussion and development of an<br />
experiential learning environment. Student experience, including both process and product<br />
factors, is at the center of a context of the learning environment, which is characterized by the<br />
nature of the student experience, the program characteristics, and the general characteristics of<br />
the environment. This model was developed as a way to look at the whole experiential learning<br />
experience, and to compare and contrast a variety of experiential student learning opportunities<br />
(Carver, 1996). Although no application of this model to a specific experiential or applied<br />
learning environment has been published to date, it would appear to be an adequate framework to<br />
describe and compare clinical education environments in the area of nursing.<br />
Warren (1993) identified the role of the teacher in the experiential learning process as<br />
creating a safe learning atmosphere for students and encouraging them to recognize the<br />
opportunities for growth in the environment. She likened the teacher in an experiential setting to<br />
a midwife, stating that she assists in the birth of new ideas by guiding students in their learning<br />
and guarding the intensity and safety of their engagement in the environment. The teacher<br />
manages the logistics of the experience to allow students to concentrate fully on the learning that<br />
is possible in the given environment. She serves as nurturer, establishes relationships with<br />
students, assists students in acknowledging commonalities and differences, creates student-<br />
centered learning experiences, and assists with closure of the experience, while remaining a<br />
learner/participant in the experience.<br />
Both the situated cognition theory and the behaviorological paradigm obviously<br />
acknowledge the impact of the learning environment on student learning. These theories are<br />
particularly appropriate in an applied educational setting, although the behaviorological<br />
perspective does not differentiate between traditional and applied settings, because it views<br />
learning as an active and applied process, regardless of the setting. Although generally viewed<br />
as opposing theories, the behaviorological and applied cognition perspectives would describe the<br />
ideal applied learning environment somewhat similarly (Berliner, 1983; Shuell, 1986). Both<br />
would consider the following as important characteristics of the ideal learning environment.<br />
1. A physical setting that approximates the ultimate practice or life setting as closely as<br />
possible.<br />
2. Adequate and accessible tools/equipment for skill development and practice.<br />
3. Experts who are able to model appropriate skills and behaviors and to provide<br />
constructive feedback to students regarding skill and knowledge development.<br />
4. Instructor/expert scaffolding (shaping, prompting) of new skills and knowledge concepts.<br />
5. Sequencing of learning from observation, to practice of “pieces” of the skill, to practice<br />
of the whole skill or profession.<br />
6. Adequate opportunities for practice.<br />
7. Connection between current experiences and future practice (behavior opportunities).<br />
Although constructive feedback for student behavior would be assumed to be present in<br />
the learning environment, the applied cognition literature does not focus on this factor of the<br />
environment. The behaviorological position holds that feedback, both naturally occurring and<br />
instructor provided, is responsible for the learning or lack of learning that occurs (Vargas, 1996).<br />
According to Sidman (1991), most contingencies in education are punitive—often resulting in<br />
the transformation of eager young learners to older, less willing learners. Punishment is even<br />
less effective when it is based on an absence of behavior such as “not doing” homework<br />
(Skinner, 1989). When used effectively, positive reinforcement is potentially the most powerful<br />
teaching tool available. Tangible rewards can be used initially to reinforce behavior, but they<br />
must eventually be eliminated to establish learning as its own reward. Differential reinforcement<br />
of student responses brings responding under appropriate stimulus control and results in student<br />
behaviors of discrimination and generalization, which define the “understanding of concepts”<br />
(Thomas, 1991).<br />
Theoretical Perspectives on Evaluation of the Learning Environment<br />
One of the main differences between these two frameworks lies in their focus for the<br />
study of learning and of the learning environment. The situated cognition model focuses on the<br />
process of learning as well as learning outcomes, whereas the behaviorological model is<br />
concerned with the learning process primarily as it impacts the behavioral product of learning.<br />
The situated cognition focus on process would support investigation of student thoughts and<br />
feelings about the learning environment and about “coming to know” a particular skill or<br />
concept. Student perceptions of the environment are considered by some to have even more<br />
influence on student learning than does the “actual” environment, because the perceptions<br />
influence the way a student approaches a task, and thus, ultimately determine the quality of<br />
learning outcomes (Cust, 1996). If instruction is to be student-centered (Marshall, 1990), it is<br />
important to determine student perceptions of the environment and of how it might be<br />
improved to benefit learning, since students may interpret the same environment differently. The<br />
applied cognition perspective also focuses on the quality of relationship and communication<br />
between students and peers, practitioners and instructors (Moos, 1979; Slavin, 1997).<br />
Behaviorologists, however, would not be as concerned about student perceptions of the<br />
learning environment as they would about the student behaviors that indicate a particular<br />
"attitude", because verbal statements about satisfaction are viewed as often unreliable and<br />
inconsistent (Skinner, 1953). A behaviorologist would be more likely to observe the<br />
contingencies present in the learning environment and the student’s behavior (or behavior<br />
change) both during and after instruction (C. D. Cheney, 1991; D. Cheney, 1991).<br />
Combining observation of the learning environment and of students’ behaviors as they<br />
participate in it with data about student perceptions of the environment may<br />
result in the most comprehensive assessment of environmental impact on student learning.<br />
Observation of several students in each learning environment would be necessary to gather<br />
reliable observational data in each of the settings. Data collection through observation would be<br />
possible in educational programs that use limited numbers of applied learning settings.<br />
However, most nursing education programs use a large number of applied learning settings<br />
(clinical sites) in order to expose students to a variety of practice environments. If the research<br />
purpose is to evaluate the adequacy of all clinical learning environments used in a nursing<br />
program, the costs of an observational study would most likely prohibit this type of data<br />
collection. In contrast, student perceptions of multiple clinical environments may be obtained in<br />
a more efficient manner through a survey instrument. Therefore, the focus of this investigation<br />
is on student perceptions of the clinical learning environment.<br />
Empirical Investigations of the Applied Learning Environment in Nursing<br />
Relatively few research studies have investigated the impact of the applied learning<br />
environment on student learning, as opposed to the many investigations that have been<br />
conducted and tools that have been developed to measure the more traditional classroom<br />
environment. More specifically, few authors have either proposed factors that may impact<br />
learning in the clinical environment or conducted investigations of the clinical nursing<br />
education environment (Tanner, 1994). Of the few published studies focusing on the clinical<br />
education environment, only a handful focus on evaluation of the environment, and the majority<br />
of these studies have been descriptive in nature.<br />
Reed and Price (1991) suggested criteria for an audit (assessment) of the clinical nursing<br />
learning environment, including social climate, nursing climate, physical environment,<br />
relationship between classroom and practice settings, and learning outcomes. However, they did<br />
not identify specific criteria within these categories that should be addressed in such an<br />
assessment. Clinical agency evaluation criteria were also suggested by Reilly and Oermann<br />
(1992), who identified flexibility of the learning environment,<br />
appropriateness of client population and adequate client numbers, range of learning<br />
opportunities, current care practices at the agency, availability of records, student orientation,<br />
and resource/space availability as important in determining clinical learning environment<br />
adequacy.<br />
Bevil and Gross (1981) identified several factors to be considered in selecting appropriate<br />
clinical settings for nursing student learning experiences, and developed a tool for faculty use in<br />
evaluating clinical settings. The forced-choice instrument addressed the issues of (a) availability<br />
of staff to facilitate learning, (b) student opportunity to assess patients and give direct care, (c)<br />
availability of records to students for review and documentation, and (d) an atmosphere<br />
conducive to growth. No tool validity or reliability data were included in the publication, and no<br />
data collection was reported; only a plan for administering the tool and interpreting the results<br />
was presented.<br />
In contrast, Windsor (1987) used a student-centered approach in her study of the clinical<br />
learning environment, conducting focused interviews to better understand the clinical experience<br />
from the student perspective. Her detailed interviews of nine senior nursing students indicated<br />
that many aspects of the clinical environment affected the quality of the student experience. A<br />
variety of clinical assignments, opportunity for professional socialization, and staff approval<br />
contributed to improved learning, while staff criticism contributed to a “bad” clinical day.<br />
Windsor’s findings indicated that relationships of students with instructors, staff nurses, other<br />
students, and patients were all important in the clinical experience.<br />
Based on Windsor’s research, Peirce (1991) conducted a qualitative study investigating<br />
preceptored students’ perceptions of their clinical experience. Twenty-nine first level and 15<br />
second level undergraduate nursing students participated in the study. The open-ended<br />
questionnaire data were analyzed for themes, with the variables identified as influencing the<br />
clinical experience grouped into the categories of school/faculty factors, clinical site<br />
organizational and personnel factors, and student factors. Within the clinical site factors,<br />
variables mentioned by students included: assignments, staff feedback, preceptor’s availability<br />
and interest in students, preceptor nursing skills, opportunity to perform skills, adequacy of<br />
orientation, quality of nursing care, student participation in unit life, and staff willingness to<br />
teach and to serve as role models. Faculty factors identified included direction and guidance,<br />
availability to students, and follow-up to clinical experiences. Peirce stated that an unexpected<br />
finding was the relative importance of the staff nurses to the quality of the student clinical<br />
experience. This finding is supported by the research of Wilson (1994), who noted that students<br />
perceived staff nurses as linked with the real world of clinical practice as opposed to the world of<br />
theory to which the instructor was perceived to be linked.<br />
Perese (1996) identified additional factors related to student perception of clinical<br />
experiences in a descriptive study of 38 baccalaureate nursing students at the conclusion of a<br />
psychiatric clinical rotation. The majority of factors identified as influencing the student clinical<br />
experience involved nursing staff, including: role modeling, attitude, professionalism, staff<br />
support of nursing students, and staff workload.<br />
A review of the literature revealed only three recent studies aimed at developing and/or<br />
testing an instrument to evaluate the clinical education environment. Robins, Gruppen,<br />
Alexander, Fantone, and Davis (1996) developed an instrument to measure the medical school<br />
learning environment as part of an effort to improve student perceptions of their medical school<br />
experience. Although the authors did not specify whether their intent was to measure the<br />
classroom or the clinical environment, the reader might assume that the instrument was intended<br />
to measure the student learning environment in general, as no questions were addressed<br />
specifically to either the classroom or the clinical education component. The 435 completed<br />
instruments represented three years of data collection from medical students attending a large<br />
Midwest university. Student respondents rated the overall learning environment as well as the<br />
comfort of the environment for both men and women and for all races and ethnicities. The<br />
quality and amount of feedback provided to students, faculty involvement and interest in student<br />
education, and the program’s ability to promote critical thinking were also included in the<br />
instrument. No information as to the format of the questionnaire, student response options, or<br />
the reliability of the instrument was provided. Robins et al. (1996) did state that men and<br />
Caucasian students perceived some aspects of the learning environment more favorably, and that<br />
all medical students, regardless of gender or ethnicity, valued a strong academic program and<br />
faculty who are respectful and supportive. Promotion of critical thinking and timely feedback<br />
were also identified by students as important. Regression analysis identified the strongest<br />
predictor of satisfaction as student perception that faculty placed a high priority on student<br />
education.<br />
The second instrument-based clinical education evaluation study, conducted in Australia,<br />
focused on development of an objective measure of nursing students’ clinical learning<br />
environment in a psychiatric rotation. Farrell and Coombes (1994) developed the Student Nurse<br />
Appraisal of Placement (SNAP) inventory with input from nursing faculty, agency staff, and<br />
students. The inventory consisted of nine semantic differential items as well as an area for<br />
student comments and an open-ended question about student experiences. Items addressed<br />
physical resources, learning opportunities, availability of staff, opportunities to practice<br />
interpersonal and technical skills, and overall student perceptions. Analysis was conducted<br />
based on right and left deviation from the analogue center. No support for validity or reliability<br />
of the instrument was presented in the article, and the size of the sample used in this<br />
investigation was not identified. However, Farrell and Coombes did state that data were used in<br />
curriculum review and decisions related to clinical site use. Data analysis revealed evaluation<br />
differences between clinical sites, and the authors stated that the items were in keeping with<br />
professionally agreed-upon criteria for adequate student learning environments.<br />
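Scoring by deviation from the analogue center, as in Farrell and Coombes’ analysis, can be sketched as follows. A 7-point response format with a center of 4 is assumed purely for illustration; the SNAP’s actual response format is not specified in the article.<br />

```python
# A 7-point format with center 4 is an assumption for illustration;
# the SNAP's actual format is not described in this review.
CENTER = 4

def deviations(responses):
    """Signed deviation of each item response from the scale center:
    negative values fall left of center, positive values right."""
    return [r - CENTER for r in responses]

# One hypothetical student's ratings of nine items (invented data):
ratings = [5, 3, 6, 4, 2, 7, 4, 5, 3]
devs = deviations(ratings)
mean_dev = sum(devs) / len(devs)
```

Aggregating such signed deviations across students would reveal whether perceptions of a clinical site lean toward the favorable or unfavorable pole of each item.<br />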
The final instrument-based clinical evaluation study was based on the authors’ perception<br />
that there was a lack of well-tested instruments to measure clinical learning environment. Dunn<br />
and Burnett (1995) developed the Clinical Learning Environment Scale to measure the applied<br />
component of nursing education. The instrument was based on Orten’s Ward Learning Climate<br />
Survey (as cited by Dunn & Burnett, 1995) and was intended for use in the Australian nursing<br />
system. The authors extracted and revised 55 items from Orten’s survey and administered the items<br />
to 423 nursing students and faculty at an Australian university. Exploratory factor analysis<br />
combined with a literature review led to further revision of the instrument. Confirmatory factor<br />
analysis followed, with further revision of the factors. Their final instrument included 23 items with<br />
five sub-scales measuring student-staff relationships, nurse manager commitment, patient<br />
relationships, interpersonal relationships, and student satisfaction. Factors within the sub-scales<br />
included: interpersonal relations, attitudes, physical structure, patient factors, staff workload,<br />
student allocation, role clarity, staff supervision of students, and ward atmosphere. The authors<br />
claimed construct validity of the instrument as determined by the confirmatory factor analysis.<br />
Reported reliability alpha coefficients for the final four factors were .63 to .85. Although several<br />
items in Dunn and Burnett’s tool might be appropriate for an instrument designed to assess the<br />
clinical learning environment in the United States, others, such as (a) the amount of ritual on the<br />
ward, (b) whether this was a happy ward, and (c) whether this was a good ward for learning<br />
seemed either too general or unrelated to the current clinical nursing environment in the United<br />
States. It would be difficult to use this instrument as a basis for any type of action aimed at<br />
improving the clinical learning environment, as items containing broad statements about the<br />
environment provide little information about what specific factors within the environment make<br />
it supportive or not supportive of student learning.<br />
It was apparent after review of the literature that although the clinical education<br />
environment was acknowledged to be a critical component of nursing education, there was no<br />
comprehensive tool that concisely measured student perceptions of the clinical learning<br />
environment in nursing. Therefore, the literature search was broadened to include instruments<br />
that have been developed to measure the learning environment in the more traditional classroom<br />
setting.<br />
Empirical Investigations of the Traditional Classroom Learning Environment<br />
A number of instruments have been developed to measure the traditional classroom<br />
learning environment, although the majority have focused on elementary and secondary school<br />
settings. In the last 25 years, most published instruments addressing the<br />
traditional classroom environment have been quantitative assessments of student perceptions of<br />
the environment. One recently published exception was designed to evaluate the<br />
learning environment from an observer’s perspective: McNamara and Jolly (1994)<br />
developed the Classroom Situation Checklist as part of a more extensive assessment tool to<br />
investigate inappropriate student behavior in elementary school classrooms. The checklist was<br />
developed in response to the authors’ observation that classroom environment factors often<br />
contribute to student behavior problems. Three categories of factors compose the checklist:<br />
1. Organizational factors: student entry/exit, seating and social grouping,<br />
distractions/interruptions, equipment, lesson activities and transitions, and material<br />
distribution,<br />
2. Teaching interaction factors: classroom rules, personal comments, use of questions<br />
and rhetorical questions, reprimanding, and supervision,<br />
3. Pupil/curriculum factors: pupils unclear about tasks, pupils uninterested, unfairness,<br />
pupils unable to complete an assigned task, pupils waiting.<br />
An outside observer completes the checklist after one or more classroom observations.<br />
The observer provides evidence (description) of the effects of the above factors on the classroom<br />
environment and the possible impact on a particular student’s behavior. This information as well<br />
as suggestions for managing the environment is then provided to the instructor. No information<br />
related to tool validity or reliability or to the effectiveness of the Classroom Situation Checklist<br />
in identification and modification of environmental factors impacting student behavior was<br />
presented by authors.<br />
One of the first quantitative inventories developed to measure student perceptions of the<br />
classroom learning environment, the Learning Environment Inventory (LEI), was initially<br />
developed in the late 1960s as part of Harvard Project Physics (Fraser, Anderson & Walberg,<br />
1982). The final form of the instrument contains 105 items divided equally among 15 scales<br />
(cohesiveness, diversity, formality, speed, material environment, friction, goal direction,<br />
favoritism, difficulty, apathy, democracy, cliqueness, satisfaction, disorganization, and<br />
competitiveness). Secondary level students report their level of agreement with the stated items<br />
according to a four-point Likert scale. Statistical testing of the instrument was reported by<br />
Fraser, Anderson and Walberg (1982). Reported scale consistency (alpha reliability coefficients)<br />
for inventory scales varied from .56 - .85. Within class correlations per scale were .31 - .92.<br />
Test-retest reliability ranged from .51 - .73, and correlations among the 15 scales varied from<br />
.08 - .40.<br />
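The alpha reliability coefficients reported for these and the following instruments can be computed directly from raw item responses using the standard Cronbach’s alpha formula. The sketch below uses invented responses for illustration, not LEI data.<br />

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one scale. `items` holds one list of
    responses per item, aligned across the same respondents."""
    k = len(items)            # number of items in the scale
    n = len(items[0])         # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Five hypothetical students answering a three-item scale on a
# four-point agreement format (invented data):
scale = [
    [4, 3, 4, 2, 3],
    [3, 3, 4, 1, 3],
    [4, 2, 4, 2, 4],
]
alpha = cronbach_alpha(scale)
```

Higher values indicate that students respond consistently across the items within a scale, which is the sense in which scale consistency is reported throughout this review.<br />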
Fraser and Fisher (1982) developed a modified version of the LEI, called the My Class<br />
Inventory (MCI), for use with younger children, 8-12 years old. This instrument contains five<br />
scales (cohesiveness, friction, difficulty, satisfaction, and competitiveness), rather than the<br />
original fifteen. The MCI consists of 38 questions with yes/no response options. The inventory<br />
was initially tested with 2305 seventh grade science students in Tasmania, Australia. The scale<br />
consistency (Cronbach’s alpha) scores ranged from .73 - .88. Correlation of each scale with all<br />
others ranged from .13 - .30. ANOVA testing demonstrated that each scale was able to<br />
differentiate between classrooms (p< .01). An eta squared value of .18 - .31 indicated that<br />
between 18 and 31% of variance in scale scores was attributable to class membership (Fraser &<br />
Fisher, 1983).<br />
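The eta squared statistic used here is the standard proportion of total score variance that lies between classes in a one-way ANOVA. A sketch with invented classroom data:<br />

```python
def eta_squared(groups):
    """Proportion of total variance attributable to group (class)
    membership: SS_between / SS_total from a one-way ANOVA."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    return ss_between / ss_total

# Hypothetical scale scores from three classrooms (invented data):
classes = [[3, 4, 4, 5], [2, 2, 3, 3], [4, 5, 5, 4]]
eta_sq = eta_squared(classes)
```

A value of .18, for example, means that 18% of the variation in students’ scale scores is associated with which class they belong to rather than with differences among students within classes.<br />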
The MCI instrument has been used in several studies as a dependent measure for<br />
classroom improvement initiatives, although the majority of studies have been conducted in<br />
other countries. Studies by Fraser and O’Brien (1985) and Diamantes (1994) demonstrated that<br />
teacher interventions to improve the learning environment based on analysis of pre-intervention<br />
MCI data resulted in significant improvement in student perceptions of the targeted dimensions<br />
(scales) of the learning environment.<br />
Fraser also developed an abbreviated version of the MCI, consisting of 25 questions<br />
divided into five scales. Reliability testing with 758 third grade Australian students resulted in<br />
findings similar to the long form, with the instrument being able to differentiate between<br />
classrooms (p< .001), and the scales accounting for significant variance in student cognitive<br />
outcomes (post-test scores), when controlling for prior knowledge and general ability (Fisher &<br />
Fraser, 1981).<br />
Another inventory designed to measure student perceptions of the classroom learning<br />
environment, the Classroom Environment Scale (CES), was developed by Trickett and Moos<br />
(1973). It was the result of literature review, interviews with students and faculty, and classroom<br />
observations. The final version of the CES consists of 90 items divided equally among nine scales<br />
(involvement, affiliation, support, task orientation, competition, order/organization, rule clarity,<br />
teacher control, and innovation). Student responses are limited to true and false. Initial<br />
instrument testing with American junior high and senior high school students resulted in scale<br />
consistency (alpha reliability) scores of .67 - .85, with correlation of individual scales with all<br />
other scales ranging from .10 - .31 (Trickett & Moos, 1973). The amount of variance in CES<br />
scale scores attributed to class membership varied from 21 - 48%. Test-retest stability over a<br />
two-week interval varied from .91 to .98. Fraser and Fisher (1983) repeated reliability testing of<br />
the instrument, finding similar results. They also developed and tested a preferred form of the<br />
instrument, to measure the type of learning environments that students consider ideal. The<br />
instrument was tested with junior high Australian students and faculty. Preferred scale reliability<br />
alphas ranged from .60 - .86. Correlation of individual scales with all other scales varied from<br />
.16 - .43 for the preferred form. Each scale was able to differentiate between classrooms (p<<br />
.001) with the amount of variance in the CES scales attributable to class membership varying<br />
from 18 - 43%.<br />
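The test-retest stability figures cited for these instruments are correlations between scores from two administrations of the same form; the standard Pearson correlation is sketched below with invented scores.<br />

```python
def pearson_r(x, y):
    """Pearson correlation between paired scores, such as scale
    scores from a test and a retest of the same students."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented scale scores for six students at test and retest:
test_scores = [10, 14, 9, 16, 12, 11]
retest_scores = [11, 13, 9, 15, 13, 10]
stability = pearson_r(test_scores, retest_scores)
```

Values near 1 indicate that students who scored high on the first administration also scored high on the second, i.e., that the scale yields stable scores over the retest interval.<br />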
Fraser (Fraser, 1983; Fraser & Fisher, 1983) developed yet another instrument, the<br />
Individualized Classroom Environment Questionnaire (ICEQ), for use in more individualized<br />
classroom settings. The ICEQ was based on review of literature related to classroom learning<br />
characteristics, interviews with teachers and students, and the identified need for a time and cost-<br />
efficient instrument to measure the learning environment of secondary students. It consists of 50<br />
questions divided among five scales (personalization, participation, independence, investigation,<br />
and differentiation). Student response options range from “almost never” to “almost always”.<br />
Two forms of the instrument are available: one in which students or teachers respond to the<br />
actual classroom environment, and one in which students and/or teachers respond according to<br />
their preferred or ideal classroom environment. Reliability and validity testing was conducted<br />
with seventh through ninth grade students in 116 Australian classes (Fraser & Fisher, 1983).<br />
Scale internal consistency values for each scale ranged from .77 - .91 for the class actual form,<br />
and from .75 - .92 for the class preferred form. Correlations between each individual scale of the<br />
instrument and the other four scales were very similar across the two inventory forms, being .16 -<br />
.32 for the student actual form and .17 - .35 for the student preferred form. Test-retest reliability<br />
was reported to vary from .67 - .83 for a group of 105 students. Each scale of the instrument was<br />
determined to differentiate between classrooms (by means of ANOVA results), with 20 - 43% of<br />
the variance in scale scores being attributable to class membership (Fraser, 1983).<br />
Fraser (1983) investigated application of the ICEQ in a classroom study aimed at<br />
improving the student learning environment in a private secondary school class of 31 seventh<br />
grade boys. Students completed both actual and preferred forms of the ICEQ instrument. The<br />
teacher was provided the student means for each scale and developed interventions directed at<br />
improving perceived personalization and participation scale scores. After one month of teacher<br />
interventions, reassessment with the actual ICEQ inventory demonstrated significant change in<br />
student perceptions of the environment related to the two scales targeted for intervention by the<br />
teacher.<br />
Fraser and Fisher (1982) also demonstrated a significant association between student<br />
cognitive outcomes and their perception of the classroom psychosocial environment, using the<br />
ICEQ and CES instruments. The magnitude of relationships was larger for the class than for the<br />
individual student.<br />
Fraser has also developed short forms of the ICEQ and CES, due to the perception that<br />
time and cost constraints may prevent teachers from using the longer forms to assess the learning<br />
environment and to measure improvements in the environment post-intervention (Fraser &<br />
Fisher, 1986). The goal for the development of the shortened tools was to increase efficiency of<br />
testing and scoring, while maintaining an accurate representation of mean class perceptions. The<br />
shortened inventories were developed as a result of several item analyses of the long form data<br />
from administration of these two tools, as well as from an attempt to achieve a balance of<br />
positively and negatively worded items, with a goal of maintaining face validity of the instrument<br />
(Fraser, 1982). The short version of the ICEQ consists of five scales with five questions each.<br />
The shortened CES contains 24 questions divided evenly among six scales. The correlation<br />
between the long and short forms for the ICEQ was reported to vary from .84 to .97. Reported<br />
alpha reliability for the short form scales was .69 - .85. The correlation between long and short<br />
forms of the CES was .78 - .92 and the alpha reliability for the CES short form scales ranged<br />
from .78 - .92.<br />
In addition, Fraser and his colleagues reported case studies using the shortened versions<br />
of both the CES and MCI to assess and measure improvements in the learning environment of<br />
Australian elementary school students (Fraser & O’Brien, 1985; Fraser & Fisher, 1986).<br />
Teachers were informed of student perceptions of the environment and targeted specific areas for<br />
intervention, based on inventory results. Post-intervention scale scores for each inventory were<br />
significantly different from initial scores, particularly in the areas targeted for intervention.<br />
Yet another instrument was developed by Fraser, Treagust, and Dennis (1986), based on<br />
an identified need for instruments to measure the learning environment in higher education<br />
classrooms. The College and University Classroom Environment Inventory (CUCEI) was<br />
designed to be used for smaller lecture or seminar-type university classes. The inventory<br />
contains seven scales (personalization, involvement, cohesiveness, satisfaction, task orientation,<br />
innovation, and individualization), with seven questions in each scale. Students respond to the<br />
instrument according to a four-level agree/disagree Likert scale; half of the items are<br />
reverse-scored. Testing with 307 Australian postgraduate and undergraduate students and 65<br />
U.S. postgraduate and undergraduate students resulted in class reliability coefficient alphas for<br />
the scales of .81 - .96, and correlation of individual scales with all other scales of .36 - .56. Each scale<br />
significantly differentiated between classrooms (p< .001), with 32 - 42% of the variance in scale<br />
scores being due to class membership (Fraser, Treagust, & Dennis, 1986).<br />
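Reverse-scoring, applied to half of the CUCEI items, flips a negatively worded item so that a higher score consistently indicates a more favorable perception. The standard calculation for a four-level scale is sketched below; the item responses are invented for illustration.<br />

```python
def score_item(raw, reverse=False, low=1, high=4):
    """Score one Likert response; reversed (negatively worded)
    items are flipped so that a higher score always reflects a
    more favorable perception."""
    return (low + high) - raw if reverse else raw

# Hypothetical three-item scale; the second item is negatively
# worded and therefore reverse-scored (invented responses):
responses = [4, 1, 3]
reversed_flags = [False, True, False]
scale_score = sum(
    score_item(r, rev) for r, rev in zip(responses, reversed_flags)
)
```

Balancing positively and negatively worded items in this way helps guard against acquiescent responding without complicating the interpretation of scale totals.<br />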
Fraser, Williamson, and Tobin (1987) used the CUCEI instrument to compare<br />
environmental perceptions of Australian students enrolled in an alternative high school geared<br />
toward the adult learner with the perceptions of students enrolled in more traditional education<br />
settings (evening technical school, traditional high school for adolescents only, and traditional<br />
setting including both older learners and adolescents). They found differences between school<br />
types on all seven scales of the CUCEI instrument. Overall, the evening technical school was<br />
rated most favorably, followed by the alternative high school, the conventional adolescent high<br />
school, and the high school integrating adolescents with adult learners.<br />
Recently, Fraser, Giddings, and McRobbie (1991) have developed a new questionnaire,<br />
called the Science Laboratory Environment Inventory (SLEI), to measure the unique<br />
environment of science laboratory classes in secondary or tertiary level classes. The<br />
questionnaire has five scales (student cohesiveness, open-endedness, integration, rule clarity and<br />
material management). Students respond to questions on a five-point frequency Likert scale.<br />
The instrument was field tested simultaneously in six countries with over 6,000 students (Fraser,<br />
1991). Scale alpha reliabilities by class were reported to vary from .75 - .83, with the mean<br />
correlation of individual scales with all other scales ranging from .07 - .37. Repeat investigation<br />
resulted in somewhat higher correlations (.18 - .57) between individual scales and all remaining<br />
scales (Henderson, Fisher, & Fraser, 1995). The researchers have also developed a “personal”<br />
form of the SLEI instrument, to obtain each student’s perceptions of his or her own role within the<br />
classroom rather than perceptions of the class as a whole.<br />
Due to an identified need to assess the degree to which a classroom’s environment is<br />
consistent with a more constructivist philosophy, Taylor and Fraser (1991) developed the<br />
Constructivist Learning Environment Survey (CLES). The inventory items are divided among<br />
the four scales of autonomy, prior knowledge, negotiation, and student-centeredness. Reported<br />
scale reliability scores for the actual version of this instrument ranged from .61 - .79.<br />
Idiris and Fraser slightly modified the original CLES instrument to measure the perceived<br />
learning environment of 1175 agricultural science students in 20 different Nigerian secondary<br />
schools. They added the Investigation and Differentiation scales contained in the ICEQ to the<br />
original CLES scales, and dropped the Prior Knowledge scale. Within-scale reliability scores<br />
varied from .71 to .96 with the school mean as the unit of analysis and from .55 - .82 with the<br />
individual as the unit of analysis. Correlation of each individual scale with all other scales varied<br />
from .33 - .49 for group data, somewhat higher than scale correlations reported for the MCI,<br />
CES, ICEQ, and SLEI, but comparable to those reported for the CUCEI. Between scale<br />
correlations using individual data were slightly lower at .24 - .39. The actual form was found to<br />
differentiate significantly (p< .001) between perceptions of students in different schools, with 14<br />
- 45% of the variance in scale scores accounted for by school membership (Idiris & Fraser,<br />
1994).<br />
Very recently, Cannon (1997) administered both the preferred and the actual forms of the<br />
original CLES instrument to 108 American students enrolled in college science courses (biology,<br />
chemistry and physics). He reported within-scale reliability scores ranging from .63 - .89 for the<br />
actual (perceived) inventory form and from .35 - .91 for the preferred form. Chemistry students’<br />
scale reliability scores for the Prior Knowledge and Autonomy scales were noticeably lower than<br />
other reliability scores, at .35 and .57 respectively. Cannon found that student perceived scores<br />
were lower than preferred scores for all scales, indicating that the learning environments in all<br />
three types of science classes fell short of what students preferred for these environments.<br />
Generally, all of the instruments described above have been demonstrated to be<br />
sufficiently reliable for use in assessment of classroom learning environments (Fraser, 1991).<br />
Within-scale reliabilities indicated that students respond consistently to each of the scales within<br />
the inventories. Correlations of individual scales with the other scales in an inventory indicated<br />
that the concepts measured by each of the instruments are distinct but somewhat overlapping. All<br />
inventories have been shown to discriminate between the learning environments of students in<br />
different classes. Each of the inventories measures somewhat different factors, as evidenced by<br />
the different scales in the instruments.<br />
Fraser (1991) created a matrix of the scales included in six of the above instruments (LEI,<br />
CES, ICEQ, MCI, CUCEI, and SLEI), categorizing each instrument’s scales according to the<br />
three dimensions of the human environment (Relationship, Personal Development, and System<br />
Maintenance and Change) identified by Moos (1979). It was evident from review of this matrix<br />
that several of the instruments contain some of the same or very similar scales, but that none<br />
contain exactly the same scales. This author has further summarized Fraser’s matrix, focusing<br />
on the scales included in the six instruments and frequency of identification of each scale among<br />
the instruments. In addition, the scales identified in the more recent CLES inventory have been<br />
included in the matrix summary. Table 1 identifies each scale (factor) measured by the seven<br />
previously identified instruments and the frequency of scale identification across the instruments.<br />
It is apparent that student cohesiveness or affiliation, involvement or participation, satisfaction,<br />
competitiveness, and diversity or individualization are considered important factors in the<br />
traditional classroom learning environment, as evidenced by their inclusion in at least three of<br />
the seven learning environment inventories.<br />
The most recently developed instrument measuring student perceptions of the classroom<br />
learning environment, the Perceptions of Learning Environments Questionnaire (PLEQ), was<br />
published by Clark (1995). This semi-structured inventory containing open-ended questions was<br />
developed in response to the author’s perception that the previously cited instruments are limited<br />
by scales developed a priori and forced-choice student response options. On the PLEQ, students<br />
identify a learning environment and then describe up to 10 behaviors and activities that help as<br />
well as hinder their learning. The author tested the instrument with 1249 students at Queensland<br />
University of Technology in Australia. The categorized statements and reasons of 100 randomly<br />
selected students were analyzed to determine 55 categories of statements and 47 categories of<br />
reasons for use in classifying the remaining data. The most frequently identified<br />
activities/behaviors that helped students to learn were (a) practical application, (b) clear<br />
presentation/explanation, (c) lecturer supports student learning, (d)<br />
Table 1.<br />
Identification and Frequency of Factors (Scales) Contained in LEI, CES, ICEQ, MCI, CUCEI,<br />
and SLEI Instruments According to Moos’ Dimensions of the Educational Environment<br />
<br />
Relationship Dimension:<br />
Student cohesiveness or affiliation (5)<br />
Involvement or participation (3)<br />
Satisfaction (3)<br />
Friction (2)<br />
Personalization (2)<br />
Favoritism (1)<br />
Cliqueness (1)<br />
Apathy (1)<br />
Teacher support (1)<br />
<br />
Personal Development Dimension:<br />
Competitiveness (3)<br />
Difficulty (2)<br />
Task orientation (2)<br />
Speed (1)<br />
Independence (1)<br />
Investigation (1)<br />
Open-endedness (1)<br />
Integration (1)<br />
<br />
System Maintenance/Change Dimension:<br />
Diversity, differentiation or individualization (3)<br />
Rule clarity (2)<br />
Material environment (2)<br />
Organization or disorganization (2)<br />
Teacher control vs. democracy (2)<br />
Innovation (2)<br />
Formality (1)<br />
Goal direction (1)
lecturer asks questions, and (e) discussion occurs. The most frequently identified<br />
activities/behaviors that hindered student learning were (a) class not disciplined, (b)<br />
inappropriate pacing of instruction, (c) unclear presentation/explanation, (d) inappropriate class<br />
size, and (e) no variety in presentations/activities. Clark noted that an interesting finding of the<br />
research was that students did not refer to either constructing their own knowledge or to<br />
autonomy in learning as contributing to or hindering learning.<br />
The PLEQ instrument is descriptive in nature, and no reliability or validity testing was<br />
reported. Its purpose is also somewhat different from the previously described instruments, as<br />
students are not evaluating an actual environment, but are making statements about a general<br />
type of learning environment. It does, however, support inclusion of items related to student<br />
interaction, application of knowledge, instructor support, discussion, clear presentations,<br />
variety/innovation, and appropriate discipline in an instrument designed to measure student<br />
perceptions of an actual learning environment.<br />
Research consistently demonstrates that students' perceptions of their environment<br />
account for an appreciable amount of variance in student outcomes (Henderson, Fisher, & Fraser,<br />
1995; Moos & Moos, 1978), even when student background characteristics (prior knowledge and<br />
general ability) are controlled (Fraser, Williamson, & Tobin, 1987). In addition, teachers tend<br />
to perceive the environment more positively than students do (Fraser, 1991), although both<br />
groups generally agree about characteristics of an ideal learning environment (Fraser, 1995;<br />
Fraser & Fisher, 1982; Raviv, Raviv, & Reisel, 1990). Classroom environment research has also<br />
shown that person-environment fit, as measured by congruence between student perceived and<br />
preferred or ideal learning environment, is important in student achievement (Fraser, 1991).<br />
Although the above-mentioned learning environment inventories are valid and useful<br />
instruments to measure the traditional classroom environment, they do not address several<br />
variables present in applied learning settings, such as the clinical nursing education environment.<br />
The physical environment layout and resources, the adequacy of learning opportunities within<br />
the environment, the influence of other student groups on the nursing student learning<br />
experience, and the impact of professional staff or expert practitioners on student learning are not<br />
addressed by the traditional classroom inventories. As noted previously, there are currently no<br />
published instruments that comprehensively address the factors present in the applied learning<br />
environment.<br />
Statement of the Problem<br />
Student and faculty variables that influence learning in the clinical setting may be<br />
evaluated using a variety of tools and processes. The adequacy of the clinical environment,<br />
however, often is assessed with one or two evaluative questions embedded in an overall nursing<br />
course evaluation. Analysis of one or two global questions related to the clinical setting can<br />
hardly result in reliable information about the adequacy of the setting for students of a particular<br />
ability level. The clinical education environment includes many forces and conditions within the<br />
setting that impact student learning (Dunn & Burnett, 1995). The wide variety of clinical<br />
settings used for undergraduate nursing education, the increased focus on clinical competence,<br />
and the frequent use of clinical agency preceptors for nursing students highlight the need for a<br />
comprehensive evaluation of the clinical sites used in undergraduate nursing education.<br />
However, it was apparent after review of the literature that although the clinical education<br />
environment was considered a critical component of nursing education, there was no<br />
comprehensive tool that concisely measured student perceptions of the clinical learning<br />
environment in nursing. Multiple inventories exist to assess student perceptions of the traditional<br />
classroom environment, but most are designed for use with elementary and secondary level<br />
students, and none address the factors identified in the nursing literature as critical to an<br />
assessment of the applied clinical environment. The purpose of this research endeavor was to<br />
develop and test an instrument that measures student perceptions of the clinical learning<br />
environment in the variety of agencies and sites used in nursing education.<br />
Preliminary Investigation<br />
The initial Student Evaluation of Clinical Education Environment (SECEE) instrument<br />
was developed as a tool for student evaluation of their clinical environments (Sand-Jecklin,<br />
1997). Attention to the validity of content was addressed from the initial steps in inventory<br />
development. Items were selected based on a review of the literature, nursing faculty and senior<br />
nursing student perceptions of important aspects of the clinical environment, and a review of<br />
sample university–agency contracts and unpublished course evaluations. Senior nursing students<br />
and nursing faculty at a large mid-Atlantic university were asked, in group discussions facilitated<br />
by the researcher, to identify characteristics of the clinical education environment that they felt<br />
should be addressed by an evaluation instrument. A table of specifications was developed,<br />
including criteria identified through each of these sources. Item content was taken directly from<br />
the table, with particular attention to criteria identified by several sources, supporting validity of<br />
instrument content. An additional criterion considered in tool development was the need for a<br />
brief survey that could be completed by students in a short time frame. Instrument revisions<br />
were made based on review of the instrument by evaluation and nursing experts.<br />
The original two-page SECEE instrument consisted of thirteen forced-choice items<br />
relating to the clinical learning environment, of which eleven were presented in a four-point<br />
Likert format (a copy of the instrument appears in Appendix A). Content of the forced-choice<br />
items included issues related to (a) student orientation; (b) nursing staff/preceptor availability,<br />
communication, role modeling, and workload; (c) resource availability including patients,<br />
equipment, and references; and (d) student opportunity for hands-on care (Peirce, 1991; Perese,<br />
1996). In addition, two questions contained dichotomous (yes/no) choices related to staff<br />
preparation to serve as a resource and the presence of other students at the site during clinical<br />
time.<br />
Four open-ended items requested students to describe both the strengths and limitations<br />
of the clinical experience at a particular agency, to describe the impact of other health<br />
professional students at the clinical site on the student’s experience, and to comment further on<br />
either the clinical experience or the evaluation tool.<br />
Data were collected at the end of the 1996 spring semester. All students enrolled in the<br />
undergraduate nursing program at the main campus of a large mid-Atlantic university were<br />
included in data collection. Nursing faculty at the university distributed and collected the<br />
inventories during the last two weeks of the semester. From the group of 218 sophomore, junior<br />
and senior nursing students at the main university campus, 148 questionnaires were completed,<br />
representing a 68% response rate. Response rates were similar across the three levels of<br />
students. A total of 41 specific clinical learning environments were evaluated by students, with<br />
each student evaluating only one site experienced during the spring semester. The number of<br />
clinical sites was rather large due to senior level students participating in a precepted clinical<br />
experience in a wide variety of clinical agencies, with only one to three senior students placed at<br />
each site.<br />
Student response consistency (Cronbach alpha coefficient) for the forced-choice<br />
inventory items was .897, with question two removed from analysis as it contained nominal data.<br />
According to Burns and Grove (1993), alpha levels of .8 to .9 demonstrate response consistency<br />
for the instrument. Correlations of individual items with the total ranged from .468<br />
to .789, except for the single item related to the adequacy of orientation, which had a correlation<br />
of .161 with the total.<br />
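The internal-consistency statistic reported above, Cronbach's alpha, follows directly from the item-score matrix via the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of respondent totals). The sketch below is illustrative only; the data are invented, not study data.<br />

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of forced-choice items.

    items: list of k lists, each holding one item's scores across
           the same n respondents (columns of the score matrix).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    sum_item_var = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Invented example: three items, four respondents
items = [[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]]
print(round(cronbach_alpha(items), 3))  # prints 0.933
```

A value near .9, as reported for the original SECEE, indicates that respondents answer the items consistently.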
Factor analysis was also used to examine the internal structure of the evaluation instrument.<br />
Analysis with varimax rotation resulted in two factors with eigenvalues over 1.0. The reliability<br />
coefficient alpha for factor 1 (learning environment) was .831 and the alpha for factor 2 (agency<br />
or department atmosphere) was .774. Within factor 1, however, there appeared to be two sub-<br />
scales. The coefficient alpha for sub-scale 1 (learning opportunities) was .799 and the alpha for<br />
sub-scale 2 (staff/preceptor issues) was .790. Five of the eleven items included in the factor<br />
analysis loaded above .35 on both identified factors, although they generally loaded somewhat<br />
higher on one factor.<br />
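The factor-retention rule used above, keeping factors whose eigenvalues exceed 1.0, is the Kaiser criterion, and can be sketched as follows. This shows only the retention step, not the full extraction and varimax rotation; the data are invented to give two clearly separated item clusters.<br />

```python
import numpy as np

def kaiser_factor_count(data):
    """Count factors retained under the Kaiser criterion: eigenvalues
    of the inter-item correlation matrix greater than 1.0.

    data: (n_respondents, n_items) array of item scores.
    """
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1.0).sum())

# Invented scores forming two perfectly distinct item clusters
f1 = [1, 2, 3, 4]      # one latent signal
f2 = [1, -1, -1, 1]    # a second, uncorrelated signal
data = np.column_stack([f1, [2 * v for v in f1], f2, [3 * v for v in f2]])
print(kaiser_factor_count(data))  # prints 2
```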
Several steps were taken to demonstrate validity of the SECEE instrument. Review of<br />
the tool by both nursing education faculty and experts in evaluation contributed to the validity of<br />
content. In addition, analysis of variance was performed to discern the discrimination ability of<br />
the instrument. In order to conduct the ANOVA procedure, agency sites identified by fewer than<br />
six respondents were combined into site categories or groups, according to similarity of the type<br />
of nursing care provided at the sites. The resulting ten site groups varied in number of<br />
respondents from seven to twenty-seven, with the majority including thirteen to eighteen student<br />
respondents. Analysis of item responses by site groups demonstrated significant differences<br />
between groups for all scaled items (p < .05). Follow-up testing with the Duncan test confirmed<br />
significant differences between several site groups within each item. Some general trends were<br />
evident across items, with two clinical areas being generally rated more favorably and three sites<br />
being rated least favorably.<br />
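The between-group comparison above rests on the one-way ANOVA F ratio (between-group mean square divided by within-group mean square). A minimal pure-Python sketch follows, with invented site-group ratings; the original analysis would have used a statistical package and also produced p values and Duncan post-hoc tests, which are omitted here.<br />

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across clinical site groups.

    groups: list of lists, one list of item scores per site group.
    """
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares, df = N - k
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented ratings from three site groups with clearly different means
print(one_way_anova_f([[1, 2, 1, 2], [5, 6, 5, 6], [9, 10, 9, 10]]))  # prints 192.0
```

A large F relative to the critical value for (k - 1, N - k) degrees of freedom indicates that at least one site group differs from the others.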
Student responses to the open-ended questions appeared to corroborate the ANOVA<br />
results. Sites rated significantly lower than others in the comparative analysis also received<br />
fewer positive comments in response to the open-ended item that requested students to identify<br />
the strengths of having a clinical experience at the agency and were the only sites mentioned as<br />
having “no strengths”.<br />
As mentioned previously, the open-ended questions were included in the SECEE<br />
instrument in order to provide support for existing forced-choice items and to determine whether<br />
students would identify any significant clinical environment issues that were not addressed by<br />
the forced-choice portion of the instrument. One item asked students to describe the impact of<br />
other health professional students being present at the clinical agency on their experience.<br />
Generally, the comments were global in nature and either positive (e.g., enhanced the student<br />
experience) or neutral. Information about the impact of other students in the setting on the<br />
nursing student learning experience could be more efficiently gained through a forced-choice<br />
question.<br />
Students were also asked to describe the strengths and limitations of having a clinical<br />
experience at the identified agency. The most frequently identified strengths and limitations had<br />
already been addressed in the forced-choice portion of the SECEE instrument, supporting<br />
content validity of the instrument. The issues of variety of patients in the setting and time<br />
constraints experienced by students were the only strengths or limitations not specifically<br />
addressed by the scaled-response portion of the instrument.<br />
Finally, students were asked to make any further comments they wished to communicate,<br />
related either to their clinical experience or to the evaluation instrument. Most comments were<br />
general in nature and related to either preceptor or staff issues (both positive and negative<br />
comments) or to general perceptions about the agency environment.<br />
Three study limitations became evident during data analysis, with the physical layout of<br />
the demographic section contributing to two of the limitations. The semester identification item<br />
was found to have 57 missing responses. This did not affect data analysis, as data collection<br />
occurred during only one semester. However, it might affect analysis if data were to be<br />
collected over the span of several semesters. In addition, the item that requested students to<br />
identify the clinical site being evaluated was found to be problematic. Twenty-six students<br />
identified clinical sites only by institution (hospital) or clinical rotation (psychiatric), rather than<br />
by specific unit or department, even though instructions requested inclusion of the unit or<br />
department. This resulted in the need to create two general agency clinical site groups for<br />
ANOVA analysis, potentially impacting analysis results. However, the majority of missing data<br />
was from students evaluating one specific rotation in psychiatry, so validity testing of the<br />
instrument was assumed to be minimally affected.<br />
The third study limitation that became apparent during data analysis related to the<br />
unequal numbers of students evaluating the variety of clinical sites. Due to senior students<br />
having precepted clinical experiences in a wide variety of community and hospital settings, the<br />
number of students evaluating some sites was very small. Thus, it was necessary to group<br />
individual sites prior to ANOVA analysis. This grouping may have masked extreme scores for<br />
sites having few respondents. If individual site data were needed for settings having very few<br />
students, it would be necessary to review inventories rating these settings individually, or to<br />
combine data for two or more semesters in order to achieve a larger sample size per site for<br />
quantitative analysis.<br />
Research Questions<br />
The original SECEE inventory appeared to adequately reflect nursing students’<br />
perceptions of some aspects of the clinical learning environment, based on data from students<br />
enrolled in one nursing education program. However, the inventory did not reflect the full range<br />
of factors impacting the clinical learning environment. As there continued to be no other<br />
published instruments addressing student perceptions of the clinical education environment, the<br />
investigator chose to continue refinement and testing of the SECEE inventory. The goals of<br />
inventory revision were to produce an instrument that was a valid and reliable measure of student<br />
perceptions, that reflected more fully the range of environmental impacts on student learning,<br />
and that was practically useful to nursing educators. Based on the results from administration of<br />
the original SECEE inventory and on further review of the applied learning environment<br />
literature, the following questions were posed for continued research.<br />
1. Is it possible to revise the original SECEE instrument to include all issues critical to the<br />
nursing clinical education environment, while maintaining brevity and practicality of use<br />
by nursing educators?<br />
2. Is the revised SECEE inventory a reliable instrument for measuring student perceptions<br />
of their clinical (applied) learning environment?<br />
3. Is the revised SECEE inventory able to differentiate between various student populations<br />
through the evaluation of their clinical education environment?<br />
4. Are further revisions recommended to improve the validity, reliability, or ease of<br />
administration and analysis of the SECEE inventory?<br />
Chapter 3<br />
Methods<br />
The following section describes revisions to the original SECEE inventory and the<br />
research procedure used in testing the revised inventory version. Nursing students at three<br />
institutions participated in data collection, in order to broaden instrument testing and support<br />
potential generalizability to other nursing student populations.<br />
Instrument Revisions<br />
The focus of the present investigation was on continued refinement and testing of the<br />
SECEE instrument. Several changes were made to the 13 forced-choice and 4 open-response<br />
items of the original instrument, based on analysis of student data as well as on a more extensive<br />
review of both theoretical literature and instruments that have been used to assess both the<br />
traditional classroom and the applied learning environment (see Appendix D for Table of<br />
Specifications). All revisions were made with consideration for validity of content, with<br />
emphasis on data from student and faculty focus groups.<br />
Forced-choice questions were formatted to fit the same scale, to improve the ease of<br />
reading and scoring. In addition, the response options were changed from a four-point frequency<br />
Likert scale (very seldom to almost always) to a five-point agreement scale (strongly agree to<br />
strongly disagree), to increase the discrimination ability of the instrument. A sixth option of<br />
“can’t answer” was also added, along with a request for students to provide an explanation for<br />
questions to which they responded “can’t answer”. It was hoped that the addition of the “can’t<br />
answer” response option would reduce the incidence of items left blank, provide information as<br />
to why students were unable to respond to an item, and perhaps point to the need for further<br />
inventory modifications.<br />
The inventory contained four items for which a low score reflects a negative, rather than<br />
positive, characteristic of the environment. These items were reverse-coded prior to data<br />
analysis. The two demographic items (semester and clinical site being evaluated) on the initial<br />
instrument that were found to have missing or incomplete student responses in the initial data<br />
collection were altered to guard against recurrence of missing data.<br />
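Reverse-coding the negatively worded items on a five-point agreement scale is a simple arithmetic flip. The sketch below is illustrative (not the study's actual scoring code): on a 1-to-5 scale, a response r becomes 6 - r.<br />

```python
def reverse_code(score, scale_max=5):
    """Flip a 1..scale_max Likert response so that a high number
    always reflects a favorable perception (1 <-> 5, 2 <-> 4, ...)."""
    return (scale_max + 1) - score

print([reverse_code(s) for s in [1, 2, 3, 4, 5]])  # prints [5, 4, 3, 2, 1]
```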
An additional objective in revision of the inventory was to encompass more fully the<br />
wide range of environmental influences on learning that had been identified through previous<br />
research investigations (Peirce, 1991; Windsor, 1987). In order to accomplish this objective, four<br />
items addressing student opportunities for learning and success, the impact of both instructor and<br />
resource staff behaviors on the student experience, and relationships with other students were<br />
added to the inventory.<br />
The initial instrument did not address the impact of the instructor on the learning<br />
environment. Based on research literature support for inclusion of instructor variables in a<br />
comprehensive educational environment evaluation (Fraser, Treagust, & Dennis, 1986;<br />
Henderson, Fisher, & Fraser, 1995; Robins et al., 1996; Trickett & Moos, 1973), the issues of<br />
instructor feedback to students, instructor serving as a role model for professional nursing,<br />
adequate instructor guidance, instructor encouragement of students helping each other, and<br />
instructor availability for questions or assistance were addressed through the addition of one<br />
inventory item for each issue. Two inventory items related to adequate preceptor/resource nurse<br />
guidance in learning new skills and preceptor/resource nurse feedback to students were also<br />
added, based on the applied cognition literature (Bevil & Gross, 1982; Peirce, 1991; Slavin,<br />
1997).<br />
An item addressing the range of learning opportunities at the site was also added, as a<br />
result of multiple student comments about learning opportunities on the open-ended items of the<br />
initial instrument as well as from further literature review (Farrell & Coombes, 1994; Reilly &<br />
Oermann, 1992). In addition, eight items were added, based on review of both the applied and<br />
traditional classroom learning literature (Cust, 1996; Fraser & Fisher, 1983; Slavin, 1997).<br />
These items asked whether student responsibilities were clearly communicated, whether students<br />
were encouraged to identify and pursue learning opportunities, whether students were allowed<br />
increased independence as their skills improved, whether students experienced difficulty<br />
obtaining assistance when needed, whether adequate support was provided during attempts at a<br />
new skill, whether students found their responsibilities overwhelming, whether nursing students<br />
helped each other, and whether the atmosphere was conducive to learning.<br />
Two questions were removed from the instrument, due to the perception that not all<br />
nursing students would have adequate prior knowledge and experience to make reliable<br />
judgements about the items. The forced-choice items asking whether students felt their<br />
preceptor/resource nurse was prepared to serve as a resource to students and requesting student<br />
recommendation about future use of the agency as a clinical education site were removed during<br />
instrument revision. If either of these issues was critically important to a particular student, s/he<br />
could comment on them via the open-ended questions.<br />
The forced-choice instrument items encompass four predetermined factors or scales:<br />
communication and feedback, learning opportunities, learning support and assistance, and<br />
department atmosphere (see appendix B for a list of items within each scale). The four scales are<br />
representative of all three of Moos’ (1979) dimensions of educational environments. The<br />
Communication and Feedback scale represents Moos’ relationship dimension; the Learning<br />
Support and Assistance and Learning Opportunities scales fall under the personal development<br />
dimension; and the Department Atmosphere scale represents the system maintenance and change<br />
dimension. The SECEE instrument has fewer scales than some of the previously cited traditional<br />
classroom inventories (Fraser & Fisher, 1983), but it was believed that the items within the<br />
scales measure the primary components of the clinical education environment and that the scales<br />
would be shown through data analysis to measure distinct factors of the environment. A copy of<br />
the complete SECEE instrument appears in Appendix C.<br />
The majority of forced-choice items as well as the open-ended questions are written from<br />
an individual rather than from a group perspective. As Fraser (1991) mentioned, students may<br />
perceive the environment somewhat differently from a personal or from a class perspective.<br />
Students generally work independently in the clinical nursing setting, and so may not be able to<br />
comment on the learning environment for the clinical group as a whole. Thus, the inventory was<br />
designed to gather perceptions about the individual’s experience within the environment.<br />
Open-ended items asking students to identify aspects of the clinical setting that promoted<br />
learning and the aspects that hindered learning, as well as an item providing space for additional<br />
student comments remained in the instrument. It was felt that analysis of these items would<br />
provide support for the validity of inclusion of the forced-choice items as well as identify issues<br />
that may still not be addressed by the forced-choice items.<br />
In addition to the six demographic questions, the SECEE inventory contains 29 scaled<br />
items, plus three open response items. Questionnaire brevity was an issue identified by Fraser<br />
(1991) in his development of the shortened versions of the MCI, CES and ICEQ inventories.<br />
Limiting the number of items while maintaining the discrimination ability and consistency of an<br />
instrument allows for improved efficiency of administration and scoring, and may promote use<br />
of the instrument as a tool in the process of improving the student learning environment. It was<br />
anticipated that students would need approximately ten minutes to complete the inventory, so<br />
that students could easily complete the instrument at the end of a clinical session or during a<br />
session of a theory based course.<br />
Population<br />
The study population for this investigation consisted of nursing students enrolled in<br />
clinical nursing courses in baccalaureate nursing programs. The sample population was taken<br />
from one large mid-Atlantic university (LMA), one small liberal arts college in the mid-Atlantic<br />
region (SMA), and one small university in the Midwest (SMW). The potential<br />
number of student respondents from all institutions was approximately 380. Permission to<br />
conduct the investigation was obtained from each institution prior to administration of the<br />
instrument.<br />
A description of the general nature of student clinical experiences at each university was<br />
obtained from the dean of each school of nursing. Nursing students at all three institutions had<br />
clinical sessions twice per week. The number of hours per week in the clinical setting varied<br />
from 8 to 18, varying more with student level than with institution. Generally, sophomores had<br />
the fewest clinical hours per week (8 – 12) and seniors had the highest number of clinical hours<br />
per week (16 – 18). Nursing programs at LMA and SMW included a staff-precepted student<br />
clinical experience at the senior level in both the community and leadership clinical rotations.<br />
The program at SMA did not include any formally arranged precepted clinical experience for<br />
students. Students attending LMA and SMW had two distinct clinical rotations per semester,<br />
except senior students at LMA, who were placed in a rural site rotation for an entire semester.<br />
At SMA, only junior students had two clinical rotations per semester; sophomores and seniors<br />
remained at one primary clinical site throughout the semester.<br />
Procedure<br />
The SECEE instrument was administered to all classes of nursing students who were<br />
enrolled in a clinical nursing experience at each institution. The instruments were distributed<br />
during classroom sessions of nursing courses, for the sake of convenience in data gathering.<br />
Students were asked to evaluate the primary clinical site that they were currently experiencing.<br />
If students had concurrent clinical experiences at more than one site, they were assigned to<br />
evaluate the first clinical setting they experienced during the school week. Only primary clinical<br />
rotation sites were evaluated—observational and single day experiences at clinical sites were not<br />
included in data collection. Although these experiences are also important to student learning,<br />
inclusion of observational or single session sites in data collection would require students to<br />
complete multiple inventories (one for each site experienced) within a short time frame,<br />
potentially impacting the quality of student responses. To the investigator’s knowledge, all<br />
students had at least three clinical sessions at the site being evaluated prior to data collection.<br />
Data were collected during the 1998 spring semester. A sub-sample of sophomore and<br />
junior students at institution LMA completed the instrument twice during the semester, during<br />
two different clinical rotations. These data allowed comparison of individual student evaluations<br />
of distinct clinical sites during the same semester. The junior students at SMW and the senior<br />
students at SMA completed the inventory twice during the semester, but evaluated the same<br />
clinical sites, for the purposes of test-retest reliability determination. The second administration<br />
of the inventory to the above groups of students was between three and four weeks after the first<br />
administration, following guidelines for test-retest reliability determination (Burns & Grove,<br />
1993). All other students at the three institutions completed the SECEE inventory only once<br />
toward the end of the semester.<br />
Nursing instructors at institutions LMA, SMW, and SMA who distributed and collected<br />
the instruments read the “Statement to Participants” prior to instrument distribution, assuring<br />
students of confidentiality and anonymity as well as the voluntary nature of participation in the<br />
study. The investigator read the “Statement to Participants” as well as distributed and collected<br />
the SECEE inventories only for the sophomore and junior classes of students at LMA. All<br />
students were requested to identify their inventories with the first two letters of their mother’s<br />
first name and the last two digits of their social security number. Identification of student<br />
inventories was necessary for comparisons of students’ pretest and end of semester inventories.<br />
To the investigator’s knowledge, no time limitations were imposed for student completion of the<br />
inventory. Upon completion, the SMW and SMA student inventories were forwarded to the<br />
researcher by the deans of the nursing departments. LMA sophomore and junior student<br />
inventories were collected directly by the investigator and senior student inventories at LMA<br />
were forwarded to the investigator by nursing faculty.<br />
Chapter 4<br />
Results<br />
This chapter presents an analysis of the data resulting from administration of the SECEE<br />
inventory to the previously described nursing student population, as well as a brief discussion of<br />
results. Data resulting from the end of semester SECEE inventory administration were the<br />
primary focus for analysis, with analysis of the pretest data primarily for the purposes of<br />
reliability determination. Tables reflect analysis of the primary end of semester data, with<br />
inclusion of pretest data analysis when appropriate. Descriptive analysis of the forced-choice<br />
portion of the inventory is followed by comparison of results by institution, reliability<br />
calculations, and comparative analysis of scale scores by student level and by clinical site.<br />
Finally, student narrative responses to the open-ended questions are presented.<br />
Three hundred nineteen nursing students at the three identified institutions (LMA, SMW,<br />
and SMA) completed the end of semester SECEE inventory. Of the selected sub-sample, 127<br />
students completed pretest inventories. Student response rates across institutions were somewhat<br />
different, with SMA and SMW students having higher response rates (92.3% and 90.8%<br />
respectively) than LMA students (67.2%). This trend was also evident in the pretest data. The<br />
low LMA response rate was primarily due to several students being out of class for a nursing<br />
conference on the date of pretest inventory administration, and to half of the senior students<br />
being off campus for the entire spring clinical rotation, returning only on the last day of class<br />
during the semester to complete paperwork and evaluations.<br />
Descriptive Analysis<br />
Data from the forced-choice inventory items were entered according to the coded<br />
responses (1-5) with responses of 6 (can’t answer) coded as “99” and removed from the data set<br />
prior to parametric analysis. Student explanations as to why they could not answer a particular<br />
forced-choice item were reviewed for thematic content, in order to determine if instrument<br />
revisions would address the situations identified.<br />
Prior to data entry, scores for items 9, 11, 21, and 25 were reverse-coded, as they<br />
reflected negative characteristics of the learning environment. Prior to analysis, respondents’<br />
“scale scores” were calculated by adding the scores for the items within each scale (see<br />
Appendix B for a list of items within each scale). Data analysis was focused on determination of<br />
SECEE inventory validity and reliability.<br />
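The recoding and scoring steps described above can be sketched as follows. The reverse-coded item numbers (9, 11, 21, 25), the 1-5 response range, and the "can't answer"/99 handling come from the text; the scale membership used in the example call is hypothetical.

```python
# Sketch of the SECEE scoring steps: reverse-code negatively worded items,
# treat "can't answer" (raw response 6) as missing (coded 99), and sum the
# remaining item scores within a scale. Scale membership below is invented.
REVERSE_CODED = {9, 11, 21, 25}
MISSING = 99

def recode(item_number, response):
    """Recode one raw 1-5 response; 6 ("can't answer") becomes 99."""
    if response == 6:
        return MISSING
    if item_number in REVERSE_CODED:
        return 6 - response      # maps 1<->5, 2<->4, leaves 3 unchanged
    return response

def scale_score(responses, scale_items):
    """Sum recoded responses for one scale, skipping missing cells.
    (In the study, such cells were later replaced with the respondent's
    mean on the remaining scale items before analysis.)"""
    recoded = [recode(i, responses[i]) for i in scale_items]
    return sum(s for s in recoded if s != MISSING)
```

For example, `recode(9, 5)` yields 1, so strong agreement with a negatively worded item contributes a (positive) low score.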
The 319 end of semester study participants fairly evenly represented the three student<br />
levels, with 100 sophomore students, 94 juniors, and 122 seniors. Fifty sophomores, 40 juniors<br />
and 37 seniors completed the pretest inventory. The vast majority of participants in both the<br />
pretest and end of semester inventory (N = 120 and 297, respectively) were generic BSN<br />
students. Only 15 students (4 students in the pretest group) reported being enrolled in RN to<br />
BSN programs.<br />
As expected, the majority of participants in both the pretest and end of semester<br />
inventory reported being assigned a staff nurse at the clinical site (N = 63 and 159, respectively).<br />
Fewer students (N = 21 and 59) reported being assigned an RN preceptor for the clinical rotation.<br />
An unexpected finding was the number of students (N = 37 for pretest inventory and 86 for end<br />
of semester inventory) who reported working only with their instructor at the clinical site, not<br />
having been assigned any resource person. Only two respondents reported being assigned a<br />
non-nurse resource person.<br />
Missing data. Only two of the demographic items contained a frequency of missing data<br />
worthy of concern. The first was the student ID number. Thirty-five students did not identify<br />
their inventories with the first two letters of their mother’s first name and the last two digits of<br />
their social security number. This presented a problem only when attempting to match pretest<br />
and end of semester inventories for test-retest reliability determination. The second demographic<br />
item with a frequency of missing data worthy of concern was clinical site identification. Sixty-<br />
six respondents to the end of semester inventory and 22 respondents to the pretest inventory did<br />
not identify the clinical site they were evaluating. Missing data were not evenly distributed<br />
across institutions. All students at SMW institution completed the site identification item for<br />
both the pretest and end of semester inventories. There were 40 cases of missing site<br />
identification data from the end of semester inventories at LMA and 9 cases from the pretest<br />
inventories. At SMA, 26 students completing the end of semester inventory and 9 students<br />
completing the pretest inventory failed to identify the clinical site they were evaluating. The<br />
missing site identifications resulted in exclusion of these inventories from any analysis using<br />
clinical site as a study variable.<br />
Data from the forced-choice portion of the inventory contained few occurrences of<br />
missing data (items left blank by respondents). Of the 319 completed end of semester<br />
inventories, only 17 “open” cells were identified. No more than three respondents left any<br />
particular item blank. The pre-test data contained only six open cells, with no more than one<br />
respondent leaving any item blank.<br />
Study participants responded “can’t answer” to inventory items with a higher frequency<br />
than that of leaving items blank. Of the 319 completed end of semester inventories, there were<br />
183 occurrences of response number “6” (can’t answer). These occurrences were not evenly<br />
distributed. Of the seven items with a frequency of ten or more can’t answer responses (numbers<br />
2, 5, 8, 9, 21, 22, and 28), four dealt with preceptor or resource nurse issues and three contained<br />
issues related to other students in the clinical learning environment. Thirty-nine explanations for<br />
“can’t answer” responses were provided by students, accounting for 21.3% of the “can’t answer”<br />
responses. Student explanations for the four preceptor/resource nurse items with highest<br />
frequency of “can’t answer” responses were all similar, stating that there had been no preceptor<br />
or resource nurse assigned to the student. Student explanations for “can’t answer” responses to<br />
items related to other students at the clinical site were also similar, citing the absence of other<br />
students at the particular clinical site.<br />
Overall, the total number of student responses that were either open or entered as “can’t<br />
answer” was relatively low, representing less than two percent of total possible responses to the<br />
instrument. Therefore, the impact on data analysis was assumed to be negligible.<br />
The pretest inventory data contained 61 “can’t answer” cells among the 127 completed<br />
instruments. Again, less than two percent of the total possible data was either missing or<br />
coded as “can’t answer”.<br />
Descriptive Statistics<br />
Descriptive analysis of forced-choice items revealed that overall item means varied from<br />
1.68 to 2.67 on the end of semester inventory administration, indicating that respondents<br />
evaluated the clinical sites positively to some degree (bear in mind that a lower item or scale<br />
score indicates a positive evaluation of that aspect of the clinical learning environment).<br />
Standard deviations ranged from 0.86 to 1.22. Pretest data means and standard deviations were<br />
similar. All items except numbers 9 (negative impact of high preceptor/resource nurse<br />
workload) and 25 (difficult to find help when needed) were somewhat positively skewed upon<br />
visual inspection of histograms. The calculated means and standard deviations for forced-choice<br />
items appear in Table 2.<br />
Parametric Analyses<br />
Prior to further analysis, scale scores were calculated for each respondent, by adding the<br />
scores for all items contained in each of the four predetermined scales. Open data cells and cells<br />
coded as “can’t answer” were replaced with the individual participant’s mean item score for that<br />
scale, calculated from the individual’s responses to the remainder of the items in the scale. No<br />
cases contained more than two “calculated” item scores per scale. Visual inspection of scale<br />
score histograms revealed a more normal distribution than was apparent with individual items.<br />
A slightly positive skew was visible on the Communication / Feedback and Learning Support<br />
Scales. Following are the analysis results for the SECEE inventory data collected during the<br />
spring semester, 1998. An alpha level of .05 was used for all statistical tests.<br />
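The cell-replacement rule in the preceding paragraph can be sketched as below. The data values are illustrative, and the two-cell limit mirrors the observation that no case contained more than two calculated item scores per scale.

```python
# Minimal sketch of person-mean substitution: an open or "can't answer" cell
# (marked None here) is replaced with the respondent's mean over the items
# answered within the same scale.
def fill_scale(item_scores):
    answered = [s for s in item_scores if s is not None]
    if len(item_scores) - len(answered) > 2:
        # The study observed at most two imputed cells per scale; treat more
        # as a data problem rather than silently imputing.
        raise ValueError("more than two missing cells in one scale")
    mean = sum(answered) / len(answered)
    return [mean if s is None else s for s in item_scores]

# One missing cell in a four-item scale: the mean of 2, 4, and 2 fills it.
filled = fill_scale([2, None, 4, 2])
```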
Table 2<br />
SECEE Inventory Forced-choice Item Means and Standard Deviations<br />
                                                                              End of Semester      Pretest<br />
Item                                                                          M     SD    n        M     SD    n<br />
Item 1. Adequacy of orientation                                               1.78  0.93  319      2.17  0.82  127<br />
Item 2. Preceptor/resource nurse available                                    1.78  0.86  295      1.85  0.84  117<br />
Item 3. Range of learning opportunities available                             1.86  0.88  319      1.87  0.83  127<br />
Item 4. Responsibilities clearly communicated                                 1.90  0.91  317      1.82  0.77  127<br />
Item 5. RN maintained pt. responsibility                                      1.81  0.92  299      2.19  1.05  116<br />
Item 6. Instructor available                                                  1.89  1.11  317      1.73  0.83  127<br />
Item 7. Encouraged to identify / pursue learning opportunities                1.74  0.86  319      1.66  0.74  127<br />
Item 8. Preceptor/resource RN communication about patient                     1.82  0.89  307      1.88  0.85  121<br />
Item 9. Impact of high RN workload on student experience (reverse coded)      2.67  0.89  295      2.62  1.17  114<br />
Item 10. Adequate instructor guidance                                         1.80  0.95  314      1.83  0.91  127<br />
Item 11. Overwhelmed by responsibilities of role (reverse coded)              2.32  1.12  316      2.15  0.96  127<br />
Item 12. Instructor provided adequate feedback                                1.91  1.01  317      1.83  0.77  127<br />
Item 13. Adequate number and variety of patients appropriate for abilities    1.97  0.92  318      2.05  0.85  127<br />
Item 14. Adequate preceptor / resource nurse guidance                         2.15  1.03  312      2.13  0.79  124<br />
Item 15. Allowed more independence as skills increased                        1.76  0.89  316      1.92  0.77  126<br />
Item 16. RN’s served as positive role models                                  2.21  1.02  315      2.34  0.94  127<br />
Item 17. Equipment, supplies and resources available                          1.86  0.90  317      2.16  0.95  126<br />
Item 18. Felt supported in attempts at new skills                             1.89  0.87  318      1.93  0.75  126<br />
Item 19. RN’s informed students of learning opportunities                     2.21  1.08  314      2.30  0.97  126<br />
Item 20. Instructor served as positive role model                             1.68  0.91  316      1.69  0.85  127<br />
Item 21. Negative impact from competition with other students (reverse coded) 2.40  1.22  308      2.35  1.13  123<br />
Item 22. Students helped each other                                           1.75  0.84  291      1.88  0.91  120<br />
Item 23. Site provided an atmosphere conducive to learning                    1.92  0.86  318      1.90  0.80  127<br />
Item 24. Nursing staff positive about serving as a resource                   2.21  1.02  315      2.33  0.96  126<br />
Item 25. Was difficult to find help when needed (reverse coded)               2.42  1.17  314      2.31  0.99  127<br />
Item 26. Was able to do “hands on” to level of ability                        1.74  0.96  314      1.73  0.76  127<br />
Item 27. RN staff provided constructive feedback                              2.27  1.11  311      2.39  0.88  126<br />
Item 28. Instructor encouraged students to help each other                    1.85  0.96  300      1.91  0.94  122<br />
Item 29. Was successful in meeting learning goals                             1.83  0.89  319      1.80  0.65  125<br />
Institutional Differences in Student Response to the SECEE Instrument. Prior to<br />
reliability determination for both the SECEE instrument as a whole and for the four<br />
predetermined scales, it was necessary to determine whether there were differences between the<br />
institutions in terms of respondent scale scores. Four ANOVAs were calculated, using<br />
institution as the independent variable and the four calculated scale scores as the<br />
dependent variables. A random sample of 35 completed inventories from each institution was<br />
used for analysis. No violation of the homogeneity of variance assumption was detected via the<br />
Levene’s Statistic method (Norusis, 1997). Institution specific and overall scale score means and<br />
standard deviations are reported in Table 3. Significant differences in scale scores were found<br />
between institutions for all four scales: Communication / Feedback F (2, 102) = 8.17, p < .01;<br />
Learning Opportunities F (2, 102) = 4.49, p < .05; Learning Support F (2, 102) = 8.40, p < .01;<br />
and Department Atmosphere F (2, 102) = 4.70, p < .05. Results of the ANOVA analysis are<br />
presented in Table 4.<br />
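The analysis just described can be sketched with SciPy's standard routines; the dissertation-era analysis used SPSS, so `levene` and `f_oneway` are assumed equivalents here, and the scale scores below are synthetic stand-ins rather than study data.

```python
# A hedged sketch of the institutional comparison: Levene's test for the
# homogeneity-of-variance assumption, then a one-way ANOVA with institution
# as the grouping factor. Scores are synthetic, not SECEE data.
from scipy import stats

lma_scores = [18, 15, 20, 17, 14, 19, 16, 21, 13, 18]
smw_scores = [13, 15, 12, 14, 16, 11, 13, 15, 12, 14]
sma_scores = [11, 13, 12, 10, 14, 12, 11, 13, 12, 10]

# Assumption check: a non-significant Levene statistic supports equal variances
lev_stat, lev_p = stats.levene(lma_scores, smw_scores, sma_scores)

# One-way ANOVA: does the mean scale score differ by institution?
f_stat, p_val = stats.f_oneway(lma_scores, smw_scores, sma_scores)
print(f"Levene p = {lev_p:.3f}; F = {f_stat:.2f}, p = {p_val:.4f}")
```

A significant F would then be followed by post hoc pairwise comparisons, as in the text.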
Multiple comparisons were calculated using the Dunnett T3 method (Lomax, 1992) and a<br />
significance level of p < .05. These results revealed that for the Communication and Feedback<br />
scale, students attending SMA (M = 11.06) and SMW (M = 13.78) rated their clinical sites more<br />
positively than did students attending LMA (M = 14.99). Students attending SMA institution<br />
also rated the Learning Opportunities at their clinical sites more positively (M = 13.11) than<br />
did students at LMA (M = 16.47). Learning Support at the clinical site was rated more positively<br />
by students at SMA (M = 13.33) and students at SMW (M = 14.68) than by students at<br />
LMA (M = 17.13). Finally, students at SMW rated the department atmosphere at their clinical<br />
Table 3<br />
Scale Means and Standard Deviations by Institution<br />
                        Comm. /        Learning         Learning       Department<br />
                        Feedback       Opportunities    Support        Atmosphere<br />
LMA Instit. (n = 35)    M = 14.99      M = 16.47        M = 17.13      M = 14.63<br />
                        SD = 5.27      SD = 5.82        SD = 4.40      SD = 5.88<br />
SMW Instit. (n = 35)    M = 13.78      M = 15.19        M = 14.68      M = 11.81<br />
                        SD = 3.66      SD = 3.85        SD = 3.40      SD = 2.94<br />
SMA Instit. (n = 35)    M = 11.06      M = 13.11        M = 13.33      M = 12.10<br />
                        SD = 3.32      SD = 4.27        SD = 3.92      SD = 3.24<br />
All Instit. (n = 319)   M = 13.99      M = 15.37        M = 15.52      M = 12.59<br />
                        SD = 5.02      SD = 5.25        SD = 5.05      SD = 3.59<br />
Table 4<br />
Statistical Results for Institution by Scale Score ANOVA<br />
df 1 df 2 F ratio p<br />
Communication/ Feedback 2 102 8.17 .001<br />
Learning Opportunities 2 102 4.49 .013<br />
Learning Support 2 102 8.41 .000<br />
Department Atmosphere 2 102 4.70 .011<br />
sites more favorably (M = 11.81) than students at LMA did (M = 14.63). The p values for the<br />
significant findings in post hoc comparisons are provided in Table 5.<br />
In addition to comparisons of scale scores by institution, item specific ANOVAs were<br />
completed using institution as the independent variable. Results indicated that institution effect<br />
on individual item scores was significant for 20 of the 29 forced-choice items at p < .05,<br />
although the data for 4 of the items violated the homogeneity of variance assumption according<br />
to the Levene Statistic. There did not appear to be a pattern in terms of the significant and non-<br />
significant items. The nine items for which no significant institutional differences were found<br />
were evenly distributed across all four scales, with each scale having either two or three items for<br />
which no institutional differences were found. Post hoc testing using the Dunnett T3 method<br />
(controlling for violation of the homogeneity of variance assumption) produced results consistent<br />
with the results of the scale score analysis, with students attending the smaller institutions rating<br />
their clinical experience more positively than students attending the larger institution.<br />
Instrument Reliability Determination. As there were significant differences in inventory<br />
scale scores between the three institutions used in this investigation, calculations for reliability of<br />
the scales and for the instrument as a whole were conducted separately for each institution as<br />
well as for all institutions together (see Table 6). Reported internal reliability figures represent<br />
standardized coefficient alphas (Anastasi, 1988). Reliabilities were found to be similar across<br />
institutions, although SMW results were slightly lower than those of the other two institutions.<br />
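For reference, a standardized coefficient alpha of the kind reported in Table 6 can be computed from the mean inter-item correlation. The formula below is the standard one for standardized alpha; the numbers plugged in are illustrative, not the actual SECEE inter-item correlations.

```python
# Standardized alpha for k items with mean inter-item correlation r_bar
# (the Spearman-Brown-type formula underlying "standardized item alpha").
def standardized_alpha(k, r_bar):
    return k * r_bar / (1 + (k - 1) * r_bar)

# With all 29 items, a mean inter-item correlation near .35 is already
# enough to produce an overall alpha in the .94 range reported in Table 6.
overall = standardized_alpha(29, 0.35)
```

The formula also shows why longer scales tend to report higher alphas: alpha increases with k even when the mean inter-item correlation is held constant.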
Table 5<br />
Significance Levels for Institutional Comparisons According to Scale Scores<br />
              Comm /     Learning        Learning   Department<br />
              Feedback   Opportunities   Support    Atmosphere<br />
LMA vs. SMW   .033       --              .001       .002<br />
LMA vs. SMA   .000       .003            .001       --<br />
SMW vs. SMA   --         --              --         --<br />
Note. Dashes indicate non-significant comparison results (p > .05)<br />
Table 6<br />
Reliability Coefficients for SECEE Instrument and Individual Scales by Institution<br />
                        Entire       Comm. /    Learning   Learning      Department<br />
                        Instrument   Feedback   Support    Opportunity   Atmosphere<br />
                                     Scale      Scale      Scale         Scale<br />
LMA Instit. (n = 126)   .94          .87        .87        .82           .64<br />
SMW Instit. (n = 69)    .89          .74        .74        .74           .59<br />
SMA Instit. (n = 122)   .94          .86        .87        .81           .65<br />
All Instit. (n = 318)   .94          .85        .81        .86           .63<br />
Review of the analyses indicated that the overall alpha for the entire instrument was quite<br />
high: .94 for both LMA and SMA institutions and .89 for SMW. The reliability of the instrument<br />
as a whole was highest with all items included. Review of scale reliabilities by institution<br />
indicated that reliability for the Learning Opportunities Scale might be slightly higher with the<br />
item “I felt overwhelmed by the demands of my role in this environment” removed from the<br />
scale. Removal of this item increases the scale reliability by .02 to .03 for all<br />
institutions. Otherwise, reliability alphas were highest with all items included in each of the four<br />
scales. There were only two items for which correlations of individual items with the scale were<br />
below .35 for more than one institution: (1) the previously mentioned “felt overwhelmed” item<br />
in the Learning Opportunities Scale, and (2) the “Competition with other health professional<br />
students negatively impacted the student experience” item from the Department Atmosphere<br />
scale.<br />
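The item-scale correlation screen mentioned above is conventionally computed as a corrected item-total correlation (each item against the sum of the other items in its scale). The sketch below uses synthetic data, with the third item playing the role of a weakly related item falling under the .35 cutoff.

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the sum of the *other* items in the
    scale (rows = respondents, columns = items)."""
    items = np.asarray(items, dtype=float)
    correlations = []
    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]   # scale total excluding item j
        correlations.append(np.corrcoef(items[:, j], rest)[0, 1])
    return correlations

# Five respondents, three items; the third item is unrelated to the others,
# so its corrected item-total correlation falls below the .35 screen.
data = [[1, 1, 5], [2, 2, 1], [3, 3, 4], [4, 4, 2], [5, 5, 3]]
r = corrected_item_total(data)
```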
Test-Retest Reliability. In order to determine test-retest reliability for the instrument,<br />
pretest and end of semester scale scores were compared for the sub-sample of SMW and SMA<br />
students who evaluated the same clinical site on the pretest and end of semester inventories<br />
(n = 66). The lack of student identification of their completed inventories resulted in the loss of<br />
20 cases from the test-retest reliability determination. Significant correlations were found<br />
between the pretest and end of semester scores for each inventory scale (p < .001) for those<br />
students who evaluated the same clinical sites on both inventories (n = 46). Correlations ranged<br />
from .50 to .61 (see Table 7 for all correlation results).<br />
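The matching-and-correlation step can be sketched as follows. The self-generated IDs and scale scores are invented, and SciPy's `pearsonr` stands in for whatever correlation routine was actually used; unmatched cases (for example, unidentified inventories) simply drop out of the pairing, as in the study.

```python
# Sketch of test-retest pairing: match pretest and end-of-semester records on
# the self-generated student ID, then correlate the paired scale scores.
from scipy.stats import pearsonr

pretest = {"KA23": 12, "MA45": 15, "JO67": 18, "SU11": 14, "EL90": 20}
end_sem = {"KA23": 13, "MA45": 14, "JO67": 19, "EL90": 18, "XX00": 16}

matched = sorted(set(pretest) & set(end_sem))   # SU11 and XX00 are lost
r, p = pearsonr([pretest[s] for s in matched],
                [end_sem[s] for s in matched])
```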
In addition, a sub-sample of LMA students completed two SECEE inventories,<br />
evaluating different clinical sites on the pretest and end of semester inventory. According to<br />
Richards (1996), comparison of the same participants’ evaluations of discrete environments<br />
assists in reducing the confounding person-environment sources of variance. If differences in<br />
individual student evaluations of discrete sites are found, the differences are most likely due to<br />
the discrimination ability of the instrument. For student participants evaluating different clinical<br />
sites on the pretest and end of semester inventories (n = 60), no significant correlations were<br />
found between pretest and end of semester scale scores (p > .05). This result indicates that there<br />
was no relationship between the evaluations of different clinical sites, when the same students<br />
evaluated two different sites. Correlations between inventory scale scores ranged from -.01 to<br />
.20. These correlations also appear in Table 7.<br />
Differences in Scale Scores According to Student Academic Level. As the SECEE<br />
Inventory was able to detect differences between institutions in terms of student evaluation of<br />
sites, the researcher wondered whether it might also detect differences in student<br />
evaluations of sites according to student academic level (sophomore, junior, senior). Thus,<br />
comparative ANOVAs were computed for differences in scale scores (dependent variable) by<br />
student level (independent variable), after sorting data by institution. ANOVA results are<br />
presented in Table 8 and significance levels for post hoc tests are shown in Table 9. Cell means<br />
and standard deviations for the scale scores according to student level can be found in Table 10.<br />
The Dunnett T3 method was used for all post hoc tests, as there was violation of the<br />
homogeneity of variance assumption for some of the ANOVAs (violations will be identified in<br />
discussion of each analysis). Results indicated that for institution SMW, differences between<br />
student levels were significant only for the Department Atmosphere scale [F (2, 66) = 4.70, p <<br />
.05]. However, according to the Levene Statistic, the assumption of homogeneity of variance<br />
was violated for the test [F (2, 66) = 3.93, p = .025]. Multiple comparisons<br />
Table 7<br />
Scale Score Correlations Between Pretest and End of Semester Inventories<br />
                   Communication /   Learning        Learning   Department<br />
                   Feedback          Opportunities   Support    Atmosphere<br />
Same Sites         r = .55*          r = .52*        r = .50*   r = .61*<br />
Evaluated          n = 46            n = 46          n = 45     n = 46<br />
Different Sites    r = .17           r = -.01        r = .20    r = .19<br />
Evaluated          n = 60            n = 60          n = 58     n = 60<br />
Note. * Indicates correlation is significant at p < .001<br />
Table 8<br />
ANOVA Results, Student Level by Scale Score<br />
df 1 df 2 F ratio p<br />
LMA Instit. Comm. / Feedback 2 123 5.95 .003<br />
Learning Opportunities 2 123 3.89 .023<br />
Learning Support 2 123 3.50 .033<br />
Dept. Atmosphere 2 123 5.00 .008*<br />
SMW Instit. Comm. / Feedback 2 66 0.98 .38<br />
Learning Opportunities 2 66 1.03 .36<br />
Learning Support 2 66 1.08 .35<br />
Dept. Atmosphere 2 66 4.70 .012*<br />
SMA Instit. Comm. / Feedback 2 117 3.85 .024<br />
Learning Opportunities 2 117 2.94 .057<br />
Learning Support 2 117 6.68 .002*<br />
Dept. Atmosphere 2 117 5.69 .004<br />
* Note: violation of Homogeneity of Variance assumption according to Levene’s<br />
Statistic at p < .05<br />
Table 9<br />
Significance Levels for Student Comparisons According to Scale Scores<br />
                          Comm /     Learning      Learning   Dept.<br />
                          Feedback   Opportunity   Support    Atmosphere<br />
LMA Instit.  Jr vs Soph   --         --            --         --<br />
             Jr vs Sr     .004       .023          .045       .011<br />
             Soph vs Sr   --         --            --         .021<br />
SMW Instit.  Jr vs Soph   --         --            --         .003<br />
             Jr vs Sr     --         --            --         --<br />
             Soph vs Sr   --         --            --         --<br />
SMA Instit.  Jr vs Soph   .013       --            .002       .023<br />
             Jr vs Sr     --         --            --         --<br />
             Soph vs Sr   --         --            .002       .002<br />
Note. Dashes represent non-significant comparisons (p > .05).<br />
Table 10<br />
Scale Score Means and Standard Deviations by Student Level<br />
                                    Comm. /        Learning       Learning       Department<br />
                                    Feedback       Opportunity    Support        Atmosphere<br />
LMA Instit.   Sophomore (n = 49)    15.33 (4.48)   17.05 (5.47)   17.07 (5.47)   13.62 (3.30)<br />
              Junior (n = 46)       17.26 (6.51)   17.83 (6.66)   18.29 (6.64)   14.07 (4.05)<br />
              Senior (n = 31)       12.94 (4.90)   14.17 (4.91)   14.95 (5.02)   11.54 (3.26)<br />
SMW Instit.   Sophomore (n = 25)    13.44 (4.17)   14.06 (4.00)   13.72 (4.32)   10.34 (2.45)<br />
              Junior (n = 16)       14.81 (3.27)   15.25 (3.44)   15.50 (2.63)   13.24 (2.48)<br />
              Senior (n = 28)       13.17 (3.88)   15.60 (4.29)   14.61 (3.95)   11.48 (3.54)<br />
SMA Instit.   Sophomore (n = 25)    11.16 (3.41)   12.30 (3.27)   11.85 (3.38)   10.63 (2.96)<br />
              Junior (n = 32)       14.42 (4.85)   14.73 (5.04)   16.22 (5.61)   12.90 (3.20)<br />
              Senior (n = 47)       12.57 (4.65)   15.00 (5.17)   14.97 (4.43)   13.33 (3.69)<br />
Note. Values are M (SD).<br />
according to the Dunnett T3 test demonstrated that sophomore students rated the Department<br />
Atmosphere of their sites (M = 10.34) significantly more positively (p < .01) than juniors did (M<br />
= 13.24).<br />
At LMA institution, there were significant differences in student evaluations of the<br />
clinical learning environment according to student level for all four scales. Differences between<br />
student levels on the Communication / Feedback scale were significant F (2, 123) = 5.95, p <<br />
.01, with senior students at LMA institution having rated their clinical environments significantly<br />
more positively (M = 12.94) than junior students did (M = 17.26), p < .01. Student level also<br />
accounted for significant differences in student perception of clinical site Learning Opportunities<br />
F (2, 123) = 3.89, p < .05. Senior students rated Learning Opportunities more positively (M =<br />
14.17) than junior students did (M = 17.83), p < .05. ANOVA results also indicated differences<br />
between student academic levels for the Learning Support scale F (2, 123) = 3.50, p < .05.<br />
Again, seniors evaluated their sites more positively (M = 14.95) than did juniors (M = 18.29), p<br />
< .05. Differences between student levels were also identified for the Department Atmosphere<br />
scale F (2, 123) = 5.00, p < .01. Senior students at LMA rated the Department Atmosphere of<br />
their clinical education environments (M = 11.54) more positively than did both juniors (M =<br />
14.07) and sophomores (M = 13.62). The homogeneity of variance assumption was violated only for the<br />
Department Atmosphere Scale ANOVA [F (2,123) = 3.35, p = .038].<br />
Differences on three of the four scale scores were found for SMA students. Differences<br />
between academic levels of students were found for the Communication and Feedback scale F<br />
(2, 117) = 3.85, p < .05, with sophomore SMA students having evaluated their clinical<br />
environments more positively (M = 11.16) than did junior students (M = 14.42). Differences<br />
between student levels were also found for the Learning Support scale F (2, 117) = 6.68, p < .01.<br />
Sophomore SMA students rated the Learning Support at their clinical sites significantly more<br />
positively (M = 11.85) than did both junior (M = 16.22) and senior students (M = 14.97), p < .01.<br />
Students of different academic levels also evaluated the<br />
Department Atmosphere of their learning environments differently F (2, 117) = 5.69, p < .01,<br />
with sophomore students having evaluated their sites more positively (M = 10.63) than both<br />
junior students (M = 12.90) and senior students (M = 13.33), p < .01. The Levene Statistic for<br />
homogeneity of variance was significant only for the Learning Support scale [F (2, 117) = 5.71,<br />
p = .004].<br />
Differences in Scale Scores According to Clinical Site Groups. One means to support the<br />
validity of an environmental measure is to compare student responses to the inventory across a<br />
variety of settings, in order to determine whether the instrument is able to distinguish between<br />
distinct environmental settings (Richards, 1996). ANOVA testing was used to investigate<br />
whether SECEE inventory scale scores differed according to the clinical site being evaluated.<br />
Prior to this analysis, it was necessary to group some clinical<br />
sites together, as several sites were evaluated by only one or two students. Site grouping was<br />
based upon the type of nursing care provided at the site, and the dean or faculty at the schools of<br />
nursing at each institution confirmed appropriateness of the grouping. Site grouping reduced the<br />
number of clinical groups from 56 to 25. The degree of grouping required was not similar across<br />
institutions. Only one site identified by SMW students was grouped with another site within the<br />
same hospital. Six sites evaluated by SMA students were grouped together with other sites, with<br />
the grouped sites representing primarily specialty care units (intensive care units, labor and<br />
delivery, and other units with only one respondent). In contrast, 24 identified clinical sites at<br />
LMA were consolidated into 10 site groups. The grouped sites represented community<br />
healthcare programs, school nursing, clinic or physician offices, home health / hospice, and<br />
specialty care units in the hospital environment. Even after grouping, the cell sizes were not<br />
equivalent, with the number of respondents in each site group varying from 4 to 26. The median<br />
number of respondents per site group was 8.<br />
Initial analysis consisted of four ANOVAs with site group as the independent variable<br />
and scale score as the dependent variable. Although all F ratios were significant at p < .01,<br />
indicating that scale scores differed according to clinical group, the homogeneity of variance<br />
assumption was violated (per the Levene statistic) for tests on two of the four scales<br />
(Communication / Feedback and Learning Opportunities). Therefore, institution was added as a<br />
grouping variable, and separate ANOVAs were run for each institution, resulting in 12 separate analyses.<br />
The MANOVA procedure was not used for testing as the dependent variables (scale scores) were<br />
found to be fairly highly correlated. The Dunnett T3 test was used for all post hoc analyses, as<br />
some ANOVA tests violated the Homogeneity of Variance assumption. ANOVA statistics are<br />
presented in Table 11 and scale means and standard deviations by clinical site group appear in<br />
Table 12.<br />
Results indicated that there were significant differences in scale scores according to<br />
clinical site groups. For the SMW institution, ANOVA results were significant for all but one<br />
scale score (Learning Opportunities). Significant differences between student evaluations of<br />
clinical sites were found for the Communication and Feedback scale F (6, 54) = 3.41, p < .01.<br />
Post hoc comparisons of SMW data indicated that students rated the Communication / Feedback<br />
of clinical site group 5 more positively (M = 9.00) than site group 1 (M = 15.89), site group 2 (M<br />
= 15.20), and site group 7 (M = 15.29), p < .05. Differences between site groups were also found<br />
for the Learning Support scale F (6, 54) = 3.46, p < .01. SMW students rated the Learning<br />
Support at site group 5 more positively (M = 9.86) than site groups 1 (M = 16.92), 2 (M =<br />
15.60), and 7 (M = 15.51), p < .05. ANOVA also detected differences in scale scores<br />
according to clinical site group for the Department Atmosphere scale F (6, 54) = 4.83, p < .01.<br />
Again, SMW students at site group 5 rated the Department Atmosphere more positively (M =<br />
8.43) than students at site 2 (M = 12.78). In addition, students at site 4 (M = 9.58) perceived the<br />
Department Atmosphere to be more positive than students at site 2 (M = 12.78), p < .05. Levels<br />
of significance for the differences found between site groups at SMW appear in Table 13.<br />
At SMA, two of the four scales were found to have significant differences according to site<br />
groups. Students evaluated the Learning Support F (11, 83) = 1.92, p < .05 and the Department<br />
Atmosphere F (11, 83) = 1.95, p < .05 differently according to clinical site group. However,<br />
multiple comparisons using the Dunnett T3 test did not reveal significant differences between<br />
individual site groups (p > .05). LMA institution data differed from the other institutions in that<br />
no significant differences between clinical site groups were found for any of the four scales.<br />
Table 11<br />
ANOVA Statistics, Clinical Site Group by Scale Score<br />
df 1 df 2 F ratio p<br />
LMA Instit. Comm/ Feedback 10 73 1.21 .30 *<br />
Learning Opportunity 10 73 1.00 .46<br />
Learning Support 10 73 1.44 .18<br />
Department Atmosphere 10 73 1.87 .064<br />
SMW Instit. Comm/ Feedback 6 54 3.41 .006<br />
Learning Opportunity 6 54 1.74 .13<br />
Learning Support 6 54 3.46 .006<br />
Department Atmosphere 6 54 4.83 .001<br />
SMA Instit. Comm/ Feedback 11 83 1.44 .17<br />
Learning Opportunity 11 83 1.77 .073 *<br />
Learning Support 11 83 1.92 .048<br />
Department Atmosphere 11 83 1.95 .045<br />
* Note: violation of Homogeneity of Variance assumption according to Levene’s Statistic at p <<br />
.05<br />
Table 12<br />
Scale Score Means and Standard Deviations by Clinical Site Groups<br />
Note: Cell entries are M (SD).<br />
LMA Institution<br />
Site Group (n)   Comm. / Feedback   Learning Opportunity   Learning Support   Dept. Atmosphere<br />
11 (n = 11)      14.45 (3.96)       13.27 (3.35)           15.35 (3.72)       12.00 (3.26)<br />
12 (n = 5)       12.40 (1.82)       13.60 (3.51)           12.40 (2.41)       10.92 (2.83)<br />
13 (n = 6)       17.75 (2.72)       15.83 (2.48)           20.00 (1.90)       15.00 (1.41)<br />
14 (n = 16)      16.95 (6.99)       17.39 (6.25)           18.53 (6.85)       14.72 (4.20)<br />
17 (n = 5)       19.00 (6.36)       19.80 (7.19)           19.40 (6.34)       14.20 (4.20)<br />
19 (n = 5)       12.60 (2.70)       12.40 (3.44)           16.21 (5.89)       9.40 (0.89)<br />
33 (n = 4)       16.29 (5.12)       16.07 (4.66)           14.29 (5.19)       12.25 (0.96)<br />
39 (n = 4)       14.00 (6.98)       18.50 (10.47)          13.25 (7.54)       9.25 (3.95)<br />
51 (n = 9)       15.15 (7.99)       18.11 (7.93)           17.21 (6.19)       12.98 (4.78)<br />
54 (n = 9)       12.33 (3.08)       16.11 (7.52)           14.47 (4.18)       11.89 (4.29)<br />
SMW Institution<br />
Site Group (n)   Comm. / Feedback   Learning Opportunity   Learning Support   Dept. Atmosphere<br />
1 (n = 9)        15.89 (4.20)       16.22 (5.31)           16.92 (3.27)       12.78 (4.24)<br />
2 (n = 10)       15.20 (3.85)       14.80 (3.49)           15.60 (2.41)       12.78 (2.12)<br />
3 (n = 6)        14.17 (2.14)       16.00 (3.52)           15.33 (3.20)       14.00 (3.03)<br />
4 (n = 17)       12.34 (3.73)       13.59 (4.37)           13.15 (4.87)       9.58 (2.42)<br />
5 (n = 7)        9.00 (3.21)        12.14 (4.22)           9.86 (2.61)        8.43 (2.37)<br />
7 (n = 7)        15.29 (2.98)       17.67 (2.35)           15.51 (2.54)       13.17 (2.50)<br />
10 (n = 5)       14.25 (4.51)       13.50 (2.32)           14.07 (2.09)       12.32 (1.95)<br />
SMA Institution<br />
Site Group (n)   Comm. / Feedback   Learning Opportunity   Learning Support   Dept. Atmosphere<br />
14 (n = 7)       17.71 (5.85)       20.14 (3.72)           19.29 (4.75)       16.20 (3.87)<br />
17 (n = 6)       12.00 (3.35)       14.00 (4.34)           15.67 (1.97)       13.33 (3.98)<br />
19 (n = 17)      11.97 (3.99)       12.29 (4.62)           14.55 (5.89)       11.00 (3.30)<br />
29 (n = 16)      12.81 (4.12)       14.50 (5.56)           14.22 (4.57)       13.81 (3.64)<br />
33 (n = 4)       11.50 (5.92)       14.00 (7.44)           13.75 (6.45)       9.45 (2.97)<br />
74 (n = 12)      12.17 (3.90)       14.00 (2.66)           13.85 (3.95)       11.52 (3.21)<br />
75 (n = 9)       15.11 (6.70)       16.21 (4.55)           18.67 (5.43)       14.22 (2.44)<br />
77 (n = 5)       14.20 (6.38)       15.60 (9.10)           14.20 (5.54)       13.64 (5.60)<br />
79 (n = 5)       12.03 (5.40)       11.20 (3.27)           16.00 (3.87)       12.80 (4.44)<br />
80 (n = 5)       9.60 (3.43)        12.60 (2.88)           11.00 (2.74)       12.16 (2.41)<br />
82 (n = 5)       13.40 (2.70)       14.80 (3.70)           11.40 (2.07)       12.00 (2.65)<br />
84 (n = 4)       10.00 (2.94)       11.75 (3.10)           12.25 (3.86)       10.50 (3.70)
Table 13<br />
Significance Levels for Post Hoc Differences between Clinical Site Groups at SMW<br />
Comm. / Feedback Learning Support Dept. Atmosphere<br />
SMW Site 5 vs. 1    .007    .004    .042<br />
SMW Site 5 vs. 2    .018    .026    .035<br />
SMW Site 5 vs. 3    --      --      .010<br />
SMW Site 5 vs. 7    .032    --      .034<br />
Analysis of Narrative Inventory Data<br />
As noted in the Methods section, students were asked to respond in narrative form to<br />
three SECEE Inventory questions. They were asked to identify aspects of the clinical setting that<br />
promoted learning, aspects of the clinical setting that hindered learning, and whether there was<br />
anything else they would like to mention on the inventory. Prior to analysis of these data,<br />
student comments were written on a separate card, along with the student’s demographic data.<br />
Each comment was noted individually. For example, if a student noted three separate issues or<br />
comments in response to one question, three data cards were created.<br />
Students made a total of 358 comments about aspects of the clinical environment that<br />
promoted learning, 226 comments about aspects of the environment that hindered learning, and<br />
60 comments in response to the “anything else you would like to mention” item. An interesting<br />
but somewhat expected finding was that those students who completed both the pretest and the<br />
end of semester inventory made fewer comments overall upon repeat exposure to the instrument.<br />
The overall numbers of student comments on the pretest and end of semester inventories, for the<br />
sub-population of students completing both inventories are presented in Table 14.<br />
Table 14<br />
Comparison of Numbers of Student Narrative Comments on Pretest and End of Semester<br />
Inventories<br />
Institution   Number of Student Comments on Pretest   Number of Student Comments on Final Inventory<br />
LMA           130                                     87<br />
SMW           44                                      20<br />
SMA           96                                      36
Analysis of the narrative student comments to the two specific clinical environment<br />
questions revealed that relatively few were too broad in nature to be helpful in determining the<br />
adequacy of the clinical learning environment (i.e., great experience, good instructor, disliked the<br />
unit). The majority of student comments (both positive and negative) directly reflected issues<br />
addressed by the forced-choice portion of the SECEE Inventory, lending support to the validity<br />
of inventory content. The frequencies of student comments that were broad in nature, the<br />
comments that had already been addressed by items contained in the inventory, and comments<br />
that raised issues which were not addressed by the forced-choice portion of the instrument are<br />
noted in Table 15.<br />
Student Narrative Responses Reflecting SECEE Item Content. Of the student responses<br />
mirroring the forced-choice portion of the inventory about environmental conditions that<br />
helped learning, the distribution across issues or items was not even. Three statements had<br />
frequencies of 30 or more and two additional statements had frequencies of 19. Of the five most<br />
frequent statements about positive aspects of the learning environment, two were related to<br />
nursing staff providing learning support, two involved the availability of learning opportunities,<br />
and one identified the benefit of independence and autonomy. The frequencies of student<br />
comments about aspects of the clinical setting that helped learning, which were directly related<br />
to SECEE items are listed in Table 16.<br />
Table 15<br />
Categorization of Student Responses to SECEE Open-Ended Questions<br />
                                           Aspects that      Aspects that        Additional<br />
                                           Helped Learning   Hindered Learning   Comments<br />
Broad, non-specific statements             52                2                   8<br />
Statements reflecting item content         199               116                 4<br />
Statements not addressed by item content   107               108                 56<br />
Total statements                           358               226                 60
Student comments reflecting scaled item content about environmental conditions that hindered<br />
learning were also unequally distributed. Of the four statements with frequency over 10, two<br />
related to staff being unwilling to help or unreceptive to students (Learning Support), and two<br />
identified a lack of learning opportunities related to “hands on” learning and variety of patients.<br />
A listing of the most frequent student comments about aspects of the environment that hindered<br />
learning appears in Table 17.<br />
Only six items on the scaled portion of the inventory were not specifically addressed by<br />
students in their narrative comments: numbers 4, 11, 22, 23, 28, and 29. Item numbers 23 (“The<br />
environment provided an atmosphere conducive to learning”) and 29 (“I was successful in<br />
meeting most of my learning goals”) were fairly broad in nature. In the open-ended questions,<br />
students were asked to identify aspects of the clinical setting that helped and hindered learning,<br />
and so would be expected to identify specific characteristics of the environment, rather than<br />
broad statements.<br />
Item numbers 22 and 28 addressed students helping each other in the clinical setting and<br />
the instructor encouraging students to help each other and share experiences. It is interesting that<br />
respondents made very few comments about fellow students at the clinical site. Only two<br />
students responded “other students” with respect to the question about aspects of the environment<br />
that helped learning. No students identified feeling overwhelmed by the demands of their role<br />
(item 4) or the presence or absence of clear communication of patient care responsibilities (item<br />
11) as impacting their learning in the clinical environment in response to the open ended<br />
questions.<br />
Table 16<br />
Frequencies of Student Responses to the Request to Identify Aspects of the Clinical Setting that<br />
Helped Learning—Issues Addressed by Scaled SECEE Items<br />
Statement                                                              Response Frequency<br />
Staff were helpful                                                     36<br />
“Hands on” and skill opportunities                                     34<br />
Variety of patients and care needs                                     31<br />
Staff took time to teach and explain                                   19<br />
Independence / autonomy                                                19<br />
Instructor available for help and questions                            8<br />
Staff / preceptor open to working with students                        8<br />
Nurses informed us of learning opportunities                           7<br />
Instructor taught and encouraged students                              6<br />
Resources / equipment available                                        5<br />
Instructor encouraged us to take advantage of learning opportunities   3<br />
Instructor encouraged independence                                     3
Table 17<br />
Frequencies of Student Responses to Request to Identify Aspects of the Clinical Setting that<br />
Hindered Learning—Issues Addressed by Scaled SECEE Items<br />
Statement                                        Response Frequency<br />
Staff unwilling or unavailable to help           27<br />
Staff not receptive to students (bad attitude)   25<br />
Lack of patient variety                          16<br />
Lack of “hands on”                               12<br />
Instructor unavailable                           9<br />
Staff refused to help us learn                   6<br />
Nurses too busy                                  5<br />
Unprofessional staff                             4<br />
Unprofessional instructor                        3<br />
Lack of help (general comment)                   3
Student Narrative Responses Not Addressed by Forced-choice Item Content. Of the<br />
issues students raised in responding to the question “What aspects of this clinical setting<br />
helped/promoted your learning” that were not addressed by the forced-choice portion of the<br />
inventory, none were mentioned by more than six students. Student comments that had<br />
frequencies of three or greater are listed in Table 18. The remainder of the comments had<br />
frequencies of two or less. A few of the statements identified particular individuals who were<br />
helpful in learning or specific situations. Others were isolated comments about the particular<br />
learning environment.<br />
Student comments about aspects of the learning environment hindering learning that were<br />
not addressed specifically by the inventory were also fairly spread out among a variety of issues,<br />
with one exception. Nineteen students identified a too high student to instructor ratio as<br />
hindering their learning, a much higher frequency than any other student comments. Student<br />
comments with a frequency of three or higher about aspects of the environment that hindered<br />
learning are presented in Table 19.<br />
Generally, student comments provided support for the content validity of the forced-choice<br />
portion of the SECEE inventory. In addition, the narrative data provided valuable<br />
information about what may need to be included in future revisions of the SECEE inventory.<br />
Table 18<br />
Frequencies of Student Responses to Request to Identify Aspects of the Clinical Setting that<br />
Helped Learning—Issues not Addressed by Scaled SECEE Items<br />
Statement Frequency<br />
Opportunities for development of a particular skill 6<br />
Teaching hospital environment 5<br />
Knowledge / competence of instructor 5<br />
One-on-one with client 4<br />
Instructor supportive of students 4<br />
Observation Opportunities 3<br />
Not looked down upon for asking questions 3<br />
Learned time management 3<br />
Table 19<br />
Frequencies of Student Responses to Request to Identify Aspects of the Clinical Setting that<br />
Hindered Learning—Issues not Addressed by Scaled SECEE Items<br />
Statement Frequency<br />
Too high student / instructor ratio 19<br />
Not enough clinical time 7<br />
Too many people (staff, students) on unit 6<br />
Didn’t feel comfortable asking questions 5<br />
Treated like beginning students or aides 4<br />
Unit/floor too busy 4<br />
Long drive to clinical 3<br />
Not seeing own clients 3<br />
Not connecting with clients on home visits 3<br />
Must rely only on self for learning 3<br />
Nurses ignored us 3<br />
Chapter 5<br />
Discussion and Conclusions<br />
The purpose of the described research was to continue development and testing of the<br />
Student Evaluation of Clinical Education Environment (SECEE) inventory. This inventory was<br />
developed in response to the identified absence of any instrument that concisely and<br />
comprehensively measures nursing student perceptions of their clinical learning environment.<br />
The SECEE inventory appears to be able to accurately and reliably reflect student perceptions of<br />
their learning environments, although minor revisions may make the instrument even more<br />
comprehensive and reliable. In the discussion section the research findings are related to the<br />
questions posed at the beginning of the study.<br />
Research Question 1<br />
Is it possible to revise the original SECEE instrument to include all issues critical to the nursing<br />
student clinical education environment, while maintaining brevity and practicality of use by<br />
nursing educators?<br />
Brevity and Practicality of Use. As described in the review of literature, the original<br />
SECEE instrument was revised extensively both to increase the scope of measurement of the<br />
learning environment, and to reflect findings of current literature and analysis results from initial<br />
administration of the inventory. The “revised” SECEE inventory is brief in comparison to many<br />
learning environment inventories (Fraser, 1991). It also appears to be practical for use, as data<br />
for this study were collected with ease by both the researcher and nursing faculty at three<br />
different institutions. After being given brief instructions as to administration of the instrument<br />
and reading of the “Statement to Participants”, faculty had no difficulties in instrument<br />
administration and collection at any of the institutions.<br />
Content Validity. The SECEE inventory addresses all clinical learning environment<br />
issues that were identified in the Table of Specifications (Appendix D), indicating that it reflects<br />
the critical aspects of the learning environment identified by nursing students, faculty, and the<br />
literature, as well as the reviewed samples of Nursing School - Agency contracts. Although<br />
administrative and faculty issues raised by the above-named sources also appear in the Table of<br />
Specifications, they are not addressed by the inventory, as the purpose was to develop an<br />
instrument that focuses on measuring student perceptions of their clinical learning environments.<br />
Inclusion of all clinical learning environment issues identified in the Table of Specifications<br />
within inventory items supports the content validity of the SECEE instrument.<br />
Content validity of the scaled portion of the inventory is also supported by student<br />
narrative responses to the open-ended inventory items (requests to identify aspects of the clinical<br />
setting that helped and hindered learning). The forced-choice portion of the inventory had<br />
addressed the majority of student comments. Of the issues students raised that were not<br />
addressed by the current items, a few may be of benefit to include in future instrument revisions.<br />
An item about student/faculty ratio may be beneficial to include, as 19 students identified that<br />
too high a student/faculty ratio hindered their learning. Narrative data would also support adding<br />
a scaled item addressing student comfort with asking questions of both instructor and staff, as<br />
five students stated they felt uncomfortable asking questions in the clinical environment, and<br />
three students stated that feeling comfortable asking questions helped their learning. Several<br />
students raised another issue in response to the open-ended inventory items. Four students noted<br />
that one-on-one interactions with patients were beneficial, and three students cited not seeing<br />
their own clients as a hindrance to learning. Adding an item addressing interaction with patients<br />
may also be helpful during instrument revision.<br />
It may be beneficial to split item number 18 (“I felt supported in attempts at applying new<br />
knowledge or learning new skills”) into two items--one addressing instructor support and one<br />
addressing support from staff. Four students specifically identified instructor support as a<br />
beneficial aspect of the learning environment, and three students noted that relying on<br />
themselves only for learning was a hindrance. Another issue raised by students was the<br />
perception of not having enough clinical time during rotations. Seven students identified<br />
insufficient clinical time as a hindrance to learning. This may be another issue worth addressing<br />
in inventory revision.<br />
As one of the goals of SECEE instrument development was to have an inventory that was<br />
brief in nature, it would not be possible to address all issues raised by students in the narrative<br />
comments within the forced-choice portion of the inventory. Some issues were raised by only<br />
two or three students, while others were related only to specific clinical settings. Although these<br />
comments would be helpful for clinical faculty supervising students at the particular sites, they<br />
do not seem appropriate to address individually in the scaled portion of the inventory at this time.<br />
If all of the above-mentioned issues were addressed within the forced-choice portion of<br />
the SECEE inventory, the result would be the addition of 6 questions to the 29 currently<br />
included in the instrument. Further testing of the instrument after these additions<br />
would confirm whether they indeed add valuable information about student perceptions of the<br />
clinical education environment.<br />
Research Question 2<br />
Is the revised SECEE inventory a reliable instrument for measuring student perceptions of their<br />
clinical (applied) learning environment?<br />
Inventory Reliability. Reliability of the SECEE inventory was assessed through a<br />
variety of methods. Overall instrument reliability was calculated for each institution's nursing<br />
student population. Institutional data were analyzed separately, as significant differences were<br />
found between institution scale score means. Internal consistency for the entire instrument was<br />
high, with the coefficient alpha for LMA and SMA institutions being .94 and the alpha for SMW<br />
institution being .89. Reliability findings indicate that students responded consistently to the<br />
entire instrument (Anastasi, 1988).<br />
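The coefficient alpha values reported here can be computed directly from an item-score matrix. The sketch below implements the standard formula; the five-respondent score matrix is invented for illustration, not study data.<br />

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)       # per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses from five students to four Likert-type items
scores = [
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because alpha rises as inter-item covariance grows relative to item variance, consistent responding across an entire instrument (as found here) yields values near 1.<br />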
Scale Reliability. Reliability for each of the four scale scores was determined using<br />
correlation coefficients. Using data from all institutions together, the coefficient alphas for the<br />
Communication / Feedback and Learning Opportunities scales were .85 and .86,<br />
respectively. The alpha for the Learning Support scale was .81 and the Department Atmosphere<br />
scale alpha was .63. Reliabilities appear acceptably high for all scales except the Department<br />
Atmosphere scale. This scale has the fewest items (six) and covers a fairly broad range of issues.<br />
A review of item placement for thematic content and the correlation of each item with<br />
other items within a scale led the researcher to consider changing the placement of four items<br />
within the scales. Item 4 (“My responsibilities were clearly communicated”) was temporarily<br />
moved from the Communication / Feedback scale to the Department Atmosphere scale. Item 11<br />
("felt overwhelmed by the responsibilities of my role”) was moved from the Learning<br />
Opportunities to the Learning Support scale. Item 21 (“competing with other students for skills<br />
and resources had a negative impact”) was repositioned to the Learning Opportunities scale from<br />
the Department Atmosphere scale, and item 23 (“site provided an atmosphere conducive to<br />
learning”) was moved from the Learning Opportunities scale to the Department Atmosphere<br />
scale. This temporary repositioning of items left each scale with six to nine items.<br />
After changing the placement of these four items, correlation coefficients were<br />
recalculated. The result was standardized alpha levels above .75 for all scales (see Table 20<br />
for revised scale correlation coefficients). The revision of item placement increased the<br />
Department Atmosphere reliability by .14. However, the change resulted in a decrease of .05 in<br />
the Learning Opportunities scale reliability. The Learning Support and Communication /<br />
Feedback scales were almost unchanged, at .01 and .02 lower, respectively, than with the<br />
original item placement. Overall, reliabilities were higher with the revised item placements. It<br />
may be beneficial to consider using the revised item placements for further investigations using<br />
the SECEE inventory. In addition, expanding the inventory to include some of the issues that<br />
students raised in discussion of the aspects of the clinical environment that helped and hindered<br />
learning may further improve reliability of the four scales.<br />
The additional reliability measure of test-retest scale score correlations was calculated for a<br />
sample of 46 students from SMA and SMW institutions. Test-retest correlations were found to<br />
be significant at p < .001. Although not as high as might be expected (r values varied from .50 to<br />
.61 for the four scales), the reader must bear in mind that the student respondents had<br />
experienced several additional clinical sessions at the clinical site between the pretest and end of<br />
semester inventory. These additional experiences may have significantly impacted the<br />
Table 20<br />
Reliability Coefficients for Scales by Institution (Revised Item Placement)<br />
                         Comm. /    Learning      Learning   Department<br />
                         Feedback   Opportunity   Support    Atmosphere<br />
LMA Instit. (n = 126)    .84        .84           .82        .78<br />
SMW Instit. (n = 69)     .75        .72           .73        .67<br />
SMA Instit. (n = 122)    .85        .80           .80        .77<br />
All Instit. (n = 317)    .83        .81           .80        .77
end of semester perception of the learning environment. In addition, test-retest reliability results<br />
are generally somewhat lower than internal consistency measures (Kubiszyn & Borich, 1987).<br />
The investigator believes that the level and significance of the test-retest correlations are<br />
sufficiently high, given the potential effect of additional exposure to the environment between<br />
testing occasions.<br />
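The test-retest figure used above is a Pearson correlation between the same students&#8217; scale scores on the two administrations. A minimal sketch of the computation, using hypothetical scores rather than SECEE data, might look like this:<br />

```python
from scipy.stats import pearsonr

# Hypothetical pretest and end-of-semester scale scores for eight students
pretest = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.6, 4.2]
retest = [3.0, 4.3, 3.1, 3.5, 4.4, 2.7, 3.8, 4.0]

# r estimates test-retest reliability; p tests whether r differs from zero
r, p = pearsonr(pretest, retest)
```

In the study itself, each of the four scales would be correlated separately across the two administrations.<br />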
Research Question 3<br />
Is the revised SECEE inventory able to differentiate between various student populations through<br />
evaluations of their clinical learning environments?<br />
Construct Validity. As was presented in the Results section, significant differences<br />
between student sub-populations were found for all scale scores, using institution as the<br />
independent variable. Generally, students attending the two smaller institutions evaluated their<br />
clinical education environments more positively than did students attending the larger institution.<br />
If the SECEE inventory were focused on evaluating the classroom environment, one might expect<br />
these findings, with larger classrooms being generally noted to be less personal and perhaps less<br />
positively perceived. However, as the clinical setting was the focus of the SECEE instrument<br />
and the clinical groups at the three institutions were similar in size, the differences found were<br />
most likely not due to the impersonal nature of the larger institution. This finding would be<br />
interesting to investigate further, to determine whether some aspect of the clinical environments<br />
at the smaller institutions has a more positive effect on students’ perceptions of the learning<br />
environment.<br />
In addition, differences in scale scores were found between academic levels of students.<br />
At LMA, significant differences were found for all of the scale scores, and there were significant<br />
differences between academic ranks of students in all but one scale score at SMA. However, at<br />
SMW institution, only one scale score (Department Atmosphere) was found to have significant<br />
differences according to student academic level. The ANOVA results indicate that the SECEE<br />
instrument did detect differences between student levels, but that the differences found were not<br />
consistent across institutions. This finding was not unexpected, as the nursing programs at<br />
LMA, SMA, and SMW have institution-specific programs of study and clinical foci for each<br />
academic level. An interesting sidelight is that when differences were detected between levels of<br />
students, the junior students appeared to perceive their clinical education environments less<br />
positively than did the sophomore or senior students (see Table 10 in the Results section). These<br />
findings present another possible topic for research, beyond that of instrument validation. It may<br />
be of interest to find out why students at one level perceive their environments more or less<br />
positively than students at other levels.<br />
Scale score differences were also found between students evaluating different clinical site<br />
groups at the two small institutions (SMA and SMW). The clinical sites at the smaller<br />
institutions required much less grouping prior to ANOVA analysis, and thus clinical groups were<br />
most likely more homogeneous. No scale score differences were found between site groups at<br />
LMA institution; however, the 35 different clinical sites evaluated by LMA students were<br />
collapsed into 10 site groups, prior to analysis. This forced grouping meant that as many as<br />
seven distinct clinical learning sites were grouped together for analysis. Although the sites were<br />
grouped according to the type of nursing care provided at the facility, the nature of the learning<br />
environment (facilities, equipment, staff, and patient population) may have been very different.<br />
Thus, student evaluations may have varied greatly within one site group. It is encouraging to<br />
note that differences between clinical site groups were found for the institutions for which<br />
minimal grouping of sites was required. In the future, it would be beneficial to limit grouping of<br />
clinical sites as much as possible. Those sites with only one or two students could be analyzed<br />
over a period of several semesters, allowing collection of data from several students experiencing<br />
the same clinical learning site, without having to group sites together for analysis.<br />
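The site-group comparisons described above rest on one-way ANOVA, with scale score as the dependent variable and clinical site group as the factor. A sketch with hypothetical scores for three site groups (not the study&#8217;s data):<br />

```python
from scipy.stats import f_oneway

# Hypothetical mean scale scores from students in three clinical site groups
site_a = [4.2, 3.9, 4.5, 4.1, 4.3]
site_b = [3.1, 3.4, 2.9, 3.3, 3.0]
site_c = [3.8, 4.0, 3.6, 3.9, 3.7]

# A small p-value indicates that at least one site group's mean differs
f_stat, p_value = f_oneway(site_a, site_b, site_c)
```

Heavy grouping of heterogeneous sites, as occurred at LMA, would inflate the within-group variance in such an analysis and so reduce the chance of detecting real between-site differences.<br />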
In addition to detecting differences in inventory scale scores between student sub-<br />
populations, the instrument was able to differentiate between the same students’ evaluations of<br />
different clinical sites. For those students at LMA who evaluated two different clinical sites on<br />
the pretest and end of semester inventories (n = 60), there were no significant correlations<br />
between the scale scores for the two inventories. This result indicates that individual students do<br />
evaluate distinct clinical sites differently when using the SECEE inventory.<br />
The above results appear to support construct validity of the SECEE instrument. One<br />
would expect different student populations (e.g., students at different academic levels) to evaluate<br />
clinical learning environments differently. One would also expect individual students to evaluate<br />
distinct clinical sites somewhat differently, and to evaluate the same clinical site similarly on two<br />
separate occasions within a limited time frame. Finally, an instrument would be expected to<br />
identify differences between distinct clinical sites, in terms of student perceptions of the learning<br />
environments. The SECEE inventory was found to meet all of these construct validity<br />
expectations. The lack of differentiation between clinical site groups at LMA may have been<br />
largely due to the extensive grouping of distinct sites having only one or two student<br />
respondents, prior to analysis.<br />
Research Question 4<br />
Are further revisions needed to improve the validity, reliability, or ease of administration and<br />
analysis of the SECEE inventory?<br />
Removal of SECEE Items. Discussion of the previous three research questions has<br />
included several potential areas for improving the comprehensive nature of the inventory—<br />
adding items to improve scale reliability and addressing student-raised clinical learning<br />
environment issues. In addition, the inventory may be improved by removing or rewording a<br />
few items that either had low item-scale correlations or were not supported through student<br />
narrative comments. Items 23 and 29, “the environment provided an atmosphere<br />
conducive to learning” and “I was successful in meeting most of my learning goals”, seemed to<br />
be rather broad in comparison with the specificity of other inventory items, and were not<br />
specifically mentioned in student narrative comments. It may be beneficial to replace these two<br />
items with more specific items such as those identified by students in their narrative comments.<br />
Two additional inventory items that were not directly identified by students in their<br />
narrative comments related to feeling overwhelmed with the responsibilities of the student role,<br />
and the presence of clear communication of the student’s patient care responsibilities. The<br />
“overwhelmed” item did not appear to fit well in any of the scales. Perhaps these items should<br />
be removed from the inventory during the next revision, or reworded into a statement such as “I<br />
felt prepared (or unprepared) for my role in this clinical setting”.<br />
The last two inventory items that were not directly addressed by student narrative responses<br />
related to students helping each other in the clinical environment and to the instructor<br />
encouraging students to help each other and share their learning experiences. Although student<br />
narrative responses did not support inclusion of these two items in the forced-choice portion of<br />
the instrument, the investigator feels that these two items should remain in the inventory (at least<br />
for the near future). Peer interactions and sharing of experiences with other students may<br />
significantly impact the student learning environment (Goodenow, 1992). It may be that faculty<br />
and nursing staff at the sites under study did not emphasize the learning benefit of peer<br />
interactions and support, and thus students did not identify these factors as aspects of the<br />
environment that either helped or hindered learning.<br />
Demonstration of Instrument Validity. Issues related to content validity of the SECEE<br />
inventory were addressed in discussion of Research Question 1 and construct validity issues were<br />
presented in discussion of Research Question 3. Instrument criterion validity is not currently<br />
possible to measure, as there are no other published instruments with the stated purpose of<br />
measuring student perceptions of the clinical education environment. Some nursing course<br />
evaluations include one or two questions related to the clinical environment, but there is no<br />
evidence that these global questions are valid measures of student perceptions of the<br />
environment. It might be possible in the future to conduct observational studies of selected<br />
clinical sites using items addressed by the SECEE inventory as observational components. If<br />
this type of study were done, comparisons could be made between observer data and student-<br />
reported SECEE inventory data. However, observational research would consume a significant<br />
amount of researcher time, as it would require that several students be observed interacting with<br />
the clinical learning environment over a period of several sessions in order to achieve an overall<br />
perspective of the environment.<br />
Recent publications that contain information related to instrument validity (Hambleton,<br />
1996; Goodwin, 1997) state that the consequences of instrument use, including both intended<br />
and unintended effects, must be considered during instrument validation and use. The intended<br />
consequences of use of data resulting from the SECEE instrument are: (a) identification of<br />
clinical sites perceived by students as providing a positive learning experience, and sites<br />
perceived as providing a less positive experience; (b) provision of feedback to clinical agency<br />
facilitators related to student perception of their experiences at the site; and (c) revision,<br />
adaptation, or alteration of clinical sites and instruction practices used for the undergraduate<br />
nursing program. Unintended, but possible consequences of data use include: (a) conflict<br />
between students or faculty and clinical agency staff as a result of negative student evaluation<br />
and (b) discipline of agency staff or nursing faculty who are evaluated unfavorably by students.<br />
It is the investigator’s opinion that the use of data resulting from instrument administration<br />
should not result in negative consequences, if consideration is given to assure constructive<br />
communication of results to students, faculty, and clinical agencies.<br />
Revisions to Improve Data Gathering and Analysis. A question which arose during<br />
revision of the scaled items of the original SECEE inventory related to the benefit of inclusion of<br />
response option number “6” (can’t answer). The option was included in instrument revision,<br />
with the hope that the analysis of data gathered would either confirm that the option provided<br />
helpful information or demonstrate that it was unnecessary. Results of the analysis supported<br />
inclusion of the “can’t answer” response as an option for the forced-choice items. Student<br />
explanations for the “can’t answer” response indicated that being unable to answer was related to<br />
a lack of exposure to the particular issue they were asked to evaluate within the identified clinical<br />
site, rather than to lack of clarity of the inventory item as it was written. Students provided<br />
logical reasons for the inability to answer particular items, such as there being no other students<br />
at the clinical site, or not having been assigned a resource nurse at the site. The inclusion of the<br />
sixth response option provided additional valuable information about student experiences and<br />
also may have averted a higher incidence of items left blank by respondents. There was a very<br />
low incidence of missing data for the scaled inventory items—only 17 missing responses to the<br />
forced-choice items were found for the 319 completed end of semester inventories. It should be<br />
of benefit to continue to provide the “can’t answer” response option in instrument revisions, at<br />
least until trends can be determined for items having frequent responses of “can’t answer” and<br />
for items left blank. Eventually this option might be changed to “no opportunity to experience”<br />
or “not applicable”.<br />
A few issues of concern were noted in review of the data resulting from the demographic<br />
items within the inventory. Student identification of their completed inventories was missing in<br />
35 cases. The missing data may have been the result of student fear of loss of anonymity,<br />
particularly when there were few students evaluating a particular clinical site. The method of<br />
inventory identification (first two letters of mother’s first name and last two digits of the<br />
student’s social security number) was chosen in order to protect student anonymity, so a lower<br />
incidence of missing data was expected. However, the only analysis affected by missing<br />
identification was the correlation of individual student responses to the pretest and end of<br />
semester inventories. This issue may continue to be a problem if further instrument testing<br />
involves comparison of student responses at different points in time.<br />
Missing data and incomplete responses were also a problem for the clinical site<br />
identification item. Twenty percent of the end of semester inventories and seventeen percent of<br />
the pretest inventories were missing clinical site identification. This resulted in the exclusion of<br />
these inventories from the comparative analysis of scale scores according to clinical sites. It was<br />
interesting to note that in the analysis of the original SECEE inventory there was not a problem<br />
with missing clinical site identification. However, the original instrument did not request student<br />
identification of their completed inventories. Perhaps students were hesitant to provide both<br />
inventory identification information and identification of the clinical site they were evaluating<br />
for fear of loss of anonymity, particularly if there were a limited number of students at the<br />
particular site.<br />
Student lack of specificity in identification of clinical sites also presented a problem for<br />
data analysis and interpretation. Although they were requested to identify the particular clinical<br />
area or department within a facility to which they were assigned, 54 student respondents<br />
identified their clinical learning sites with only a hospital name. This incomplete identification<br />
of clinical sites resulted in the grouping of all inventories identified only with the facility name<br />
into one site group for clinical site comparisons, potentially masking differences between distinct<br />
(but unidentified) clinical sites within the facility. Five of the 25 clinical site groups used in the<br />
comparative analysis included an unknown number of clinical sites within a particular hospital,<br />
potentially impacting the ANOVA results, particularly for LMA and SMW institutions. This<br />
incomplete clinical site identification may also have been a purposeful student act to protect<br />
anonymity. It may be that the request for identification of inventories will need to be removed in<br />
order to prompt students to accurately identify clinical sites. Perhaps in the future, separate<br />
student populations should be used to gather test-retest reliability data and to conduct clinical site<br />
comparisons, eliminating the need to gather student identification data for the site comparisons.<br />
Student identification of their inventories would only be necessary for those students completing<br />
evaluations of the same or different clinical sites over a period of time, for the purpose of<br />
comparison of individual student scores.<br />
Student data from one other demographic item were somewhat disconcerting to the<br />
researcher. In response to the item addressing assignment of a student resource person, the<br />
number of students reporting that no resource person was assigned to them was higher than<br />
expected. In addition, same-level students from the same institution did not always respond<br />
consistently to this item, when there most likely should have been consistency in student<br />
experience with a resource person. Students may have been somewhat confused about whether<br />
the item was addressing formal assignment to a resource person, or informal identification of the<br />
staff person who was assigned the same patient(s) as a possible resource for the student. In<br />
future instrument revisions, an additional response option could be included, indicating that no<br />
formal assignment to a preceptor / resource person was made, but that the student worked<br />
informally with the staff assigned to provide care for the patient to whom the student was<br />
assigned.<br />
During data analysis, a minor inconvenience was identified related to layout of the scaled<br />
portion of the inventory. The point values assigned to the Likert agree-to-disagree scale made<br />
data somewhat more difficult to interpret, as a lower point value represented a more positive<br />
perception of the learning environment. Generally, a higher point value represents a more<br />
positive or desired state. In the future, the point values for the scale could be reversed, so that<br />
the “strongly agree” response is assigned a point value of “5” rather than “1”. The result would<br />
be a higher point value indicating a more positive student perception of the learning<br />
environment, which seems a bit more logical.<br />
Four reverse-coded items were included in the forced-choice portion of the inventory, for<br />
the purpose of minimizing response bias. These items did not seem to present a problem to<br />
respondents and mean scores indicate that students responded appropriately to the items (see<br />
Table 2 in the results section). Some degree of reverse coding of items will be maintained in<br />
future inventory revisions.<br />
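Both the proposed reversal of the scoring direction and the scoring of reverse-coded items amount to the same transformation: on a 1-to-5 scale, a response value v becomes 6 &#8722; v. A brief sketch (the response values shown are hypothetical):<br />

```python
def reverse_score(value, scale_max=5):
    """Map a Likert response so that, e.g., 1 becomes 5 and 5 becomes 1.

    Note: a "can't answer" code such as 6 should be treated as missing
    data and excluded, not reversed.
    """
    return (scale_max + 1) - value

responses = [1, 2, 5, 3, 4]
reversed_responses = [reverse_score(v) for v in responses]  # [5, 4, 1, 3, 2]
```

Applying this mapping to all items would make a higher scale score indicate a more positive perception, as proposed above.<br />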
Summary of Proposed Changes in the SECEE Inventory<br />
Following is a listing of the proposed SECEE inventory changes resulting from this research<br />
investigation. All changes have been previously discussed.<br />
1. Addition of items covering student-identified issues<br />
A. Student/Faculty ratio allowed adequate faculty supervision and assistance<br />
B. Student comfort in asking questions of faculty<br />
C. Student comfort in asking questions of staff<br />
D. Benefit of one-to-one interaction with own patients<br />
E. Adequate amount of clinical time during rotation to meet learning goals<br />
2. Split item “I felt supported in attempts at applying new knowledge or learning new skills”<br />
into two items: “Instructor supported me in my attempts at applying knowledge or<br />
learning skills” and “staff supported me in my attempts at applying knowledge or<br />
learning skills”.<br />
3. Combine items “felt overwhelmed by demands of role” and “responsibilities were clearly<br />
communicated” into “I felt adequately prepared for my responsibilities in providing<br />
patient care in this clinical setting”.<br />
4. Remove two items “The environment provided an atmosphere conducive to learning” and<br />
“I was successful in meeting my learning goals in this environment” due to their broad<br />
nature in comparison to other SECEE items.<br />
5. Reverse the scoring values for the scaled response items, so that a higher point value<br />
reflects a more positive perception of the clinical education environment.<br />
6. Add a fifth response item to the demographic item related to identification of the staff<br />
resource person at the clinical site, indicating that the student was not formally assigned a<br />
resource person, but the RN assigned to the same patient as the student served as an<br />
informal student resource.<br />
7. Use different student populations for data collection in repeated measure comparisons<br />
and single administration clinical site comparisons, to reduce the need for student<br />
identification of their inventories and hopefully improve accurate identification of clinical<br />
education sites.<br />
The above changes would result in a net addition of three items to the current 29<br />
forced-choice items. The only foreseeable difficulty with the addition of three items to the<br />
inventory is that it would require a three-page rather than a two-page format, which may result in<br />
student perception that the instrument will take a lengthy time to complete. However, if the<br />
administrator of the inventory mentions that students should be able to complete the instrument<br />
in about 10 minutes, there should not be any significant negative effects from item expansion.<br />
Study Limitations<br />
During the research process a few issues were recognized that either limit ability to draw<br />
conclusions based on the data or to generalize results to other nursing student populations.<br />
Group (pooled) data constituted the unit of analysis for most of the quantitative testing. The<br />
grouping of students from distinct clinical sites for the purposes of ANOVA testing most likely<br />
masked individual extremes in responses.<br />
The SECEE inventory contains primarily a priori determined items and scales, as well as<br />
limited choice responses. The data indicated that some student perceptions about the clinical<br />
education environment were unaddressed by the forced-choice portion of the instrument.<br />
However, the inclusion of the open-ended items did assist in identifying potentially significant<br />
issues that were overlooked in instrument development. These issues will be addressed in<br />
further SECEE inventory revisions.<br />
The SECEE instrument reflects only student perceptions of their clinical education<br />
environment. No data have been gathered through observation of students as they interact with<br />
these environments. There is no absolute assurance that students are accurately reporting their<br />
internal state or perceptions about the environment or that student perceptions accurately depict<br />
the clinical learning environment (Luce & Krumhansl, 1988). Future instrument validation<br />
research might include initial data collection through the SECEE inventory with follow-up<br />
observation of students interacting with learning environments that have been perceived<br />
positively and those that have been perceived less positively. Similarities between observer data<br />
and student evaluations would further support instrument validity.<br />
At least one environmental factor that may significantly impact student learning in the<br />
clinical nursing environment is not addressed by the SECEE inventory. This instrument does not<br />
directly address the impact of the student-patient interaction on student learning, although the<br />
investigator recognizes that this interaction may have a significant impact on student perceptions<br />
of the learning environment. Student interactions with patients who are unreceptive to attempts<br />
to provide care or patient teaching, patients who are confused or angry, and patients who are<br />
eager to participate in their plan of care will most likely impact the overall student perception of<br />
the clinical experience. However, in many clinical sites students have only short-term contact<br />
with individual patients, and are assigned a number of different patients throughout the course of<br />
a clinical rotation. Thus, although interaction with patients may impact the student experience<br />
either positively or negatively, this factor may vary greatly over the length of a four to six week<br />
clinical rotation.<br />
In order to fully address the impact of patient interactions on the learning environment,<br />
some type of measurement would need to be completed after each discrete clinical experience.<br />
However, the SECEE inventory is intended to measure overall student perceptions of the clinical<br />
environment, rather than student perceptions of a particular clinical day. Thus, the decision was<br />
made not to address patient interaction factors within the instrument. It might be more<br />
appropriate for nursing faculty to address this factor informally through conversations with<br />
students during the clinical experience, and to take appropriate action (e.g., altering patient<br />
assignments) to reduce any negative impact on learning resulting from patient factors. Another<br />
option might be to include a general question about whether patient interactions were perceived<br />
positively, or an open-ended question asking students to identify how characteristics of student-<br />
patient interactions assisted or hindered their learning.<br />
Other limitations of this particular study have been previously discussed: missing or<br />
incomplete respondent data, the lower student response rate at LMA institution, and the grouping<br />
of large numbers of clinical sites for comparative analysis. Suggestions have also been made to<br />
avoid these limiting factors in future research. A final limitation involves current inability to<br />
generalize the support for validity and reliability of the instrument to all nursing student<br />
populations. Although nursing students from three different institutions were included in this<br />
study, further research with other nursing student populations would be necessary before the<br />
instrument could be presented as an accurate and valid measure of nursing student perceptions of<br />
their learning environments.<br />
Research Implications<br />
The SECEE inventory appears to have reasonable reliability and validity levels with the<br />
nursing student populations under study. Incorporation of the recommended revisions should<br />
result in even higher reliability and validity figures. After revision, the instrument should again<br />
be tested with various student populations. Instrument and scale internal consistency should be<br />
determined as well as differences between scale scores according to student sub-populations.<br />
Factor analysis to explore other potential scale divisions may also be helpful.<br />
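As a first pass at the factor analysis suggested here, one might examine the eigenvalues of the item correlation matrix (a principal-components step commonly used before a formal factor analysis). The sketch below uses randomly generated placeholder data rather than SECEE responses:<br />

```python
import numpy as np

rng = np.random.default_rng(42)
# Placeholder for a respondents-by-items matrix (100 students x 8 items)
items = rng.normal(size=(100, 8))

corr = np.corrcoef(items, rowvar=False)   # 8 x 8 item correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, sorted descending
# By the conventional Kaiser criterion, components with eigenvalues > 1
# suggest how many factors (potential scales) to retain
n_retained = int((eigvals > 1).sum())
```

With actual inventory data, the pattern of item loadings on the retained factors could then be compared against the four a priori SECEE scales.<br />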
After inventory revision and testing, it would also be helpful to conduct an observational<br />
study of students interacting with clinical education sites that were evaluated both more<br />
positively and more negatively than the average. If observational data supported the inventory<br />
findings, validity of the SECEE instrument would be strengthened.<br />
Further consideration of the potential uses for the SECEE instrument in the process of<br />
assessment and improvement of the clinical education environment resulted in another<br />
suggestion for future investigation. Current inventory scale divisions reflect important constructs<br />
in the clinical education environment (Communication / Feedback, Learning Opportunities,<br />
Learning Support, and Department Atmosphere). However, as both faculty and clinical staff<br />
issues are included in the Communication / Feedback and Learning Support scales, it is not<br />
possible to use scale scores to determine overall student perceptions of the impact of either<br />
faculty or clinical staff on learning. A nursing program administrator who may wish to use the<br />
SECEE tool to gain information about the clinical education environment would have to look<br />
separately at each item pertaining to instructor and Preceptor / Resource RN to determine student<br />
perceptions of these two important figures in the learning environment. In the future, it may be<br />
more practical to consider reorganizing the items within the Communication / Feedback and<br />
Learning Support scales to create two new scales: Instructor Facilitation scale and Preceptor /<br />
Nursing Staff Facilitation scale. These two scales would reflect both the relationship and the<br />
personal development dimensions of Moos’s dimensions of educational environments (Moos,<br />
1979). The inventory would contain the same items and the same number of scales, but scale<br />
scores may be more immediately useful to faculty and administrators of schools after<br />
reorganization. Future investigations should determine the benefit of reorganization, based on<br />
both student data and conversations with nursing faculty and administrators about which scale<br />
divisions would result in the most useful information.<br />
Related Research Issues<br />
The current investigation has pointed to the potential difference in academic levels of<br />
students in terms of their perceptions of the clinical education environment. It may be of benefit<br />
to further investigate whether there is indeed a difference in how students perceive their learning<br />
environments based on academic level. Perhaps students expect different learning experiences<br />
or different degrees of learning support than are being provided. Further investigation of this<br />
issue may result in identification of issues that must be addressed in order to maximize student<br />
learning in the clinical setting.<br />
Another area for study related to the current investigation involves the relationship of<br />
student perceptions of the clinical education environment to student learning outcomes. Student<br />
perceptions of the learning environment have been shown to be related to student learning<br />
outcomes in the traditional classroom setting (Henderson, Fisher, & Fraser, 1995), but this<br />
relationship has not been demonstrated in the applied learning (clinical) setting. Often, expected<br />
clinical learning outcomes are not operationally defined and are difficult to measure. Detailed<br />
clinical learning objectives would be necessary for each clinical rotation within a nursing<br />
program, in order to make objective determinations about whether individual students have met<br />
learning outcomes, and prior to making any comparisons between learning outcomes and student<br />
perceptions of the clinical education environment. This area of research would demand<br />
considerable time and energy, but the results would certainly make a contribution to knowledge<br />
about the applied learning environment.<br />
Another factor in the applied learning environment that has not been directly addressed<br />
through the current research relates to how the students themselves impact learning in the clinical<br />
environment. Student variables such as motivation, preparation for clinical sessions, and<br />
assertiveness in gaining opportunities for skill development undoubtedly impact learning<br />
outcomes, but these factors have not been studied in the applied setting. In order to achieve a<br />
full picture of learning in the clinical setting, these variables will also have to be considered.<br />
It may also be beneficial to compare student perceptions of the clinical education<br />
environment with faculty perceptions of the same environment. The SECEE inventory could be<br />
written in a third person format for use with nursing faculty. Research has indicated that faculty<br />
often perceive the traditional classroom environment more positively than do students (Fraser &<br />
Fisher, 1982; Raviv, Raviv, & Reisel, 1990). However, these comparisons have not been made<br />
in the applied clinical education setting.<br />
Conclusion<br />
The applied learning aspect of nursing education is a critical component of the overall<br />
nursing program as the development of competent nursing practitioners depends to a great<br />
degree upon the quality of the clinical learning environment (Jacka & Lewin, 1986). Research<br />
has demonstrated that student perceptions of the learning environment account for an appreciable<br />
variance in student learning outcomes (Fraser, Williamson, & Tobin, 1987). The wide variety of<br />
clinical sites used for applied learning experiences may provide vastly different qualities of<br />
learning environments for nursing students. Thus, assessment of the quality of these sites is<br />
critical. Yet, to date, no means has been available to comprehensively evaluate the clinical<br />
learning environment and its impact on student learning. Accurate identification of the factors<br />
influencing student learning in the applied (clinical) environment is necessary, in order for<br />
nursing faculty and administrators to be able to assess the current status of these factors within<br />
the clinical settings used by a nursing program, to implement changes to improve the clinical<br />
learning environment, and to evaluate the benefit of changes that are made.<br />
The SECEE inventory provides an efficient means of beginning to assess the quality of the<br />
clinical learning environment through the students’ eyes. The instrument is quick to<br />
administer and straightforward to analyze. Students in the sample population responded consistently to the<br />
instrument as a whole and to the scales within the inventory. Test-retest reliability was<br />
sufficiently high, given the additional student clinical experiences between initial and repeat<br />
testing. Data also indicated that students responded differently to the instrument when<br />
evaluating two discrete clinical environments.<br />
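The reliability checks described above can be illustrated in code. The following Python sketch is purely illustrative and is not the analysis performed in this study; the formulas for coefficient (Cronbach's) alpha and the Pearson test-retest correlation are standard, and the data in the example are hypothetical.<br />

```python
from statistics import mean, variance

def cronbach_alpha(rows):
    """Coefficient alpha for a list of respondent rows (one score per scale item)."""
    k = len(rows[0])                                # number of items in the scale
    items = list(zip(*rows))                        # per-item score columns
    item_var = sum(variance(col) for col in items)  # sum of item variances
    total_var = variance([sum(r) for r in rows])    # variance of total scale scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def pearson_r(x, y):
    """Test-retest correlation between scale scores from two administrations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: five students answering a three-item scale,
# plus their scale totals at an initial and a repeat administration.
rows = [[4, 4, 3], [3, 3, 3], [2, 3, 2], [4, 3, 4], [1, 2, 1]]
alpha = cronbach_alpha(rows)
retest = pearson_r([11, 9, 7, 11, 4], [10, 9, 8, 11, 5])
```

Coefficient alpha rewards items within a scale that vary together across respondents, while the test-retest correlation treats each student's scale score at the two administrations as a paired observation.<br />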
The SECEE inventory was demonstrated to detect differences between student sub-<br />
populations. Students responded differently to the instrument according to the institution in<br />
which they were enrolled, according to their academic level, and according to clinical site group.<br />
The majority of issues raised by students in response to open-ended questions had already been<br />
addressed by the forced-choice portion of the inventory. Recommendations were made for<br />
revision of the instrument to include other issues raised by students as well as to remove items<br />
that were either too general or that did not correlate well with other items in the inventory.<br />
Recommendations were also made to reduce the incidence of missing demographic data and to<br />
improve the ease of scoring the inventory. Implementation of these recommendations together<br />
with those for repositioning some items within scales should improve both the reliability of the<br />
instrument and the ease of scoring and interpretation of data. Following revisions and testing, the<br />
instrument will even more closely reflect student perceptions of their clinical learning<br />
environment.<br />
The goal for the development of an instrument to measure student perceptions of their<br />
clinical learning environments is to provide accurate information to nursing faculty and<br />
administrators about the quality of the learning environment at all clinical sites used by a nursing<br />
program. Data from the SECEE inventory could be used together with other information to<br />
make decisions about taking action to improve the clinical learning environment and to monitor<br />
the success of actions taken, in terms of both the quality of the learning environment and student<br />
learning outcomes.<br />
References<br />
American Association of Colleges of Nursing. (1997). Draft Standards for Accreditation<br />
of Baccalaureate and Graduate Nursing Education Programs November, 1997. [On-line].<br />
Available: http://www.aacn.nche.edu/Accred/standard.htm.<br />
Anastasi, A. (1988). Psychological testing (6th ed.). New York: Macmillan Publishing Co.<br />
Berliner, D. C. (1983). Developing conceptions of classroom environments: some light<br />
on the T in classroom studies of ATI. Educational Psychologist, 18, 1-13.<br />
Bevil, C. W. & Gross, L. C. (1981). Assessing the adequacy of clinical learning settings.<br />
Nursing Outlook, 29, 658-661.<br />
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of<br />
learning. Educational Researcher, 18, 32-42.<br />
Burns, N. & Grove, S.K. (1993). The practice of nursing research: Conduct, critique &<br />
utilization (2nd ed.). Philadelphia: W. B. Saunders Company.<br />
Cannon, J. R. (1997). The constructivist learning environment survey may help halt<br />
student exodus from college science courses. Journal of College Science Teaching, 27, 67-71.<br />
Carver, R. (1996). Theory for practice: A framework for thinking about experiential<br />
education. The Journal of Experiential Education, 19 (1), 8-13.<br />
Cheney, C. D. (1991). The source and control of behavior. In W. Ishaq (Ed.) Human<br />
behavior in today’s world (pp. 73-86). New York: Praeger.<br />
Cheney, C. D. (1991). A rational technology of education. In W. Ishaq (Ed.) Human<br />
behavior in today’s world (pp. 163-170). New York: Praeger.<br />
Clark, J. A. (1995). Tertiary students’ perceptions of their learning environments: a new<br />
procedure and some outcomes. Higher Education Research and Development, 14, 1-12.<br />
Cust, J. (1996). A relational view of learning: Implications for nurse education. Nurse<br />
Education Today, 16, 256-266.<br />
Diamantes, T. (1994). Reducing differences between student actual and preferred<br />
classroom environments. ED 378 036.<br />
Dunkin, M. J. & Biddle, J. J. (1974). The study of teaching. New York: Holt, Rinehart<br />
and Winston, Inc.<br />
Dunn, S. V. & Burnett, P. (1995). The development of a clinical learning environment<br />
scale. Journal of Advanced Nursing, 22, 1166-1173.<br />
Entwistle, N. & Tait, T. (1995). Approaches to study and perceptions of the learning<br />
environment across disciplines. New Directions for Teaching and Learning, 64, 93-103.<br />
Farrell, G. A. & Coombes, L. (1994). Student nurse appraisal of placement (SNAP): an<br />
attempt to provide objective measures of the learning environment based on qualitative and<br />
quantitative evaluations. Nurse Education Today, 14, 331-336.<br />
Fisher, D. L. & Fraser, B. J. (1981). Validity and use of the my class inventory. Science<br />
Education, 65, 145-156.<br />
Fraser, B. J. (1982). Development of short forms of several classroom environment<br />
scales. Journal of Educational Measurement, 19, 221-227.<br />
Fraser, B. J. (1983). Managing positive classroom environments. In B. J. Fraser (Ed.)<br />
Classroom management (pp. 36-47). Bentley, Australia: Western Australian Institute of<br />
Technology.<br />
Fraser, B. J. (1991). Validity and the use of classroom environment instruments. Journal<br />
of Classroom Interaction, 26 (2), 5-11.<br />
Fraser, B. J., Anderson, G. J., & Walberg, H. J. (1982). Assessment of learning<br />
environments: Manual for learning environment inventory (LEI) and my class inventory (MCI)<br />
(3rd ed.). Perth, Australia: Western Australian Institute of Technology.<br />
Fraser, B. J. & Fisher, D. L. (1982a). Effects of classroom psychosocial environment on<br />
student learning. British Journal of Educational Psychology, 52, 374-377.<br />
Fraser, B. J. & Fisher, D. L. (1982b). Predicting students’ outcomes from their<br />
perceptions of classroom psychosocial environment. American Educational Research Journal,<br />
19, 498-518.<br />
Fraser, B. J. & Fisher, D. L. (1983). Assessment of Classroom Psychosocial<br />
Environment: Workshop manual. Bentley, Australia: Western Australian Institute of<br />
Technology. pp. 2-68.<br />
Fraser, B. J. & Fisher, D. L. (1986). Using short forms of classroom climate instruments<br />
to assess and improve classroom psychosocial environment. Journal of Research in Science<br />
Teaching, 23, 387-413.<br />
Fraser, B.J., Giddings, G. J., & McRobbie, C. J. (1991). Science laboratory classroom<br />
environments: A cross-national perspective. Paper presented at the annual meeting of the<br />
American Educational Research Association, Chicago.<br />
Fraser, B. J. & O’Brien, P. O. (1985). Student and teacher perceptions of the environment<br />
of elementary school classrooms. The Elementary School Journal, 85, 567-579.<br />
Fraser, B. J., Treagust, D. F., & Dennis, N. C. (1986). Development of an instrument for<br />
assessing classroom psychosocial environment at universities and colleges. Studies in Higher<br />
Education, 11, 43-54.<br />
Fraser, B. J., Williamson, J. C., & Tobin, K. G. (1987). Use of classroom and school<br />
climate scales in evaluating alternative high schools. Teaching & Teacher Education, 3, 219-231.<br />
Goodenow, C. (1992). Strengthening the links between educational psychology and the<br />
study of social contexts. Educational Psychologist, 27, 177-196.<br />
Goodwin, L. D. (1997). Changing conceptions of measurement validity. Journal of<br />
Nursing Education, 36, 102-107.<br />
Greeno, J. G., Collins, A. M., & Resnick, L. B. (1996). Cognition and learning. In D. C.<br />
Berliner & R.C. Calfee (Eds), Handbook of educational psychology (pp. 15-46). New York:<br />
Simon & Schuster Macmillan.<br />
Hambleton, R. K. (1996). Advances in assessment models, methods and practices. In D.<br />
C. Berliner & R. C. Calfee (Eds.). Handbook of educational psychology (pp. 899-925). New<br />
York: Simon & Schuster Macmillan.<br />
Henderson, D., Fisher, D. & Fraser, B. (1995, April). Associations between learning<br />
environments and student outcomes in biology. Paper presented at the annual meeting of<br />
the American Educational Research Association, San Francisco.<br />
Idiris, S. & Fraser, B. J. (1994, April). Determinants and effects of learning environments<br />
in agricultural science classrooms in Nigeria. Paper presented at the annual meeting of the<br />
American Educational Research Association, New Orleans, Louisiana.<br />
Jacka & Lewin (1986). Elucidation and measurement of clinical learning opportunities.<br />
Journal of Advanced Nursing, 11, 573-582.<br />
Knight, S. L. (1991). The effects of students’ perceptions of the learning environment on<br />
their motivation in language arts. Journal of Classroom Interaction, 26 (2), 19-23.<br />
Kubiszyn, T. & Borich, G. (1987). Educational testing and measurement: Classroom<br />
application and practice (2nd ed.). Glenview, IL: Scott Foresman and Co.<br />
Lomax, R. G. (1992). Statistical concepts: A second course for education and the<br />
behavioral sciences. White Plains, NY: Longman Publishing Group.<br />
Loughlin, C. E. & Suina, J. J. (1982). The learning environment: An instructional<br />
strategy. New York: Teachers College Press.<br />
Luce, R. D. & Krumhansl, C. L. (1988). Measurement, Scaling, and Psychophysics. In R.<br />
C. Atkinson (Ed.), Stevens’ Handbook of Experimental Psychology (2nd ed., pp. 3-74). New<br />
York: John Wiley & Sons, Inc.<br />
Marshall, H. H. (1990). Beyond the workplace metaphor: the classroom as a learning<br />
setting. Theory into Practice, 29, 94-101.<br />
McGraw, S. A., Sellers, D. E., Stone, E. J., Bebchuk, J., Edmundson, E. W., Johnson, C.<br />
C., Bachman, K. J., & Luepker, R. V. (1996). Using process data to explain outcomes: an<br />
illustration from the child and adolescent trial for cardiovascular health. Evaluation Review, 20,<br />
291-312.<br />
McNamara, E. & Jolly, M. (1994). Assessment of the learning environment – The<br />
classroom situation checklist. Therapeutic Care and Education: The Journal of the Association of<br />
Workers for Children with Emotional and Behavioral Difficulties, 3, 277-283.<br />
Moos, R. H. (1979). Educational Climates. In H. J. Walberg (Ed.), Educational<br />
environments and effects: Evaluation, policy and productivity. Berkeley, CA: McCutchan<br />
Publishing Corporation.<br />
Moos, R. H. & Moos, B. S. (1978). Classroom social climate and student absences and<br />
grades. Journal of Educational Psychology, 70, 263-269.<br />
Norusis, M. J. (1997). SPSS 7.5 Guide to Data Analysis. Upper Saddle River, NJ:<br />
Prentice Hall.<br />
Peirce, A. G. (1991). Preceptorial students’ view of their clinical experience. Journal of<br />
Nursing Education, 30, 244-250.<br />
Perese, E. F. (1996). Undergraduates’ perceptions of their psychiatric practicum:<br />
Positive and negative factors in inpatient and community experience. Journal of Nursing<br />
Education, 35, 281-284.<br />
Polifroni, E. C., Packard, S. A., Shah, H. S. & MacAvoy, S. (1995). Activities and<br />
interactions of baccalaureate nursing students in clinical practica. Journal of Professional<br />
Nursing, 11, 161-169.<br />
Raviv, A., Raviv, A., & Reisel, E. (1990). Teachers and students: Two different<br />
perspectives? Measuring social climate in the classroom. American Educational Research<br />
Journal, 27, 141-157.<br />
Reed, S. & Price, J. (1991). Audit of clinical learning areas. Nursing Times, 87 (27), 57-58.<br />
Reilly, D. E. & Oermann, M. H. (1992). Clinical teaching in nursing education. New York:<br />
National League for Nursing.<br />
Resnick, L. B. (1987). Learning in school and out. Educational Researcher, 16, 13-20.<br />
Richards, J. M. (1996). Units of analysis, measurement theory, and environmental<br />
assessment: A response and clarification. Environment and Behavior, 28, 220-236.<br />
Robins, L. S., Gruppen, L. D., Alexander, G. W., Fantone, J., & Davis, W. K. (1996,<br />
April). A model of student satisfaction with the medical school learning environment. Paper<br />
presented at the annual meeting of the American Educational Research Association, New York.<br />
Sand-Jecklin, K. E. (1997). Nursing student evaluation of the clinical education<br />
environment: Development of a measurement instrument (Tech. Rept. No. 1). Morgantown,<br />
WV: Kari Sand-Jecklin.<br />
Scheetz, L. J. (1989). Baccalaureate nursing student preceptorship programs and the<br />
development of clinical competence. Journal of Nursing Education, 28, 29-35.<br />
Shuell, T. J. (1986). Cognitive conceptions of learning. Review of Educational Research,<br />
56, 411-436.<br />
Shuell, T. J. (1996). Teaching and learning in a classroom context. In D. C. Berliner &<br />
R.C. Calfee (Eds), Handbook of educational psychology (pp. 726-764). New York: Simon &<br />
Schuster Macmillan.<br />
Sidman, M. (1991). Positive reinforcement in education. In W. Ishaq (Ed.) Human behavior in<br />
today’s world (pp. 171-174). New York: Praeger.<br />
Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.<br />
Skinner, B. F. (1989). Recent issues in the analysis of behavior. Columbus,<br />
OH: Merrill.<br />
Slavin, R. E. (1997). Educational psychology: Theory and practice (5th ed.). Needham<br />
Heights, MA: Allyn and Bacon.<br />
Taylor, P. C. & Fraser, B. J. (1991, April). CLES: An instrument for assessing<br />
constructivist learning environments. Paper presented at the annual meeting of the National<br />
Association for Research in Science Teaching, Fontana, WI.<br />
Tanner, C. A. (1994). On clinical teaching. Journal of Nursing Education, 33, 387-388.<br />
Thomas, D. R. (1991). Stimulus control: Principles and Procedures. In W. Ishaq (Ed.)<br />
Human behavior in today’s world (pp. 191-203). New York: Praeger.<br />
Trickett, E. J. & Moos, R. H. (1973). Social environment of junior high and high school<br />
classrooms. Journal of Educational Psychology, 65, 93-102.<br />
Vargas, E. A. (1996). A university for the 21st century. In J. R. Cautela & W. Ishaq,<br />
Contemporary issues in behavior therapy: Improving the human condition (pp. 159-188). New<br />
York: Plenum Press.<br />
Warren, K. (1993). The midwife teacher: engaging students in the experiential education<br />
practice. The Journal of Experiential Education, 16 (1), 33-38.<br />
Wilson, M. E. (1994). Nursing student perspective of learning in a clinical setting.<br />
Journal of Nursing Education, 33, 81-86.<br />
Windsor, A. (1987). Nursing students’ perceptions of clinical experience. Journal of<br />
Nursing Education, 26, 150-154.<br />
Appendix A<br />
Original SECEE Instrument<br />
WEST VIRGINIA UNIVERSITY SCHOOL <strong>OF</strong> NURSING<br />
<strong>STUDENT</strong> <strong>EVALUATION</strong> <strong>OF</strong> <strong>CLINICAL</strong> AGENCIES: 1996<br />
Please circle or check the best answer to each question and provide written answers in the<br />
blanks provided.<br />
WVU Campus: Morgantown Potomac State Semester: Spring<br />
Charleston Glenville Fall<br />
Are you a: Generic BSN student RN to BSN student<br />
Year in program: Sophomore Junior Senior<br />
Clinical site being evaluated (include unit or dept)_____________________________________<br />
1. What was the adequacy of your orientation to this agency / department?<br />
no orientation given inadequate minimally adequate complete<br />
2. Which of the following best describes your assignment to a preceptor or staff nurse<br />
familiar with your assigned patients during your time at this agency / department?<br />
____RN Preceptor assigned--same preceptor throughout the experience.<br />
____Staff resource RN assigned--changed on a daily basis.<br />
____Non-RN preceptor or resource person assigned.<br />
____No resource person or preceptor was assigned to me at this agency.<br />
3. To what degree did your assigned preceptor or resource nurse communicate with you<br />
during your agency experience?<br />
____little/no communication ____through report only<br />
____through report and occasionally at other times ____frequently throughout the day<br />
Please indicate in the right margin the number that best represents your answer to the<br />
following questions.<br />
Key: 1 = very seldom 2 = sometimes 3 = frequently 4 = almost always<br />
4. How often was the preceptor or resource nurse available to answer<br />
your questions or provide assistance with patient care? 1 2 3 4<br />
5. How often did your preceptor or resource nurse maintain ultimate<br />
responsibility for patients to which you were assigned? 1 2 3 4<br />
6. How often did the agency nursing staff serve as role models for<br />
professional nursing? 1 2 3 4<br />
7. How often did preceptor or resource nurse workload impact your<br />
experience at this agency / department? 1 2 3 4<br />
8. How often did agency nursing staff point out potential learning<br />
experiences for students? 1 2 3 4<br />
9. How often were there adequate numbers of patients whose acuity of<br />
nursing needs was appropriate for your clinical nursing abilities? 1 2 3 4<br />
10. How often did the agency and agency staff permit you to perform<br />
"hands on" care to the level of your clinical ability? 1 2 3 4<br />
11. How often were equipment, supplies and material resources needed to<br />
provide patient care and teaching available in this agency/department? 1 2 3 4<br />
12. What was the attitude of nursing staff toward serving as a resource to nursing students?<br />
very negative somewhat negative somewhat positive very positive<br />
13. Do you believe nursing staff were prepared to serve as a resource to nursing students?<br />
no yes<br />
14. Were other health professional students using this agency or department for clinical<br />
experience?<br />
no yes (if yes, please name the students and describe the impact on your<br />
experience)<br />
______________________________________________________________________________<br />
______________________________________________________________________________<br />
15. What were the strengths of having a clinical experience at this agency / department?<br />
______________________________________________________________________________<br />
______________________________________________________________________________<br />
16. What were the limitations of having a clinical experience at this agency / department?<br />
______________________________________________________________________________<br />
______________________________________________________________________________<br />
17. Please note your recommendations for the future use of this agency or department as a<br />
clinical site.<br />
____Do not recommend for use at any student level<br />
____Do not recommend for use at this level, but may be appropriate at a different level<br />
____Recommend for use at this level with minor modifications<br />
____Recommend for use at this level without modification<br />
Any other comments about either your experience at this agency or about this questionnaire?<br />
______________________________________________________________________________<br />
Appendix B<br />
SECEE Inventory Items Categorized by Scales<br />
Inventory Scale               Item    Abbreviated Content                                         Item in Original<br />
                              Number                                                              Inventory? (Y/N)<br />
Communication/Feedback          4     Responsibilities clearly communicated                       N<br />
                                8     Preceptor/resource nurse communication re: pt. care         Y<br />
                               12     Instructor provided constructive feedback                   N<br />
                               16     Nursing staff served as positive role models                Y<br />
                               20     Instructor served as positive role model                    N<br />
                               24     Nursing staff positive about serving as student resource    Y<br />
                               27     Nursing staff provided constructive feedback                N<br />
Learning Opportunities          3     Wide range of learning opportunities available at site      N<br />
                                7     Encouraged to identify/pursue learning opportunities        N<br />
                               11     Felt overwhelmed by demands of role (reverse coded)         N<br />
                               15     Allowed more independence with increased skills             N<br />
                               19     Nursing staff informed students of learning opportunities   Y<br />
                               23     Atmosphere conducive to learning                            N<br />
                               26     Allowed hands on to level of abilities                      Y<br />
                               29     Was successful in meeting most learning goals               N<br />
Learning Support/Assistance     2     Preceptor/resource nurse available                          Y<br />
                                6     Instructor available                                        N<br />
                               10     Instructor provided adequate guidance with new skills       N<br />
                               14     Nursing staff provided adequate guidance with new skills    N<br />
                               18     Felt supported in attempts at learning new skills           N<br />
                               22     Nursing students helped each other                          N<br />
                               25     Difficult to find help when needed (reverse coded)          N<br />
                               28     Instructor encouraged students to help each other           N<br />
Inventory Scale               Item    Abbreviated Content                                         Item in Original<br />
                              Number                                                              Inventory? (Y/N)<br />
Department Atmosphere           1     Adequately oriented to department                           Y<br />
                                5     RN maintained responsibility for student assigned pt.       Y<br />
                                9     High RN workload negatively impacted exp. (reverse coded)   Y<br />
                               13     Adequate number and variety of patients available at agency Y<br />
                               17     Needed equipment, supplies and resources were available     Y<br />
                               21     Competing for skills and resources negatively impacted exp. (reverse coded)  Y<br />
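The scale assignments and reverse-coded items listed above imply a simple scoring procedure for the revised inventory. The Python sketch below is a hypothetical illustration only, not the scoring routine used in the study: it assumes the revised response key (1 = Strongly Agree through 5 = Strongly Disagree), treats the "Can't Answer" option (6) as missing, and reverse-codes items 9, 11, 21, and 25; all function and variable names are the author of this sketch's own.<br />

```python
# Scale membership and reverse-coded items taken from the tables above.
SCALES = {
    "Communication/Feedback": [4, 8, 12, 16, 20, 24, 27],
    "Learning Opportunities": [3, 7, 11, 15, 19, 23, 26, 29],
    "Learning Support/Assistance": [2, 6, 10, 14, 18, 22, 25, 28],
    "Department Atmosphere": [1, 5, 9, 13, 17, 21],
}
REVERSE_CODED = {9, 11, 21, 25}  # negatively worded items

def score_secee(responses):
    """Mean score per scale for one student's responses (dict: item number -> 1..6).

    Assumes 1 = Strongly Agree ... 5 = Strongly Disagree, so lower scale
    means reflect more favorable perceptions; 6 ("Can't Answer") is dropped.
    """
    means = {}
    for scale, items in SCALES.items():
        vals = []
        for item in items:
            r = responses.get(item)
            if r is None or r == 6:   # unanswered or "Can't Answer"
                continue
            vals.append(6 - r if item in REVERSE_CODED else r)
        means[scale] = sum(vals) / len(vals) if vals else None
    return means
```

Returning None when every item in a scale was skipped keeps a scale with no usable responses distinguishable from a genuinely neutral score.<br />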
Appendix C<br />
Revised SECEE Instrument<br />
<strong>STUDENT</strong> <strong>EVALUATION</strong> <strong>OF</strong> <strong>CLINICAL</strong> <strong>EDUCATION</strong> <strong>ENVIRONMENT</strong><br />
Please circle or check the best answer to each question and provide written answers in the<br />
blanks provided.<br />
University and Campus ______________________________________________________<br />
Semester/Yr: Spring Fall 19____<br />
Program type: Generic BSN RN to BSN<br />
Year in program: Freshman Sophomore Junior Senior<br />
Clinical site you are evaluating (include both the name of facility and the department or<br />
unit)_______________________________________________________________________<br />
This inventory is being used to evaluate a variety of clinical education environments. In some<br />
agencies or departments, students are assigned resource nurses or preceptors in addition to the<br />
supervision of their clinical instructor. In other settings, the instructor is the only student<br />
resource. Which of the following best describes your experience with a preceptor or staff<br />
resource nurse during your time at this agency/department (choose only one).<br />
____I was assigned an RN preceptor (same preceptor throughout the experience).<br />
____I was assigned a staff resource RN (the RN assigned changed on a daily basis).<br />
____I was assigned a non-RN preceptor or resource person.<br />
____I worked only with my instructor at this agency (no resource person or preceptor was<br />
assigned to me).<br />
Circle the number that best represents your answer to the following questions. Please<br />
provide an explanation for any questions to which you respond “can’t answer” (number 6)<br />
in the space directly below the question.<br />
Key: 1 = Strongly Agree 2 = Agree 3 = Neutral 4 = Disagree<br />
5 = Strongly Disagree 6 = Can’t Answer<br />
1. I was adequately oriented to the agency or department. 1 2 3 4 5 6<br />
2. My preceptor/resource nurse was available to answer questions<br />
and to help with patient care. 1 2 3 4 5 6<br />
3. A wide range of learning opportunities was available at this<br />
agency/department. 1 2 3 4 5 6<br />
4. My responsibilities in providing patient care were clearly<br />
communicated to me. 1 2 3 4 5 6<br />
5. My preceptor/resource nurse maintained ultimate<br />
responsibility for the patients to which I was assigned. 1 2 3 4 5 6<br />
6. My instructor was available to answer questions and to<br />
help with patient care. 1 2 3 4 5 6<br />
7. I was encouraged to identify and pursue opportunities<br />
for learning in this environment. 1 2 3 4 5 6<br />
8. My preceptor/resource nurse or instructor talked with me<br />
about new information and developments related to my<br />
patients’ care. 1 2 3 4 5 6<br />
9. High preceptor/resource nurse workload negatively<br />
impacted my experience at this agency/department. 1 2 3 4 5 6<br />
10. The instructor provided me with adequate guidance as I<br />
learned to perform new skills. 1 2 3 4 5 6<br />
11. I felt overwhelmed by the demands of my role in<br />
this environment. 1 2 3 4 5 6<br />
12. My instructor provided constructive feedback about my<br />
nursing actions in this setting. 1 2 3 4 5 6<br />
13. This agency/department had an adequate number and<br />
variety of patients appropriate for my clinical<br />
nursing abilities. 1 2 3 4 5 6<br />
14. The nursing staff provided me with adequate guidance<br />
as I learned to perform new skills. 1 2 3 4 5 6<br />
15. As my skills and knowledge increased, I was allowed to<br />
do more skills independently. 1 2 3 4 5 6<br />
16. The nursing staff in this department served as positive<br />
role models for professional nursing. 1 2 3 4 5 6<br />
17. Equipment, supplies, and material resources needed to<br />
provide patient care and teaching were available in<br />
this agency/department. 1 2 3 4 5 6<br />
18. I felt supported in my attempts at applying new knowledge<br />
or learning new skills. 1 2 3 4 5 6<br />
19. Nursing staff in this department informed students of<br />
potential learning experiences. 1 2 3 4 5 6<br />
20. My instructor served as a positive role model for<br />
professional nursing. 1 2 3 4 5 6<br />
21. Competing with other health professional students using<br />
this agency for skills/procedures, patient assignments,<br />
or resources negatively impacted my clinical experience. 1 2 3 4 5 6<br />
22. Nursing students helped each other with patient care and<br />
problem solving in this setting. 1 2 3 4 5 6<br />
23. This environment provided an atmosphere conducive<br />
to learning. 1 2 3 4 5 6<br />
24. The nursing staff were positive about serving as a<br />
resource to nursing students. 1 2 3 4 5 6<br />
25. When I needed assistance with a situation or skill, it was<br />
difficult to find someone to help. 1 2 3 4 5 6<br />
26. I was allowed to perform "hands on" care at the level<br />
of my clinical abilities. 1 2 3 4 5 6<br />
27. The nursing staff provided constructive feedback about<br />
my nursing actions in this setting. 1 2 3 4 5 6<br />
28. The instructor encouraged students to assist each other<br />
with patient care and to share their learning experiences. 1 2 3 4 5 6<br />
29. I was successful in meeting most of my learning goals<br />
in this environment. 1 2 3 4 5 6<br />
What aspects of this clinical setting helped/promoted your learning?<br />
___________________________________________________________________________<br />
___________________________________________________________________________<br />
What aspects of this clinical setting hindered your learning?<br />
___________________________________________________________________________<br />
___________________________________________________________________________<br />
Is there anything else you wish to mention?<br />
______________________________________________________________________________<br />
________________________________________________________________________<br />
Appendix D<br />
Table of Specifications for Student Evaluation of Clinical Education Environment (SECEE) Inventory<br />
Issue Identified Identified by<br />
Agency Literature Nursing Nursing Trad.Classrm.<br />
Contracts * Faculty * Students * Instruments<br />
Facility<br />
Adequate supplies / equipment for providing care X X X<br />
Adequate reference sources X X<br />
Student access to records / documentation X<br />
Atmosphere conducive to growth X<br />
Contact person assigned and available ** X<br />
Agency contact person meets with faculty each sem. ** X<br />
Private area for conferences and belongings ** X X X<br />
Agency Experience<br />
Adequate orientation - student X X X X<br />
Adequate number and acuity of patients X<br />
Degree of competition for pts / skills with other students X X<br />
Students allowed adequate "hands on" care X X<br />
Range/variety of learning opportunities X<br />
Student role clarity X<br />
Adequate orientation - faculty ** X X<br />
Agency agrees to provide certain experiences ** X<br />
Agency experience provides for meeting course objectives ** X X<br />
Agency Staff/Preceptor<br />
Preceptor / resource is identified for students X X X<br />
Preceptor available as resource X X X<br />
Staff adjusts work to accommodate student experience X X<br />
127
Issue Identified Identified by<br />
Agency Staff/Preceptor (continued)<br />
Agency Literature Nursing Nursing Trad. Classrm.<br />
Contracts * Faculty * Students * Instruments<br />
Staff workload allows time to work with students X X<br />
Staff communicate with/provide feedback to students X X X<br />
Attitude of staff toward working with students X X X X<br />
Staff are skillful and serve as role models for students X X<br />
Agency/staff remain ultimately responsible for patients X X<br />
Staff prepared to work with students ** X X<br />
Staff education level (AD, diploma, BSN) ** X X<br />
Recommendations related to continued affiliation ** X<br />
Changes recommended related to agency experience ** X X<br />
Agency keeps faculty informed of student progress ** X X X<br />
Assist in determining student assignments ** X X<br />
Note. (*) indicates information gathered prior to development of the original version of the SECEE inventory.<br />
Note. (**) indicates issues that are important in determining the full nature of the clinical environment but that are outside the<br />
realm of the current focus of instrument development: measurement of student perceptions of their clinical education environments.<br />
128
Appendix E<br />
129
STATEMENT TO PARTICIPANTS<br />
This research investigation is being conducted by Kari Sand-Jecklin, an adjunct<br />
faculty member at West Virginia University and a doctoral student. The purpose of the study<br />
is to test a newly developed instrument that measures student perceptions of their clinical<br />
learning experiences. The study involves completing an inventory (survey) that requests<br />
your opinions about the clinical setting you are currently experiencing.<br />
Your participation in this study is strictly voluntary. You may choose not to respond<br />
to any or all portions of the inventory. Your decision about participating will have no bearing<br />
on your success in this course or in the nursing program. Your privacy will be protected, as<br />
no names are requested on the inventory. The only identifying mark will be the first two<br />
letters of your mother’s first name and the last two digits of your social security number. This<br />
identification is necessary, as some students will be completing more than one inventory<br />
during the study period.<br />
Analysis of the inventory should be completed by fall 1998. Your school of nursing<br />
will be provided with summarized results of instrument analysis as well as group data about<br />
perceptions of clinical sites.<br />
130
Appendix F<br />
131
College of Human Resources and Education, West Virginia University<br />
Office of the Dean<br />
February 27, 1998<br />
TO: Kari Sand-Jecklin<br />
FROM: Ernest R. Goeres, Associate Dean<br />
RE: Human Resources & Education H.S.#98-029<br />
Title: "Student Evaluation of Clinical Education Environment (SECEE):<br />
Instrument Development and Testing"<br />
Your Application for Exemption for your above-captioned research project has been<br />
reviewed under the Human Subjects Policies and has been approved.<br />
This exemption will remain in effect on the condition that the research is carried out exactly<br />
as described in the application.<br />
Best wishes for the success of your research.<br />
Attachment<br />
cc: Dean's Office<br />
Student Advising and Records<br />
Anne Nardi, Advisor<br />
(304) 293-5703 • 802 Allen Hall • P.O. Box 6122 • Morgantown, WV 26506-6122<br />
Equal Opportunity / Affirmative Action Institution<br />
132
KARI SAND-JECKLIN<br />
2003 White Oak Dr.<br />
Morgantown, WV 26505<br />
(304) 598-2673<br />
West Virginia RN License #: 051382<br />
<strong>EDUCATION</strong><br />
Bachelor of Science in Nursing, completed at Illinois Wesleyan University in May, 1982.<br />
Graduated Magna Cum Laude.<br />
Master of Science in Adult Health Nursing, completed at the University of Illinois in August,<br />
1987. Emphasis in self-care. Completed teaching internship in the area of Medical-<br />
Surgical Nursing.<br />
Doctor of Education from the Department of Educational Psychology at West Virginia<br />
University, 1998. Awarded the Swiger Fellowship for doctoral study.<br />
ADDITIONAL <strong>EDUCATION</strong><br />
Cardiac Dysrhythmia Course: 6-hour course completed at Mennonite Hospital in<br />
Bloomington, IL.<br />
Critical Care Course: 80-hour course completed at Mennonite Hospital.<br />
Physical Assessment Course: 48-hour course completed in 1982. Additional physical<br />
assessment course completed in 1994. Both courses taught by Certified Nurse<br />
Practitioners.<br />
Leadership/Management: various courses related to health care management, administration,<br />
change and situational leadership.<br />
Continuing Education: Topics of seminars/conferences attended include: health care reform,<br />
diabetes management, other clinical issues, staff development/nursing education,<br />
patient education, risk management, discharge planning, quality improvement, and<br />
nursing research.<br />
NURSING EXPERIENCE<br />
Visiting Instructor at the School of Nursing, West Virginia University, from August 1998 to<br />
present. Responsibilities include instruction in both classroom and clinical settings.<br />
Adjunct Faculty at the School of Nursing, West Virginia University, from August, 1996 to<br />
May, 1998. Conducted research, assisted with course revision, and instructed nursing<br />
students during this time.<br />
133
Clinical Instructor in the undergraduate department of the School of Nursing at West Virginia<br />
University from February to May, 1996. Responsibilities included instructing and<br />
supervising students in a medical-surgical clinical area.<br />
Nurse Educator for the Center for Healthy Lifestyles at St. Joseph Medical Center,<br />
Bloomington, IL from June 1994 to August 1995. Responsibilities included health<br />
education needs assessment; program planning, development, presentation, and<br />
evaluation. Also provided individual outpatient education related to illness<br />
management and health promotion, coordinated monthly public seminar series, and<br />
spoke to groups/organizations on health related topics. Served as on-site supervisor<br />
for nursing and health education interns. Authored and received a United Way grant<br />
to provide diabetic education and follow-up care for a low-income diabetic population.<br />
Director, Nursing Education at Frederick Memorial Hospital, Frederick, MD from July 1993<br />
to May 1994. Responsibilities included supervising 5.1 FTE staff; maintaining the<br />
department budget; and facilitating planning, implementation, and evaluation of nursing<br />
education activities for over 600 nursing service employees. Obtained CEU provider<br />
status for the hospital through the Maryland Nurses Association. Administered FMH<br />
Nursing Scholarship program, based on yearly grants from HSCRC. Chairperson of the<br />
Patient Education Committee and member of several interdepartmental committees.<br />
Director, Progressive Care Nursing at Frederick Memorial Hospital, Frederick, MD from<br />
November 1990 to July 1993. Responsibilities included staffing, budget<br />
administration, supervision of 56 FTE staff, and 24-hour accountability for the<br />
38-bed telemetry step-down unit. Implemented a pilot RN/NA partnership. Concurrently<br />
functioned as Chair for Nursing Quality Improvement Council and member of hospital<br />
Quality Assurance Committee.<br />
Director of Medical and Oncology unit at St. Joseph Medical Center, Bloomington, IL from<br />
July to October 1990. Responsibilities included 24-hour accountability for the 38-bed<br />
unit, supervision of 30 FTE employees, budget administration, staffing, and<br />
policy/procedure development. Member of the Service Excellence Steering Committee.<br />
Director of Education/Professional Development at St. Joseph Medical Center, Bloomington,<br />
IL from February 1988 to July 1990. Responsibilities included coordinating nursing<br />
orientation, nursing staff development, inpatient and outpatient education programs<br />
and community health promotion activities. Developed and presented education<br />
programs related to a variety of clinical and professional issues.<br />
Patient Education Coordinator at St. Joseph Medical Center in Bloomington, IL from July<br />
1987 to February 1988. Responsibilities included developing and coordinating<br />
inpatient, outpatient, and diabetic education as well as community health promotion<br />
activities.<br />
Staff Nurse: employed full-time and part-time at Brokaw Hospital in Normal, IL from<br />
November 1983 to May 1987 in the Maternal-child Health Department.<br />
134
Staff Nurse: employed part-time at Galena Stauss Hospital from June 1983 to November<br />
1983. Provided care for medical, surgical, obstetric, pediatric, emergency and<br />
intensive care patients.<br />
Staff Nurse: employed full-time at Mennonite Hospital from June 1982 to June 1983 in the<br />
Intensive Care/Post Intensive Care Unit.<br />
RELATED EXPERIENCE<br />
Adjunct Specialist in Nursing at Illinois Wesleyan University, Bloomington, IL (1994-95).<br />
Chair, Nursing Quality Improvement Council and consultant for nursing or clinically<br />
related quality issues at Frederick Memorial Hospital (1992-94).<br />
Member, Service Excellence Steering Committee at St. Joseph Medical Center, Bloomington, IL (1988-90).<br />
Transcultural Experience: member of a medical mission team, staffing a small clinic on a<br />
seminary compound in Haiti in 1984.<br />
Prenatal Exercise instructor: developed and instructed exercise programs for prenatal and<br />
postnatal women at Brokaw Hospital (1984-87).<br />
Exercise Instructor for BroMenn Wellness Center: taught an adapted aerobic exercise class<br />
for individuals who had physical fitness limitations or were just beginning exercise<br />
(1986).<br />
PRESENTATIONS, PUBLICATIONS AND CONSULTATIONS<br />
Member of a panel of experts on the "Health for Herself" television program produced by<br />
PBS (WILL TV) on October 27, 1994. Topic: Women and Nutrition.<br />
Regularly presented programs on topics such as stress management, self-care, alternative<br />
therapies, eating disorders, breast health and diabetes management to both lay and<br />
health professional groups (1994-95).<br />
"Incorporating Patient Classification Measures into a Volume-Based Nursing Productivity<br />
System": storyboard presentation at Innovations '93, sponsored by the Maryland<br />
Hospital Education Institute, November 1993.<br />
"Quality Improvement - It's Up to You" presentation to professional nurses, July and<br />
December 1993.<br />
"Becoming a Wise Healthcare Consumer": community lecture, September 1993.<br />
135
Contributor to the Health at Work section of Business to Business magazine in<br />
Bloomington, IL (1988-90).<br />
"Educating Yourself and Your Patients": presentation to senior level BSN students at Illinois<br />
Wesleyan University (1988 - 1990).<br />
"Stress: You Can Cope": presentations to a variety of community and business groups in<br />
1988.<br />
"Have a Healthy Retirement": presentation to retiring industrial employees, November 1988.<br />
"Identifying Self-Care Behaviors in Response to Minor Illness": unpublished thesis submitted<br />
as partial fulfillment of requirements for the Master of Science Degree at the<br />
University of Illinois at Chicago (1987).<br />
"Pregnancy and Exercise - A Safe Combination?": seminar presented to local exercise<br />
instructors, September 27, 1986.<br />
"Women's Health in Haiti": presentation at Linking Research to Practice Seminar in Urbana,<br />
IL, November 1985.<br />
136