Briefing Paper
Technology Enhanced Assessment for Learning: Case Studies and Best Practice
Briefing Paper by: John Dermo, University of Bradford (March 2011)
Overview
This seminar explored how instructors on foundation degree and first year undergraduate courses can deliver formative e-assessments, giving automatic generic feedback to assist and enhance their students’ learning.
1. Abstract: please provide a brief abstract of the seminar delivered (maximum 200 words).
The seminar first looked at the literature of assessment in general, as well as technology enhanced assessment, to set the background for two case studies. These two case studies, from the fields of Clinical Sciences and Engineering, provided varied practical examples of how formative e-assessment is being carried out in two different subject areas at the University of Bradford. This was followed by a group discussion of the major issues related to formative electronic feedback, in which delegates were able to share experiences with other practitioners and to identify the main challenges facing formative e-assessment, especially in the HE sector. In addition, delegates were given a guided tour of the University of Bradford’s dedicated e-assessment suite, which is used for both summative and formative technology enhanced assessment.
2. Rationale: please provide the background context, such as the research/evidence-informed practice context, which provided the impetus for the seminar.
In recent years there has been a remarkable increase in the use of e-assessment in the higher and further education sectors. Much of this, though, has focused on summative assessment, often driven primarily by administrative efficiency and the need to save time for busy academics teaching on large undergraduate modules. However, in the field of ELTT (enhancing learning and teaching through technology) there is now less discussion of summative assessment - “e-assessment of learning” - and more interest in how technology-enhanced assessment can benefit the learning process through formative feedback: this is “e-assessment for learning”. This is partly in response to a clear demand from students (e.g. through the National Student Survey) for increased quality and quantity of feedback.
This seminar sought to provide examples of formative assessment which reach beyond the cliché that e-assessment can only be used for multiple choice questions to test knowledge, and to challenge the oft-heard accusation that reliance on e-assessment can lead to a “dumbing down” of learning. The aim was to investigate whether it is, in fact, possible to use generic automated feedback which can genuinely be called “assessment for learning” and might enhance the learning experience for large numbers of university students in a practical way. By using a range of question types and careful question,
feedback and learning design, the case studies demonstrated how to assess learners at higher levels (Bloom, 1956).
The seminar built on the achievements of the University of Bradford’s HEA e-Learning Pathfinder Project “Embedding support processes for e-assessment” and Bradford’s JISC Institutional Exemplar e-Assessment Project, “Integrating thin client systems for secure e-assessment” (ITS4SEA).
The case studies were closely aligned with HEFCE’s framework in “Enhancing Learning and Teaching through the Use of Technology” (HEFCE, 2009), in particular with the strategic priorities related to innovation in teaching and learning, enhancing flexibility, student achievement and improving efficiency of curriculum delivery processes.
The case studies were also driven by the University of Bradford’s Academic Framework 2009-14, which identifies diagnostic assessment, formative assessment and feedback as key activities within its approach to curriculum delivery.
3. Generation of evidence: please describe how the reported research/evaluation findings were generated, e.g. methods used.
This seminar first looked to the literature of educational development to consider what constitutes “assessment for learning”, and then focused on the literature of technology-enhanced learning to consider the pros and cons of carrying this out in an online environment. The work also drew on recent JISC-funded work in the sector, such as the FEASST and REAQ projects, which have sought to scope and explore best practice in e-assessment.
The seminar presented research findings from two case studies carried out at the University of Bradford in 2008-10 to evaluate examples of e-assessment for learning, delivered in different ways by different lecturers, in search of a model of best practice for e-assessment for learning. The case studies came from different subject areas and adopted a mixed methods approach (Yin, 1984) consisting of structured interviews, pseudo-qualitative surveys of student opinion through questionnaires, and analysis of quantitative assessment data; methodologically, the research is based on a pragmatic approach to educational research (Pring, 2000).
The first case study (Dr Liz Carpenter, Department of Clinical Sciences) investigated the quality of feedback-rich formative assessment on a biology foundation course, where students were presented with a formative e-assessment in the university’s e-assessment suite towards the end of their taught module, and then had access to the same formative assessment and feedback via the virtual learning environment in the period leading up to their final e-assessment. Impact was measured in a number of ways: quantitative analysis of student progress; multiple questionnaires to evaluate student perceptions of the process and of their own study habits; analysis of student access patterns; comparison of student engagement with feedback and subsequent progress; and comparison of e-feedback with face-to-face feedback.
The second case study (Dr Darwin Liang, Department of Engineering) demonstrated an innovative approach to teaching a first year engineering module. This “activity-based student-centred teaching and learning” (HEA Engineering Subject Centre Teaching Award finalist 2010) combines traditional face-to-face lectures and lab work with online quizzes and e-tutorials incorporating automated feedback, as well as regular low-stakes summative e-assessment and a final high-stakes summative e-assessment. For this case study, the approach was evaluated through statistical tracking of student achievement, structured interviews with students, questionnaires looking at student perceptions and attitudes, and detailed in-depth reflections from the course tutor.
The seminar concluded with a group discussion task in which delegates from across the HE sector compiled lists of challenges facing formative e-assessment in the sector, and then prioritised these according to importance. These were subsequently written up and collated by the seminar facilitator and fed back to the group after the event; the results are included below.
4. Existing evidence: please provide details of research/evaluation evidence drawn on and reported in the seminar.
There is a great deal of recent evidence from UK Higher Education, for example from the National Union of Students (2009), the National Student Forum (2010) and the annual National Student Survey, indicating that feedback is a major issue in the sector.
The literature from the field of educational development suggests that feedback can have a positive impact on learning, so long as the feedback is appropriate and timely, and is accompanied by opportunities for learners to reflect, act upon the feedback and build this into their future study (Black and Wiliam, 2009; Boud, 2000; Hattie and Timperley, 2007; Nicol and Macfarlane-Dick, 2006; Sadler, 1989).
Literature on e-assessment (also referred to as “computer-assisted assessment” or “technology enhanced assessment”) suggests that automated formative feedback can be one of the key benefits of this aspect of technology enhanced learning (Bull and McKenna, 2004; Gilbert et al, 2009; Nicol and Draper, 2009; Pachler et al, 2009; Whitelock and Brasher, 2006). There is consensus in the literature that this e-feedback can be efficient, and can save a great deal of instructor time, by delivering generic feedback to large numbers of students. This feedback can also offer a greater level of detail and richer media, can even offer a degree of flexibility and personalisation, and can be incorporated into a blended learning environment to support a diverse and changing student population.
However, a number of key questions remain: is it possible to use this kind of automated generic feedback to create “moments of contingency” (Black and Wiliam, 2009), especially for HE students? Can the use of automated, objectively marked tasks really provide appropriate feedback of sufficient quality to have a significant impact on learning for university students? And how will students perceive this kind of automated feedback; could it help to satisfy the demand from students for more (and better) feedback?
5. Research findings/new evidence: please describe any new findings or evidence reported in the seminar.
Case study 1: Measuring the impact of formative e-assessment and feedback on learning. Dr Liz Carpenter, Department of Clinical Sciences, School of Life Sciences, University of Bradford.

This case study yielded a number of findings about the impact of formative e-assessment, which can be summarised as follows:
- Students do value feedback-rich formative e-assessments and especially appreciate receiving the feedback immediately.
- There is a significant association between student progress and engagement with formative tasks; that is to say, the students who engage with the e-feedback task make more progress than those who do not (although the number of times they do so does not make a significant difference).
- The quantity of engagement with the formative feedback is not significantly influenced by the student’s initial level, which suggests that this is not simply a case of the stronger students exhibiting higher levels of autonomy and learning skills.
- Student engagement is greatest during the period immediately prior to the summative examination.
- Access to formative tasks tends to be via laptops at home or in halls of residence.
- Students who view the formative assessment and feedback as part of their learning show the greatest amount of progress; students who see the formative tasks as mere preparation for the summative exam, or as an evaluation tool for their other revision, tend to benefit less.
In conclusion, computers can deliver quality feedback.
Case Study 2: Activity-based student-centred teaching and learning. Dr Darwin Liang, School of Engineering, Design and Technology, University of Bradford.

The findings emerging from this case study can be summarised thus:
- Students enjoyed learning at their own pace and liked receiving automatic, immediate feedback on their work.
- E-tutorials with automated generic feedback provide one more method to enable students to learn, offering more opportunity for practice and greater flexibility, and can be repeated on demand.
- Whilst there was no immediate statistically significant increase in student achievement, the impact of this teaching method is more likely to be seen in longer-term student development.
- There were mixed reactions among students to the innovative teaching method, due to the diversity of students within the cohort.
- Students are sometimes more concerned about marks than about learning, and need to have a tangible reward for their efforts.
- There is a risk that too many different e-assessment tasks during the semester may lead to “over-assessment” and a negative impact on student perceptions and learning.
- E-feedback tasks should not be too long or over-complicated, because students tend to give up if the task is too time-consuming.
- Students are keen to be allowed to take responsibility for some of their learning and to support one another during learning.
- Students do appreciate the personal, face-to-face contact with the lecturer and do not want this to be replaced by automated tasks.
6. Outcomes of research/evaluation evidence and the implications for policy and practice: please identify any application or outcomes of research/evaluation evidence and detail the implications for policy and practice for different stakeholder groups such as: academics, learning technology practitioners, professional developers, senior managers, policy makers, students, sector organisations, employers and professional bodies.
The main conclusion of the research emerging from these case studies is that, by following the Black and Wiliam (2009) definition of formative assessment and using automated feedback to create “moments of contingency”, it does appear possible to direct learners towards improved learning in the higher education sector. This is especially the case in foundation degree and first year undergraduate courses, where the course content may lend itself better to this kind of learning; incidentally, this is also often where the largest student cohorts are to be found, so the efficiencies in terms of time saving are also maximised.
The main implication is that academics, learning technologists and senior managers should work together to place a greater emphasis on the potential of e-assessment for learning, not just e-assessment of learning. The most effective use of e-assessment in the higher education sector may not in fact be for large-scale invigilated summative assessments in vast computer clusters, but rather for more flexible delivery of formative e-assessment tasks, where large numbers of students can receive immediate generic feedback.
7. Emerging themes: please detail the discussion topics or themes that were raised by delegates during the course of the seminar, suggesting areas that would merit further investigation.
The case studies prompted an interesting and wide-ranging discussion of themes related to e-assessment for learning. Much of the discussion focused on how to engage learners with the e-feedback: it is not sufficient simply to provide the feedback; there must be a deliberate effort to develop a second, reflective stage which transforms passive feedback into active “feedforward”. This concurs with the findings of the University of Strathclyde’s REAP project (Re-engineering Assessment Practices in Higher Education) and the Australian Learning and Teaching Council’s Assessment 2020: Seven propositions for assessment reform in higher education, which both stress the importance of feedback being a dialogue between the instructor and the student, and encourage peer feedback. It was suggested by delegates, for example, that students might be included in the question design process, creating their own questions and feedback to share with one another.
There was also some discussion about the most effective way of generating a bank of formative questions, with associated feedback, as it was recognised that question creation can be a very time-consuming activity. It was felt that it would be desirable to develop subject-specific banks of questions to share as an open educational resource across the sector.
It was also stressed that regular feedback throughout the course is likely to have a greater impact than a one-off formative assessment, and there was some discussion about the desirability of using low-stakes grades for these assessments as a way of engaging students with the formative tasks. These points were further developed in the group discussion task which followed.
After the case studies and the ensuing discussion, there was a group discussion task around the theme “What are the biggest challenges facing formative e-assessment in the HE sector?”. Delegates broke into small groups of 3-5 participants to identify challenges, wrote them on post-it notes, and then positioned these in order of importance on a flip-chart; these data were later collated for distribution to the group by email.

The output from the discussion task may be summarised thus:
- One key challenge was “what constitutes effective feedback?” Does e-assessment really improve learning? Some wondered whether it was actually a good idea to provide a direct link to the correct answer, and others pointed out that there was a general lack of reading around the subject by students, which needed to be considered when writing feedback.
- Student engagement was identified as a key challenge, especially how to get students to engage with and reflect upon feedback, online and otherwise. It was also pointed out that we need mechanisms to encourage discussion and create dialogue about the feedback. In addition, automated feedback offers limited support for social learning, and we need to maintain the personalised aspect of feedback to be able to motivate learners in need of support.
- There was also discussion of mixed formative/summative assessments, where low-stakes grades are assigned to encourage engagement with feedback, but may raise concerns about cheating.
- Staff and institutional engagement were also identified as key issues: staff can be reliant on technical specialists, the available tools may be ineffective, and the technology itself can be a hurdle. As well as having to learn the specific e-assessment software, staff also need to develop skills in question design and an understanding of the issues related to formative feedback. All of this can be very time-consuming, especially at first. It would help if academics could share question banks, but this too can be a challenge.
- There was some concern that formative e-assessment may be more relevant in some subject areas than others, and that it may be difficult in more discursive subject areas. We also need to consider how to deliver feedback for open-ended questions and answers which are not black and white but require interpretation. It is not always appropriate for the teacher to make assumptions about why a certain answer is wrong. Also, work-based learners may have different needs.
- It was also pointed out that formative e-assessment should be viewed as part of a larger portfolio of methods of formative assessment, and we also need to consider the feedback given on manually marked assignments submitted electronically.
In conclusion, the case studies and the ensuing discussion from this seminar raised a number of questions which merit further study and consideration:
- What exactly constitutes quality automated electronic feedback?
- Might topic-based feedback be more useful to students than feedback on individual questions?
- Is formative e-assessment more applicable to some subject areas than others? Is it restricted to foundation/first year undergraduates?
- How do we best train students to engage with this feedback and make the most of it in their learning?
- How can academics work together to share their banks of questions and feedback?
- What are the best research methods to investigate these issues; how do we really know what students are doing with the feedback?
8. Any other comments: please use this box to include any additional details.
This half-day seminar in the HEA Evidence-based practice series 2010 took place at the University of Bradford on the afternoon of 8 December 2010, in the Learn Higher room and the Richmond Building e-Assessment suite.

The event was attended by 21 delegates representing 10 different UK HE institutions as well as the Higher Education Academy. Thanks are due to the delegates who, in many cases, braved inclement weather conditions to attend.
Special thanks and acknowledgements go to colleagues at the University of Bradford, without whose support this seminar would not have been possible:
- Professor Nigel Lindsey, Director of Learning and Teaching
- Dr Liz Carpenter, Department of Clinical Sciences
- Dr Darwin Liang, Department of Engineering
- Debbie Alstead and Professor Peter Hartley, Centre for Educational Development
Thanks are also due to Clare Gash and Eddie Gulc from the Higher Education Academy for all their support before, during and after the day of the event.
9. Bibliography/references (preferably annotated): please list any references mentioned in or associated with the seminar topic. Where possible, please annotate the list to enable readers to identify the most relevant materials.
Black, P. and Wiliam, D. (2009) “Developing the theory of formative assessment.” Educational Assessment, Evaluation and Accountability, 21 (1), 5-31.

Bloom, B.S. (1956) Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New York: David McKay Co. Inc.

Boud, D. (2000) “Sustainable assessment: rethinking assessment for the learning society.” Studies in Continuing Education, 22 (2), 151-167.

Boud, D. and Associates (2010) Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.

Bull, J. and McKenna, C. (2004) Blueprint for Computer-Assisted Assessment. London: RoutledgeFalmer.

Gilbert, L., Gale, V., Wills, G. and Warburton, B. (2009) JISC Report on e-Assessment Quality in UK Higher Education. Southampton: University of Southampton.

Hattie, J. and Timperley, H. (2007) “The Power of Feedback.” Review of Educational Research, 77 (1), 81-112.

Higher Education Funding Council for England (2009) Enhancing learning and teaching through the use of technology: a revised approach to HEFCE’s strategy for e-learning. Available online at http://www.hefce.ac.uk/pubs/hefce/2009/09_12/

National Student Forum (2010) NSF Annual Report 2010. Available online at http://www.bis.gov.uk/assets/biscore/corporate/docs/n/10-p83-national-student-forum-annual-report-2010

National Union of Students (2009) Assessment Purposes and Practices. NUS briefing paper.

Nicol, D. and Draper, S. (2009) “A blueprint for transformational organisational change in higher education: REAP as a case study.” In Mayes, T., Morrison, D., Mellar, H., Bullen, P. and Oliver, M. (eds) Transforming higher education through technology-enhanced learning. Higher Education Academy.

Nicol, D.J. and Macfarlane-Dick, D. (2006) “Formative assessment and self-regulated learning: a model and seven principles of good feedback practice.” Studies in Higher Education, 31 (2), 199-218.
Pachler, N., Mellar, H., Daly, C., Mor, Y. and Wiliam, D. (2009) Scoping a vision for formative e-assessment: a project report for the Joint Information Systems Committee. London: WLE Centre.

Pring, R. (2000) Philosophy of Educational Research. London: Continuum.

Sadler, R. (1989) “Formative assessment and the design of instructional systems.” Instructional Science, 18, 119-144.

Whitelock, D. and Brasher, A. (2006) Roadmap for e-Assessment. Joint Information Systems Committee.

Yin, R. (1984) Case Study Research. Beverly Hills: Sage Publications.
EvidenceNet is a Higher Education Academy resource. www.heacademy.ac.uk/evidencenet