Full copy of Planet - Issue 23 (PDF file 3.4mb) - GEES Subject ...


Editorial

Celebrating 10 years of teaching and learning

Welcome to Planet 23, and the new editorial team at Plymouth working with many GEES colleagues through the new Planet article reviewing process. Articles submitted to Planet are peer-reviewed, revised as appropriate, followed by final proofing and editing by the team in Plymouth. We have updated the submission guidelines and style guides for the publication; please see the Planet website for details. Thanks are due to everyone involved in the Planet process. We hope you find the wide range of articles interesting and enjoyable.

This Planet brings together a collection of papers on Assessment and Feedback, many of which were presented at the GEES Assessment for Learning Conference held in Manchester, 2009. The Assessment and Feedback focus of this special edition is complemented by two GEES Subject Centre briefings authored by Professor Carolyn Roberts, University of Oxford, entitled Giving Feedback and Modes of Feedback. Download both from http://www.gees.ac.uk/pubs/briefings/briefings.htm

In 2009-10 we held the first GEES student photo competition. Some of the top 20 photos are featured in this edition, and can also be viewed online. We have launched the 2010-2011 competition, so get your students involved. Full details can be found at http://gees.ac.uk/projtheme/sawards/2011/photocomp11.htm

The GEES team for 2010-11:
Pauline Kneale, Subject Centre Director
Yolande Knight, Subject Centre Associate Director
Mike Sandrers, C&IT Manager
Susie Stillwell, Dissemination Coordinator
Esther Bobek, Resource Coordinator
Jane Dalrymple, Administrator

PLANET ISSUE 23


Jacqueline M. Pates 1 | Robert Blake 2 | Ali Cooper 2
1: Lancaster Environment Centre, Lancaster University
2: Centre for the Enhancement of Learning and Teaching (CELT), Lancaster University

Using structured reflection to encourage more active use of feedback

Abstract

Students are frequently dissatisfied with the feedback they receive, but providing more feedback is often unrealistic for staff. This paper reports on activities which helped students to make the most of the feedback they receive. Students were set weekly tasks and received rapid feedback aimed at the whole group. In-class activities, including personal reflections, encouraged engagement with the feedback. These activities turned feedback from a passive ‘receipt’ process to an active dialogue, in which there was scope for future action. The class became more aware of the variety of feedback methods being used with them, increased in confidence and changed their perceptions of the value of feedback to inform their work.

Introduction

The quantity, quality and timing of feedback are currently ‘big issues’ in university education. The National Student Survey repeatedly finds that ‘Assessment and Feedback’ is the area with which students are least satisfied. In general, students want more, clearer and faster feedback (National Union of Students, 2008). At the same time, staff are juggling high workloads and increasing class sizes. This discord is heightened when students appear not to act upon the feedback they have been given. One solution is to look critically at the whole feedback cycle. Instead of focussing exclusively on the quality of written feedback, staff and students alike can gain by placing more emphasis on developing self-evaluation skills in students (Yorke, 2003).

Student-centred models of feedback emphasise its cyclical nature, inherent in which is the notion of feedback as ‘dialogue’ between staff and student (Nicol and Macfarlane-Dick, 2006; Nicol, 2008). Unfortunately, the reality often mirrors a linear process or staff ‘monologue’, where students do not engage as active participants in their own learning. Student performance improves when students’ self-evaluation skills are fostered (e.g. McDonald and Boud, 2003; Rust et al., 2003). The inclusion of self-management and independent learning skills in Benchmarking Statements highlights their importance in Higher Education (HE) (Q.A.A., 2007).

The traditional staff-student dialogue developed through tutorials or seminars does not work with large classes. Here, we focus on group feedback, which is time-efficient for staff to provide, but is often very poorly used by students. We present the key findings from a study in which structured reflection was used to encourage students in large classes to engage with feedback.

Methods

Module context

ENV201 Project Skills is compulsory for students studying Environmental Science (ES) (approx. 70-80 students at Lancaster). The module has dual aims of developing skills in data presentation and analysis, and in report writing. It is closely integrated with a seven-day residential field course (ENV200 Carrock Fell) running immediately prior to the module. The assessment for ENV200 includes a 2000-word report, due at the end of the term.

The report writing element is jointly taught by two of the authors. A writing topic is introduced each week (e.g. structuring an introduction, writing an abstract, structuring your results), followed by a short (< 1 page) practice task. These tasks are unassessed, but map onto components of the assessed ENV200 report (e.g. prepare an outline introduction / the abstract / outline results section for your ENV200 report). The tasks are submitted online the next week for tutor review. The tutor skim-reads the submissions looking for patterns (e.g. common mistakes), then selects examples of work completed well or poorly for class discussion, and prepares group feedback. The following day a feedback session is used to analyse the submitted work and provide general feedback (Table 1). In total, 6 tasks are completed by the students during the 10-week module.

Intervention

For this project we focussed on improving the use of group feedback. At the end of each feedback session, we introduced a 5-minute reflection period to encourage engagement with the feedback material, during which the students were asked specific questions such as:

“What have I learnt during today’s session?”

“What will I change in the next draft of this piece of work?”

These responses were written down and entirely personal to the students. Together with the VLE discussion and the Post-it note questions (Table 1), these activities were designed to develop dialogue between staff and students.

Evaluation

At the start of the year, the whole class was asked to complete a baseline questionnaire about their experiences of tutor feedback on 1st year assignments, with an emphasis on five common methods (comments in the margin, paragraph at the end, marking grid, one-to-one and group feedback). There were nine questions (Table 2) and space to add comments about issues such as timing, detail, variation between assessments, particularly helpful feedback and how the feedback influenced ways of studying.

A focus group of 15 students was recruited; the participants were volunteers and reflected the gender composition of the cohort (40% female, 60% male) and the range of academic achievement. Recorded discussions, led by a colleague from outside the department (Ali), were held at the start and end of the year, in groups of 5 or 6 students. In the first focus group, discussion focussed on how the students had experienced feedback in the 1st year. In the second focus group, the students were asked to reflect back on their experiences of the feedback strategies used in ENV201.

Table 1: Summary of feedback methods employed in ENV201.

Feedback sessions
- Self-assessment in class / Peer assessment in class: The feedback session starts with a reminder of the key features of the task. Students then assess their own or a fellow student’s work against given criteria.
- Group oral feedback:
  - Discussion of examples of student work: During the feedback session, examples of student work are projected for the whole class to see. Students lead the annotation of the work, using a tablet laptop, assessing the work against the same criteria as used above. They are encouraged to find positive points as well as criticise the work. Examples are selected to illustrate a range of typical errors and demonstrate good practice.
  - Verbal responses to individual questions (whole group): Students are able to ask the tutors questions throughout the class.
  - Verbal responses to individual questions (one-to-one): Tutors move around the class during the initial activities, providing opportunities for one-to-one questions.
  - Post-it note questions: Students write questions on Post-it notes during the feedback session. The answers are either given in class if time allows or through the VLE.
- Self-evaluation in class: See intervention questions below.
- Written group feedback: At the end of the feedback session a summary (1 side of A4) of key feedback points is distributed. The feedback summarises 3 things that were done well and 3 things that were incorrect, each ranked with ticks and crosses (e.g. 3 ticks, very good; 1 tick, OK).
- Questions on VLE: At all times, students are encouraged to ask questions via the VLE. Answers are provided by fellow students or tutors.

Other feedback activities
- Peer evaluation of group work: Students evaluate group posters and give oral feedback.
- Written comment on group work: The tutor provides a written feedback summary on the group posters.
- Model answers on VLE: Detailed worked answers are provided for statistics tests.

Individual feedback
- Individual written feedback: Individual written feedback is provided for the report at the end of this module.


Question 1: I read the feedback before looking at the grade (y/n)
Question 2: I did not read the feedback – I just looked at the grade (y/n)
Question 3: The feedback affected the way I felt about the grade given (y/n)
Question 4: I had a follow-up conversation with my tutor after receiving the grade/feedback (y/n)
Question 5: I found it easy to understand what the tutor meant in the feedback given (0 = very difficult – 3 = very easy)
Question 6: I found the feedback helpful for indicating what I had done well (0 = not at all helpful – 3 = extremely helpful)
Question 7: I found the feedback helpful for understanding where I had gone wrong or omitted things (0 = not at all helpful – 3 = extremely helpful)
Question 8: The feedback caused me to think about how much I had gained from the module (0 = not at all – 3 = a lot)
Question 9: I used the feedback to guide my next piece of assessed work (0 = I did not use it at all – 3 = I used it a lot)

Table 2: Initial questionnaire questions.

Results and discussion

The first year position

The majority of students responded to the questionnaire (91%, n = 67). All students had received 1st year feedback in the form of margin comments and a paragraph of text at the end of their work (Table 3). Approximately two-thirds had received feedback as a marking grid or group feedback. Only 25% had received one-to-one feedback.

The majority of students looked at the grade before looking at the feedback, and a significant minority did not read the feedback given at all. At least 50% of students stated that the feedback affected how they felt about their grade, with a paragraph of text having the most impact and group feedback having the least. Very few students sought out a follow-up conversation with their tutor.

Question | Comments | Paragraph | Grid | One-to-one | Group
N (receiving this type of feedback) | 67 | 66 | 43 | 17 | 42
Percentage of all respondents | 100% | 99% | 64% | 25% | 63%
I read the feedback before looking at the grade. | 15% | 21% | 16% | 18% | 17%
I did not read the feedback – I just looked at the grade. | 13% | 14% | 16% | 6% | 19%
The feedback affected the way I felt about the grade given. | 52% | 68% | 56% | 59% | 48%
I had a follow-up conversation with my tutor after receiving the grade/feedback. | 4% | 8% | 0% | 24% | 5%

Table 3: Percentage of students responding “yes” to each question for the five feedback methods evaluated.


Figure 1 summarises the responses to the Likert scale questionnaires. There are no strong differences between the different feedback methods in terms of usefulness to the students. In general, the students considered that all the feedback methods were relatively easy to understand, and that they were helpful both in indicating what the student had done wrong and what they had done correctly. However, the majority of students did not feel that the feedback caused them to reflect on what they had gained from the module, nor was it useful for guiding their next piece of work. The one exception was one-to-one feedback, which did promote a more significant level of reflection. In comparison to the other methods, group feedback performed similarly or slightly worse across all questions.

The most common issues raised in the free text responses were: more one-to-one feedback would be useful (24%); more detail and clearer explanations needed (24%); and not enough feedback given (21%). This comment is typical:

“Feedback … is often vague and doesn’t always explain where we did well / badly. Most of the time it’s just ticks in the margin, the odd “why” and then a mark at the end.”

Group feedback was not raised at all in the free text responses, perhaps indicating that the class did not perceive this feedback method as important to them.

The issues raised in the questionnaires were explored further in the focus groups. Overall, the tenor of the discussion was more negative about 1st year feedback than the questionnaire responses indicate. The majority of students in this class have moved directly from studying A-levels in a school or 6th-form college to university. During A-levels, the norm was to submit multiple drafts for correction by teachers. The students were surprised when, at university, they were expected to hand in a final version, with no opportunity for re-submission. However, they recognised that the school model did not help them become independent in their learning.

“… the nature of the feedback changes between ‘A’ level and university … (in school) it was basically ‘you should have put this in’ and now (at university) it’s like, ‘you’ve missed out something’ and you’ve got to … figure it out for yourself …”

It was clear that the students felt no sense of dialogue with their tutors, and most felt unable to seek one-to-one support, particularly students who felt they had done poorly in a piece of work. The students valued individual written comments on their own work, which they would ideally like to be followed up with a one-to-one conversation with the tutor. Finally, they were unable to recognise the links between their work for different modules, and did not consider using the feedback from one to guide the next.

End of year two responses

The initial question for the end of year focus groups was “What types of feedback did you receive in ENV201?”. The responses are summarised in Table 4. The students all mentioned the group feedback as their first or second recollection. Although individual written feedback was still prominent, the students were much more aware of the range and variety of feedback methods being used. They regarded questions and answers in class and on the VLE as feedback, unlike at the start of the year. The students all clearly valued the opportunities for classroom dialogue during the module.


Figure 1: First questionnaire responses, where “0” is “not at all helpful / useful” and “3” is “extremely helpful / useful”.

Type of feedback | Order of recollection (Group 1 | Group 2 | Group 3)

Group feedback
Written group feedback | 1, 12 | 2 | 1
Group oral feedback | 2 | 4, 11 | 2
Discussion of examples of student work | 11 | 3 | 6
Peer evaluation of group work | 4 | 10 | 7
Written comment on group work | 4 | 10 | 7
Model answers on VLE | -- | -- | 8
Questions on VLE | 15 | 8 | 3

Feedback to group in response to individual’s question
Verbal responses to individual questions (one-to-one) | 6 | 6, 9 | --
Verbal responses to individual questions (whole group) | 7, 9 | 7 | 8
Post-it note questions | -- | 12 | 5

Individual feedback
Individual written feedback | 3, 5, 8 | 1 | 4

Self and peer assessment
Self-assessment in class | 13 | 5, 14 | 2
Peer assessment in class | 13 | 5 | --
Self-evaluation in class (how would you improve this work?) | 14 | 13 | --

Table 4: Types of feedback used in ENV201 as recalled by the students 5 months after the completion of the module.


The discussion then turned to an evaluation of the feedback methods experienced during the module. Feedback methods the students found particularly helpful are outlined below, illustrated by student comments.

The task-based approach to report writing was positively received (the mean class submission rate for the weekly tasks was 83% in 2007 and 75% in 2008). The use of written feedback summaries was highlighted, the students reporting that the format was accessible and easy to apply to their own work. Perhaps more important were the built-in opportunities for follow-up. The tasks enabled students to make mistakes in a non-assessed environment, with an opportunity to rectify these mistakes in later work (the ENV200 report).

“we went through … examples … of what they’d done well and done badly and then you could sort of, mark it against your one, so if you’d made the same mistake you’d write ‘oh, I did that wrong’ do it better like this”

“it’s easier to do something the second time round so I would say I’d been taught something, like how to put figures and tables into my report; I might do it the first time and get one or two things wrong but then the group feedback highlights those things and the second time round you know how to do it a lot better”

“… it’s not just the work that we did, we … processed the actual feedback in a way cos it was like a product … of what we’d done and like group discussions … so it sticks in your mind …”

The students made frequent references to “doing things differently the next time”. It was recognised that the focus of ENV201 (skills) lends itself to application across the curriculum. The students also noted that the fact that “the second year counts” towards their degree makes them much more interested in improving their work.

Although the self-evaluation exercises were low on the students’ list of feedback methods, the students talked enthusiastically and in detail about their experiences of group feedback. The students valued the opportunities for dialogue during the module, talking at length about the different ways in which they could ask questions and obtain responses, notably via the VLE.

“There’s still … reluctance to actually put your hand up and say something (in a lecture). I think it’s just a little bit safer to put it on the message board …”

Several reasons were identified for this success:

• Investment of staff time (questions were answered in a reasonable time frame);
• Tutors only answered questions posed on the VLE discussion board. Students asking questions by email were asked to post them on the VLE to get an answer;
• Students answered questions, encouraged by tutors only responding to the VLE during working hours and adding supportive posts (e.g. “John’s answer was correct”, “Lucy has part of the answer, but you need to think about this as well.”);
• The VLE was used for several other aspects of the module (e.g. for downloading spreadsheets for the practicals).

The students appreciated several facets of the VLE: it was available when they needed it; it acted as an archive that could be returned to (unlike verbal conversations); and everyone could benefit from the responses, not just the person asking the question.

“… if you were stuck on something you would go on there (the VLE) and someone else had asked that question already and you could … look at the conversation they’d had and find the answer.”

The Post-it note exercise always generated many more questions than appeared on the VLE, because it was immediate (questions were asked in class while fresh in the mind) and fully anonymous. Although VLE posts can be anonymous to fellow students, the tutor can always identify the poster. Staff grouped questions by theme and responded to them on the VLE. The students found this processing useful, as it often clarified the question.


Conclusions and recommendations

In this module, group feedback, used actively by the tutor, proved to be a useful and effective method for developing the independent learning skills of the students, with advantages for both the tutor and students. The students actively engaged with formative work, because they were able to see the benefit for themselves.

From a tutor perspective, generating group feedback took approximately 3 hours each week. Individual feedback could not have been provided in the same time, and it was a much more positive experience for the tutor. Providing individual feedback can be repetitive and reinforce negative views of the students’ work, whereas looking for patterns is inherently more interesting. By standing back, the tutor can gain a greater sense of what has worked in the teaching and use the observations to review the curriculum.

For the students, the group feedback was not only as good as individual feedback, but had additional benefits. Receiving individual written feedback can be a passive experience, but the activities discussed here resulted in engagement with the comments. It was useful for the class to see comments that did not directly pertain to their own work. The students valued being able to position themselves relative to the class, and felt reassured that they were not alone in finding some tasks difficult, building their confidence during the module.

The methods discussed here are time-consuming, relative to standard assessment methods. However, by devoting one module to the development of these skills and incorporating activities designed to promote engagement with feedback, the whole programme benefits from students who are more able to learn independently.

Acknowledgements

This work was funded by the GEES Small Scale Project Fund.

References

• McDonald B. and Boud D. 2003 The impact of self-assessment on achievement: the effects of self-assessment training on performance in external examinations, Assessment in Education, 10, 2, 209-220
• National Union of Students 2008 The great NUS feedback amnesty: briefing paper http://resource.nusonline.co.uk/media/resource/2008-Feedback_Amnesty_Briefing_Paper1.pdf Accessed 21 June 2010
• Nicol D. 2008 Learning is a two-way street, Times Higher Education, 1842, 24
• Nicol D.J. and Macfarlane-Dick D. 2006 Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31, 2, 199-218
• Q.A.A. 2007 Earth Sciences, Environmental Sciences and Environmental Studies Benchmarking Statement http://www.qaa.ac.uk/academicinfrastructure/benchmark/statements/earthsciences.asp Accessed 21 June 2010
• Rust C., Price M. and O’Donovan B. 2003 Improving students’ learning by developing their understanding of assessment criteria and processes, Assessment and Evaluation in Higher Education, 28, 2, 147-164
• Yorke M. 2003 Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice, Higher Education, 45, 4, 477-501


Jacqueline M. Pates j.pates@lancaster.ac.uk<br />

Robert Blake learningdevelopment@lancaster.ac.uk<br />

Ali Cooper a.m.cooper@lancaster.ac.uk<br />


Assessment for Learning Centre for Excellence<br />

in Teaching & Learning (AfL CETL) resources<br />

The AfL CET is the national centre for expertise in Assessment for Learning and a home for innovation,<br />

development and research into assessment, learning and teaching at Northumbria University.<br />

Assessment for Learning (AfL)<br />

A key purpose <strong>of</strong> AfL is to foster student development in taking responsibility for evaluating, judging and<br />

improving their own performance by actively using a range <strong>of</strong> feedback. These capabilities are at the<br />

heart <strong>of</strong> autonomous learning and <strong>of</strong> the graduate qualities valued by employers and in pr<strong>of</strong>essional<br />

practice.<br />

Assessment for Learning:<br />

1. Emphasises authenticity and complexity in the content and methods <strong>of</strong> assessment rather than<br />

reproduction <strong>of</strong> knowledge and<br />

reductive measurement.<br />

2. Uses high-stakes summative<br />

assessment rigorously but sparingly<br />

rather than as the main driver for<br />

learning.<br />

3. Offers students extensive opportunities<br />

to engage in the kinds <strong>of</strong> tasks that<br />

develop and demonstrate their<br />

learning, thus building their confidence<br />

and capabilities before they are<br />

summatively assessed.<br />

4. Is rich in feedback derived from formal<br />

mechanisms e.g. tutor comments on<br />

assignments, student self-review logs.<br />

5. Is rich in informal feedback e.g. peer<br />

review <strong>of</strong> draft writing, collaborative<br />

project work, which provides students<br />

with a continuous flow <strong>of</strong> feedback on<br />

‘how they are doing’.<br />

6. Develops students’ abilities to direct their own learning, evaluate their own progress and attainments<br />

and support the learning <strong>of</strong> others.<br />

The AfL CETL has produced a series <strong>of</strong> Signpost leaflets, which are the first step towards capturing many<br />

<strong>of</strong> the activities undertaken by AfL staff when using Assessment for Learning methodologies to<br />

enhance their teaching. They are a personal snapshot <strong>of</strong> a specific activity and describe the idea behind<br />

it and how it was executed as well as things to bear in mind. These resources are available at: http://<br />

www.northumbria.ac.uk/cetl_afl/resources/signposts/<br />

One <strong>of</strong> the resources (signpost 13) was developed through a soil degradation and rehabilitation module.<br />

Signpost 13 - On The Edge: Supporting marginalised students in groupwork assessment<br />

http://northumbria.ac.uk/static/5007/cetlpdf/signpost13.pdf<br />



Alan P. Boyle | Dave J. Prior | Andy E. Heath<br />

School <strong>of</strong> Environmental Sciences, Herdman Building, University <strong>of</strong> Liverpool, Liverpool, L69 3GP<br />

Using portfolio assessment<br />

to engage Level 1 Geoscience<br />

students in their subject and<br />

to develop their learning skills<br />


Abstract<br />

Portfolios provide a means <strong>of</strong> collecting together<br />

materials in an organised way that can help the<br />

portfolio creator understand the materials better. This<br />

article describes a lecture-based geology module<br />

designed to help students create a portfolio <strong>of</strong> lecture<br />

notes, annotated reading and summaries with the<br />

aim <strong>of</strong> improving their early understanding <strong>of</strong> geology<br />

as well as improving their study skills and<br />

engagement with their home department.<br />

Introduction<br />

This development arose from three disparate strands.<br />

In 2003 the first author attended a “Teaching<br />

Petrology in the 21st Century” 1 workshop at the<br />

University <strong>of</strong> Montana, where LeeAnn Srogi (West<br />

Chester University) gave a presentation on “Portfolios<br />

in petrology: enhancing teaching and learning”.<br />

Portfolios have been widely used in PDP and in<br />

subjects such as Art and Architecture, but less so for<br />

the assessment for learning described in this article. It<br />

is accepted that there are assessment problems<br />

associated with more open-ended portfolios (e.g.<br />

Dempsey, 2009), but more constrained portfolio<br />

activities can be less problematic. Secondly,<br />

informative discussions with Sarah Maguire<br />

(University <strong>of</strong> Ulster) about retention and transition<br />

problems helped to crystallise some ideas about ways<br />

that students could be helped to develop better<br />

learning skills. Thirdly, a course review in May 2006<br />

provided the impetus to develop a new 7.5 CATS<br />

module for 2006-07, which addressed a number <strong>of</strong><br />

interrelated issues:<br />

1 http://serc.carleton.edu/NAGTWorkshops/petrology03/programguide.html<br />

1. most students have not studied geoscience prior to<br />

entry;<br />

2. student engagement with the geosciences as a<br />

discipline and the department;<br />

3. transition and retention matters;<br />

4. improving students’ ways <strong>of</strong> learning.<br />

It is difficult to deal with wide-ranging, and<br />

fundamentally exciting, geoscience topics at<br />

introductory level when the background geoscience<br />

knowledge <strong>of</strong> many incoming students is limited.<br />

Students with limited prior knowledge <strong>of</strong> geoscience<br />

<strong>of</strong>ten think they are disadvantaged compared to<br />

some <strong>of</strong> their peers, which can impact on retention.<br />

Modularisation has militated against cross-disciplinary<br />

topics showing the true wonder <strong>of</strong> geoscience in<br />

favour <strong>of</strong> narrow, single topic modules. Thus, new<br />

geoscience students are confronted by self-contained,<br />

subject-based topics (e.g. introductory mineralogy)<br />

and fail to see the bigger picture, leading to<br />

frustration and possibly a lack <strong>of</strong> interest.<br />

New students <strong>of</strong>ten have difficulties with the<br />

fundamental learning skills involved in combining<br />

note-taking, reading and general organisation <strong>of</strong><br />

paper-based materials into a learning process that<br />

works for them. They have problems with scholarship<br />

and essay writing, especially in providing the evidence<br />

in an essay that they have undertaken the “outside<br />

reading” to satisfy one <strong>of</strong> the major criteria for an<br />

upper second class degree. They clearly need help<br />

with the transfer from school to university education<br />

(Maguire et al. 2008).

The new module<br />

A new module, EOSC111 Special Topics in Geology,<br />

managed by Prior and moderated by Boyle, sought to<br />

address these issues. Although only 7.5 credits (75<br />

hours study time), it was taught ‘long and thin’ across<br />

the whole session.<br />

Students are required to attend ten lectures from<br />

different speakers covering topics like “Mass<br />

Extinctions”, “Snowball Earth”, “Geohazards”, “Origin<br />

<strong>of</strong> the Moon”, and “Earthquake Prediction”. Each<br />

lecture is designed to illustrate the scientific method,<br />

overtly demonstrating the relationship between<br />

evidence, interpretation and uncertainty. Each topic<br />

directs students to associated reading deliberately<br />

including a mixture <strong>of</strong> primary sources in journals,<br />

chapters in books and web resources. The lectures are<br />

split between the first and second semesters and<br />

occur at approximately two week intervals. In 2008-<br />

09 and 2009-10 M. Williams and P. Williams ran a<br />

<strong>GEES</strong> <strong>Subject</strong> Centre-funded 2D-3D visualisation<br />

project (Williams et al. submitted) as part <strong>of</strong><br />

EOSC111. Some <strong>of</strong> the student perceptions reported<br />

here came from responses to 2D-3D sessions collated<br />

by M. Williams.<br />

Students are also required to attend at least six<br />

extra-mural talks from the student society lecture<br />

series or the departmental seminar series. This<br />

requirement aimed to get students to engage early<br />

with the ethos <strong>of</strong> the department so that they could<br />

widen their geoscience knowledge base, become<br />

aware <strong>of</strong> other scientific approaches and begin to see<br />

links between topics.<br />

Outwith lectures, students have to develop a portfolio<br />

<strong>of</strong> evidence containing their contemporaneous lecture<br />

notes for the special topics lectures, handouts,<br />

annotated copies of the reading list and a one-page<br />

summary, which they write on the basis <strong>of</strong> their<br />

notes, handouts and reading. The reading is designed<br />

to cover some aspects <strong>of</strong> the topic not covered fully in<br />

the lecture. Students include the external lectures in<br />

their portfolio, providing contemporaneous notes and a<br />

summary. The portfolio is geared to help students<br />

practise reflecting on a topic and synthesise a<br />

structured summary as a means <strong>of</strong> helping them to<br />

adapt to more independent learning in their new<br />

university environment.<br />

Assessment<br />

The module is assessed by a written examination<br />

[60%] and the portfolio [40%]. The examination<br />

supports students in writing a structured essay. Four<br />

essay questions are advertised about one month<br />

beforehand. Two <strong>of</strong> these appear on the examination<br />

paper. Candidates have to answer one <strong>of</strong> them in 60<br />

minutes. They commonly have to be led through the logic that the minimum number of the four topics to revise is three, since revising only two risks both examined questions falling on unrevised topics; many hope that two would be enough.<br />
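The revision logic can be checked exhaustively; a minimal sketch (topic labels are arbitrary and purely illustrative):

```python
from itertools import combinations

TOPICS = range(4)                        # four essay questions are advertised
PAPERS = list(combinations(TOPICS, 2))   # any two of them may appear on the paper

def always_covered(revised):
    """True if every possible paper contains at least one revised topic."""
    return all(set(paper) & set(revised) for paper in PAPERS)

# Revising only two topics can leave a paper with no revised question on it,
# while revising any three topics guarantees at least one revised topic appears.
two_always_safe = all(always_covered(r) for r in combinations(TOPICS, 2))
three_always_safe = all(always_covered(r) for r in combinations(TOPICS, 3))
```

With two revised topics, the paper may consist of exactly the two unrevised ones; with three, at most one examined question can be unrevised.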

The lecture topics are split between first and second<br />

semesters, allowing assessment <strong>of</strong> the portfolio to be<br />

formative at the end <strong>of</strong> the first semester and<br />

summative at the end <strong>of</strong> the second semester.<br />

Students are not required to submit their portfolios for<br />

formative assessment, but the majority do. Formative<br />

feedback reviews the notes, reading and summary<br />

statement for each lecture attended and includes an<br />

indicative overall mark. Students are given further<br />

formative feedback with their summative portfolio<br />

grade after semester two, especially in relation to<br />

how they have taken on board semester one<br />

feedback. In general, students who did not submit<br />

their portfolio at the end <strong>of</strong> semester one get<br />

feedback that their portfolio would be better if they<br />

had. All portfolios are returned before the formal<br />

examinations period.<br />

Outcomes<br />

The module has been a success: student assessment<br />

performance in the module has been good and their<br />

feedback positive. The requirement to attend six<br />

extra-mural talks has improved attendances at<br />

invited speaker sessions and provided the spur for<br />

new students to engage more fully with the life <strong>of</strong><br />

the department.<br />

A significant minority <strong>of</strong> students did not submit their<br />

portfolios for formative assessment at the end <strong>of</strong> the<br />

first semester. This was probably a strategic decision,<br />

as no summative marks were involved. Ironically,<br />

it is also apparent from talking to the students that<br />

many were uncertain about how they were ‘doing’<br />

at University. Submission <strong>of</strong> the portfolio for formative<br />

assessment would have given them some early<br />

feedback to find out how they were ‘doing’, but we<br />

suspect some students were worried about the<br />

feedback and so avoided it by not submitting their<br />

portfolio.<br />




Students were only given explicit instructions about<br />

what should be in the portfolio, not how to organise it.<br />

Very few portfolios were easy to use at the end <strong>of</strong> the<br />

first semester. Problems included: no contents page;<br />

notes and summary hard to identify; headings<br />

missing; systematic ordering <strong>of</strong> material by topic<br />

absent or limited; and many submissions came in<br />

plastic document wallets, making assessment time-consuming.<br />

Most students used first semester<br />

feedback positively to improve their portfolios with<br />

title pages, tabbed separators, and logical ordering <strong>of</strong><br />

material in a sufficiently large ring-binder. Students<br />

who, through non-submission, did not get formative<br />

feedback produced generally poor portfolios for<br />

summative assessment.<br />

It is notable that students who did not hand in a<br />

portfolio for formative feedback at the end <strong>of</strong> the first<br />

semester did less well in portfolio assessment and<br />

examination than students who did (Figure 1,<br />

compare open circles and triangles). Simple numerical<br />

averages bear this out: 32 students who submitted no<br />

portfolio for formative assessment averaged 50% for<br />

their portfolio and 57% for their exam. Equivalent<br />

grades for 78 students receiving portfolio feedback<br />

(average indicative grade 56%) were 71% and 65%.<br />

It is unclear if this is just self-selection amongst the<br />

students (the better ones do everything), or if the<br />

better performance is a result <strong>of</strong> better engagement<br />

with the process. We suspect they were helped by<br />

engaging with the formative feedback process so that<br />

they gained a better understanding <strong>of</strong> how they were<br />

‘doing’ and what they could do to improve further.<br />
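As an illustration only, the stated 60% examination / 40% portfolio weighting lets the reported group averages be combined into an overall module mark (a minimal sketch; the module's actual aggregation rules may differ):

```python
# Reported group averages (portfolio %, exam %) from the four sessions.
groups = {
    "no formative submission (n=32)": {"portfolio": 50, "exam": 57},
    "formative feedback (n=78)": {"portfolio": 71, "exam": 65},
}

def module_mark(portfolio, exam):
    """Overall module mark under the stated weighting: exam 60%, portfolio 40%."""
    return 0.6 * exam + 0.4 * portfolio

for name, g in groups.items():
    print(f"{name}: overall {module_mark(**g):.1f}%")
```

Because the portfolio carries 40% of the module, the 21-point portfolio gap between the groups widens the overall gap well beyond the 8-point exam difference.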

Student feedback indicates that students appreciate the module. Free-text written comments have been informative. The question ‘Did the recommended reading help you understand the topic better?’ had these indicative responses:<br />

“Yes, a more in-depth understanding was achieved.”<br />

“Yes, but there was too much <strong>of</strong> some <strong>of</strong> it, for example earthquake prediction.”<br />

“Yes, the recommended reading was very important.”<br />

“Yes, but it was heavy going at times. I struggled with annotating the reading.”<br />

“Some <strong>of</strong> it. Some was too in-depth and a bit daunting.”<br />

There is clearly recognition that reading is important and helpful, but the amount expected appears daunting to some students.<br />

The question ‘Did this module encourage you to attend other lectures?’ had these indicative responses:<br />

“Yes, I made an effort to attend Herdman lectures and seminars due to the requirements <strong>of</strong> this module and found them interesting. I am not sure I would have done so otherwise.”<br />

“Without this module I would probably never have bothered with Herdman Lectures. It was a good way <strong>of</strong> introducing them.”<br />

“No, some seminars I found boring because I didn’t understand them.”<br />

Again there is evidence that the module has succeeded in engaging students (but obviously not all!) with the subject and departmental activities in a way that they may not have without the module requirement to attend extra-mural lectures.<br />

In response to the question ‘Was portfolio development useful?’ the following are indicative responses:<br />

“Good for revision as it made you sort your notes out and understand them after each lecture so it was just revision and not re-learning things.”<br />

“Yes, it helped me learn to organise my notes, summarise lectures and annotate reading materials.”<br />

“Yes. A lot. I would have struggled with the exam without the portfolio.”<br />

These comments, and others like them, are perhaps<br />

the most informative. They suggest that the module<br />

has succeeded in embedding good study skills and<br />


that the students understand their worth. The<br />

portfolio is an assessment tool that produces a grade,<br />

but more importantly it is about enhancing students’<br />

learning skills. We emphasise that these skills are not<br />

just for this module, and should be transferred to all<br />

modules, but we have done no research into how<br />

effective this transfer is.<br />

Summary<br />

In summary, we think this module has been a<br />

successful experiment in addressing a number <strong>of</strong><br />

concerns. In particular:<br />

• Students learn how to incorporate reading into<br />

their lecture-based learning materials.<br />

• Students get early formative feedback on a<br />

substantial piece <strong>of</strong> work so that they have a clear<br />

idea <strong>of</strong> how they are “doing” at university and what<br />

they can do to improve.<br />

• Students improve their organisation <strong>of</strong> topic-based<br />

learning materials.<br />

• Students engage more with departmental<br />

activities, and their improved sense <strong>of</strong> belonging is<br />

positive.<br />

We believe a module like this could be used widely in<br />

earth sciences, and other disciplines, to engender<br />

improved study skills, better engagement with the<br />

subject and host department, and ultimately<br />

better students.<br />

References<br />

• Dempsey D. 2009 The Promise and Challenges <strong>of</strong> Portfolio Assessment, http://serc.carleton.edu/files/departments/program_assessment/promise_challenge_portfolios.ppt Accessed 8 August 2010<br />

• Maguire S., Carter C. and Curran R. 2008 Pre-entry qualifications – staff perceptions versus reality,<br />

<strong>Planet</strong>, 19, 31-35<br />

• Williams M., Williams T.P. and Boyle A.P. submitted. Using internet-based problem-solving activities to<br />

enhance students’ understanding <strong>of</strong> 3-dimensional spatial relationships, <strong>Planet</strong>, 24, in press<br />

Figure 1: Comparing final written examination scores with first and second semester portfolio scores for 110 students over<br />

four sessions (2007-2010). P1 students (triangles) did not submit a portfolio for formative assessment (P1a) but generally<br />

did for semester 2 (P1b). P2 students submitted portfolios for formative assessment (P2a) and also, with one exception, for<br />

semester 2 (P2b). The black triangles on the baseline <strong>of</strong> the graph are students who did not submit a portfolio.<br />



Assessment Standards Knowledge exchange<br />

(ASKe) Centre for Excellence in Teaching and<br />

Learning (CETL) resources<br />

The work <strong>of</strong> the ASKe CETL focuses on ways <strong>of</strong> helping staff and students develop a<br />

common understanding <strong>of</strong> academic standards, through researching and disseminating<br />

established good practice. ASKe has produced a number <strong>of</strong> resources related to<br />

assessment and feedback. Of particular interest are their series <strong>of</strong> ‘1,2,3’ leaflets. These<br />

are briefly described below. If you would like a <strong>copy</strong> <strong>of</strong> these resources or more<br />

information please visit http://www.brookes.ac.uk/aske/resources.html or email aske@brookes.ac.uk<br />

ASKe 1,2,3 Leaflets<br />

ASKe has launched a series <strong>of</strong> ‘1,2,3’ Leaflets which highlight some practical ways in which<br />

teaching staff can improve their students’ learning. Each leaflet focuses on a piece <strong>of</strong><br />

assessment-related research and clearly states how that research can be applied to<br />

teaching practice in three easy steps.<br />

Assessment - doing better! How can students be encouraged to do better on their<br />

assignments? This leaflet gives three easy steps to help students tackle an<br />

assignment as well as useful tips on ‘Doing better; what you need to know’, including<br />

explanations <strong>of</strong> assessment requirements, assessment criteria and assessment<br />

standards. A really helpful guide for students.<br />

Improve your students’ performance in 90 minutes! Do you feel<br />

that your students are underperforming due to their poor<br />

understanding <strong>of</strong> your assessment standards? Are you<br />

concerned that they don’t really understand your assessment<br />

criteria? Have you ever experienced blank looks when students<br />

read your feedback comments on their work? If so, try our pre-assessment<br />

intervention!<br />

Reduce the risk <strong>of</strong> Plagiarism in just 30 mins! Do you<br />

want to avoid spending hours searching for plagiarism in your students’ work?<br />

Do you want to design out easy opportunities for copying and ‘faking’ learning?<br />

Do you want to encourage your students’ activity as a way <strong>of</strong> encouraging their<br />

learning? If so, try our three-step exercise!<br />


Adopting a social constructivist approach to<br />

assessment in three easy steps! Do you believe there’s<br />

more to assessment than passive transmission <strong>of</strong><br />

standards? Would you like to actively engage your<br />

students in the assessment process? Would you like to<br />

help your students improve their learning for themselves? If so, why<br />

don’t you try a social constructivist approach to assessment? It<br />

doesn’t take up much time, and it works…<br />


Cultivating Community: why it’s worth doing and three ways <strong>of</strong> getting there Why<br />

should we cultivate community? Evidence from the literature and practice shows that<br />

student involvement and a strong sense <strong>of</strong> community are highly significant factors in<br />

students’ academic success.<br />

Feedback - Make it work for you! How can students be helped<br />

to use their feedback? This leaflet explains what feedback is,<br />

why it is valuable, and sets out 3 easy steps for getting the<br />

very best out <strong>of</strong> feedback. A must for students to read!<br />

How to make your feedback work in three easy<br />

steps! Would you like to help your students gauge how<br />

well they are doing? Are you concerned that your<br />

students fail to understand the feedback that you give<br />

them? Would you like to actively engage your students<br />

in the assessment process? If so, why don’t you try<br />

this three-step exercise?<br />

Using generic feedback effectively Would you like to help<br />

your students gauge how well they are doing? Would you like to<br />

encourage them to become independent learners? Have you ever<br />

considered how generic feedback might improve individual<br />

performance? If so, why don’t you try this three-step exercise?<br />

Using Turnitin to provide powerful formative feedback Do you<br />

want to reduce the risk <strong>of</strong> inadvertent plagiarism in your students’<br />

work? Would you like to educate your students, before<br />

assessment, about the meaning <strong>of</strong> plagiarism and how to avoid it?<br />

Can you see the value <strong>of</strong> using the latest tools and techniques to provide feedback for<br />

your students? If so, why don’t you try this three-step exercise? It doesn’t take up much<br />

time, and it works...<br />

Making peer feedback work in three easy steps! Why<br />

should you encourage peer feedback? Would you like to<br />

enhance your students’ active engagement with their<br />

studies? Would you like to increase the amount <strong>of</strong> feedback<br />

your students receive? Would you like to increase your<br />

students’ disciplinary understanding? If so why not try our<br />

three easy steps to making peer feedback work…<br />



Dawn T. Nicholson 1 | Margaret E. Harrison 2 |<br />

W. Brian Whalley 3<br />

1<br />

Department of Environmental and Geographical Sciences, Manchester Metropolitan University; 2 retired, formerly of<br />

University of Gloucestershire; 3 School of Geography, Archaeology and Palaeoecology, Queen’s University Belfast<br />

Assessment criteria<br />

and standards <strong>of</strong> the<br />

Geography Dissertation<br />

in the UK<br />


Abstract<br />

Undergraduate dissertations may provide up to one<br />

third <strong>of</strong> the weighting <strong>of</strong> the final year <strong>of</strong> a degree in<br />

Geography and are seen as important indicators <strong>of</strong><br />

graduates’ independent research ability by<br />

prospective employers. This project examines the<br />

standards, criteria and assessment <strong>of</strong> the Geography<br />

dissertation in the UK. All Geography Departments in<br />

the UK were invited to complete a questionnaire<br />

survey that explored a range <strong>of</strong> aspects related to<br />

dissertations including format, assessment criteria<br />

and marking procedures. Responses were received<br />

from 24 Departments. The findings suggest that there<br />

is broad consensus in many areas including product<br />

format, study period, assessment criteria, and rigour<br />

and transparency in marking procedures. However, in<br />

other areas, including credit weighting, procedures<br />

followed in the event <strong>of</strong> a disagreement over marks,<br />

and interpretation <strong>of</strong> assessment criteria to students,<br />

there is wide variation in practice. Some suggestions<br />

are made to enhance equivalence and consistency in<br />

dissertation work.<br />

Introduction<br />

‘Dissertations have had a long history in geographical<br />

higher education, being widely regarded as the<br />

pinnacle <strong>of</strong> an individual’s undergraduate studies and<br />

the prime source <strong>of</strong> autonomous learning’ (Gold et al.,<br />

1991). Our previous investigations into Geography<br />

dissertations (Harrison and Whalley, 2006; Harrison<br />

and Whalley, 2008) suggest that this is still the case.<br />

Dissertations typically constitute up to one third <strong>of</strong> the<br />

overall weighting for the final year which is Level 6 <strong>of</strong><br />

the English, Welsh and Northern Irish system, (QAA<br />

2008), and Level 10 <strong>of</strong> the Scottish system (Scottish<br />

Credit and Qualifications Framework 2009) <strong>of</strong><br />

undergraduate degrees. Dissertation performance<br />

may also be used to adjudicate degree awards in<br />

borderline cases. Dissertations present an opportunity for<br />

students to demonstrate their ability to work<br />

independently and autonomously. Grades awarded<br />

for dissertations are therefore <strong>of</strong> increasing interest to<br />

employers.<br />

The role <strong>of</strong> dissertations in Geography undergraduate<br />

degrees can be set against a background <strong>of</strong> increasing<br />

concern throughout Higher Education about marking<br />

reliability, the maintenance <strong>of</strong> standards, ‘grade<br />

inflation’ and accountability. That said, there is<br />

increasing evidence <strong>of</strong> a new assessment culture<br />

emerging (Rust, 2007) that includes widespread use<br />

<strong>of</strong> assessment criteria (e.g. Harrison and Whalley,<br />

2008), grade descriptors and formative assessment<br />

as well as greater consideration <strong>of</strong> feedback timing<br />

and mechanisms. Rust (2007) argues that there are<br />

still poor practices that go unchallenged and a number<br />

<strong>of</strong> studies suggest that there are considerable<br />

inconsistencies in marking, weighting and standards<br />

(Hand and Clewes, 2000, Pepper et al., 2001), and<br />

confusion over terminology (Sadler, 2005). Ambiguities<br />

concerning the use, meaning and application <strong>of</strong><br />

assessment criteria are also evident (Webster et al.,<br />

2000). Others (Penny and Grover, 1996, Rust et al.,<br />

2003, Woolf, 2004) have commented on the<br />

interaction <strong>of</strong> students with assessment criteria and<br />

marking schemes for dissertations, and in particular,<br />

their poor conceptual understanding <strong>of</strong> expectations<br />


(Gibbs and Simpson, 2004) and poor matching <strong>of</strong><br />

assessment grades with tutors (Penny and Grover,<br />

1996). This does not accord with the QAA principle<br />

that “students and markers are aware <strong>of</strong> and<br />

understand the assessment criteria and/or schemes<br />

that will be used to mark each assessment task” (QAA<br />

2006, p16-17).<br />

This paper reports on a project to review assessment<br />

schemes and procedures for undergraduate<br />

Geography dissertations in the UK. The project sought<br />

to address the following key questions:<br />

1. What assessment criteria are used and how are they established and approved?<br />

2. How are students assisted to interpret assessment criteria? What is the role <strong>of</strong> supervision in this?<br />

3. What grade descriptors and marking schemes are used?<br />

4. What are the procedures for double marking, anonymous marking and blind marking?<br />

5. What happens in the event <strong>of</strong> a disagreement between first and second markers?<br />

The project collated baseline information about dissertations, including credit rating, length, format, time available, preparatory work, supervisory arrangements and feedback, and identified good and innovative practice. This paper addresses selected outcomes from the research. A more detailed report will be provided at a later date.<br />

Methods<br />

A questionnaire survey was sent to all Geography Departments in the UK, resulting in 24 responses (including one from environmental science). Twenty-two Higher Education Institutions (HEIs) were represented, with separate responses from different Faculties / Schools at two HEIs. Responses were evenly split in terms <strong>of</strong> pre-1992 and post-1992 HEIs. The questionnaire contained a mixture <strong>of</strong> closed and open questions and invited respondents to submit a range <strong>of</strong> documents including assessment criteria, marking schemes and grade descriptors. Many respondents additionally submitted copies <strong>of</strong> Dissertation Handbooks, written procedures in the event <strong>of</strong> a disagreement over marks, lecture slides and other materials demonstrating good practice.<br />

The nature <strong>of</strong> the dissertation<br />

At the surveyed HEIs, 79% <strong>of</strong> students (92% <strong>of</strong> single honours students) have to prepare a final year dissertation. It is sometimes, but not always, an option for combined honours, joint honours and major-minor students. Alternatives to dissertations for these students include taught modules and work-based options.<br />

Credit-rating and word length<br />

The credit weighting <strong>of</strong> the dissertation varies from 15 to 40 Credit Accumulation and Transfer Scheme (CATS) credits (Figure 1), with the modal weighting at 30 credits (25% <strong>of</strong> the final year). One HEI permits students to opt for the dissertation as either a single (15 CATS) or double (30 CATS) module.<br />

No. <strong>of</strong> responses No. <strong>of</strong> responses<br />

12<br />

10<br />

8<br />

126<br />

104<br />

82<br />

60<br />

4<br />

46%<br />

38%<br />

46%<br />

12%<br />

38%<br />

4%<br />

15 CATS 20 CATS 30 CATS 40 CATS<br />

12%<br />

Figure 21: CATS 4% credit weighting <strong>of</strong> geography dissertations.<br />

The majority<br />

0<br />

15 CATS<br />

<strong>of</strong> dissertations<br />

20 CATS<br />

(79%)<br />

30 CATS<br />

are required<br />

40 CATS<br />

to be<br />

10-12,000 words in length (Figure 2). One HEI requires<br />

a 4000-word piece <strong>of</strong> final work, but this 50% excludes a<br />

12<br />

literature review which is submitted separately at an<br />

earlier stage.<br />

10<br />

No. <strong>of</strong> responses No. <strong>of</strong> responses<br />

8<br />

12<br />

6<br />

10<br />

4<br />

8<br />

2<br />

6<br />

0<br />

4<br />

2<br />

0<br />

4000<br />

50% 29%<br />

8% 8%<br />

29%<br />

4%<br />

6000<br />

8000<br />

10000<br />

12000<br />

8% 8%<br />

4%<br />

Median word length<br />

4000<br />

6000<br />

8000<br />

10000<br />

12000<br />

Median word length<br />

Figure 2: Median word length <strong>of</strong> geography dissertations.<br />

PLANET ISSUE 23

Assessment criteria and standards of the Geography Dissertation in the UK
Dawn T. Nicholson | Margaret E. Harrison | W. Brian Whalley

The relationship between credit weighting and word length is interesting: overall, there is a positive correlation, but this masks significant variation. The majority of 10,000-word dissertations are awarded 30 or 40 credits, but one institution awards just 20 credits for the same. Another institution that provided two questionnaire responses awards 20 credits for a 10,000-word dissertation in one Faculty and 40 credits for the same word length in another Faculty.

Assessed elements

The weighting of the final dissertation product varies from 70 to 100% (Figure 4), and elements making up the remaining portion of the module include interim progress reports, oral presentations, seminars, posters and literature reviews. In addition, several programmes include formative assessed elements (particularly seminars).

Study period and product format

The overwhelming majority of students have at least 10 months to prepare the dissertation (Figure 3), although this may partly reflect the fact that 55% are introduced to preparatory work in their penultimate year.

Figure 3: Dissertation study period.

Most dissertations are submitted as hard copy, but one third also require e-submission for archiving and plagiarism detection. There is some flexibility about the format of submission, with some allowing alternative formats or attachments (e.g. audio, visual, field notebook).

Figure 4: The percentage weighting of the final report for the dissertation module.

The nature of assessment

Assessment criteria

A wide range of assessment criteria are in use at the HEIs surveyed (Box 1). In some cases these are presented explicitly as assessment criteria and in others they are embedded into grade descriptors. The criteria submitted address the fundamental requirements for the dissertation, presentation, administrative considerations (such as ethics), evidence of student independence and what we term the ‘X factor’. The latter are the characteristics of a very high quality dissertation that are not easy to define in black and white terms: flair, innovation, creativity and criticality.

Fundamentals of the dissertation
• Evidence of originality and perceptiveness
• Clarity of aims, topic identification
• Evidence of reading, awareness of literature
• Quality of research design and methodology
• Quality of data
• Presentation, analysis, evaluation, synthesis and interpretation of data
• Conceptual awareness, theoretical understanding
• Sustained argument
• Findings and conclusions justified and contextualised in the literature

Presentation
• Standard of presentation, use of English language, structure
• Use of complex academic terminology
• Correct use of referencing conventions
• Coherent integration of illustrative materials

Administrative
• Conduct including engagement with administrative processes
• Assessment of risks and ethical considerations
• Compliance with requirements

Independence
• Ability to work independently
• Exercise of personal initiative and responsibility
• Conduct and competence during practical work
• Cognitive, intellectual, practical and personal skills
• Appropriate and correct use of ICT applications
• Reflective, critically evaluating own performance and personal development

The ‘X Factor’
• Critical ability
• Creative thinking
• Flair, innovation

Box 1: The range of assessment criteria encountered, compiled from all contributing institutions.

We were interested to explore how assessment criteria were developed and established. In most cases this occurred through team discussions, working parties and formal approval: at programme review and through consultation with external examiners. Some respondents said that criteria had evolved over time or through shared experiences and tradition. In some cases criteria were generic to the University, though one HEI identified the difficulties inherent in establishing common criteria even across a single department.


Explaining assessment criteria to students

Responses to the survey are encouraging in that the majority of HEIs do something to explain and interpret assessment criteria to students: typically through the provision of detailed text-based or online materials (e.g. a dissertation guide) that explicitly include explanation of dissertation assessment criteria and expectations, and through face-to-face tutorials and classes. The latter include formal, structured supervisory sessions where criteria are discussed, and whole-cohort lectures. In some cases, it is assumed that criteria are discussed through the supervisory process, but there is no specific guidance or requirement to do so.

Marking procedures

At every HEI surveyed, dissertations are marked by two people. Only one HEI did not involve the supervisor in the marking process (Figure 5). In another case the supervisor was the second marker. In 15 cases (62%), the second marking is completed ‘blind’ (i.e. markers are unaware of grades awarded by the other marker). In seven cases, anonymous marking is undertaken, although comments suggest this is only partially successful because supervisors recognise their students’ work.


Grade descriptors and marking schemes

Almost all of the assessment criteria presented in the survey responses were couched in the context of standards or grade descriptors. However, in a number of cases it was difficult to extract the assessment criteria from the grade descriptors, and the criteria appeared to vary for different grades. It is not clear whether this reflects the confusion around assessment terminology to which Sadler (2005) refers (e.g. criteria, standards, marking schemes) or whether assessment criteria are designed to permit marking flexibility. Although not explicitly stated as such, there appears to be implicit reliance on what Rust et al. (2003) refer to as the ‘connoisseur approach’: knowledge transfer as a product of the student-tutor relationship.

The degree of numerical breakdown of each grade and the level of detail provided for each varies considerably. In some cases, broad descriptors were developed at institutional level and more detailed interpretations drawn up by departments.

There was very little evidence that any marking schemes were in use to award marks for differently weighted sections or attributes of the dissertation. One HEI breaks the marks awarded into four components (40% for content, 40% for argument, 10% for structure / approach and 10% for style) and another has a three-way division (academic context; methodology, data collection and analysis; and interpretations, conclusions and presentation).

Figure 5: The different permutations for marking procedures.

Dealing with disagreements over marks

The first thing to note about dealing with disagreements over marks is that there is considerable variation over what constitutes a disagreement: anything from a 5% to a 12% difference, or a difference of class. In most departments, markers are encouraged to engage in discussions before introducing a third marker, and many disagreements are resolved at this stage.

Where the numerical difference is small, the mean mark is often used. Where a disagreement cannot be resolved between first and second markers, it is very common practice to introduce a third marker, often determined by the module leader or Head of Department. Subsequent procedures for resolution include discussions between the three markers, discussing all three marks at internal examination boards, using the median mark, and inviting an external examiner to adjudicate.
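The resolution logic described above can be sketched as a small routine. This is illustrative only: the survey found the numerical threshold that counts as a disagreement varies between departments (anywhere from 5% to 12%), so the 8-point default below is an assumption, as is the use of the standard UK honours class boundaries.

```python
def reconcile_marks(first: float, second: float, threshold: float = 8.0) -> dict:
    """Illustrative reconciliation of two dissertation marks.

    Assumed rules (departments vary): marks within `threshold`
    percentage points of each other and in the same degree class
    are averaged; anything else goes to a third marker.
    """
    def degree_class(mark: float) -> int:
        # Count the standard UK honours boundaries (40/50/60/70) passed.
        return sum(mark >= boundary for boundary in (40, 50, 60, 70))

    if abs(first - second) <= threshold and degree_class(first) == degree_class(second):
        # Small numerical difference within the same class: use the mean mark.
        return {"mark": (first + second) / 2, "action": "mean"}
    # Larger difference, or a difference of class: introduce a third marker.
    return {"mark": None, "action": "third marker"}
```

A pair of marks such as 68 and 72 would be referred on under these rules even though they differ by only four points, because they straddle a class boundary.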

Exploring the role of external examiners a little further, we found split opinions on the matter. Some HEIs state, with some conviction, that external examiners are not asked to adjudicate, only to moderate. Others are comfortable in inviting external examiners to adjudicate where necessary. Four HEIs said that externals are commonly invited to comment on dissertations in borderline cases or where there is a particular problem such as suspected plagiarism or a failed degree where passes were achieved elsewhere.

Good practice

The survey respondents themselves identified a number of areas deemed to be good practice. There were two recurring themes:

1. Explanation and interpretation of assessment criteria for students. A variety of ways to achieve this were identified, including detailed explanation of criteria and expectations in a dissertation guide; learning activities during the penultimate undergraduate year, including peer assessment of past dissertations and discussions around criteria; and final year whole-cohort lectures including detailed discussion of criteria.

2. Marking procedures. Good practice involves following formalised procedures for double marking and blind marking, and having rigorous and transparent procedures in the event of a disagreement. There is also good practice in the use of standard mark sheets that require a brief explanation of the mark awarded by each marker and of the process of agreeing a final mark where there was disagreement.

Conclusions and recommendations

It is clear from the survey that, although the requirements for dissertations vary, there is broad consensus across the sector in terms of product format, the elements assessed, report weighting, study period, word length and credit rating. It is a concern that, within this overall consensus, the credit rating for a dissertation of comparable length varies from 20 to 40 CATS credits and that the contribution (weighting) to the degree is very variable.

A consistent range of themes is addressed by assessment criteria, typically research and analytical skills, critical ability, presentation, originality, and self-organisation and management. There is encouraging evidence that a range of approaches is being utilised to explain these criteria to students. However, a common survey response stated that ‘students are told in tutorials’. This raises the question of whether students really do understand the expectations. Clearly the level of student understanding of criteria and expectations can be assessed integrally with assessment of the final report, but it would be valuable to assess their level of understanding at an earlier stage.

There is evidence of rigour and transparency in marking procedures and widespread use of grade descriptors, although also evidence of some confusion over terminology. However, there are significant differences in marking procedures, especially the role of the supervisor, the methods for resolution of disagreements, and the role of blind and anonymous marking.

Bearing in mind these conclusions and the good practices identified, we make the following suggestions:

1. Departments should develop rigorous methods to ensure assessment criteria and the expectations of a dissertation are explicitly explained to students, preferably through an action learning approach.

2. Markers should adhere closely to assessment criteria to achieve equivalence and consistency in grading standards.

3. Staff discussions should focus on ensuring clarity over the terms of assessment criteria, standards and marking schemes, as well as marking procedures.

4. Departments should review procedures for dealing with disagreement over marks to ensure they are transparent, consistent and rigorous.


Acknowledgements

We offer our thanks to the GEES Subject Centre for providing financial support for this project through the Small Scale Project Fund; to Karen Logan for her assistance with data collection and collation; and to the GEES community for completing questionnaires and submitting details of assessment criteria and procedures.

References

• Gibbs, G. and Simpson, C. 2004 Conditions under which assessment supports students’ learning, Learning and Teaching in Higher Education, 1, 3-31
• Gold, J. R., Jenkins, A., Lee, R., Monk, J., Riley, J., Shepard, I. and Unwin, D. 1991 Teaching Geography in Higher Education, Basil Blackwell, Oxford
• Hand, L. and Clewes, D. 2000 Marking the difference: an investigation of the criteria used for assessing undergraduate dissertations in a business school, Assessment and Evaluation in Higher Education, 25, 1, 5-21
• Harrison, M. E. and Whalley, W. B. 2006 Combining student independent learning and peer advice to improve the quality of undergraduate dissertations, Planet, 16, 15-18
• Harrison, M. E. and Whalley, W. B. 2008 Undertaking a dissertation from start to finish: the process and product, Journal of Geography in Higher Education, 32, 3, 401-418
• Penny, A. J. and Grover, C. 1996 An analysis of student grade expectations and marker consistency, Assessment and Evaluation in Higher Education, 21, 2, 173-184
• Pepper, P., Webster, F. and Jenkins, A. 2001 Benchmarking in Geography: some implications for assessing dissertations in the undergraduate curriculum, Journal of Geography in Higher Education, 25, 23-35
• Quality Assurance Agency 2006 Code of Practice for the Assurance of Academic Quality and Standards in HE, Section 6 - Assessment of Students, http://www.qaa.ac.uk/academicinfrastructure/codeOfPractice/section6/COP_AOS.pdf Accessed 22 June 2010
• Quality Assurance Agency 2008 The Framework for Higher Education Qualifications in England, Wales and Northern Ireland, August 2008, QAA, Mansfield, http://www.qaa.ac.uk/academicinfrastructure/FHEQ/EWNI08/FHEQ08.pdf Accessed 21 June 2010
• Rust, C. 2007 Towards a scholarship of assessment, Assessment and Evaluation in Higher Education, 32, 229-237
• Rust, C., Price, M. and O’Donovan, B. 2003 Improving students’ learning by developing their understanding of assessment criteria and processes, Assessment and Evaluation in Higher Education, 28, 146-164
• Sadler, D. R. 2005 Interpretations of criteria-based assessment and grading in higher education, Assessment and Evaluation in Higher Education, 30, 175-194
• Scottish Credit and Qualifications Framework 2009 The Scottish Credit and Qualifications Framework, http://www.scqf.org.uk/AbouttheFramework/Overview-of-Framework.aspx Accessed 22 June 2010
• Webster, F., Pepper, D. and Jenkins, A. 2000 Assessing the undergraduate dissertation, Assessment and Evaluation in Higher Education, 25, 1, 71-80
• Woolf, H. 2004 Assessment criteria: reflections on current practices, Assessment and Evaluation in Higher Education, 29, 4, 479-493

Dawn T. Nicholson d.nicholson@mmu.ac.uk
W. Brian Whalley b.whalley@qub.ac.uk



Tim Stott
Faculty of Education, Community & Leisure, Liverpool John Moores University

Diversity in Level 1 GEES Assessment: moving from less of more to more of less

Abstract

This article reports on an analysis of assessment items in a Level 1 Earth Science and Climatology module at Liverpool John Moores University (LJMU). It examines the effect of increasing the diversity of assessment methods in the module and increasing the number of summative assessment items from 4 to 8.

The effect of doubling the number of assessment items on student performance is examined. Regression analysis highlights differences in how well the marks in the 8 assessment items predict the students’ final module total. The analysis highlights how students ranked overall in the top quartile performed better in the so-called ‘deep learning’ assessments (field reports, weather analysis and use of a wiki), whereas students in the lowest quartile performed better in so-called ‘shallow learning’ assessments (on-line multiple choice test and the formal written examination). Individual students’ ‘assessment profiles’ are examined, and strengths (high class ranking) compared with weaknesses (low class ranking). The reasons for the differences are explored and discussed in the light of students’ motivations, recently introduced ‘graduate skills mapping’ in the University, and research by assessment experts.

Introduction

What influences students’ learning most is not the teaching but the assessment (e.g. Miller and Parlett, 1974). Students have been described as ‘strategic learners’ who are assessment-led or ‘assessment-driven’. Increased pressure on students’ time may be one reason why students become assessment-focused (Gibbs, 1992, p101). Traditionally, assessment primarily measures students’ performance (summative assessment), but more recently the notion of formative assessment has been raised as an important aspect of learning in Higher Education (HE) (Gibbs, 1999) and as a factor in student retention (Yorke, 2001).

The Assessment in Geography guide states:

“Few of us will have been systematically introduced to assessment issues when we began lecturing. It is something we have grown into and accepted, perhaps ‘caught’ rather than ‘learned’ and, for some of us, experimented with.”
Bradford and O’Connell (1998, p1)

This article reports on just such ‘experimentation’ within the Level 1 module ECLOE1201 at LJMU.

The UK Government’s widening participation (WP) in HE policy is generating larger numbers of students with wider ranges of backgrounds, qualifications, experiences and diversity of learning needs in class. One strategy employed in a workload allocation model at LJMU to address WP, issues of transition into HE, and retention of Level 1 students, has been that of ‘front loading’ teaching. This involves allocating more teaching hours at Level 1; typically 10% more teaching/contact hours are allocated to Level 1 modules.


The ‘experimentation’ with assessment reported here resulted from a desire:

1. to structure the ‘non-contact’ or ‘independent study’ hours of Level 1 students more effectively, so that they embraced University-appropriate learning styles as effectively and as early as possible;

2. to encourage Level 1 students to embrace new learning technologies available through the University’s Blackboard Virtual Learning Environment (VLE).

The context

The Level 1 Earth Sciences and Climatology module is a semester-long (15-week), 24 credit (double) module. A full time student at LJMU studies 120 credits per year, or 1200 hours. It is a core module in the BA/BSc (Hons) Outdoor Education programme. It has been delivered by the author (with assistance from another colleague for the 1.5 days of fieldwork) since the mid-1990s. Twenty topics are addressed (e.g. plate tectonics, rocks and minerals, weather systems, micro-climatology) through 35 hours of class-based and 10 hours of field trip contact time in the 240 hours allocation. Assuming students attend all 45 teaching hours, this leaves 195 independent study hours. It was the author’s desire to help students structure and use these 195 independent study hours more effectively through changing the assessment pattern. This action-learning research evaluates this curriculum intervention through:

1. assessing the effect of increasing the assessment diversity on students’ performance;

2. examining whether certain groups of students perform better in certain types of assessment;

3. reflecting on these changes to inform future planning.
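The notional-hours arithmetic in the context above can be made explicit. A minimal sketch; the 10 hours-per-credit ratio is derived from the 120 credits = 1200 hours figure quoted in the text:

```python
HOURS_PER_CREDIT = 1200 / 120  # 120 credits per year = 1200 hours at LJMU

def independent_study_hours(credits: int, contact_hours: float) -> float:
    """Notional independent study hours left after timetabled contact."""
    return credits * HOURS_PER_CREDIT - contact_hours

# The 24-credit module: 35 class-based hours + 10 field trip hours of contact.
hours = independent_study_hours(24, 35 + 10)
print(hours)  # 195.0
```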

Assessment schedule 2002-07 (weighting; word length/time; week of submission deadline)

Coursework (50%)
1. Coursework: library based essay on individual topic (25%; 1500 words; week 5)
2. Micro-climatology Field Report (25%; 1500 words; week 15)

Examinations (50%)
3. 3-hr written examination: 1 hr multiple choice questions; 1 hr short answer questions; 2 x 30 min essay questions, choose 2 from 6 (50%; 3-hr)

Assessment schedule 2008-09 (weighting; word length/time; week of submission deadline)

Coursework (50%)
1. Coursework: Rocks & Minerals Resource (12.5%; 1000 words; week 4)
2. Coursework: Weather Report (12.5%; 1000 words; week 9)
3. 5-min Micro-lecture and summary input to module Wikipedia (5%; 500 words)
4. Micro-climatology Field Report (15%; 1200 words; week 15)
5. Contribution to module Wiki and Blog: ‘Earth & Atmosphere in the News’ (5%; 5 x 100 = 500 words)

Examinations (50%)
6. Practical File including in-class tests and worksheets (10%; variable)
7. On-line test (15%; 1-hr; week 8)
8. 2-hr written examination: 1 hr short answer questions; 2 x 30 min essay questions, choose 2 from 4 (25%; 2-hr)

Table 1: Module assessment schedule before (2002-07) and after (2008-09) changes.
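Under the 2008-09 schedule the module total is the weighted sum of the eight item marks, with the Table 1 weightings summing to 100%. A minimal sketch of that arithmetic (the dictionary keys and the example marks are invented for illustration):

```python
# 2008-09 item weightings from Table 1, as fractions of the module total.
WEIGHTS = {
    "rocks_resource": 0.125, "weather_report": 0.125, "micro_lecture": 0.05,
    "field_report": 0.15, "wiki_blog": 0.05, "practical_file": 0.10,
    "online_test": 0.15, "written_exam": 0.25,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weightings sum to 100%

def module_total(marks: dict) -> float:
    """Weighted module total (%) from per-item marks (%)."""
    return sum(WEIGHTS[item] * mark for item, mark in marks.items())

# A student scoring 60% on every item scores 60% overall (to within rounding).
total = module_total({item: 60 for item in WEIGHTS})
```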


The ‘experiment’

From 2003 to 2009 the number of students enrolled on the module increased from 29 to 52, and the author’s perception was that the diversity of learning needs increased with the number of students. A new assessment schedule was developed and introduced in 2008. The assessment tasks before and after the changes are summarised in Table 1. The new schedule was designed to break the assessment tasks into smaller ‘chunks’ with more deadlines spread across the module. The rationale behind this was to encourage students to spread their workload, rather than leaving much of the work until the ‘last minute’ at the end of the module, as seemed to be the case for many students in the pre-2007 schedule.

The new assessment schedule introduced new e-technologies: the paper version of the multiple choice questions (MCQ) covering all module topics, previously part of the written 3-hr examination, was removed and introduced as an on-line test earlier in the module (Sly, 1999; Stott and Meers, 2002; Stott et al., 2004; Stott, 2006). This test was supported by formative assessment in the form of 17 on-line revision quizzes, one for each topic, which students could attempt as many times as they liked. Blackboard gave the students feedback on each question, and some of the questions, students were assured, would appear in the summative on-line test in week 8, worth 15% of the module weighting. A further innovation was the use of Blackboard wikis. A short wiki training session was delivered early in the module. Students were tasked with entering information, which they had researched, into the module wiki so that their research could be shared with all students on the module.

Marks awarded for all assessment items, along with attendance data, were available from 2003 to 2009.

Results and discussion

Influence of each assessment on the overall module total (Aim 1)

In order to examine the influence of each individual assessment item on the overall module total, the marks in each individual assessment were plotted against the overall module total (example in Figure 1) and linear regression relationships were fitted. Since the weighting of individual assessment items ranged from 5-15%, a scatter plot of the class mean for weighted assessment items vs. unweighted assessment items was also produced, to test whether the weighting assigned to an assessment item itself influenced the module total (Figure 2). Since there is a statistically significant relationship in Figure 2, it is unlikely that the weighting of the individual assessment item is the main influence on the module total; rather, it is the nature of the assessment task itself (i.e. students' marks in assessment items are not driven by the weighting assigned). This may not be the case where the difference between weightings is larger.

Figure 1: Scatter plot of % scored in one item of assessment (in this case the Blackboard on-line test worth 15% of the module weighting) vs. overall module total (%).

Figure 2: Scatter plot showing class mean for weighted assessment items vs. unweighted assessment items.
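The per-item analysis described above (Pearson's r and a fitted regression line for each assessment item against the module total) can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual data or analysis script: the marks are invented for demonstration.

```python
import math

# Hypothetical marks (%) for one assessment item and the overall module
# totals for the same ten students -- invented data for illustration only.
item = [55, 62, 48, 71, 66, 59, 40, 75, 68, 52]
total = [58, 65, 50, 70, 64, 61, 45, 78, 66, 55]

def pearson_r(x, y):
    """Pearson correlation coefficient, as reported in Table 2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def regression(x, y):
    """Least-squares line total = slope * item + intercept, as in Figure 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

r = pearson_r(item, total)
slope, intercept = regression(item, total)
print(f"r = {r:.2f}; total = {slope:.2f} * item + {intercept:.2f}")
```

Repeating this for each assessment item and each cohort year yields a grid of r values like Table 2, which can then be tested for significance against the cohort size.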

PLANET ISSUE 23

Diversity in Level 1 GEES Assessment: moving from less of more to more of less
Tim Stott

Correlation Coefficients (r)

Assessment item                       Weighting   Weighting   2003   2004   2005   2006   2007   2008   2009   MEAN
                                      2003-07     2008-09
n (cohort size)                                                 29     29     29     32     35     40     51     35
Rocks Assignment (1500 words)           0.125       0.125     0.63   0.62   0.84   0.60   0.56   0.59   0.56   0.63
Weather Assignment (1500 words)         0.125       0.125        -   0.68      -   0.56   0.66   0.51   0.70   0.62
Written exam: MCQ/on-line test          0.15        0.15      0.81   0.46   0.80   0.66   0.62   0.37   0.72   0.64
Written exam: Short Answer              0.15        0.125     0.79   0.71   0.88   0.73   0.77   0.64   0.60   0.73
Written exam: 2 x 30 min Essays         0.2         0.125     0.69   0.62   0.72   0.62   0.73   0.55   0.55   0.64
Exam total                              0.35        0.25      0.88   0.74   0.85   0.70   0.81   0.62   0.62   0.75
Field Report (1500 words)               0.25        0.15      0.52   0.71   0.73   0.70   0.55   0.68   0.68   0.65
Wiki News Report (5 x 100 words,
  new in 2008)                            -         0.05         -      -      -      -      -   0.55   0.35   0.45
Practical Worksheet (16 worksheets,
  new in 2008)                            -         0.1          -      -      -      -      -   0.64   0.61   0.62
Wiki Lecture Summary File (500 words,
  new in 2008)                            -         0.05         -      -      -      -      -      -   0.41   0.41
Total weighting                         1.00        1.00

MEAN                                                          0.72   0.65   0.80   0.65   0.67   0.57   0.58   0.66
MAX                                                           0.88   0.74   0.88   0.73   0.81   0.68   0.72   0.78
MIN                                                           0.52   0.46   0.72   0.56   0.55   0.37   0.35   0.50
n (items)                                                        6      7      6      7      7      9     10   7.43
STDEV                                                         0.13   0.10   0.07   0.06   0.10   0.09   0.12   0.10
SE                                                            0.05   0.04   0.03   0.02   0.04   0.03   0.04   0.04

MEAN r before the new tasks (2003-07): 0.70 (St Error 0.04); after (2008-09): 0.58 (St Error 0.03); overall: 0.64 (St Error 0.04).

Table 2: Pearson correlation coefficients (r) for individual assessment items vs. module total (2003-09). In the original, cells are shaded according to statistical significance level: p = 0.001 (99.9%), p = 0.01 (99%), p = 0.05 (95%). A dash marks a cell not present (or not recoverable) in the source.


Having established regression relationships (R² values) for each assessment item for each year of the study, the correlation coefficients (r) are presented in Table 2 (shaded according to their statistical significance levels). Note that the written examination items have been presented as sections (MCQ, short answers, essays) independently, as well as the combined exam score. Most, but not all, items are significant at the p < 0.05 level, with four significant at the p < 0.01 level.

In general, the examination assessment items (MCQ/on-line tests, short answers and essays in written exams) tend to have a greater influence on the final module total than the coursework items. The first assessment to be submitted, in week 4, the rocks and minerals teaching resource (1000-word coursework submitted as a PowerPoint presentation, leaflet or web site), is only a statistically significant influence on the final module total in 2005. University guidance states that feedback must be given on all assessments within four weeks (though in this case it is normally within two weeks of submission), so it seems that students use this feedback to perform better in the second assessment task (the weather report), which has a statistically significant influence on the final module totals in 2004, 2007 and 2009.

In the examination assessments, the short answer question section of the written examination is the only form of assessment which has a statistically significant influence on the final module total in every year. This is interesting, and may well reflect the fact that students are more used to this type of assessment through GCSE and A-level examinations than any other, so it is a good predictor of the final module total. The MCQ item (paper-based, then changed to on-line) has a statistically significant influence on the final module total in four of the seven years, whereas the exam essay questions are only statistically significant in two of the seven years. This suggests that examination essays are not a good predictor of the final module total in most years. This may be because students are not used to this skill, it is not practised enough, or, since the essays are normally tackled at the end of the written exam, students are fatigued or feel they have scored enough marks in other parts of the module to have passed. The field report has a statistically significant influence on the final module total in five of the seven years.

Of the new assessment tasks introduced in 2008 (and retained in 2009), the wiki tasks did not have a statistically significant influence on the final module total, but the practical worksheets did. The worksheets are collected at lectures by the students, completed partially during teaching sessions and finished as independent study. All are submitted at the end of the module. This task certainly seemed to have been taken seriously by students in 2008 and 2009, whereas the wiki tasks were not. This may have been because wikis were new, or perhaps because the training or the purpose of the tasks was not communicated well enough. The use of wikis (and blogs) as learning and teaching methods is growing rapidly, and more innovative ways of engaging students with these technologies are being considered.

This analysis offers one way in which HE teachers can monitor the influence of their assessment tasks on the final module total.

How does the performance of different groups of students vary with different assessment tasks? (Aim 2)

In order to answer this question, students' rankings in assessment items were compared to their final module total ranking. Students in the 2008 and 2009 cohorts were ranked (highest to lowest) according to their final module total (the aggregate score in all eight assessments). Their scores in each individual assessment item were ranked, and the difference between their rank in that task and their final module ranking was calculated (Table 3). This gave an indication of whether an individual, or group of students, did better (over-performed), as well, or worse (under-performed) in that task than in the module overall.
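The rank-difference calculation just described can be sketched as follows. The marks are invented for illustration, and the convention (rank 1 = highest; diff = module ranking minus task ranking, so a negative value means under-performance on that task) is inferred from Table 3.

```python
# Sketch of the Table 3 calculation: rank students on the module total and
# on one assessment item, then take (module rank - item rank) as the
# over-/under-performance measure. All marks are invented.
def rank(scores):
    """Map each student to a 1-based rank, highest score = rank 1."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {student: i + 1 for i, student in enumerate(order)}

module_totals = {"A": 72, "B": 65, "C": 58, "D": 51}
rock_marks = {"A": 68, "B": 50, "C": 62, "D": 45}

module_rank = rank(module_totals)
rock_rank = rank(rock_marks)

# Negative diff = under-performed on this task relative to the module;
# positive diff = over-performed, as in Table 3.
diffs = {s: module_rank[s] - rock_rank[s] for s in module_totals}
print(diffs)  # student C over-performs on the rocks assignment
```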

The students were then grouped into four quartiles (based on overall module ranking) and a measure of each quartile's performance in each assessment was calculated (called the Inverse Sum of Ranks: the ranks for the group are added up, then the values are made positive by adding the lowest value to them all). These are plotted in Figure 3 and generally show the expected trend: the rankings show 1st quartile > 2nd quartile > 3rd quartile > 4th quartile. Indeed, in both years the 1st quartile out-performs the other three groups in all assessment items. Conversely, the 4th quartile always performs worse than the other three quartiles.

However, there are exceptions to this predictable pattern. In 2008, the 3rd quartile group out-performs the 2nd quartile in the Rocks Resource assignment, practical worksheets and on-line test. In 2009 the pattern is not repeated: the 3rd quartile group again out-performs the 2nd quartile, but this time in the weather analysis project, the field report and one of the wiki tasks. At the time of writing, no obvious explanation for these differences is readily forthcoming and further investigation will be required. However, these findings do seem to indicate that different assessment tasks may favour different groups of students within a cohort, which is not entirely surprising given the diversity of learning styles (Kolb, 1984) typical in a cohort of 40-50 students. Unfortunately, in this short study the particular tasks varied from one year to the next.
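One reading of the quartile measure described above can be sketched as follows. The source does not fully specify how the sums are made positive, so the negate-and-offset step here is an assumed interpretation, and the ranks are invented (a perfectly ordered cohort of 40).

```python
# Sketch of the quartile analysis: split the cohort (already ordered by
# module total) into quartiles, then summarise each quartile's ranks in
# one task. The "Inverse Sum of Ranks" here negates the sums (so better
# groups score higher) and shifts them positive -- an assumed reading of
# the source's description. Ranks are invented.
task_ranks = list(range(1, 41))  # 40 students; rank 1 = best in the task

def quartile_scores(ranks):
    n = len(ranks) // 4
    quartiles = [ranks[i * n:(i + 1) * n] for i in range(4)]
    sums = [-sum(q) for q in quartiles]  # negate so better groups score higher
    offset = -min(sums)                  # shift all values positive
    return [s + offset for s in sums]

# For a perfectly ordered cohort the scores fall 1st > 2nd > 3rd > 4th,
# the "expected trend" shown in Figure 3.
print(quartile_scores(task_ranks))
```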

Weighting (%)              100      10           15           15           10
                         Module   Rock          Weather      Field        Wiki (etc.)
                         ranking  rank    diff  rank   diff  rank   diff  rank   diff
Top student                 1       1       0     2     -1     3     -2    10     -9
Second student              2       3      -1    11     -9    16    -14     4     -2
Third student               3      16     -13    14    -11    10     -7     3      0
Fourth student              4      13      -9    25    -21    21    -17     5     -1
etc.                        5      15     -10    21    -16     1      4     2      3

Table 3: Ranking of students' performance in different assessment items, and calculation of the difference between each assessment item and their overall module total ranking (diff = module ranking minus task ranking).


Figure 3: Rankings in different assessment items analysed by student quartiles for 2008 and 2009 module cohorts.


Reflections on the findings and looking forward (Aim 3)

Reflecting on these findings answers some questions but inevitably raises more. While the correlations between students' marks on single assessment tasks vs. their final module mean may be a way for HE teachers to assess the usefulness of a particular assessment item in contributing to the final module mark, it is clearly not an exact science. The correlation coefficients vary from year to year, but it is possible to see that some types of assessment are better predictors of the final module mark than others. The effect of adding new tasks was, as expected, a partial 'dilution' of the influence of the existing forms of assessment, as weightings of former items are reduced to make space for the new items. Recent research (Gibbs, 2009) suggests that HE teachers should be making more use of formative assessment strategies throughout a module (or whole degree), with fewer summative assessment tasks, mainly coming at the end of the degree programme. Our experience at LJMU tends to show that unless a task carries some form of assessment weighting, students tend not to take it seriously, or even avoid it altogether. However, in this study, even with lots of smaller assessment tasks with deadlines spaced throughout the module, and with feedback on the earlier items available before the later tasks are assessed, the 4th quartile of students still show low engagement and poor attendance in general.

Perhaps there is a need to change students' expectations right from the start. A great deal rests with the University Assessment Regulations where, at LJMU, the Level 1, 2 and 3 weightings towards final degree classification are 0%, 25% and 75%. Should we be moving to 20%, 30%, 50% (to encourage students to take Level 1 more seriously), or perhaps, as Gibbs (2009) suggests, to 0%, 0%, 100%, but with lots of formative assessment and feedback in Levels 1 and 2 so that by Level 3 students are performing at their 'peak'?

End-of-module fade can also be a problem for some students. Some students start off eager to make a good impression, with great intentions of completing all the tasks to the best of their ability. Later in the module their motivation fades. This may be due to personal issues, poor time management or feedback on earlier assessment tasks which tells them they are doing well enough to pass. They slacken off their effort later in the module, or do not submit some smaller/later tasks at all, because they know they have done enough to pass (all that is required at Level 1). Some HE teachers have insisted that all tasks must at least be submitted, otherwise none of the tasks will be counted (or marked), but this could still mean poor effort is made in some smaller weighted tasks. Is a task weighted 5-10% worth doing? Is it enough to motivate students into doing their best work? The answer is 'probably not', but students may still be learning important new skills (such as using the module wiki) by attempting the task and engaging with the problem. In well-designed programmes, the new skills they learn at Level 1 lay the foundation for success at Levels 2 and 3, leaving students better placed to succeed. Perhaps the order in which the tasks are presented is crucial. One strategy may be to present low weighted, smaller tasks early in the module (5-10%), with higher weighted (25-50%) ones towards the end.

Another strategy used by HE teachers to ensure high submission rates is to 'sample mark'. Here, for example, six tasks are set, but the assessor states that only one will be marked, without saying which one until all six tasks are submitted. This clearly reduces the marking load and should ensure that all students complete all the tasks, but it may lead to ill-feeling among students, who have only had feedback on one-sixth of their work.

Separating classes into groups (in this example quartiles) does seem to reveal unexpected differences in performance in some assessment tasks. In several tasks in both years the 3rd quartile performed better than the 2nd. However, the 4th quartile never performed better than the 3rd, although in some assessment tasks the gap was narrowed. For some reason, certain groups of students (in this case the 3rd quartile) sometimes perform better than expected. This may be because the task better matched their learning style, or because they were more motivated towards some tasks than others. The division of the cohort into quartiles in this example is clearly arbitrary, but the technique could be used to draw out individual students' strengths (where their assessment task ranking is better than their overall module ranking), so that HE teachers can see the strengths and weaknesses of individuals and groups of students. This information could be useful in designing future assessment strategies at Levels 2 and 3. Some HE teachers have argued for giving students more choice in the way they are assessed, particularly at Levels 2 and 3, so that they can maximise their performances and/or address their weaknesses. This is not really novel: school teachers routinely differentiate their classes, and many tailor Individual Education Plans for their pupils.

Conclusions

1. Where module assessments consist of several discrete marks which are aggregated to give a final module total, linear regression relationships between marks scored in individual assessment items and the final module total seem to offer a way in which HE teachers can gauge the influence of certain assessment tasks on the final module total. Analysis over seven cohorts (years) of students shows that the influence varies from year to year. When extra (new) assessment items are added, a pattern emerges in which some assessment tasks consistently influence the final module total, while others are less consistent in their influence.

2. The introduction of new assessment tasks, including a module wiki task and a weekly practical worksheet, showed that students' marks in the wiki tasks had no statistically significant influence on the module total, whereas the weekly practical worksheet did.

3. An analysis of students' rankings in eight assessment items for two cohorts showed that in most cases the expected pattern held. However, in 2008 the 3rd quartile group out-performed the 2nd quartile in the Rocks Resource assignment, practical worksheets and on-line test. In 2009 the pattern was not repeated: instead the 3rd quartile group out-performed the 2nd quartile in the weather analysis project, the field report and one of the wiki tasks.

Further research

This study is a preliminary one, with some experimentation and a sample of techniques which HE teachers might consider using to find out more about students' responses to the assessment tasks designed to help their learning. One avenue this study has not explored is special learning needs. In any one year, up to 10% of the cohort studying this module will have varying degrees of dyslexia, affecting reading speed, writing, letter/word recognition, sequencing, planning and the time needed to complete assignments. Typically, these students are awarded 25% extra time in written examinations and sometimes, depending on the severity of the condition, extra tutor support. Researching the suitability of different styles of assessment for these students is one avenue for further investigation.


References

• Bradford M. and O'Connell C. 1998 Assessment in Geography, Geography Discipline Network, Cheltenham
• Gibbs G. 1992 Assessing More Students, Oxford Centre for Staff Development, Oxford
• Gibbs G. 1999 Using assessment strategically to change the way students learn. In Brown S. and Glasner A. (Eds.) Assessment Matters in Higher Education, SRHE, Open University Press, Buckingham
• Gibbs G. 2009 Designing assessment for entire degree programmes, Designs for Assessment Conference, Leeds Metropolitan University, http://flap.teams.leedsmet.ac.uk/conference-23rd-june-2009 Accessed 1 July 2010
• Kolb D.A. 1984 Experiential Learning, Prentice-Hall, Englewood Cliffs, New Jersey
• Miller C.M.I. and Parlett M. 1974 Up to the Mark: A Study of the Examination Game, Society for Research into Higher Education, Guildford
• Sly L. 1999 Practice tests as formative assessment improve student performance on computer managed learning assessments, Assessment and Evaluation in Higher Education, 24, 3, 339-344
• Stott T.A. and Meers P.M. 2002 Using Blackboard VLE to Support Referral Students at JMU, LJMU Internal Report to Learning Development Unit, Liverpool
• Stott T.A., Boorman A. and Hardy D.P. 2004 Developing Mountain Navigation Skills in Outdoor Education: Part 1, An Evaluation of Questionmark Perception, LJMU Learning & Teaching Press, 4, 1, 17-19
• Stott T.A. 2006 Evaluation of the use of supporting diagrams and video clips in Blackboard's on-line assessment tests. Poster presented at 1st Pedagogical Research in Higher Education conference, Pedagogical Research: Enhancing Student Success, Liverpool Hope University
• Yorke M. 2001 Formative assessment and its relevance to retention, Higher Education Research and Development, 20, 2, 115-126.

Tim Stott t.a.stott@ljmu.ac.uk

GEES Photo Competition 2009/10

WINNER
Harry J Bailey
"Evolution of Transport"
Oman desert, 18/03/2010



A Word in Your Ear 2009: Conference Report
Derek France, Department of Geography, University of Chester

This one-day conference focussed on the relatively new and important area of practice centred on 'audio feedback', and was held at Sheffield Hallam University in December 2009. Over 110 delegates attended the event from 47 HEIs, including institutions in the USA and Malaysia.

The keynote address by Bob Rotheram of Leeds Metropolitan University, who led the JISC-funded "Sounds Good" project, used his experiences to reflect on the project and highlight the main findings, practice tips and limitations of audio feedback.

The programme was divided into three sessions, with contributions made by academics, students, educational developers and learning technologists through 16 short papers, 12 posters, 3 workshops, a student panel and 3 Challenge Circle discussion groups.

The Conference website http://research.shu.ac.uk/lti/awordinyourear2009/ (accessed 21 June 2010) provides access to a range of resources, including the conference proceedings and a series of downloadable podcasts recorded during a number of the conference sessions, workshops and discussions.

Derek France d.france@chester.ac.uk

GEES Photo Competition 2009/10

Jonathan Kinnear
"Angel's Landing"
Captured from the summit of Angel's Landing, Zion National Park, Utah, USA, on March 25th 2009



Marks, remarks and feedback. Do we really need examinations?

W. Brian Whalley
School of Geography, Archaeology and Palaeoecology, Queen's University Belfast

Abstract

This paper explores the role of marking and feedback styles in learning and discusses ways in which academics respond to student work, using the analogy of an 'educational control system'. It is argued that this style of thinking can help assessors to refine their approach to assessment and feedback. In engineering terms, 'control system events' (in this instance, for example, exams, tests and feedback) act as inputs to the next stage of (student learning) development. This paper argues that activities themselves are feedback, and that marks plus associated (feedback) remarks or comments together drive the student learning process and educational attainment. This process can be assisted by making assessment terminology and criteria clear. Research on meta-cognition suggests that students need to know about the learning process, that is, what is expected of them, as much as what they 'learn', and that thinking styles are as important as learning styles in all assessment processes, including coursework. Examinations should provide feedback as part of the educational control system. Suggestions are made to assist the use of feedback and remarks to enhance attainment.

Introduction

This paper employs an engineering metaphor to consider how assessment and feedback drive educational attainment. Introducing students to these notions is argued to be central to their understanding of the differences between HE learning and their pre-HE experience. The 'First Year Experience' is an especially important part of the undergraduate programme. Ideally, students and staff need to understand assessment terms and criteria so that feedback comments are aligned and understood consistently in all assessed work. Engaging students with their own learning, and being explicit about assessment criteria and feedback mechanisms at an early stage in their HE experience, will help in the "scholarship of assessment" as defined by Rust (2007).

Whalley (2008) argued that we still have a 'Victorian educational system' in the form and manner of assessment; we might call this the 'University Challenge' process. Points (module marks) are won by competitors and scaled according to the total gained year by year. Assessment in GEES subjects is still dominated by end-of-module examinations. However, for students in their first year (Level 1), university-style assessment is a new experience. Students find it hard to know what is expected of them and may not perform as well as they had hoped. This paper focuses on ways to inform Level 1 students about the university assessment experience so that they can maximise their achievements and learn from the process.

An educational control system

The approach taken here is to look at learning in terms of an engineering 'control system' metaphor, in which the processes of feedback are designed into a module. Such a system helps academics to consider where, how much and when assessment processes are included in any learning experience.

A simple engineering control system has an input level of a variable, an output level, some form of reference sensor and a control mechanism. In a domestic central heating system, for example, a device heats the air in a room. The sensor determines what the temperature actually is, compares it to the reference and allows heating to continue if the desired set-point (temperature) has not been reached.
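The thermostat analogy can be made concrete with a minimal on/off control loop; this sketch is purely illustrative, with an invented set-point and invented heating/cooling rates.

```python
# Minimal on/off ("bang-bang") thermostat loop illustrating the control
# system described above: sense the output, compare it to the reference
# set-point, and drive the heater accordingly. All rates are invented.
SET_POINT = 20.0  # desired room temperature (degrees C) -- the reference

def step(temp, heater_on):
    """One time step: heat if the heater is on, otherwise cool slowly."""
    return temp + (0.5 if heater_on else -0.1)

temp = 15.0
for _ in range(30):
    heater_on = temp < SET_POINT  # the comparison is the 'sensor' signal
    temp = step(temp, heater_on)

print(f"temperature settles near the set-point: {temp:.1f}")
```

In the educational parallel, the 'sensor' reading corresponds to assessment, and the corrective drive back towards the set-point corresponds to the feedback that remarks and marks supply.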

The term 'feedback', in an engineering sense, controls the desired output level. Positive feedback can be seen as a signal coming from a sensor that increases the set-point value; similarly, negative feedback reduces the input signal to decrease the set-point value. Introducing students to the notion of a metaphorically parallel 'educational control system' can help in developing their understanding of the difference between HE learning and their previous experience. For example, at school level, almost all information applied to the 'system' is positive feedback in terms of the student learning experience.

Figure 1: Simplified educational system. Attainment is an overall term for the educational advancement of a student. It can be applied to a specific task set (as in c), over the length of a module (as in a, b and d), or as overall educational accomplishment. It incorporates experience and achievement as well as the marks gained for a piece of work, but it is a general statement and is not quantified.

a. A 'flat' curve in which the student achieves nothing, i.e. no educational attainment.

b. The achievement is slowly ramped, e.g. by progressive memorisation of materials in the module. The educational need is to ramp the achievement to the highest level possible (by 'deep learning') by the end of the module. If an Examination event (Ee) occurs near or at the end of the module, but no information is given until the marks arrive after the module has finished and no remarks are presented, then there is a lack of feedback within the module. This is the common, University Challenge, mode of examination setting.

c. A portion of a module with a task (T) set. The achievement is ramped over the time the task is being done (the Activity, A), and feedback (i.e. information about the input level which provides positive feedback to enhance or amplify the output level) is achieved with the arrival of meaningful remarks (R) about the product, as well as the marks (M). Note that the remarks are (here and ideally) more significant than the marks alone and can be given before the marks. In the case of examinations (or end-of-term projects), the remarks may not be provided by the end of the teaching period, or may contribute to the learning process rather late in the semester (as in b).

d. A multi-phase enhancement of achievement as experiential learning in a module (or part module). This reads, from left to right:

T – Task set by tutor
A – Student does the activity and ramps achievement
R1 – First remarks provided (perhaps to the class)
R2 – Second set of remarks, perhaps individually
M – Marks for first task
T – Second task set
A – Student does second activity
MR – Marks and remarks provided for second task

Note that a task could be an examination; this does not have to be at the very end of the module.

PLANET ISSUE 23


Marks, remarks and feedback. Do we really need examinations?
W. Brian Whalley


In the example of a simplified educational system shown in Figure 1, we do not want a flat curve (Figure 1a), as this implies that the student has not advanced educationally. So, how do we know the student has both gained in knowledge and understanding, and developed their learning and examination techniques? What are the signals that will 'amplify the output level'? Generally, 'assessment' (marks) provides the sensing mechanism. The numerical value awarded can also act as a controlling signal (lower marks potentially driving increased learning where there is motivation to get a higher mark). This signal can be either external, where the tutor recognises the value and makes a further input to the learning, or internal, where the student sees the value and decides to do better. However, this mark attribution process does not necessarily drive educational attainment upwards. The role of feedback (via comments rather than marks) on work is also relevant. Figure 1 refers to attainment, not specifically to marks. A lone examination event usually provides feedback marks at the end of the module, but this is rarely combined with students reviewing the remarks added to a script by the assessor (Figure 1b).

In this system, attainment can be related to learning affected by positive feedback that confirms the knowledge gained, and potentially adds to, reinforces or corrects it. In education as a whole, feedback is often used in a rather loose sense. If we fail to provide comments, or if comments are misunderstood, the incorrect understanding of a concept by a student might go uncorrected, and in effect this may drive attainment down. The consequence may be that the next activity is done less well than expected, although this would still be seen as positive feedback in system terms. Feedback comments therefore need careful thought. 'You didn't do a very good job of this' is not only discouraging but does not add to the understanding of the student. A remark such as 'you did a good job of this task' may also not be helpful, if the student does not understand what they did well. The lack of any attempt at correcting a misapprehension can act as negative feedback in the system. Essentially, feedback must be meaningful and reflect the marks awarded: that is, remarks made to students through comments on scripts, oral or video feedback should aim to drive up the attainment curve (Figure 1c).

So, encouragement is worthwhile, but within what might be termed a 'feedback sandwich'. Educational reinforcement and instructional design which focus on 'outcomes' (Fleming and Levie, 1993) can inform the assessment design process. For example, feedback comments to students can be separated into the following categories: confirmation, corrective, explanation, diagnostic and elaboration. Note that these all provide positive feedback via the input signal information and so are part of the educational system as a whole. In the same way that marks alone are not helpful, comments on the results ('you did well') are not as good as comments on the process by which the results were achieved: 'Your paragraph structure has improved. You have focused on one argument per paragraph. Keep this thought in mind next time you review your writing'.


A purely encouraging comment might be less beneficial than an assessor expects if it does not have accompanying contextual information and is not related to attainment levels for other assessment tasks (Kahneman et al., 1982). The assessor needs to recognise that marks per se are indicators, or information, for the system feedback rather than the end-product of attainment, and that it would be helpful if students understood this concept. The most important assessment outcome is the provision of appropriate, positive feedback information, with an acknowledgement that overly substantial feedback (potentially overwhelming for the student) can have as negative an impact as too little feedback. This is why the confirmation, corrective, explanation, diagnostic and elaboration statements are relevant for assessors thinking about assessment protocols. It then becomes easier for both staff and students to distinguish between standards ("for a 2i mark you should …") and criteria ("you were asked to interpret the graph"). The latter is a specific statement tailored according to the task set. Achieving the criteria requested for an activity allows marks to be awarded accordingly. Marks plus appropriately specific and weighted feedback comments show how well the task has been achieved.

To be effective, students need practice and experience of the mechanisms of assessment in HE, and of the style and role of feedback remarks from staff. Essentially this should parallel the practice and coaching processes that all athletes experience. Academically, this may be best achieved in a series of practicals or coursework activities aimed at enhancing achievement. Even where an example is worked through with students (and this may take a considerable amount of time), the new-found knowledge and experience needs to be practised to ramp up the attainment (Figure 1d). Confidence in performing activities comes with practice and tailored feedback (coaching), so that the activity becomes more or less subconscious.

Meta-cognition and learning and thinking styles

A tutor's style of feedback is also worth considering. For the already expert student, different styles of feedback should not be a problem. If a tutor passes a comment such as "you need to be more critical", this raises the question of what 'critical' means and what criticality entails. There is no simple answer to what is essentially a 'value' judgement in this instance. Shared understanding of criticality (between tutor and student) can become even more troublesome if the knowledge to be transferred is essentially tacit, or even assumed. Keysar and Barr (2002) illustrate problems of misunderstanding and recognition in conversation associated with the concept of anchoring. Anchoring comes in various forms and is associated with heuristics in problem solving; it relates to the human tendency to rely too heavily (or 'anchor') on what may often be the most obvious or commonly accepted statement or concept. Most tutors can show examples of 'anchoring' from student essays and examination scripts ("glaciers covered all of Northern Ireland in the Holocene"), but we also have to consider mis-matches of understanding between student and tutor, not just in subject knowledge, but also in our understanding of pedagogy. We need to make sure that there is a shared understanding of the concept and language of feedback between tutor and student.

Experiential learning has been related to learning styles. In the UK Higher Education system both the Honey and Mumford (1992) and VARK (Fleming, 1993) typologies of learning styles are widely cited. Students' thinking styles are also important. Sternberg (1997) identifies global, local, internal, external, liberal and conservative thinking styles. These lead to differentiated approaches and responses to problem solving. For example:

"I prefer tasks dealing with a single, concrete problem, rather than general or multiple ones" (Local style; Sternberg 1997, p 62)

"When I'm in charge of something, I like to follow methods and ideas used in the past" (Conservative style; Sternberg 1997, p 73)

Thus, when setting assignments through coursework or examinations, we need to consider what we want students to do and the ways we might guide them to produce their best results. Setting a range of questions that appeal to students with different styles would be more equitable than setting them in a single style.


Constructing educational control systems: assessment in the system

The ideas discussed above suggest that the single module/one assessment/limited formative assessment model will lead to the flat curve shown in Figure 1a. The approach advocated in this paper involves the planning of activities and accompanying assessment and feedback at both module and programme level, as in Figure 1d. This planning would take into account that system feedback – the activity itself plus marks and feedback comments (with that feedback aligned with assessment criteria) – is the major factor in enhancing student attainment. The feedback should relate clearly to the nature of the tasks set, must be understood by all and, ideally, be enhanced by criterion referencing, e.g. Fox and Rowntree (2004), Whalley and Taylor (2008). At its simplest, a single activity produces learning achievement by the accomplishment of the task, and the next task is enhanced by the feedback received on the first task. This produces a step or ramp-up in the achievement. Feedback comments need to be given soon after the activity is performed in order to be effective, like heating systems re-igniting a boiler when temperatures drop. The feedback should come as forms of advice that are confirmation, corrective, explanation, diagnostic and elaboration, and be presented to the student individually or in a group. In practice, the ramps in the attainment curve can be increased in various ways: the very fact that students perform some activity within the module will act as positive feedback that improves attainment because it has provided an opportunity to practise.

Looked at in terms of the over-arching educational system, these arguments suggest that a single assessment for a module is not a good idea, especially if the module runs up to an end-of-semester submission, because there is no opportunity for formative feedback (coaching) to inform a student's work. The situation is exacerbated in modules with 100% end-of-semester examinations. Churches (2009) suggests ideas for feedback from which a balance of emphasis can be created for each response:

• Developing a hypothesis (Evaluating)
• Experimenting (Evaluating)
• Planning (Creating)
• Designing (Creating)
• Judging and evaluating (Evaluating)
• Producing and making (Creating)
• Critiquing, reviewing and testing (Evaluating)
• Refining (Creating)
• Mixing and remixing (Creating)

These topics should aim to link with the metacognitive aspects of learning. Churches' critique concludes, "These [topics] cannot be tested adequately in an examination or test". It is possible to envisage them being part of an examination, as long as it is part of an enquiry-led, rather than mark-achievement-led, approach.



Conclusion and some suggestions

Directed feedback remarks or comments and numerical marks are necessary for constructive feedback and assessment. For Level 1 students, whose knowledge of academics' requirements for successful assignments is often poor to non-existent, academics should be cognisant of these issues in setting tasks and should aim to show students clearly the thinking behind the tasks and their accompanying assessments. It is imperative that good feedback comments are provided for as many stages as possible in the assessment process. Overall, the provision of marks and feedback remarks with the aim of increasing attainment is at least as important in module design as it is in the teaching.

Good assessments should be more than a test of memory. Assessments can be recast to solve problems via developing 'thinking styles' and be aligned to enquiry-based outcomes and experiential achievement (Boud and Falchikov, 2006). Metacognitive ideas apply both to student approaches (for example, 'knowing about knowing', how to produce a 'good' answer) and to tutor approaches to learning development (how do we know what we are asking of a student when we set an assignment?). These issues are particularly important for examinations when feedback provision is delayed (students leave campus after the summer exams) or absent (the departmental culture is not to provide feedback on examinations). With the wealth of possible assessments available, it could be argued that we do not need examinations, but where employed they should at least be effective in ramping up attainment (Figure 1). The challenge is to develop learning and assessment for student education so that it is more than a University Challenge question-and-recall response system.

References

• Boud D. and Falchikov N. 2006 Aligning assessment with long-term learning, Assessment & Evaluation in Higher Education, 31, 399-413
• Churches A. 2009 Bloom's and assessment, http://edorigami.wikispaces.com/Bloom's+and+Assessment Accessed 21 June 2010
• Fleming M. and Levie W. H. 1993 Instructional message design: principles from the behavioural sciences, Educational Technology Publications, Englewood Cliffs, NJ
• Fox R. and Rowntree K. 2004 Linking the doing to the thinking: using criterion-based assessment in role playing simulations, Planet, 13, 12-15
• Honey P. and Mumford A. 1992 The manual of learning styles, Peter Honey, Maidenhead
• Kahneman D., Slovic P. and Tversky A. 1982 Judgment under uncertainty: Heuristics and biases, Cambridge University Press, Cambridge
• Keysar B. and Barr D. J. 2002 Self-anchoring in conversation: why language users do not do what they "should". In Gilovich T., Griffin D. and Kahneman D. (Eds.) Heuristics and Biases, Cambridge University Press, Cambridge
• Rust C. 2007 Towards a scholarship of assessment, Assessment in Higher Education, 32, 2, 229-237
• Sternberg R. J. 1997 Thinking styles, Cambridge University Press, Cambridge
• Whalley W. B. 2008 What should a (Geography) degree for the 21st century be like? Planet, 19, 36-41
• Whalley W. B. and Taylor L. 2008 Using criterion-referenced assessment and 'preflights' to enhance education in practical assignments, Planet, 20, 29-36

W. Brian Whalley b.whalley@qub.ac.uk



Formative assessment and feedback: a review
Sharon Gedye
Directorate of Teaching and Learning, University of Plymouth

Abstract

This review examines definitions of formative assessment and the factors that control the effectiveness of feedback. It summarises the various ways of delivering formative feedback and discusses the problems that may be encountered when assessment practice is altered to improve feedback.

Introduction

It has been argued by many authors (e.g. Nicol and Macfarlane-Dick, 2004; Yorke, 2003; Irons, 2007) that providing opportunities for formative feedback is the single most beneficial thing tutors can do for their students. However, for reasons such as a focus on standards and grading, high staff-student ratios, modularisation, research pressures and the inertia of traditional teaching practices (Yorke, 2003, p483), formative feedback is not yet widely used in higher education.

For most students, the dominant feedback they receive takes place through summative assessment. Staff increasingly put effort into clarifying assessment instructions, providing marking criteria and attempting to give positive, constructive feedback, yet are frequently dismayed when these efforts appear to fall on deaf ears. Students rarely seem to engage effectively with feedback comments; they tend to be grade-focused and pay little attention to the feedback given. Once the work has been marked there is little incentive for students to reflect deeply on their tutor's opinions. This paper discusses formative feedback and what can be done to improve the feedback experience of students.

What is formative feedback?

Definitions of formative assessment vary. Often formative and summative assessment are described as being distinct from each other. Black and Wiliam (1998) talk in terms of both the formative functions and summative functions of assessment. When viewed in these terms, assessment can serve a dual purpose: a summative assessment can provide formative feedback. The least restrictive way of viewing formative assessment is that it is assessment which provides the learner with information that allows them to improve their learning and performance. In this sense, an end-of-module, graded assignment may be formative if the student receives good quality feedback on how they might improve their work, whilst a mid-semester, ungraded assessment may not be formative if all the feedback says is, "good work, well done". The emphasis is clearly on the style and relevance of the feedback, and the ability of the lecturer to provide the learner with comments that they can understand and use to improve.

Yorke (2003, p477) cites Bruner (1970, p120), who says "learning depends on knowledge of results, at a time when, and at a place where, the knowledge can be used for correction." Reflecting on this statement, we might want to further qualify our definition of formative feedback. It is not enough just to provide good quality feedback; we must also provide this in a way that encourages students to use it. Without supporting students in their use of feedback (be this through module design, training, clear communication of expectations etc), feedback given with the intention of being formative will only have the potential to be formative. This sense of formative feedback is similar to that of Sadler (1989), who talks in terms of assessment being formative only if it is used to close the gap between actual and reference levels (i.e. 'best' expected standards/model answers etc) of performance.

Improving the effectiveness of formative feedback

The effectiveness of formative feedback is influenced by a number of factors: the ability of students to self-assess; the provision of clear goals, criteria and expected standards; the encouragement of teacher and peer dialogue around learning; closure of the 'feedback loop'; the provision of quality feedback information; and the encouragement in students of positive motivational beliefs and self-esteem (Nicol & Macfarlane-Dick, 2004). These will each be dealt with in turn.

Ability to self-assess

Assessment practices are currently dominated by tutor-led feedback, which can keep students in a state of dependency and inhibit them from learning how to self-correct. A number of authors (e.g. MacDonald and Boud, 2003; Bedford and Legg, 2007) have demonstrated that self-assessment and reflection can be effective in enhancing learning and achievement.

Sadler (1998) identifies six 'resources' that highly competent teachers bring to formative assessment (Table 1).

Intellectual and experiential resource: Explanation

Knowledge of subject matter: What is correct, partially correct, incorrect, appropriate etc.

Attitudes and dispositions towards teaching: Empathy; desire to help students develop; personal reflection on their own (staff) judgements and patterns of feedback.

Skills in setting/compiling assessment: Setting assessment that reveals understanding; that tests desired outcomes; that is dissimilar enough from previous work to avoid regurgitation and challenge students, whilst similar enough to allow students to build on previous experience and transfer their learning.

Knowledge of criteria and standards: Understands general criteria and standards appropriate to 'level' and those that are task specific.

Evaluative skills: Confidence and experience in making judgements; ability to take on board student responses beyond their (staff) imagination and previous experience that enriches their 'repertoire of tactical moves'.

Expertise in framing feedback statements: Provides feedback that describes the features of a student's work; makes evaluative comments, linked to criteria, on the positive and negative features of the work; suggests improvements; exemplifies feedback.

Table 1: The intellectual and experiential resources that highly competent teachers bring to the act of formative assessment (Sadler, 1998, pp80-82).


Students have a limited understanding of these six resources and have problems in interpreting the language in which they are expressed. Sadler proposes that if students are to be able to engage with self and peer assessment – shown by the literature on the subject to be highly significant practices in formative assessment (e.g. Black and Wiliam, 1998) – then they will need to begin to understand and apply these six resources.

Having clear goals, criteria, and expected standards

Despite efforts to be clear, tutors are often disappointed to find that student work does not reflect a full grasp of what was intended and that the feedback seems to be ignored (Walker, 2009). Evidence suggests (e.g. Hounsell, 1997; Chanock, 2000; Hyland, 2000) that students commonly do not comprehend the feedback language used; they misunderstand what tutors think are the clear goals of a piece of work; they have a different (or no) idea of the standards expected; and they neither understand the well-intentioned feedback nor know how to act on it.

Whilst academics may appreciate instructions such as 'critically analyse the statement …' and feedback such as 'fails to adequately develop a logical argument', students often do not. It would seem then, to make feedback truly formative, that students need to be actively engaged with the assessment process and academics need to do more to use language effectively.

"If students do not share (at least in part) their tutor's conceptions of assessment goals (criteria/standards) then the feedback information they receive is unlikely to 'connect'" (Nicol and Macfarlane-Dick, 2004).

Written documents can be improved by including statements that help to define the goals, criteria and standards. We should think about modifying our feedback, using terms we know that students better understand. However, despite attempts to address clarity in written or verbal instructions, difficulties will persist in articulating the requirements of complex tasks (Rust et al., 2003; Yorke, 2003). Complementary strategies, including the provision of exemplars (Orsmond et al., 2002) and the wider use of staff-student dialogue (discussed below), are also necessary.

Encouraging teacher and peer dialogue around learning

Whilst clear instructions and feedback from tutors are important, conversations between staff and students, and student peer discussions, may be considered more vital. Misunderstandings and ambiguities around assessment can be cleared up through dialogue, either as a group when introducing the assignment, or at the end of the process when providing the feedback. Allowing time for such conversations not only helps to clarify the learning process for students (and hopefully improves performance), but should also provide tutors with valuable feedback. This in turn should help to improve the quality of instructions and feedback, and give a better idea as to where there may be wider problems in learning for the group as a whole.

Closing the 'feedback loop'

For practical reasons, especially with classes of over 100 students, the time between submission and feedback can be up to four weeks. Students, therefore, do not have the opportunity to directly use the feedback they receive on their coursework. However, Boud argues that:

"The only way to tell if learning results from feedback is for students to make some kind of response to complete the feedback loop (Sadler, 1989). This is one of the most often forgotten aspects of formative assessment. Unless students are able to use the feedback to produce improved work, through, for example, re-doing the same assignment, neither they nor those giving the feedback will know that it has been effective." (Boud, 2000, p158)

If assessment practice allowed for (at least a proportion of) student work to be submitted after receiving feedback, it is highly likely that learning and performance would be improved. In practice this might mean re-submission of work after it has been examined by a tutor (feedback could be individual or to the group), or might involve facilitating peer or self-assessment. Coursework can also be designed to include sub-tasks on which students receive feedback; these build towards the completed assignment. When assessment practice is changed to 'close the gap', tutor feedback is likely to alter to include more constructive advice about how the student may improve their work.

The quality of feedback information

There are a number of ways in which the quality of feedback could be improved. Feedback should be given as soon as possible after submission; be relevant to the task and the pre-defined assessment criteria; and should help the student to understand how to improve the work (not just highlight strengths and weaknesses). Nicol and Macfarlane-Dick (2004) state that students often receive too much feedback. This can be overwhelming, and it makes it difficult for them to decide which advice to act on. They cite Lunsford (1997), who advocates providing only three well-considered items of feedback. This helps tutors and students to prioritise areas for improvement. Various authors (e.g. Good & Grouws, 1975; Siero & van Oudenhoven, 1995) found that any feedback, whether positive or negative, that draws attention away from the task and instead focuses on self-esteem has a negative effect.

Encouraging positive motivational beliefs and self-esteem

Dweck’s (2000) work on ‘self-theories’ identifies two types of student: those who believe their ability can be improved, and those who believe it is fixed. For students who believe their ability is fixed, any criticism of assessment performance will be viewed as a reflection of their low ability; conversely, those with a more malleable outlook will view criticism as an obstacle to be overcome or an opportunity to improve. A challenge for those who provide feedback is helping those with fixed self-theories to believe they can improve. In order to do this, students need to be motivated and possess self-esteem. Butler (1988) shows that students appear to pay less attention to feedback when they are provided with grades, and grading negatively affects the self-esteem of less-able students (Craven et al. 1991). These studies suggest that focusing on low-stakes assessment with feedback, rather than high-stakes assessment accompanied by grades, may help students focus on learning and improving rather than confirming performance.

Some examples of the structures and devices that can be employed to facilitate formative feedback are given in Table 2. The ASKe (Assessment, Standards, Knowledge exchange) CETL’s series of 1,2,3 Leaflets also provides an excellent resource, with numerous constructive assessment and feedback ideas (see page 16/17 of this edition of Planet).

• Use portfolios which include a requirement for self-reflection
• Get students to re-submit work after receiving feedback on a draft version
• Involve students in the drawing up of assessment criteria
• Require students to reflect on their work and the feedback it received in order to receive their grade/mark
• Ask students to identify the strengths and weaknesses of their work in relation to the assessment criteria prior to handing it in
• Use peer review to get students to comment on each other’s work prior to submission
• Employ exemplars to help students understand the standards expected
• Allow time for discussion and reflection about criteria and standards in class
• Before students leave the class, get them to draw up a list of action points, based on the feedback they have just received
• Include action points in feedback, in addition to, or instead of, ‘normal’ comments
• Set formative sub-tasks that build to a summative item
• Ask students what kinds of feedback they find the most useful
• In tutorials, ask students to examine their feedback comments and get them to explain what was/was not useful and their strategies for improvement.

Table 2: Examples of the structures and devices that can be employed to facilitate formative feedback.

PLANET ISSUE 23


Formative assessment and feedback: a review
Sharon Gedye

Problems with implementing improved formative assessment strategies

Whilst the literature on formative assessment is clear about the improvements that can be made through peer and self-assessment, persuading students of their benefits can be a significant challenge (Falchikov, 2004). It has already been noted above that in order to peer- or self-assess, students need to develop a much better understanding of the various aspects of the assessment process. This is not a ‘quick and easy’ thing to achieve. Getting students cognitively engaged with assessment and feedback is more likely to be successful where there is a curriculum-wide approach, rather than a module-by-module approach where individual members of staff adopt different strategies.

Inducting students into the language and process of assessment is an excellent starting point, but students who are asked to peer- or self-assess are often uneasy about taking on this role. Concerns are commonly expressed about evaluating work, as students feel that they and their peers do not hold ‘expert’ knowledge; there is also a fear about the accuracy of any judgements made. Students worry about having to make judgements publicly and are apprehensive about criticising their peers. They fret that others may be receiving overly generous grading from peer marking, and they are uneasy at their scope for being over- or under-critical of themselves or their peers. Finally, students are worried about the credibility of the feedback they receive from a non-expert (be it themselves or a peer).

From this we can conclude that for formative assessment to be successful it cannot be bolted on to current practice. Staff must be enabled to re-design their provision and supported in re-thinking their purpose as a tutor. Adopting successful formative assessment practices is therefore likely to affect all aspects of a tutor’s teaching. Whether conceived narrowly, only in terms of developing formative assessment practice, or more widely, in terms of a re-invention of the tutor’s role in the classroom, there would certainly seem to be a need for staff development to support tutors in these significant shifts.

Conclusions

Formative assessment is considered to be one of the most important mechanisms for improving student learning. Self- and peer-assessment are particularly effective in formative learning as they require students to engage more fully with the assessment process. Staff who are considering adopting formative assessment practices need to be aware of the various controls that impact on the effectiveness of the feedback process. Students require a great deal of support in learning to use feedback and in peer and self-assessment; therefore, a consistent, curriculum-wide adoption of formative assessment practices is preferable to smaller-scale, module-based reforms.


It is also common for students to rail against peer and self-assessment as they feel it is not their job to assess. When using peer or self-assessment, it is therefore important to spend time explaining to students the rationale for, and benefits of, these forms of feedback. If students are to take on board this approach they need to be convinced of its value and not see it as a way for the lecturer to neglect a significant responsibility.

Finally, Black and Wiliam (1998, p20) note the “… close link of formative assessment practice both with other components of a teacher’s own pedagogy, and with a teacher’s conception of his or her role.”

References

• 1,2,3 Leaflets, Assessment Standards Knowledge exchange (ASKe), Centre for Excellence in Teaching and Learning, Oxford Brookes University. http://www.brookes.ac.uk/aske/ Accessed 21 June 2010
• Bedford S. and Legg S. 2007 Formative peer and self feedback as a catalyst for change within science teaching, Chemistry Education Research and Practice, 8, 1, 80-92
• Black P. and Wiliam D. 1998 Assessment and classroom learning, Assessment in Education, 5, 1, 7-74
• Boud D. 2000 Sustainable assessment: rethinking assessment for the learning society, Studies in Continuing Education, 22, 2, 151-167
• Bruner J.S. 1970 Some theories on instruction. In Stones E. (Ed.) Readings in Educational Psychology, Methuen, London
• Butler R. 1988 Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance, British Journal of Educational Psychology, 58, 1-14
• Chanock K. 2000 Comments on essays: do students understand what tutors write? Teaching in Higher Education, 5, 1, 95-105
• Craven R.G., Marsh H.W. and Debus R.L. 1991 Effects of internally focused feedback on enhancement of academic self-concept, Journal of Educational Psychology, 83, 17-27
• Dweck C. 2000 Self-theories: Their Role in Motivation, Personality and Development, Psychology Press, Philadelphia
• Falchikov N. 2004 Improving Assessment through Student Involvement: Practical Solutions for Higher and Further Education Teaching and Learning, Routledge
• Good T.L. and Grouws D.A. 1975 Process product relationships in fourth-grade mathematics classrooms, University of Missouri, Columbia, MO
• Hounsell D. 1997 Contrasting conceptions of essay-writing. In Marton F., Hounsell D. and Entwistle N. (Eds.) The Experience of Learning, Scottish Academic Press, Edinburgh
• Hyland P. 2000 Learning from feedback on assessment. In Booth A. and Hyland P. (Eds.) The Practice of University History Teaching, Manchester University Press, Manchester
• Irons A. 2007 Enhancing Learning through Formative Assessment and Feedback, Routledge
• Lunsford R. 1997 When less is more: principles for responding in the disciplines. In Sorcinelli M. and Elbow P. (Eds.) Writing to Learn: Strategies for Assigning and Responding to Writing across the Disciplines, Jossey-Bass, San Francisco
• McDonald B. and Boud D. 2003 The impact of self-assessment on achievement: the effects of self-assessment training on performance in external examinations, Assessment in Education, 10, 2, 209-220
• Nicol D. and Macfarlane-Dick D. 2004 Rethinking formative assessment in HE: a theoretical model and seven principles of good feedback practice. http://www.heacademy.ac.uk/assets/York/documents/ourwork/assessment/web0015_rethinking_formative_assessment_in_he.pdf Accessed 21 June 2010
• Orsmond P., Merry S. and Reiling K. 2002 The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment, Assessment & Evaluation in Higher Education, 27, 4, 309-323
• Rust C., Price M. and O’Donovan B. 2003 Improving students’ learning by developing their understanding of assessment criteria and processes, Assessment and Evaluation in Higher Education, 28, 2, 147-164
• Sadler D.R. 1989 Formative assessment and the design of instructional systems, Instructional Science, 18, 2, 119-141
• Sadler D.R. 1998 Formative assessment: revisiting the territory, Assessment in Education, 5, 1, 77-84
• Siero F. and van Oudenhoven J.P. 1995 The effects of contingent feedback on perceived control and performance, European Journal of Psychology of Education, 10, 13-24
• Walker M. 2009 An investigation into written comments on assignments: do students find them usable? Assessment & Evaluation in Higher Education, 34, 1, 67-78
• Yorke M. 2003 Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice, Higher Education, 45, 477-501

Sharon Gedye sharon.gedye@plymouth.ac.uk



Jasper Knight
Department of Geography, University of Exeter, Cornwall Campus

Investigating students’ responses to different styles of essay feedback

Abstract

This report describes some of the major outcomes of a GEES Subject Centre Small-Scale Project, run in the 2007-08 academic year, where first-year undergraduate students in Geography were provided with feedback on an assessed essay in one of eight different styles. These different feedback styles were evaluated by students through pre- and post-session anonymous questionnaires. This report makes a preliminary evaluation of students’ responses to these different feedback styles and considers the implications of these styles for student learning.

Introduction and aims

‘Feedback’, within the context of assessment, refers to the ‘information provided by an agent… regarding aspects of one’s performance or understanding’ (Hattie and Timperley, 2007, p81). This broad definition is useful because it can refer to any internal or external stimulus that impacts on student learning, and which can contribute to reflection, deep learning, and attainment. The breadth of this definition of feedback, however, has also constrained exploration of feedback style and effectiveness. This is largely because it is difficult to find the correct methodology with which to investigate ‘feedback’, and because the role of feedback in shaping student learning is difficult to quantify. This latter characteristic is significant because, in reality, the learning undertaken by students in response to feedback is neither unidirectional nor linear: student learning and progression take place in jumps and in response to a range of factors, which makes it difficult to link to specific instances of feedback (Carless, 2006). This more nuanced view of feedback contrasts with the somewhat optimistic view that feedback is associated with a permanent, sustained and directional change in learning. In this standpoint, feedback can be seen in a transformational sense as being the catalyst for change (a ‘change-agent’) in an individual’s learning, learning style, achievement or understanding (Higgins et al., 2002; Hattie and Timperley, 2007). Feedback, especially one-off, interventionist feedback from a tutor, can act as the touchstone for this transformation. Feedback also forms part of a wider set of tools that can be used by students, individually or in groups, to set their work or their ideas within a wider context, to reflect upon their strengths and weaknesses, and to evaluate against marking criteria. Finally, feedback can be an initiator of academic dialogue between tutor and student that can close the quality assurance loop.

This report describes a project that examined in detail the role of one-off feedback given in response to a standard undergraduate essay. This project focused on essay feedback for a number of reasons. First, many studies identify feedback (in a range of forms) as a key component contributing to students’ learning (e.g. Carless, 2006; Handley et al., 2007). These studies do not agree, however, on the following: which feedback style (written, verbal, etc.) is most effective in contributing to students’ learning; when it should be deployed for its greatest effect; or in what ways action taken by the student as a result of the feedback they received can be followed up or reinforced by the tutor (Hattie and Timperley, 2007). Second, the essay is (still) a standard means of summative assessment for undergraduate students and is used routinely across a range of academic disciplines and at all levels of study. As such, the feedback that students receive from a tutor on their performance in essays is one of the most important formative experiences they undertake. In addition, essay assignments are often completed in a largely self-directed way with limited tutor supervision, which makes them ideal for large classes. Finally, ‘assessment and feedback’ forms one of the six groups of questions in the National Student Survey (NSS). This survey, undertaken annually since 2005, polls final-year undergraduates on their learning experiences at UK Higher Education institutions (HEIs). Results from the NSS contribute to the position of HEIs in UK league tables. Questions on assessment and feedback in the NSS in almost all cases receive lower scores than other categories. This means that improving the quality and usefulness of feedback has strategic, as well as practical, importance for student learning and progression.

Methodology

This project evaluates the use of different feedback styles on a single 1500-word undergraduate essay, using a student-centred and longitudinal methodology. The essay assignment forms part of the assessment for a compulsory, year 1 Physical Geography module at the University of Exeter (Cornwall Campus). In the 2007-08 academic year, 81 students completed the module (34F, 47M), of which 20 were on BA (Human Geography) and 58 on BSc (Physical Geography) programmes. Three students were on Joint Honours programmes with departments other than Geography. Students completed an anonymous pre-session questionnaire (n=46) which evaluated their attitudes to assessment and feedback, including receiving feedback in different styles. Following submission of the essay, students were assigned randomly into ten groups. These groups received feedback on their essays in one of eight styles (with two control groups), which included disclosure of their percentage mark (Table 1). The control groups received standard written comments (with their mark) only. Students’ evaluation of their experience of receiving feedback in these different styles was undertaken by anonymous post-session questionnaires. At the end of the process, all students received standard written feedback. All questionnaires are available on request from the author.
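The random allocation step described above can be sketched in a few lines of code. This is an illustrative reconstruction only, not the project's actual procedure; the group labels and the fixed seed are assumptions made for the example.

```python
import random

# Sketch of the allocation described above: 81 students assigned randomly
# to ten groups (eight feedback styles plus two controls).
# Group labels are placeholders, not taken from the study.
students = [f"student_{i:02d}" for i in range(1, 82)]
groups = ["peer", "tutor_verbal_written", "tutor_verbal", "mp3", "video",
          "marking_video", "skype", "chatroom", "control_a", "control_b"]

rng = random.Random(42)  # fixed seed so the allocation is repeatable
shuffled = students[:]
rng.shuffle(shuffled)

# Deal the shuffled students round-robin so group sizes differ by at most one.
assignment = {g: [] for g in groups}
for i, student in enumerate(shuffled):
    assignment[groups[i % len(groups)]].append(student)

sizes = {g: len(members) for g, members in assignment.items()}
```

With 81 students and ten groups, one group receives nine students and the remaining nine groups receive eight each.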

Feedback style (number of student respondents)

Face-to-face verbal and written feedback from peers (7)
Face-to-face verbal and written feedback from a tutor (6)
Face-to-face verbal feedback from a tutor (4)
Audio feedback from a tutor as an mp3 file (6)
Video feedback from a tutor as a movie file (8)
Feedback as a video showing their essay being marked in real time by a tutor (7)
Real-time video feedback from a tutor using Skype (5)
Typed feedback from a tutor using a WebCT-hosted chatroom (4)
Control (2 groups) (7)

Table 1: Different feedback styles investigated in this project.


Results

The pre-session questionnaire asked about students’ attitudes to receiving feedback in different ways (Table 2). Traditional feedback styles, verbal and written tutor feedback, receive the highest Likert values; other, less familiar feedback styles have lower values. Students were asked to name those things that, in their view, make for good feedback. Items mentioned (n=66) include ways to improve the essay (48%), good and bad points about the essay (21%), essay structure (10%), reference to marking criteria (10%), and reference to a model answer (9%).
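The ‘average Likert score (modal class)’ summaries reported in Table 2 can be reproduced with standard-library code; the response values below are invented for illustration and are not the survey data.

```python
from statistics import mean, mode

# Summarise a set of 1-5 Likert responses as an average score plus the
# modal (most frequent) class, in the style used in Table 2.
# These responses are invented for illustration only.
responses = [4, 5, 4, 3, 5, 4, 2, 4, 5, 3]

avg = round(mean(responses), 2)   # average Likert score
modal_class = mode(responses)     # most frequent rating class

summary = f"{avg} ({modal_class})"   # "3.9 (4)" for the list above
```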

Some students’ responses to receiving feedback in certain ways are shown in Table 3, collated from the free-text comments box on the post-session questionnaire. The responses, and qualitative evaluations from the students’ perspective of their effectiveness in enhancing learning, are almost all strongly positive (n=40), with 50% of responses reporting their own feedback style to be ‘much better’ and 48% to be ‘somewhat better’ than standard written tutor feedback alone. The only exception was peer feedback (n=7), where 42% reported it to be ‘no different’, 42% ‘somewhat worse’ and 12% ‘somewhat better’ than written tutor feedback.

Pre-session usefulness ratings (average Likert score, with modal class in brackets, where 1 = ‘Not useful at all’ and 5 = ‘Very useful’), followed by pre- and post-session answers to ‘Do you think this feedback method would be useful to your learning?’ (% No / % Yes):

• Verbal feedback from your peers: 3.39 (4); pre-session 32/68; post-session 53/47
• Face-to-face verbal feedback from a tutor: 4.76 (5); pre-session 1/99; post-session 10/90
• Audio feedback from a tutor as an mp3 file: 3.41 (4); pre-session 37/63; post-session 44/56
• Video feedback from a tutor as a movie file: 3.15 (4); pre-session 45/55; post-session 49/51
• Seeing a real-time video of your essay being marked by a tutor: 2.89 (2); pre-session 54/46; post-session 45/55
• Receiving feedback on your essay from a tutor using online video (e.g. Skype): 3.04 (3); pre-session 51/49; post-session 63/37
• Receiving feedback on your essay from a tutor in an online chatroom: 2.91 (3); pre-session 53/47; post-session 67/33
• Traditional written feedback from a tutor only: 4.15 (4); pre-session 10/90; post-session N/A

Table 2: Results from pre- and post-session questionnaire surveys (n=46).

The free-text comments shown in Table 3 highlight a number of themes, including the perceived quantity of feedback (expanding on certain points, depth of engagement), and its perceived quality (suggestions for positive outcomes, improvements for the future). Free-text comments on the advantages of these different methods focus on the depth of explanation (20 of 45 comments), the personalised feedback (6 comments), the opportunity to ask questions with face-to-face feedback (8 comments), and the ability to pause or make notes with audio/video feedback (6 comments). Free-text comments on disadvantages focus on the lack of opportunity to ask questions (10 of 23 comments) and the lack of hard copy (6 comments) with audio/video feedback.

Verbal feedback from your peers
• Useful insight into the approaches and views of other people in the peer group – also useful for ‘grading’ yourself against others.
• It was very useful to read other people’s work as it showed other angles that could be taken to answer the same question, and it showed other writing styles.
• Felt as though it was more of a fair discussion, and I could explain myself.

Face-to-face verbal feedback from a tutor
• Where you went wrong was explained well so you understand better. Face to face you can ask questions.
• Essay was explained with how things could be expanded and changed. 3 minutes of verbal feedback wouldn’t fit on to marking sheet. Beginning to forget it already.
• Found it very useful as feedback was explained more fully than written, going through step by step.

Audio feedback from a tutor as an mp3 file
• Gained more information than written however accusations/assumptions were made but no defence could occur.
• Positive – I learnt far more about my essay from the audio than I ever did from the feedback sheet.
• Very positive, clear on what is good and what needs work and also provides advice on how to improve in future essays.

Video feedback from a tutor as a movie file
• Extremely positive – focused, specific and non-pressured.
• Overall useful. The video expanded on the points raised in the written feedback sheet. You could go through the essay along with the marker and see exactly where issues/problems with it were. You could pause the video to make notes of what the marker was saying. The marker provided improvements and sources of help in future.
• Very good, more in depth and clear. Generally very constructive and helpful.

Seeing a real-time video of your essay being marked by a tutor
• It was slightly embarrassing, but that has aided me to do better in the future, and improve in many areas.
• This was a good experience. Felt it was a lot clearer than just written feedback and helped to explain points made rather than just listing [them].
• Positive to an extent, having the video file enabled me to pause and take notes etc.

Receiving feedback on your essay from a tutor using online video (e.g. Skype)
• Good – being able to ask questions as you go through the text was very useful.
• I thought it was useful and the feedback was a lot more detailed than feedback we have received in the past. It was good that the points made on the feedback sheet could be expanded [on] to a greater degree.
• I found Skype a good method of receiving feedback as it is interactive therefore you can respond to possible improvements.

Receiving feedback on your essay from a tutor in an online chatroom
• At first I thought it was pointless but then I found that it could be useful, because it was easier for everyone to get involved and it doesn’t require everyone to meet up.
• This method was useful as it allowed the written feedback to be discussed with the person who wrote it so a better understanding of the feedback could be achieved.
• Good, it answered some questions I had and expanded well on the feedback sheet.

Traditional written feedback from a tutor only (control group)
• Mostly positive but difficult to read and there was no chance for discussion. However, key points were noted.
• OK – not really informative and you cannot talk in depth about where I have gone drastically wrong. Not sure whether I am understanding comments correctly.
• It was ok – you received brief comments about your work and certain issues [were] picked up.

Table 3: Results from post-session questionnaire survey on the students’ experiences of receiving essay feedback in a certain style.


Students were also asked whether, in their opinion, these different feedback methods would be useful or not to their learning (Table 2). Results from the pre-session questionnaire show generally positive responses to all proposed feedback methods. The post-session questionnaire, however, shows a noticeable shift to more negative responses in all categories (Table 2). Superficially this suggests that no feedback style is useful, but the post-session questionnaires show a more nuanced view of these preferences (Table 4). For example, of those students who had received verbal tutor feedback (n=4), all also wanted audio and Skype feedback and none wanted video feedback (Table 4). All of those students who had received peer feedback (n=7) also wanted verbal tutor feedback, and no student wanted feedback from any other method. All of the students who had received real-time video feedback (n=7) also wanted video and audio feedback, and no one wanted Skype or chatroom feedback. The significance of these contrasting preferences is discussed below.

commented, unprompted, that they would like to<br />

speak to a tutor.<br />

Discussion<br />

The pre-session questionnaire highlights students’<br />

preconceptions on the style, purpose and significance<br />

<strong>of</strong> feedback. It is notable that the highest Lickert<br />

scores are associated with the feedback styles that<br />

incoming students are most likely to be familiar with<br />

from school or college. This is also shown in Table 4<br />

where all students, irrespective <strong>of</strong> feedback style,<br />

would like face-to-face verbal tutor feedback. The<br />

lowest Lickert values are associated with less familiar<br />

and more innovative styles, which is consistent with<br />

results from other studies (e.g. Knight, 2007). The<br />

free-text responses on what makes for good feedback<br />

(e.g. Table 3) also clearly show that students are<br />

strongly concerned with feedback for achievement<br />

(driven by identifying how to improve, good/bad<br />

points, marking criteria, model answers) rather than<br />

feedback for learning (cf. Pain and Mowl, 1996).<br />

[Table 4 data matrix: the individual Y/N cell values could not be reliably recovered from this extraction. Rows (style experienced): (Control) Written tutor; Chatroom; Skype; Real-time video; Video; MP3; F2F verbal tutor; Verbal peer. Columns (…also wanted to experience): Verbal peer; F2F verbal tutor; MP3; Video; Real-time video; Skype; Chatroom.]<br />

Table 4: Matrix <strong>of</strong> responses to future feedback styles. Students who had experienced one feedback style (rows) were asked<br />

which other feedback styles they would like to experience (columns). Y=yes please, N= no thanks, (blank)= no preference.<br />
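A matrix like Table 4 is a simple cross-tabulation of post-session survey responses. As an illustration only (the records, answers and majority rule below are hypothetical, not the instrument used in this study), such a matrix could be built as:

```python
# Build a Table 4-style matrix: for each feedback style experienced,
# which other styles did students say they would like to try?
# (Hypothetical records; "Y" = yes please, "N" = no thanks, None = no preference.)

STYLES = ["Verbal peer", "F2F verbal tutor", "MP3", "Video",
          "Real-time video", "Skype", "Chatroom"]

# Each record: (style experienced, {style wanted: "Y"/"N"})
responses = [
    ("Verbal peer", {"F2F verbal tutor": "Y", "Skype": "N"}),
    ("MP3",         {"F2F verbal tutor": "Y", "Video": "Y"}),
    ("MP3",         {"F2F verbal tutor": "Y", "Chatroom": "N"}),
]

def cross_tab(records):
    """Majority Y/N per (experienced, wanted) cell; None where no one answered."""
    votes = {}
    for experienced, wanted in records:
        for style, answer in wanted.items():
            votes.setdefault((experienced, style), []).append(answer)
    table = {}
    for experienced in {r[0] for r in records}:
        row = {}
        for style in STYLES:
            cell = votes.get((experienced, style))
            # blank = no preference; otherwise take the majority answer
            row[style] = None if cell is None else max(set(cell), key=cell.count)
            if style == experienced:
                row[style] = "-"  # a student's own style is not offered again
        table[experienced] = row
    return table

matrix = cross_tab(responses)
```

The majority rule here is an assumption made to keep the sketch small; the published table aggregates each group's responses in an unspecified way.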

PLANET ISSUE <strong>23</strong><br />

Irrespective of which feedback method they received, when asked whether the feedback would help them to achieve a better mark next time, 78% of all students answered yes and only 2% no (n=46). At no point did any student define feedback with respect to learning or understanding. This shows very clearly the dominance of surface/procedural over deep learning approaches (Knight, 2010).<br />


The differences between pre- and post-session<br />

questionnaire results regarding the different feedback<br />

styles appear to link closely to this surface learning<br />

(Table 4). The results suggest that, having had a<br />

positive experience from one feedback style, students<br />

preferentially choose styles that are similar to the one<br />

they have experienced, to the exclusion <strong>of</strong> other<br />

styles. The stark difference between groups <strong>of</strong><br />

students who had experienced different feedback<br />

styles shows this clearly (Table 4). For example, those<br />

students who had experienced verbal feedback<br />

appear to want to reinforce this behaviour through<br />

experiencing Skype and audio feedback. The<br />

experiences <strong>of</strong> using Skype and a chatroom appear to<br />

be linked, and these mutual preferences are not<br />

shared with other styles. It is also notable that all groups ask for face-to-face verbal tutor feedback,<br />

though, as noted below, the uptake <strong>of</strong> this is very low.<br />

Overall, these results suggest that no one style<br />

provides an overall better experience than any other,<br />

and that student preferences are more nuanced and<br />

dependent on the nature <strong>of</strong> the feedback itself<br />

(including its tone, timing and structure) (Hattie and<br />

Timperley, 2007). The free-text comments (Table 3)<br />

also suggest that the mark students received did not<br />

influence their perception <strong>of</strong> its usefulness.<br />

The strong positive response to the experience <strong>of</strong><br />

receiving feedback in these different styles (Table 3)<br />

highlights this point, and is useful for a number <strong>of</strong><br />

reasons. First, it suggests that when students are<br />

required to actively engage with feedback (rather<br />

than to be the passive recipients <strong>of</strong> feedback) they are<br />

much more likely to reflect and act upon it. Second, it<br />

shows that there should not be a single or predefined<br />

way <strong>of</strong> receiving feedback, and that this can be<br />

achieved in different ways and at different times. One<br />

student commented: “[I] found it much more useful<br />

than expected, making me more willing to consider<br />

other types <strong>of</strong> feedback I previously didn’t think I<br />

would have found useful.”<br />

Third, these components together converge on the notion of feedback as part of ongoing tutor-student dialogue focused on enhancing learning and developing deep rather than surface learning approaches.<br />

Part of this student engagement with feedback, however, also depends on its perceived quality and quantity. It is likely that many students (incorrectly) have the view that more feedback is better feedback. It is also likely that some tutor comments are misinterpreted, which was identified in some free-text comments (Table 3) (Orrell, 2006). Although student responses suggest that face-to-face contact (Table 4) and the opportunity to ask questions is important (Table 3), in practice the uptake of this opportunity (in the author’s view) is near zero, despite many promptings. This may therefore be an issue of students’ perception of tutor availability and approachability.<br />

Conclusions<br />

Feedback on assignments is a key component of student learning, but its effectiveness is contingent on a range of interrelated issues. Feedback style is one such issue, which has been explored in the project described here. The results show fundamentally that feedback that is perceived by students as useful, and which delivers in-depth information that is linked to higher future achievement, can be delivered effectively in any number of styles (Tables 3, 4). Although this emphasises the role of the tutor to provide this high-quality feedback, it also suggests that students have an important role to play through active engagement with feedback in order to effect change (Orrell, 2006).<br />

Part of this engagement may involve disentangling ‘assessment and feedback’, so that feedback can be about learning rather than just achievement through assessment (Nicol, 2007). The results presented here suggest that feedback, and therefore learning, can take place in more informal settings or in different ways without the need for (summative) assessment, and can be achieved through a range of media that promote tutor-student interaction (Higgins et al., 2002). The results also suggest that relatively small changes from current practice, to better embed this interaction and make feedback part of the active learning process, could have significant impacts on student satisfaction.<br />



Investigating students’ responses to different styles <strong>of</strong> essay feedback<br />

Jasper Knight<br />

References<br />

• Carless D. 2006 Differing perceptions in the feedback process, Studies in Higher Education, 31, 2, 219-233<br />
• Handley K., Szwelnik A., Ujma D., Lawrence L., Millar J. and Price M. 2007 When less is more: Students’ experiences of assessment feedback. Paper presented at the Higher Education Academy conference, July 2007. http://www.heacademy.ac.uk/resources/detail/when_less_is_more Accessed 21 June 2010<br />
• Hattie J. and Timperley H. 2007 The power of feedback, Review of Educational Research, 77, 1, 81-112<br />
• Higgins R., Hartley P. and Skelton A. 2002 The conscientious consumer: reconsidering the role of assessment feedback in student learning, Studies in Higher Education, 27, 1, 53-64<br />
• Knight J. 2007 Exploring the role of personal technologies in undergraduate learning and teaching practice, PRIME, 2, 2, 107-116<br />
• Knight J. 2010 Distinguishing the learning approaches adopted by undergraduates in their use of online resources, Active Learning in Higher Education, in press<br />
• Nicol D. 2007 Laying a foundation for lifelong learning: Case studies of e-assessment in large 1st-year classes, British Journal of Educational Technology, 38, 4, 668-678<br />
• Orrell J. 2006 Feedback on learning achievement: rhetoric and reality, Teaching in Higher Education, 11, 4, 441-456<br />
• Pain R. and Mowl G. 1996 Improving geography essay writing using innovative assessment, Journal of Geography in Higher Education, 20, 1, 19-31<br />

Jasper Knight jasper.knight@exeter.ac.uk<br />

<strong>GEES</strong> Photo Competition 2009/10<br />



Helen Wakefield, “Suburban Bush Fire”, Safety Bay, Perth, Western Australia, 18th April 2009

Assessment and the ‘pr<strong>of</strong>essional student’:<br />

engaging with academic integrity<br />

Understanding academic integrity is an issue for staff and students in Higher<br />

Education. The IDEA CETL has developed tutorial materials that cover, for example, the<br />

issue <strong>of</strong> scientific integrity (including the ethics <strong>of</strong> falsifying data) and unethical<br />

academic behaviour.<br />

This resource has been made available free <strong>of</strong> charge for the <strong>GEES</strong> HE community, and<br />

thanks go to the IDEA CETL for their help on this. The free teaching resource can be found<br />

at: http://www.idea.leeds.ac.uk/2009/12/a-free-resource-for-teaching-ethics/<br />

The IDEA CETL also provides other materials on ‘teaching inter-disciplinary ethics’, which<br />

are available from their website 1 , but require the purchase <strong>of</strong> a licence for use, with a range<br />

<strong>of</strong> licences available (individual, team, institutional).<br />

The JISC now <strong>of</strong>fer a Plagiarism Advisory Service 2 that provides resources, training, advice<br />

and guidance in all aspects <strong>of</strong> plagiarism prevention. The emphasis is placed on<br />

encouraging innovative, original work from students, and engaging students with the idea<br />

<strong>of</strong> ‘academic integrity’.<br />

The Academy JISC Academic Integrity Service (AJAIS) 3 is a joint initiative that aims to raise<br />

awareness and enhance understanding <strong>of</strong> the issues relating to academic integrity in<br />

Higher Education. One <strong>of</strong> the priorities <strong>of</strong> this project is to work with <strong>Subject</strong> Centres to<br />

identify subject-specific issues relating to academic integrity (such as assessment<br />

strategies, student skills development and plagiarism).<br />

The AJAIS team are currently talking to <strong>Subject</strong> Centres about the following aspects <strong>of</strong><br />

academic integrity:<br />

• issues highlighted by their discipline communities;<br />

• examples <strong>of</strong> good practice;<br />

• the resources staff and students need to tackle this issue and what is already out there;<br />

• how this work can be linked to guidance for students on developing skills for academic/<br />

pr<strong>of</strong>essional practice.<br />

What are YOUR Disciplinary Concerns?<br />

• Let us know about subject-specific issues relating to academic integrity<br />

• What challenges do your students experience in <strong>GEES</strong>-specific assessment tasks?<br />

• What resources would help you tackle issues surrounding academic integrity?<br />

Email us with your thoughts to: info@gees.ac.uk<br />

Web Links (all accessed 1st September 2010)<br />

1. IDEA CETL: Teaching Inter-disciplinary Ethics http://www.idearesources.leeds.ac.uk/<br />

Default.aspx?Id=20<br />

2. PlagiarismAdvice.org: http://www.jisc.ac.uk/whatwedo/services/pas.aspx<br />

3. AJAIS: http://www.heacademy.ac.uk/ourwork/teachingandlearning/assessment/<br />

integrityservice<br />



Helen M. Roe 1 | David P. Robinson 2<br />

1: School <strong>of</strong> Geography, Archaeology and Palaeoecology, Queen’s University <strong>of</strong> Belfast<br />

2: Media Services Unit, Queen’s University <strong>of</strong> Belfast<br />

The use <strong>of</strong> Personal Response<br />

Systems (PRS) in multiple-choice<br />

assessment: benefits<br />

and pitfalls over traditional,<br />

paper-based approaches<br />


Abstract<br />

In this paper we discuss our experiences <strong>of</strong> using<br />

wireless keypads for multiple-choice (MCQ)<br />

assessment and evaluate the benefits over traditional,<br />

paper-based MCQ assessment methods. A Personal<br />

Response System (PRS), TurningPoint ® , was trialled for<br />

two years for a Geography undergraduate class test,<br />

designed to provide both formative and summative<br />

assessment. The benefits <strong>of</strong> PRS are considered in<br />

terms <strong>of</strong> i) student performance; ii) students’<br />

experiences, which were assessed by questionnaires;<br />

iii) quality <strong>of</strong> feedback; and iv) staff time inputs. Of the<br />

students surveyed, 74% stated that they strongly<br />

preferred PRS over paper-based MCQ assessment,<br />

whilst 6% <strong>of</strong> students disagreed. Students praised the<br />

immediacy and graphical nature <strong>of</strong> feedback<br />

associated with PRS, but some criticised the<br />

inflexibility <strong>of</strong> PRS to change answers after a fixed ‘per<br />

question’ time interval. Staff time inputs were<br />

substantially reduced (ca. 75% less) for PRS than for<br />

the equivalent paper-based test. PRS was found to be<br />

a useful tool in MCQ assessment, but it has some<br />

disadvantages. We highly recommend PRS for<br />

formative assessment but caution its use for<br />

summative assessment in its current form and<br />

recommend that it only be used for assessments<br />

carrying a small mark-weighting.<br />

Introduction<br />

Studies over the last decade have repeatedly shown<br />

that ‘clicker’ or Personal Response System (PRS)<br />

technologies <strong>of</strong>fer many benefits in higher education.<br />

They can be used to test students’ knowledge and<br />

understanding (Dufresne et al., 1996), help students<br />

to engage interactively (Boyle and Nicol, 2003; Draper<br />

and Brown, 2004), and help students to learn<br />

concepts more easily and retain them for longer than<br />

students who sit learning passively (Wood, 2004). The<br />

anonymity <strong>of</strong> responses facilitated by PRS technology<br />

also allows staff to initiate class debate on sensitive<br />

topics that might otherwise be difficult to explore<br />

(Zhu, 2007). Indeed, the applications <strong>of</strong> PRS are so<br />

broad that in many undergraduate teaching<br />

institutions in North America the technology has led<br />

to a transformation in science teaching, particularly<br />

for small classes. There has been a move from a<br />

format dominated by lectures to more interactive,<br />

seminar-style courses that require students’ active<br />

participation (Wood, 2004).<br />

Many PRS systems are now available, and whilst each<br />

differs slightly in its capabilities, most comprise a<br />

hand-held keypad (a transmitter) which communicates<br />

with a receiver on a ‘base’ computer via radio or<br />

infra-red transmission. Presentation s<strong>of</strong>tware (e.g.<br />

Micros<strong>of</strong>t PowerPoint ® ) is typically used for audience<br />

communication. Questions are projected on-screen<br />

and members <strong>of</strong> the audience ‘click’ their responses,<br />

whilst a keypad light or LCD screen-indicator signals<br />

that a response has been registered. The responses<br />

are then processed, typically within seconds, providing<br />

the audience with results charts.<br />

Whilst the benefits <strong>of</strong> PRS for improving student<br />

engagement have been widely reported, there has<br />

been less emphasis in the recent literature on the<br />

utility <strong>of</strong> PRS as an assessment tool, particularly for<br />

summative assessment. This may reflect a number <strong>of</strong><br />

factors, including: i) technical difficulties that have<br />

hindered the configuration <strong>of</strong> PRS systems for formal<br />

assessment; ii) concerns over the reliability <strong>of</strong> PRS<br />

technology for recording students’ answers; and iii) a<br />


reticence on the part of practitioners to explore the<br />

medium for summative assessment, possibly<br />

reflecting a combination <strong>of</strong> the above.<br />

Notwithstanding this, there are studies dating back to<br />

the 1970s that describe the use <strong>of</strong> classroom clickers<br />

for assessment, particularly in physics and the<br />

mathematical sciences (for a review see Judson and<br />

Sawada, 2002). However, it has been acknowledged<br />

recently that the potential use <strong>of</strong> PRS as a tool for<br />

formal assessment requires further research (Elliott,<br />

2003). Given the relatively high costs associated with<br />

purchasing PRS, it is important that the full range <strong>of</strong><br />

applications and limitations <strong>of</strong> each system are<br />

explored by users prior to investment. Like any<br />

technology, PRS can be used creatively or skilfully,<br />

bringing many teaching and learning benefits, or it<br />

can be used clumsily or destructively (Wood, 2004).<br />

Communication <strong>of</strong> experiences, both good and bad,<br />

are vital if practitioners are to refine and enhance this<br />

form <strong>of</strong> technology-supported learning.<br />

In this paper we discuss the results <strong>of</strong> a two year pilot<br />

<strong>of</strong> PRS for a second year Geography undergraduate<br />

class test (multiple-choice) and consider the<br />

advantages and disadvantages over a more<br />

traditional, paper-based test format used in previous<br />

years. The class test forms 10% <strong>of</strong> the mark for a core<br />

module, ‘Field and Research Techniques in<br />

Geographical Practice’ (GGY2024), taken by 100-150<br />

students each year. The primary aims <strong>of</strong> the module<br />

are to develop students’ field data collection and<br />

analysis skills, which are achieved via a week-long<br />

Mediterranean field course run during the Easter<br />

vacation. The class test is held a few days before the<br />

field course to assess students’ knowledge <strong>of</strong> a series<br />

<strong>of</strong> background topics relating to the geographies <strong>of</strong><br />

the Mediterranean. These are covered in lectures and<br />

help students better contextualise issues that are<br />

developed in the field. The test significantly improves<br />

student attendance at these preparatory lectures.<br />

Feedback is provided in the lecture after the test,<br />

taking the form of a question-by-question run-through<br />

<strong>of</strong> the results. The test assessment is thus<br />

both formative and summative.<br />

Our motivation to trial PRS for the GGY2024 test was<br />

partly influenced by a desire to reduce staff marking<br />

time; in paper format, the multiple-choice (MCQ) test<br />

typically takes nearly two days to mark, and as this<br />

also needs to be completed before the field course,<br />

this puts pressure on staff. We also felt that the<br />

graphical nature <strong>of</strong> feedback associated with PRS<br />

might improve students’ understanding and retention<br />

<strong>of</strong> concepts prior to field discussion.<br />

For our study we used a PRS technology purchased by<br />

Queen’s University’s Media Services Department in<br />

2005 (TurningPoint ® , 2005-6 version). This s<strong>of</strong>tware<br />

embeds into Micros<strong>of</strong>t PowerPoint ® for audience<br />

communication. The system was used to run the test<br />

in two years, 2007 and 2008. In our evaluation <strong>of</strong> PRS<br />

we consider a range <strong>of</strong> factors, including its impact on<br />

student performance, students’ experiences and<br />

perceptions <strong>of</strong> the PRS interface, and impacts on staff<br />

time. Each stage <strong>of</strong> the set-up and marking process<br />

was timed and compared with the equivalent time<br />

inputs for the paper test, whilst students’ opinions<br />

were assessed via anonymous questionnaires. These<br />

included questions on i) the usefulness <strong>of</strong> a PRS<br />

practice session before the tests; ii) time allocation<br />

per question; iii) ease <strong>of</strong> using the keypads; iv) quality,<br />

speed and usefulness <strong>of</strong> feedback; and v) students’<br />

perceived impact <strong>of</strong> PRS on performance. Students<br />

were also asked to compare their experiences <strong>of</strong> PRS<br />

tests with paper-based MCQ tests. In 2007 44% <strong>of</strong> the<br />

class (n=62) responded; in 2008, 85% (n=85).<br />

System set up and preparation<br />

TurningPoint ® s<strong>of</strong>tware (version 1.1.2) was installed on<br />

a Macintosh computer (OSX 10.4.2) and MCQs were<br />

prepared as PowerPoint ® slides. A number <strong>of</strong> routine<br />

steps associated with session set up were followed.<br />

These are summarised below to provide an overview<br />

<strong>of</strong> the process. Readers are referred to the<br />

TurningPoint ® or other PRS system operational<br />

instructions for specific guidance with session<br />

preparation and data processing (TurningPoint 2010).<br />

Steps:<br />

1. Preparation <strong>of</strong> a ‘Participants’ (class) list which<br />

includes: a) students’ identification numbers; and b)<br />

a designated keypad number for each student.<br />

2. Preparation <strong>of</strong> MCQ questions (one question per<br />

slide) and identification <strong>of</strong> the correct answer for<br />

each question.<br />

3. Allocation <strong>of</strong> a time limit for each question.<br />

4. Performance <strong>of</strong> a trial run using a small number <strong>of</strong><br />

keypads (2-3) to check that the s<strong>of</strong>tware is<br />

functioning correctly.<br />

5. Inspection <strong>of</strong> all keypads to ensure they are<br />

functioning correctly.<br />
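In data terms, Steps 1-3 amount to pairing each keypad with a student identifier and each question with a correct answer and a time limit, and Steps 4-5 to checking that responding keypads are on the class list. A minimal sketch (hypothetical structures; TurningPoint® participant lists and session files use their own formats):

```python
# Minimal model of the PRS session set-up steps (hypothetical data
# structures, not TurningPoint's actual file formats).
from dataclasses import dataclass

@dataclass
class Participant:
    student_id: str    # Step 1a: student identification number
    keypad: int        # Step 1b: designated keypad number

@dataclass
class Question:
    text: str          # Step 2: one question per slide
    choices: list      # five or fewer answer choices, per 'best practice'
    correct: int       # index of the single correct answer
    time_limit_s: int  # Step 3: countdown allocated to the question

participants = [Participant("B00123456", 1), Participant("B00123457", 2)]
questions = [
    Question("Which rock type dominates karst landscapes?",
             ["Granite", "Limestone", "Basalt"], correct=1, time_limit_s=90),
]

def unassigned_keypads(responding):
    """Steps 4-5: flag responding keypads that are not on the class list."""
    assigned = {p.keypad for p in participants}
    return [k for k in responding if k not in assigned]
```

A trial run then reduces to checking that every keypad used in the practice session registers and is assigned, e.g. `unassigned_keypads([1, 2, 9])` would flag keypad 9.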

The keypads used for the test were the second<br />

generation <strong>of</strong> keypads issued with the TurningPoint ®<br />

system (Figure 1). These have fewer functions than<br />

some newer handsets that are currently available. A<br />

major limitation <strong>of</strong> the early generation keypads is<br />

that it is not possible to customize or ‘tag’ them to<br />

users prior to a TurningPoint ® session. It was thus<br />


necessary to supplement Step 1b by attaching a<br />

paper label to each keypad with students’<br />

identification details. This rather primitive method <strong>of</strong><br />

tagging allowed students to locate their keypads, which were distributed in alphabetical order on desks around the exam room, at the start of the test.<br />

For more advanced PRS systems this step is not<br />

necessary; users can key in or ‘authenticate’ their<br />

identification details at the start <strong>of</strong> a session via the<br />

handsets. This process <strong>of</strong> ‘paper label’ tagging was<br />

time-consuming, accounting for ca. 1.5 hours <strong>of</strong> the<br />

total preparation time in 2007 and ca. 1 hour in 2008<br />

when class size was smaller (Table 1). It nevertheless<br />

had one advantage in that by placing keypads in<br />

alphabetical order this effectively randomised student<br />

distribution in the exam room. This prevented friends<br />

from sitting near to each other and <strong>copy</strong>ing answers.<br />

Previous studies have shown <strong>copy</strong>ing or cheating can<br />

be an issue with PRS, particularly if keypads do not<br />

have a shield to cover keys (Zhu, 2007). The spacing<br />

issue is therefore important.<br />

Figure 1: Picture <strong>of</strong> a TurningPoint ® keypad<br />

and dongle receiver.<br />

Question preparation<br />

In preparing MCQ questions (Step 2 above), care was<br />

taken to limit the number <strong>of</strong> answer choices to five or<br />

less per question in accordance with ‘best practice’<br />

guidelines (cf. Zhu, 2007 p. 5). Questions were devised<br />

so that there was only one correct answer per<br />

question. This answer format was partly constrained<br />

by s<strong>of</strong>tware limitations. It was not possible to use<br />

some more complex answer formats with the<br />

s<strong>of</strong>tware available, for example, asking students to<br />

rank or order their answers or enter an absolute<br />

numerical value in response to a question, although<br />

some <strong>of</strong> these formats are available with the newer<br />

version <strong>of</strong> TurningPoint ® and with some other systems.<br />

These alternative answer-formats had not been used<br />

previously for the class test so their lack <strong>of</strong> availability<br />

was not a disadvantage. In total 20 questions were<br />

used for the one hour test.<br />

An important consideration in the preparation <strong>of</strong><br />

question slides is the amount <strong>of</strong> time to display, and<br />

allow students to answer, each question (Step 3). This<br />

issue gains greater significance for summative rather<br />

than purely formative assessment and so requires<br />

careful consideration. PRS systems invariably include a<br />

‘countdown’ timer-facility that enables a fixed time to<br />

be allocated to each question or slide. After the time<br />

elapses, answering or ‘voting’ closes. Voting can also<br />

be closed ‘manually’ by staff when all students have<br />

provided an answer, which can be tallied on-screen as<br />

students ‘click’ their responses. A major bonus <strong>of</strong> PRS<br />

is that answers can be changed as many times as<br />

keypad users wish within the ‘voting’ period. We felt<br />

that it would disadvantage students to close voting<br />

when all students had ‘clicked’ a response, as it would<br />

be impossible to assess whether these represented<br />

students’ final answers. We also recognised that if too<br />

short or long a time was allocated then this might<br />

pressure students into making a wrong answer or,<br />

conversely, promote an air <strong>of</strong> distraction in the exam<br />

room as students waited for the next question. As a<br />

compromise, 2 minutes was allocated to each<br />

question during the 2007 test. This was reduced to 1<br />

minute and 30 seconds in 2008 following<br />

overwhelming feedback from the student surveys that<br />

2 minutes was too long.<br />
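Because answers can be changed as often as students wish until voting closes, the voting window reduces to a last-write-wins rule per keypad. A sketch under that assumption (the click-event format is hypothetical):

```python
# Collect clicks during one question's voting window: each keypad may click
# many times, and only the last click before the window closes counts.
# Hypothetical event format: (seconds_since_voting_opened, keypad, choice).

def final_answers(clicks, window_s):
    answers = {}
    for t, keypad, choice in sorted(clicks):  # replay clicks in time order
        if t <= window_s:                     # ignore clicks after voting closes
            answers[keypad] = choice          # last write wins
    return answers

clicks = [(5, 1, "B"), (40, 1, "C"),  # keypad 1 changes its mind
          (12, 2, "A"),
          (95, 2, "D")]               # too late for a 90-second window

result = final_answers(clicks, 90)    # {1: "C", 2: "A"}
```

This also shows why closing voting "manually" as soon as every keypad has clicked would be unsafe: an early click need not be a final answer.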

As the class occasionally includes students with visual<br />

disabilities, questions were read out loud as well as<br />

being displayed on screen. Over 90% <strong>of</strong> the students<br />

surveyed found this helpful. This took on average ca.<br />

20 seconds per question and the ‘voting’ or answering<br />

period was activated after this, giving students a few<br />

additional seconds to consider their responses.<br />


S<strong>of</strong>tware-related difficulties<br />

A number <strong>of</strong> technical difficulties arose when<br />

implementing Step 3 in both years <strong>of</strong> running the test.<br />

These were associated with running the TurningPoint ®<br />

s<strong>of</strong>tware on a Macintosh platform. The test was thus<br />

transferred to a PC and no further problems were<br />

encountered.<br />

Implementation<br />

As few students registered for the module had any<br />

previous experience <strong>of</strong> PRS, a 15 minute practice<br />

session was run in a lecture prior to the tests to<br />

enable students to familiarise themselves with the<br />

interface (Figure 2). The majority <strong>of</strong> students surveyed<br />

stated that they found the session increased their<br />

confidence in using PRS, although a small number <strong>of</strong><br />

students (3% in 2007, 9% in 2008) stated that the<br />

system was so easy to use that they did not need a<br />

run-through. This session also enabled keypads to be<br />

re-checked, augmenting Step 5. Reliability and issues<br />

<strong>of</strong> resistance to tampering have been cited as<br />

important factors in selecting a response system<br />

(Burnstein and Lederman, 2001). A small number <strong>of</strong><br />

keypads were found to be non-functional during the<br />

practice sessions, resulting from battery problems,<br />

and were replaced.<br />

Early in the 2007 test it was noticed that many<br />

students were answering questions very quickly<br />

(within 30 seconds of voting opening). Whilst this<br />

implied that students were confident in their answers,<br />

it also indicated that some students may have had a<br />

poor perception <strong>of</strong> the time left. The number <strong>of</strong><br />

students answering within 30 seconds was noted<br />

during both tests. In 2007 this was 62%; in 2008, 54%.<br />

Data processing was carried out immediately after<br />

each test. In both years it took less than 30 minutes to<br />

produce a spreadsheet with marks for distribution to<br />

students. This represented a significant reduction in<br />

marking time over the previous paper-based test,<br />

which took over 11 hours to mark for a class <strong>of</strong> 150<br />

students (Table 1).<br />
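The post-test data processing is essentially a join of each keypad's final answers against the answer key and the participants list. A minimal sketch of that spreadsheet step (hypothetical formats, not TurningPoint®'s actual export):

```python
# Score each student's final responses against the answer key and write a
# spreadsheet of marks (hypothetical identifiers and formats).
import csv
import io

answer_key = {1: "B", 2: "D", 3: "A"}                 # question -> correct choice
keypad_to_student = {7: "B00123456", 8: "B00123457"}  # from the participants list

# keypad -> {question: final choice}
responses = {7: {1: "B", 2: "D", 3: "C"},
             8: {1: "B", 2: "A"}}                     # question 3 left blank

def marks_csv(responses):
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["student_id", "score", "out_of"])
    for keypad, answers in sorted(responses.items()):
        # blank or wrong answers simply score zero
        score = sum(1 for q, correct in answer_key.items()
                    if answers.get(q) == correct)
        writer.writerow([keypad_to_student[keypad], score, len(answer_key)])
    return out.getvalue()

print(marks_csv(responses))
```

Automating exactly this join is what collapses the marking stage from hours to minutes in Table 1.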

Figure 2: Photograph <strong>of</strong> a student practice session with PRS.<br />


Table 1: Data relating to the paper-based and PRS class tests, 2006-2008.<br />

Measure | 2006 | 2007 | 2008<br />
Number of students who took test | 151 | 142 | 100<br />
Format of test | Paper: MCQ | Personal response: MCQ | Personal response: MCQ<br />
Number of questions left blank per student | 0.03 | 0.06 | 0.07<br />
% students who answered within 30 seconds after ‘voting’ opened | (unknown) | 62% | 54%<br />
Average mark | 60% | 51% | 60%<br />
Staff time to prepare test | 2 hours, 30 mins | 2 hours, 30 mins | 2 hours, 15 mins<br />
Staff time to mark test, check and collate results | 11 hours, 30 mins | 30 mins | 30 mins<br />
Staff time to display results in graph format | n/a | 1 hour | 45 mins<br />
Total staff time involved | 14 hours | 4 hours | 3 hours, 30 mins<br />

The correct answers to the MCQs were relayed to students in dedicated feedback sessions using a correct answer display facility in TurningPoint®. Like most PRS systems, this system permits charts to be generated showing the percentage of students who have selected the correct and incorrect answers for each question (Figure 3). These provide an excellent platform for discussion, particularly when combined with standard graphics or text slides to remind students of the context of the questions. For example, for a question relating to karst geomorphology, images of the features in question could be displayed next to the chart.
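The per-question feedback charts are built from a simple tally: for each question, the percentage of the class selecting each option. TurningPoint generates these charts itself; the sketch below, with hypothetical vote data, only illustrates the underlying calculation.

```python
# Illustrative tally for PRS feedback charts: percentage of the class
# choosing each option, per question. Vote data below are invented.
from collections import Counter

votes = {                          # question -> options chosen by the class
    "Q1": ["A", "A", "B", "C", "A"],
    "Q2": ["D", "D", "B", "D", "D"],
}

def option_percentages(choices):
    """Map each answered option to the percentage of students who chose it."""
    counts = Counter(choices)
    total = len(choices)
    return {opt: 100.0 * n / total for opt, n in sorted(counts.items())}

for q, choices in votes.items():
    print(q, option_percentages(choices))
```

Plotted as a bar chart next to the question, a distribution like this makes it immediately visible which distractors attracted students, which is what makes the charts useful for targeted discussion.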

Student performance

Student performance during the two PRS test years varied, with a mean test mark of 51% in 2007 and 60% in 2008 (Table 1). For comparison, the mean marks in the two previous years when the test had run in paper format were 60% (2006) and 57% (2005). It is difficult to explain these variations, as factors unrelated to test format are likely to have influenced the results. For example, different questions were used each year, which may have affected performance. In spite of this, we speculate that the dip in performance in 2007 may have been slightly influenced by students' lack of familiarity with the system and the speed with which many students answered the questions, perhaps without sufficient reflection on their answers. In the 2008 test, the importance of reflecting on answers before answering was stressed more fully at the start of the test, and this may have contributed to the improved performance (and slower class answering time) recorded that year.


Figure 3: Example of an MCQ question and answer chart showing students' responses.

Interestingly, a slightly larger proportion of students failed to answer 100% of the questions in the PRS tests than in the 2006 paper test (Table 1). This is surprising, as students were well aware of the need to guess answers if they did not know them. We can only speculate that students may on rare occasions have thought that they had 'clicked' when they had not. They may not have checked the keypad indicator light when pressing the answer key, or they may have pressed a wrong key with no answer allocated to it. Clearly, if students fail to take sufficient care in the 'clicking' process then this severely limits the utility of PRS as an assessment tool. Whilst it is relatively easy to check that all students have answered a particular question with small classes, with a large class (>75 students) it is difficult to monitor which students have failed to 'click' during the test itself without incurring significant time delays, so this was not attempted.

Students' opinions of PRS

In the surveys, a large majority of students (74% in both 2007 and 2008) agreed with the statement 'I strongly prefer PRS format to a paper-based test'. When asked via an open question to state what they liked about PRS (Box 1), these students frequently cited the rapid distribution of marks and the graphical feedback. Some students noted that they liked the class response charts because they gave them reassurance that if they had got the answers wrong they were not alone. Many students commented that the technology was innovative and enjoyable to use.

"Great system, I really enjoyed using it. It was far better than doing the test on paper"

"I really appreciated getting my marks back so quickly"

Box 1: Examples of positive comments from students on PRS.

"I think it's a good system, however, I like to look back at my answers to make sure I'm happy with them. This cannot be done with the digital system, and I was unsure of the way I left some answers, with a paper test I could have gone back"

"It didn't have the feeling of a proper test so I didn't work as hard as I would have done for a conventional test"

Box 2: Examples of negative comments from students on PRS.

In terms of students' perceptions of their test performance, the majority of the students surveyed (>90% in both years) agreed that they thought that the test format made no difference to their marks. This finding, together with the lack of a clear improvement in actual performance between the PRS and paper-test years, confirms that students were not incentivised to work harder by the PRS technology. Indeed, for a minority of students the test format may have distracted them from focusing on their answers, as we mentioned above. This is also reinforced by the fact that a small percentage (


By far the most significant drawback of running the GGY2024 test with PRS was the need to allocate a fixed 'per-question' time limit, which clearly disadvantaged students who do not cope well with answering questions under pressure. The paper-based MCQ test format is better for such students. This problem cannot easily be rectified with the technology available without repeating the test and voiding the first set of answers, which would probably be unpopular with most students, and impractical.

Advantages — Students
• Innovative assessment medium praised by the majority of students
• Rapid generation of results and feedback
• Graphical feedback aids student engagement with course materials

Advantages — Staff
• Significantly reduced marking time
• Slightly reduced preparation time (significantly reduced if students' identification details can be authenticated via keypads)
• No need for photocopying / paper
• Potential for marking errors likely to be significantly less than for paper-based MCQ test
• Easily generated answer charts provide a clear visual gauge of questions which students struggled with, aiding targeted remedial teaching

Disadvantages — Students
• Fixed 'per-question' time interval limits the capacity for students to review answers at their own pace
• Some students unfamiliar with PRS expressed lack of confidence in the system
• Students with physical disabilities may struggle with small buttons on keypads

Disadvantages — Staff
• Significantly greater cost
• Keypad reliability / tampering issues are a concern but considered to be of low incidence and can be detected
• Software-related problems, e.g. difficulties with Macintosh interface in our experience
• PRS not suited to other class test answer formats, e.g. short answers, diagram plotting
• Staff training in the use of the PRS interface required

Table 2: Advantages and disadvantages of running the GGY2024 test with PRS in comparison to paper-based format.


The issue of staff training in the use of PRS is also a concern to practitioners, particularly for institutions without media support staff. Although PRS technology is straightforward to use, and would probably take only a few hours to learn for anyone familiar with presentation software (e.g., PowerPoint®), the difficulties that we encountered relating to the lack of cross-platform software versatility (Macintosh versus PC) were frustrating and would have required specialised knowledge to resolve. These compatibility concerns may also be relevant for other PRS systems and should be researched before a system is purchased, particularly for institutions using both platforms. Clearly this and other issues must be weighed up alongside the more obvious cost-related factors when departments are considering PRS.

Our findings regarding student performance neither strengthen nor weaken the case for PRS, although our cautionary points regarding students being slightly distracted by the interface and concerns over time remaining should be heeded by staff contemplating running MCQ tests via PRS. Previous research on the impact of PRS on student performance remains equivocal, with some studies suggesting that the technology improves performance (e.g., Poulis et al., 1997; Hake, 1998) and others suggesting that it makes no difference (e.g., Brown, 1972). In considering this issue, an important distinction must clearly be drawn between improvements that stem from application of the technology per se (e.g., through incentivisation) and the wider pedagogic benefits gained from its skilful application.

As a result of our experiences, we would strongly recommend the use of PRS as a formative assessment tool in undergraduate teaching, particularly for large classes. PRS is also well suited to knowledge-centred assessment in the Earth and Environmental Sciences, because of the great diversity of information that can be displayed and assessed in graphical format. However, given the problems relating to the fixed per-question time issue and the anxieties expressed by a small minority of students in using PRS, we recommend that the technology in its current form only be used for formative assessments that carry a relatively small mark weighting. This may mean that the technology is more applicable to first or second year undergraduate modules than to final year assessment. We also recommend that staff contemplating using PRS as an assessment tool should: i) carefully review different systems to check that the response options available fulfil specific needs; ii) allow plenty of time for session set-up, including trouble-shooting and knowing where to turn for technical support; iii) run practice sessions to ensure that students are comfortable and competent with the technology; iv) perform trials to consider the important issue of time allocation per question; and v) ensure that students sit at least 1 metre apart to minimise the incidence of cheating.

In the longer term, as PRS technology evolves, the problems relating to the allocation of a fixed 'per-question' time limit in MCQ testing may be overcome, making PRS a more attractive tool for summative as well as formative assessment. Indeed, alternative response systems have recently been developed (e.g., Testingpoint®, also manufactured by TurningPoint Technologies®) which allow students to work independently with a keypad to answer questions that are presented on paper under exam conditions. Responses are not relayed to the base computer until the end of the session, thus allowing students to review their answers. Whilst this new, 'self-paced' technology resolves the per-question time limit issue, it has other disadvantages. For example, it reverts back to the use of paper and it does not permit instantaneous display of results charts that can be used for discussion or for encouraging interaction with students. This technology was not available when we ran our tests. Increasing student exposure to PRS via a variety of classroom contexts may also engender a greater sense of trust in the medium, thus diminishing the technology-related concerns that some of our students expressed. This may in turn pave the way for a wider 'culture' change in the way in which PRS-supported learning and assessment are perceived, as has already been the case in many North American universities.




References

• Boyle J. and Nicol D. 2003 Using classroom communication systems to support interaction and discussion in large class settings, Association for Learning Technology Journal, 11, 43-57
• Brown J.D. 1972 An evaluation of the Spitz student response system in teaching a course in logical and mathematical concepts, Journal of Experimental Education, 40, 3, 12-20
• Burnstein R.A. and Lederman L.M. 2001 Using wireless keypads in lecture classes, The Physics Teacher, 39, 8-11
• Draper S.W. and Brown M.I. 2004 Increasing interactivity in lectures using an electronic voting system, Journal of Computer Assisted Learning, 20, 81-94
• Dufresne R.J., Gerace W.J., Leonard W.J., Mestre J.P. and Wenk L. 1996 Classtalk: A classroom communication system for active learning, Journal of Computing in Higher Education, 7, 3-47
• Elliott C. 2003 Using a personal response system in economics teaching, International Review of Economics Education, 1, 1, 80-86
• Hake R.R. 1998 Interactive-engagement versus traditional methods: a six-thousand student survey of mechanics test data for introductory physics courses, American Journal of Physics, 66, 1, 64-74
• Judson E. and Sawada A. 2002 Learning from past and present: Electronic Response Systems in college lecture halls, Journal of Computers in Mathematics and Science Teaching, 21, 167-181
• Poulis J., Massen C., Robens E. and Gilbert M. 1997 Physics lecturing with audience paced feedback, American Journal of Physics, 66, 5, 439-441
• TurningPoint 2010 http://www.turningpointtechnologies.co.uk accessed 1st October 2010
• Wood W.B. 2004 Clickers: a teaching gimmick that works, Developmental Cell, 7, 6, 796-798
• Zhu E. 2007 Teaching with Clickers, Center for Research on Learning and Teaching (CRLT), University of Michigan, Occasional Papers, 22, 8

Helen Roe h.roe@qub.ac.uk
David Robinson david.robinson@qub.ac.uk

GEES Photo Competition 2009/10
Andrew Elvidge, University of East Anglia
"From out the mist"
Mount Bromo, Indonesia, 20th August 2009




Bringing digital stories into assessment
Kelly Wakefield 1 | Derek France 2
1: Department of Geography, Loughborough University
2: Department of Geography and Development Studies, University of Chester

Abstract

Since 2007, the Geography and Development Studies department at the University of Chester has incorporated the use of student-generated digital stories into a core 20 credit, first year module 'Foundations for Successful Studentship'. This innovative approach to the fieldwork element of the module provides students with the opportunity to design and create their own digital story. This paper considers student and staff perceptions of this technology and reflects upon the practice of bringing digital stories into assessment and the techniques used.

Introduction

The modern day undergraduate arriving at university holds a tool-box of technical knowledge that prepares them for study. Students entering HE are more technologically capable than ever before and have been defined as 'digital natives' (Prensky, 2001). Those students who have grown up with digital technology are able to perform multiple tasks simultaneously. This cohort of students has been referred to as the 'net generation', characterised by those who are digitally literate, highly Internet familiar, highly social, crave interactivity in image-rich environments, and do not think in terms of technology but in terms of the activity which technology enables (Oblinger and Oblinger, 2005). The increasing technology available to students and staff is creating a new way of incorporating digital technologies into HE assessment. Indeed, Prensky (2009) now advocates the use of 'digital wisdom' and 'digital enhancement' as a way of describing how students look at technology and what it can do for them.

Universities are being challenged to compete with the seemingly exhaustive advancement in digital and Web 2.0 technologies. Widening participation, demographic change, competition and funding are swirling together to create a 'perfect storm'. The implications of these factors for universities may be enormous as institutions need to embrace technology, rather than be threatened by it (Bradwell, 2009), as "a greater focus on technology will produce real benefits for all" (Department for Education and Skills, 2005, p24).

This article presents provisional thoughts from a mixed-methods analysis of student-generated digital stories supporting a core Level 4 module. In this article, a digital story refers to a collection of still images, video and audio produced with free and readily available software. Findings will explore the use of this particular technology in assessment and how it can specifically be used to support fieldwork. These will include the perceptions and reflections of students from the University of Chester, and staff perceptions from attendees at the GEES Assessment for Learning Conference, June 2009.

Rationale

Incorporating a novel assessment method such as a digital story into fieldwork has the potential to increase student engagement, and the digital story has become an integral component of the field report. The first year module 'Foundations for Successful Studentship' has on average 70 students per year and the fieldwork element takes place in February. Three locations are used: the Single Honours Geography students spend a week on a residential field course in Slapton Sands, Devon; the Combined Honours Geography students use Mid-Wales as their field area; and International Development Studies students work more locally, in Chester and Liverpool. Figure 1 shows students working in small groups on a research project particular to their fieldwork locale. With the aid of a story board and some planning, they use a digital camera and a tripod to capture the still images, video and audio required to build a digital story.

The students can choose to work in groups or individually to create their digital story from the downloaded material. Free software programmes such as Windows Movie Maker and Audacity facilitate the production of the digital story, which is advised to be around 3 minutes in duration.

Assessment criteria

The student's individual field report is a blend of the traditional written form with one digital component comprising either video footage and/or still images which are embedded (via a hyperlink) into the written methodology or results/analysis sections. The production of the digital component(s) is completed prior to the submission of the final report and is uploaded onto the institutional VLE. The digital stories, once uploaded, cannot be seen by anyone other than the student authors and staff who have access. The digital story element of the research report is worth 30% of the student's final mark, with the other 70% accredited to the individually written component. The assessment criteria for the individual field report are outlined below in Table 1, and highlight the weightings between the written and digital forms.

Figure 1: First year students experimenting with digital technologies on fieldwork in Devon


Research report (weighted 70%)
• Research objectives
• Project rationale/justification
• Methods of data collection
• Analysis and interpretation
• Conclusion (including critique)
• Writing style
• Use and incorporation of digital component

Digital component (weighted 30%)
• Quality of content
• Content complements / enhances the report
• Quality of presentation
• Sound quality
• Creativity

Table 1: Assessment criteria for the individual field report.

Student evaluation

This paper draws upon a group of 148 first year students' learning experiences of completing a digital story assessment (74 females and 74 males). Data were gathered through bespoke pre- and post-digital story questionnaires and through focus groups. The pre-questionnaire gathered the students' previous experiences of digital technology and sought to capture how each student felt about their own competency with digital technologies. The post-questionnaire aimed to capture the students' views on digital story making and how they felt this 'created knowledge' had impacted on their personal learning experience. Surveys were anonymised using a matching code, enabling a trace of how an individual's opinion had changed. The response rates to the surveys were high, at 77% for the pre-digital story survey and 45% for the post-digital story survey. A focus group was then carried out after the students returned from fieldwork, using volunteers from the whole cohort. The focus group themes and main discussions were pulled from the initial responses to both questionnaires; these included talking about the students' engagement with the subject and enhancement of the learning experience.
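The anonymised matching-code design described above can be illustrated with a small sketch: pre- and post-questionnaire responses keyed by the same code are paired so that an individual's change of opinion can be traced without identifying the student. The codes and Likert answers below are invented for illustration, not the study data.

```python
# Hypothetical illustration of matched pre/post survey codes.
pre = {"K17": "agree", "M42": "totally agree", "P03": "disagree"}
post = {"K17": "totally agree", "P03": "agree"}   # M42 did not return the post survey

def matched_changes(pre, post):
    """Return code -> (pre answer, post answer) for respondents present in both surveys."""
    return {code: (pre[code], post[code]) for code in pre.keys() & post.keys()}

changes = matched_changes(pre, post)
```

Only respondents who completed both surveys appear in the matched set, which is why the differing pre (77%) and post (45%) response rates matter for any paired analysis.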

Results and discussion

Pre- and post-digital story questionnaires

Data collected from the pre-questionnaire highlighted the students' confidence in their competency with digital technologies; the students were asked if they felt competent when using a digital camera. Sixty percent (all data are for 2008 and 2009) reported that they totally agreed with this statement. Complementing this statistic, 62% of students totally agreed or agreed to feeling competent when using a digital camera to record video. This showed the students' perception of their competency was similar, whether working with still images or video. Students expressed their expectations of creating digital technologies in a more qualitative format on the pre-questionnaires, with both excitement and trepidation:

"Never done it before, learning something new"

"Technology, I enjoy creativity"

"Depends how creative I am with these skills, I know how to use it [the technology] but I have no idea whether or not it will be good enough"

The most frequent concern from the students was regarding issues of never having created a digital story before, rather than any fear of creating the digital story or problems with using the equipment or software.

The post-digital story questionnaire highlighted that the nature of the digital story assessment had impacted on student learning. Responses were made such as creating a digital story:

"enhanced my learning experience of the subject" (84% totally agree or agree with this statement)

"made me more interested to learn about the subject" (64% totally agree or agree with this statement)

"made me more motivated to learn about the subject" (58% totally agree or agree with this statement)

These statements show that the students were able to identify potential enhancements of incorporating digital technology within the assessment, and in each case more than half of the students agreed or totally agreed with the statement.

Student focus groups

The focus groups were facilitated with volunteers across the first year (n=7, 5 and 8) and themes that were discussed included the learning experience and the practicalities of creating the digital story in a group setting. The major concern that emerged was that the students felt nervous of filming the videos and creating the audio voice-over track. Some students expressed a nervousness behind the camera in directing the story, but also at hearing their own voices and being seen 'on film'. These concerns, however, were prior to any filming, and once the students were provided with a tripod and digital camera out in the field, the nerves started to wear off. Student focus group discussions reported:

"Not very confident about it [digital story telling] at first, unsure, until we had a run through with what we had to do"

"Did a digital story for 3 minutes. Everyone had a go at being on the camera and featuring in it. Some were more comfortable in front of the camera, some did more recording"

The practicalities of allowing students to create their own digital story are simple. Once the students have their equipment set up, the footage can be recorded with or without audio. However, it is important to allow students time to practise this new technique and gain confidence before creating their assessed digital story. The essence of the first day in the field is therefore to allow the students to record anything - it does not have to be serious and can be a spoof. As long as the students plan their filming with the aid of a storyboard, practising the technique of creating a digital story is the most important factor.

"We did a practice digital story which helped even though we were messing about and we got into it…better to practise than just doing the serious version straight off"

"It enhanced our experience; we captured things on the podcast that you might forget"

The learning experience that the students have expressed manifests itself in the technical side to the making of the digital story.

"It was an advantage to do it [the audio voice over] again when we got back to university…If you stand in front of the camera it is best to maintain eye contact rather than reading from the paper"

"I learnt how to do digital story telling and the information stands out more…the learning is better"

Staff perceptions

At the 2009 GEES Assessment for Learning Conference, a workshop was run called 'Feeling Creative? How to bring digital stories into assessment'. This session provided practical experience of creating a digital story for academic staff, which could then be included in the participants' own assessment practices. In contrast to the students' perceptions prior to creating a digital story, which were almost totally positive, the workshop participants used words such as 'fear', 'nervous' and 'apprehension'.

Figure 2: A word cloud (www.wordle.net) representing academic staff participants' pre-workshop perceptions of creating a digital story; the most frequent words are in a larger font size. (All words that were mentioned have been included in the diagram, n=45).
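The word cloud in Figure 2 sizes each word by how often participants used it. That counting step, performed internally by tools such as www.wordle.net, amounts to a simple frequency tally; the sample words below are hypothetical, not the actual workshop data.

```python
# Illustrative word-frequency tally of the kind underlying a word cloud.
# Sample responses are invented, not the n=45 workshop data.
from collections import Counter

responses = ["nervous", "interesting", "Nervous", "creative", "new", "nervous", "idea"]

freq = Counter(w.lower() for w in responses)   # case-insensitive counts
ranked = freq.most_common()                     # most frequent words come first
# The most frequent words would be rendered in the largest font.
```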



The words in Figure 2 relate to the participants' own feelings, and "nervous", "interesting", "creative", "idea" and "new" were the words most frequently used to describe the pre-workshop perceptions. In some cases it seemed highly likely that this anxiety would be passed on to the students. Post-workshop feedback was extremely positive, with a number of tangible outputs from participants. The qualitative comments were gathered from workshop respondents through email and post-workshop feedback forms. These suggest previous anxieties had been overcome:

"It was amazing how much could be achieved in such a short space of time"

"I'm about to introduce a 'digital story' element to our first year fieldcourse"

"I really felt self-conscious about recording myself speaking..... I have now managed to get over that one."

"I came away thinking about how this type of short digital story could be used in a variety of assessment methods"

Conclusions

The digital story element in the ‘Foundations for Successful Studentship’ module assessment can be a hook that encourages students to engage more effectively with the subject, specifically fieldwork methodology and enquiry. While the students reported that the technology motivates them and increases their interest in the subject, the evidence suggests that it also enhances the learning experience. In contrast with the staff participants at the GEES annual conference, the students’ concerns prior to creating the digital story related more to the practicalities of filming the story and being filmed, whereas the staff perceptions extended to nerves and fear about using the technology, and to how students might react to this novel assessment practice.

More generally, there is a sense that students are arriving in HE already competent with digital technologies. Rather than viewing technology as a barrier that hinders progress, HE institutions and staff should recognise that technology is here to advance and facilitate the pedagogies already in use.

References

• Bradwell P. 2009 The Edgeless University: Why Higher Education Must Embrace Technology, Demos, London
• Department for Education and Skills 2005 Harnessing Technology: Transforming Learning and Children’s Services, http://www.dfes.gov.uk/publications/e-strategy/ Accessed 21 June 2010
• HEFCE 2009 E-learning Strategy, http://www.hefce.ac.uk/pubs/hefce/2009/09_12/09_12.pdf Accessed 21 June 2010
• Oblinger D. G. and Oblinger J. L. 2005 Educating the Net Generation, Educause, http://www.educause.edu/educatingthenetgen/ Accessed 21 June 2010
• Prensky M. 2001 Digital Natives, Digital Immigrants, On the Horizon, 9, 5, MCB University Press, http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf Accessed 21 June 2010
• Prensky M. 2009 H. Sapiens Digital: From Digital Immigrants and Digital Natives to Digital Wisdom, Innovate, 5, 3, 1-9

Kelly Wakefield K.R.Wakefield@lboro.ac.uk
Derek France d.france@chester.ac.uk



Claire Jarvis | Jennifer Dickie | Gavin Brown
Department of Geography, University of Leicester

Aligned assessment of technology-mediated field learning

Abstract

Mobile technology-mediated learning is increasingly being adopted in geographical contexts, but considerably less thought has been given to how assessment protocols might be adjusted so that the overall learning experience remains appropriately aligned. In this case study, in which a suite of locationally situated mediascapes was constructed and used as part of a 2nd year Human Geography field course to Dublin, we explore our success, or otherwise, in designing an assessment regime that captures the field experience as a whole. Evaluation, conducted via a focus group, identified that using a combined mediascape-essay approach as a major component of the assessment successfully captured the main elements of the learning and teaching experience and facilitated deeper learning.

Introduction

Mobile technology-assisted learning is relatively new within geography and other subjects, although its use is rapidly increasing (Lynch et al., 2008). With this in mind, it is perhaps surprising that discussion of the assessment of learning experiences incorporating mobile methods is rare both in geography and beyond. This is a potentially significant omission, since Biggs (2003) has highlighted that matching teaching and learning activities to their assessment is an important component of a constructively aligned programme designed to deepen student learning. The connection between student learning and assessment is well established; students focus their time, by and large, on work that contributes to their grade (Gibbs, 1999).

In looking at connections between mobile technology-assisted learning and assessment within a constructively aligned curriculum, practitioners need to look more closely at the motives and aspirations for incorporating technology. We do not advocate assessing using technology simply because it has been incorporated within the student experience. Rather, we suggest that a significant adjustment in learning and teaching style away from traditional norms should trigger reflection on whether adjustments to assessment are needed and, if so, of what type.

In some cases, mobile technologies offer a more efficient replacement for traditional learning and teaching methods and an opportunity for students to practise basic generic skills. However, they do not otherwise challenge the status quo of the overall student learning experience or outcomes. To alter the pattern of assessment in ways that over-emphasise the role of mobile technologies would be to misalign the programme. Where the aim is to alter and enrich student activities and experiences significantly using novel technology, assessment should ideally also change. This can be problematic, since assessing all four learning domains (cognitive, affective, conative and psychomotor; Reeves, 2006) is not straightforward, and focusing assessment strategies on what is (relatively) easy to measure is practical, replicable and equitable. This paper explores the assessment of a fieldcourse module to Dublin that seeks to use (inter alia) mobile learning and teaching methods to deepen students’ critical engagement with concepts of national identity, in this case by researching the social construction of ‘Modern Ireland’. Mobile learning and teaching approaches are an integral component of this course.


Methods

Mobile software context: Mediascape

The pedagogic approach we employed draws on the concept of the “mediascape”, as facilitated by the freely available mscape software. Here, mediascapes were embedded within more traditional field teaching methods involving staff-led tours, discussions and student-led group research projects. A mediascape (m-scape) is composed of sounds, images and video placed in the (urban) landscape which can be activated through the use of a GPS-enabled handheld computer (PDA).
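The paper does not describe mscape’s internals, but the trigger mechanism just described — media activated when a GPS fix enters a zone — can be sketched roughly as follows. The zone names, coordinates, radii and file names here are invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical circular trigger zones: (name, lat, lon, radius_m, media file).
zones = [
    ("O'Connell Street", 53.3498, -6.2603, 75, "oconnell.mp3"),
    ("Temple Bar", 53.3454, -6.2646, 50, "temple_bar.mp3"),
]

def triggered_media(lat, lon):
    """Return the media files whose trigger zone contains the current fix."""
    return [media for name, zlat, zlon, radius, media in zones
            if haversine_m(lat, lon, zlat, zlon) <= radius]
```

In the field, a loop would poll the device’s GPS receiver and play any newly triggered files; the distance test above is the core of that loop.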

Structured use of mediascape

The multi-modal mediascapes performed two roles within the field course. Firstly, in addition to a staff-led introductory tour, they were used to guide students during their initial experiences of the city and to provide training in observation, both by example and through situated questioning. Secondly, the students themselves developed mediascapes as a digital format for expression and critical reflection on their findings in the field. In small groups, students collected photographic and sound-based materials to expose critical processes behind the material rebuilding of Dublin, while others compiled their observations regarding the cultural representations of Irishness in the city. In this, we aimed to align our instructional design with the module goals and content, and also to align learner tasks with those we had exemplified as instructors when constructing the mscapes. Our approach links with contemporary research identifying how photographs can be “active players in the construction of a range of different kinds of geographical knowledge” (Rose, 2008), and seeks to encourage personal reflection by our students through active learning (Healey, 2005).

The fieldcourse was assessed using a group presentation, an individual traditional field notebook, and a student-led project report consisting of a group mediascape and a supporting individual reflective essay.

Evaluation

An ongoing programme of evaluation is being carried out throughout the development of mobile approaches and strategies in the Department of Geography at Leicester, in which the consideration of aligned assessment to cement learning is one theme. In this paper, data from students’ reflective diaries, a focus group, and staff reflections on the style and content of the group mediascapes are used to form a view on the novel assessment approaches we adopted. In particular, we investigate the degree to which the assessment regime was able, through technological affordances, to convey cognitive, affective, conative and psychomotor aspects of the students’ experiences.

Findings

Initial results at the end of the assignment period for the field trip course demonstrated a variety of advantages of incorporating ICT within the assessment process, in addition to the initial field learning. We focus specifically on assessment-related aspects in this paper.

Firstly, students were clear that the mscape assignment allowed a fuller expression of their ideas than text alone. The reflective essay and mscape were considered integral to each other by staff and students alike; together they allowed considerably more information content to be presented. Further, as the student quotes below indicate, the materials incorporated clearly show both cognitive and affective aspects of learning:

“I definitely think the Mscape was better than doing a whole essay on like the area of the Monto, mainly because in the Mscape you could put across more emotional things … like you could portray things that you could never be able to write, like a picture speaks a thousand words … It was just a lot more emotive and just easier to like show what you felt in the area in an Mscape.”

“To support the amount of information we had in the Mscape in an essay, it would be epically long.”

“Being able to add in music and like other people talking, the tour guide, that added like an extra dimension, it’s a different way of putting things across that is much better than just reading an essay.”

Linked with this freedom to communicate and to incorporate personal experiences, some students felt liberated at being able to communicate in less formal ways. The context surrounding the quotation below suggests an interpretation in which students appreciate freedom from the pincers of grammatical correctness and are able to include, for example, an “epic track” within a formal assessment; they can dare to play with ideas in new ways.


“It’s less structured as well, like you feel more liberated doing it ... Like in an essay you have to worry about how someone’s going to read it and things like that, whereas here you can just literally put in what you want to put in and not worry.”

This issue of structure can be seen from two perspectives. In their review of curriculum design principles, Meyers & Nulty (2009) summarise material suggesting that students adopting a deep learning approach form highly structured knowledge. The mscape approach clearly does not foster this. However, it does have other advantages that reflect deeper learning consistent with a more aligned module, for example fostering thinking that allows more elaborate, non-linear connectivity in knowledge structures to be explored. To echo Hartnell-Young and Vetere (2008), “While we and the teachers saw value in linear narratives, which we are used to, it is a sobering thought to consider what we, as researchers, missed about learning in the rich content we saw, and what teachers might be missing every day”. The format also allowed the free-thinking venturing of ideas and expression of theories, again an expression of deeper learning and a well-aligned module. We had not foreseen, for example, that students would pose questions back to us as part of their digital mscape materials.

Collating content for the mediascape, which involved making and expressing connections between movement in the field and mental processes, appealed to a wide range of students. Aspects of technical construction, involving psychomotor skills at a different level, had more limited appeal. Even then, within a group context, students still preferred the format to an assignment assessed solely by a traditional essay; the approach offered those with different conative styles spaces within which to act, partly owing to the range of sensory experience afforded by an m-scape during both its construction and use.

“I quite enjoyed doing our Mscape at home, but I think that’s because I am quite sort of like technically minded and I sort of enjoyed seeing how it worked. Whereas I know other people in our group weren’t that fussed at all about how it worked or what. They enjoyed putting the stuff into it, but the actual construction of it they wouldn’t care at all really, and they still wouldn’t be able to know what went into it.”

“Like me, I wouldn’t be able to construct one I don’t think without the help of a lot of other people in the group, but then I’d still prefer to do an Mscape as opposed to write an essay. So even though I might not be able to make a great one, I’d still prefer yeah, I’d still prefer to do it.”

“Beats doing an essay any day to be honest.”

Conclusions

In this case study, students themselves used mobile devices to participate in building further layers of evidence on research themes related, for example, to the material rebuilding of Dublin’s built environment (issues of urban regeneration and gentrification), cultural representations of Irishness in the city, and connections to key nodal points within the Irish diaspora. While encouraging students to visualise and synthesise intellectually challenging theoretical concepts about the relationality of urban space (Amin and Thrift, 2002) in new, easily accessible ways, the work also introduced human geography undergraduates to new-generation digital technologies.

Implicitly, in the process of building their m-scapes, students recorded their own learning; pedagogically, the approach therefore not only trained students in a leading-edge research methodology but also provided a means of recording reflective accounts of their fieldwork experiences. That is, we deliberately echoed the mode of the student learning experience, and the many aspects of learning and meaning constructed by the students’ activities, within the assessment process. The student feedback firmly suggests that elements of deeper learning, a sign of a well-aligned module, were fostered by this novel assessment approach. To date little has been written about assessment and alignment in the context of mobile learning, either in geography or beyond. These early results suggest that how we align modules in the context of mobile learning deserves further exploration.


Acknowledgements

This work was financially supported by the University of Leicester New Teaching Initiative Fund. Our appreciation also goes to the students who contributed their opinions via a focus group.

References

• Amin A. and Thrift N. 2002 Cities: Reimagining the Urban, Polity Press, Cambridge
• Biggs J. 2003 Teaching for Quality Learning at University, Society for Research into Higher Education and Open University Press, Buckingham
• Gibbs G. 1999 Using assessment strategically to change the way students learn. In Brown S. A. and Glasner A. (Eds.) Assessment Matters in Higher Education: Choosing and Using Diverse Approaches, Society for Research into Higher Education and Open University Press, London, 41-52
• Hartnell-Young E. and Vetere F. 2008 A means of personalising learning: incorporating old and new literacies in the curriculum with mobile phones, The Curriculum Journal, 19, 283-292
• Healey M. 2005 Linking research and teaching to benefit student learning, Journal of Geography in Higher Education, 29, 2, 183-201
• Lynch K., Bednarz B., Boxall J., Chalmers L., France D. and Kesby J. 2008 E-Learning for Geography’s Teaching and Learning Spaces, Journal of Geography in Higher Education, 32, 1, 135-149
• Meyers N. M. and Nulty D. D. 2009 How to use (five) curriculum design principles to align authentic learning environments, assessments, students’ approaches to thinking and learning outcomes, Assessment & Evaluation in Higher Education, 34, 5, 565-577
• Reeves T. C. 2006 How do you know they are learning? The importance of alignment in higher education, International Journal of Learning Technology, 2, 4, 294-309
• Rose G. 2008 Using photographs as illustrations in human geography, Journal of Geography in Higher Education, 32, 1, 151-160

Claire Jarvis chj2@le.ac.uk
Jennifer Dickie jd92@le.ac.uk
Gavin Brown gpb10@le.ac.uk

GEES Photo Competition 2009/10
David Middlemiss
“Driving through lakes”
Salar de Uyuni, Bolivia, 27th March 2010




The great NUS feedback amnesty

The following article is taken from the National Union of Students (NUS) quarterly journal (volume 1), entitled “The Great NUS Feedback Amnesty”, written by Aaron Porter, President of the NUS. Further NUS resources on feedback can be accessed at http://www.officeronline.co.uk//education/articles/275707.aspx

It is clear that feedback and assessment are a fundamental part of the learning process. Not only do they enable students to develop and shape their learning, but they can also foster greater levels of self-esteem and motivation. However, research results, such as those coming from the National Student Survey (NSS), the Higher Education Academy’s Postgraduate Taught Experience Survey and NUS’s own Student Experience Report, have all shown that poor assessment feedback procedures are a huge worry for students and, in some cases, have a negative impact on learning.

The NUS has been addressing this issue over the last year. Our members have been vital in this work, helping to create ten principles of good feedback practice and informing us of the real-life experiences that their members are facing. Our principles are as follows.

NUS believes that feedback:


1. Should be for learning, not just of learning
Feedback should be used primarily as a learning tool, and therefore positioned for learning rather than as a measure of learning.

2. Should be a continuous process
Rather than a one-off event after assessment, feedback should be part of continuous guided learning and an integral part of the learning experience.

3. Should be timely
Feedback should be provided in a timely manner, allowing students to apply it to future learning and assessments. This timeframe needs to be communicated to students.

4. Should relate to clear criteria
Objectives for assessment and grade criteria need to be clearly communicated to, and fully understood by, students. Subsequent feedback should be provided primarily in relation to these.

5. Should be constructive
If feedback is to be constructive it needs to be concise, focused and meaningful to feed forward, highlighting what is going well and what can be improved.

6. Should be legible and clear
Feedback should be written in plain language so it can be easily understood by all students, enabling them to engage with it and support future learning.

7. Should be provided on exams
Exams make up a high proportion of assessment, and students should receive feedback on how well they did and how they could improve next time.

8. Should include self-assessment and peer-to-peer feedback
Feedback from peers and self-assessment practices can play a powerful role in learning by encouraging reassessment of personal beliefs and interpretations.

9. Should be accessible to all students
Not all students are full-time and campus-based, so universities should utilise different technologies to ensure all students have easy access to their feedback.

10. Should be flexible and suited to students’ needs
Students learn in different ways, and therefore feedback is not ‘one size fits all’. Within reason, students should be able to request feedback in various formats depending on their needs.



Carol Ekinsmyth
Department of Geography, University of Portsmouth

Reflections on using digital audio to give assessment feedback

Abstract

This paper shares the experiences of a group of geography department staff at the University of Portsmouth who have experimented with digital audio assessment feedback. In particular, it aims to indicate what the method was like to use, whether its benefits justified its costs, and whether it was able to deliver superior feedback vis-à-vis traditional written feedback methods. There is some literature on the benefits of audio feedback, but most of it focuses on the student experience and perspective. This article, after providing some justification for adopting the audio method, considers its advantages from the perspective of the teacher and begins to address the ways in which the audio method might most effectively be used.

Introduction

There is consensus in the teaching and learning literature that we need to try harder to break down dissonances between teacher and learner knowledges and expectations. To put it simply and metaphorically, teachers and learners are not always (or perhaps even often) singing from the same hymn sheet, communicating in the same language, or even engaging in learning activity for the same purposes. The more effective sharing between teachers and learners of integrated knowledge structures (Kinchin et al., 2008) and the pursuit of ways to enhance pedagogic resonance (Trigwell and Shale, 2004) are an essential project for the teaching professions. Trigwell and Shale (2004) explain pedagogic resonance thus:

“It is the quality of awareness in collaborative meaning-making with students that defines the quality of a teacher’s response to the teaching situation. It is this evoked awareness – the dynamic, reciprocal, fluid engagement with students – and related action…. This evoked or relational awareness/action is what we call pedagogic resonance.” (p. 532, paraphrased)

Effective communication between teacher and learner is vital in achieving this sharing and relational awareness. Kinchin et al. (2008) argue for:

“… a cycle of teaching and learning that promotes expert understanding… For this to happen, the teacher must be prepared to share his/her expert knowledge structure with his/her students, and to support a dialogue that will help the students navigate their personalised journeys from novice to expert.” (p. 101)

In a similar vein, Land (2008), acknowledging Perkins (2006), identifies the need to recognise the ‘games of enquiry we play’. He says that disciplines “are more than bundles of concepts. They have their own characteristic epistemes”, and goes on to argue the need “for students to recognise the underlying episteme or game and develop epistemic fluency” (personal address given at the University of Portsmouth, December 2008).

I am sure that this sharing of expert knowledge structures or underlying epistemes is a goal that most teachers hold and attempt, but the practicalities of doing this effectively within the time-pressured context of contemporary Higher Education are challenging. New technology, especially the e-learning agenda, might be expected to offer some solutions and new possibilities, but often the reverse situation is bemoaned by teachers, who complain that technological solutions, mediated through a Virtual Learning Environment (VLE) and other electronic media, seem to take them further from the goal of communication, dialogue and the sharing of knowledge structures with students. Encouraged by the suggestion that it is uninspired use of these technologies, rather than the technologies themselves, that limits possibilities, I undertook a small-scale experiment with digital audio technology to explore whether audio could give students assignment feedback that would provide them with a richer and more meaningful insight into our expert knowledge structures and ways of thinking than traditional written feedback. In particular, I was concerned to limit the possibilities of students’ “non-learning outcomes” (Kinchin et al., 2008) and to provide feedback that would enable transformative learning or, in the words of Meyer and Land (2005), feedback that could help students through “stuck places”.

Those who have gone before me in using digital audio feedback have mostly reported the advantages of the method from the students’ perspective (see, for example, France and Wheeler, 2007). Accordingly, in this article I aim to provide a candid account of staff experiences of replacing written with audio feedback.

Methods

This research is ongoing. So far, digital audio files have been sent to 48 Level 3 students and 24 Level 1 students across three modules, providing spoken commentary on their written assignments. In most cases, the audio feedback was accompanied by conventional marking proformas, which were handed to students after the audio feedback had been received. Students were sent voicemail recordings (via Wimba Voicemail through Blackboard) which ranged in length between 3 and 8 minutes. The Level 3 students were not given their mark in these recordings and had to wait for their written feedback to learn it; the Level 1 students were given their mark at the end of the recordings. Emphasis in the audio recordings was placed upon feed-forward; feedback about how the work measured up against the marking criteria was mostly communicated through the written sheets. All students were aware that we were trialling this audio method and that they would subsequently be asked to help with the research (response rates were 69% for Level 3 students and 96% for Level 1 students). Responses were elicited through voicemail (audio) feedback on the feedback, focus groups and a questionnaire. Three academic staff members were involved in the experiment, one of whom sent over 50 audio files to students and is able to comment on the time-efficiency of the method. (Please contact the author for more detail on student responses.)

The physical handling of the task of recording and distributing audio feedback was greatly assisted by Wimba Classroom software (and, within this, the Wimba Voicemail facility). This was being trialled at the University of Portsmouth and had been appended to the Blackboard VLE (Virtual Learning Environment). It meant that the only equipment needed for recording the feedback was a headset, though it also meant that recording could only take place where there was access to the internet or a university-networked computer. Once recorded, the file was sent to the student within the module VLE, a straightforward process because student email details were readily available. An additional advantage is that the Wimba software creates a fully dated and timed automatic archive of all voicemail messages, which can be readily downloaded into digital files for backup elsewhere. Once students received their audio feedback, they were encouraged to provide audio feedback themselves to the marker by sending a voicemail message back. This could be done at the click of a button and provided a very good research method for eliciting student experience.

Experiences

In common with others who have experimented with digital audio technology for assessment feedback, this study found that students were very positive about the method. The key aspects that students valued in audio feedback were: its more detailed nature; its more personal nature; its greater clarity; its potential for feed-forward; the feeling that the spoken voice conveys more than written words; the benefit that their understanding does not rely on decoding poor handwriting; and the perception that it has a 'more constructive nature'. Some students also had criticisms: the physical separation of the feedback from the script; its personal nature sometimes being too uncomfortable or upsetting; the inability to answer back (they explained that listening to criticism made them feel like arguing back); and the concern that it does not replace the (obviously valued) written feedback¹. In particular, in the context of the discussion earlier in this article, students did seem to experience greater clarity and understanding from the audio feedback vis-à-vis written feedback received earlier in their studies. The following quote from a Level 3 male student sums this up very well:

PLANET ISSUE 23

Reflections on using digital audio to give assessment feedback
Carol Ekinsmyth


"I thought it was the best feedback I'd had at University full stop. You could argue that you could go to the Lecturer and talk through your essay but in reality … unless you're uber-keen that's not going to happen and I found that the comments made in the audio feedback … were extremely useful. I got a good mark anyway and I got a good mark that I was happy with, but then the feedback provided really good constructive criticism about how I could improve my work to boost it up even more … and I'm using it now in my gender and development essay" (Level 3 focus group).

It is academic staff experiences, mine and those of two of my colleagues, that are the main focus of this article. My first difficulty, as project leader, was persuading some colleagues to get involved in the project. There was reluctance amongst colleagues despite a one-day staff development event provided by someone who had experience of audio feedback. The general feeling was that there was little need to change the way feedback was provided, because the written method had worked for them in the past and time was too pressing to engage in non-essential experimentation. Central to overcoming this reluctance, I feel, is a re-evaluation amongst practitioners and teachers of the goals, types, possibilities and importance of feedback. Here, the words of Brown (2007) need 'coalface' thought, evaluation, discussion and action:

"I believe that concentrating on giving students detailed and developmental formative feedback is the single most useful thing we can do for our students…" (Brown, 2007)

A culture shift with regard to feedback is necessary. Teaching and learning experts uphold this mantra, but many academic staff remain unconvinced, unwilling or unable (due to time pressures) to introduce change.

Amongst the three of us who did experiment with audio feedback, a recorded, two-hour discussion of our experiences and thoughts focused upon issues of feasibility, practicality, types of feedback, appropriateness, possible applications, context limitations and litigation, language, and skill acquisition. Many of these themes overlapped.

In a mass Higher Education environment, it is not surprising that issues of cost-benefit arose first and foremost. There was a feeling that the greatest potential of the audio medium was to provide students with individual, personal, explicit and carefully considered feed-forward. As a method of summative feedback, we felt that written feedback forms were time-efficient; easily cross-referenced to marking criteria and generic grade-point criteria; an easily understood (handwriting dependent) and visual form of reference; and, as was discovered in student focus groups, a method liked and valued by students.

All focus group participants agreed that the process of experimenting with audio feedback had made us think more deeply and carefully about what feedback is, what it can be used for, how we as individuals treat its production, what students need from it and how it can be most effectively delivered. In particular, the experience had revealed for all of us the great deficiencies of the written method for formative feed-forward (the most important role of feedback), as the slow speed of writing greatly limited the amount of detail that we were able or willing to provide. The great benefit of audio feedback is, therefore, that minute-for-minute of time spent, the spoken voice is able to provide much more detail and information than the written script. We were unanimous that audio feedback is highly valuable if the goal of the feedback is formative, and if the feedback provider is willing to use the full potential of the medium to explain and develop their advice to students carefully, attempting to provide students with a window into their 'expert knowledge structure' and the underlying 'games' of disciplinary endeavour and discourse. For this, we agreed, audio feedback is vastly superior to written forms because it is quicker. We felt, too, having tried it, that it worked for these goals, and there is a great deal of evidence in the student responses that it works for students too.

Implicit in the above, therefore, is the belief that written feedback proformas and comments serve a different purpose from audio methods. For summative feedback, there was a strong belief that written proformas were best. They are quick to produce and easy to understand but, perhaps most importantly, they have an in-built language control: feedback language is standardised, criterion-referenced and 'safe'. There was concern amongst staff that the necessarily freer language of audio methods might introduce a problem with regard to possible student complaint and litigation. This made staff wary when recording their audio feedback. As a way of overcoming this, it was felt that a strict separation of purpose for the written (summative) and audio (formative) methods, explained to students in a handbook, might be a sensible way forward. This separation of purpose led us to discuss the need to identify within the curriculum the appropriate or most effective points at which audio (formative) feedback might be used. This is another avenue for further research and reflection. The need to restrict the use of audio feedback was a function, we decided, of the time-consuming nature of providing carefully created and advice-rich feedback, and of the hunch (backed up by student comment) that too much audio feedback would decrease its value to students. There are disadvantages to listening to an audio file that lasts several minutes, and perhaps students would cease to listen carefully if they received too many of them.

Conclusions

Digital audio feedback is excellent for formative feedback and feed-forward, but perhaps not so useful (or necessary) for summative feedback. Despite the possibilities it offers, a key problem might be persuading academic staff to use the method, as this involves breaking through teachers' established economies of practice. Central to acceptance is a re-evaluation amongst teachers of the goals, possibilities and importance of feed-forward. Not surprisingly, staff involved in this research were more cautious than students about the cost-benefits of audio feedback, but all recognised great strengths and potential. For summative feedback, however, neither staff nor students saw audio feedback as a replacement for rule-governed written feedback.

Students in particular felt that audio feedback provided a much more detailed and richer account of the strengths and weaknesses of their work, and they felt that hearing feedback was more effective and memorable than reading it. However, both staff and students felt that sparing use of audio feedback was important in this respect. Thus the judicious use of the audio medium at the most appropriate moments in the curriculum or degree programme is recommended. Staff also felt the need to be very careful about language and tone in audio feedback, and felt that a well-publicised (to students) separation of intent between written feedback and audio feedback would be useful.

These conclusions are based upon ongoing experimentation with audio feedback and apply only to individual student feedback. We have not yet experimented with the utility and popularity of this method for generic group-based feedback, or for feedback on types of assessment that differ from the traditional written assignment.

References

• Brown, S. 2007 Feedback and feed-forward, Centre for Bioscience Bulletin 22, Higher Education Academy (HEA) Subject Centre for Bioscience
• France, D. and Wheeler, A. 2007 Reflections on using podcasting for student feedback, Planet 18, 3-5
• Kinchin, I., Lygo-Baker, S. and Hay, S. 2008 Universities as centres of non-learning, Studies in Higher Education 33, 1, 89-103
• Land, R. 2008 Personal address given at University of Portsmouth
• Meyer, J. and Land, R. 2005 Threshold concepts and troublesome knowledge (2): Epistemological considerations and a conceptual framework for teaching and learning, Higher Education 49, 373-388
• Perkins, D. 2006 Constructivism and troublesome knowledge. In Meyer, J. and Land, R. (Eds.) Overcoming Barriers to Student Understanding: Threshold Concepts and Troublesome Knowledge, Routledge, London
• Trigwell, K. and Shale, S. 2004 Student learning and the scholarship of university teaching, Studies in Higher Education 29, 4, 523-536

Carol Ekinsmyth carol.ekinsmyth@port.ac.uk



Mary Dengler¹ | Diana Williams²
1: Environmental Geography, Deputy Director, MSc Sustainability & Management, Royal Holloway, University of London
2: Academic Development Services, Royal Holloway, University of London

Using an individual volunteer project to provide an innovative learning and assessment experience in the field of sustainability

Abstract

Sustainability is a relatively young field which, in the past decade, has seen increased media coverage, employment opportunities and student interest. 'Principles of Environmental Sustainability' is a Masters-level module whose innovative curriculum delivery features a complement of active learning methods and constructively aligned assessments. These are designed to engage students, through practice, in applying theories that address the challenges of a societal shift to sustainability. This paper outlines how an individual volunteer project provides a learning environment that encourages both the social and academic integration of a diverse student group. Details of the assessment strategy, and of methods for providing formative feedback through self and peer evaluation, are explained.

Introduction

Student satisfaction with assessment, and in particular with feedback that supports learning development, continues to be reported as significantly less positive than satisfaction with other areas of the student experience. In response (as outlined on pp. 72-73 of this edition of Planet), the National Union of Students (NUS) has published ten principles of good feedback practice (NUS, 2008) that aim to make assessment fundamental 'to' and 'for' learning; these include points about timely and continuous feedback with opportunities for self and peer evaluation. Importantly, the early student experience of assessment has been found to be significant in student attrition. Krause (2005, p. 60) highlights that students "who tend to keep to themselves at university are more likely to consider dropping out than those who work with other students on assignments and make contact with their peers out of class — whether online or face-to-face". This paper outlines how an assignment based on an individual volunteer project offers staff and peers on a taught Masters course the opportunity for collaborative approaches to assessment.

Course details

'Principles of Environmental Sustainability' is one of two core modules for the MSc in Sustainability and Management, a cross-faculty, full-time, one-year programme jointly run by the School of Management and the Department of Geography at Royal Holloway, University of London. Unlike the other core modules, which run in the autumn term, 'Principles' consists of 20 three-hour sessions convened across both terms. The module explores the linkages between environmental, socio-political and economic issues in modern society. In the rapidly evolving field of sustainability, a key aim is to equip students with the analytical skills and ability to apply theories to issues that they will encounter in the future (see Box 1 for the learning outcomes of the course). By the end of the module, students are expected to be knowledgeable about key theories pertaining to environmental governance for sustainability, and to understand how these theories are relevant for developing practical approaches to managing contrasting environmental and human resources.


Course Learning Outcomes

• Identify the scientific, legal and political basis for sustainable governance of resources
• Assimilate and critically evaluate current research, and demonstrate originality of interpretation in written and oral arguments
• Apply principles of governance for sustainability to analyse how society interacts with the environment, and the effects on environmental processes and functions
• Identify and devise different approaches to governance for sustainability suitable for a range of resources (water, air, land, biodiversity) at varied spatial scales (local, regional, national, international) in different locations

Box 1: Course learning outcomes.

Curriculum

The curriculum was designed to include a range of active learning methods and constructively aligned (Biggs, 2003) assessments which would engage students, through practice, in applying theories that address the challenges of a societal shift to sustainability. The active learning curriculum includes case studies, interactive role-play, inclusion of different socio-cultural contexts, integration of student-selected examples, an urban sustainability field trip, an online discussion forum and the individual volunteer project. These were designed to enhance student learning and employability, and to integrate diverse perspectives.

In 2005 the assessment was amended from 50/50 coursework/exam to 100% coursework; the module contributes 25% to the overall programme assessment. The aim was to align assessment methods with the active learning delivery (see Dengler, 2008). The assessments comprise a diversified portfolio of coursework assignments: two essays (60%), a book review (5%), a field trip report (5%), a reflective essay (10%) and the individual volunteer project report (20%).

Individual student volunteering project

The individual volunteer project is an assignment (see Box 2) that requires students to engage actively with a stakeholder group that is central to the practice of sustainability in society. Most of the students have a first degree in business and many have never volunteered before. This is particularly true of international students, who normally constitute more than 60% of the class. Each student is required to volunteer for a minimum of 24 hours with a non-governmental organisation (NGO) of personal interest that pertains to an aspect of sustainability. Many students exceed this minimum requirement.

Individual volunteer project assignment

• Complete risk assessment
• 250-word summary of the project
• Notebook log
• Formative presentation to peers
• Project report (2000-2500 words)
• A letter of reference from the NGO
• A volunteer hours record sheet

Box 2: Individual volunteer project assignment.

Students are required to identify their own organisation independently. Part of their learning process is to explore the different ways that the NGO can benefit from an aspect of sustainability in the community, and to find a project of personal interest; this also caters for the diversity amongst the students. This individualisation of the project provides further pedagogic benefit for enhancing student learning, as students have the opportunity to learn about other projects when their peers provide periodic updates on their work through presentations. The originality and personalisation of the assignment mean that it would be extremely difficult to plagiarise the final written report, making the assignment a constructive counter-plagiarism measure (Carroll, 2007).

Individual project report

When selecting an organisation, and when engaged in the volunteer work, students are asked to reflect on how their practical work relates to theories explored in class; ultimately they submit a 2000-2500 word report focusing on how their experiences relate to class themes. In the report, students are asked to focus on reflections about what they have learnt and how their volunteer experience relates to wider debates about governance for sustainability, including debates about social and environmental justice (see Box 3). There is a clear expectation that the work is informed by, and explicitly linked with, the relevant literature.

Students are advised that their individual project report should include:

1. a brief account of the volunteer work
2. more extensive reflection on their experiences, what was learned, and how the volunteer work relates to wider debates on governance for sustainability, including debates about social and environmental justice
3. a letter of reference from their supervisor at the volunteer organisation
4. a record card of volunteer hours, signed by student and supervisor
5. a volunteer log, including an ongoing record of activities and reflections

Box 3: Project report checklist.

Formative feedback

Research consistently highlights the immensely positive effect of formative feedback on student learning (Black and Wiliam, 1998). For feedback to be effective in helping students to improve in the future, 'feed-forward' needs to be timely, continuous and incremental (Hughes and Boyle, 2005). During the project, students are required to undertake discrete formative tasks that not only demonstrate progress but also offer the opportunity to receive prompt feedback that will benefit their learning and contribute to their final assessment.

Risk assessment: Before volunteering, students complete a risk assessment, which is supported through a group tutorial. This task proved challenging for many students to complete without guidance.

Project summary: A 250-word project summary must be submitted by the middle of term one. Formative comments are returned to students within two weeks.

Interim presentation of ongoing work: At the end of term one, students make a five-minute presentation about ongoing work on their volunteer project. This provides feedback to the students about their own project but, crucially, allows them to learn from each other. Astin's findings (1996) identify peer involvement in collaborative activities as a strong factor in learning.

Class discussion forum: Students can discuss aspects of the volunteer project on the class discussion forum on Moodle. This use of virtual space is an important element in helping to cultivate a sense of community amongst the cohort and helps meet the needs of a range of learners. Quieter contributors, who may include international, Asian-origin and female students, initially contribute more online than through speaking in class.

Notebook log: Keeping a volunteer log is a component of the assignment that helps students to chart the progress of their activities and their reflections on them. More generally, this ongoing writing exercise helps students to improve their written skills, which is particularly important for some international students for whom English is a second or third language. This further helps students to produce the final summative report.

Final formative presentation: At the end of term two, students make a five-minute presentation about their completed volunteer project in advance of preparing their summative reports, which are submitted at the beginning of term three. The volunteer project was designed to engage with NGOs, and in 2008 the NGOs' representatives were invited to attend the presentations.

Student evaluation

While students can be sceptical initially, by the end of the year many mention the volunteer project as one of their favourite aspects of the module. Students note how they have learned about the challenges of putting sustainability into practice at a local scale.

"What we were studying was not only in books but in reality as well."

For international students, the volunteer project provides an opportunity to interact with local British residents, establishing links to the wider community and enabling them to improve their spoken English. Many of the Chinese students comment that they enjoyed meeting people in the local community, who were very welcoming, and that people in England like to take breaks for tea, which they found to be surprisingly similar to China.

"I found the volunteering work also can help me to meet many people and make friends with them, such as my manager and some colleagues and even some familiar customers."

Students reflect that their home countries do not have a parallel volunteer organisation infrastructure and that, as a result of their projects, they are considering how charity shops or recycling programmes could be transferred to their home countries.

"I did not know what an NGO was before [and] now I can see how they offer many benefits to society. I think that China should have NGOs and I would like to be involved in volunteering when I return [to China]."

Many of the students seek employment in the corporate sector and, increasingly, NGOs are partnering with corporations. The volunteer project provides all students with the opportunity to understand more about how charitable organisations work, and may encourage them to further linkages between corporations and NGOs in the future.

"The compulsory individual volunteer project at Christian Aid UK was an exciting experience for me as I was able to experience governance in practice."

In 2007-2008, 34 students exceeded the minimum requirement and the class total was 986 volunteer hours. In 2008-2009, 25 students volunteered a total of 1024 hours.

Future directions

The volunteer project component of the course has been further developed through funding from the College's Research & Enterprise unit, which supported a 'Celebration of Sustainability Volunteering' event in spring 2009 at which short-listed students presented their projects to compete for prizes. All students contributed a poster of their project to the event and competed for a best-poster prize in thematic categories. Judges were drawn from local companies and from supervisors at the volunteer organisations. Members of the wider college community and alumni were invited, giving students networking opportunities with the local community.

Summary

The individual community volunteer project engages students actively with an issue of sustainability, encourages reflection on that involvement, and provides an opportunity to share the experience and to learn about diverse examples from their peers. As a result of the project, students have an enhanced understanding of how an NGO works, preparing them to be involved in future partnerships. Integration of this activity into the curriculum offers practical learning benefits, enhances students’ transferable skills and improves their employability. At a minimum, each of the MSc graduates can add a volunteer experience to their CV. Because students have been asked to reflect on how the NGO relates to their chosen Masters field, they are also better positioned to respond to job interview questions that draw linkages between sustainability in theory and practice.

While integrating a volunteer project into all degree programmes is not practical, the transferable lesson from this project is that innovative methods of teaching and assessment can be integrated, with pedagogic benefit, to enhance student learning. In summary:

“The compulsory individual volunteer project at Christian Aid UK was an exciting experience for me as I was able to experience governance in practice.”

‘I emphasise that the students can each contribute unique and valuable perspectives from their varied life experiences. This approach fosters students to respect others and themselves as people whose ideas matter and can be problem solvers for the shared issues of sustainability.’

Dr Mary Dengler

PLANET ISSUE 23


Using an individual volunteer project to provide an innovative learning and assessment experience in the field of Sustainability

Mary Dengler | Diana Williams

References

• Astin A. W. 1996 Involvement in Learning Revisited: Lessons We Have Learned, Journal of College Student Development, 37, 2, 123–134
• Biggs J. 2003 Aligning Teaching and Assessment to Curriculum Objectives, Imaginative Curriculum Project, LTSN Generic Centre
• Black P. and Wiliam D. 1998 Assessment and Classroom Learning, Assessment in Education, 5, 1, 7–74
• Carroll J. 2007 A Handbook for Deterring Plagiarism in Higher Education, Second Edition, Oxford Centre for Staff and Learning Development, Oxford
• Dengler M. 2008 Classroom Active Learning Complemented by an Online Discussion Forum to Teach Sustainability, Journal of Geography in Higher Education, 32, 3, 481–494
• Hughes P. and Boyle A. 2005 Assessment in the Earth Sciences, Environmental Sciences and Environmental Studies, GEES Subject Centre Learning and Teaching Guide, http://www.gees.ac.uk/pubs/guides/assess/geesassesment.pdf Accessed 21 June 2010
• Krause K. 2005 Serious thoughts about dropping out in first year: trends, patterns and implications for higher education, Studies in Learning, Evaluation Innovation and Development, 2, 3, 55–68
• NUS 2008 The Great NUS Feedback Amnesty – Briefing Paper, http://resource.nusonline.co.uk/media/resource/2008-Feedback_Amnesty_Briefing_Paper1.pdf Accessed 21 June 2010

Mary Dengler mary.dengler@rhul.ac.uk
Diana Williams diana.williams@rhul.ac.uk

GEES Photo Competition 2009/10


Sam Meyrick
“A Slice of the Big Apple”
Times Square – New York City
3rd August 2005



Alin Dobrea

Student Learning and Teaching Network

GEES Assessment for Learning Conference held at the University of Manchester on 22 June 2009

The Student Learning and Teaching Network is an informal community of students actively engaged in learning and teaching. It is coordinated by a committee of student volunteers, originally students engaging with Centres for Excellence in Teaching and Learning (CETLs), who have been working together since March 2006. The Network’s core aim is to promote students as valid and active members of learning communities.

The Geography, Earth and Environmental Sciences (GEES) Subject Centre invited members of the Network to participate in a panel session as part of its Assessment for Learning conference in June 2009. The following are the reflections of one of the students involved, who studies Advertising and Marketing Communications at the University of Bedfordshire.

“I am a student who will be starting a work placement with the Bridges CETL in September 2009 as a Student Liaison Officer. On 22 June I attended the GEES Assessment for Learning Conference as a representative of the University of Bedfordshire. I was interested to understand how assessment and feedback is perceived by academia.

I enjoyed Aaron Porter’s keynote session and was surprised to hear that the majority of students, 57% (NSS, 2008), are not satisfied with the standard of feedback they receive because of ambiguity, lateness or negativity. As an Advertising student I understand that effective communication should be at the forefront of a successful relationship between students and staff, particularly when it comes to the feedback that students receive for assignments.

It was interesting to hear different perspectives on some of the topics relating to assessment and feedback. The conference emphasised the importance of effective feedback and I know from my own experience that good feedback can be invaluable and help my development as a student and an individual. As a student I was able to reflect on my own experiences for the panel session.

Some of the key ideas which emerged from the discussion were:

• The best feedback is achieved through dialogue between students and staff, and between students and peers;
• Feedback should be constructive, to enable students to develop;
• Courses should be designed to enable students to receive and act upon feedback throughout (rather than at the end of) a module.

It was a pleasure to be given the opportunity to attend and contribute to the conference. I particularly enjoyed discussing issues with staff about our understanding of assessment for learning and ways in which assessment for learning might develop for students in the future.”

The Network welcomes all students and staff with a passion for learning and teaching to get involved through our website http://studentlandtnetwork.ning.com/.



GEES Subject Centre website resources

Download and share with colleagues and students

Start here - http://gees.ac.uk/pubs/start/start.htm
A series of short resource lists for early career lecturers covering a variety of different L&T themes and including handy tips and quick links. Topics include:
• Improving feedback
• Induction ideas
• Assessment options
• Careers advice

Bibliographies - http://gees.ac.uk/pubs/bibs/bibs.htm
Annotated listings of GEES-related and generic resources on various L&T topics. Topics include:
• Employability
• Assessment and learning outcomes
• Education for Sustainable Development (ESD)
• Independent learning

Resource packs - http://www.gees.ac.uk/pubs/pubs.htm
• New and Aspiring Lecturer’s Resource Pack
• One Step Ahead: Enhancement through the GEES Disciplines, a Scottish Perspective

GEES Subject Centre publications
The GEES Subject Centre produces a variety of publications, including 23 editions of ‘Planet’, ‘GEES guides’ and ‘GEES briefings’. Download from our ‘Publications’ web page at http://www.gees.ac.uk/pubs/pubs.htm. Examples include:
• Designing Effective Fieldwork for the Environmental and Natural Sciences
• Employability within Geography, Earth and Environmental Science
• Assessment in the Earth Sciences & Environmental Sciences and Environmental Studies
• Practical & Laboratory Work in Earth & Environmental Sciences: guide to good practice and helpful resources


