
PP654 UniSa Freney - Final Report Feb 2010.pdf - Office for ...

Although Freney and Williams' 21 study was limited to design studios, similar situations in which the venue for assessment is not the academic's desk, such as practicals, may also find this approach beneficial.

A similar ALTC project also utilised a new online criteria-based assessment system called ReView. The main focus of ReView is on linking graduate attributes to assessment criteria, and it has a facility for self-assessment whereby students rate their performance on a sliding scale prior to their assessment. Of note is the fact that tutors (the assessors) are not able to view the students' self-assessment until they have made their own assessment, thereby overcoming any prejudice that the self-assessment might induce 22. Class averages for each assessment criterion are also displayed, a feature shared with CAFAS.

An interesting possibility for an innovative assessment practice, based on the digital storage of feedback forms provided by CAFAS, is proposed here for future experimentation. If the same assessment criteria were utilised across all assessment tasks in a course/unit/subject (they must be the same but could be weighted differently), it would be possible to map the development of a student's progress in a particular criterion, in terms of a summative mark and formative comments, across a series of assessment tasks (i.e. the whole course/unit/subject). Thus, rather than creating a new feedback form for each subsequent assessment task, the original feedback form could be appended, thereby clearly displaying the history of comments and marks that a student received in that course.
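As a rough illustration of the proposed appended feedback form, the per-criterion history might be modelled as below. This is a sketch only: the function and field names are hypothetical and do not describe CAFAS's actual data model.

```python
# Sketch of the proposed appended feedback form: one record per assessment
# task is appended under each shared criterion, so the full history of
# summative marks and formative comments accumulates over the course.
# All names here are illustrative assumptions, not part of CAFAS.
from collections import defaultdict

feedback_history = defaultdict(list)  # criterion -> list of task records

def append_feedback(criterion, task, mark, comment):
    """Append one task's mark and comment to the running history
    for a single assessment criterion."""
    feedback_history[criterion].append(
        {"task": task, "mark": mark, "comment": comment}
    )

# Two tasks assessed against the same criterion:
append_feedback("Research", "Assignment 1", 62, "Broaden your sources.")
append_feedback("Research", "Assignment 2", 74, "Much wider reading evident.")

# The appended form now shows the evolution of the student's marks:
marks = [r["mark"] for r in feedback_history["Research"]]
print(marks)  # [62, 74]
```

Because each criterion keeps its whole sequence of records, both the marks and the comments remain visible side by side across tasks, which is what would let an assessor see whether a student is responding to feedback.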
At the culmination of the course, the feedback form, having been appended for each assessment task, would show the evolution of the student's learning and highlight whether they were responding to feedback and improving in each assessment criterion.

There is potential for the Rubric functionality to easily become a "checklist" in which, rather than a limited list of performance levels (typically there are four to seven levels, which roughly correspond to the grade levels), there would be an extensive list of attributes, each carrying a certain number of marks. The academic would "tick" each applicable attribute to communicate what had and had not been addressed in the student's assignment submission. Marks would be tallied automatically, giving an overall score for a particular assessment criterion. This checklist format could become another option for assessing a particular assessment criterion; thus three options could be possible: Slider, Rubric or Checklist.

The issue of the validity of weighted assessment criteria arose during presentations of the system. Colleagues from various disciplines argued that weightings limited their capacity to allocate the final grade that they wanted to award. A rigorous system such as CAFAS, which relies on weighted assessment criteria to calculate an overall grade and mark for the assignment, sometimes tends to override one's intuition regarding the grade a student should be awarded. However, it was suggested to these colleagues that they could overcome this problem by specifying very small weightings for the majority of assessment criteria, reserving the largest weighting, for example 80-90%, for an "overall performance" criterion. This would give them the freedom to award the grade they wanted.

21 Freney, M. & Williams, T. 2007.
22 Thompson, D.G.
2008, "Software as a facilitator of graduate attribute integration and student self-assessment", ATN Assessment Conference 2008: Engaging Students in Assessment, University of South Australia, November 2008, eds Duff, A., Quinn, D., Green, M., Andre, K., Ferris, T. & Copeland, S., Australian Technology Network, South Australia, pp. 234-246.

Computer Aided Feedback & Assessment System (CAFAS)
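The weighted-criteria calculation described above, and the suggested "overall performance" workaround, can be sketched as follows. The weights, scores and function name are illustrative assumptions, not the actual CAFAS implementation.

```python
# Sketch of a weighted-criteria grade calculation of the kind CAFAS is
# described as performing, plus the suggested workaround: give most
# criteria very small weightings and reserve the bulk (e.g. 85%) for an
# "overall performance" criterion, restoring the assessor's discretion.
# All numbers and names are illustrative assumptions.

def overall_mark(scores, weights):
    """Weighted average of per-criterion scores (0-100).
    The weights must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * w for c, w in weights.items())

scores = {"Research": 60, "Presentation": 70, "Overall performance": 85}

# Evenly weighted criteria: the calculated mark can override intuition.
even = {"Research": 1 / 3, "Presentation": 1 / 3, "Overall performance": 1 / 3}

# Workaround: tiny weights on most criteria, 85% on "overall performance".
skewed = {"Research": 0.075, "Presentation": 0.075, "Overall performance": 0.85}

print(round(overall_mark(scores, even), 1))    # mark dominated by all criteria
print(round(overall_mark(scores, skewed), 1))  # mark tracks "overall performance"
```

Under the skewed weighting, the final mark sits close to whatever the assessor enters for "overall performance", which is exactly the freedom the colleagues were asking for, at the cost of weakening the criterion-level rigour the system was designed to provide.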
