Evaluation and the Pedagogy-Space-Technology Framework

Who are the evaluators?

Analysis of the literature (reflected in the colloquium) finds that the most prolific evaluators are librarians. Library staff members are active managers of their spaces and are constantly engaged with their users or clients. In general, librarians are committed to ongoing evaluation, often with highly developed and standardised instruments. It is worth pointing out, however, that part of the goal in this case is the evaluation of library services, with only a portion relating to issues of space design. Nevertheless, in seeking to understand changing usage patterns, librarians are often in the position of having long-term data upon which to draw. This provides solid empirical evidence for change.

Interesting differences in perspective are evident in the evaluations of space performed by other groups. Some of the papers presented here are authored by space builders (both architects and technologists) who have a vested interest in improving their solutions as briefs become more standardised. Academics who study tertiary teaching practice are also becoming more active, along with a smattering of independent researchers.

When should space be evaluated?

Since the majority of evaluative studies presented in this collection were conducted post occupancy and over a relatively short term, it is not surprising that we can be beguiled into thinking of 'evaluation' only in this narrow sense. However, other views of evaluation were raised during the discussion sessions that find support in the papers.

Deborah Terry (UQ Deputy Vice Chancellor Teaching and Learning) argued strongly for proper evaluation techniques to be applied to projects at the proposal stage in order to facilitate proper competitive funding decisions. This kind of evaluation of space looks to answer not just the question of how we should design and build space, but also where we should build and whether we should build at all. This view found broad agreement, with contributors pointing also to the role of projects such as UQ ACTS (Advanced Concept Teaching Space) and Stanford's Wallenberg Hall as experimental spaces whose ongoing evaluation was designed to inform the institution's decision making regarding pedagogy, space and technology design across a range of projects. Mitchell, Winslett and Howell (Queensland University of Technology) present a comprehensive plan for evaluation in an evolving and experimental space (Lab 2.0) that takes this approach. Both Graves (Griffith University) and Lee (Swinburne) present perspectives on pre-build evaluation, and Andrews and Powell (UQ) sought to illuminate the ways in which issues uncovered in evaluation were directly applied to succeeding projects of the same genre.

Others argued for longer-term evaluation of space to address ongoing environmental concerns. This prompts a broader question as to why mature teaching spaces have not been subject to regular and sustained evaluation in the same way as more closely managed learning spaces such as libraries.

What should be evaluated?

Before moving to the question of methodology, it is instructive to review the goals of the evaluation. There are a variety of motivators for undertaking evaluation. Some appraisals appear to have the goal of validating a newly completed project and, by extension, arguing for the creation of similar spaces. Simpler measures such as head counts and multiple-choice user satisfaction questions are often the mainstay of these surveys. By contrast, research projects or design studies aimed at informing ongoing development typically strive to uncover more detail, both by targeting empirical measures and by probing with open-ended questioning and focus groups.

Usage is, of course, a fundamental measure. It would be a brave or foolish project manager who would argue success in the face of meagre occupancy, so gate numbers will always have a place. However, understanding patterns of use over time is increasingly recognised as useful. Do patterns change hour-by-hour, or shift notably between early, mid and later weeks of semester? Geographic patterns, such as understanding where users have been prior to entering the evaluated space and where they are headed, can also be immensely useful in campus planning. Several of the studies presented attempted to gain a more sophisticated insight into the patterns of use ([…] an example), and this should be seen as an axis of study which is growing in importance.
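Where gate counts are logged electronically, aggregating them into the hour-by-hour and week-by-week profiles described above is straightforward. The following is a minimal sketch, not drawn from any of the papers presented: it assumes a hypothetical CSV log (gate_counts.csv) of timestamped entry counts and a nominal semester start date, both illustrative only.

    import csv
    from collections import defaultdict
    from datetime import datetime, date

    SEMESTER_START = date(2008, 2, 25)  # hypothetical first day of semester

    def load_counts(path):
        """Yield (timestamp, entry_count) pairs from a gate-count log."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield datetime.fromisoformat(row["timestamp"]), int(row["entries"])

    def usage_profile(path):
        """Sum entries by (semester week, hour of day) to expose usage patterns."""
        profile = defaultdict(int)
        for ts, entries in load_counts(path):
            week = (ts.date() - SEMESTER_START).days // 7 + 1
            profile[(week, ts.hour)] += entries
        return profile

    if __name__ == "__main__":
        # Print one row per (week, hour) bucket, e.g. "week  3  14:00  212"
        for (week, hour), total in sorted(usage_profile("gate_counts.csv").items()):
            print(f"week {week:2d}  {hour:02d}:00  {total}")

Even this simple bucketing makes visible whether occupancy peaks mid-morning or mid-afternoon, and whether it builds toward assessment weeks, which is the kind of longitudinal insight that one-off head counts cannot provide.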

It is debatable whether satisfaction surveys, while useful in the long term to plot changes over time, are valuable in a short-term or one-off evaluation. Questions such as "How would you rate the facilities of the space? (Poor to Excellent on a 7-point scale)" would seem to offer little in the way of guidance to those planning similar projects. Satisfaction surveys, though commonly aimed at answering the question "should we build more?", do not address continuous improvement. Open-ended questions are more valuable at uncovering issues that can be addressed.

The Pedagogy-Space-Technology framework gives more specific guidance, making explicit the contention that evaluation should be focused on measuring the degree to which the original goals, particularly the defined pedagogic goals, were met. Though overt in the colloquium design, relatively few of the studies presented made clear linkages in evaluation between the goals defined in the pedagogy and the outcomes. The PST framework asks: What types of learning and teaching are observed to take place? What is the evidence? In his summation, Professor Radcliffe (Purdue) asked the same questions in this context: What were we actually trying to achieve? What was the original intent? Surveys and statistics alone are not enough to measure success in this framework, and observational studies are strongly suggested, as we will see below.

Several threads in the forum addressed the desirability of evaluating spaces in terms of (improved) learning outcomes. Gallagher, Pearce and McCormack (Victoria University) argued strongly for this while noting the difficulties inherent in such an evaluation:

    …a successful evaluation of the commons as a site for non-transmission forms of learning may depend to some extent on the success of the whole institution in moving away from transmission models and developing meta-cognition in its students

Learning outcomes are clearly dependent on a significant number of variables beyond the space, and the task of evaluating a space with respect to these outcomes, when so many other contributing factors typically remain uncontrolled, is difficult indeed. While in no way denying that the goal of improving learning outcomes should be paramount, the PST framework takes a step back from trying to evaluate this directly. The goals of the space are defined in terms of fostering particular modes or patterns of teaching and learning. The primary evaluation, therefore, is to determine whether or not such behaviours are observed, and which aspects of the space and technology are seen to enable, encourage and empower these types of teaching and learning activities. The task of determining whether the pedagogy improves student learning outcomes is left to a wider, possibly whole-of-institution-based evaluation.

How is space evaluated?

Virtually all studies presented at the colloquium included user surveys in their actual or planned evaluation regimes. These varied from simple web-based questionnaires to structured interviews and focus groups. Evaluations of teaching spaces most commonly involved small sample sizes and often conducted separate surveys of student and teacher user groups. Learning spaces based on libraries showed the most consistency in surveys, with typically larger sample sizes, though none of the studies presented attempted a statistically rigorous evaluation.

Analysis of open-ended questions remains sketchy, and anecdotal evidence suggests that inordinate weight can be placed on a single comment, particularly if it is pithy, humorous or particularly apt in its expression. These, however, remain the best source for understanding client needs and wants, and are vital to the process of improvement. Several comments indicated that there is much […]
