Peer Assessment in Experiential Learning: Assessing Tacit and Explicit Skills in Agile Software Engineering Capstone Projects

Fabian Fagerholm, Arto Vihavainen
Department of Computer Science, University of Helsinki
P.O. Box 68 (Gustaf Hällströmin katu 2b)
FI-00014, Finland
fabian.fagerholm@helsinki.fi, arto.vihavainen@cs.helsinki.fi

Abstract—To prepare students for real-life software engineering projects, many higher-education institutions offer courses that simulate working life to varying degrees. As software engineering requires not only technical but also inter- and intrapersonal skills, these skills should also be assessed. Assessing soft skills is challenging, especially when project-based and experiential learning are the primary pedagogical approaches. Previous work suggests that including students in the assessment process can yield a more complete picture of student performance. This paper presents experiences with developing and using a peer assessment framework that provides a 360-degree view of students' project performance. Our framework has been explicitly constructed to accommodate and evaluate tacit skills that are relevant in agile software development. The framework has been evaluated with 18 bachelor's- and 11 master's-level capstone projects, totaling 176 students working in self-organized teams. We found that the framework eases teacher workload and allows a more thorough assessment of students' skills. We suggest including self- and peer assessment in software capstone projects alongside other, more traditional schemes such as productivity metrics, and discuss challenges and opportunities in defining learning goals for tacit and social skills.

Keywords—Peer assessment; assessment metrics; self-assessment; case study; capstone project; experiential learning; project-based learning; tacit skills; teamwork; computer science education; agile software engineering.

I. INTRODUCTION

It is well known that there exists a gap between what engineering students learn and what is expected from them as they graduate [1], [2], [3]. The expectation gap [4] is especially visible in software engineering education, where practices learned while studying may even have to be unlearned later [5]. Among the expected skills are the abilities to read social cues, regulate emotional expression, and engage in constructive dialogue with project stakeholders to discover tacit knowledge.
These so-called soft skills are particularly important in software engineering projects that rely more on informal communication than on document-driven, plan-based approaches. They are important for building and maintaining cohesion in development teams, for working with external teams, and for involving other project participants and stakeholders in the software development process.

The gaps in communication and teamwork skills of new engineers were discussed as early as the 1990s [6], and the most relevant skills required of entry-level IT personnel still include personal attributes such as problem solving, critical and creative thinking, and team and communication skills [7]. Teamwork and communication are also emphasized in the emerging agile methodologies, where interaction between individuals is valued over processes and tools [8]. Ultimately, students should grow into members of their communities of practice [9], adopting the tacit skills required to function in the field. This is a notable challenge for higher education.

Teamwork is usually practiced in several projects in higher education. Perhaps the most notable is the capstone project, which is often the culmination of a degree program. Capstone projects are typically carried out for real customers in as realistic a setting as possible, given the constraints of the educational institution. Capstone projects provide an opportunity to assess higher-order cognitive dimensions of learning as well as affective and skill-based dimensions [10]. Even if the end product of a project is the most valuable deliverable for a customer, the whole project can be a continuous and valuable learning process for the students.

As students direct their activities based on the given assessment criteria [11], the assessment design plays a key role in what students will focus on. In a software engineering capstone project, the assessed skills and knowledge should include: (1) elementary software engineering skills such as requirements analysis, design, development, and validation; (2) tool-related skills such as the use of a version control system, development tools, and process management tools; and (3) process-related skills such as process knowledge and how well a selected process is followed. How the students utilize and benefit from these skills in a teamwork setting is moderated by several tacit, soft, and social skills.

In this paper, we present ongoing work on an assessment framework that can be used as a decision support tool to assess tacit skills together with explicit skills in capstone project environments.
The framework has been built to help focus students' attention on important team-related aspects, and to help teachers assess student performance in capstone projects.

This paper is structured as follows. In Section II, we give an overview of our educational context and the learning objectives of our capstone courses, and discuss our motives for developing a new framework. In Section III, we describe related work on assessment of project-based education as well as self- and peer assessment, and in Section IV, we describe the framework. The evaluation of the framework is discussed in Section V through a multiple case study, and finally, in Section VI, we conclude the paper and outline future work.


II. BACKGROUND

Computer Science studies at the University of Helsinki are divided into a three-year bachelor's degree and a two-year master's degree. The bachelor's degree is a comprehensive computer science degree, which prepares students both for working life and for future studies. There is no "specialization track" within the bachelor's studies: every student takes courses on e.g. mathematics, software engineering, distributed systems, algorithms, and machine learning. If students choose to pursue a master's degree, they have a variety of specialization tracks to choose from. Our focus is on the software engineering specialization track, in which students deepen their understanding of e.g. software processes and quality, agile methodologies and coaching, and software architecture.

A. Capstone Projects

Both the bachelor's and master's degrees contain a capstone project. The bachelor's degree studies culminate in either a 7- or a 14-week Software Engineering Project, during which the students work in 4–5-person teams on a project from e.g. an industry partner or a research group. The 7-week version is a full-time project, where the students are collocated at one of our labs, while the 14-week version is a part-time project and can be partially distributed. Although the students are mentored by staff, they handle all project aspects in a self-organized manner, including project management and setting customer expectations.

The capstone project for the software engineering specialization track is the Software Factory Project [12], [13], which simulates a teamwork environment in contemporary software development organizations. Its design aims to shift responsibility for all aspects of project operation to the student team, in order to ensure that students are exposed to the realities of software development.

The Software Factory Project is similar to the bachelor's-level capstone project, but with some characteristics that make it more challenging. The project usually begins with more ill-defined goals. Part of the purpose is to discover, together with the customer, which software the project should produce and how the software can bring value to the customer and end user. Some of the projects also operate in a distributed environment together with other Software Factory nodes at separate universities.

B. Motivation and Learning Objectives

The motivation for assessing teamwork and tacit skills arose initially during the design and development of the Software Factory Project. Tacit skills are of particular importance in such courses, and thus were set as important learning objectives of the project. One of the challenges was to design an approach to assess not only project deliverables and productivity, but also performance in terms of tacit and team skills.
The lessons learned in the Software Factory were incorporated, together with results from a multi-year improvement effort [14], into the Software Engineering Project.

The main learning objective for both capstone projects was originally defined as "the ability to become a member of a software development team, function as part of it, contribute to its development, and work as part of it towards its current mission or purpose". In addition, the Software Engineering Project has a specific learning objective rubric, which outlines the principal themes of the course and what is required for each level of student assessment. The rubric follows the principles of constructive alignment [15], and covers software development skills as well as management and tool usage.

The effort that students put into learning is heavily determined by the assessment criteria [16], [17]. However, neither the rubric nor the main learning objective supported assessing soft skills. As the main learning objective of the capstone projects consists of a set of distinct sub-objectives, assessment requires that each of them is identified and assessed independently, preferably using a small number of traits that cover the knowledge and skills required to do well in each part of an activity [17].

Our framework is based on the cognitive domain of Bloom's revised taxonomy [18], [19], which provides guidelines for agreeing on assessment and learning objectives for a course. It outlines six levels (remembering, understanding, applying, analyzing, evaluating, and creating), ordered from simpler to more complex; the original idea was that mastering a "higher" category requires mastery of the previous categories.

Based on the six levels defined by the cognitive domain of Bloom's revised taxonomy, we outlined the following team skills on which each participant focuses: presence, activity, eagerness, devotion, contribution, and expert maturity. In addition, the bachelor's-level software engineering projects put additional focus on participant behavior and its influence on the process and the result. A more comprehensive description of these team skills is given in Section IV.

III. RELATED WORK

Several studies have examined self- and peer assessment of teamwork in regular courses, and assessment of teamwork in projects, including capstone-like courses. Here, we briefly present some of the issues examined and results found.

One of the most fundamental questions regarding assessment is that of its purpose.
Naturally, assessment can serve multiple purposes simultaneously: it can help rank students with respect to their performance, allowing selection at different stages of an educational system, and it can provide important feedback to students regarding their study performance. When assessment is tied to specific learning objectives, students' activities can be directed towards those that build knowledge and skills deemed relevant.

Assessment can be used at the systemic level to evaluate learning programs in terms of how well they support achievement of learning outcomes [20]. The nature of capstone projects as comprehensive experiences means that they allow assessing a wide range of abilities; they are indicative of learning program strengths and weaknesses. Analysis of capstone project outcomes can provide valuable insights for improving learning programs, and thus student learning. Payne et al. suggest assessing student readiness for capstone courses in order to gather feedback on both the presence of necessary background knowledge, skills, and dispositions, and the ability to apply them in capstone courses [10]. They outline critical concepts and skills that students must be taught to assure their success in capstone courses, noting that educators and researchers should set up continuous feedback frameworks that could be used to transfer knowledge to core-course faculty on the level of preparation students believe they have for the upcoming capstone experience.

A pertinent question is how to actually assess capstone projects: what is to be assessed, and how? One approach is to map project deliverables and artifacts to general and specific learning outcomes and rubrics, and then assess the deliverables with respect to the rubric, as proposed by Murray et al. [20]. As an example, Murray et al. describe the goals of information systems capstone projects: students should be able to i) understand that projects require collaboration as well as individual effort, ii) participate as contributing members of a development team, iii) apply teamwork skills in development and implementation of a system, iv) demonstrate acknowledgment of and respect for the team members, and v) identify the qualities needed to be an effective leader, and explain the roles of leadership and teamwork in system development and implementation. The artifacts used to evaluate these outcomes include individual reports, peer evaluations, and weekly status forms.

Self- and peer assessment appears to be viewed favorably by many teachers and researchers in terms of how well it includes students in the assessment process. For instance, Fellenz finds that peer evaluation can improve the quality of the students' experience and increase their engagement in the learning task [21]. However, a particular concern in assessing teamwork skills is the accuracy of assessment. Through a review of assessment literature, Van Duzer and McMartin [22] identified two primary types of bias as especially relevant for self-assessment and peer evaluation: self-enhancement, where one's own performance is evaluated unreasonably optimistically, and downward comparison, a general tendency for positive self-bias and negative other-bias. Similar results are reported in many works. For example, Ryan et al. compared peer and self-evaluations of class participation against those of professors [23]. They found that faculty grades tended to be higher than peer grades, and that self-evaluation grades were typically higher than faculty grades. This study used a forced ranking system for students to rank each other, while faculty did not use forced ranking.

Van Duzer and McMartin suggest some approaches to reduce self-enhancement and downward comparison biases [22]. Using language shared by respondents and testers in assessment criteria helps to reduce misinterpretation and thus improves the validity of the assessment process. Correlating self-assessments with scores from multiple raters allows evaluation of instrument reliability.
Designing questions so that they rate past performance, not expected future performance, improves reliability by reducing the effect of downward comparison. Finally, when social comparisons are required, asking respondents to make comparisons with an explicit group of known individuals rather than an abstract group also improves reliability. Qualitative analysis while developing the instrument is necessary to understand the meaning of the assessment to participants. Van Duzer and McMartin developed a process with both quantitative and qualitative parts for improving and tailoring teamwork skill assessment in specific environments. They found a dramatic improvement in sensitivity when applying the process to their own instrument.

A number of approaches and frameworks for self- and peer assessment have been described in the literature. Willey and Freeman report on a tool that facilitates formative assessment via self- and peer assessment [24]. They report that formative feedback encouraged development of teamwork skills, and also discouraged free-riding and sabotage, thus promoting academic honesty. They argue that while self- and peer assessment is often implemented as summative assessment, even better outcomes may be achieved by using it as formative assessment. They observe that the administrative burden of applying self- and peer assessment can often outweigh the perceived benefit. Furthermore, they observe that feedback is often given long after the assessable work has been completed, which means that students' attention may already have shifted to other tasks.

Beyerlein et al. [25] describe an assessment framework for capstone design courses. Their framework is based on a conceptual model of knowledge representation and expertise development, and strives to examine students' performance and growth from several perspectives. They examine growth in personal knowledge and skills applied in problem solving. They examine professional development through goal-driven initiative, competence in problem solving, integrity and professionalism, and ongoing reflection. They also examine team processes and dynamics, as well as productivity, by determining whether team resources are used strategically and whether decisions add real value to the project. They further examine how well students are able to formulate solution requirements, consider stakeholder needs, and formalize these into specifications. Finally, they evaluate deliverables in terms of desired functionality, economic benefits, feasibility of implementation, and favorable impact on society.

Another concern is students' motivation to rate their peers. Friedman et al. found that students who provided categorical ratings (multiple scores on different categories or dimensions) multiple times during a course experienced the lowest motivation to rate their peers, while students who provided holistic ratings (a single score) multiple times reported the highest motivation [26]. We may hypothesize that respondent fatigue plays a role here: a small number of items is less likely to feel overwhelming. The type of item may also be important: describing a particular behavior and asking the respondent to indicate its frequency is usually recommended, an approach used, e.g., in a rating system developed by Clark et al. [27].

Finally, also related to practical concerns is the burden of manual work in collecting and analyzing self- and peer assessment data. Naturally, online questionnaires and semi-automated analysis tools can remove much of this manual work. Some reports exist on complete systems for self- and peer assessment management. The SPARK system, described by Freeman and McKenzie [28], emphasizes fairness in group work assessment and reduced administrative burden through automation. Similarly, the CATME system, described by Ohland et al. [29], provides automation to reduce teacher workload, but places greater emphasis on using behavioral anchors in the assessment itself. The SMARTER system [30] extends CATME and attempts to link educational research with teaching faculty actions to enhance learning of teamwork skills.


IV. FRAMEWORK FOR ASSESSING TACIT SKILLS

Our Framework for Assessing Tacit Skills consists of a questionnaire whose items can be used (e.g. by weighted averaging) to provide assessment decision support for teachers. The framework enables assessment of tacit skills through nine indicators, used for both self- and peer assessment. We categorized the indicators to represent six different tacit skills, and decided that the assessment should impose as little overhead as possible on all participants and thus should be implemented as a short online questionnaire.

The framework factors, questionnaire items, and scales are shown in Table I. The questionnaire, which is filled in by the students, the project coach, and the client, allows rating each student on each questionnaire item. Once the questionnaire has been answered, the answers are exported for further data analysis, where a set of scripts is used to e.g. suggest overall grades based on a given weighting, or to indicate students who have been free-riding.

The questionnaire is structured along six factors, beginning from basic factors and progressing towards higher levels of involvement and skill. The first factor, presence, is a prerequisite for becoming a member of the development team. The activity factor implies that a person is not only present, but also actively involved in the project. Eagerness reflects the attitude that the person takes towards the project: is the person not only active but also taking initiative and displaying a positive desire to get things done. Devotion reflects a deeper level of commitment: the person not only takes the initiative but actually invests effort into carrying out planned tasks. Contribution reflects actual impact on the project, whether in the form of code, documentation, or other deliverables, or in the form of project management, customer communication, or support tasks. Finally, expert maturity reflects an overall assessment of how the person performed in their role. We purposefully left the definition of this factor quite open and broad, in order to allow each individual to assess it according to the specific conditions of each particular project.
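To make the weighting concrete, the sketch below shows one possible shape for such a grade-suggestion script. It is a minimal illustration rather than our production script: the factor weights, the data layout, and the function name are assumptions chosen for the example.

```python
from statistics import mean

# Illustrative factor weights only; any real weighting would be
# tuned by the teacher for the course at hand.
WEIGHTS = {
    "presence": 1.0, "activity": 1.0, "eagerness": 1.0,
    "devotion": 1.5, "contribution": 2.0, "expert_maturity": 1.5,
}

def suggest_grade(ratings):
    """Suggest an overall score (1-5) for one student.

    `ratings` maps a factor name to the list of 1-5 scores that all
    raters (self, peers, coach, client) gave the student on that
    factor; 0 ("I don't know") answers are assumed filtered out.
    """
    total = weight_sum = 0.0
    for factor, weight in WEIGHTS.items():
        scores = ratings.get(factor, [])
        if scores:  # skip factors nobody rated
            total += weight * mean(scores)
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

# Example: ratings collected for one student (self + four peers).
student = {
    "presence": [5, 4, 5, 4, 5], "activity": [4, 4, 5, 3, 4],
    "eagerness": [4, 3, 4, 4, 4], "devotion": [5, 4, 4, 4, 4],
    "contribution": [4, 3, 4, 4, 3], "expert_maturity": [4, 3, 3, 4, 3],
}
print(f"Suggested score: {suggest_grade(student):.2f}")
```

A weighted average keeps the suggestion transparent: the teacher can see exactly how much each factor contributed to a proposed grade before weighing it against other evidence.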
While we appreciate the objectivity and wide coverage of the approach described by Murray et al. [20] and other similarly detailed assessment schemes, we suspect that both students and teachers can quickly be overwhelmed by the amount of effort required to produce and analyze the assessment artifacts, leaving less effort available for project work, formative assessment, and guidance. It also runs counter to the philosophy of agile software development to employ a heavyweight assessment framework: after all, agile projects purposely do not define artifacts to be produced until there is a proven need for them.

Three main criteria were defined for the framework. First, the framework should ease teacher workload. It should function as a support tool for teachers during assessment, and it should support assessment of project-based courses even when teachers cannot constantly observe students' activities. Second, it should allow systematic assessment of students' skills; each factor in the framework can be thought of as building on top of the previous factors. Finally, it should be easy to detect attempted misuse of the framework, so that teachers can be confident in using the results as valid decision support information.

V. EVALUATION

The framework has been evaluated iteratively during its development. It was first evaluated in several projects in the Software Factory, and later also in the Software Engineering Project. In this section, we report on the evaluation procedures and present the most relevant evaluation results. We then discuss the validity and limitations of our evaluation and present results from evaluating the framework from a teacher perspective.

As noted, the motive for assessing tacit skills arose during the design of the Software Factory. We first conducted a pilot project in spring 2010 with 11 students, during which the framework dimensions were developed. The framework was then deployed in 11 consecutive Software Factory projects with a total of 77 students. The evaluation of the framework in the Software Engineering Project started in fall 2011, after which a total of 18 projects with 88 students have both been evaluated with the framework and given their evaluation of it. Since the latter project is mandatory for all bachelor's-level students, we wanted to gain reasonable confidence that the framework worked well before deploying it there. As part of that deployment, we found that the Software Engineering Project students did not perceive teamwork-related skills as important. For example, competitive situations arose where several strong individuals attempted to pull the project in their desired direction. For this reason, factors regarding individual behavior in relation to the group were added.

Our evaluation strategy is laid out as follows. Ultimately, the objective is to find out whether the framework is suitable for the purpose of influencing learning of teamwork skills through assessment.
However, before actually determining its effect on learning, we want to understand whether the framework is otherwise suitable for use in capstone projects. This includes evaluating the accuracy of assessment and the utility of the framework as a decision support tool: does the framework adequately guard against biases such as self-enhancement and downward comparison, does it adequately reflect raters' understanding of the factors, and does it produce results that are in line with teachers' expert evaluations, taking into account the rich, qualitative observational data obtained when guiding students in the capstone projects?

To perform this evaluation, we proceed as follows. We check the association between self- and peer ratings to determine whether a bias is visible (see Table II); peer ratings should help dampen bias in self-ratings. We also check the association between the different rating factors. There should be discernible differences between the factors both in self- and peer ratings; they should not correlate perfectly. However, there should be some association between factors that are in fact conceptually related.
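As a minimal illustration of the first check, the sketch below computes the association between self-ratings and mean peer ratings for a single factor. Pearson's r, and the scores themselves, are assumptions made for the example; the text above does not prescribe a particular coefficient.

```python
from statistics import mean
from scipy.stats import pearsonr

# For one factor, e.g. "activity": each student's self-rating and
# the mean of the ratings their peers gave them (made-up data).
self_ratings = [5, 4, 4, 5, 3, 4, 5, 4]
peer_means = [4.2, 3.8, 4.0, 4.5, 3.0, 3.5, 4.8, 3.9]

# Self- vs. peer-rating association, in the spirit of Table II.
r, p = pearsonr(self_ratings, peer_means)
print(f"r = {r:.3f}, p = {p:.3f}")

# A consistently positive self-minus-peer gap hints at
# self-enhancement bias.
gap = mean(s - pm for s, pm in zip(self_ratings, peer_means))
print(f"mean(self - peer) = {gap:.2f}")
```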


TABLE I. FRAMEWORK FACTORS, QUESTIONNAIRE ITEMS, AND SCALES.

Presence
  Items: How many days per week did you work on this project? How many hours did you spend on the entire project in total? (Round to nearest hour.) How much was each team member present? Also rate your own presence.
  Scale: 1 – Was not present at all; 2 – Was sometimes present; 3 – Was moderately present; 4 – Was nearly always present; 5 – Was always present; 0 – I don't know.

Activity
  Item: How actively did each team member participate in the project? Also rate your own activity.
  Scale: 1 – Was not active at all; 2 – Was somewhat inactive; 3 – Was moderately active; 4 – Was quite active; 5 – Was very active; 0 – I don't know.

Eagerness (a positive feeling of wanting to push ahead with something)
  Item: How eager was each team member to participate in the course? Also rate your own eagerness.
  Scale: 1 – Was not eager at all; 2 – Was a little eager; 3 – Was moderately eager; 4 – Was quite eager; 5 – Was very eager; 0 – I don't know.

Devotion (commitment to some purpose; "the devotion of his time and wealth to our project")
  Item: How devoted was each team member to the course? Also rate your own devotion.
  Scale: 1 – Was not devoted at all; 2 – Was a little devoted; 3 – Was moderately devoted; 4 – Was quite devoted; 5 – Was very devoted; 0 – I don't know.

Contribution
  Item: How much did each team member contribute to the deliverables (code, documentation, tests, bugs, plans, or anything else that the project produced)? Also rate your own productivity.
  Scale: 1 – Did not contribute at all; 2 – Contributed a little; 3 – Contributed moderately; 4 – Contributed quite much; 5 – Contributed very much; 0 – I don't know.

Expert Maturity (each team member has acted as a software development expert with some specific focus area)
  Item: How mature was each team member in their expert role? Also rate your own maturity.
  Scale: 1 – Very low expert maturity; 2 – Low expert maturity; 3 – Neutral expert maturity; 4 – Some expert maturity; 5 – High expert maturity; 0 – I don't know.

Group dynamics: each member can influence the team spirit and the end result with their social behavior.

Process (only BSc project)
  Item: How did the group behavior of each member influence the sensed meaningfulness of the project work?
  Scale: 1 – Influenced negatively; 2 – Did not influence; 3 – Influenced a little; 4 – Influenced quite much; 5 – Influenced very much; 0 – I don't know.

Result (only BSc project)
  Item: How did the group behavior of each member influence the end quality of the project work?
  Scale: 1 – Influenced negatively; 2 – Did not influence; 3 – Influenced a little; 4 – Influenced quite much; 5 – Influenced very much; 0 – I don't know.
TABLE II. CORRELATIONS BETWEEN SELF- AND PEER RATINGS ON DIFFERENT FRAMEWORK DIMENSIONS IN SOFTWARE FACTORY (SF) AND SOFTWARE ENGINEERING PROJECT (SP), WITH CORRESPONDING P-VALUES.

Dimension                              Self-peer correlation   p-value
Presence (SF)                          0.492                   < 0.001
Presence (SP)                          0.457                   < 0.001
Activity (SF)                          0.531                   < 0.001
Activity (SP)                          0.544                   < 0.001
Eagerness (SF)                         0.279                   0.017
Eagerness (SP)                         0.473                   < 0.001
Devotion (SF)                          0.433                   < 0.001
Devotion (SP)                          0.333                   0.002
Contribution (SF)                      0.582                   < 0.001
Contribution (SP)                      0.376                   < 0.001
Expert maturity (SF)                   0.461                   < 0.001
Expert maturity (SP)                   0.207                   0.062
Contribution to meaningfulness (SP)    0.487                   < 0.001
Contribution to quality (SP)           0.370                   0.002

Table II shows correlations between self- and peer assessments in both the Software Factory (SF) and the Software Engineering Project (SP). Most of these correlations are as expected: there is a large degree of correlation, but there are differences in the gradings. However, some correlations stand out. In SF, the correlation on eagerness is quite low, and self-ratings tend to be higher (mean: 0.863) than peer ratings (mean: 0.743). In SP, self-ratings tend less towards the highest grade (mean: 0.788) and peer ratings are similar in distribution (mean: 0.786). In our interpretation, students in SP could be less inclined to penalize each other, perhaps because their level of experience is lower and the course is mandatory; given that situation, they may not want to give each other low ratings on eagerness.

On devotion and contribution, the trend is similar: in SF, the correlation is stronger than in SP. In the SP data, high peer ratings were more common than in the SF data. In SF, roughly one third of students rated their peers at average expert maturity, while more than two thirds of SP students assigned each other the two highest scores. This may indicate that competitiveness among students in SF is higher. We observe that this information allows the teacher to assess the amount of bias in responses, and that there appears to be agreement on the meaning of the dimensions.


Next, we consider the association between the variables. In the self-evaluation scores, presence correlates somewhat with activity and eagerness, less with devotion and contribution, and least with expert maturity. This could indicate that students do see these factors as separate. Activity correlates quite strongly with eagerness, devotion, and expert maturity. Devotion correlates strongly with contribution and expert maturity. Contribution correlates most strongly with expert maturity.

In the peer evaluation scores, all factors are moderately to strongly correlated. In SF, the strongest (≥ 0.9) correlations are i) activity with eagerness (0.930), devotion (0.932), and contribution (0.943); ii) eagerness with devotion (0.912) and contribution (0.911); iii) devotion with contribution (0.939); and iv) contribution with expert maturity (0.906; p < 0.001 in all cases). In SP, the correlations are smaller but still quite strong, and the order of strength is roughly the same. We interpret these results as supporting the intended structure of the factors.

In SP, the two added factors had moderate to low correlation between self- and peer evaluation. On contribution to meaningfulness, self- and peer evaluations had a moderate correlation (0.487), while on contribution to quality, the correlation was lower (0.370). In the latter, there may be a bias toward thinking that one's own contribution is the most important, leading one to rate the others lower.

A. Validity

The validity of the framework is limited by the fact that it uses a questionnaire-based approach. Respondents are asked to recall their own behavior and that of their teammates, and this recall may not be perfect. More fundamentally, however, validity is ultimately relative to the context in which the instrument is deployed. The purpose of the framework is to function as a decision support tool, and teacher judgment should be used to determine the final assessment. As MacLellan notes, validity concerns not the assessment instrument used or the resulting scores as such, but rather the inferences which are derived from them [31].

To lend more validity to such inferences, the framework should provide a way to detect whether the data may be biased or incorrect. The most common reason for incorrect data, besides unintentional bias, is students' attempts to artificially influence their grades. We found some cases of attempted subversion, where a small number of students systematically rated themselves with the highest scores and others with the lowest scores. These cases were easily detected using simple, semi-automatically produced outlier analysis.
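The outlier analysis itself is not reproduced here; the sketch below shows one plausible form such a check could take under the pattern just described, flagging raters who score themselves at the top while scoring all teammates at the bottom. The thresholds and data layout are illustrative assumptions, not our exact analysis.

```python
# Plausible sketch of a subversion check: flag raters whose
# self-rating is at the top while every rating they give others is
# at the bottom. Thresholds are assumptions; 0 ("I don't know")
# answers are assumed to be excluded beforehand.

def flag_subversion(given, hi=4.5, lo=1.5):
    """`given[rater][ratee]` is the mean score (over all factors)
    that `rater` gave `ratee`. Returns the raters matching the
    self-high, others-low pattern."""
    flagged = []
    for rater, scores in given.items():
        others = [s for ratee, s in scores.items() if ratee != rater]
        if (rater in scores and others
                and scores[rater] >= hi and max(others) <= lo):
            flagged.append(rater)
    return flagged

# Example team of three (made-up scores):
team = {
    "ana": {"ana": 5.0, "ben": 1.0, "cem": 1.2},  # suspicious pattern
    "ben": {"ana": 3.8, "ben": 4.0, "cem": 4.2},
    "cem": {"ana": 4.0, "ben": 3.9, "cem": 4.1},
}
print(flag_subversion(team))  # -> ['ana']
```

Because the flag is only a starting point for teacher judgment, false positives are cheap: the teacher inspects the flagged responses against their own observations before acting on them.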
B. Teacher Satisfaction

In our context, we have evaluated the framework with three different teachers. While the results of this evaluation are experiential and cannot be generalized, we find it important to report on these experiences to enable other teachers to determine whether our approach is of value in their context.

Our first finding relates to the goal of easing the teacher's workload. We found the framework to be non-intrusive and to support formative assessment and feedback during the capstone courses. The framework required no extra effort during the courses, and the teachers were able to devote their time to in-situ instruction. At the end of the course, some administrative effort was needed to administer the online questionnaire, collect the results, and perform the required data analysis. However, since many of these tasks were automated or semi-automated, teachers could focus on the intellectual side of summative assessment: interpreting the numeric results and comparing them to other assessment sources, including notes taken during the course.

One of the teachers voiced concerns regarding fairness and comparability between students and projects. However, we found that when used as a decision-support tool for assessment, the framework did not introduce any fairness concerns. This was also reflected in students' attitudes: all students were given the same opportunity to grade themselves and each other, and the teacher validated the results so that unfair biases were accounted for in the final grade. Cross-project comparability remains an issue, but it is not unique to this framework. Each capstone project is inherently different, and maintaining the level of realism often desired in such projects means that comparisons of performance are difficult.

VI. CONCLUSIONS AND FUTURE WORK

In this article, we have described our tacit skills assessment framework, an easy-to-use decision support utility for evaluating students' teamwork proficiency. The framework has been evaluated with data from 18 bachelor's- and 11 master's-level capstone projects, where it has been found to provide reasonable support for teachers in evaluating tacit, social, and teamwork skills. We found that the framework guarded against rater bias, that its dimensions were well understood, and that it matched teachers' expert ratings. Our results are relevant in the context of project-based courses emphasizing experiential learning and agile methodologies.

We suggest including self- and peer assessment in software capstone projects. However, although technically possible, one should not base the assessment of students in capstone projects only on the values provided by the self- and peer ratings. We suggest using additional criteria that take into account several other data sources, such as version control system commits and their quality. In addition, feedback on the overall project can be obtained from the customer as well as from a possible team lead or coach. Aggregating scores into a final grade requires experimentation and the inclusion of teacher judgment.

If participants display behavior that is not seen as beneficial for the team, additional assessment criteria can be added to the framework due to its small size.
As an example, a few participants in our current bachelor's-level software engineering projects have displayed a tendency toward "safety seeking", where individuals avoid working on tasks that require learning new tools and practices. Additional incentives for moving out of the comfort zone have been introduced via a new assessment criterion: "How well did the participant handle tasks that required learning new tools and practices?"

We are currently considering a replication study to evaluate the framework in a Software Factory in another country, as well as evaluating approaches to make the framework easier to use. Other possible directions include formative assessment support, and determining the association between framework factors and objectively measurable metrics such as code metrics.


REFERENCES

[1] R. Martin, B. Maytham, J. Case, and D. Fraser, "Engineering graduates' perceptions of how well they were prepared for work in industry," European Journal of Engineering Education, vol. 30, no. 2, pp. 167–180, 2005.
[2] R. L. Meier, M. R. Williams, and M. A. Humphreys, "Refocusing our efforts: Assessing non-technical competency gaps," Journal of Engineering Education, vol. 89, no. 3, pp. 377–385, 2000.
[3] M. Natishan, L. Schmidt, and P. Mead, "Student focus group results on student team performance issues," Journal of Engineering Education, vol. 89, no. 3, pp. 269–272, 2000.
[4] E. M. Trauth, D. W. Farwell, and D. Lee, "The IS expectation gap: Industry expectations versus academic preparation," MIS Quarterly, vol. 17, no. 3, pp. 293–307, 1993.
[5] A. Begel and B. Simon, "Struggles of new college graduates in their first software development job," in ACM SIGCSE Bulletin, vol. 40, no. 1. ACM, 2008, pp. 226–230.
[6] P. J. Denning, "Educating a new engineer," Communications of the ACM, vol. 35, no. 12, pp. 82–97, 1992.
[7] M. E. McMurtrey, J. P. Downey, S. M. Zeltmann, and W. H. Friedman, "Critical skill sets of entry-level IT professionals: An empirical examination of perceptions from field personnel," Journal of Information Technology Education, vol. 7, pp. 101–120, 2008.
[8] M. Fowler and J. Highsmith, "The agile manifesto," Software Development, vol. 9, no. 8, pp. 28–35, 2001.
[9] E. Wenger, Communities of Practice: Learning, Meaning, and Identity, ser. Learning in Doing. Cambridge University Press, 1998.
[10] S. L. Payne, J. Flynn, and J. M. Whitfield, "Capstone business course assessment: Exploring student readiness perspectives," Journal of Education for Business, vol. 83, no. 3, pp. 141–146, 2008.
[11] J. Biggs and C. Tang, Teaching for Quality Learning at University. Open University Press, 2011.
[12] F. Fagerholm, N. Oza, and J. Münch, "A platform for teaching applied distributed software development: The ongoing journey of the Helsinki Software Factory," in Collaborative Teaching of Globally Distributed Software Development, 2013.
[13] P. Abrahamsson, P. Kettunen, and F. Fagerholm, "The set-up of a software engineering research infrastructure of the 2010s," in Proceedings of the 11th International Conference on Product Focused Software, ser. PROFES '10. New York, NY, USA: ACM, 2010, pp. 112–114.
[14] M. Luukkainen, A. Vihavainen, and T. Vikberg, "Three years of design-based research to reform a software engineering curriculum," in Proceedings of the 13th Annual Conference on Information Technology Education. ACM, 2012, pp. 209–214.
[15] J. Biggs, "Enhancing teaching through constructive alignment," Higher Education, vol. 32, no. 3, pp. 347–364, 1996.
[16] J. Biggs and C. Tang, Teaching for Quality Learning at University, ser. SRHE and Open University Press Imprint. McGraw-Hill Education, 2011.
[17] J. R. Frederiksen and A. Collins, "A systems approach to educational testing," Educational Researcher, vol. 18, no. 9, pp. 27–32, 1989.
[18] B. S. Bloom, M. Engelhart, E. J. Furst, W. H. Hill, and D. R. Krathwohl, "Taxonomy of educational objectives: Handbook I: Cognitive domain," New York: David McKay, vol. 19, p. 56, 1956.
[19] L. W. Anderson, D. R. Krathwohl, and B. S. Bloom, A Taxonomy for Learning, Teaching, and Assessing. Longman, 2005.
[20] M. Murray, J. Pérez, and M. Guimaraes, "A model for using a capstone experience as one method of assessment of an information systems degree program," Journal of Information Systems Education, vol. 19, no. 2, pp. 197–208, 2008.
[21] M. R. Fellenz, "Toward fairness in assessing student groupwork: A protocol for peer evaluation of individual contributions," Journal of Management Education, vol. 30, no. 4, pp. 570–591, 2006.
[22] E. Van Duzer and F. McMartin, "Methods to improve the validity and sensitivity of a self/peer assessment instrument," IEEE Transactions on Education, vol. 43, no. 2, pp. 153–158, 2000.
[23] G. Ryan, L. Marshall, K. Porter, and H. Jia, "Peer, professor and self-evaluation of class participation," Active Learning in Higher Education, vol. 8, no. 1, pp. 49–61, 2007.
[24] K. Willey and M. Freeman, "Completing the learning cycle: The role of formative feedback when using self and peer assessment to improve teamwork and engagement," Auckland, NZ: Australasian Association for Engineering Education, 2006.
[25] S. Beyerlein, D. Davis, M. Trevisan, P. Thompson, and O. Harrison, "Assessment framework for capstone design courses," Chicago, IL, 2006.
[26] B. A. Friedman, P. L. Cox, and L. E. Maher, "An expectancy theory motivation approach to peer assessment," Journal of Management Education, vol. 32, no. 5, pp. 580–612, 2008.
[27] N. Clark, P. Davies, and R. Skeers, "Self and peer assessment in software engineering projects," in Proceedings of the 7th Australasian Conference on Computing Education - Volume 42, ser. ACE '05. Darlinghurst, Australia: Australian Computer Society, Inc., 2005, pp. 91–100.
[28] M. Freeman and J. McKenzie, "SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects," British Journal of Educational Technology, vol. 33, no. 5, pp. 551–569, 2002.
[29] M. W. Ohland, M. Loughry, R. Carter, L. Bullard, R. Felder, C. Finelli, R. Layton, and D. Schmucker, "The comprehensive assessment of team member effectiveness (CATME): A new peer evaluation instrument," in Proceedings of the 2006 ASEE Annual Conference, 2006.
[30] M. Ohland, R. Layton, M. Loughry, H. Pomeranz, D. Woehr, and E. Salas, "SMARTER teamwork: System for the management, assessment, research, training, education, and remediation of teamwork," 2010.
[31] E. MacLellan, "How convincing is alternative assessment for use in higher education?" Assessment & Evaluation in Higher Education, vol. 29, no. 3, pp. 311–321, 2004.
