Technologies for 21st Century Self-and-Peer-Assessment (REVIEW)
A significant body of centre research has focused on developing assessment strategies that support learning. These approaches have included:
- Development of the REVIEW software for self-assessment
- Creation of learning analytics tools, particularly focused on ‘professional reflection’ to support professional development
- Analysis of ‘benchmarking’ tasks and use of exemplars to support learning (you can read more about this project here)
- Designing approaches to assessing 21st century competencies, and holistic assessment for university entry (see Darrall’s UTS Social Impact case study)
The idea of building up students’ ‘evaluative judgement’ is common across these projects, and is described in a bit more detail below.
REVIEW: Developing evaluative judgement
Single-mark or grade indicators are commonplace in describing student performance, leading to a tendency for both students and staff to focus on this single indicator rather than a more nuanced evaluation of a student’s knowledge and attributes (Thompson, 2006). Moreover, such assessments cannot provide feedback on the development of knowledge and other attributes across disciplinary boundaries and years of study.
The REVIEW software is an assessment tool designed to bring both summative and formative feedback together, over time, and across disciplinary boundaries. The tool has been developed to enhance learning through three modes of action:
- Providing a self-assessment space that encourages students to reflect on and articulate their perception of their own achievements, which they can then compare with tutor assessments; written formative feedback is targeted at the criteria showing the largest gap between self-assessment and tutor assessment.
- Making explicit the association between assessments (including exams), graduate attributes, the marks given, and specific feedback, such that two identical ‘grades’ can be seen to be composed of multiple different criterion-level assessments.
- Through this explicit association, acting as a change agent in developing and shifting assessment tasks and criteria towards constructive alignment between individual assessments – perhaps most notably examinations – and higher-level graduate attributes.
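The gap-targeting idea in the first mode above can be expressed as a small computation. REVIEW’s actual data model and scoring scale are not described here, so the criterion names, scores, and function below are purely illustrative assumptions, not the platform’s implementation:

```python
# Hypothetical sketch of the gap-targeting idea: given a student's
# self-assessment and a tutor's assessment over the same criteria,
# find the criterion with the largest disagreement, so that written
# formative feedback can be focused there.

def largest_gap_criterion(self_scores: dict, tutor_scores: dict) -> tuple:
    """Return (criterion, gap) with the largest |self - tutor| difference."""
    gaps = {
        criterion: abs(self_scores[criterion] - tutor_scores[criterion])
        for criterion in self_scores
    }
    criterion = max(gaps, key=gaps.get)
    return criterion, gaps[criterion]

# Invented example: the student over-rates their critical analysis
# relative to the tutor's judgement.
self_scores = {"communication": 80, "critical analysis": 90, "teamwork": 70}
tutor_scores = {"communication": 75, "critical analysis": 60, "teamwork": 72}

criterion, gap = largest_gap_criterion(self_scores, tutor_scores)
# → ('critical analysis', 30): feedback is best targeted at this criterion
```

Repeating this comparison each semester is also one plausible way to operationalise the ‘calibration’ of self-assessment discussed in the evaluation findings below: the gaps should shrink as students’ judgements align with tutors’.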
Led by researchers at the University of Technology Sydney (UTS), the tool has been evaluated against these objectives over a period of 12 years. Early evaluations (Kamvounias & Thompson, 2008; Taylor et al., 2009; Thompson, Treleaven, Kamvounias, Beem, & Hill, 2008) indicated that (1) based on student feedback surveys, students reported generally positive experiences in using the tool, specifically that it enhanced the clarity of assessment expectations, and (2) based on instructor reflections and analysis of unit outline changes, the tool was a driver for change in developing explicit assessment criteria and constructive alignment between assessments and graduate attributes.
Perhaps most significantly, analysis of 4 semesters of REVIEW self-assessment data indicates enhanced student learning through calibration: students’ self-assessments became more closely aligned with tutor judgements over the semesters (Boud, Lawson, & Thompson, 2013), a finding replicated over a shorter period, with varied cohorts, elsewhere (Carroll, 2013). In addition, “There are early signs in student feedback that the visual display of criteria linked to attribute categories and sub-categories is useful in charting progress and presenting to employers in interview contexts. Employers take these charts seriously because they are derived from actual official assessment criteria from a broad range of subjects over time” (Thompson, Forthcoming, p. 19).
Video introduction to REVIEW
Watch Darrall talk about the REVIEW platform for a ‘highly commended’ ACODE-Pearson 2016 award.
Read more about Darrall’s work at his author profile.