Assessment

Introduction

Empower offers experience and expertise in a range of assessment methodologies for students working in online or blended environments.

Technology-enhanced assessment (“e-assessment”) offers the potential to deliver summative, formative and diagnostic assessment to assess a range of learning outcomes for large student numbers. The assessment can be embedded within learning design in a way that can motivate students and help them to pace their studies.

Meet Sally Jordan from the Open University (OUUK) and listen to what she says about Empower's expertise in assessment.

Meet our experts on assessment
  • Kim Dirkx, Open University of the Netherlands (OUNL), Chair

  • José Janssen, Open University of the Netherlands (OUNL)

  • Claudine Muhlstein-Joliette, Fédération Interuniversitaire de l'Enseignement à Distance (FIED) 

  • Murat Akyildiz, Anadolu University 

  • Tarja Ladonlahti, Jyväskylän yliopisto (JYU)

  • Vassilios Verykios, Hellenic Open University (HOU)

  • Ana Elena Guerrero Roldán, Universitat Oberta de Catalunya (UOC)

EMPOWER offers

We take a broad definition of technology-enhanced assessment to include:

  • Computer-marked assessment (including sophisticated questions which can, for example, deliver targeted and graduated feedback on questions which require students to enter their answer as a free-text sentence)
  • Electronic management of assessment, which allows for the electronic submission and return of assignments marked by humans
  • Online examinations
  • The marking of digital artefacts (audio, video, software, digital papers)
  • The assessment of synchronous and asynchronous group interaction
  • The assessment of ePortfolios
  • Electronic tools to assist in the process of peer review

Learning analytics enable us to use data to improve the learning process for students, both directly (by delivering learning that is personalised) and indirectly (by informing teachers of common misconceptions and informing learning designers of what works well and what does not).

Empower experts are available to help you to address some of the misconceptions surrounding assessment in online/blended environments, as well as challenges and opportunities including:

  • The assessment of prior learning/diagnostic testing
  • Authenticity of assessed tasks
  • Authentication/identification of students
  • Plagiarism and developing good academic practice
  • Online marking
  • Examinations in remote locations
  • Achieving quality and consistency of marking and feedback, for work that is marked by many different people in different places
  • Assessment of very large courses, including MOOCs.

Tools & resources

Repository

E-assessment week:

26 & 28 June 2018

Academic dishonesty: challenges & solutions

Though academic dishonesty or academic fraud can be considered ‘a fact of life’ (much as we would like to eliminate it, we know that at best we can try to contain it), educational institutions need to make a convincing case that their assessment practices are fair and reliable. In this respect, recent figures on the extent of the problem of academic dishonesty give rise to concern. At the same time, there is a desire to increase flexibility in educational assessment through online assessment, which also constitutes a challenge in terms of ensuring that the response to an assessment is provided by the right person. The European Horizon 2020 TeSLA project aims at enabling reliable e-assessment through various state-of-the-art technologies for authentication and authorship verification, which can help to improve assessment practices in both online and face-to-face settings. These technologies include face recognition, voice recognition, analysis of keystroke (typing) dynamics, plagiarism detection, and forensic (writing style) analysis. In this webinar, we will explore definitions and types of academic dishonesty, the scope of the problem, the solutions provided by these technologies, and the extent to which they cover the problem. Finally, we will discuss possible measures beyond the use of technology. By José Janssen (OUNL)
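The abstract only names these technologies, so the sketch below is just a rough illustration of the kind of signal one of them relies on: a hypothetical keystroke-dynamics check (invented feature set and threshold, not the TeSLA implementation) that averages key hold times and flight times into an enrolment profile and accepts later samples only if they stay close to it.

```python
# Illustrative sketch of keystroke-dynamics verification (NOT the TeSLA system):
# compare hold times (key down -> key up) and flight times (key up -> next key down)
# against an enrolled profile using a simple z-score threshold.
from statistics import mean, stdev

def features(events):
    """events: list of (key, down_time, up_time) tuples, times in seconds."""
    holds = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return mean(holds), mean(flights)

def enroll(samples):
    """Build a profile (mean and spread of each feature) from several typing samples."""
    feats = [features(s) for s in samples]
    hold_means = [f[0] for f in feats]
    flight_means = [f[1] for f in feats]
    return {"hold": (mean(hold_means), stdev(hold_means)),
            "flight": (mean(flight_means), stdev(flight_means))}

def verify(profile, sample, threshold=3.0):
    """Accept the sample if both features lie within `threshold` standard deviations."""
    hold, flight = features(sample)
    z_hold = abs(hold - profile["hold"][0]) / (profile["hold"][1] or 1e-9)
    z_flight = abs(flight - profile["flight"][0]) / (profile["flight"][1] or 1e-9)
    return z_hold < threshold and z_flight < threshold
```

Real systems use far richer features and statistical models; the point here is only that typing rhythm can serve as a lightweight behavioural signature.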

Teachers' intentions and students' perceptions of written feedback

Feedback is one of the most powerful enhancers of learning, and although there has been a considerable amount of research studying the determinants of effective feedback, both teachers and students indicate that feedback is still not used optimally. In this webinar, we will present an overview of a number of studies, conducted at the Open University, into actual feedback practices and students' perceptions of feedback. We will conclude with some food for thought on providing feedback to our students. By Kim Dirkx (OUNL)


Viewbrics: mirroring and mastering complex generic skills with video-enhanced rubrics through a technology-enhanced formative assessment methodology

To master complex generic skills (or ‘21st-century skills’), it is important to form a concrete and consistent mental model of all constituent sub-skills and mastery levels. An analytic assessment rubric describes a skill's mastery levels in text, by means of a set of performance indicators for its constituent sub-skills. However, text-based rubrics have a limited capacity to convey the contextualised, procedural, time-related and observable behavioural aspects of a complex skill, thus restricting the construction of a rich mental model.

Therefore, within the Viewbrics project, we study the possibilities of using video modelling examples combined with rubrics, called video-enhanced rubrics, for the formative assessment of complex skills. We expect that using video-enhanced rubrics instead of text-based rubrics will lead to a ‘richer’ mental model and improve feedback quality (in terms of consistency as well as concreteness) while practising a complex skill, for both pupils and teachers in secondary schools. As a result, we expect increased mastery levels of the skill.

Within the Viewbrics project, we developed and tested this technology-enhanced formative assessment methodology with video-enhanced rubrics through a design research approach, working with teachers, pupils, researchers and various domain experts, for three generic complex skills: presenting, collaborating and information literacy. This webinar reports on the design research process followed, the resulting formative assessment methodology, the functionality of the Viewbrics online tool, and future research. We will also discuss the applicability of ‘Viewbrics’ in other educational contexts. By Ellen Rusman (OUNL)

Fostering engagement and learning through formative feedback

UNED's development and use of automated and mobile feedback for closed and open-ended questions:

Formative assessment and personalised feedback are commonly recognised as key factors both for improving students' performance and for increasing their motivation and engagement (Gibbs, 2005). Currently, in large and massive online courses, technological solutions for giving feedback are largely reduced to different kinds of quizzes. In this webinar, solutions and results for automated feedback on closed and open-ended questions will be presented, based on UNED's experiences in different undergraduate subjects.

 

Automatic feedback for closed questions

Previous research in educational psychology has shown positive effects on students' engagement and learning of, on the one hand, the so-called testing effect (answering questions after study sessions) and, on the other hand, spaced education, meaning spaced repetition of the same questions at specific intervals, which increases long-term retention. In this webinar, participants will have the opportunity to learn about the features of a new Moodle activity plug-in developed at UNED, called UNEDTrivial, which allows instructors to design quizzes as learning tools based on the testing effect and spaced education. First results from two subjects in the Faculties of Economics and Psychology will be presented.
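UNEDTrivial is a Moodle plug-in, and its actual scheduling logic is not described in the abstract; the sketch below is only a minimal illustration of the "spaced education" idea it builds on, with invented interval values: correctly answered questions are pushed progressively further into the future, while incorrectly answered ones return after a day.

```python
# Minimal illustration of spaced repetition of quiz questions (not UNEDTrivial's
# actual algorithm): correct answers push the next presentation further away,
# wrong answers restart the spacing sequence.
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 14, 30]  # days between repetitions of the same question

def schedule_next(question, answered_correctly, today=None):
    """Update a question dict {'level': int, 'due': date} after an attempt."""
    today = today or date.today()
    if answered_correctly:
        question["level"] = min(question["level"] + 1, len(INTERVALS) - 1)
    else:
        question["level"] = 0  # restart the spacing sequence
    question["due"] = today + timedelta(days=INTERVALS[question["level"]])
    return question

# Example: a question answered correctly twice, then incorrectly.
q = {"level": 0, "due": date.today()}
for correct in (True, True, False):
    q = schedule_next(q, correct)
    print(q["level"], q["due"])
```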

 

Automatic feedback for open-ended questions

At present, one of our challenges is to be able to give feedback on open-ended questions through semantic technologies in a sustainable way. To face this challenge, our academic team decided to test a Latent Semantic Analysis-based automatic assessment tool named G-Rubric, developed by researchers at the Developmental and Educational Psychology Department of UNED (Spanish National Distance Education University). By using G-Rubric, automated formative and iterative feedback was provided to our students on different types of open-ended questions (70-800 words). This feedback allowed students to improve their answers and practise their writing skills, thus contributing both to better concept organisation and to the building of knowledge.
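The abstract does not describe G-Rubric's internals, so the sketch below is only a generic illustration of how Latent Semantic Analysis can score an open-ended answer: texts are projected into a low-dimensional semantic space and the student answer is compared with a model answer by cosine similarity (the corpus, answers and scikit-learn pipeline here are all invented for the example).

```python
# Rough sketch of LSA-based scoring of an open-ended answer (not G-Rubric itself):
# build a TF-IDF term-document matrix over a small corpus, reduce it with
# truncated SVD, and score the student answer against a model answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # invented reference material; a real system would use course texts
    "Demand falls when the price of a good rises, other things being equal.",
    "Supply increases when producers expect higher prices for their output.",
    "Market equilibrium is reached where supply equals demand.",
]
model_answer = "Price and quantity demanded move in opposite directions, ceteris paribus."
student_answer = "When prices go up, consumers usually buy less of the product."

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus + [model_answer, student_answer])

svd = TruncatedSVD(n_components=2, random_state=0)  # the 'latent semantic' space
Z = svd.fit_transform(X)

score = cosine_similarity(Z[-1:], Z[-2:-1])[0, 0]  # student vs. model answer
print(f"semantic similarity: {score:.2f}")  # could be mapped to feedback bands
```

A tool such as G-Rubric would map similarity onto rubric dimensions and wrap it in iterative feedback; this snippet only shows the similarity core.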

In this webinar, we will present the promising results of our first experiences with UNED Business Degree students over three academic years (2014/15, 2015/16 and 2016/17). (Miguel Santamaría Lancho & Ángeles Sánchez-Elvira Paniagua)

Assessing Students and Tutors with Learning Analytics Dashboards

Newly emerging schemes for data capture and storage have been creating a prosperous ecosystem for revolutionizing the way public organizations and private companies do business. Educational institutions are now able to provide evidence of accountability and efficiency, based on the adequate allocation of public funding and their ranking in relation to other institutions, as well as to assess and guide their students, tutors and administration. In this study, we present our initial findings from applying learning analytics schemes, along with adequate visual representations bundled together for ease of use into so-called learning analytics dashboards, to establish patterns of student performance and tutor productivity. Moreover, we report on the applicability of some of these software suites for addressing the needs of students and tutors in a module offered by the Information Systems graduate program in the School of Science and Technology of the Hellenic Open University in Greece. By Vassilis Verykios, Andreas Gkontzis, Elias Stavropoulos (Hellenic Open University)
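As a toy illustration of the aggregation that sits behind such dashboards (the column names and figures below are invented, not the Hellenic Open University data model), a gradebook can be summarised into per-student performance and per-tutor turnaround views ready for visualisation.

```python
# Toy illustration of the aggregation behind a learning analytics dashboard
# (invented column names and data, not the authors' actual pipeline).
import pandas as pd

grades = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s3"],
    "tutor": ["t1", "t2", "t1", "t2", "t1"],
    "score": [62, 74, 55, 81, 90],
    "days_to_feedback": [5, 9, 4, 12, 6],
})

# Per-student performance: mean score and number of submissions.
student_view = grades.groupby("student")["score"].agg(["mean", "count"])

# Per-tutor productivity: average marking turnaround and mean mark awarded.
tutor_view = grades.groupby("tutor").agg(
    mean_turnaround=("days_to_feedback", "mean"),
    mean_score=("score", "mean"),
)
print(student_view)
print(tutor_view)
```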

26 September 2017

A cluster-based analysis to diagnose students’ learning achievements

Assessment and evaluation of students’ performance has always played an important role in the learning process, as it provides information about the level of knowledge acquired on a subject and the progress that has been achieved. However, another key issue is detecting the thematic core in which students have learning problems, because they are evaluated in terms of competencies. This study proposes an adaptive approach to diagnosing students and providing them with feedback, and makes use of Item Response Theory to estimate skill levels and classify the students. In addition, it uses a model of the relationships between the concepts and the items of the test. The purpose is to diagnose students’ cognitive problems and provide personalised and intelligent learning suggestions. This approach can be used as a system of intelligent diagnosis that receives a set of responses and generates a set of weak concepts for each student, specifying their learning path and clustering individuals who share the same shortcomings to ease any process of group feedback. By Miguel Rodríguez Artacho, Associate Professor at ETSI Informática, UNED
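The abstract gives no model details, so the sketch below is only a minimal illustration of the ingredients it names: a one-parameter (Rasch) item response function to estimate a student's skill level, and a concept-item map used to list the "weak concepts" behind the items answered incorrectly (all difficulties, concepts and responses are invented).

```python
# Minimal illustration of the ingredients described above (not the authors' system):
# a Rasch (1PL) item response function, a coarse grid-search ability estimate,
# and a concept-item map used to collect each student's weak concepts.
import math

def p_correct(ability, difficulty):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, difficulties):
    """Pick the ability on a coarse grid that maximises the likelihood of the responses."""
    grid = [x / 10 for x in range(-40, 41)]  # abilities from -4.0 to 4.0
    def log_lik(theta):
        return sum(
            math.log(p_correct(theta, b) if r else 1 - p_correct(theta, b))
            for r, b in zip(responses, difficulties)
        )
    return max(grid, key=log_lik)

# Hypothetical data: item difficulties, the concept each item tests, one student's answers.
difficulties = [-1.0, 0.0, 0.5, 1.5]
item_concepts = ["fractions", "fractions", "ratios", "ratios"]
responses = [1, 1, 0, 0]  # 1 = correct, 0 = incorrect

theta = estimate_ability(responses, difficulties)
weak = sorted({c for r, c in zip(responses, item_concepts) if not r})
print(f"estimated skill level: {theta:+.1f}, weak concepts: {weak}")
```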

19 July 2016

Confidence-based marking

Confidence-based marking (CBM) is an assessment method which asks the student not only to provide the answer to a question, but also to report their level of confidence (or certainty) in the correctness of that answer. They need to consider this carefully because it affects the marks they are awarded: a student scores full marks for confidently giving the correct answer, receives some credit for a tentative correct answer, but is penalised for confidently giving a wrong answer. There are several motivations for using CBM: it rewards care and effort, so engendering greater engagement; it encourages reflective learning; and it promises greater accuracy and reliability.
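Mark schemes differ between CBM implementations, so the values in the sketch below are invented purely to make the mechanics concrete: the reward for a correct answer grows with the stated confidence, and so does the penalty for a wrong one.

```python
# Illustrative confidence-based marking scheme (invented values, not a standard scheme):
# higher stated confidence earns more for a correct answer and loses more for a wrong one.
MARKS = {  # confidence level -> (mark if correct, mark if wrong)
    "low": (1, 0),
    "medium": (2, -1),
    "high": (3, -3),
}

def cbm_mark(correct, confidence):
    """Return the mark for one question given correctness and the student's confidence."""
    reward, penalty = MARKS[confidence]
    return reward if correct else penalty

# A student who is sure of a wrong answer loses more than one who hedges.
print(cbm_mark(True, "high"))   # 3: confidently correct
print(cbm_mark(True, "low"))    # 1: tentatively correct
print(cbm_mark(False, "high"))  # -3: confidently wrong
```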

CBM has had niche success in the past in the context of medical training, and recently may have found a new niche in the context of regulatory compliance; these are both areas where assessment of competency and mastery is expected. However, CBM has not been widely adopted in other areas of education.

In this talk I will review the CBM landscape and ask why CBM is not used more widely. What are the benefits claimed and how robust is the evidence? How should CBM be presented to the students? Do they need training to understand how the system works? Is it a fair method of assessment? Does it disadvantage any category of student? How does it fit with ideas around ‘assessment for learning’ and ‘reflective learning’?

Confidence-based marking could offer both the student and teacher greater insight into a student’s understanding than the standard fare of e-assessment, the multiple-choice quiz. It is a technique that we should therefore keep under consideration. By Jon Rosewell, Dept Computing & Communications, MCT (OUUK)

22 June 2016