Encouraging students to be active in their learning and to discuss issues openly is a good way of deepening understanding of the topic being taught. The material sinks in better, and the discussion can feed into the summative assessment of projects. It was with this in mind that we started a project as part of the Creative Technology programme at the University of Twente. The objective: to use group peer feedback as a teaching tool. We also wanted to investigate whether group peer feedback could be used as part of the assessment process, contributing to the grade awarded for the submitted project. But would this be fair?
It’s good to talk
Encouraging students to reflect on the topic being taught and to discuss it in groups is useful. That is one of the results of the group peer feedback project. Students enjoy looking at other people's work. Giving feedback in a group is valuable because it leads to discussion and a better understanding of the topic, and it can also encourage students to take a different perspective on the work they have submitted themselves. So far, I'm positive about group peer feedback. But I have a number of concerns that I'd like to share with you.
So many peer feedback tools, but none of them really suitable
There is a huge range of peer feedback tools available, both free and paid. A simple Google search generates a large number of hits. But we couldn't find a single tool that allows individually submitted work to be assessed by peer feedback groups, so we developed software that would let us do just that. Although this software isn't yet fully developed, we were still able to use the tool in our pilot, which took place in Quarter 4 of the 2015-2016 academic year.
Good enough quality?
It sounds so logical: when students discuss someone else's work together and provide feedback on it, they become enthusiastic and learn things they wouldn't otherwise have learnt. What could be better than learning from each other and from realistic assignments submitted by your peers? But we have not been able to prove that the quality of the feedback improves when students give feedback as a group.
Lecturers, on the other hand, are convinced that group feedback is, on average, better than individual feedback. Clearly, not every group gives fantastic feedback. But if the number of reviews is large enough, there will be enough for the recipient of the feedback to work with.
In our project, the students received feedback from at least three groups of three students. It was noticeable that the quality of the feedback improved when students took the trouble to add written or typed comments alongside the checklists. This is a bit odd, because the completed checklists very often differ from the lecturer's opinion. Students who give feedback clearly have trouble interpreting or understanding a checklist.
You can't predict which groups of students will give a good assessment (meaning an assessment score that is very close to the score awarded by the lecturer) and which will not. Even if we exclude the feedback scores of the less able student groups (where the group members' individual scores average lower than 6, for example) from the analyses, this does not lead to any obvious improvement, although that is what we would have expected. So, for the time being, using group peer feedback to determine a project score does not seem reliable.
Keep at it! To my mind, that’s the best policy. You can’t tell me that group peer feedback is not a good idea. But how do we make it effective and useful? And when will it be possible to use it as part of a summative assessment? We hope to figure this out in a follow-up project.
We’re a bit disappointed about the fact that group peer feedback can’t yet be used in assessments, but we’re really looking forward to continuing our search. How do we get people talking about group peer feedback? Or should we take a different tack? Is peer ranking perhaps far simpler and more effective?
Innovation scheme: Digital assessment for customised education
This project is one of the nine projects in SURFnet's innovation scheme Digital assessment for customised education. Under this scheme, between 1 July 2015 and 1 July 2016, higher education institutions experimented with the use of digital assessment to create customised education, with a view to improving the quality of teaching and devising learning processes that better match lecturers' and students' requirements.
Halfway through the scheme, the project leaders wrote a blog about the progress of their project. In this latest blog they describe the results and the lessons learned from their experiments. Read the first blog on this project.
About the author
Slotman works as an Educational Consultant at the University of Twente, in the Centre of Expertise in Learning and Teaching. She has been seconded to the faculty of Electrical Engineering, Mathematics and Computer Science, where she is part of the TELT team (Technology-Enhanced Learning and Teaching).