This may actually turn out to be my second post this month, which hasn’t happened since I started blogging. I don’t want to jinx things, so here goes. It’s about the Moodle Workshop, its peer review activity. I suspect it works like most online peer reviews do: first you set up the task, then there’s a submission period, an assessment period, and when that’s over everyone sees the feedback their classmates have given them. Pretty straightforward. Oh, and I always make it a double-blind process; I don’t think there would be as much useful feedback if the students knew whose work they were reviewing.
I’ve run the peer review a number of times now and have introduced a couple of tweaks along the way, so I thought it was about time I had some kind of written record of how things developed.
In my course this activity is part of a unit on descriptive writing, so the idea is to jazz up a bland piece of writing using a number of possible strategies. The feedback students give each other is on how successfully these strategies have been used, not on language accuracy. At least that is the plan, although people have occasionally given feedback not entirely restricted to strategy use. The bland piece of writing comes from a book – I hesitate to call it a coursebook because it’s not, but the course is built, to an extent, on the material it covers. The book, however, doesn’t suggest the students do anything other than improve upon this piece, so the peer review is my online adaptation.
The submission stage lasts around 5 days. Moodle lets you be quite strict about this: it’s up to you whether or not to accept late submissions. I did the first year, but this proved to be complicated during the next stage – assessment. If you decide on a deadline by which work needs to be submitted, everyone who has submitted something can start assessing at the same time. If you accept late submissions, some students will need to wait before they can start assessing. I decided this wasn’t fair, and at the risk of not including everyone in the task, I’ve now had a fixed submission deadline for a couple of years.
When you have all the submissions, they need to be shuffled around and allocated to other students. This can be done by the system or manually. I always do it manually, trying for a balance between weaker and stronger students. The assessment stage takes another 3–4 days, and again it is up to the instructor how (in)flexible they want to be regarding the deadline. I don’t think I’ve ever had everyone observe the deadline; I always need to gently nag some people to finish their assessments.
I could simply set a cut-off time after which the system would not allow further assessments, but my feeling is that it would be unfair on those students who have given their peers feedback but wouldn’t receive any themselves. As it is, there’s always at least one student who doesn’t assess and ignores my DM, and I then do the assessment myself so as not to hold the activity up forever.
- Something I tried in one of the early iterations of the course was to organize a second round of the activity for those students who had missed the deadline the first time around. I think I only did that once, because it quickly dawned on me that this was an invitation to be taken advantage of.
- Also early on, the majority of students wrote their learning journal entry as soon as they completed the submission phase, which meant that few people reflected on giving feedback. This has since changed (I schedule the deadlines differently) and now most say how they feel about the peer review and what they have learned from it, if anything. There have been some interesting comments about the perceived inadequacy of feedback given by someone who is not a teacher.
- When the assistant mods started helping me, I asked them to assess the students’ work as well, so each student would end up with two assessments. This is arguably not strictly peer review, since the assistant mods had already done the activity when they were doing the course themselves – but then again, they are still students, so in a sense it is peer review?
- A change introduced last year is that each student now has to give feedback to two other students. In MOOCs I think it’s common to review the work of several people, which makes sense because of the low completion rates – you want to be sure everyone will receive at least some feedback.
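As an aside, if you ever allocate reviewers outside Moodle (say, from a spreadsheet of names), the shuffle-and-rotate logic is simple to script. This is just a sketch of one possible approach – the function name and structure are my own, not anything Moodle provides – that guarantees nobody reviews their own work and every submission gets the same number of reviews:

```python
import random

def allocate_reviews(students, reviews_per_student=2):
    """Assign each student some peers' submissions to assess.

    Shuffles the class once, then 'rotates' the list: reviewer i
    assesses the submissions of students i+1, i+2, ... (wrapping
    around). No one ever gets their own work, and every submission
    is reviewed exactly `reviews_per_student` times.
    """
    if len(students) <= reviews_per_student:
        raise ValueError("Need more students than reviews per student")
    order = students[:]
    random.shuffle(order)
    n = len(order)
    allocation = {s: [] for s in order}
    for offset in range(1, reviews_per_student + 1):
        for i, reviewer in enumerate(order):
            allocation[reviewer].append(order[(i + offset) % n])
    return allocation
```

This doesn’t capture the weaker/stronger balancing I do by hand, of course – that judgement call is exactly why I allocate manually – but it shows how little is needed for a fair double-blind rotation.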
- Last semester, when the course was run in blended format, the students did the activity online, but when we next met in class we had a follow-up discussion. I picked out some of the comments from their learning journals – these are shared with the group, so I knew they were thoughts the students were more or less comfortable sharing – and in pairs they decided if they agreed with the statements. Then, when I knew each pair would be ready to say something, we discussed them as a class.
A reason I like peer review so much is that I think it is transferable to life outside the classroom. Not the submission–assessment process perhaps, but realizing the importance of giving useful feedback. Focusing on specific issues, not the person. Being helpful and identifying what could be done differently and possibly more effectively. Realizing that a piece of writing can be improved upon even if we don’t focus on simply correcting grammar errors.
How do you feel about peer review? If you run an online course do you do this type of activity with your students? How about in the classroom?