Categories
Edtech MOOC online course

Some (new) observations on peer review

I recently completed a MOOC called Elements of AI. Let me first say that I am privately (and now perhaps not as privately) thrilled to have managed this because I’m highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means) and it often looks that way. I enjoyed the course and definitely learned a bit about AI – robots will not replace teachers anytime soon in case anyone was wondering – but I couldn’t help noticing various aspects of course design along the way. This is what this post is about, in particular the peer review component. 

S B F Ryan: #edcmooc Cuppa Mooc (CC BY 2.0)

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*

Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and despite reading them carefully more than once, I still occasionally managed to feel, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). The reason I note this is that it’s all too easy to forget about it when you’re the teacher. I often catch myself thinking – well, I did a really detailed job explaining X, so how did the student not get that? 

Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting and which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup, which I am used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions in terms of when you can sign up (and submit each peer review), so I realized from the start that their setup would have to be different. 

The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of 3 other course participants. For each of the three, you’re shown two random answers, commit to one, and assess it on a scale from an intensely frowning face to a radiant smile (there are 5 faces altogether). You are asked to grade the other participants on 4 points:

  1. staying on topic
  2. response is complete/well-rounded
  3. the arguments provided are sound
  4. response is easy to understand

The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. This was then quickly assessed because the 4 points are quite easy to satisfy if you’ve read the instructions at all carefully. However, I did miss having an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing this was intentional so as to prevent people from either submitting overly critical comments or spamming others (or another reason that hasn’t occurred to me), but I often felt an overwhelming urge to say, well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛
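The four criteria plus the five-face scale amount to a tiny rubric. As a sketch of how such an assessment might be represented and checked (the criterion names and the simple averaging are my own assumptions, not anything the course reveals about its internals):

```python
# Hypothetical labels for the course's four grading points.
CRITERIA = ("on_topic", "complete", "sound_arguments", "easy_to_understand")

def score_review(faces):
    """Validate a peer assessment given as {criterion: face}, where a face
    is 1 (intense frown) to 5 (radiant smile), and return an average score.
    This is a guess at a sensible representation, not Elements of AI's code."""
    if set(faces) != set(CRITERIA):
        raise ValueError("all four criteria must be rated")
    if not all(1 <= face <= 5 for face in faces.values()):
        raise ValueError("faces are rated on a 1-5 scale")
    return sum(faces.values()) / len(faces)  # averaging is my assumption
```

With no free-text box in the interface, a structure like this would be all the feedback a submission ever receives – which is exactly the limitation described above.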

It was also frustrating when I came across an answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it had a look at someone else’s screen after that other person had already made their submission and could access the sample answer. Or maybe someone copied the sample answer somewhere where others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”

Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed up the process, you can go ahead and assess some more people. The more responses you assess, the faster your response will come up for assessment. I ended up assessing 9 responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades if I had only assessed 3 responses per task. I only know that when I next logged on, usually the following day, my work had already been assessed. 
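The course doesn’t say how this matching actually works, but the incentive could be modelled as a priority queue in which submissions from more active assessors jump ahead. A toy sketch (my guess at the mechanism, not the platform’s actual logic):

```python
import heapq

class ReviewQueue:
    """Toy model of an 'assess more, get assessed sooner' queue.
    A guess at the incentive mechanism, not Elements of AI's real code."""

    def __init__(self):
        self._counter = 0  # arrival order, used as a FIFO tie-breaker
        self._heap = []    # entries: (-assessments_done, arrival, submission_id)

    def submit(self, submission_id, assessments_done=0):
        # Negate the count so heapq's min-heap pops the most active assessor first.
        heapq.heappush(self._heap, (-assessments_done, self._counter, submission_id))
        self._counter += 1

    def next_to_be_assessed(self):
        return heapq.heappop(self._heap)[2]
```

Under this model, someone who assessed 9 responses per task would consistently be served before someone who stopped at the required 3 – consistent with my work being assessed by the next day, though I can’t verify that this is what happens behind the scenes.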

For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less than perfect score, so I assume there was at least one other teacher taking the course. 🙂 

In theory then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are still marked as complete and you can get your certificate nevertheless. I suspect I’ll be waiting a while for that grade from the teaching staff, and it seems a bit irrelevant, to be honest. It would make sense for someone other than other course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji. 

There were other interesting aspects of the course but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (if everyone participates the way the course designers intended them to) and has given me some new ideas about how similar activities can be set up.

Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?

Categories
Edtech Moodle Tertiary teaching

Anonymous reviewer

This may actually turn out to be my second post this month, which hasn’t happened since I started blogging. I don’t want to jinx things, so here goes. It’s about the Moodle workshop – Moodle’s peer review activity. I suspect it works like most online peer reviews do: first you set up the task, then there’s a submission period, an assessment period, and when that’s over everyone sees the feedback their classmates have given them. Pretty straightforward. Oh, and I always make it a double-blind process; I don’t think there would be as much useful feedback if the students knew whose work they were reviewing.

I’ve run the peer review a number of times now and have introduced a couple of tweaks along the way, so I thought it was about time I had some kind of written record of how things developed.

Photo taken from ELTpics by @aClilToClimb, used under a CC BY-NC 4.0 license.

In my course this activity is part of a unit on descriptive writing, so the idea is to jazz up a bland piece of writing using a number of possible strategies. The feedback students give each other is on how successfully these strategies have been used, not on language accuracy. At least that is the plan, although people have occasionally given feedback that isn’t entirely restricted to strategy use. The bland piece of writing comes from a book – I hesitate to call it a coursebook because it’s not, but the course is built on the material it covers to an extent. The book, however, doesn’t suggest the students do anything other than improve upon this piece, so the peer review is my online adaptation.

The submission stage lasts around 5 days. Moodle allows you to be quite strict about this, which means that it’s up to you whether to accept late submissions or not. I did accept them the first year, but this proved complicated during the next stage – assessment. If you decide on a deadline by which work needs to be submitted, everyone who has submitted something can start assessing at the same time. If you accept late submissions, some students will need to wait before they can start assessing. I decided this wasn’t fair, and at the risk of not including everyone in the task, I’ve now had a fixed submission deadline for a couple of years.

When you have all the submissions, they need to be shuffled around and allocated to other students. This can be done by the system or manually. I always do it manually, trying for a balance between weaker and stronger students. The assessment stage takes another 3–4 days, and again it is up to the instructor how (in)flexible they want to be regarding the deadline. I don’t think I’ve ever had everyone observe the deadline; I always need to nag, er, gently remind some people to finish their assessment.
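For anyone curious what “balancing weaker and stronger students” might look like if it were automated, here is one way to sketch it – rank students by a rough ability estimate and pair each one with someone from the other end of the range. This is my own illustration, not Moodle’s allocation code, and the level numbers are placeholders:

```python
def allocate_reviewers(students):
    """Allocate one reviewer per submission, pairing weaker and stronger
    students. `students` is a list of (name, rough_level) tuples, where a
    higher level means a stronger student. Returns {author: reviewer}.
    A sketch of the balancing idea, not Moodle workshop's actual allocator."""
    # Rank from weakest to strongest.
    ranked = [name for name, _ in sorted(students, key=lambda s: s[1])]
    # Rotate by half the list so the weaker half meets the stronger half;
    # everyone gives exactly one review and no one reviews themselves.
    half = len(ranked) // 2
    rotated = ranked[half:] + ranked[:half]
    return dict(zip(ranked, rotated))
```

Doing this by hand, as I do, amounts to the same matching with a bit more teacher judgement mixed in (the “levels” in my head are fuzzier than integers).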

I could simply set a cut-off time after which the system would not allow further assessments, but my feeling is that it would be unfair on those students who have given their peers feedback but wouldn’t receive any themselves. As it is, there’s always at least one student who doesn’t assess and ignores my DM, and I then do the assessment myself so as not to hold the activity up forever.

Tweaks

  1. Something I tried in one of the early iterations of the course was to organize a second round of the activity for those students who had missed the deadline the first time around. I think I only did that once, because it quickly dawned on me that this was an invitation to be taken advantage of.
  2. Also early on, the majority of students wrote their learning journal entry as soon as they completed the submission phase, which meant that few people reflected on giving feedback. This has since changed (I schedule the deadlines differently) and now most say how they feel about the peer review and what they have learned from it, if anything. There have been some interesting comments re the perceived inadequacy of someone who is not a teacher giving feedback.
  3. When the assistant mods started helping me, I asked them to assess the students’ work as well, so each student would end up with two assessments. This, arguably, is not strictly peer review, in the sense that the assistant mods had already done the activity when they took the course themselves, but perhaps it could be argued that they are still students, so in a sense it is peer review?
  4. A change introduced last year is that each student now has to give feedback to two other students. In MOOCs I think it’s common to review the work of several people, but this makes sense because of the low completion rate – you want to be sure everyone will receive at least some feedback.
  5. Last semester, when the course was run in blended format, the students did the activity online but when we next met in class we had a follow-up discussion. I picked out some of the comments from their learning journals – these are shared with the group so I knew that these were thoughts the students were more or less comfortable sharing – and in pairs they decided if they agreed with the statements. Then, when I knew each pair would be ready to say something, we discussed them as a class.

A reason I like peer review so much is that I think it is transferable to life outside the classroom. Not the submission–assessment process perhaps, but realizing the importance of giving useful feedback. Focusing on specific issues, not the person. Being helpful and identifying what could be done differently and possibly more effectively. Realizing that a piece of writing can be improved upon even if we don’t focus on simply correcting grammar errors.

How do you feel about peer review? If you run an online course do you do this type of activity with your students? How about in the classroom?