I recently completed a MOOC called Elements of AI. Let me first say that I am privately (and now perhaps not as privately) thrilled to have managed this, because I’m highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means), and it often looks that way. I enjoyed the course and definitely learned a bit about AI – robots will not replace teachers anytime soon, in case anyone was wondering – but I couldn’t help noticing various aspects of course design along the way. That’s what this post is about, in particular the peer review component.

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*
Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and yet, despite reading them carefully more than once, I still occasionally managed to feel, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). I note this because it’s all too easy to forget about when you’re the teacher. I often catch myself thinking – well, I did a really detailed job explaining X, so how did the student not get that?
Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting, and you’re reminded which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup I’m used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions on when you can sign up (and submit each peer review), so I realized from the start that their setup would have to be different.
The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of 3 other course participants. For each of the three, you’re shown two random answers and choose which one to assess; you then rate it on a scale from an intensely frowning face to a radiant smile (there are 5 faces altogether). You are asked to grade each response on 4 points:
- the response stays on topic
- the response is complete/well-rounded
- the arguments provided are sound
- the response is easy to understand
The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. Assessing it was then quick, because the 4 points are quite easy to satisfy if you’ve read the instructions at all carefully. However, I did miss having an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing this was left out intentionally, to prevent people from submitting overly critical comments or spamming others (or for some other reason that hasn’t occurred to me), but I often felt an overwhelming urge to say, well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛
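Just for fun, here’s a rough sketch of how I picture one of these assessments – purely my own way of modelling it in Python, with made-up names, not anything taken from the actual course:

```python
from dataclasses import dataclass

# The five faces, from intensely frowning (1) to radiant smile (5).
FACES = {1: "😠", 2: "🙁", 3: "😐", 4: "🙂", 5: "😄"}

# The four points each response is graded on.
CRITERIA = [
    "stays on topic",
    "complete/well-rounded",
    "sound arguments",
    "easy to understand",
]

@dataclass
class PeerAssessment:
    """One reviewer's grading of one response: a face (1-5) per criterion."""
    reviewer: str
    scores: dict  # criterion -> 1..5

    def summary(self) -> str:
        return ", ".join(f"{c}: {FACES[s]}" for c, s in self.scores.items())

# The review I kept receiving: radiant smiles across the board.
print(PeerAssessment("anonymous peer", {c: 5 for c in CRITERIA}).summary())
```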
It was also frustrating when I came across an answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it had a look at someone else’s screen after that person had already made their submission and could see the sample answer. Or maybe someone copied the sample answer somewhere others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”
Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed up the process, you can go ahead and assess some more responses. The more responses you assess, the faster your own will come up for assessment. I ended up assessing 9 responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades if I had only assessed the required 3 per task. I only know that when I next logged on, usually the following day, my work had already been assessed.
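I have no idea how this queue actually works behind the scenes, but the incentive suggests something like: submissions waiting to be assessed are ordered by how many assessments their authors have completed. A purely speculative sketch, with invented names:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PendingSubmission:
    priority: int                      # negative review count: more reviews done = served sooner
    author: str = field(compare=False)

def build_queue(reviews_done: dict) -> list:
    """My guess at the mechanism: participants who assessed more peers
    get their own submissions assessed sooner."""
    heap = [PendingSubmission(-n, author) for author, n in reviews_done.items()]
    heapq.heapify(heap)
    return heap

# I assessed 9 responses per task; the required minimum is 3.
queue = build_queue({"me": 9, "bare minimum": 3, "keen reviewer": 6})
while queue:
    print(heapq.heappop(queue).author)  # me, keen reviewer, bare minimum
```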
For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less-than-perfect score, so I assume there was at least one other teacher taking the course. 🙂
In theory, then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are still marked as complete and you can get your certificate in the meantime. I suspect I’ll be waiting a while for that grade from the teaching staff, and it seems a bit irrelevant, to be honest. It would make sense for someone other than fellow course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji.
There were other interesting aspects of the course, but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (if everyone participates the way the course designers intended) and has given me some new ideas about how similar activities can be set up.
Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?