
Some (new) observations on peer review

I recently completed a MOOC called Elements of AI. Let me first say that I'm privately (and now perhaps not so privately) thrilled to have managed this, because I'm highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means), and it often looks that way. I enjoyed the course and definitely learned a bit about AI – robots will not be replacing teachers anytime soon, in case anyone was wondering – but I couldn't help noticing various aspects of course design along the way. That's what this post is about, in particular the peer review component.

S B F Ryan: #edcmooc Cuppa Mooc (CC BY 2.0)

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*

Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and yet, despite reading them carefully more than once, I still occasionally felt, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). I note this because it’s all too easy to forget when you’re the teacher. I often catch myself thinking: well, I did a really detailed job explaining X, so how did the student not get that?

Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting, and which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup I’m used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions on when you can sign up (and submit each peer review task), so I realized from the start that their setup would have to be different.

The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of three other course participants. For each of the three, you’re shown two random answers; you pick one, commit to it, and assess it on a scale from an intensely frowning face to a radiant smile (there are five faces altogether). You are asked to grade each answer on four points (for the curious, I’ve sketched what this rubric might look like in code right after the list):

  1. the response stays on topic
  2. the response is complete and well-rounded
  3. the arguments provided are sound
  4. the response is easy to understand
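
Just for fun, here is a minimal sketch of what that rubric might look like as a data structure. To be clear, this is a toy Python model of my own – the Elements of AI platform’s internals aren’t public, so every name here is made up.

```python
from dataclasses import dataclass

# Toy model of the rubric described above. Purely illustrative:
# the real Elements of AI platform's internals aren't public.
CRITERIA = (
    "stays on topic",
    "complete and well-rounded",
    "arguments are sound",
    "easy to understand",
)

@dataclass
class PeerAssessment:
    """One reviewer's verdict: a 1-5 'face' per criterion (5 = radiant smile)."""
    scores: dict  # criterion -> face value

    def __post_init__(self):
        assert set(self.scores) == set(CRITERIA), "score every criterion"
        assert all(1 <= v <= 5 for v in self.scores.values()), "faces run 1 to 5"

# Four radiant smiles, the grade I kept handing out:
review = PeerAssessment(scores={c: 5 for c in CRITERIA})
print(review.scores)
```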

The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. Assessing it was then quick work, because the four points are quite easy to satisfy if you’ve read the instructions at all carefully. What I did miss, however, was an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing its absence was intentional, to prevent people from submitting overly critical comments or spamming others (or for another reason that hasn’t occurred to me), but I often felt an overwhelming urge to say: well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛

It was also frustrating to come across one answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it looked at the screen of someone who had already submitted and could therefore see the sample answer. Or maybe someone copied the sample answer somewhere others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”

Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed things up, you can go ahead and assess some more responses. The more you assess, the sooner your own response will come up for assessment. I ended up assessing nine responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades had I only assessed three per task. I only know that when I next logged on, usually the following day, my work had already been assessed.
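
I have no idea how this works under the hood, but the incentive suggests some kind of priority queue. Here is one purely speculative way it could be implemented – a Python sketch of my own, not the actual Elements of AI code – where submissions are ordered so that under-reviewed work by active reviewers surfaces first:

```python
import heapq
import itertools

def priority(sub):
    # Lower sorts first: prioritize submissions with few reviews received,
    # breaking ties in favor of authors who have reviewed more themselves.
    return (sub["reviews_received"], -sub["reviews_given"])

tiebreak = itertools.count()  # keeps the heap from ever comparing dicts
queue = []
for sub in (
    {"author": "A", "reviews_received": 0, "reviews_given": 9},
    {"author": "B", "reviews_received": 0, "reviews_given": 3},
    {"author": "C", "reviews_received": 1, "reviews_given": 12},
):
    heapq.heappush(queue, (priority(sub), next(tiebreak), sub))

while queue:
    _, _, sub = heapq.heappop(queue)
    print(sub["author"])  # A (9 reviews given) before B (3), both before C
```

Under a scheme like this, assessing nine responses instead of three would bump you well up the queue, which would square with my next-day turnaround.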

For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less than perfect score, so I assume there was at least one other teacher taking the course. 🙂 

In theory, then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are marked as complete and you get your certificate regardless. I suspect I’ll be waiting a while for that grade from the teaching staff, and it seems a bit irrelevant, to be honest. It would make sense for someone other than the course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji.

There were other interesting aspects of the course, but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (provided everyone participates the way the course designers intended) and has given me some new ideas about how similar activities can be set up.

Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?


ABC for VLE

A couple of days ago I went to a workshop (for work) and thought I’d blog about it. The workshop was called ABC Workshop for Learning Design (only in Croatian) and it was run by people from the Computing Centre at the University of Zagreb. One of the moderators was the (pretty recently elected) EDEN president, which I thought was kinda cool. In ELT terms it’s probably like attending a workshop run by the IATEFL president – I know they’re only human, but still, it’s like, oh, they’ve taken the time out of their busy lives to run this little workshop… anyway, I digress.

The workshop concept was actually devised as part of an Erasmus+ project, which you can read more about on the project website. In brief, it’s meant to help online course instructors plan their courses – actually, it’s probably targeted not so much at the lowly course instructor as at a team of people responsible for learning design at a particular institution. In real life in Croatia, though, I think it’s more often every course instructor for themselves when it comes to designing and teaching an online course. In fairness, the Computing Centre team are always there if you need them and are very willing to help.

I should note, before I start on what we did, that an online course in this context refers to courses in an LMS (Moodle in our case), not synchronous courses. 

Right at the start we were divided into two groups and thus found ourselves seated with several other people teaching a range of subjects. The workshop activities were devised with the idea that (a couple of) people teaching the same subject would work in a group, and in fact it was recommended that people apply with this in mind. Our group, however, was quite diverse, including instructors of music and classical philology, among others, so we first needed to agree on a course we all felt comfortable planning. We could either choose an actual course one of us was teaching, which had the disadvantage that only one person would be familiar with it, or devise a course on the spot, which everyone would be equally unfamiliar with – we went with the latter, opting to plan an introductory course on academic writing. I have actually taught an EAP course, so I guess technically I was at a slight advantage, only this one was aimed at L1 speakers.

Our first task was to fill in the handout below.

ABC workshop – course info sheet

We needed to come up with the course title, the number of ECTS points (apparently a tweak introduced by the folks at the Computing Centre, because it turns out teachers have a tendency to say, oh, this is going to be something basic, and then proceed to load the course up with coursework out of all proportion to what the number of ECTS points suggests), and a course summary no longer than a tweet (because ideally it should take no longer than that to summarize the main points of your course – I liked that).

We also had to formulate a couple of learning outcomes (we stopped at four, which turned out to be lucky: at the end, once we’d looked at all the activities we’d planned for the students, we felt we’d need to tack on another ECTS point). The spider chart on the right is supposed to reflect the proportion of the course devoted to different learning types (no, not learning styles). These are acquisition, inquiry, discussion, practice, collaboration and production. They’re “based on the pedagogic theory of Professor Diana Laurillard’s Conversational Framework”, according to the project website, and there’s a video where she explains how they work. The idea is that you fill in the spider chart in one color at the initial stage, then again in another color once you’ve designed the whole course, to see if anything has changed. We, for instance, initially thought our students would be doing a lot more inquiry.
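
If you wanted to draw that kind of spider chart digitally rather than in pencil, a polar plot in matplotlib does the job. The six learning types are from the ABC materials; the proportions below are made up purely for illustration (ours lived on the paper handout):

```python
import numpy as np
import matplotlib.pyplot as plt

types = ["acquisition", "inquiry", "discussion",
         "practice", "collaboration", "production"]
before = [4, 5, 2, 3, 2, 3]  # initial guess (illustrative numbers)
after = [4, 3, 3, 4, 3, 4]   # after designing all the activities

angles = np.linspace(0, 2 * np.pi, len(types), endpoint=False).tolist()
angles += angles[:1]  # repeat the first point to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in (("before", before), ("after", after)):
    vals = values + values[:1]
    ax.plot(angles, vals, label=label)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(types)
ax.legend(loc="lower right")
plt.show()
```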

Finally, we needed to give some thought to whether our course was going to be fully online or blended, which is what the line on the bottom right of the picture represents – we opted for a blended course but with a pronounced online dimension. 

This all actually took longer than you might expect, given that there were seven of us and some negotiating was required. The second step was the storyboard, which is in the following pics. 

We decided how to address the learning outcomes: in a week-by-week or a topic-based format. I think we first went with week-by-week, then decided that some of the outcomes (or would it be better to call them course aims?) would take more than a single week to address, so we switched to topics.

We next picked the learning types we felt would best help students achieve these outcomes, then had to decide on the actual activities the students would do. For instance, the inquiry type (somewhat confusingly – albeit not incorrectly – called “research” in Croatian) includes both traditional and digital methods of carrying out an activity. All the learning types do, because if you’re running a blended course, you’ll probably use traditional methods as well as digital ones. To stick with the inquiry type, an example would be collecting and analyzing data using traditional methods vs. digital tools.

As the course began to take shape, there was a lot of discussion about exactly how much F2F time the students needed and which activities were most suitable for the online segment. It turned out that some learning outcomes we’d thought would be easily achieved without much class time were a bit more demanding and would take longer. Our initial estimate that 30 hours (2 ECTS points) would be enough was challenged, but we didn’t officially revise it. Once all the activities had been planned, we went back through them and awarded stars to those that would be assessed if the course were ever taught: silver for formative and gold for summative assessment. I don’t know if this shows up in the photo, but our idea was to use formative assessment for the collaboration and discussion activities so as to encourage students to take part in these.
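
For what it’s worth, here is how the storyboard’s star convention might be captured digitally – again a hypothetical little model of my own, not part of the ABC toolkit:

```python
from dataclasses import dataclass
from enum import Enum

class Assessment(Enum):
    NONE = "no star"
    FORMATIVE = "silver star"
    SUMMATIVE = "gold star"

@dataclass
class Activity:
    name: str
    learning_type: str  # one of the six types above
    mode: str           # "traditional" or "digital"
    assessment: Assessment = Assessment.NONE

# Illustrative entries, echoing our choice to assess collaboration
# and discussion formatively to encourage participation:
storyboard = [
    Activity("forum discussion of sources", "discussion", "digital",
             Assessment.FORMATIVE),
    Activity("peer-edited draft", "collaboration", "digital",
             Assessment.FORMATIVE),
    Activity("final essay", "production", "digital", Assessment.SUMMATIVE),
]
for a in storyboard:
    print(f"{a.name}: {a.assessment.value}")
```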

I understand the workshop includes one more step: devising an action plan of sorts, whereby you identify what exactly you’ll need to do to put everything you’ve designed into practice. For instance, you might have to record a video and need someone’s help to do it, so you should plan how to go about that. We ran out of time for this step, but I don’t think that was a problem in our case, because I doubt this course will ever be implemented in its current form (seeing as it’s fictional).

I thought the workshop was practical and useful. It made me reflect on my writing skills course and how it might look different if I’d designed it following the ABC principles. I’ve always been kind of reluctant to look at the big picture; if we were supposed to write an outline for an essay in English class I usually didn’t do it and started on the first paragraph straight away. Generally, the essays turned out fine, but I can appreciate that it would have been useful to write an outline. The essays might have been even better. 

I’d be interested to read how you approach (blended or online) course design. Do you think you could apply (parts of) this approach? Would it work with classroom courses? What about language teaching? 

Thanks for reading!