
Some (new) observations on peer review

I recently completed a MOOC called Elements of AI. Let me first say that I am privately (and now perhaps not as privately) thrilled to have managed this, because I’m highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means), and it often looks that way. I enjoyed the course and definitely learned a bit about AI – robots will not replace teachers anytime soon, in case anyone was wondering – but I couldn’t help noticing various aspects of course design along the way. That’s what this post is about – in particular, the peer review component.

S B F Ryan: #edcmooc Cuppa Mooc (CC BY 2.0)

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*

Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and despite reading them carefully more than once, I still occasionally managed to feel, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). I note this because it’s all too easy to forget when you’re the teacher. I often catch myself thinking – well, I did a really detailed job explaining X, so how did the student not get that?

Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting and which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup, which I am used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions in terms of when you can sign up (and submit each peer review), so I realized from the start that their setup would have to be different. 

The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of 3 other course participants. For each of the three, you’re shown two random answers and choose which one to assess, on a scale from an intensely frowning face to a radiant smile (there are 5 faces altogether). You are asked to grade the other participants on 4 points:

  1. the response stays on topic
  2. the response is complete/well-rounded
  3. the arguments provided are sound
  4. the response is easy to understand

The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. This was then quickly assessed because the 4 points are quite easy to satisfy if you’ve read the instructions at all carefully. However, I did miss having an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing this was intentional so as to prevent people from either submitting overly critical comments or spamming others (or another reason that hasn’t occurred to me), but I often felt an overwhelming urge to say, well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛
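Just to make the structure concrete (and because the teacher part of my brain likes tidy rubrics), here’s a rough Python sketch of how I imagine a single assessment could be modelled. To be clear, this is not the course’s actual code – the names and the 1–5 mapping of the faces are my guesses – and the optional comment field is pure wishful thinking on my part.

```python
from dataclasses import dataclass
from typing import Optional

# The five faces, from intensely frowning (1) to radiant smile (5) - my own mapping.
FACES = {1: "😠", 2: "🙁", 3: "😐", 4: "🙂", 5: "😄"}

# The four criteria each reviewer grades (paraphrased from the course instructions).
CRITERIA = ("on_topic", "complete", "sound_arguments", "easy_to_understand")

@dataclass
class PeerAssessment:
    """One reviewer's verdict on one submission (hypothetical model, not the real thing)."""
    reviewer_id: str
    submission_id: str
    scores: dict                   # criterion name -> 1..5
    comment: Optional[str] = None  # the open-ended box I wished existed

    def is_valid(self) -> bool:
        # Every criterion must be graded, and only the five faces are allowed.
        return (set(self.scores) == set(CRITERIA)
                and all(1 <= v <= 5 for v in self.scores.values()))

    def as_faces(self) -> str:
        return " ".join(f"{c}: {FACES[self.scores[c]]}" for c in CRITERIA)

# Example: the four radiant smiles I kept handing out.
assessment = PeerAssessment(
    reviewer_id="me",
    submission_id="submission-42",
    scores={c: 5 for c in CRITERIA},
    comment="Easy to understand - maybe add an example to round it off.",
)
print(assessment.is_valid())   # True
print(assessment.as_faces())
```

Nothing more than a toy, but it does show how little room the rubric leaves for nuance: four numbers and no words.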

It was also frustrating when I came across an answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it had a look at someone else’s screen after that other person had already made their submission and could access the sample answer. Or maybe someone copied the sample answer somewhere where others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”

Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed up the process, you can go ahead and assess some more people. The more responses you assess, the faster your response will come up for assessment. I ended up assessing 9 responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades if I had only assessed 3 responses per task. I only know that when I next logged on, usually the following day, my work had already been assessed.
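I have no idea how the matching actually works behind the scenes, but the incentive suggests something like a queue ordered by how much reviewing each author has done. Here’s a minimal sketch of that idea – entirely my own speculation, not Elements of AI’s algorithm:

```python
def pick_next_submission(waiting, reviews_given):
    """Pick which waiting submission gets assessed next (pure speculation).

    waiting       -- list of (submission_id, author_id) pairs awaiting review
    reviews_given -- dict mapping author_id to how many peers they have assessed

    Authors who have assessed more peers get bumped up the queue, which would
    explain why assessing 9 responses got me my grades by the next morning.
    """
    return max(waiting, key=lambda item: reviews_given.get(item[1], 0))

# Toy example: the keen reviewer jumps ahead of the "123" submitter.
waiting = [("sub-1", "keen_reviewer"), ("sub-2", "just_typed_123")]
reviews_given = {"keen_reviewer": 9, "just_typed_123": 3}
print(pick_next_submission(waiting, reviews_given))  # ('sub-1', 'keen_reviewer')
```

If it works anything like this, it’s a neat bit of design: the backlog of unassessed work gets cleared by the very people who are most impatient to be assessed.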

For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less than perfect score, so I assume there was at least one other teacher taking the course. 🙂 

In theory then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are still marked as complete and you can get your certificate nevertheless. I suspect I’ll be waiting a while for that grade from the teaching staff and it seems a bit irrelevant, to be honest. It would make sense for someone other than other course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji.

There were other interesting aspects of the course but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (if everyone participates the way the course designers intended them to) and has given me some new ideas about how similar activities can be set up.

Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?


Can you get everyone to like your MOOC?

My last post started out as an idea for a compilation of random observations on course design, based on the Introduction to Linguistics MOOC on FutureLearn – a brief digression: I only just realized that the spelling seems to be with a capital L mid-word – but then it turned out to be a sort of introduction to the topic and an overall comment on what it felt like to be doing a course which you’ve joined when it’s practically over (spoiler alert: alone). I feel reasonably confident that this post will achieve what the last one was meant to because I already have a list of observations; I just have to flesh them out.

A bit of context first: this was apparently this MOOC’s first run. It’s a three-week course with an estimated study time of three hours per week, so not very demanding and overall in accordance with the course aims:

On this course, you’ll get an introduction to the main approaches used in linguistic research, including linguistic experiments and discourse analysis. You’ll find out about the key methods used in linguistic descriptions, and some of the everyday ‘myths’ about language. You’ll discover how linguistic researchers turn our ideas about language into linguistic knowledge.

There seemed to be two mentors/moderators – one lead educator and one educator, as FutureLearn calls them (or is it just this course?). Perhaps there were more, but I could only find the lead educator’s bio. They obviously kept an eye on what was happening on the course, as there were several responses to participant comments; however, I got the impression that most of the commenting was done by the lead educator, which I take as an indication that strong moderator involvement had not been planned – again in line with the course aims.

Something to keep in mind as I move on to the observations: some of the features I comment on are present in other FutureLearn courses as well. Also, I should stress that this is in no way meant to be a dissection of the instructional design involved; just thoughts that popped into my head – in no particular order – from the viewpoint of having recently helped coordinate MOOC development.

Photo taken from ELTpics by @mk_elt, used under a CC BY-NC 2.0 license.

Videos
  • Most were of the length that ensures you won’t drift off. Only one was around 9 minutes (and another one was 7), which I feel is too long, although I have included a screencast of similar length in my own online course. This MOOC is heavily video-based and the videos are only occasionally interspersed with some other activities – for instance, one activity required participants to analyze two similar websites in the context of a theoretical framework presented in the previous step. Possibility: add some readings and tests (T/F, MCQs), but this could make the course appear heavier going (arguably not in line with the course aims).
  • There were several presenters in the videos and I liked the fact that for the most part they weren’t reading a prepared script – I thought it made them appear more passionate about their subject. Side note: if you read from a script, you reduce the risk of getting mixed up or forgetting something, and your delivery is likely to be smoother. I generally read in my own screencasts, but as the focus is not on me – I’m not on camera – I don’t think I sound too wooden.
Transcripts
  • There is a transcript accompanying each video (in addition to subtitles), which is of course necessary for accessibility purposes (screen readers). What I thought wasn’t strictly necessary (but was definitely helpful and I liked it a lot) was that each transcript was broken up into a couple of paragraphs and the time was marked at the beginning of each one so you could navigate it more easily.
  • Transcripts can also be downloaded as PDFs but the download isn’t forced – a pet peeve – so definitely thumbs up for this. Side note: I’m not a fan of transcripts because I find that if I read them, I’ll skip parts. If, on the other hand, I watch the video, I’ll force myself to slow down and focus on what the person is saying. This is also why I like audio books; they force me to adapt to the narrator and relax.
Discussions
  • Each video is followed by the option to discuss (as are all other activities). I think I like this – as opposed to, for instance, separate forum activities like in Moodle – because you end up with all the comments neatly sorted by activity. However, I did wonder what happens if you have a question or comment that would be better suited to a sort of general housekeeping forum. For example, if you’re wondering what the official starting date of the course was. 🙂 Or if software you were instructed to use in an activity didn’t work.
  • Comments can be sorted by oldest, newest and most liked (my most frequent choice). If I think a comment would be useful to other participants, I “like” it in the hope that this will make it visible to more people. You can also bookmark comments and follow mentors and participants, but I didn’t on this course.
Accessibility
  • External links open in the same window, which I understand is a requirement of guidelines for web content accessibility. I think I will now stop advising people to tick the “open in new window” box – which I have sometimes done unsolicited to ELT bloggers, purely for the reason that I personally don’t like having to click back to return to the page I started from (I prefer to close the new window).
Activity completion
  • Participants can mark each activity as done when they wish to; there are no requirements, for instance, to post a comment before a discussion activity is considered complete. This seems fair because, well, you may not have that much to say about a subject; however, what happens if someone decides to mark all their activities as done without having even looked at them? I wonder if that is any different if you upgrade – because I understand that you are then entitled to a certificate of completion. On the other hand, there are students – I speak from experience here – who have done the F2F equivalent of marking their activities as done with no engagement whatsoever (suffered through the sessions in silence) and they still got the final mark. But they had to take an exam.
Moderators
  • I’ve already noted that as opposed to some MOOCs I’ve done, there wasn’t strong moderator involvement in this one and I assume this was intentional. I liked the way the moderators handled an issue that came up: the participants were asked to analyze a couple of extracts of spoken language. These extracts were almost completely punctuation free. The participants found this confusing and said so in the comments, so a note was added in a prominent place, explaining the thinking behind this. Side note: when I came along, the explanation had already been added, so as soon as I noticed the lack of punctuation I read the explanation and thought it had been there from the start. I found this small detail very helpful and reassuring, as it indicated the moderators’ online presence, even if they were keeping a low profile.
  • The moderators’ responses to the participants’ comments were thoughtful and positive, which wasn’t a surprise. I mention it because I wonder whether there’s some kind of bank of (beginnings of) responses to comments, especially for when a participant seems to be upset about something and you’d like to set things right as quickly as possible.
Other activities
  • A couple of activities were described as articles – as opposed to videos or discussions – but they’re a single paragraph in length, so this seems like a slightly odd choice of word.
Participants
  • I already noted this in a response to Marc’s comment on my last post, but thought I would include it here as well because I was quite taken aback by the critical attitude of some of the participants. One of my firmer beliefs – not just related to course design – is that Croatians on the whole are more likely to criticize than offer unsolicited praise. You can imagine my surprise when I saw critical comments directed at some aspects of the course – and they hadn’t been posted by Croatians! I’m sure the course designers and/or moderators did not expect universal agreement and praise, but I think disagreement or doubt can be expressed in a neutral tone, leaving room for the possibility that you’ve overlooked something. If nothing else, whoever it is you’re engaging with is more likely to offer a constructive response. (But that’s just me; I don’t have any research evidence to back this up.) For instance, if you notice a spelling error, I think it’s more productive to simply point this out, rather than suggest that no one bothered to check the spelling. (This example from participant contributions has been modified to protect the overly direct.) Anyway, I suppose this is my message to all course participants everywhere – if you think something has been overlooked, could’ve been explained more clearly or is unnecessary/incorrect, etc., please try to point this out in a constructive fashion. Thanks from course designers and moderators everywhere. 🙂

That’s it for my observations on this run of the course. A final one I’ll add is that FutureLearn has a very extensive FAQ bank, so some of the questions participants have but aren’t sure where to post may already have been addressed there.

Although this topic isn’t related to language teaching, I hope it’s still useful to some extent. I’m hoping to be able to do another MOOC via a different provider, and possibly add to the observations here.

I’m curious what your perspective is on MOOC moderation. Are you happy to just get on with things, with only occasional moderator involvement, or do you prefer a stronger moderator presence? Thanks for reading!


Wandering down empty hallways

This is a very unusual summer for me: it’s August and I haven’t been to the coast yet. As a teacher (and language school owner) I’ve sometimes wondered what it would be like to be able to go on holiday whenever I chose to as opposed to having to go when there were no students willing to pay for classes. Still, I guess because so many people go in August, things are more relaxed if you choose to stay in the office. For one, I finally have time to explore MOOCs a bit. Since part of my job over the past year has involved coordinating MOOC creation, I think I could be justified in thinking of this as work – at least in part.

Photo taken from ELTpics by @ChrisCattaneo, used under a CC BY-NC 2.0 license.

I haven’t done many MOOCs because I’m the kind of learner who, if they know they’re about to embark on a structured type of training, wants to do more or less everything the course designer has planned for them, trusting that there must have been sound reasons for designing the course in a particular way. And if I know I’m not going to have enough time to do it properly, I’d rather not even start – I’m still disappointed, for instance, that I wasn’t able to keep up with the TESOL EVO course Teaching Listening: Principles, techniques and technologies earlier this year. I felt I got quite a bit out of the couple of MOOCs I *have* completed – I blogged about the one on corpus linguistics and the one on how to get started with Moodle (tangentially) – so overall my experience with this type of course has been positive.

About two weeks ago I started on Introduction to Linguistics on Futurelearn. The choice of topic was prompted by the idea that I should be vaguely familiar with the content so that I could better focus on how the course was set up and see if I could pick up any tips in terms of course design that I could apply at work.

It was clear that the course had already started by the time I joined, but I couldn’t find info on when that was. Perhaps I’m not being entirely fair; maybe the start date was visible before I joined but subsequently I was unable to find it. My assumption was that it couldn’t have been too long before because otherwise they wouldn’t have kept letting people join.

A few days into the course I came across this interesting EdSurge article on Twitter – A Proposal to Put the ‘M’ Back in MOOCs – and with a somewhat sinking feeling read the opening sentence:

MOOCs have evolved over the past five years from a virtual version of a classroom course to an experience that feels more like a Netflix library of teaching videos.

The fact is, since joining I’ve felt a bit like I was wandering through a deserted building (a school, why not), hence the title. The content is predominantly videos. The discussions accompanying each video seemed to be long over; people were still posting sporadically, but I feel it’s unlikely they’ll ever get a response from another participant. I feel even more certain they won’t be getting a response from the online mentor (Lead Educator in Futurelearnese) because the course is officially over – although it doesn’t actually say so anywhere.

There is a prominent message every time I log on that the course content will remain available until a certain date, after which I’ll only be able to access it if I upgrade. However, I’m not sure that ensuring access to videos and discussions of other people is tempting enough for me to upgrade. Granted, I can’t guarantee that I would upgrade even if I had started on the course along with everyone else and taken part in the discussions, but that way I would have at least felt partial ownership. This way I feel like I’m entering empty classrooms, leafing through books left on the shelves and occasionally sensing someone else is in the building – not a feeling I would pay to sustain.

My plan was to keep a record of any interesting design features I came across on the course; then it occurred to me I could write these up in a post. But as has been known to happen when I haven’t blogged in a while – which is, now that I think of it, my customary blogging state – the introduction has turned into a post of its own, so I’ll leave the design observations for another post.

Have you done any MOOCs lately? What was your experience like – have you noticed any differences compared to MOOCs a couple of years ago? I’m especially curious about iTDi courses, which I keep hearing good things about.