
On talking to your online students

Graffitied brick wall that says "Listen".
painteverything: listen (CC BY 2.0)

I’ll skip references to the fact that I haven’t posted on this blog for months now and dive right in, shall I?

Right. Four semesters ago I wrote a post on how I’d decided to start adding audio recordings to the online course I teach and a follow-up post on the topic soon afterwards. In the meantime I kept working with audio recordings and adding tweaks, so I wanted to write down some observations.

A brief digression: have you noticed how it sounds almost strange to be describing students/courses as ‘online’? It’s like all courses now have some kind of online component and it’s hard to even imagine a time – just four semesters ago! just four course iterations ago! – when teaching a semester-long course online wasn’t exactly routine and it seemed important to note that for context. Or maybe it’s just me?

Anyway, the way my audio files are structured and presented has developed over time into a Tips on what to watch out for chapter in each unit guide (a Moodle book resource). The tips are divided into Things that were done well over the past week or so and Things to watch out for in the current unit. The ‘developed over time’ bit makes it sound as if a whole lot of development has been going on but this setup has in fact been in place pretty much since I started using the H5P course presentation (see the second link above for a more detailed account of how that came about). 

One thing that became obvious pretty quickly was that a lot of the recordings in the Things that were done well category needed to be recorded over again each semester, as each group was slightly different in the things they did well and it was tricky to stay neutral in these recordings. What I mean by ‘neutral’ is avoiding any mention of something group-specific. I knew that I should strive for this in theory, if I wanted to be able to reuse the recordings, but in practice it’s surprisingly difficult to speak to a group of students without references to that particular group. Try it and go back to the recording in six months’ time. I guarantee you’ll find phrases that will make you groan. For instance, you’re commenting on forum activity and you hear yourself saying, “I can see that several people have added comments to this thread…”, whereas this semester, with your luck, no one has added anything to that thread. 

The Things to watch out for in the current unit files were easier to reuse because they’re basically general advice on what to keep in mind as you complete a particular activity, so they aren’t linked to any individual group. Examples would be how to approach a glossary activity, areas students commonly slip up on, what to watch out for with regard to the final exam, and so on.

The most time-consuming aspect of working with these files is that you have to listen to them again every six months before you re-record. I guess what you could do is just assume that all the Done well recordings need to be re-recorded and not waste time listening to those from last semester, but I always hoped that I could at least reuse some of them, possibly dealing with minor differences by adding an explanatory text box, as in the screenshot below.

Screenshot from course: the Tips on what to watch out for chapter. It opens with: “Before you start on the tasks in this chapter, I recommend listening to the audio comments. They need not all be listened to at once; instead you can listen to them as they become relevant to the task you are completing.” The Things that were done well over the past week or so section lists three topics (communication, Hypothes.is app, Jobs of the future forum), each with a playable audio icon. An arrow points at the icons, indicating that the following note refers to all the audio files: “I’ve recorded these with a different device, so the sound is lower than in the two recordings in the Things to watch out for section below. You’ll probably need to turn the sound up.”

Also, those in the Current unit category would sometimes need to be re-recorded, because there would be changes to the way some activities were set up or some advice turned out to be too specific. For instance, only today I realized that my advice on pair work included a 2-minute segment on how to make sure exchange students were not left out, but this semester we don’t have any exchange students. The segment was somewhere in the middle of the recording, so I used 123 Apps’ trim audio and audio joiner tools to excise the bit that was no longer relevant.
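
Incidentally, if you’d rather script this sort of cut than use a web tool, a few lines of Python with pydub will do the same job. This is just a sketch (the file name and timestamps below are made up), and pydub needs ffmpeg installed to handle mp3s:

```python
from pydub import AudioSegment  # pip install pydub

audio = AudioSegment.from_mp3("pair_work_tips.mp3")

# Say the segment to cut runs from 1:40 to 3:40;
# pydub slices in milliseconds, so keep everything around it.
cut_start = 100 * 1000
cut_end = 220 * 1000
trimmed = audio[:cut_start] + audio[cut_end:]

trimmed.export("pair_work_tips_trimmed.mp3", format="mp3")
```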

When I’d first introduced audio files to the course, I was really curious to see what the students thought, so I added this as a possible reflection topic for their learning journals. It was actually student reflections that helped me realize one longer recording might be hard to stay focused on and might be more easily processed if broken up into shorter files. Although the student perspective was key to this change, I didn’t add audio as a reflection topic for the next two semesters. Then last semester I added this poll.

How do you feel about the “Tips on what to watch out for” chapter in the unit guides? Possible answers: a) I listen to the comments and generally find them useful, b) I listen to the comments but they don’t contribute to my successful completion of the course tasks, c) I listen to the comments but have no opinion about them, and d) I don’t listen to the comments. View 14 responses.
Screenshot from course

Just over half the group opted for “I listen to the comments and generally find them useful” and, of the rest, only one person chose “I don’t listen to the comments”. The way the poll was designed meant it basically only told me whether students listened to the audio and, to some extent, whether they saw the comments in a positive light. I planned on following this up with a reflection topic but didn’t. The results didn’t seem overly negative (most students said they listened to the comments), so I probably didn’t see a pressing need to get more feedback, although it would definitely be useful to know more about why some felt the comments didn’t help them.

This semester I introduced another tweak. It was partly prompted by the fact that, ever since I’d started recording audio comments, I’d been aware that there was no transcript and that ideally there should be one: both in accordance with accessibility guidelines and because it’s okay, I think, not to force people to listen at a certain speed (or even twice that speed) if you can offer them the option of glancing at a transcript and picking out the main points. The other reason for the tweak was, as is so often the case, Twitter.

I started using the tool in the tweet for the Done well comments. I realize now that it says this particular tool is aimed at social media use, which I don’t recall being in focus that much back in February. I suppose it may have been, and another reason for choosing it may have been the (subconscious) idea that anything to do with social media would appeal to students. Anyway, using it didn’t address the transcript issue, because what you do is add captions, which should make it easier to follow what the person is saying, but you still can’t process the information the way you would with a transcript available. Also, I have since learned that screen readers can only read transcripts, not captions. This wasn’t an issue for the students I’ve had these past semesters, but if you’re making a recording for a larger group of students (on a MOOC, say) it would definitely be important.

An upside I noticed is that recordings made with this tool are definitely shorter, which is great, as I tend to ramble the minute I don’t prepare notes on what I want to say. The captions are generated by the software, so that part is quick, but I still need to clean them up, and that’s much quicker and easier if there isn’t much waffle. In fact, whereas in the first screenshot above there are three topics in the Done well section, this semester I only had one topic/video per Done well section. I really did plan on checking with the students whether they noticed any difference between audio alone and these recordings with a visual component, but the end of the semester is here and I don’t seem to have done that. Maybe next semester.

What are your thoughts on audio in courses which are mostly delivered asynchronously online? Do you think you would prefer engaging with the audio as opposed to going through transcripts? What strikes you as the ideal length for audio recordings?

Thanks for reading!


Type little and give extensive feedback

Photo taken from http://flickr.com/eltpics by @sandymillin, used under a CC BY-NC 2.0 license, https://creativecommons.org/licenses/by-nc/2.0/

It all started on Twitter, as these things do. I had covid and was stuck at home, so it was as good a time as any to do some marking. Then I came across Neil’s tweet.

I recommend you click through for the answers because there were quite a few suggestions, and several people mentioned text expanders, which is useful for context, but the answer that caught my eye was this one:

https://twitter.com/sensendev/status/1330107775440089093

I don’t use Linux, so I’m not entirely sure why I decided to try espanso out. Now that I think about it, I’m pretty sure Neil tweeted an update on how well it was working out for him. Anyway, espanso works on Windows and Macs, although I use it on Windows most of the time.

I did need a little bit of help installing the program but I probably would’ve been able to do it myself if I’d put in a little effort. The point is, it’s pretty simple and quick. (To be fair, it was more complicated to install on a Mac.)

The idea of this post is to reflect a little on the past 6 months of using it and note down some pros and cons. 

First of all, this is what it looks like in practice. Please ignore the huge gap between the top and the bottom comment; it’s my first attempt at a gif.

User types "main idea" and this is automatically expanded to This seems like a new main idea and might be best in a separate paragraph. 
User types "meaning" and this is expanded to I'm not sure what you mean by this (in this context), so consider the possibility that other readers may not be sure either.
Demo of how a text expander works

And it works everywhere. If I typed :main idea it would expand like in the gif regardless of whether I was commenting on a Word doc, typing in a Google doc, in the Moodle gradebook… 

My initial reaction was – this is bliss! My days of spending ages on marking are over! All I need to do is add the comments which are already in my comment bank to espanso and I’m all set. 

Here’s why, in the end, it wasn’t quite that easy.

I have a huge number of comments in my comment bank. I’ve written about the comment bank I keep in Google Docs in this post and about the one in Google Keep in this one. At first I thought the only cost would be the time it took to transfer them all to espanso, but then I realized that I would also have to come up with as many triggers as there are comments. (The trigger is the combination of : and the word or letter combination that gets expanded.)
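
For anyone curious, this is roughly what the entries look like in espanso’s match file, which is plain YAML (match/base.yml in recent versions, default.yml in older ones, if I remember correctly). These two recreate the comments from the gif:

```yaml
matches:
  - trigger: ":main idea"
    replace: "This seems like a new main idea and might be best in a separate paragraph."

  - trigger: ":meaning"
    replace: "I'm not sure what you mean by this (in this context), so consider the possibility that other readers may not be sure either."
```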

It probably wouldn’t be that taxing to come up with a long list of triggers, but in the end I didn’t, because it became obvious I wouldn’t be able to remember them all. In my comment banks the comments are categorized by unit and activity (in Google Docs) and by aspect of writing, like punctuation or formality (in Google Keep). Categorization isn’t possible in any meaningful way in espanso, so you’re probably best off choosing a trigger that will most easily remind you of the longer comment you want to add (or vice versa).

What tends to work best (for me) is if I add a whole word or word sequence, like “comma splice”. Great, I hear you say, so do that. But the longer the trigger, the more likely you are to mistype something, and then you need to delete what you’ve typed and start again (at least if you’re using Windows). Also, if you want to use “comma” as part of a trigger for anything other than comma splice comments, you can’t. Say you wanted to use “comma not needed” as a trigger. The nanosecond you type :comma, espanso expands it to your comma splice comment. You could use “unnecessary comma” as a trigger, but that’s not what I think of first when I see one: when I start typing, my brain has already categorized the error as comma-related, and “comma” is the word that comes to mind first, not “unnecessary”. So if you’re old and forgetful, you’ll catch yourself going through the espanso bank, muttering, “Why did I ever think I’d remember ‘unnecessary comma’?!” You get the idea. This is just an example, incidentally; I’m not that concerned about commas.
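
To put the comma problem in config terms (with made-up comments, of course): the second entry below can never fire, because the first one expands the moment its trigger has been typed.

```yaml
matches:
  - trigger: ":comma"
    replace: "This looks like a comma splice. Consider a full stop or a semicolon instead."

  # Unreachable: ":comma" above expands before this longer trigger can be completed.
  - trigger: ":comma not needed"
    replace: "This comma isn't needed."
```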

In order to really save time and reduce the potential for confusion, the triggers need to be short. Ideally, just a few letters. But the shorter they are, the easier they are to forget. Did I say old and forgetful? Add stressed out over a million things. Coming up with a trigger like “spe” for spelling sounds easy enough to remember… okay it is. That one is. But when I have a comment which is essentially just positive feedback on participating in a discussion in unit 4, that is quite tricky to reduce to a three-letter combo that I will remember longer than a day. Yes, you are right to wonder how I deal with PINs. 😛

What I tend to do now is work with up to 20 triggers. I always open up espanso before I start to remind myself of the triggers and attendant comments. Then I mark everyone’s work in the unit I am currently grading, where I won’t need that many different comments because the mistakes and the things done well tend to be quite similar. When I move on to the next unit, I prefer to work with the same triggers and update the expanded feedback in espanso. I won’t be needing the comments for the unit I’ve just marked until next semester anyway. Then the trigger for my positive feedback can always just be “yes” and for negative comments/suggestions for improvement it can be “no” – definitely easy to remember.  

What I’ve also decided works for me is adding as much text as possible to one single trigger. In other words, instead of thinking up three different triggers for three variations of positive comments, I add all three to the same trigger, delete the unnecessary/non-applicable comments when the text expands (and then customize further if needed).  
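
In espanso, that just means a multiline replacement. Something like this (the comments themselves are invented for the sake of the example), with all three variants bundled under the :yes trigger, ready to be pruned after expansion:

```yaml
matches:
  - trigger: ":yes"
    replace: |
      Great contribution to this week's discussion. Your points were clearly argued.
      You engaged thoughtfully with what the others had posted.
      Your post was well organized and easy to follow.
```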

In short, the tool isn’t as ideal as I’d initially expected, but it does speed up the feedback process considerably once you’ve figured out how it can best serve you. I still use the comment banks and, of course, a large number of comments are personalized and context-specific anyway, so nothing really helps there.

What do you do to speed up the marking and feedback process? If you have any tips, either on how to use text expanders more efficiently or which other tools have been useful to you, I’d love to hear them! 


Some (new) observations on peer review

I recently completed a MOOC called Elements of AI. Let me first say that I am privately (and now perhaps not so privately) thrilled to have managed this, because I’m highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means), and it often looks that way. I enjoyed the course and definitely learned a bit about AI (robots will not replace teachers anytime soon, in case anyone was wondering), but I couldn’t help noticing various aspects of course design along the way. That’s what this post is about, in particular the peer review component.

S B F Ryan: #edcmooc Cuppa Mooc (CC BY 2.0)

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*

Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and yet, despite reading them carefully more than once, I still occasionally managed to feel, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). The reason I note this is that it’s all too easy to forget about it when you’re the teacher. I often catch myself thinking: well, I did a really detailed job explaining X, so how did the student not get that?

Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting and which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup, which I am used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions in terms of when you can sign up (and submit each peer review), so I realized from the start that their setup would have to be different. 

The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of 3 other course participants. For each of these three you can choose between two random answers you’re shown before you commit to one, and you assess it on a scale from an intensely frowning face to a radiant smile (there are 5 faces altogether). You are asked to grade the other participants on 4 points:

  1. staying on topic
  2. response is complete/well-rounded
  3. the arguments provided are sound
  4. response is easy to understand

The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. The assessment itself was quickly done, because the 4 points are quite easy to satisfy if you’ve read the instructions at all carefully. However, I did miss having an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing this was intentional, so as to prevent people from submitting overly critical comments or spamming others (or for another reason that hasn’t occurred to me), but I often felt an overwhelming urge to say: well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛

It was also frustrating when I came across an answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it had a look at someone else’s screen after that other person had already made their submission and could access the sample answer. Or maybe someone copied the sample answer somewhere where others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”

Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed up the process, you can go ahead and assess some more people. The more responses you assess, the faster your own response will come up for assessment. I ended up assessing 9 responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades if I’d only assessed 3 responses per task. I only know that when I next logged on, usually the following day, my work had already been assessed.
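
Out of idle curiosity, here’s a toy sketch of how I imagine a queue like that might work; this is pure speculation on my part and has nothing to do with the actual Elements of AI code. Submissions waiting to be assessed are simply ordered by how many reviews their authors have completed:

```python
import heapq

class ReviewQueue:
    """Toy reciprocity queue: the more peers you assess,
    the sooner your own submission is handed out for assessment."""

    def __init__(self):
        self._heap = []     # entries: (-reviews_done, arrival_order, author)
        self._arrivals = 0  # tie-breaker: earlier submissions first

    def submit(self, author, reviews_done=0):
        heapq.heappush(self._heap, (-reviews_done, self._arrivals, author))
        self._arrivals += 1

    def next_for_assessment(self):
        # The submission whose author has reviewed the most comes out first.
        return heapq.heappop(self._heap)[2]

queue = ReviewQueue()
queue.submit("keen reviewer", reviews_done=9)  # assessed 9 responses
queue.submit("minimalist", reviews_done=3)     # did only the required 3
print(queue.next_for_assessment())  # -> "keen reviewer"
```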

For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less than perfect score, so I assume there was at least one other teacher taking the course. 🙂 

In theory, then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are still marked as complete and you can get your certificate regardless. I suspect I’ll be waiting a while for that grade from the teaching staff, and it seems a bit irrelevant, to be honest. It would make sense for someone other than the course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji.

There were other interesting aspects of the course, but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (if everyone participates the way the course designers intended) and has given me some new ideas about how similar activities can be set up.

Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?