Categories: Edtech, Moodle, online course

On talking to your online students

Graffitied brick wall that says "Listen".
painteverything: listen (CC BY 2.0)

I’ll skip references to the fact that I haven’t posted on this blog for months now and dive right in, shall I?

Right. Four semesters ago I wrote a post on how I’d decided to start adding audio recordings to the online course I teach and a follow-up post on the topic soon afterwards. In the meantime I kept working with audio recordings and adding tweaks, so I wanted to write down some observations.

A brief digression: have you noticed how it sounds almost strange to be describing students/courses as ‘online’? It’s like all courses now have some kind of online component and it’s hard to even imagine a time – just four semesters ago! just four course iterations ago! – when teaching a semester-long course online wasn’t exactly routine and it seemed important to note that for context. Or maybe it’s just me?

Anyway, the way my audio files are structured and presented has developed over time into a Tips on what to watch out for chapter in each unit guide (a Moodle book resource). The tips are divided into Things that were done well over the past week or so and Things to watch out for in the current unit. The ‘developed over time’ bit makes it sound as if a whole lot of development has been going on but this setup has in fact been in place pretty much since I started using the H5P course presentation (see the second link above for a more detailed account of how that came about). 

One thing that became obvious pretty quickly was that a lot of the recordings in the Things that were done well category needed to be re-recorded each semester, as each group did slightly different things well and it was tricky to stay neutral in these recordings. What I mean by ‘neutral’ is avoiding any mention of anything group-specific. I knew that I should strive for this in theory, if I wanted to be able to reuse the recordings, but in practice it’s surprisingly difficult to speak to a group of students without referring to that particular group. Try it and go back to the recording in six months’ time. I guarantee you’ll find phrases that make you groan. For instance, you’re commenting on forum activity and you hear yourself saying, “I can see that several people have added comments to this thread…”, whereas this semester, with your luck, no one has added anything to that thread.

The Things to watch out for in the current unit files were easier to reuse because they’re basically general advice on what to keep in mind as you complete a particular activity, so they aren’t linked to any individual group. Examples would be how to approach a glossary activity, any areas students commonly slip up on, what to watch out for with regard to the final exam, and so on.

The most time-consuming aspect of working with these files is that you have to listen to them again every six months before you re-record. I suppose you could just assume that all the Done well recordings need to be re-recorded and not waste time listening to those from last semester, but I always hoped that I could at least reuse some of them, possibly dealing with minor differences by adding an explanatory text box, as in the screenshot below.

Tips on what to watch out for: Before you start on the tasks in this chapter, I recommend listening to the audio comments. They need not all be listened to at once; instead you can listen to them as they become relevant to the task you are completing. Things that were done well over the past week or so: communication, Hypothes.is app, Jobs of the future forum. To the right of each topic there is an icon indicating audio content can be played. An arrow is pointing to the audio file icons, suggesting the following text refers to all the audio files: "I've recorded these with a different device, so the sound is lower than in the two recordings in the "Things to watch out for" section below. You'll probably need to turn the sound up."
Screenshot from course

Those in the Current unit category would sometimes need to be re-recorded too, either because the way some activities were set up had changed or because some advice was too specific. For instance, only today I realized that the advice on pair work included a 2-minute segment on how to make sure exchange students weren’t left out, but this semester we don’t have any exchange students. The segment was somewhere in the middle of the recording, so I used 123 Apps’ trim audio and audio joiner tools to excise the bit that was no longer relevant.
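For anyone who would rather do this kind of surgery offline, here is a minimal sketch of the same trim-and-rejoin step using pydub in Python. The filename and timestamps are invented purely for illustration, and pydub relies on ffmpeg being installed to handle mp3 files.

```python
# Rough offline equivalent of the trim-and-rejoin step, using pydub
# (pip install pydub; it needs ffmpeg installed to read/write mp3).
# The filename and timestamps below are made up for illustration.
from pydub import AudioSegment

audio = AudioSegment.from_file("pair_work_tips.mp3")

# Say the exchange-student segment runs from 3:10 to 5:10.
cut_start = (3 * 60 + 10) * 1000  # pydub works in milliseconds
cut_end = (5 * 60 + 10) * 1000

# Keep everything before and after the segment, then join the two parts.
trimmed = audio[:cut_start] + audio[cut_end:]
trimmed.export("pair_work_tips_trimmed.mp3", format="mp3")
```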

When I first introduced audio files to the course, I was really curious to see what the students thought, so I added this as a possible reflection topic for their learning journals. It was actually student reflections that helped me realize a single longer recording can be hard to stay focused on and might be easier to process if broken up into shorter files. Although the student perspective was key to this change, I didn’t add audio as a reflection topic for the next two semesters. Then last semester I added this poll.

How do you feel about the "Tips on what to watch out for" chapter in the unit guides? Possible answers: a) I listen to the comments and generally find them useful, b) I listen to the comments but they don't contribute to my successful completion of the course tasks, c) I listen to the comments but have no opinion about them, and d) I don't listen to the comments. View 14 responses.
Screenshot from course

Just over half the group opted for “I listen to the comments and generally find them useful” and out of the rest only one person chose “I don’t listen to the comments”. The way the poll was designed basically only told me whether students listened to the audio and to some extent if they saw the comments in a positive light. I planned on following this up with a reflection topic but didn’t. The results didn’t seem overly negative, i.e. most students said they listened to the comments, so I probably didn’t see a pressing need to get more feedback, although it would definitely be useful to know more about why some felt the comments didn’t help them.

This semester I introduced another tweak. It was partly prompted by the fact that ever since I’d started recording audio comments I’d been aware there was no transcript and that ideally there should be one: both in line with accessibility guidelines and because it’s only fair, I think, not to force people to listen at a certain speed (or even twice that speed) if you can offer them the option of glancing at a transcript and picking out the main points. The other reason for the tweak was, as is so often the case, Twitter.

I started using the tool in the tweet with the Done well comments. I realize now that it says this particular tool is aimed at social media use, which I don’t recall being in focus that much back in February. I suppose it may have been and another reason for choosing it may have been the (subconscious) idea that anything to do with social media would appeal to students. Anyway, using it didn’t address the transcript issue because what you do is add captions, which should make it easier to follow what the person is saying but you still can’t process the information the way you would with a transcript available. Also, I have since learned that screen readers can only read transcripts, not captions. This wasn’t an issue for the students I’ve had these past semesters but if you’re making a recording for a larger group of students (on a MOOC, say) it would definitely be important. 
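Since then I’ve been wondering whether the captions could at least be recycled into a transcript. Here is a minimal sketch of the idea, assuming the tool lets you export the captions as an SRT file; the filename is hypothetical and a real export would probably need some extra cleanup.

```python
# Turn an exported SRT caption file into a plain-text transcript that can
# be posted alongside the recording. Assumes a standard SRT layout:
# cue number, timestamp line, one or more text lines, blank line.
import re
from pathlib import Path

def srt_to_transcript(srt_path: str) -> str:
    text_lines = []
    for line in Path(srt_path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.isdigit():                # blank lines and cue numbers
            continue
        if re.match(r"\d{2}:\d{2}:\d{2}", line):      # timestamp lines
            continue
        text_lines.append(line)
    return " ".join(text_lines)

if __name__ == "__main__":
    print(srt_to_transcript("done_well_week3.srt"))  # hypothetical file
```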

An upside I noticed is that recordings made with this tool are definitely shorter, which is great as I tend to ramble the minute I don’t prepare notes on what I want to say. The captions are generated by the software, so that’s done quickly but I still need to clean them up and it’s much quicker and easier if there isn’t much waffle. In fact, compared with the first screenshot above, in which there are three topics in the Done well section, this semester I only had one topic/video per Done well section. I really did plan on checking with the students if they noticed any difference between just audio and these recordings with a visual component, but the end of the semester is here and I don’t seem to have done that. Maybe next semester.

What are your thoughts on audio in courses which are mostly delivered asynchronously online? Do you think you would prefer engaging with the audio as opposed to going through transcripts? What strikes you as the ideal length for audio recordings?

Thanks for reading!

Categories: Edtech, MOOC, online course

Some (new) observations on peer review

I recently completed a MOOC called Elements of AI. Let me first say that I am privately (and now perhaps not as privately) thrilled to have managed this because I’m highly unlikely to commit to a MOOC if it looks like I might not have time to do it properly (whatever that means) and it often looks that way. I enjoyed the course and definitely learned a bit about AI – robots will not replace teachers anytime soon in case anyone was wondering – but I couldn’t help noticing various aspects of course design along the way. This is what this post is about, in particular the peer review component. 

S B F Ryan: #edcmooc Cuppa Mooc (CC BY 2.0)

Most of my experience with peer review is tied up with Moodle’s workshop activity, which I have written about here, so the way it was set up in this course was a bit of a departure from what I am used to. There are 5 or 6 peer review activities in Elements of AI and they all need to be completed if you want to get the certificate at the end – obviously, I do. *rubs hands in happy anticipation*

Let’s take a look at how these are structured. To begin with, the instructions are really clear and easy to follow – and despite reading them carefully more than once, I still occasionally managed to feel, on submitting the task and reading the sample “correct” answer, that I could have paid closer attention (the “duh, they said that” feeling). The reason I note this is that it’s all too easy to forget about it when you’re the teacher. I often catch myself thinking – well, I did a really detailed job explaining X, so how did the student not get that?

Before submitting the task, you’re told in no uncertain terms that there’s no resubmitting and which language you’re meant to use (the course is offered in a range of languages). I read my submissions over a couple of times and clicked submit. In the Moodle workshop setup, which I am used to, you can then relax and wait for the assessment stage, which begins at the same time for all the course participants. Elements of AI has no restrictions in terms of when you can sign up (and submit each peer review), so I realized from the start that their setup would have to be different. 

The assessment stage starts as soon as you’ve made your submission. You first read a sample answer, then go on to assess the answers of 3 other course participants. For each of these three you can choose between two random answers you’re shown before you commit to one, and you assess it on a scale from an intensely frowning face to a radiant smile (there are 5 faces altogether). You are asked to grade the other participants on 4 points:

  1. staying on topic
  2. response is complete/well-rounded
  3. the arguments provided are sound
  4. response is easy to understand

The first time I did this, I read both random responses very carefully and chose the one that seemed more detailed. This was then quickly assessed because the 4 points are quite easy to satisfy if you’ve read the instructions at all carefully. However, I did miss having an open-ended answer box where I could justify anything less than a radiant smile. I’m guessing this was intentional so as to prevent people from either submitting overly critical comments or spamming others (or for another reason that hasn’t occurred to me) but I often felt an overwhelming urge to say, well, yes, the response was easy to understand, but you might consider improving it further by doing X. Possibly those who aren’t teachers don’t have this problem. 😛

It was also frustrating when I came across an answer that simply said “123” and another that was plagiarized – my guess is that the person who submitted it had a look at someone else’s screen after that other person had already made their submission and could access the sample answer. Or maybe someone copied the sample answer somewhere where others had access to it? The rational part of my brain said, “Who cares? They clearly don’t, so why should you? People could have a million different reasons for signing up for the course.” The teacher part of my brain said, “Jesus. Plagiarizing. Is. Not. Okay. Where do I REPORT this person? They are sadly mistaken if they think they’re getting the certificate.”

Once you’ve assessed the three responses, an interesting thing happens. You’ve completed the task and can proceed to the next one, but you still have to wait for someone to assess your work. This, you’re told, will happen regardless, but if you want to speed up the process, you can go ahead and assess some more people. The more responses you assess, the faster your own response will come up for assessment. I ended up assessing 9 responses per peer review task, so clearly this incentive worked on me, though I have no idea how much longer I would’ve had to wait for my grades if I had only assessed 3 responses per task. I only know that when I next logged on, usually the following day, my work had already been assessed.
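The course doesn’t reveal how this works behind the scenes, but here is a minimal sketch of how that kind of incentive could be implemented (purely my guess, not the actual Elements of AI setup): submissions whose authors have completed more reviews get pushed to the front of the assessment queue.

```python
# Hypothetical sketch of a "review more, get reviewed sooner" queue.
# Not based on the actual Elements of AI implementation.
import heapq
import itertools

class ReviewQueue:
    """Submissions whose authors have completed more peer reviews are served first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: earlier submissions first

    def add(self, submission_id, reviews_done_by_author):
        # More reviews done -> smaller sort key -> comes up for assessment sooner.
        heapq.heappush(
            self._heap,
            (-reviews_done_by_author, next(self._counter), submission_id),
        )

    def next_to_assess(self):
        return heapq.heappop(self._heap)[-1]

queue = ReviewQueue()
queue.add("participant_A", reviews_done_by_author=9)
queue.add("participant_B", reviews_done_by_author=3)
print(queue.next_to_assess())  # participant_A's work comes up for assessment first
```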

For a while I was convinced that either whoever had assessed my work had been very lenient or else all responses were automatically awarded four radiant smiles. My work hadn’t been that good, I thought. Then in the very last peer review I got a less than perfect score, so I assume there was at least one other teacher taking the course. 🙂 

In theory then, once your work has been assessed by two of your peers, you’re completely done with the task. However, at the very end of the course, you’re told that in addition to the grades you received from your peers, your work will also be graded by the teaching staff. Happily, your tasks are still marked as complete and you can get your certificate nevertheless. I suspect I’ll be waiting a while for that grade from the teaching staff and it seems a bit irrelevant, to be honest. It would make sense for someone other than the other course participants to check the responses if this were done before course completion was officially confirmed (so those who submitted “123” wouldn’t get their certificate, for instance), but now that I think of the course as finished and my work as graded, I’m not likely to go back and check whether I received any further feedback, especially if it’s only emoji.

There were other interesting aspects of the course but I’ll stop here so as not to mess up my chances of posting this soon. In short, the course reminded me of why I like peer review (if everyone participates the way the course designers intended them to) and has given me some new ideas about how similar activities can be set up.

Have you completed any MOOCs or other online courses lately? Did they include peer review? What do you think makes a good peer review activity?

Categories: EAP, Edtech, Moodle, Tertiary teaching

ABC for VLE

A couple of days ago I went to a workshop (for work) and I thought I’d blog about it. The workshop was called ABC Workshop for Learning Design (only in Croatian) and it was run by the people from the Computing Centre at the University of Zagreb. Specifically, one of the moderators was (the pretty recently elected) EDEN president, which I thought was kinda cool. In ELT terms it’s probably like attending a workshop run by the IATEFL president – I know they’re only human but still, it’s like, oh, they’ve taken the time out of their busy lives to run this little workshop… anyway, I digress. 

The workshop concept was actually devised as part of an Erasmus+ project which you can read more about on the project website. In brief, it’s meant to help online course instructors plan their courses – actually, it’s probably not targeted primarily at the lowly course instructor but at a team of people responsible for learning design at a particular institution, only in real life in Croatia I think it’s more often each course instructor for themselves when it comes to designing and teaching an online course. In fairness, though, the Computing Centre team are always there if you need them and are very willing to help.

I should note, before I start on what we did, that an online course in this context refers to courses in an LMS (Moodle in our case), not synchronous courses. 

Right at the start we were divided into two groups and thus found ourselves seated together with several other people teaching a range of subjects. The workshop activities have been devised with a view to (a couple of) people teaching the same subject working in a group, and in fact it was recommended that people apply with this in mind. Our group, however, was quite diverse, incorporating instructors of music and classical philology, among others, so we first needed to agree on a course we all felt comfortable planning. We could choose either an actual course one of us was teaching, which had the disadvantage of only one person being familiar with it, or devise a course on the spot, which everyone would be equally unfamiliar with – so we went with the latter, opting to plan an introductory course on academic writing. I actually have taught an EAP course, so I guess technically I was somewhat at an advantage, only this was a course aimed at L1 speakers. 

Our first task was to fill in the handout below.

ABC workshop – course info sheet

We needed to come up with the course title, the number of ECTS points (this is apparently a tweak introduced by the folks at the Computing Centre because it turns out teachers have a tendency to say, oh, this is gonna be something basic and then proceed to load it up with coursework out of all proportion to what the course load is supposed to be as reflected in the number of ECTS), and a course summary no longer than a tweet (because ideally it should take no longer than that to summarize the main points of your course – I liked that). 

We also had to formulate a couple of learning outcomes (we stopped at four, which turned out to be lucky because we felt, at the end, that we would need to tack on another ECTS point once we’d looked at all the activities we’d planned for the students). The spider chart on the right is supposed to reflect the proportion of the course that would be devoted to different learning types (no, not learning styles). These are acquisition, inquiry, discussion, practice, collaboration and production. They’re “based on the pedagogic theory of Professor Diana Laurillard’s Conversational Framework”, according to the project website, and there’s a video where she explains how they work. The idea is that you first fill in the spider chart in one color at the initial stage and then again after you’ve designed the whole course, to see if anything has changed. We, for instance, initially thought our students would be doing a lot more inquiry.
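We did this on paper, but if you wanted a digital version of the spider chart, something like the following matplotlib sketch would do the job. The percentages are invented purely for illustration, not the numbers our group came up with.

```python
# A rough digital take on the ABC spider chart, comparing the initial
# estimate with the post-design pass. All numbers are made up.
import numpy as np
import matplotlib.pyplot as plt

learning_types = ["Acquisition", "Inquiry", "Discussion",
                  "Practice", "Collaboration", "Production"]
initial = [30, 25, 15, 15, 10, 5]        # first pass (invented)
after_design = [25, 15, 20, 15, 10, 15]  # second pass, once activities were planned

angles = np.linspace(0, 2 * np.pi, len(learning_types), endpoint=False).tolist()
angles += angles[:1]  # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("Initial estimate", initial), ("After design", after_design)]:
    vals = values + values[:1]
    ax.plot(angles, vals, label=label)
    ax.fill(angles, vals, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(learning_types)
ax.set_yticklabels([])  # the exact numbers matter less than the shape
ax.legend(loc="upper right")
plt.show()
```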

Finally, we needed to give some thought to whether our course was going to be fully online or blended, which is what the line on the bottom right of the picture represents – we opted for a blended course but with a pronounced online dimension. 

This all actually took longer than you might expect, given that there were seven of us and some negotiating was required. The second step was the storyboard, which is in the following pics. 

We decided on how to address the learning outcomes – in week-by-week or topic-based format. I think we first went with the week-by-week, then decided that some of the outcomes (or would it be better to call them course aims?) would take more than a single week to address, so we switched. 

We next picked the learning types we felt would best help students achieve these outcomes, then had to decide on the actual activities the students would do. For instance, the inquiry type (somewhat confusingly – albeit not incorrectly – called “research” in Croatian) includes traditional and digital methods of carrying out an activity. All the learning types do, because if you’re running a blended course, you’ll probably use traditional methods as well as digital. If we stick with the inquiry type, an example would be using traditional methods vs digital tools to collect and analyze data. 

As the course began to take shape, there was a lot of discussion on exactly how much F2F time the students needed and which activities were most suitable for the online segment. It turned out that some learning outcomes, which we’d perhaps thought would be easily achieved and would not require much class time, were a bit more demanding and would thus take longer. Our initial estimate that 30 hours (2 ECTS points) would be enough was challenged, but we didn’t officially revise it. Once all the activities had been planned, we went back through them and awarded stars to those that would be assessed if the course were ever taught (silver for formative and gold for summative assessment). I don’t know if this shows up in the photo, but our idea was to use formative assessment for collaboration and discussion activities so as to encourage students to take part in these.

I understand the workshop includes one more step, which is devising an action plan of sorts whereby you identify what exactly you’ll need to do to put all you’ve designed into practice; for instance, you might have to record a video and you’ll need someone’s help to do this, so you should plan how to go about it. We ran out of time for this step, but I think in our case this wasn’t a problem because I doubt this course will ever be implemented in its current form (seeing as it’s fictional).

I thought the workshop was practical and useful. It made me reflect on my writing skills course and how it might look different if I’d designed it following the ABC principles. I’ve always been kind of reluctant to look at the big picture; if we were supposed to write an outline for an essay in English class I usually didn’t do it and started on the first paragraph straight away. Generally, the essays turned out fine, but I can appreciate that it would have been useful to write an outline. The essays might have been even better. 

I’d be interested to read how you approach (blended or online) course design. Do you think you could apply (parts of) this approach? Would it work with classroom courses? What about language teaching? 

Thanks for reading!