Correct me if I’m wrong

This post has been sort of brewing for a while: since the spring of 2015 if I’m honest. You may wonder how come I’m so sure about this. It’s because at the time I was using Kaizena for feedback and wanted to write about that. Only I never did.

Handwritten correction of less to fewer
Photo taken from http://flickr.com/eltpics by @sandymillin, used under a CC Attribution Non-Commercial license, https://creativecommons.org/licenses/by-nc/2.0/

Also, James Taylor had suggested at that year’s BELTA Day that instead of simply correcting student work, I could indicate the problem areas in the sentence and have the students do the correcting themselves. I found this idea very appealing and immediately put it into practice. I think we ran with it for a couple of semesters, but it turned out to be terribly time-consuming as I had to check every submission at least twice. Some I had to check three times because not all the students managed to do what I was hoping they would; i.e., they took a stab at correcting the error but went off in the wrong direction. I wanted to write about that too, only I never did.

In the post 7 things students expect from an online writing course (see the fourth thing), I briefly wrote about how I don’t actually do that much correcting. I’m not sure this is highly popular with students, as they’ve been taught to expect the instructor to correct their work, and there’s always the nagging feeling that they think I’m not doing my job properly. At the beginning of most semesters we discuss a couple of statements about writing as a group, one of which is: I expect the teacher/instructor to mark all the mistakes in my work. I ask the students to mark the statements as true or false and I don’t think I’ve ever had a student claim this particular one to be false for them.

I use this as an opportunity to explain that there are going to be three slightly longer pieces of writing throughout the semester on which they’ll be receiving detailed feedback and where everything that could be seen as a mistake or could potentially confuse readers will be addressed, but apart from that, I won’t be correcting their grammar. One of the reasons for this is that a lot of the writing they do on the course is read by other students, and I’ve always figured it wouldn’t exactly be productive to analyze to death something they’ve already used to communicate successfully.

I’ve recently completed this detailed correction for the first assignment of this semester and I wanted to have a kind of record of what I do these days, both in terms of the tools involved and how I go about making corrections/giving feedback.

When I stopped using Kaizena, I abandoned the idea of having students make corrections themselves. A quick digression: I’m pretty sure I’ve come across papers on Twitter on whether having students correct their own mistakes is effective, but I haven’t bookmarked any, so please let me know if any research comes to mind. I think what I do now is fairly conventional. Students submit their work as a Word doc – or very occasionally in a different format which I then convert to Word so I can do my thing – and I upload the corrected versions back to Moodle when I’m done.
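If you ever find yourself doing that conversion step often enough to want to automate it, it can be scripted. Here’s a minimal sketch, assuming LibreOffice is installed and on your PATH (the folder name is hypothetical):

```python
import subprocess
from pathlib import Path

SUBMISSIONS = Path("submissions")  # hypothetical folder of student files

# Convert anything that isn't already a Word document (e.g. .odt or .rtf)
# using LibreOffice's headless mode; converted files land in the same folder.
for f in SUBMISSIONS.iterdir():
    if f.is_file() and f.suffix.lower() not in (".doc", ".docx"):
        subprocess.run(
            ["soffice", "--headless", "--convert-to", "docx",
             "--outdir", str(SUBMISSIONS), str(f)],
            check=True,
        )
```

For the one or two files a semester I get, doing it by hand in Word or LibreOffice is obviously quicker.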

There are two types of intervention I make in the Word doc. If something is likely to be considered a mistake in terms of conventional grammar rules, I use the track changes option to correct it. If at all possible, I will add a comment explaining that this would be considered a mistake as far as standard usage rules are concerned. I’m not sure it’s very helpful to treat absolutely everything as fine just because it is fine in some dialect or other, although I do think students should be (made) aware of dialect differences. In my case, communication science students are generally aware of this in their L1, too, so my job is easier in this respect.

If I want to make a more general point – for example, to suggest that a student run a spell check on their submission, consider breaking up a longish paragraph that seems to be addressing several ideas, or double-check the meaning of a word they’ve used – I’ll add comment bubbles. I’ve done a post on a comment bank which I had – still have – in a regular Google Doc, but I’ve since come across this post on the Control Alt Achieve blog and started building up a comment bank in Google Keep, which does feel more organized. In the spirit of Sarah’s Twitter anniversary resolution, I should note that it was thanks to Adi Rajan, I think, that this post came up in my feed about two years ago.

Even though my online groups are small, giving feedback and correcting student work is time-consuming enough to make me want to know if there’s some kind of uptake, even if it’s just students reading my comments. When I used to ask them to correct their own mistakes, this obviously wasn’t something I worried about, because they had to engage with my feedback, even if only perfunctorily, to make the corrections. The way I currently give feedback and correct, though, gives me no indication of whether the corrected version of the document has even been downloaded. There’s something I do about this with the second and third longer pieces of writing (hopefully more on that in a future post), but for this first piece, what I do is include a reflection prompt on corrections and feedback in the portfolio section of the course. Not every student addresses this topic, but enough people do for me to feel that the work hasn’t been thrown away.

One other thing I should mention is the track changes option. There’s a tutorial in the course materials on how to view suggested changes if this option has been used. When I’ve corrected everyone’s submission, I post an announcement on the course noticeboard pointing out that this tutorial is available should anyone want to have a look. (An indicator that they’ll want to have a look is if the suggested changes don’t show up for them automatically and they don’t know what to do about this.) As a second step, when each individual corrected submission is uploaded, the student is notified, and the message says, among other things, that they should make sure to view the suggested changes – now, as I write this, I realize that I should add a link to the how-to tutorial to that message as well.

The reason I mention this is that even though you think you’ve got it all covered, of course you don’t, and it is through a random comment that you realize a student was completely unaware of any changes suggested to their text apart from the comment bubbles. Panic sets in as the idea surfaces that maybe no one has ever, in any of the last couple of semesters, seen any of the corrections. You’ve been doing it all for nothing, plus the students all think you haven’t actually been doing anything! The panic gradually fades and you do all you can do, which is post an announcement explaining once again how corrections are made, with a link to the tutorial and a screenshot illustrating how to access the review tab.

Thanks for reading and I’d love to hear how you address corrections/feedback/corrective feedback on written work, not necessarily online. Any tips?


How digitally competent are you?

 Oiluj Samall Zeid: Autofocus (CC BY-NC-ND 2.0)

This might sound like a trick question but is in fact an attempt (probably lame) at a clickbaity title. It’s not entirely misleading though because this post is going to be about assessing a whole range of digital competences – before you tune out at the mention of ‘digital’ and think, “Oh God, not 21st-century skills again,” give it a chance because you might be interested in how you’d score. 😛

I’ve been teaching online – in an asynchronous environment, which I suspect is not the primary definition of ‘online’ that comes to mind for most of the ELT community – for the last 6 years. Given that over this time I’ve tried out a lot of online tools and consider myself reasonably edtech proficient, I was curious to see which level I’d be at if something like the CEFR for digital skills were ever devised.

Over the last couple of months I’ve had the opportunity to familiarize myself with the DigCompEdu framework, which basically works like the CEFR and will be easy to navigate for those familiar with the six-level (A1-C2) concept. This is what the European Commission website says (if you didn’t click through above):

The European Framework for the Digital Competence of Educators (DigCompEdu) is a scientifically sound framework describing what it means for educators to be digitally competent. It provides a general reference frame to support the development of educator-specific digital competences in Europe. DigCompEdu is directed towards educators at all levels of education, from early childhood to higher and adult education, including general and vocational education and training, special needs education, and non-formal learning contexts. DigCompEdu details 22 competences organised in six Areas. The focus is not on technical skills. Rather, the framework aims to detail how digital technologies can be used to enhance and innovate education and training.

Apart from the random capitalization – why Areas? *groan* – I liked the idea. We used to use the CEFR a lot at Octopus, by which I mean that the school took part in piloting the ELP (the European Language Portfolio) – this was before my time – and for a time each student received their own copy. Part of at least one class was dedicated to explaining how the ELP works and to familiarizing students with the concept of self-assessment (not widely known or trusted in Croatia 15 years ago – or possibly even now, as far as the trust part goes – but that’s another matter).

The six Areas, incidentally, for those who still haven’t clicked through, are: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and facilitating learners’ digital competence.

Last week I used the self-assessment tool developed to accompany the DigCompEdu framework. Disclaimer: in a rush to get started, I clicked on the first link available, whereas what I should have done was use the version developed for those in higher ed, which is my current context and the one I had in mind as I was completing the assessment. This may have had an impact on my results.

When you’re done assessing your skills in the six areas, you receive two PDF documents: one has the answers you picked, and the other has your results and recommendations on how you could go from, say, A2 to B1 in each of the 22 competences. You’re asked at the beginning which level you’d place yourself at, and then the same question comes up again at the end – before the results.

I confidently said I was at B2 before I started clicking away and then, in a sudden burst of what turned out to be delusional overconfidence, changed my mind to C1 before clicking submit.

As you can see above, my score places me at the higher end of B2 (okay, the ‘higher end’ bit you can’t see, but the range for B2 is 50-65). I scored best on professional engagement, teaching and learning, and facilitating learners’ digital competence, and I think this is actually pretty fair and accurate. People at B2 go by the possibly presumptuous name of Expert, and while I very much hesitate to say that I am an expert in all things digital, I think I am quite comfortable with many things edtech related. People at C1 and especially C2 are what we – possibly sometimes with a degree of (misplaced?) irony – refer to as edtech gurus. They’re the people we’d ask for advice on edtech issues, who contribute to shaping the opinions of others… and I definitely don’t see myself as this type of person.

If you decide to do the self-assessment, I’d be interested in hearing what you think. Or even if you don’t, of course. 🙂

Leveling up

Stefanie L: cat watching tv (CC BY-NC-ND 2.0)

A couple of weeks ago I blogged about H5P and how excited I was to discover this new resource I could make use of in Moodle. In that post I described the process of setting up a drag & drop activity and adding it to my online course. I was sure I wanted to try out a number of other content types – which is what the 40-odd H5P activities are officially called – but I wanted there to be a reason for adding them, apart from novelty and the thrill of experimentation.

There are a few screencasts in the course that I thought I could use the interactive video content type with. A little bit of background on the screencasts: they started out as presentations I used when I taught the same course offline. Yes, they were PowerPoint, but I didn’t think that was reason enough to ditch them, especially as they were brief and had been designed to get the students to interact with the content. The first time I moved them online I used Present.me, which I’m not sure even exists anymore. About three years ago I re-recorded them, uploaded them to YouTube and added subtitles: a far more user-friendly experience overall.

There aren’t many – six in a four-month course – partly because they’re pretty time-consuming to make for someone who doesn’t do this on a regular basis, and partly because I don’t think a writing skills course actually requires many. The longest screencast is just under ten minutes, if you don’t count the one in the revision unit, where I chat about what students can expect at the exam (a little under 15 minutes). That one is unscripted, and unscripted recordings are likely to run longer anyway.

Eventually I settled on the longest screencast – the ten-minute one – to experiment with. As with the drag & drop, I first added the H5P interactive content activity to the course and selected the content type: this time around it was interactive video. You can either upload a video directly (in which case I think there’s a size restriction) or add a link to YT, which was the route I took. One aspect I was immediately unhappy about was the disappearance of subtitles; apart from the fact that they’re important to ensure accessibility, I think they can be helpful even for pretty advanced students. I got around this (sort of) by embedding the YT video directly below the interactive one and recommending the students first try the interactive version, then watch the one with the subtitles if they felt they needed them.
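In case it helps anyone setting up the same workaround: the subtitled version is just a YouTube iframe pasted into a Moodle label (via the editor’s HTML view) directly below the H5P activity. Here’s a throwaway sketch that builds the embed code – the video ID is a placeholder, and cc_load_policy=1 is the YouTube embed parameter that switches captions on by default:

```python
# Throwaway helper: build the iframe HTML to paste into a Moodle label
# placed directly below the H5P interactive video.
def youtube_embed(video_id: str) -> str:
    # cc_load_policy=1 asks YouTube to show the subtitles by default,
    # so students don't have to hunt for the CC button.
    return (
        f'<iframe width="560" height="315" '
        f'src="https://www.youtube.com/embed/{video_id}?cc_load_policy=1" '
        f'title="Screencast (subtitled version)" allowfullscreen></iframe>'
    )

print(youtube_embed("VIDEO_ID"))  # VIDEO_ID is hypothetical
```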

The screencasts are based on short sets of slides that are often meant to be presented in the following way: I do a bit of talking, the students work in pairs to answer a question or discuss it as a group, and then we check their ideas on the next slide. Because of this, in the recordings I would often ask the students to pause the video and try to answer a question I’d asked – one that they would address in pairs or groups in class. I’d suggest they make a note of their responses somehow, so they could compare them with what came next in the screencast. These were great natural places to add questions to the video and I took advantage of them.

The interactive video content type lets you add a range of different question/interaction types (MCQs, T/F, drag the words – which can be used for gapfills – matching, and more). I was able to add a link to external content as well, and at the end I wrote up a brief summary of what the video was about and made it into a gapfill activity. I rather liked the option of having the students choose the best summary out of three possible ones – I gather this is a separate question type – but it seemed like it might take a while to set up and I didn’t have much time.
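If you end up making a lot of these gapfills, the markup is easy to script: as far as I can tell from the H5P documentation, both the fill-in-the-blanks and drag-the-words types mark a gap by wrapping the target word in asterisks. A quick sketch – the summary and target words are made up, and you should double-check the markup against your own H5P version:

```python
import re

def to_gapfill(summary: str, targets: list[str]) -> str:
    """Wrap each target word in asterisks – the markup H5P uses
    for gaps in fill-in-the-blanks and drag-the-words content."""
    for word in targets:
        # \b stops us from gapping substrings of longer words;
        # count=1 gaps only the first occurrence.
        summary = re.sub(rf"\b{re.escape(word)}\b", f"*{word}*", summary, count=1)
    return summary

# Hypothetical summary from a writing-skills screencast:
print(to_gapfill(
    "A good paragraph develops a single idea and opens with a topic sentence.",
    ["single", "topic"],
))
# -> A good paragraph develops a *single* idea and opens with a *topic* sentence.
```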

Another advantage of these H5P activities is that you can view the results of the interactions in the Moodle gradebook and see how well the students did on average, as well as whether there’s anyone who seems to need a little extra help. Of course, what you can’t see is whether those who did well did a bit of research before answering the questions or simply knew the answers already, nor can you see whether those who did less well rushed a little/took a random stab at the answers or didn’t really understand/follow the explanations in the video. I did add a question about this to the list of questions the students might want to address in their learning journals, so we’ll see if any interesting insights emerge.
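For anyone who likes to skim these results outside Moodle, the gradebook can be exported to CSV and summarized with a few lines of code. A sketch, assuming a standard export with ‘First name’ and ‘Surname’ columns; the grade column name and the threshold below are made up, and yours will depend on the activity title and grading setup:

```python
import csv
from statistics import mean

GRADE_COLUMN = "Interactive video: Screencast 3 (Real)"  # hypothetical column name
THRESHOLD = 60.0  # flag anyone below this score

scores, flagged = [], []
with open("gradebook_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        raw = row.get(GRADE_COLUMN, "").strip()
        if not raw or raw == "-":  # student hasn't attempted the activity
            continue
        score = float(raw)
        scores.append(score)
        if score < THRESHOLD:
            flagged.append(f"{row['First name']} {row['Surname']}")

if scores:
    print(f"Class average: {mean(scores):.1f}")
    print("Might need a nudge:", ", ".join(flagged) or "no one")
```

Whether the numbers mean much is, as noted above, another matter – a low score could just be a rushed attempt.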

How do you feel about interactive videos: have you used them with your students? Are there any effective tools you would recommend for this besides H5P? I recently came across an article which recommended Edpuzzle, but I’m sure there are others. Thanks for reading!