Correct me if I’m wrong

This post has been sort of brewing for a while: since the spring of 2015, if I’m honest. You may wonder how I can be so sure. It’s because at the time I was using Kaizena for feedback and wanted to write about that. Only I never did.

Image: handwritten correction of ‘less’ to ‘fewer’. Photo taken from http://flickr.com/eltpics by @sandymillin, used under a CC Attribution Non-Commercial license (https://creativecommons.org/licenses/by-nc/2.0/).

Also, James Taylor had suggested at that year’s BELTA Day that instead of simply correcting student work, I could indicate the problem areas in the sentence and have the students do the correcting themselves. I found this idea very appealing and immediately put it into practice. I think we ran with it for a couple of semesters, but it turned out to be terribly time-consuming as I had to check every submission at least twice. Some I had to check three times because not all the students managed to do what I was hoping they would; i.e., they made a stab at correcting the error but went off in the wrong direction. I wanted to write about that too, only I never did.

In the post 7 things students expect from an online writing course (see the fourth thing), I briefly wrote about how I don’t actually do that much correcting. I’m not sure this is highly popular with students, as they’ve been taught to expect the instructor to correct their work, and there’s always the nagging feeling that they think I’m not doing my job properly. At the beginning of most semesters we discuss a couple of statements about writing as a group, one of which is: I expect the teacher/instructor to mark all the mistakes in my work. I ask the students to mark the statements as true or false and I don’t think I’ve ever had a student claim this particular one to be false for them.

I use this as an opportunity to explain that there will be three slightly longer pieces of writing over the semester on which they’ll receive detailed feedback, and where everything that could be seen as a mistake or that might confuse readers will be addressed; apart from that, though, I won’t be correcting their grammar. One of the reasons for this is that a lot of the writing they do on the course is read by other students, and I’ve always figured it wouldn’t exactly be productive to analyze to death something they’ve already used to communicate successfully.

I’ve recently completed this detailed correction for the first assignment of this semester, and I wanted to have a kind of record of what I do these days, both in terms of the tools involved and how I go about making corrections/giving feedback.

When I stopped using Kaizena, I also abandoned the idea of having students make corrections themselves. A quick digression: I’m pretty sure I’ve come across papers on Twitter on whether students correcting their own mistakes is effective, but I haven’t bookmarked any, so please let me know if any research comes to mind. I think what I do now is fairly conventional. Students submit their work as a Word doc – or very occasionally in a different format, which I then convert to Word so I can do my thing – and I upload the corrected versions back to Moodle when I’m done.

There are two types of intervention I make in the Word doc. If something is likely to be considered a mistake in terms of conventional grammar rules, I use the track changes option to correct it. If at all possible, I will add a comment explaining that this would be considered a mistake as far as standard usage rules are concerned. I’m not sure it’s very helpful to treat absolutely everything as fine just because it is fine in some dialect or other, although I do think students should be (made) aware of dialect differences. In my case, communication science students are generally aware of this in their L1, too, so my job is easier in this respect.

If I want to make a more general point, such as suggest that a student run a spell check on their submission, consider breaking up a longish paragraph into two or more if it seems to be addressing several ideas, or double check the meaning of a word they’ve used, I’ll add comment bubbles. I’ve done a post on a comment bank which I had – still have – in a regular Google Doc, but I’ve since come across this post on the Control Alt Achieve blog and started building up a comment bank in Google Keep, which does feel more organized. In the spirit of Sarah’s Twitter anniversary resolution, I think it was thanks to Adi Rajan that this post came up in my feed about two years ago.

Even though my online groups are small, giving feedback and correcting student work is time-consuming enough to make me want to know if there’s some kind of uptake, even if it’s just students reading my comments. When I used to ask them to correct their own mistakes, this obviously wasn’t something I worried about, because they had to engage with my feedback, even if only perfunctorily, in order to make the corrections. The way I currently give feedback and correct, though, gives me no indication of whether the corrected version of the document has even been downloaded. There’s something I do about this with the second and third longer pieces of writing (hopefully more on that in a future post), but for this first piece, what I do is include a reflection prompt on corrections and feedback in the portfolio section of the course. Not every student addresses this topic, but enough people do for me to feel that the work hasn’t been thrown away.

One other thing I should mention is the track changes option. There’s a tutorial in the course materials on how to view suggested changes if this option has been used. When I’ve corrected everyone’s submission, I post an announcement on the course noticeboard pointing out that this tutorial is available should anyone want to have a look. (An indicator that they’ll want to have a look is if the suggested changes don’t show up for them automatically and they don’t know what to do about this.) Then, when each individual submission is uploaded, the student is notified and the message says, among other things, that they should make sure to view the suggested changes – and as I write this, I realize that I should add a link to the tutorial to that message as well.

The reason I mention this is that even though you think you’ve got it all covered, of course you don’t, and it is through a random comment that you realize a student was completely unaware of any changes suggested to their text apart from the comment bubbles. Panic sets in as the idea surfaces that maybe no one has ever, in any of the last couple of semesters, seen any of the corrections. You’ve been doing it all for nothing, plus the students all think you haven’t actually been doing anything! The panic gradually fades away and you do all you can do, which is post an announcement explaining once again how corrections are made, with a link to the tutorial and a screenshot illustrating how to access the review tab.

Thanks for reading and I’d love to hear how you address corrections/feedback/corrective feedback on written work, not necessarily online. Any tips?


CPD where you’d least expect it

Ok, so the title’s just a little bit misleading because while there are certainly settings where engaging in CPD wouldn’t be likely, meeting up with an ESP instructor to talk about online courses isn’t one of them.

I spent this Saturday morning with a colleague who is planning to introduce a blended learning component into one of his regular (traditional) courses. His institution uses Moodle, so the idea was that I would show him how some of the resources and activities work in practice, as I’ve tested quite a few of them out in my course over the past couple of semesters.

It turned out that this instructor had some experience with Moodle, so we could skip the orientation details and dive straight in. He talked me through the activity types he was familiar with, like quizzes and an interesting activity I hadn’t used before as my course is entirely online: the attendance register. (I’d actually assumed this was a resource rather than an activity, but the official site says otherwise.)

He then described the course he wanted to add online features to, and we brainstormed a little on how this could be done in a relatively undemanding fashion given the time constraints involved. I showed him my course and some activity types I think work well, for instance, my favorite: the peer review (workshop). The idea of dividing students into groups and assigning a forum to each group also seemed to appeal to him, particularly the concept of the Q&A forum, in which you can’t read earlier posts until you’ve added your own.

Throughout all this we talked about our courses and students, some common issues we’ve come up against and how we deal with them, and compared our specific teaching environments. I recommended the Learn Moodle MOOC, which I found tremendously useful when I was starting out and which they’ll be running again in June.

Later on in town I spotted some people toting bags bearing the logo of a well-known ELT materials publisher, and I realized this was the day they’d held their traditional one-day event. I used to attend when I was at Octopus. And it occurred to me that those attendees would get a certificate of attendance, while I had also spent a not inconsiderable amount of time doing CPD, only mine doesn’t translate into any kind of formal recognition.

Please don’t get me wrong; I love talking about my course – if anyone knows this it’s the readers of this blog – and it’s fantastic to have an opportunity to do this with someone who is interested in online learning. I also found it very useful to talk about my day-to-day teaching issues with someone in real life – I don’t get many opportunities to do this as I work in an office now and my virtual staffroom (my online PLN) is basically my only source of ELT-related news and info. It’s inspiring, motivating, supportive and generally lovely, but even someone who is really into online stuff appreciates talking to people over coffee about things of relevance in their local context.

What I’m saying is that it would be great if these less formal/completely informal forms of CPD were also somehow recognized for what they are. I haven’t been doing CPD for the certificates for ages now, but still. I have no practical suggestions as to how this could be done, though – for example, in terms of defining how long a CPD session lasted or which topics were covered – and I know this is important for people who need to quantify CPD.

What do you think? Are there countries or organizations where CPD already is defined as something beyond what you can prove has taken place with a certificate? Or do you feel this is unnecessary and would say there is no need to describe my example as anything other than a chat with a colleague?

How digitally competent are you?

Image: Oiluj Samall Zeid, Autofocus (CC BY-NC-ND 2.0)

This might sound like a trick question but is in fact an attempt (probably lame) at a clickbaity title. It’s not entirely misleading though because this post is going to be about assessing a whole range of digital competences – before you tune out at the mention of ‘digital’ and think, “Oh God, not 21st-century skills again,” give it a chance because you might be interested in how you’d score. 😛

I’ve been teaching online – in an asynchronous environment, which I suspect is not the primary definition of ‘online’ that comes to mind for most of the ELT community – for the last 6 years. Given that over this time I’ve tried out a lot of online tools and consider myself reasonably edtech proficient, I was curious to see which level I’d be at if something like the CEFR for digital skills were ever devised.

Over the last couple of months I’ve had the opportunity to familiarize myself with the DigCompEdu framework, which basically works like the CEFR and will be easy to navigate for those familiar with the six-level (A1-C2) concept. This is what the European Commission website says (if you didn’t click through above):

The European Framework for the Digital Competence of Educators (DigCompEdu) is a scientifically sound framework describing what it means for educators to be digitally competent. It provides a general reference frame to support the development of educator-specific digital competences in Europe. DigCompEdu is directed towards educators at all levels of education, from early childhood to higher and adult education, including general and vocational education and training, special needs education, and non-formal learning contexts. DigCompEdu details 22 competences organised in six Areas. The focus is not on technical skills. Rather, the framework aims to detail how digital technologies can be used to enhance and innovate education and training.

Apart from the random capitalization – why Areas? *groan* – I liked the idea. We used to use the CEFR a lot at Octopus, by which I mean that the school took part in piloting the ELP (the European Language Portfolio) – this was before my time – and for a time each student received their own copy and part of at least one class was dedicated to explaining how the ELP works and helping familiarize students with the concept of self-assessment (not widely known or trusted in Croatia 15 years ago, or possibly even now when it comes to trusting, but that’s another matter).

The six Areas, incidentally, for those who still haven’t clicked through, are: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and facilitating learners’ digital competence.

Last week I used the self-assessment tool developed to accompany the DigCompEdu framework. Disclaimer: in a rush to get started, I clicked on the first link available, whereas what I should have done was use the version developed for those in higher ed, which is my current context and which I had in mind as I was completing the assessment. This may have had an impact on my results.

When you’re done assessing your skills in the six areas, you receive two PDF documents: one has the answers you picked and the other has your results, plus recommendations on how you could go from, say, A2 to B1 for each of the 22 competences. You’re asked at the beginning which level you’d place yourself at, and then the same question comes up again at the end – before the results.

I confidently said I was at B2 before I started clicking away and then, in a sudden burst of what turned out to be deluded overconfidence, changed my mind to C1 before clicking submit.

As you can see above, my score places me at the higher end of B2 (okay, the higher-end bit you can’t see, but the range for B2 is 50-65). I scored best on professional engagement, teaching and learning, and facilitating learners’ digital competence, and I think this is actually pretty fair and accurate. People at B2 go by the possibly presumptuous name of Expert, and while I very much hesitate to say that I am an expert in all things digital, I think I am quite comfortable with many things edtech related. People at C1 and especially C2 are what we – possibly sometimes with a degree of (misplaced?) irony – refer to as edtech gurus. They’re the people we’d ask for advice on edtech issues, who contribute to shaping the opinions of others… and I definitely don’t see myself as this type of person.
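For the curious, here’s a minimal sketch of how a point total from the self-assessment might map onto a level. Only the B2 band (50-65) and the Expert label come from what I’ve described above; every other cut-off in the sketch is a placeholder I’ve invented purely for illustration, not the tool’s actual thresholds.

```python
# Minimal sketch (not the official tool) of mapping a DigCompEdu
# self-assessment point total to a proficiency band. Only the B2 band
# (50-65) comes from the post above; all other cut-offs are invented
# placeholders for illustration.

BANDS = [
    (0, 19, "A1"),            # placeholder range
    (20, 32, "A2"),           # placeholder range
    (33, 49, "B1"),           # placeholder range
    (50, 65, "B2 (Expert)"),  # the one range actually mentioned in the post
    (66, 80, "C1"),           # placeholder range
    (81, 88, "C2"),           # placeholder range
]


def level_for_score(score: int) -> str:
    """Return the band label whose range contains the given score."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} falls outside all defined bands")


if __name__ == "__main__":
    # A total near the top of the 50-65 range lands at "the higher end of B2".
    print(level_for_score(63))  # -> B2 (Expert)
```

With the real thresholds substituted for the placeholders, you could check where a given total falls without redoing the questionnaire.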

If you decide to do the self-assessment, I’d be interested in hearing what you think. Or even if you don’t, of course. 🙂