Wednesday, November 25, 2015

Is the customer always right?

Education at all levels is awash with evaluation forms and needs analysis surveys. We all want feedback, but are we asking the right questions? Development projects are based on needs analysis, on the assumption that we should only develop services that customers/staff/students really require. The problem with this approach is that you can't ask for what you don't know exists. If we only designed products and services that people said they needed, we would never have had cars, television, computers, mobiles or the internet. No-one needed them until they had been developed. So asking about needs can help, but it is often not the right approach for innovation. The same applies to student satisfaction. We assume that if students are satisfied with a course then the teacher has done a good job and the students have learnt a lot. Or have they?

Education, unlike products and services, cannot be judged on simple customer satisfaction. Students are satisfied if they get a good grade, but that doesn't always mean that they have learnt very much or that the course was well designed. An article on The Conversation, Students don’t know what’s best for their own learning, highlights the problems of judging education by satisfaction. Students tend to give good evaluations to teachers who don't demand too much, are entertaining and give them good grades. Those who make them work hard and don't serve all the answers on a plate are generally less popular. Teachers who try innovative methods that force students to take charge of their learning and really grapple with working out concepts for themselves often face negative evaluations, and possibly a stern talk from their head of department as a result. Many students (and teachers) mistake content for learning.

That is why many students assume that reading or highlighting passages in their text-book, or merely listening to a lecture, is enough to produce learning. They mistake the ease of the task with greater knowledge. Time-consuming and effortful tasks, like self-testing their knowledge, are consequently seen by students as less efficient for their learning, despite the fact that the more difficult tasks produce the most learning.

Student evaluations can therefore be dangerously misleading and result in "easy" courses being encouraged and more demanding courses being modified to please the customers. Lectures are still popular because they provide the "answers" in one easily digestible session, and many still equate lecturing with teaching. As a result, students will generally give positive evaluations to teachers who give them what they expected. Seeing this as a measure of course quality can have serious consequences, according to the article.

... universities that rely on student evaluations are likely to punish good teachers and encourage those who simply make it easy for students. Most universities have codes of conduct that require decisions to be made on valid evidence. Any manager discussing student evaluations when reviewing lecturers’ performance is probably breaching that part of their own job requirements. Given the evidence, student evaluations are a distraction from the responsibility to provide the best possible education for the nation.

Of course it is good to find out what students thought of a course, but we need to steer away from simply measuring satisfaction and instead provide them with a new set of rubrics that allow them to gauge their effort, collaboration, acquisition of new skills and how they have met new challenges. This type of evaluation takes time to learn, but that way we will get more reliable criteria for assessing course quality. At the end of a good course you may feel rather tired, maybe even irritated and sore, but you also realise that you've achieved something. We need to learn to appreciate teaching that challenges us and pushes us forward, even if it hurts a bit.

Friday, November 20, 2015

Recorded lectures - what's the point?

Photo: Press Play by realjv on Flickr (CC BY-NC-SA 2.0)

A colleague of mine raised an interesting question: why do we automatically record so many lectures and meetings that no-one ever watches? Just because we can easily record and store events doesn't mean that we have to. The problem is that many universities are now recording all their lectures without really pausing to think why they are doing so. Many recordings are watched by a very small number of people, often only for a few minutes per person, and some are never watched at all. Should we maybe think more carefully about what we record and how long it should be available?

It's part of our passionate and rather tragic love affair with lecturing - breaking up is indeed very hard to do. The lecture is the backbone of most university teaching; it's easy to do, it's very scalable and it feels like teaching. Lecture capture systems are extremely popular, probably because they let us continue lecturing as usual while feeling modern, since everything is available online as well. Nothing has changed, just a nod in the direction of some kind of modernity. Then there's the flipped classroom approach, which in its simplest interpretation means pre-recording the lecture and letting students view it as preparation for discussion or workshops in class. That at least avoids gathering students to simply listen to one-way communication, but the question remains: if you are going to flip the class, why is the lecture the format of choice? I'm certainly guilty of recording quite a few sessions like this and admit that viewing figures are far from viral. Do we really learn so much from lectures, and could we use that time more fruitfully?

I don't mean that the lecture should be completely abandoned. A really polished and well-delivered lecture can certainly inspire and provoke discussion and reflection, but maybe it should be used sparingly, as an event that cannot be missed. Instead we could focus more on shorter inputs along the lines of a TED talk, offering insights but also asking questions that the students should find out for themselves. Critics will warn that such an approach risks simplifying everything into convenient and easily digested "sound-bites". But the point here is to replace the lecture with something else, not just a shorter version of it. Instead we should record a short introduction to the subject that raises questions and issues for the students to investigate and process. Start a discussion, provoke deeper thought, inspire investigation, instead of providing all the answers. The lecture represents a spoon-feeding approach that is certainly appreciated by students who want to be fed, but as long as we continue to lecture, face-to-face or online, it is hard to move towards student-oriented learning.

So before you decide to press that record button, stop and think about the value of your lecture. Do you need to demonstrate how much you know, or do you want to encourage the students to investigate for themselves, search for information, compare sources, analyse results and present their findings? The answer is clear, but I have a feeling it's going to be a difficult detox process.

Sunday, November 15, 2015

MOOCs as open ecosystems


I'm still waiting for the day we can consign the term MOOC to the acronym graveyard (let's discuss open education instead of a cryptic four-letter acronym that is far too vague and open to too many interpretations). In the meantime the courses keep coming and the debate continues. The public image of MOOCs is that of the expensive, highly polished courses coming from the major players (Coursera, EdX, FutureLearn etc) and the public perception is one of recorded lectures, self-study and automated self-testing. Little attention is given to all the open, collaborative learning that takes place under the radar, without expensive platforms, franchising deals or big budgets. All too often MOOCs and online education are treated as synonyms, rather than the former being a small but highly visible subset of the latter. One recent article by three Stanford professors in Inside Higher Ed, What We’ve Learned From MOOCs, draws conclusions from that university's MOOC ventures in recent years. The article contains few surprises but it's worth discussing some of the points they make.

MOOCs do not replace regular university courses and seldom attract the traditional university target group (19-23 year-olds). Instead we should consider MOOCs as ... a new instructional genre - somewhere between a digital textbook and a successful college course. This comparison with an advanced digital textbook is very apt when applied to the xMOOC sector that Stanford represents. I see great potential in letting other institutions use the material and structure of a MOOC as the backbone of a local course that could include face-to-face or online meetings and tutorials. The key factor here is whether the MOOC provider is willing to openly license the course for reuse and even adaptation, or to offer some kind of franchising agreement. MOOCs could certainly be used on campus and for credit if the content and structure are complemented by tuition, support, assessment and examination. The main problem is the massive aspect. The academic ideal of small tutorial groups and teacher-student contact is not very scalable on campus (that's why classes of over 100 students tend to be lecture-based). Online you can create lots of smaller study groups with facilitators, and this model is widely used today, but once the course becomes truly massive even that gets too complicated.

MOOCs in their present form are not the answer to wider access to higher education.
So far they have mostly attracted professionals in developed countries and have not reached those who have no access to traditional higher education. The issue here is providing better support to new student groups, for example by letting local institutions offer technical support and help with study skills in the local language. Universities can't provide all the support that inexperienced online learners need, so they should offer an open interface that allows local support structures to be easily added to the ecosystem. Remember that the vast majority of the world has never heard of MOOCs and is often not even aware of online education. They won't find a MOOC unless someone (a local school, community centre, library) tells them about it. Their participation and success will very much depend on the local support they receive, and no amount of video guides, FAQ pages or user forums from a far-off university can really replace this. Let's try this approach more and see if the accessibility vision for MOOCs can be realised. We have only just started to address this issue.

Recorded lectures do not lead to effective learning
This is really a no-brainer, since over-reliance on lecturing is exactly what's wrong with traditional higher education. The problem, both on campus and online, is that universities focus on lecturing not because it enhances learning but because it's easy and cheap to do. Lecturing is very scalable whereas tutoring and interaction are not. This is not just a problem for online courses; many campus students have very little interaction with their teachers. However, there are many online courses today that are built around collaboration and interaction, and many MOOCs are experimenting with methods to increase engagement.

MOOCs have raised awareness of the potential of online learning
This is certainly true, since so many elite universities have suddenly discovered the field after many years of denial. Martin Weller points out in a short but insightful post, Lessons from the MOOC investment gold rush, that the motivation for many top universities entering the MOOC fray was the fear of being seen as out of date. MOOCs certainly put online learning in the media spotlight and consequently on the boardroom agenda. The problem, however, was that the MOOC boom failed to build on the lessons learned from years of successful online education at thousands of universities worldwide. The pointlessness of simply stacking up recorded lectures, PDF files and PowerPoints on a website, and the importance of creating learning communities and fostering peer engagement, were well known before mainstream MOOCs came along (especially in the early collaborative MOOCs of Siemens, Downes, Cormier, Alexander etc). Maybe now the strands can be linked.

MOOCs have not fixed higher education, but they are poignant reminders of the urgent problems of college cost and access, potential forerunners of truly effective educational technology, and valuable tools for advancing the science of learning. That’s progress.

MOOCs certainly haven't fixed anything, but they have widened a discussion that was previously restricted to the pioneers and enthusiasts. MOOCs are one element in an ongoing process of experimentation and innovation as education comes to terms with digitalisation, and since this is work in progress it's wrong to dismiss the concept as a failure. It hasn't really started yet. However, one thread running through my comments above is the need to stop seeing MOOCs as the product of one institution or one platform. If you offer a MOOC and you really want diversity, accessibility and higher levels of learner engagement, you need to open up the concept and let other institutions and initiatives add auxiliary services, provide local support, adapt to local language and culture and offer alternative forms of engagement. That way I think there is a future for MOOCs. Let's just change the name please.

Thursday, November 5, 2015

Online IS "in real life"


We seem to have a love-hate attitude to digital technology. On the one hand we're buying and intensively using all the shiny devices we can get our hands on, but at the same time we seem to be terrified of them. Mobiles and tablets are often portrayed as dangerous obsessions that are killing human interaction and turning us into zombies. We all enjoy jokes about people gazing into their mobiles, oblivious to the world around them, and long for the good old days when people spoke to each other. It's easy to criticise, but maybe we should consider what people are actually doing when they're looking at their screens.

An excellent article by Héctor L. Carral, Stop saying technology is causing social isolation, is the perfect myth-buster. The sight of everyone on a train staring into their screens provokes many people into claiming that mobiles are enslaving us. Why don't we talk to each other anymore? But if we cast our minds back 30 years or so, I don't remember any spontaneous discussions with fellow commuters then either. We all buried our heads in the morning paper or a book - what's the difference? I used to get really bored standing at cold, windy stations or bus stops with nothing to do but gaze into space. Today I use that time to listen to wonderful music or podcasts, see how my friends are doing, admire their photos, take part in a discussion or check my e-mails. We probably communicate more than ever before thanks to digital technology.

My main premise is that I don’t think smartphones are isolating us, destroying our social lives or ruining interactions. I see smartphones as instruments for communication. Instruments that enable interaction on ways that just weren’t possible before, connecting us with people all around the world, via Twitter, instant messaging or other services. Some may say that if you want to interact with people, you should interact with the ones around you, and that is probably true on certain occasions. But, on other occasions, I’m just not able to comprehend why should we be forced to interact with those physically close to us instead of with the people that we really want to interact with.

One reason for this suspicion of technology is the fact that many still only use it for entertainment and passing the time. We worry that we are becoming passive consumers of trivial entertainment rather than acknowledging the active, communicative and creative opportunities of digital technology. There's also the idea that communicating with friends digitally is not "real" communication and is therefore the opposite of "in real life". Somehow your online contacts are labelled as virtual or in some way "unreal". In the past people discussed anonymously in chat rooms, using pseudonyms and cartoon characters as avatars. There you had no idea who you were actually talking to, and it was understandable that this was not seen as real communication. However, today we mostly communicate with people we know and use real photos and profiles of ourselves to establish trust. When identities are clear and trust is established, online communication is definitely "in real life", and I can have just as intensive face-to-face discussions with online colleagues using tools like Skype as I could if we were in the same room. I see their faces, eye movements and body language.

As digital literacies develop, so will our perceptions of online behaviour. When we make the leap from being consumers to being producers and collaborators, we will hopefully lose these negative stereotypes that are still so prevalent today. It's time to drop expressions like IRL, cyberspace, virtual and everything beginning with e-, and realise that it's all about real human communication, using the opportunities offered by new media.