
Flying cars, digital literacy and the zone of possibility

Where’s my flying car? I was promised one in countless SF films, from Metropolis through to The Fifth Element. Well, they exist. Thirty seconds on the search engine of your choice will find you a dozen or so working prototypes (here’s a YouTube video with five).

A fine and upright gentleman flying in a small helicopter-like vehicle.
Jess Dixon’s flying automobile, c. 1940. Public Domain, held by State Library and Archives of Florida, via Flickr.

They have existed for some time. Come to think of it, the driving around on the road bit isn’t really the point. I mean, why would you drive when you could fly? I guess a small helicopter and somewhere to park would do.

So it’s not lack of technology that’s stopping me from flying to work. What’s more of an issue (apart from cost and environmental damage) is that flying is difficult. The slightest problem, like an engine stall or a bump with another vehicle, tends to be fatal. So the reason I don’t fly to work is largely down to me not having learnt how to fly.

The zone of possibility

In 2010 Kathryn Dirkin studied how three professors taught using the same online learning environment, and found that their approaches were very different. That won’t surprise many people, but the paper (which unfortunately is still behind a paywall) is worth a read for the details of the analysis. What I liked from her conclusions was that how someone teaches online depends on the intersection of their knowledge of the content, their beliefs about how it should be taught, and their understanding of the technology. She calls this intersection the zone of possibility. As with the flying car, the online learning experience we want may already be technologically possible; we just need to learn how to fly it (and consider the cost and effect on the environment).

I have been thinking about Dirkin’s zone of possibility over the last few weeks. How can it be increased? Should it be increased? On the latter, let’s just say that if technology can enhance education, then yes it should (but let’s also be mindful about the costs and impact on the environment).

So how, as a learning technologist, to increase this intersection of content knowledge, pedagogy and understanding of technology? Teachers’ content knowledge is a given; there is nothing a learning technologist can do to change that. I have also come to the conclusion that pedagogy is off limits. No technology-as-a-Trojan-horse for improving pedagogy, please; that just doesn’t work. It’s not that pedagogic approaches can’t or don’t need to be improved, but conflating that with technology seems counterproductive. So that left me thinking about teachers’ (and learners’) understanding of technology. Certainly, the other week, when I was playing with audio and video codecs and packaging formats that would work with HTML5 (keep repeating: H.264 and AAC in MPEG-4), I was aware of this. There seem to be three viable approaches: increase digital literacy, simplify the technology with better tools, and use learning technologists as intermediaries between teachers and technology. I leave it at that because it is not a choice of which, but of how much of each can be applied.
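For what it’s worth, that widely compatible combination (H.264 video and AAC audio in an MP4 container) can be produced with ffmpeg. A minimal sketch, with the filenames as placeholders:

```shell
# Transcode to H.264 video + AAC audio in an MP4 container for HTML5 playback.
# "input.mov" and "output.mp4" are placeholder filenames.
ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart output.mp4
```

The `-movflags +faststart` option moves the index to the front of the file so browsers can start playing before the whole file has downloaded.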

Does technology or pedagogy lead?

In terms of defining the “zone of possibility”, I think it is pretty clear that technology leads. Content knowledge and pedagogy change slowly compared to technology, and I think that difference in rate of change is reflected in most teachers’ understanding of those three factors. I would go as far as to say that it is counterfactual to suggest that our use of technology in HE has been led by anything other than technology: innovation in educational technology usually involves exploring new possibilities opened up by technological advances, not other factors. But having acknowledged this, it should also be clear that, having explored the possibilities, a sensible choice of what to use when teaching will be based on pedagogy (as well as cost and the effect on the environment).

New projects for me at Heriot-Watt

Understanding large numbers in context, an exercise with socrative

I came across an exercise that aims to demonstrate that numbers are easier to understand when broken down and put into context; it’s one of a number of really useful resources for the general public, journalists and teachers from the Royal Statistical Society. The idea is that the large numbers associated with important government budgets (you know, a few billion here, a few billion there) are difficult to get our heads around, whereas the same number expressed in a more familiar context, e.g. a person’s annual or weekly budget, should be easier to understand. I wondered whether it would work as an in-class exercise using Socrative; it’s the sort of thing that might be a relevant ice-breaker for a critical thinking course that I teach.
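The arithmetic behind this kind of re-expression is simple enough to sketch. The budget and population figures below are illustrative assumptions of mine, not the Royal Statistical Society’s numbers:

```python
# Re-express a large annual budget as a per-person weekly figure.
# Both figures are illustrative assumptions, not the RSS's data.
annual_budget = 116_000_000_000   # an NHS-scale budget in £ per year (assumed)
population = 65_000_000           # rough UK population (assumed)
weeks_per_year = 52

per_person_per_week = annual_budget / population / weeks_per_year
print(f"£{per_person_per_week:.2f} per person per week")
```

The point of the exercise is that the second figure, a few tens of pounds a week, is much easier to relate to one’s own spending than the eleven-digit annual one.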

A brief aside: Socrative is a free online student response system which “lets teachers engage and assess their students with educational activities on tablets, laptops and smartphones”. The teacher writes some multiple choice or short-response questions for students to answer, normally in-class. I’ve used it in some classes and students seem to appreciate the opportunity to think and reflect on what they’ve been learning; I find it useful in establishing a dialogue which reflects the response from the class as a whole, not just one or two students.

I put the questions from the Royal Stats. Soc. into Socrative as multiple choice questions, with no feedback on whether the answer was right or wrong except for the final question, just some linking text to explain what I was asking about. I left it running in “student-paced” mode and asked friends on Facebook to try it out over the next few days. Here’s a run-through of what they saw:

[Six screenshots of the quiz questions as shown to respondents in Socrative.]


Socrative lets you download the results as a spreadsheet showing the responses from each person to each question. A useful way to visualise the responses is as a Sankey diagram:

[Sankey diagram of the responses, question by question.]

[I created that diagram with SankeyMATIC. It was quite painless, though I could have been more intelligent in how I got from the raw responses to the input format required.]
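Getting from the raw responses to SankeyMATIC’s input format can be scripted. A sketch of one way to do it, using made-up placeholder responses rather than the real data: each row is one respondent’s sequence of answers, and each pair of consecutive answers becomes a flow.

```python
from collections import Counter

# One row per respondent, one answer per question.
# These response strings are made-up placeholders, not the real data.
responses = [
    ["£10B", "£200", "£30"],
    ["£1B",  "£200", "£30"],
    ["£10B", "£20",  "£3"],
]

# Count transitions between consecutive questions. Tag each answer with its
# question number so identical labels on different questions stay distinct nodes.
flows = Counter()
for row in responses:
    for q, (a, b) in enumerate(zip(row, row[1:]), start=1):
        flows[(f"Q{q}: {a}", f"Q{q+1}: {b}")] += 1

# Emit SankeyMATIC's "Source [amount] Target" input lines.
for (src, dst), n in flows.items():
    print(f"{src} [{n}] {dst}")
```

The printed lines can be pasted straight into SankeyMATIC; converging answers show up as flows merging into a single node.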

So did it work? What I was hoping to see was initial answers all over the place, converging on the correct answer: that is, not so many choosing £10B per annum for Q1 as £30 per person per week for the last question. That’s not really what I’m seeing. But I have some strange friends: a few people commented that they knew the answer for the big per-annum number but either could or could not do the arithmetic to get to the weekly figure. Also, it’s possible that the question wording was misleading people into thinking about how much it would cost to treat a person for a week in an NHS hospital. Finally, I have some odd friends who are more interested in educational technology than in answering questions about statistics, and who might just have been looking to see how Socrative worked. So I’m still interested in trying out these questions in class. Certainly Socrative worked well for this, and one thing I learnt (somewhat by accident) is that you can leave a Socrative quiz open for responses for several months.


First session

Today was the first session of the first course that I am teaching.

The course is on design for online learning; there are 22 students (more than anticipated, but not hugely more). The first session was an hour-long introduction, both to the contents of the course and to each other. I gave an overview of what the course covers, what the learning objectives are and why they might be interesting, and what the balance is between theory and hands-on, lecture and discussion, timetabled and open study, coursework and exam. A lot of the course derives from discussion based on the students’ own experiences (at least that’s what Roger, who has run this course for ten years or so, tells me works), so as a break from me talking I asked each person in the class what experience they had that is relevant to online learning.

The mechanics of the session worked and the timing was spot on. All the students had some experience of online learning: a VLE at school or university, computer-based training at work, forums when learning programming, revision resources (BBC Bitesize); some had experience of tutoring or training in other contexts. That’s good.

Less good is that me standing up talking about course objectives is pretty boring. I think that in trying to explain how something they don’t yet know might be useful I lost some of them. But maybe there’s no interesting way of making sure the students have that information, and I do think that you have to realise that you are confused before you can put your ideas in order.

Less necessary, perhaps, was any boredom while I went around the class one at a time asking for their experience. This may have worked better with a smaller class, but even then the answers were mostly of interest to me: they gave me an idea of who has interesting background knowledge and who is a confident speaker, and meant I could make a start at putting names to faces. Perhaps it would have been better done in parallel rather than in series, by asking them to write down their experience; some examples would have helped make sure they knew what sort of information I was interested in. On the plus side, it was good to see them writing notes while other people were saying their bit. I guess the notes were about what might be relevant, which I think means they spent a few minutes reflecting on what they already know.

One final observation struck me: hardly anyone had a laptop or tablet with them, and I didn’t see any of them using a phone. That’s odd in a class about online learning. I’m pretty sure that you can learn online even during a lecture.