
Linked learning highlights from Florence #LILE2015

I was lucky enough to go to Florence for some of the pre-WWW2015 conference workshops because Stefan Dietze invited me to talk at the Linked Learning workshop "Learning & Education with the Web of Data". Rather than summarize everything I saw, I would like to give brief pointers to three personal highlights from that workshop and from the "Web-based Education Technologies" (WebET 2015) workshop that followed it. Many thanks to Stefan for organizing the workshop (and also to the Spanish company Gnoss for sponsoring it).

Semantic TinCan

I've followed the work on Tin Can / xAPI / Experience API since its inception. Lorna and I put a section about it into our Cetis Briefing on Activity Data and Paradata, so I was especially interested in Tom De Nies's presentation on TinCan2PROV: Exposing Interoperable Provenance of Learning Processes Through Experience API Logs. Tin Can statements are basically elaborations of "I did this", providing more information about the who, how and what referred to by those three words. Tom has a background in provenance metadata and saw the parallel between those statements and the recording of actions by agents more generally, and specifically the model behind the W3C PROV ontology for recording information about the entities, activities and people involved in producing a piece of data or thing. Showing that Tin Can can be mapped to W3C PROV is of more than academic interest: the Tin Can spec provides only one binding, JSON, so the first step of Tom's work was to upgrade that to JSON-LD; the mapping to PROV then allows any of the PROV serializations (RDF/XML, N3, JSON-LD) to be used, bringing Tin Can statements into a format that semantic technologies can work with. Tom is hopeful that the mapping is lossless; you can try it yourself at tincan2prov.org.
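To make the parallel concrete, here is a minimal sketch of how the actor/verb/object structure of an xAPI statement lines up with PROV's Agent/Activity/Entity model. To be clear, this is my own hand-waved illustration of the general shape of such a mapping, with invented identifiers, not the TinCan2PROV mapping itself:

```python
# Illustration only: a toy mapping from an xAPI statement to PROV-style
# triples. The URIs and the exact choice of PROV terms are my own
# assumptions, not the TinCan2PROV mapping.

statement = {  # a minimal xAPI "I did this" statement
    "id": "urn:uuid:12345678-1234-5678-1234-567812345678",
    "actor": {"name": "Alice", "mbox": "mailto:alice@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.org/courses/stats-101"},
}

def to_prov_triples(stmt):
    """Map actor/verb/object onto PROV Agent/Activity/Entity (illustrative only)."""
    agent = stmt["actor"]["mbox"]
    activity = stmt["id"]  # treat the statement's event as the PROV Activity
    entity = stmt["object"]["id"]
    return [
        (activity, "rdf:type", "prov:Activity"),
        (agent, "rdf:type", "prov:Agent"),
        (entity, "rdf:type", "prov:Entity"),
        (activity, "prov:wasAssociatedWith", agent),
        (activity, "prov:used", entity),
        (activity, "rdfs:label", stmt["verb"]["display"]["en-US"]),
    ]

for triple in to_prov_triples(statement):
    print(triple)
```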

Linking Courses

I also have an increasing interest in the semantic representation of courses: there is some interest in adding Courses to schema.org, and within my own department some of us would like to explore the advantages of moving away from course descriptors as PDF documents to something that could be a little more connected, both with each other and with the outside world. Fouad Zablith's presentation on Interconnecting and Enriching Higher Education Programs using Linked Data was like seeing the end point of that second line of interest. The data model Fouad uses combines a model of a course with information about the concepts taught and the learning materials used to teach them. Course information is recorded using Semantic MediaWiki to produce both human-readable and linked data representations of the courses across a program of study. A bookmarklet allows information about resources that are useful for these courses to be added to the graph, importantly attached via the concept being studied, and so available to students of any course that teaches that concept. Finally, on the topic of several courses teaching the same concepts: sometimes such repetition is deliberate, but sometimes it is unwanted.

Showing which concepts (middle) from one course (left) occur in other courses (right). Image from Fouad Zablith's presentation (see link in text)

Fouad showed how analysis of the concept-course part of the graph could be useful in surfacing cases where a course contained perhaps too many concepts that had already been covered elsewhere (see image above).
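To give a feel for the kind of graph this produces, here is a minimal sketch using rdflib with a made-up vocabulary and data (Fouad's actual model and property names will differ): courses link to the concepts they teach, resources attach to concepts rather than courses, and a simple traversal surfaces concepts taught by more than one course.

```python
from collections import defaultdict
from rdflib import Graph, Namespace

# Invented namespace, properties and data, for illustration only.
EX = Namespace("http://example.org/curriculum/")

g = Graph()
g.add((EX.Accounting101, EX.teaches, EX.DoubleEntryBookkeeping))
g.add((EX.Finance201, EX.teaches, EX.DoubleEntryBookkeeping))
g.add((EX.Finance201, EX.teaches, EX.NetPresentValue))
# Resources are attached to the concept, so any course teaching that
# concept can surface them.
g.add((EX.IntroVideo, EX.explains, EX.NetPresentValue))

# Surface concepts taught in more than one course (possible duplication).
courses_per_concept = defaultdict(set)
for course, _, concept in g.triples((None, EX.teaches, None)):
    courses_per_concept[concept].add(course)

for concept, courses in courses_per_concept.items():
    if len(courses) > 1:
        print(f"{concept} is taught in {len(courses)} courses")
```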

Linking Resources

One view of a course (which makes especial sense to anyone who thinks about etymology) is that it is a learning pathway, and one view of a pathway is as a Directed Acyclic Graph, i.e. an ordered route through a series of resources. In the WebET workshop, Andrea Zielinski presented A Case Study on the Use of Semantic Web Technologies for Learner Guidance, which modelled such a learning pathway as a directed graph and represented this graph using OWL 2 DL. The conclusion to her paper says "The approach is applicable in the Semantic Web context, where distributed resources often are already annotated according to metadata standards and various open-source domain and user ontologies exist. Using the reasoning framework, they can be sequenced in a meaningful and user-adaptive way." At this stage, though, the focus is on showing that the expressivity of OWL 2 DL is enough to represent a learning pathway and on testing the efficiency of querying such graphs.
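The underlying idea is easy to sketch outside OWL: a pathway is a set of resources plus prerequisite links, and any topological order of that DAG is a valid sequence through the material. The toy example below is my own and has nothing to do with Andrea's OWL 2 DL encoding or reasoning framework; it just illustrates the DAG view of a pathway.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Each resource maps to the set of resources that must be studied first.
# Invented pathway, for illustration only.
prerequisites = {
    "intro-video": set(),
    "worked-examples": {"intro-video"},
    "self-test-quiz": {"worked-examples"},
    "extension-reading": {"intro-video"},
    "final-assessment": {"self-test-quiz", "extension-reading"},
}

# Any topological order of the DAG is a valid route through the pathway.
route = list(TopologicalSorter(prerequisites).static_order())
print(" -> ".join(route))
```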

Understanding large numbers in context, an exercise with socrative

I came across an exercise that aims to demonstrate that numbers are easier to understand when broken down and put into context; it's one of a number of really useful resources for the general public, journalists and teachers from the Royal Statistical Society. The idea is that the large numbers associated with important government budgets (you know, a few billion here, a few billion there, pretty soon you're dealing with large numbers) are difficult to get our heads around, whereas the same number expressed in a more familiar context, e.g. a person's annual or weekly budget, should be easier to understand. I wondered whether that exercise would work as an in-class exercise using Socrative; it's the sort of thing that might be a relevant ice breaker for a critical thinking course that I teach.

A brief aside: Socrative is a free online student response system which “lets teachers engage and assess their students with educational activities on tablets, laptops and smartphones”. The teacher writes some multiple choice or short-response questions for students to answer, normally in-class. I’ve used it in some classes and students seem to appreciate the opportunity to think and reflect on what they’ve been learning; I find it useful in establishing a dialogue which reflects the response from the class as a whole, not just one or two students.

I put the questions from the Royal Stats. Soc. into Socrative as multiple choice questions, with no feedback on whether the answer was right or wrong except for the final question, just some linking text to explain what I was asking about. I left it running in "student-paced" mode and asked friends on Facebook to try it out over the next few days. Here's a run-through of what they saw:

[Screenshots of the six questions as they appeared in Socrative.]

 

Socrative lets you download the results as a spreadsheet showing the responses from each person to each question. A useful way to visualise the responses is as a sankey diagram:
[Sankey diagram of the responses to each question.]

[I created that diagram with sankeymatic. It was quite painless, though I could have been more intelligent in how I got from the raw responses to the input format required.]
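For anyone who wants to automate that step, a rough sketch of the conversion is below. The column names and file layout are my guesses at what a Socrative export might look like, not its actual format; SankeyMATIC expects flow lines of the form "Source [amount] Target".

```python
import csv
from collections import Counter

def sankeymatic_lines(csv_path, question_columns):
    """Turn per-person responses into SankeyMATIC "Source [amount] Target" lines."""
    flows = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            answers = [f"{q}: {row[q]}" for q in question_columns]
            # Count each person's transition from one question's answer
            # to their answer to the next question.
            for src, dst in zip(answers, answers[1:]):
                flows[(src, dst)] += 1
    return [f"{src} [{n}] {dst}" for (src, dst), n in sorted(flows.items())]

# Hypothetical file and column names:
# for line in sankeymatic_lines("responses.csv", ["Q1", "Q2", "Q3", "Q4", "Q5"]):
#     print(line)
```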

So did it work? What I was hoping to see was the initial answers being all over the place but converging on the correct answer, that is, not as many people choosing £10B per annum for Q1 as chose £30 per person per week for the last question. That's not really what I'm seeing. But I have some strange friends: a few people commented that they knew the answer for the big per annum figure, whether or not they could do the arithmetic to get to the weekly one. Also, it's possible that the question wording misled people into thinking about how much it would cost to treat a person for a week in an NHS hospital. Finally, some of my friends are more interested in educational technology than in answering questions about statistics, and might just have been looking to see how Socrative worked. So I'm still interested in trying out these questions in class. Certainly Socrative worked well for this exercise, and one thing I learnt (somewhat by accident) is that you can leave a Socrative quiz open for responses for several months.

 

QAA Scotland Focus On Assessment and Feedback Workshop

Today was spent at a QAA Scotland event which aimed to identify and share good practice in assessment and feedback, and to gather suggestions for feeding in to a policy summit for senior institutional managers that will be held on 14 May.  I’ve never had much to do with technology for assessment, though I’ve worked with good specialists in that area, and so this was a useful event for catching up with what is going on.

"True Humility" by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia, click image for details)
“True Humility” by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia)

The first presentation was from Gill Ferrell on electronic management of assessment. She started by summarising the JISC assessment and feedback programmes of 2011-2014. An initial baseline survey for this programme had identified practice that could at best be described as "excellent in parts", but with causes for concern in other areas: wide variations in practice for no clear reason; programmes in which assessment was fragmentary rather than building a coherent picture of a student's capabilities and progress; not much evidence of formative assessment; not much student involvement in deciding how assessment was carried out; assessments that did not reflect how people would work after they graduate; policies that were more about procedures than educational aims; and so on. Gill identified some of the excellent parts that had served as starting points for the programme, for example the REAP project from CAPLE, formerly at Strathclyde University, and she explained how the programme proceeded from there with ideas such as: projects agreeing on basic principles of what they were trying to do (the challenge was to do this in a way that allowed scope to change and improve practice); involving students in setting learning objectives; encouraging discussion around feedback; changing the timing of assessment to avoid over-compartmentalized learning; shifting from summative to formative assessment; and making assessment ipsative, i.e. focussing on comparison with the student's past performance to show what each individual was learning.

A lifecycle model for assessment from Manchester Metropolitan helped locate some of the points where progress can be made.

Assessment lifecycle developed at Manchester Metropolitan University. Source: Open course on Assessment in HE.

Steps 5, "marking and production of feedback", and 8, "reflecting", were those where most help seemed to be needed (Gill has a blog post with more details).

The challenges were all pedagogic rather than technical; there was a clear message from the programme that the electronic management of assessment and feedback was effective and efficient. So Jisc started scoping work on the Electronic Management of Assessment. A second baseline review in August 2014 showed trends in the use of technology that have also been seen in similar surveys by the Heads of eLearning Forum: eSubmission (e.g. use of TurnItIn) is the most embedded use of technology in managing assessment, followed by some use of technology for feedback; marking and exams were the areas where least was happening. The main pain points were around systems integration: systems were found to be inflexible, many were based on US assumptions about assessment practice and processes, and assessment systems, VLEs and student record systems often just didn't talk to each other. Staff resistance to the use of technology for assessment was also reported to be a problem; students were felt to be much more accepting. There was also something of an urban myth that the QAA wouldn't permit certain practices, which enshrined policy and existing procedure so that innovation happened "in the gaps between policy".

The problems Gill identified all sounded quite familiar to me, particularly the fragmentary practice and lack of systems integration. What surprised me most was how little uptake there was of computer-marked assessments and computer-set exams. My background is in the mathematical sciences, so I've seen innovative (i.e. going beyond MCQs) computer-marked assessments since about 1995 (see SToMP and CALM). I know it's not appropriate for all subjects, but I was surprised it's not used more where it is appropriate (more on that later). On computer-set exams, it's now nearly 10 years since school pupils first sat online exams, so why is HE so far behind?

We then split into parallel sessions for some short case-study style presentations. I heard from:

Katrin Uhilg and Anna Rolinska from the University of Glasgow about the use of wikis (or other collaborative authoring environments such as Google Docs) for learning-oriented assessment in translation. The tutor sets a text to be translated; students work in groups on this, but can see and provide feedback on each other's work. They need to make informed decisions about how to provide and how to respond to feedback. I wish there had been more time to go into some of the practicalities around this.

Jane Guiller of Glasgow Caledonian had students creating interactive learning resources using Xerte. They provide support for the use of Xerte and for issues such as copyright. The resources were peer assessed using a rubric. Students really appreciate demonstrating a deep understanding of a topic by creating something other than an essay. The approach also builds and demonstrates the students' digital literacy skills. There was a mention at the end that the resources created are released as OERs.

Lucy Golden and Shona Robertson of the University of Dundee spoke about using WordPress blogs in a distance learning course on teaching in FE. Learners were encouraged to keep a reflective blog on their progress; Lucy and Shona described how they encouraged (OK, required) the keeping of this blog through a five-step induction, and how they and the students provided feedback. These are challenges that I can relate to from asking students on one of my own courses to keep a reflective blog.

Jamie McDermott and Lori Stevenson of Glasgow Caledonian University presented on using rubrics in Grademark (on TurnItIn). The suggestion came from their learning technologist, John Smith (who clearly deserves a bonus), who pointed out that they had access to a facility that would speed up marking and the provision of feedback and would help clarify the criteria for the various grades. After Jamie used Grademark rubrics successfully in one module, they were implemented across a programme. Lori described the thoroughness with which they had been developed, with drafting, feedback from other staff, feedback from students, and reflection. A lot of effort, but with the collateral benefits of better coherence across the programme and a better understanding by the students of what was required of them.

Each one of these four case studies contained something that I hope to use with my students.

The final plenary was from Sally Jordan, who teaches physics at the Open University, talking about computer-marked assessment. Sally demonstrated some of the features of the OU's assessment system, for example the use of a computer algebra system to make sure that mathematically equivalent answers are marked appropriately (e.g. y = (x + 2)/2 and y = x/2 + 1 may both be correct). She also showed the use of text analysis to mark short textual answers, allowing "it decreases" to be marked as partially right and "it halves" to be marked as fully correct when the model answer is "it decreases by 50%". This isn't simple keyword matching: you have to be able to distinguish between "kinetic energy converts to potential energy" and "potential energy converts to kinetic energy", one being right and the other entirely wrong, even though they contain the same words. These questions are useful for testing a student's conceptual understanding of physics, and can be placed "close to the learning activity" so that they provide feedback at the right time.
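The algebraic equivalence check is the easiest of these to sketch. The snippet below is just an illustration of the general technique using sympy, not the OU system (whose internals I haven't seen): subtract the model answer from the student's answer and see whether the difference simplifies to zero.

```python
import sympy

def equivalent(student_answer: str, model_answer: str) -> bool:
    """True if the two expressions are mathematically equivalent."""
    difference = sympy.sympify(student_answer) - sympy.sympify(model_answer)
    return sympy.simplify(difference) == 0

print(equivalent("(x + 2)/2", "x/2 + 1"))  # True: the same expression rearranged
print(equivalent("(x + 2)/2", "x/2 + 2"))  # False
```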

Here was the innovative automatic marking I had expected to be commonly used in appropriate subjects. But Sally also said that an analysis of computer-marked assessments in Moodle showed that 75% of the questions were plain old multiple choice questions, and probably as much as 90% were some variety of selected-response question. These lack authenticity (no patient ever says "Doctor, I've got one of the following four things wrong with me…") and can be badly set so as to be guessable without prior knowledge. So why? Well, Sally had made clear that the OU is exceptional: with huge numbers of students learning at a distance, there are few more cost-effective options for marking and providing feedback, even though a large amount of effort is required to set the questions up. The numbers of students also allowed for the piloting of questions and the use of assessment analytics to sort out the most useful questions and feedback. For the rest of us, Sally suggested we could do two things:
A) run MOOCs, with peer marking, and use machine learning to infer the rules for automatic marking; or
B) talk to each other: share the load of developing questions, and share the questions themselves (making them editable for different contexts).

So, although I haven’t worked much in assessment, I ended up feeling on familiar ground, with an argument being made for one form of Open Education or another.

hypothes.is for web annotation

A while back I went to the OER annotation summit where I learnt about hypothes.is, a tool for adding a layer of annotation on top of the web. If the idea of annotating the web sounds like one of those great ideas that has been tried a dozen times before and never worked, then you’re right and (importantly) the hypothes.is team know about it. It’s also one of the great ideas that is worth trying over and again because of its potential. Today I took a quick look at how they’re getting on, and it looks good.

I installed the Chrome extension and registered; it took about two minutes, so I don't mind if you try it right now. That gives me a small icon in my browser that activates a sidebar (collapsed by default) for annotations. This lets you add comments to the page, highlight sections of text, add comments to those highlights, and view other people's highlights and comments on that page. Annotations can be private, visible only to the annotator, or public and visible to everyone. Where annotations are public it is possible to reply to them, leading to conversations and discussions.
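Public annotations are also reachable programmatically. As far as I understand it (the endpoint and field names here are my recollection of the Hypothesis API, so check the API documentation before relying on the details), a search by page URL looks roughly like this:

```python
import requests

def public_annotations(page_url):
    """Yield (user, annotation text) for public annotations on a page."""
    resp = requests.get("https://api.hypothes.is/api/search",
                        params={"uri": page_url, "limit": 20})
    resp.raise_for_status()
    for row in resp.json().get("rows", []):
        yield row.get("user"), row.get("text")

for user, text in public_annotations("https://example.org/some-article"):
    print(user, ":", text)
```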

There's also a WordPress plugin which adds annotation functionality to individual websites. I haven't looked at that, but it could be a very useful addition to the WordPress-for-education tool kit.

I was worried that comments on specific phrases might be fragile, breaking if the page was changed, but a bit of experimenting showed they are reasonably robust, surviving small edits to the annotated text and changes to the preceding text that shifted the annotated passage down the page. I'm sure you can break it if you try, but I think their fuzzy anchoring works for reasonable cases.
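To give a flavour of why annotations can survive edits, here is a toy illustration of the fuzzy-anchoring idea using approximate string matching. This is emphatically not hypothes.is's actual algorithm, which is more sophisticated; it just shows how a quoted span can be re-located after the surrounding text changes.

```python
import difflib

def relocate(quote, new_text, threshold=0.8):
    """Find the best approximate match for `quote` in `new_text`.

    Returns (start, end, similarity) for the best-matching window of the
    same length as the quote, or None if nothing is similar enough.
    """
    best = None
    for start in range(max(1, len(new_text) - len(quote) + 1)):
        window = new_text[start:start + len(quote)]
        ratio = difflib.SequenceMatcher(None, quote, window).ratio()
        if best is None or ratio > best[2]:
            best = (start, start + len(quote), ratio)
    return best if best and best[2] >= threshold else None

old_quote = "annotations can be private or public"
edited_page = ("Some newly added introduction. Annotations can be private, "
               "or public and visible to everyone.")
print(relocate(old_quote, edited_page))
```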

I like hypothes.is because it has the capacity to make all the static content on the web the focus of reflective and social activity for education. Whether such activities are manageable and scalable I don't know: how many open conversations can you have going on around a single web page before everything just becomes swamped? How many annotations can you save before finding your notes becomes harder than finding the content again? If there are limits, I guess that reaching them would be a nice problem to have.

I also like hypothes.is because it is an open project. I don't just mean that it allows the content of annotations to be shared, creating open discussion, though it does, and I like that. I don't just mean that it works on the open web rather than within the confines of a single site, though it does, and I like that. I don't just mean that it's Open Source, though it is, and I like that. And I don't just mean that it supports open standards, though it does, and I like that. What I really like is the openness in discussing the project's goals, approaches and plans that can be found on the project wiki and blog.