Category Archives: MACS enhancement

BTL Surpass for online assessment in Computer Science

Over the last couple of years I have been leading the introduction of BTL’s Surpass online assessment platform for exams in Computer Science. I posted the requirements for an online exam system we agreed on a few months ago. I have now written up an evaluation case study: Use of BTL Surpass for online exams in Computer Science, an LTDI report (local copy). TL;DR: nothing is perfect, but Surpass did what we hoped, and we plan to continue & expand its use.

My colleague Hans-Wolfgang has also presented on our experiences of “Enhancing the Learning Experience on Programming-focused Courses via Electronic Assessment Tools” at the Trends in Functional Programming in Education Conference, Canterbury, 19-21 June 2017. This paper includes work by Sanusi Usman on using Surpass for formative assessment.

A fill-the-blanks style question for online exams in computer science, showing a few lines of Java code with gaps for the student to complete. (Not from a real exam!)
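To give a flavour of the question style, a gap-fill item might look something like the following. This is my own invented illustration, not a question from any real exam: the answers are shown in place, with comments marking the tokens that would be blanked out for the student.

```java
// Invented example of a fill-in-the-blanks exam question.
// The student would see this code with the tokens marked [gap n] hidden,
// and be asked to type them in.
public class GapFillExample {

    // Sum the elements of an array.
    static int sum(int[] values) {
        int total = 0;                            // [gap 1: initial value]
        for (int i = 0; i < values.length; i++) { // [gap 2: loop condition]
            total += values[i];                   // [gap 3: accumulation]
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3, 4})); // prints 10
    }
}
```

Questions like this test whether students can read and complete code without the scaffolding of multiple-choice options, which is one reason gap-fill items were attractive for programming exams.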

Thoughts on Support for Technology Enhanced Learning in HE

I was asked to put forward my thoughts on how the use of technology to enhance teaching and learning should be supported where I work. I work in a UK University that has campuses overseas, and which is organised into Schools (Computer Science is in a School with Maths, forming one of the smaller Schools). This was my first-round brain dump on the matter. It looks like something might come of it, so I’m posting it here to ask for comments. Continue reading

Reflections on a little bit of open education (TL;DR: it works).

We are setting up a new honours degree programme which will involve use of online resources for work-based blended learning. I was asked to demonstrate some of the resources and approaches that might be useful. This is one of the quick examples that I was able to knock up(*), with some reflections on how Open Education helped me. By the way, I especially like the last bit about “open educational practice”, so if the rest bores you, just skip to the end. Continue reading

Requirements for online exam system

Some time back we started looking for an online exam system for some of our computer science exams. Part of the process was to list a set of “acceptance criteria,” i.e. conditions that any system we looked at had to meet. One of my aims in writing these was to avoid chasing after some mythical ‘perfect’ system, and to focus on finding one that would meet our needs. Although the headings below differ, as a system for high stakes assessment the overarching requirements were security, reliability and scalability, which are reflected below.

Having these criteria was useful in reaching a consensus decision when there was no ‘perfect’ system.

Note, added July 2018: you might also be interested in: Hans-Wolfgang Loidl, Phil Barker and Sanusi Usman (2017). Enhancing the Learning Experience on Programming-focused Courses via Electronic Assessment Tools. Presented at Trends in Functional Programming in Education, Canterbury 19-21 June, 2017; and,
Phil Barker (2017). Use of BTL Surpass for online exams in Computer Science (ICBL Reports).

Continue reading

Technology enhanced learning in HW MACS

At the beginning of the summer I was handed an internal project convening a group to put into action the strategic plan for the use of technology to enhance teaching and learning at the School of Mathematical and Computer Sciences (MACS) at Heriot-Watt. Yesterday the first phase of that came to fruition in a show and tell seminar.

Our initial approach has been to identify and share what is already happening within the School, to try to open up those pockets of innovation where one or two people are doing something that many others could also try. We started with a survey, asking people to tell us about interesting uses of technology that they had tried. We left the individual respondents to define what counted as “interesting”, but did our best to encourage anyone who was going beyond basic PowerPoint and VLE use to let us know. We also asked what ideas for use of technology staff wanted to try in their teaching, and what (if anything) was stopping them from doing so. We had a decent number of replies, and from these identified some common themes across the two questions (i.e. there was overlap between what some people had tried and what other people were interested in trying, a bit of a win that deserves more than a parenthesis at the end of a sentence). Those themes were:

  1. managing course work in the VLE, e.g. setting up rubrics, delegated marking
  2. online assessment
  3. promoting interaction in class
  4. use of video for short explanations, demonstrations etc.

We used this to decide what we had to include in the first event yesterday, which we billed as a show and tell seminar. We had three speakers on each of the first three themes listed (it seemed that explanations and demos of how to use video for explanations and demos could themselves be done as videos 🙂 ). The speakers had 10 minutes each to show what they had done, so really just enough time to provide a taster so that others could decide whether they wanted to know more. That’s 9 speakers in 90 minutes, and it was a real challenge to come up with a format which didn’t last too long (academics are always busy, so we couldn’t expect much more than an hour or two of their time) but did cover a wide range of topics. We had an audience of 40, which is pretty good–I’ve been to plenty of similar events covering a whole institution which have had lower turnouts. The feedback has been that while not everything interested everyone, there was something for everyone that made the time committed worthwhile. So on the whole I think the format worked, even though it was hectic.

Of course, by packing so much into a compressed schedule few people will have been able to get all the information they wanted, so follow-up will be crucial to the success of this work as a whole. The show and tell was a kick-off event; we want to encourage people to continue to share the ideas that interest them, and for there to be other, smaller events with a tighter focus to facilitate this.

Quick notes: Ian Pirie on assessment

Ian Pirie, Assistant Principal for Learning Developments at the University of Edinburgh, came out to Heriot-Watt yesterday to talk about some assessment and feedback initiatives at UoE. The background ideas motivating what they have been doing are not new, and Ian didn’t say that they were: they centre on the pedagogy of assessment & feedback as learning, and the generally low student satisfaction relating to feedback shown through the NSS. Ian did make a very compelling argument about the focus of assessment: he asked whether we thought the point of assessment was

  1. to ensure standards are maintained [e.g. only the best will pass],
  2. to show what students have learnt, or
  3. to help students learn.

The responses from the room were split 2:1 between answers 2 and 3, showing progress away from the exam-as-a-hurdle model of assessment. Ian’s excellent point was that if you design your assessment to help students learn, that will mean doing things like making sure your assessments address the right objectives, that the students understand these learning objectives and criteria, and that they get feedback which is useful to them; do that, and you will also address points 2 and 1.

Ideas I found interesting from the initiatives at UoE included:

  • Having students describe learning objectives in their own words, to check they understand them (or at least have read them).
  • Giving students verbal feedback and having them write it up themselves (for the same reason). Don’t give students their mark until they have done this: that way they won’t avoid doing it, which matters because once students know whether they have / have not done “well enough” their interest in the assessment wanes.
  • Peer marking with adaptive comparative judgement. Getting students to rank other students’ work leads to reliable marking (and the course leader can assess which pieces of work sit on grade boundaries, if that’s what you need).

In the context of that last one, Ian mentioned No More Marking, which has links with the Mathematics Learning Support Centre at Loughborough University. I would like to know more about how many comparisons need to be made before a reliable rank ordering is arrived at; this will influence how practical the approach is, given the number of students on a course and the length of the work being marked (you wouldn’t want every student to have to mark every submission if each submission is many pages long). But given the advantages of peer marking in getting students to reflect on the objectives of a specific assessment, I am seriously considering using the approach to mark a small piece of coursework from my design for online learning course. There is an additional rationale there: it illustrates the use of technology to manage assessment and facilitate a pedagogic approach, showing that computer-aided assessment goes beyond multiple choice objective tests, which is part of the syllabus for that course.
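On the question of how many comparisons are needed: tools like No More Marking use statistical models to decide when a ranking has stabilised, and I don’t know the details of their engine. But a best-case lower bound is easy to sketch. Even with perfectly consistent judges, producing a full rank order by pairwise comparison takes on the order of n·log₂n judgements, which is far fewer than comparing every pair. The following is my own toy illustration (not any ACJ tool’s actual algorithm): it counts the pairwise “judgements” a comparison sort makes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ComparisonCount {

    // Count the pairwise judgements needed to fully order n submissions,
    // assuming a perfectly consistent judge. This is a best case: real
    // adaptive comparative judgement has to cope with noisy judgements,
    // so each item is typically compared more times than this.
    static int comparisonsToRank(int n) {
        List<Integer> submissions = new ArrayList<>();
        for (int i = 0; i < n; i++) submissions.add(i);
        Collections.shuffle(submissions); // unknown initial order

        AtomicInteger count = new AtomicInteger();
        submissions.sort((a, b) -> {
            count.incrementAndGet();      // one "judgement" per comparison
            return Integer.compare(a, b);
        });
        return count.get();
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 50, 200}) {
            System.out.println(n + " submissions: " + comparisonsToRank(n)
                    + " pairwise judgements (vs " + n * (n - 1) / 2
                    + " to compare all pairs)");
        }
    }
}
```

For 200 submissions this comes out at roughly 1,500 judgements rather than the 19,900 needed to compare all pairs, which at least suggests the workload per student could be kept manageable; the real figure for ACJ will be higher because of judge inconsistency and repeated pairings.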