Category Archives: teaching

Three resources about gender bias

These are three resources that look like they might be useful in understanding and avoiding gender bias. They caught my attention because I cover some cognitive biases in the Critical Thinking course I teach. I also cover the advantages of having diverse teams working on problems (the latter based on discussion of How Diversity Makes Us Smarter in SciAm). Finally, like any responsible teacher in information systems & computer science, I am keen to see more women in my classes.

Iris Bohnet on the BBC Radio 4 Today programme, 3 January.  If you have access via a UK education institution with an ERA licence you can listen to the clip via the BUFVC Box of Broadcasts.  Otherwise here’s a quick summary. Bohnet stresses that much gender bias is unconscious: individuals may not be aware that they act in biased ways. Awareness of the issue and diversity training are not enough on their own to ensure fairness. She stresses that organisational practices and procedures are the easiest effective way to remove bias. One example she quotes is that to recruit more male teachers, job adverts should not “use adjectives that in our minds stereotypically are associated with women such as compassionate, warm, supportive, caring.” This is not because teachers should not have these attributes or because men cannot have them, but because research shows[*] that these attributes are associated with women and may subconsciously deter male applicants.

[*I don’t like my critical thinking students saying broad and vague things like ‘research shows that…’. It’s OK for a three-minute slot on a breakfast news show, but I’ll have to do better. I hope the details are somewhere in Iris Bohnet (2016), What Works: Gender Equality by Design.]

This raised a couple of questions in my mind. If gender bias is unconscious, how do you know you do it? And, what can you do about it? That reminded me of two other things I had seen on bias over the last year.

An Implicit Association Test (IAT) on Gender-Career associations, which I took a while back. It’s a clever little test based on how quickly you can classify names and career attributes. You can read more about these tests on the Project Implicit website or try the same test that I did (after a few disclaimers and some other information gathering, it’s currently the first one on their list).
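As an aside, the scoring behind these tests boils down to comparing your reaction times between the two pairing conditions. Here is a deliberately simplified sketch of that idea in Python; the real scoring (Greenwald et al.’s “D score”) adds error penalties and trial filtering, and the latencies below are made-up numbers:

```python
# A much-simplified sketch of how an IAT is scored (the "D score" idea):
# compare how quickly items are sorted when the pairing matches the stereotype
# versus when it does not. Real scoring adds error penalties and trial
# filtering; these response times (in milliseconds) are invented.
import statistics

compatible = [610, 655, 590, 700, 640, 615]      # e.g. male+career / female+family pairing
incompatible = [780, 820, 760, 850, 790, 805]    # e.g. male+family / female+career pairing

pooled_sd = statistics.stdev(compatible + incompatible)
d_score = (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd

print(f"D score is roughly {d_score:.2f}")   # larger positive values = stronger association
```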

A gender bias calculator for recommendation letters based on the words that might be associated with stereotypically male or female attributes. I came across this via Athene Donald’s blog post Do You Want to be Described as Hard Working?, which describes the issue of subconscious bias in letters of reference. I guess this is the flip side of the job advert example given by Bohnet. There is lots of other useful and actionable advice in that blog post, so if you haven’t read it yet, do so now.
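As far as I can tell, the calculator works by counting words drawn from lists of stereotypically “communal” and “agentic” terms. A toy version looks something like this; the word lists are illustrative fragments (seeded from Bohnet’s examples above), not the calculator’s actual lists:

```python
# Toy version of the idea behind the recommendation-letter calculator:
# count words from (illustrative, incomplete) lists of stereotypically
# "communal" and "agentic" terms.
import re

communal = {"compassionate", "warm", "supportive", "caring", "helpful", "kind"}
agentic = {"confident", "ambitious", "independent", "assertive", "outstanding", "intellectual"}

def tally(letter: str) -> dict:
    words = re.findall(r"[a-z']+", letter.lower())
    return {
        "communal": sum(w in communal for w in words),
        "agentic": sum(w in agentic for w in words),
    }

print(tally("She is a warm, caring and supportive colleague with outstanding ideas."))
# {'communal': 3, 'agentic': 1}
```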

Reflective learning logs in computer science

Do you have any comments, advice or other pointers on how to guide students to maintaining high quality reflective learning logs?

Context: I teach part of a first year computer science / information systems course on Interactive Systems.  We have some assessed labs where we set the students fixed tasks to work on, and there is coursework. For the coursework the students have to create an app of their own devising. They start with something simple (think of it as a minimum viable product) but then extend it to involve interaction with the environment (using their device’s sensors), other people, or other systems. Among the objectives of the course are that students learn to take responsibility for their own learning, to appreciate their own strengths and weaknesses, and to understand what is possible within time constraints. We also want students to gain experience in conceiving, designing and implementing an interactive app, and we want them to reflect on and provide evidence about the effectiveness of the approach they took.

Part of the assessment for this course is by way of the students keeping reflective learning logs, which I am now marking. I am trying to think how I could better guide the students to write substantial, analytic posts (including how to encourage engagement from those students who don’t see the point of keeping a log).

Guidance and marking criteria

Based on those snippets of feedback that I found myself repeating over and over, here’s what I am thinking of providing as guidance to next year’s students:

  • The learning log should be filled in whenever you work on your app, which should be more than just during the lab sessions.
  • For set labs, entries with the following structure will help bring out the analytic elements:
    • What was I asked to do?
    • What did I anticipate would be difficult?
    • What did I find to be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students?
    • What would I do differently if I had to do this again?
  • For coursework entries the structure can be amended to:
    • What did I do?
    • What did I find to be difficult? How did this compare to what I anticipated would be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students on my work so far?
    • What would I do differently if I had to do this again?
    • What do I plan to do next?
    • What do I anticipate to be difficult?
    • How do I plan to overcome outstanding issues and expected difficulties?

These reflective learning logs are marked out of 5 in the middle of the course and again at the end (so they represent 10% of the total course mark), according to the following criteria:

  1. contributions: No entries, or very brief (i.e. one or two sentences) entries only: no marks. Regular entries, more than once per week, with substantial content: 2 marks.
  2. analysis: Brief account of events only or verbatim repetition of notes: no marks. Entries which include meaningful plans with reflection on whether they worked; analysis of problems and how they were solved; and evidence of re-evaluation of plans as a result of what was learnt during the implementation and/or as a result of feedback from others: 3 marks.
  3. note: there are other ways of doing really well or really badly than those covered above.

Questions

Am I missing anything from the guidance and marking criteria?

How can I encourage students who don’t see the point of keeping a reflective learning log? I guess some examples of where such logs matter in professional computing practice would help.

These are marked twice, using rubrics in Blackboard, in the middle of the semester and at the end. Is there any way of attaching two grading rubrics to the same assessed log in Blackboard? Or a workaround to set the same blog as two graded assignments?

Answers on a postcard… Or the comments section below. Or email.

XKCD or OER for critical thinking

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read a research paper, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by xkcd, and that xkcd is CC licensed. Open Education Resources can be fun.

How scientists think

[source|explain]

Hypothesis testing

Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that.

[source|explain]

Blind trials

[source|explain]

Interpreting statistics

[source|explain]

p-hacking

[source|explain]

Confounding variables

There are also a lot of global versions of this map showing traffic to English-language websites which are indistinguishable from maps of the location of internet users who are native English speakers

[source|explain]

Extrapolation

[source|explain]

[source|explain]

Confirmation bias in information seeking

[source|explain]

[source|explain]

Undistributed middle

[source|explain]

Post hoc ergo propter hoc

Or correlation ≠ causation.

He holds the laptop like that on purpose, to make you cringe.

[source|explain]

[source|explain]

Bandwagon fallacy…

…and fallacy fallacy

[source|explain]

Diversity and inclusion

[source|explain]

Licence: All xkcd comics are by Randall Munroe and licensed under a Creative Commons Attribution-NonCommercial 2.5 License. This means you’re free to copy and share these comics (but not to sell them). More details.

[Updated 15/11/2016 to add full source & licence info and some links, which I really ought to have known better than to forget.]

Quick notes: Ian Pirie on assessment

Ian Pirie, Assistant Principal for Learning Developments at the University of Edinburgh, came out to Heriot-Watt yesterday to talk about some assessment and feedback initiatives at UoE. The background ideas motivating what they have been doing are not new, and Ian didn’t say that they were; they centre on the pedagogy of assessment & feedback as learning, and the generally low student satisfaction relating to feedback shown through the USS. Ian did make a very compelling argument about the focus of assessment: he asked whether we thought the point of assessment was

  1. to ensure standards are maintained [e.g. only the best will pass]
  2. to show what students have learnt,
    or
  3. to help students learn.

The responses from the room were split 2:1 between answers 2 and 3, showing progress away from the exam-as-a-hurdle model of assessment. Ian’s excellent point was that if you design your assessment to help students learn, that will mean doing things like making sure your assessments address the right objectives, that the students understand these learning objectives and criteria, and that they get feedback which is useful to them; and in doing so you will also address points 2 and 1.

Ideas I found interesting from the initiatives at UoE included:

  • Having students describe learning objectives in their own words, to check they understand them (or at least have read them).
  • Giving students verbal feedback and having them write it up themselves (for the same reason). Don’t give students their mark until they have done this: that way they can’t avoid doing it, and besides, once students know whether they have done “well enough” their interest in the assessment wanes.
  • Peer marking with adaptive comparative judgement. Getting students to rank other students’ work leads to reliable marking (the course leader can then assess which pieces of work sit on grade boundaries if that’s what you need).

In the context of that last one, Ian mentioned No More Marking, which has links with the Mathematics Learning Support Centre at Loughborough University. I would like to know more about how many comparisons need to be made before a reliable rank ordering is arrived at, since that will influence how practical the approach is given the number of students on a course and the length of the work being marked (you wouldn’t want all students to have to mark all submissions if each submission were many pages long). But given the advantages of peer marking in getting students to reflect on the objectives of a specific assessment, I am seriously considering using the approach to mark a small piece of coursework from my design for online learning course. There’s the additional rationale there that it illustrates the use of technology to manage assessment and facilitate a pedagogic approach, showing that computer-aided assessment goes beyond multiple choice objective tests, which is part of the syllabus for that course.
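Out of curiosity about that “how many comparisons?” question, here is a rough simulation. It is not the algorithm No More Marking actually uses: the script qualities, the logistic comparison model and the crude win-rate scoring are all illustrative assumptions, but it gives a feel for how the reliability of the rank ordering grows with the number of judgements per script:

```python
# Rough simulation: how does the reliability of a rank ordering built from
# random pairwise comparisons grow with the number of judgements per script?
# Script "qualities", the logistic comparison model and the win-rate scoring
# are illustrative assumptions, not any real system's method.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_scripts = 30                          # e.g. one class's submissions
true_quality = rng.normal(0, 1, n_scripts)

def simulated_reliability(judgements_per_script):
    wins = np.zeros(n_scripts)
    appearances = np.zeros(n_scripts)
    for _ in range(n_scripts * judgements_per_script // 2):
        a, b = rng.choice(n_scripts, size=2, replace=False)
        # better scripts are more likely, but not certain, to win a comparison
        p_a_wins = 1 / (1 + np.exp(-(true_quality[a] - true_quality[b])))
        winner = a if rng.random() < p_a_wins else b
        wins[winner] += 1
        appearances[[a, b]] += 1
    score = wins / np.maximum(appearances, 1)   # a real system fits a Bradley-Terry model
    rho, _ = spearmanr(score, true_quality)     # agreement with the "true" ordering
    return rho

for k in (4, 8, 16, 32):
    mean_rho = np.mean([simulated_reliability(k) for _ in range(20)])
    print(f"{k:>2} judgements per script: rank correlation roughly {mean_rho:.2f}")
```

(Real systems pair scripts adaptively and fit a proper statistical model, so they should need fewer judgements than this naive random pairing.)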

Understanding large numbers in context, an exercise with socrative

I came across an exercise that aims to demonstrate that numbers are easier to understand when broken down and put into context; it’s one of a number of really useful resources for the general public, journalists and teachers from the Royal Statistical Society. The idea is that the large numbers associated with important government budgets (you know, a few billion here, a few billion there, pretty soon you’re dealing with large numbers) are difficult to get our heads around, whereas the same number expressed in a more familiar context, e.g. a person’s annual or weekly budget, should be easier to understand. I wondered whether that exercise would work as an in-class exercise using Socrative; it’s the sort of thing that might be a relevant ice breaker for the Critical Thinking course that I teach.
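To make the arithmetic concrete, this is the kind of breakdown the exercise relies on; the figures below are round, illustrative numbers rather than official statistics:

```python
# Illustrative only: turning a headline annual budget into a per-person,
# per-week figure. The budget and population are round numbers chosen for
# the arithmetic, not official statistics.
annual_budget = 100e9          # roughly £100 billion per year
population = 65e6              # roughly the UK population

per_person_per_year = annual_budget / population
per_person_per_week = per_person_per_year / 52

print(f"£{per_person_per_year:,.0f} per person per year")   # about £1,538
print(f"£{per_person_per_week:,.0f} per person per week")   # about £30
```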

A brief aside: Socrative is a free online student response system which “lets teachers engage and assess their students with educational activities on tablets, laptops and smartphones”. The teacher writes some multiple choice or short-response questions for students to answer, normally in-class. I’ve used it in some classes and students seem to appreciate the opportunity to think and reflect on what they’ve been learning; I find it useful in establishing a dialogue which reflects the response from the class as a whole, not just one or two students.

I put the questions from the Royal Stats. Soc. into Socrative as multiple choice questions, with no feedback on whether the answer was right or wrong (except for the final question), just some linking text to explain what I was asking about. I left it running in “student-paced” mode and asked friends on Facebook to try it out over the next few days. Here’s a run through of what they saw:

[Screenshots of the Socrative questions as respondents saw them, 31 March 2015]

 

Socrative lets you download the results as a spreadsheet showing the responses from each person to each question. A useful way to visualise the responses is as a sankey diagram:
[Sankey diagram of the responses, question by question]

[I created that diagram with sankeymatic. It was quite painless, though I could have been more intelligent in how I got from the raw responses to the input format required.]
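For anyone wanting to do the same, the conversion is only a few lines of Python. This sketch assumes the Socrative export has one row per respondent and one column per question (adjust the filename and column names to suit your own download), and uses SankeyMATIC’s one-flow-per-line input format, “Source [amount] Target”:

```python
# Sketch: turn a Socrative results export into SankeyMATIC input.
# Assumes one row per respondent and one column per question; the filename
# and column names below are placeholders to adjust.
import csv
from collections import Counter

questions = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6"]        # assumed column names
flows = Counter()

with open("socrative_results.csv", newline="") as f:    # hypothetical filename
    for row in csv.DictReader(f):
        for q_from, q_to in zip(questions, questions[1:]):
            # prefix answers with the question so nodes are distinct per question
            source = f"{q_from}: {row[q_from]}"
            target = f"{q_to}: {row[q_to]}"
            flows[(source, target)] += 1

# SankeyMATIC expects one flow per line: Source [amount] Target
for (source, target), count in sorted(flows.items()):
    print(f"{source} [{count}] {target}")
```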

So did it work? What I was hoping to see was the initial answers being all over the place but converging on the correct answer, that is, not so many choosing £10B per annum for Q1 as choosing £30 per person per week for the last question. That’s not really what I’m seeing. But I have some strange friends: a few people commented that they knew the answer for the big per annum number but either could or couldn’t do the arithmetic to get to the weekly figure. Also it’s possible that the question wording was misleading people into thinking about how much it would cost to treat a person for a week in an NHS hospital. Finally, I have some odd friends who are more interested in educational technology than in answering questions about statistics, who might just have been looking to see how Socrative worked. So I’m still interested in trying out this question in class. Certainly Socrative worked well for this, and one thing I learnt (somewhat by accident) is that you can leave a quiz in Socrative open for responses for several months.

 

QAA Scotland Focus On Assessment and Feedback Workshop

Today was spent at a QAA Scotland event which aimed to identify and share good practice in assessment and feedback, and to gather suggestions for feeding in to a policy summit for senior institutional managers that will be held on 14 May.  I’ve never had much to do with technology for assessment, though I’ve worked with good specialists in that area, and so this was a useful event for catching up with what is going on.

"True Humility" by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia, click image for details)
“True Humility” by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia)

The first presentation was from Gill Ferrell on electronic management of assessment. She started by summarising the JISC assessment and feedback programmes of 2011-2014. An initial baseline survey for this programme had identified practice that could at best be described as “excellent in parts”, but with causes for concern in other areas: wide variations in practice for no clear reason, programmes in which assessment was fragmentary rather than building a coherent picture of a student’s capabilities and progress, not much evidence of formative assessment, not much student involvement in deciding how assessment was carried out, assessments that did not reflect how people would work after they graduate, policies that were more about procedures than educational aims, and so on. Gill identified some of the excellent parts that had served as starting points for the programme, for example the REAP project from CAPLE, formerly at Strathclyde University, and she explained how the programme proceeded from there with ideas such as: projects agreeing on basic principles of what they were trying to do (the challenge was to do this in a way that allowed scope to change and improve practice); projects involving students in setting learning objectives; encouraging discussion around feedback; changing the timing of assessment to avoid over-compartmentalised learning; shifting from summative to formative assessment; and making assessment ipsative, i.e. focusing on comparison with the student’s past performance to show what each individual was learning.

A lifecycle model for assessment from Manchester Metropolitan helped locate some of the points where progress can be made.

Assessment lifecycle developed at Manchester Metropolitan University. Source: Open course on Assessment in HE.

Steps 5, “marking and production of feedback”, and 8, “reflecting”, were those where most help seemed to be needed (Gill has a blog post with more details).

The challenges were all pedagogic rather than technical; there was a clear message from the programme that the electronic management of assessment and feedback was effective and efficient.  So, Jisc started scoping work on the Electronic Management of Assessment. A second baseline review in Aug 2014 showed trends in the use of technology that have also been seen in similar surveys by the Heads of eLearning Forum: eSubmission (e.g. use of TurnItIn) is the most embedded use of technology in managing assessment, followed by some use of technology for feedback. Marking and exams were the areas where least was happening. The main pain points were around systems integration: systems were found to be inflexible, many were based around US assumptions about assessment practice and processes, and assessment systems, VLEs and student record systems often just didn’t talk to each other. Staff resistance to the use of technology for assessment was also reported to be a problem; students were felt to be much more accepting. There was something of an urban myth that the QAA wouldn’t permit certain practices, which enshrined policy and existing procedure so that innovation happened “in the gaps between policy”.

The problems Gill identified all sounded quite familiar to me, particularly the fragmentary practice and lack of systems integration. What surprised me most was the low uptake of computer marked assessments and computer set exams. My background is in mathematical sciences, so I’ve seen innovative (i.e. going beyond MCQs) computer marked assessments since about 1995 (see SToMP and CALM). I know it’s not appropriate for all subjects, but I was surprised it’s not used more where it is appropriate (more on that later). On computer set exams: it’s now nearly 10 years since school pupils first sat online exams, so why is HE so far behind?

We then split into parallel sessions for some short case-study style presentations. I heard from:

Katrin Uhilg and Anna Rolinska from the University of Glasgow spoke about the use of wikis (or other collaborative authoring environments such as Google Docs) for learning-oriented assessment in translation. The tutor sets a text to be translated; students work in groups on this, but can see and provide feedback on each other’s work. They need to make informed decisions about how to provide and how to respond to feedback. I wish there had been more time to go into some of the practicalities around this.

Jane Guiller of Glasgow Caledonian had students creating interactive learning resources using Xerte. They provide support for the use of Xerte and for issues such as copyright. These were peer assessed using a rubric. Students really appreciate demonstrating a deep understanding of a topic by creating something that is different to an essay. The approach also builds and demonstrates the students’ digital literacy skills. There was a mention at the end that the resources created are released as OERs.

Lucy Golden and Shona Robertson of the University of Dundee spoke about using WordPress blogs in a distance learning course on teaching in FE. Learners were encouraged to keep a reflective blog on their progress; Lucy and Shona described how they encouraged (OK, required) the keeping of this blog through a five-step induction, and how they and the students provided feedback. These are challenges that I can relate to from asking students on one of my own courses to keep a reflective blog.

Jamie McDermott and Lori Stevenson of Glasgow Caledonian University presented on using rubrics in Grademark (on TurnItIn). The suggestion came from their learning technologist John Smith, who clearly deserves a bonus, and who pointed out that they had access to this facility that would speed up marking and the provision of feedback, and would help clarify the criteria for various grades. After Jamie used Grademark rubrics successfully in one module, they were implemented across a programme. Lori described the thoroughness with which the rubrics had been developed, with drafting, feedback from other staff, feedback from students and reflection. A lot of effort, but with the collateral benefits of better coherency across the programme and better understanding by the students of what was required of them.

Each one of these four case studies contained something that I hope to use with my students.

The final plenary was Sally Jordan, who teaches physics at the Open University, talking about computer marked assessment. Sally demonstrated some of the features of the OU’s assessment system, for example the use of a computer algebra system to make sure that mathematically equivalent answers were marked appropriately (e.g. y = (x + 2)/2 and y = x/2 + 1 may both be correct). Also the use of text analysis to mark short textual answers, allowing “it decreases” to be marked as partially right and “it halves” to be marked as fully correct when the model answer is “it decreases by 50%”. This isn’t simple keyword matching: you have to be able to distinguish between “kinetic energy converts to potential energy” and “potential energy converts to kinetic energy” as right and entirely wrong, even though they contain the same words. These are useful for testing a student’s conceptual understanding of physics, and can be placed “close to the learning activity” so that they provide feedback at the right time.
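The algebraic-equivalence part of this is easy to sketch with an off-the-shelf computer algebra system. The following is just a minimal illustration of the idea using SymPy, not the OU’s actual marking engine:

```python
# Minimal sketch: two answers count as equivalent if their difference
# simplifies to zero. Not the OU system, just the underlying idea.
import sympy as sp

x = sp.symbols("x")
model_answer = sp.sympify("(x + 2)/2")

def mark(student_input: str) -> bool:
    try:
        student_answer = sp.sympify(student_input)
    except sp.SympifyError:
        return False                      # unparseable input is marked wrong
    return sp.simplify(model_answer - student_answer) == 0

print(mark("x/2 + 1"))    # True: algebraically the same as (x + 2)/2
print(mark("x/2 + 2"))    # False: a different expression
```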

Here was the innovative automatic marking I had expected to be commonly used for appropriate subjects. But Sally also said that an analysis of computer marked assessments in Moodle showed that 75% of the questions were plain old multiple choice questions, and probably as much as 90% were some variety of selected-response question. These lack authenticity (no patient ever says “Doctor, I’ve got one of the following four things wrong with me…”) and can be badly set so as to be guessable without prior knowledge. So why? Well, Sally had made clear that the OU is exceptional: huge numbers of students learning at a distance mean that there are few more cost-effective options for marking and providing feedback, even when a large amount of effort is required. The numbers of students also allowed for piloting of questions and the use of assessment analytics to sort out the most useful questions and feedback. For the rest of us, Sally suggested we could do two things:
A) run MOOCs, with peer marking, and use machine learning to infer the rules for marking automatically, or
B) talk to each other: share the load of developing questions, and share the questions (making them editable for different contexts).

So, although I haven’t worked much in assessment, I ended up feeling on familiar ground, with an argument being made for one form of Open Education or another.

What do students know?

I read this by Graham Gibbs in the Times Higher Education over the weekend:

Studies have identified changes over time in what teachers pay attention to, and there is broad agreement about the stages involved.

Postgraduate teaching assistants may be concerned about whether students like them or are impressed by them, and whether they can get away with passing themselves off as an academic in their discipline. It is all about identity and self-confidence rather than about effectiveness.

Teachers then focus their attention on the subject matter itself: “Do I know my stuff?” While some never move beyond this focus on content, most subsequently shift their focus to methods: “How should I go about this?” There is evidence that training programmes improve student ratings of teaching practices.

Eventually, and with luck, teachers evolve towards a focus of attention on effectiveness: “What have students learned?” and “What is it that I have done that has had most impact on what students have learned?”

That question of “what have students learned?” is one that has interested me. One of the resources that got me interested in it is the video “A Private Universe“. I like the contrast (or lack of it) in understanding of what causes the seasons between the MIT graduate who studied planetary motion and the 9th grade student. Clearly, apart from being too late to be of any use, exams don’t always answer that question. What I find does help is to stop talking at the students, to stop presenting information and to start listening. I ask my students to keep a learning log describing what they do and don’t understand, and I also use Socrative to ask questions in class. I don’t need Socrative because my classes are so large that I can’t ask students individually, but because the students seem more willing to answer.

Hmm, Gibbs’s last question is a difficult one to answer.

Teaching philosophy statement

As part of the Post Graduate Certificate in Higher Education that I am taking I need to outline my conception of teaching and learning and describe “how and why you do (or will) teach”.

On the left is a model learner I found in Leuven. If you think of it as showing knowledge flowing into someone’s head as they read a book, then it is not my conception of learning. I believe:

  • knowledge is not something that exists externally and can somehow be transferred into a student’s head; it is something that is built and rebuilt in the brain.
  • learning is not something that can be done passively, simply by absorbing what is in a book; it requires active effort
  • education is a social activity, not a solo activity

(For the avoidance of doubt, I don’t preclude reading as an active effort or as part of a social activity.)

Part of my role as a teacher is to provide and be part of the social setting, activities and resources through which students may develop their knowledge and understanding of the outside world.

I am involved in education because I believe in the capability of people to learn, to change and to improve, throughout their lives. Their time at University is part of that, but I think it is important to consider what students will need the day after they leave University and in 10 years’ time. Once they leave University many of them will find themselves without a teacher for the first time in their lives. Part of my role as a teacher is to prepare them to be without a teacher. The area that I teach (computer science/information systems) is rapidly changing. When I suggested to a colleague that he should tell his new students what it is that they will learn in the next four years that will be useful in ten years’ time he replied “Tough question – ‘we are/should be teaching students to solve problems we don’t know using technologies we don’t know'”. That suggests to me that problem solving and the ability to learn new things are more important than content knowledge. That is a thought I find quite terrifying, since content knowledge is far easier to teach, but I can at least encourage students to think about what they know, what they need to know and how they are learning.

I also believe that teaching is difficult and the resources and approaches used are difficult to create. It is important to evaluate what works and what doesn’t, to change and to improve what doesn’t work, and to share what does.

[The image is my own, available on Flickr under a Creative Commons attribution-only licence: http://www.flickr.com/photos/philbarker/4663480615/. Leuven being the brewing capital of Belgium, and therefore of the world, an alternative interpretation is that it represents the role of beer in learning.]

Really pleased to see two students have started their learning logs, but wondering how much of a concerted campaign it will take to get the rest going.

The students get marks for them, and I did my best to explain why I think they are worth doing (but perhaps it was a bit rushed at the end of the lesson). I made a mistake in not giving time at the end of class for the first entry: I asked them to make a start, but then put up info about next week, which distracted them.

Reinforcement needed in week two.

First session

Today was the first session of the first course that I am teaching.

The course is Design for Online Learning; there are 22 students (more than anticipated, but not hugely more). The first session was a one-hour introduction, both to the contents of the course and to each other. I gave an overview of what the course covers, what the learning objectives are and why they might be interesting, and what the balance is between theory and hands-on, lecture and discussion, timetabled and open study, coursework and exam. A lot of the course derives from discussion based on the students’ own experiences (at least that’s what Roger, who has run this course for 10 years or so, tells me works), so as a break from me talking I asked each person in the class what experience they had that is relevant to online learning.

The mechanics of the session worked and the timing was spot on. All the students had some experience of online learning: a VLE at school or uni, computer-based training at work, forums when learning programming, revision resources (BBC Bitesize); some had experience in tutoring or training in other contexts. That’s good.

Less good is that me standing up talking about course objectives is pretty boring. I think that in trying to explain how something they don’t yet know might be useful I lost some of them. But maybe there’s no interesting way of making sure the students have that information, and I do think that you have to realise that you are confused before you can put your ideas in order.

Less necessary, perhaps, was the boredom while I went around the class one at a time asking about their experience. This may have worked better with a smaller class, but even then the answers are mostly of interest to me: it gave me an idea of who has interesting background knowledge and who is a confident speaker, and it meant I could make a start at putting names to faces. Perhaps it would have been better done in parallel rather than in series, by asking them to write down their experience; some examples would have helped make sure that they knew what sort of information I was interested in. On the plus side, it was good to see them writing notes while other people were saying their bit. I guess the notes were about what might be relevant, which I think means that they spent a few minutes reflecting on what they already know.

One final observation struck me: hardly anyone had a laptop or tablet with them, and I didn’t see any of them using a phone. That’s odd in a class about online learning. I’m pretty sure that you can learn online even during a lecture.