In talking to people about modelling metadata I’ve picked up on a distinction mentioned by Stuart Sutton between entity-based modelling, typified by RDF and graphs, and record-based structures, typified by XML; however, I don’t think making this distinction alone is sufficient to explain the difference, let alone why it matters. I don’t want to get into the pros and cons of either approach here, just give a couple of examples of where something that works in a monolithic, hierarchical record falls apart when the properties and relationships for each entity are described separately and those descriptions are put into a graph. These are especially relevant when people familiar with XML or JSON start using JSON-LD. One of the great things about JSON-LD is that you can use instance data as if it were JSON, without paying much regard to the “LD” part; that’s not true when designing specs, because design choices that would be fine in a JSON record will not work in a linked data graph.
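To illustrate the kind of thing that goes wrong (a hypothetical sketch, not an example from the posts themselves): in a standalone JSON record it can be safe to reuse a generic local identifier, because each record is its own scope; in a graph, statements about the same identifier from different records merge onto one node.

```python
# Sketch: a design that is fine in a self-contained JSON record can
# break when the data is treated as a graph. Two hypothetical
# JSON-LD-ish records each reuse the same local id "#author".

record_a = {"@id": "#author", "name": "Ada Lovelace"}
record_b = {"@id": "#author", "name": "Charles Babbage"}

def merge_into_graph(records):
    """Naively merge records into a graph: statements about the same
    @id accumulate on one node, roughly as they would in RDF."""
    graph = {}
    for rec in records:
        node = graph.setdefault(rec["@id"], {})
        for key, value in rec.items():
            if key != "@id":
                node.setdefault(key, set()).add(value)
    return graph

graph = merge_into_graph([record_a, record_b])
# As two records there were two distinct authors; as a graph there is
# now a single node "#author" carrying both names.
```

The records were unambiguous on their own; only the graph view exposes the clash, which is why a spec designed record-by-record needs re-checking against the merged graph.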
As part of work to convert plain JSON records to proper RDF in JSON-LD, I often want to convert a string value to a URI that identifies a thing (a real-world concrete thing or a concept).
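One minimal way to do this (a sketch; the function name and base URI below are illustrative assumptions, not taken from the post) is to normalise the string into a slug and mint a URI under a namespace you control:

```python
from urllib.parse import quote

BASE = "https://example.org/concepts/"  # illustrative namespace, not a real one

def string_to_uri(value: str, base: str = BASE) -> str:
    """Turn a plain string value into a URI identifying a concept:
    trim, lowercase, collapse internal whitespace to hyphens, then
    percent-encode anything left that is not URI-safe."""
    slug = "-".join(value.strip().lower().split())
    return base + quote(slug, safe="-")

# In the JSON-LD, the literal string can then become a node reference:
record = {"subject": "Open Educational Resources"}
record["subject"] = {"@id": string_to_uri(record["subject"])}
```

In real conversions you would more likely look the string up against a controlled vocabulary and reuse an existing concept URI rather than mint a new one; the slug approach is a fallback when no such vocabulary exists.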
I enjoyed Martin Weller’s blog post series on his 25 years of Ed Tech, and the book that followed, so when Lorna said that she had agreed to read the chapter on e-Learning Standards, and would I like to join her and make it a double act, I thought… well, honestly, I thought about how much I don’t enjoy reading stuff out loud for other people. But I enjoy working with Lorna, and don’t get as many chances to do that as I would like, and so it happened.
I think the reading went well. You decide. Reading the definitions of the Dublin Core metadata element set I learnt one thing: I don’t want to be the narrator for audiobook versions of tech standards.
And then there’s the “between the chapters” podcast interview, which Lorna and I have just finished recording with Laura Pasquini, and which was fun. We covered a lot of the things that Lorna and I wanted to: that we think Martin was hard on Dublin Core Metadata (I think his view of it was tarnished by the IEEE LOM), but that we agree with the general thrust of what Martin wrote. Many EdTech standards were not a success; certainly the experience that many in EdTech had with standards was not a good one. But we all learnt from the experience and did better when it came to dealing with OER (Lorna expands on this in her excellent post reflecting on this chapter). Also, many technical standards relevant to education were a success, and we use them every day without (as Martin says) knowing much about them. And there’s the thing: Martin probably should never have been in the position of knowing about Dublin Core, IEEE LOM and UK LOM Core; they should just have been there behind the systems that he used, making things work. But I guess we have to remember that back then there weren’t many learning technologists to go round, and so it wasn’t so easy to find the right people to get involved.
We did forget to cover a few things in the chat with Laura.
We forgot how many elephants were involved in UK LOM Core.
We forgot “that would be an implementation issue”.
But my main regret is that we didn’t get to talk about #EduProg, which came about a few years later (the genesis story is on Lorna’s blog) as an analysis of a trend in Ed Tech that contrasted with the do-it-yourself-and-learn approach of EduPunk. EduProg was exemplified in many of the standards which were either “long winded and self-indulgent” or “virtuoso boundary pushing redefining forms and developing new techniques”, depending on your point of view. But there was talent there — many of the people behind EduProg were classically trained computer scientists. And it could be exciting. I for one will never forget Scott plunging a dagger into the keyboard to hold down the shift key while he ran arpeggios along the angle brackets. I hear it’s still big in Germany.
Thank you to Martin, Laura, Clint, Lorna and everyone who made the reading & podcast possible.
Added 5 Jan: here’s Lorna’s reflections on this recording.
[Feature image for this post, X-Ray Specs by @visualthinkery, is licenced under CC-BY-SA]
There was no Dublin Core conference this year, but there was the DCMI Virtual Event over the second half of September, and for the last session of that I hosted a panel session on LRMI Metadata in use. The recordings for many of the sessions are now available, including our LRMI panel.
It is now three years since I left Heriot-Watt to become an independent consultant, and an anniversary is as good a time as any to look back and reflect. I cannot do so without thinking how this year has been much harder for many of my friends and former colleagues than for me. Many of them had to strike over simple matters of equity and justice concerning pensions, pay gaps, and precarity; immediately after which they were hit with the massive systemic shock brought by covid-19. It annoyed me so much to hear reports of “dons on strike” (elitist bullshit that erases the work of my friends in learning technology, library, information services and other professional services), and then to hear that “universities are closed” when I’ve seen my former colleagues working miracles. Don’t get me started on having that work described as “so called blended learning”, or on conclusions drawn from emergency provision being extrapolated to online learning as a whole.
In comparison to all that my own work has been plain sailing. The second half of 2019 was fairly quiet. My work on K12-OCX, a metadata specification to help reusers of curriculum and content material, was paused while the project looked for more implementors; but I think it looks good. I was focusing on the Talent Marketplace Signaling W3C community, which made great progress in improving how schema.org can be used to describe job postings. I was also involved in some work with Cetis LLP colleagues looking at Curriculum Analytics for Jisc. The idea is that, rather than use data to analyse learners, we use it to analyse which aspects of a course or programme work well, and look for clues as to why. I also kept up my voluntary work with Dublin Core groups, mainly the LRMI Task Group, where we added some new terms to schema.org, and the Application Profile Interest Group, which let me explore some of the issues in using schema.org, for example in the K12-OCX spec.
Then around Christmas work started picking up. I got a contract from the USCCF to work supporting their T3 Innovation Network, mostly mapping data standards (exciting results on that soon) and keeping the Talent Signal community group ticking over. I also got new work from the Credential Engine, people I have loved working with since they were just a project on credential transparency. We are hoping to be able to work with Google to supply data from the Credential Registry that supports their Job Training (beta) search. I have written before that I find some aspects of the work in the “Talent Marketplace” uncomfortable. Nothing encapsulates that more than seeing Melania Trump announce that the US Government federal hiring process will value skills over degrees, and recognising how it links to the work I have been involved in linking job postings to skills and showing the competencies required to earn a credential. But, whoever takes the credit, it is work that has been building for five or more years; and whatever motives the people who announce it have, it benefits people who take non-traditional routes into jobs. I still think it is good work. My hope is that it shows the falseness of assuming Arts, Humanities and Social Sciences are inferior to STEM subjects in what they offer society, and that it helps people who aren’t able to follow the comparatively easy route from school to university to well-paid job.
Things are looking good for next year. There are some really interesting results coming. I am confident there will be interesting projects to work on, and I am looking forward to a couple of not-your-traditional-conferences I’m involved in. More on that soon.
The last couple of years I’ve been a keen follower of #PressEdConf, @pgogy and @nlafferty’s Twitter-based conference on WordPress in Education. I wasn’t able to present in the previous two editions because they took place on a day when I was travelling for an Easter visit to my parents. This year the date changed, so I was able to present. I’ve been thinking about technologies for OER recently, partly prompted by Risquez et al.’s paper “Towards a Devolved Model of Management of OER? The Case of the Irish Higher Education Sector” (see my thoughts here) and further stimulated by being asked to write a little about why open book publishing is important, so I thought I would take a look at WordPress as a technology for OER through the lens of David Wiley’s ALMS framework.
I sometimes get asked for advice by people who are thinking of setting up national infrastructure for OER based on institutional open access research repositories or similar, often with the rationale that doing so would mirror what has worked for open access research papers and cultural heritage. My advice is to think hard about whether it is appropriate to treat OER in the same way as these other types of resource. This week I read a paper, “Towards a Devolved Model of Management of OER? The Case of the Irish Higher Education Sector” by Angelica Risquez, Claire McAvinia, Yvonne Desmond, Catherine Bruen, Deirdre Ryan, and Ann Coughlan, which provides important evidence and analysis on this topic.
This month I will be curating for Open Scotland, that is writing a blog post or two and tweeting a bit on the #OpenScot hashtag. I’ve published the first blog post, Sharing curation in Open Scotland, and I am not going to reproduce the content here, so follow that link to read what I have to say about Dorothy Hodgkin, kindness in scientific research and how that relates to Open Scotland.
Here’s a thing that I think has been at the root of some long discussions I have been involved in, where people involved in modelling data just don’t seem to agree on what goes into a domain model, or on fine details of definitions. If it seems wrong or trivial to you, I’d really appreciate any comments along the lines of ‘nope, you’ve misunderstood that …’ or ‘nope, everyone knows this and works around it, the disagreement must have been about something else’.
What I think causes these long discussions is that RDF and Object Oriented modelling methods (e.g. UML) talk about different things with the same terms.
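To make one such contrast concrete (a hypothetical illustration, not an example from the post): in OO modelling an attribute is defined inside a class and exists only for instances of that class, whereas in RDF a property is a first-class, globally named thing that can in principle be asserted of any resource.

```python
# OO view: the attribute "title" belongs to the class; nothing else
# has a title unless its own class declares one.
class Book:
    def __init__(self, title: str):
        self.title = title

book = Book("Metadata Basics")

# RDF view: a property is itself a named resource; any subject can
# appear with it. Triples here are (subject, property, object).
triples = [
    ("ex:book1",  "dct:title", "Metadata Basics"),
    ("ex:event1", "dct:title", "DCMI Virtual Event"),  # perfectly fine in RDF
]

# Nothing about "dct:title" restricts it to books: an RDF domain is an
# inference hint, not a validation constraint as in a UML class diagram.
subjects_with_title = [s for s, p, o in triples if p == "dct:title"]
```

So when an OO modeller and an RDF modeller both say “class” and “property”, they can be describing different commitments, which may be one source of the disagreements above.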
A playlist of programmes about women from the BBC’s In Our Time radio programme.
I’m a big fan of In Our Time, the BBC radio programme where Melvyn Bragg discusses the history of ideas with academics. Some time back the BBC released the entire In Our Time back catalogue as episodes to download, via podcasts and the web. A while back I created a selection from the podcast feed for Roman history, presenting the episodes in more-or-less chronological order. Here’s a similar chronological selection of women who have been topics of In Our Time programmes. I’ve also edited together an RSS file for a podcast feed if you would like this selection delivered direct to your listening device.