I am continuing my January review of the projects that I am working on with this post about my work on the Open Competencies Network (OCN). OCN is part of the T3 Network of Networks, an initiative of the US Chamber of Commerce Foundation that aims to explore “emerging technologies and standards in the talent marketplace to create more equitable and effective learning and career pathways.” Unsurprisingly, the Open Competencies Network focuses on competencies, but we understand that term broadly, including any “assertions of academic, professional, occupational, vocational and life goals, outcomes … for example knowledge, skills and abilities, capabilities, habits of mind, or habits of practice” (see the OCN competency explainer for more). I see competencies understood in this way as the link between my interests in learning, education and credentials, and the world of employment and other activities. This builds on previous projects around Talent Marketplace Signalling, which I also did for the US Chamber of Commerce Foundation.
About the work
The OCN has two working groups: Advancing Open Competencies (AOC), which deals with outreach, community building, policy and governance issues, and the Technical Advisory Workgroup. My focus is on the latter. We have a couple of major technical projects, the Competency Explorer and the Data Ecosystem Standards Mapping (DESM) Tool, both of which probably deserve their own post at some time, but in brief:
Competency Explorer aims to make competency frameworks readily available to humans and machines by developing a membership trust network of open registries, each holding one or more competency frameworks, and enabling search and retrieval of those frameworks and their competencies from any registry node in the network.
DESM was developed to support data standards organizations—and the platforms and products that use those standards—in mapping, aligning and harmonizing data standards to promote data interoperability for the talent marketplace (and beyond). The DESM allows data to move from a system or product using one data standard to another system or product that uses a different data standard.
Both of these projects deal with heterogeneous metadata, working around the theme of interoperability between metadata standards.
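To make the idea of mapping between standards concrete, here is a purely illustrative sketch of the kind of statement such a mapping might boil down to. This is not the actual DESM data schema; the field names and values are invented for illustration only.

{
  "note": "hypothetical example, not the real DESM schema",
  "mapping": {
    "fromStandard": "Standard A",
    "fromProperty": "courseTitle",
    "toStandard": "Standard B",
    "toProperty": "name",
    "predicate": "identical",
    "comment": "both properties carry the human-readable title of a course"
  }
}

Given a set of statements like this about how terms in one standard relate to terms in another, a system receiving data expressed in Standard A can work out where each value belongs in Standard B.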
About my role
My friend and former colleague Shiela once described our work as “going to meetings and typing things”, which pretty much sums up the OCN work. The purpose is to contribute to the development of the projects, both of which were initiated by Stuart Sutton, whose shoes I am trying to fill in OCN.
For the Competency Explorer I have helped turn community-gathered use cases into features that can be implemented to enhance the Explorer, and am currently one of the leads of an agile, feature-driven development project with software developers at Learning Tapestry to implement as many of these features as possible and figure out what it would take to implement the others. I’m also working with data providers and Learning Tapestry to develop technical support around providing data for the Competency Explorer.
For DESM I helped develop the internal data schema used to represent the mappings between data standards, and am currently helping to support people who are using the tool to map a variety of standards in a pilot, or closed beta test. This has been a fascinating exercise in seeing a project through from a data model on paper, through working with programmers implementing it, to working with people as they try to use the tool developed from it.
Some personal reflections on relating educational content to curriculum frameworks, prompted by conversations about the Oak National Academy (a broad curriculum of online material available to schools, based on the English national curriculum) and OEH-Linked-Frameworks (an RDF tool for visualizing German educational frameworks). It draws heavily on the BBC curriculum ontology (by Zoe Rose, I think). I’m thinking about these with respect to work I have been involved in, such as K12-OCX and LRMI.
If you want to know why you would do this, you might want to skip ahead and read the “so what?” section first. But in brief: representing curriculum frameworks in a standard, machine-readable way, and mapping curriculum materials to that, would help when sharing learning resources.
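To give a rough idea of what “standard, machine-readable” might mean here, a single statement from a curriculum framework could be published as linked data along these lines. This is only a sketch using SKOS; the identifiers and labels are invented for illustration.

{
  "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
  "@id": "http://example.org/curriculum/ks2-history/romans",
  "@type": "skos:Concept",
  "skos:prefLabel": "The Roman Empire and its impact on Britain",
  "skos:inScheme": {"@id": "http://example.org/curriculum/ks2-history"},
  "skos:broader": {"@id": "http://example.org/curriculum/ks2-history/britain-to-1066"}
}

Once each statement in a framework has a URI like this, a learning resource can point at it (for example through LRMI’s AlignmentObject) and a search or recommendation service can use those links when filtering resources.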
I sometimes get asked for advice by people who are thinking of setting up national infrastructure for OER based on institutional open access research repositories or similar, often with the rationale that doing so would mirror what has worked for open access research papers and cultural heritage. My advice is to think hard about whether it is appropriate to treat OER in the same way as these other types of resource. This week I read a paper, “Towards a Devolved Model of Management of OER? The Case of the Irish Higher Education Sector” by Angelica Risquez, Claire McAvinia, Yvonne Desmond, Catherine Bruen, Deirdre Ryan, and Ann Coughlan, which provides important evidence and analysis on this topic.
There are contributions by people I know and look up to in the OER world, and some equally good chapters by folk I had not come across before. It seems to live up to its billing of offering an international perspective by not being US-centric (though it would be nice to see more from South America, Asia and Africa), and it provides a wide view of Open Education, not limited to Open Educational Resources. There is a foreword by David Wiley, a chapter on a human rights theory for open education by the editors, and one on whether emancipation through open education is theory or rhetoric by Andy Lane. Other people from the Open University’s Open Education team (Martin Weller, Beatriz de los Arcos, Rob Farrow, Rebecca Pitt and Patrick McAndrew) have written about identifying categories of OER users. There are chapters on aspects such as open science, open textbooks, open assessment and credentials for open learning, and several case studies and reflections on open education in practice.
The chapter that Lorna and I wrote is an overview drawing on our experiences through the UKOER programme and our work on LRMI looking at managing the dissemination and discovery of open education resources. Here’s the abstract in full, and a link to the final submitted version of our chapter.
This chapter addresses issues around the discovery and use of Open Educational Resources (OER) by presenting a state of the art overview of technology strategies for the description and dissemination of content as OER. These technology strategies include institutional repositories and websites, subject specific repositories, sites for sharing specific types of content (such as video, images, ebooks) and general global repositories. There are also services that aggregate content from a range of collections, these may specialize by subject, region or resource type. A number of examples of these services are analyzed in terms of their scope, how they present resources, the technologies they use and how they promote and support a community of users. The variety of strategies for resource description taken by these platforms is also discussed. These range from formal machine-readable metadata to human readable text. It is argued that resource description should not be seen as a purely technical activity. Library and information professionals have much to contribute, however academics could also make a valuable contribution to open educational resource (OER) description if the established good practice of identifying the provenance and aims of scholarly works is applied to learning resources. The current rate of change among repositories is quite startling with several repositories and applications having either shut down or having changed radically in the year or so that the work on which this contribution is based took. With this in mind, the chapter concludes with a few words on sustainability.
Last week I was on a panel at Edinburgh University’s Repository Fringe event discussing sustainability and OER. As part of this I was asked to talk for ten minutes on some aspect of the subject. I don’t think I said anything of startling originality, but I must start posting to this blog again, so here are the notes I spoke from. The idea that I wanted to get over is that projects should be careful about what services they try to set up: the services should be suitable and sustainable, and in fact it might be best if they did the minimum that was necessary (which might mean not setting up a repository).
Between 2009 and 2012 Jisc and the HE Academy ran the UK Open Education Resources programme (UKOER), spending approximately £15M of HEFCE funding in three phases. There were 65 projects, some with personal, institutional or discipline scope releasing resources openly, some with a remit of promoting dissemination or discoverability; there were also related activities and services providing technical, legal and policy support; and there was Jorum: OERs released through the programme were mandated to be deposited in the Jorum repository. This was a time when open education was booming: as well as UKOER, funding from foundations in the US, notably Hewlett and Gates, was quite well established, and EU funding was beginning. UKOER also, of course, built on previous Jisc programmes such as X4L, ReProduce, and the Repositories & Preservation programme.
In many ways UKOER was a great success: a great number of resources were created or released, and it established open education as a thing that people in UK HE talked about. It showed how to remove some of the blockers to the reuse and sharing of content for teaching and learning in HE (especially in the use of standard CC licences with global scope rather than the vague, restrictive and expensive custom variations on “available to other UK HEIs” of previous programmes). Helped by UKOER, many UK HEIs were well placed to explore the possibilities of MOOCs. More generally, it showed the potential to change how HEIs engage with the wider world and to help make best use of online learning. But it’s not just about opening exciting but vague possibilities: avoiding problems such as restrictive licensing, and being in a position to explore new possibilities, means avoiding unnecessary costs in the future and helps to make OER financially attractive (and that’s important to sustainability). As evidence of this success: even though UKOER was largely based on HEFCE funding, there are direct connections from UKOER to the University of Edinburgh’s Open Ed initiative and (less directly) to their engagement with MOOCs.
But I am here to talk about sustainability. You probably know that Jorum, the repository into which UKOER projects were required to deposit their OERs, is closing. Also, many of the discipline-based and discovery projects were based at HE Academy subject centres, which are now gone. At the recent OER16 here, Pat Lockley suggested that OER were no longer being created, based on what he sees coming into the Solvonauts aggregator that he develops and runs. Martin Poulter showed the graph: there is a fairly dramatic drop in the number of new deposits he sees. That suggests something is not being sustained.
But what?
Let’s distinguish between sustainability and persistence: sustainability suggests to me a manageable ongoing effort. The content as released may be persistent, that is, it may still be available as released (though without some sort of sustained effort of editing, updating and preservation it may not be much use). What else needs sustained effort? I would suggest: (1) the release of new content; (2) interest and community; (3) the services around the content (that includes repositories). I would say that UKOER did create a community interested in OER which is still pretty active. It could be larger, and less inward-looking at times, but for an academic community it is doing quite well. New content is being released. But the services created by UKOER (and other OER initiatives) are dying. That, I think, is why Pat Lockley isn’t seeing new resources being published.
What is the lesson we should learn? Don’t create services to manage and disseminate your OERs that require “project” level funding. Create the right services; don’t assume that what works for research outputs will work for educational resources; make sure that there is that “edit” button (or at least a make-your-own-editable-copy button). Make the best use of what is available; use everything that is available. Use Wikimedia services, but also use Flickr, WordPress, YouTube, iTunes, Vimeo, and you may well want to create your own service to act as a “junction” between all the different places you’re putting your OERs, linking with them via their APIs for deposit and discovery. This is the basic idea behind POSSE: Publish (on your) Own Site, Syndicate Elsewhere.
Stefan Dietze invited me to give the keynote presentation at the pre-WWW2015 workshop Linked Learning 2015 in Florence. I’ve already posted a summary of a few of the other presentations I saw; this is a long account (from my speaker’s notes) of what I said. If you prefer you can read the abstract of my talk from the WWW2015 Companion Proceedings or look through my slides & notes on Google. This is a summary of past work at Cetis that led to our involvement with LRMI, why we got involved, and the current status of LRMI. There’s little here that I haven’t written about before, but I think this is the first time I’ve pulled it all together in this way.
Lorna M. Campbell was co-author for the presentation; the approach I take draws heavily on her presentation from the Cetis conference session on LRMI. Most of the work that we have done on LRMI has been through our affiliation with Cetis. I’ll describe LRMI and what it has achieved presently. In general I want to keep things fairly basic: I don’t want to assume a great deal of knowledge about the educational standards or the specifications on which LRMI is based, not so much because I think you don’t know anything, but firstly because I want to show what LRMI drew on, and also because whenever I talk to people about LRMI it becomes clear that different people have different starting assumptions. I want to try to make sure that we align our assumptions.
LRMI prehistory and precursors
I want to start by reviewing some of what we (Lorna and I and Cetis) did before LRMI and why we got involved in it.
That means talking about metadata. Mostly metadata for the purpose of resource discovery, in order to support the reuse of educational content; we want to support the reuse of educational content in order to justify greater effort going into the creation of better content and to allow teachers to focus on designing better educational activities. We were never interested in metadata just for its own sake, but we felt that however good an educational resource is, if you can’t find it you can’t use it.
And we can start with the LOM, the first international standard for educational technology, designed in the late 1990s and completed in 2002 (at least the information model was; the XML binding came a couple of years later, and other serializations such as RDF were never successfully completed).
We had nothing to do with designing the LOM.
But we did promote its use, for example:
I worked on a project called FAILTE, a resource discovery service for Engineering learning resources, which involved people with various expertise (librarians, engineering educators, learning technologists) creating what was essentially a catalogue of LOM records.
I was also involved in a wider initiative to facilitate similar services across all of UK HE, by creating an application profile for use by joint projects of two organisations, RDN & LTSN (RLLOMAP)
Meanwhile Lorna was leading work to create an application profile of the LOM with UK-wide applicability (UK-LOM core)
These were fairly typical examples of LOM implementation work at that time. Also, none of them still exists.
All these involve application profiles, that is tailoring the LOM by recommending a subset of elements to use and specifying what controlled vocabularies to use to provide values for them (see metadata principles and practicalities, section IIIA). And there’s a dilemma there, or at least you have to make a compromise, between creating descriptions which make sense in a local setting and meet local needs, and getting interoperability between different implementations of the LOM.
In fact some of the initial LRMI research was a survey of how the LOM is used: looking at LOM records exposed through OAI-PMH, it found that most LOM records provided very little beyond what could be provided with simple Dublin Core elements, which agreed with previous work comparing different application profiles (e.g. Goodby, 2004). (See also a similar study by Ochoa et al (2011), conducted at about the same time, which focussed on repositories that had been designed to use the LOM.)
But I wasn’t talking about the LOM in Florence. Why not? Well, IEEE LOM and IMS Metadata have their uses, and if they work for you that’s great. But I’ve also mentioned some of the problems that we faced when we tried to implement the LOM in more or less open systems: lots of effort to create each record, and a compromise between interoperability and addressing specific requirements. The structure of the LOM as a single XML tree-like metadata record comprising all the information about a resource does little to help you get around these problems. It also means that the standard as a whole is monolithic: the designers of the LOM had to solve the problems of how to describe IPR, technical and lifecycle issues, among others (then consider that many different resource types can be used as learning resources, and what works as a technical description of a text document might not work for an image or video). Solving how to describe educational properties is quite hard enough without throwing solutions to all of these others into the same standard.
So, having learnt a lot from the LOM, we moved on, hoping to find approaches to learning resource description that disaggregated the problem (at both design and implementation stages) into smaller, less intimidating tasks.
I want to mention some work on semantic technologies and what was then beginning to be called linked data, which Cetis helped commission and was involved in through a working group around 2008–2009: the Semantic Technologies in Learning and Teaching (SemTech) Jisc mini-project / Cetis working group, run by Thanassis Tiropanis et al. out of the University of Southampton. The SemTech project aimed to raise the profile of semantic technologies in HE and to highlight what problems they were good at solving. The project included a survey of then-existing semantic tools and services used for education, to discover what they were being used for. (They found 36, using a fairly loose definition of “semantic”.)
The “five year plan” outlined by that project is worth reflecting on. Basically it suggested exposing more data which can be used by applications, thus encouraging more data to be released (a sort of optimistic virtuous cycle), and developing shared ontologies, which yield benefits once you have large amounts of data. (Notably, it didn’t suggest spending years locked in a room coming up with the one ontology that would encompass everything before releasing any data.)
The development of semantic applications for teaching and learning for HE/FE over the next 5 years could be supported in a number of steps:
Encouraging the exposure of HE/FE repositories, VLEs, databases and existing Web 2.0 lightweight knowledge models in linked data formats. Enabling the development of learning and teaching applications that make use of linked data across HE/FE institutions; there is significant activity on linked open data to be considered.
Enabling the deployment of semantic-based searching and matching services to enhance learning. Such applications could support group formation and learning resource recommendation based on linked data. The development of ontologies to which linked data will be matched is anticipated. The specification of patterns of semantic tools and services using linked data could be fostered.
Collaborative ontology building and reasoning for pedagogical ends will be more valuable if deployed over a large volume of education-related linked data, where the value of searching and matching is sufficiently demonstrated. Pedagogy-aware applications making use of reasoning to establish learning context and to support argumentation and critical thinking over a large linked data field could be encouraged at this stage.
Our first efforts outside of IEEE LOM were in the Dublin Core Education Application Profile Task Group, between about 2006 and 2011, attempting to work on a shared ontology. Meanwhile others (notably Mikael Nilsson, KTH Royal Institute of Technology, Stockholm) worked to get LOM data in RDF. This work kind of fizzled out, but we did get an idea of a domain model for learning resources, which rather usefully separated the educationally relevant properties from all the others. The cloud in the middle represents any resource-type-specific domain model (say one for describing videos or one for describing textual resources) to which educationally relevant properties can be added. So this diagram represents what I was saying earlier about wanting to disaggregate the problem space so that we can focus on educational matters while other domain experts focus on their specialisms.
I want to mention in passing that around this time (2008/9) work started at ISO/IEC on semantic representation of metadata for learning resources. This was kicked off in response to the IEEE LOM being submitted for ratification as an ISO standard… and it is still ongoing. We’re not involved. Cetis has done no more than comment once or twice on this work.
In fact we did very little metadata work for a while. I thought I was done with it.
At this time there was an idea in educational technology circles that was encapsulated in the term #eduPunk: the idea was that lightweight personal technology could be used to support teaching and learning, a sort of DIY approach to learning technology, without the constraints of large institutional, enterprise-level systems: WordPress instead of the VLE, folksonomies instead of taxonomies.
In comparison to eduPunk, we were #eduProg. I’ve nothing against the virtuoso wizardry of ProgRock or a technically excellent OWL ontology, and I am not saying there is anything wrong in either. The point I am trying to make is that the interest and attention, the engagement from the Ed Tech community was not in EduProg.
The attention and engagement was in Open Educational Resources, and we supported a UK HE three-year, £15 million programme around the release of HE resources under Creative Commons licences [UKOER]. Cetis provided strategic technical advice and support to the funder and to the 66 projects that released over 10,000 resources. The support included guidelines on technology approaches to the management, description and dissemination of OERs; the guidelines we gave were for lightweight dissemination technologies, minimal metadata, and putting resources where they could be found. We reflected at length on the technology approaches taken by this programme in our book Into the wild – Technology for open educational resources. We recognise the shortcomings in this approach; it’s not perfect, and some people were quite critical of it. If we had been able to point to any discovery services based on the LOM, or any more directed approach, that were unarguably successful, we would have recommended them, but it seemed that Google and the open web was at least as successful as any other approach and required less effort on the part of the projects. Partly through UKOER we did see 10,000 resources released and, more importantly, a change in culture so that using social sharing sites for education became unremarkable, and I would rather have that than a few hundred perfect metadata descriptions in a repository.
As far as resource description and resource discovery is concerned I think the most important advice we gave was this:
LRMI launched in 2011. What about it got us back into educational metadata? Let’s start from first principles and look at the motivation behind LRMI, which is to help people find resources to meet their specific needs. I’ll try to illustrate this.
Meet Pam, a school teacher. Let’s say she wants to teach a lesson about the Declaration of Arbroath.
[See credits, below]
What are her specific needs? Well, they relate to her students: their age, their abilities; to her teaching scenario: is she looking for something to use as part of a half-hour lesson on a wider topic, or something that will provide a plan of work over a few lessons? Introduction or revision? And there is the wider context: she’s unlikely to be teaching about the Declaration of Arbroath for its own sake; more likely it will relate to some aspect of a wider curriculum, perhaps history, but perhaps also something around civic engagement in Scotland, or relations between Scotland and England, or precursors to the US Declaration of Independence, and she will be doing so because she is following some shared curriculum or exam syllabus.
She searches Google and finds lots of resources, many of which are no more than the text of the Declaration itself.
There are also tea towels and posters.
Those that go further do not necessarily do so in a way that is suitable for her pupils. There’s a Wikipedia article but that’s not really written with school children in mind. Google doesn’t really support narrowing down Pam’s search to match her requirements such as the age and educational level of students, the time required to use in a lesson, the relevance to requirements of national curriculum or exam syllabus, so Pam is forced to look at a series of separate search services based (often) on siloed metadata [examples 1, 2, 3]. It’s worth noting that the examples show categorisation by factors such as Key Stage (i.e. educational level in the English National Curriculum), educational subject, intended educational use (e.g. revision) and others, giving hints as to what Pam might use to filter her search. Google (historically) hasn’t been especially good at this sort of filtering, partly because it cannot always work out the relevance of the text in a document.
What happened to make us think that it was worth addressing this problem was schema.org:
a joint effort, in the spirit of sitemaps.org, to improve the web by creating a structured data markup schema supported by major search engines.
An agreed vocabulary for naming the characteristics of resources and the relationships between them.
Which can be added to HTML (as microdata, RDFa or JSON-LD) to help computers understand what the strings of text mean.
Adding schema.org markup (as microdata) to HTML, turns the code behind a web page from something like:
<h1>Learning Resource Metadata Initiative: using schema.org to describe open educational resources</h1>
<p>by Phil Barker, Cetis, School of Mathematical and Computer Sciences, Heriot-Watt University <br />
Lorna M Campbell, Cetis, Institute for Educational Cybernetics, University of Bolton. April 2014</p>
(i.e. just strings, with little to indicate which string is the author’s name, which is the title of the paper, and which is the author’s affiliation) to something like:
<div itemscope itemtype="http://schema.org/ScholarlyArticle">
<h1 itemprop="name">Learning Resource Metadata Initiative: using schema.org to describe open educational resources</h1>
<p itemprop="author" itemscope itemtype="http://schema.org/Person">
<span itemprop="name">Phil Barker</span>,
<span itemprop="affiliation">Cetis, School of Mathematical and Computer Sciences, Heriot-Watt University</span></p>
<p itemprop="author" itemscope itemtype="http://schema.org/Person">
<span itemprop="name">Lorna M Campbell</span>,
<span itemprop="affiliation">Cetis, Institute for Educational Cybernetics, University of Bolton</span></p>
</div>
where the main entities and their relationships are marked up and the text relating to properties of those entities is identified: a Scholarly Article is related to two Persons who are its authors; some of the text is the name of the Scholarly Article (i.e. its title), and some is the names of the Persons and their affiliations. Represented graphically, we could show this information as:
An entity – relation graph identifying the types of entities, their relationships to each other and to the strings that describe significant properties.
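The same information could equally have been expressed in JSON-LD, one of the other syntaxes mentioned above; a minimal sketch, which a page could carry in a script element, would look something like this:

<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "ScholarlyArticle",
  "name": "Learning Resource Metadata Initiative: using schema.org to describe open educational resources",
  "author": [
    {
      "@type": "Person",
      "name": "Phil Barker",
      "affiliation": "Cetis, School of Mathematical and Computer Sciences, Heriot-Watt University"
    },
    {
      "@type": "Person",
      "name": "Lorna M Campbell",
      "affiliation": "Cetis, Institute for Educational Cybernetics, University of Bolton"
    }
  ]
}
</script>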
At this point the LRMI was initiated: a three-year project funded by the Bill and Melinda Gates Foundation (and later with some additional funding from the Hewlett Foundation), managed jointly by one organisation committed to open education (Creative Commons) and another (AEP) from the commercial publishing world, with input from education, publishers and metadata experts.
I was on the technical working group. We issued a call for participation; gathered use cases; and did the usual meeting and discussing to hammer out how to meet those use cases. We worked more or less in the open: there was an invitation-only face-to-face meeting near the beginning (limited funding meant we couldn’t invite everyone); after that all the work was on open email discussion lists and conference calls. Basing the work on schema.org allowed us to leave all the generic and resource-format-specific stuff for other people to handle, and we could focus just on the educational properties that we needed.
The slide on the left shows what came out. The first two properties are major relationships to other entities (an alignment to some educational framework, and the intended audience); the others are mostly simple characteristics. All are defined in the LRMI specification. In a previous blog post I have attempted further explanation of the Alignment Object. Most of these were added to schema.org in 2013; the link to licence information was added later.
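To give a flavour of how these properties look in use, here is a sketch of a resource description using some of them as schema.org JSON-LD. The resource, the framework and the URLs are invented for illustration.

{
  "@context": "http://schema.org/",
  "@type": "CreativeWork",
  "name": "The Declaration of Arbroath: a lesson plan",
  "learningResourceType": "lesson plan",
  "typicalAgeRange": "11-14",
  "timeRequired": "PT1H",
  "educationalUse": "instruction",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "educationalSubject",
    "educationalFramework": "Example national curriculum",
    "targetName": "History",
    "targetUrl": "http://example.org/curriculum/history"
  }
}

The AlignmentObject carries the relationship to a statement in an educational framework (here a subject, but it could equally be an educational level or a taught competency), while the simpler properties describe the resource itself.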
Current state of LRMI and future plans.
LRMI has been implemented by a number of organisations, some with project funding to encourage uptake, others more organically. One of the nice things about piggy-backing on schema.org is that people who have never heard of LRMI are using it.
Not every organisation on this list exposes LRMI metadata in its webpages; some harvest it or create it and use it internally. The Learning Registry is especially interesting as it is a data store of information about learning resources in many different schemas, which uses LRMI as JSON-LD for (many of) its descriptive records. We have reported in some depth on the various ways in which LRMI has been implemented by those projects that were funded through the initiative.
We can create a Google custom search engine that looks for the alignment object (this in itself is a good indicator that someone has considered the resource to be useful for learning), and we can add filters to find learning resources useful for specific contexts, in this case different educational levels. This helps Pam narrow down her search, at least in a proof-of-concept way; as they stand these are not intended to be real, usable services.
I would like to note the following points from these implementations:
they exist. That’s a good first step.
not every implementation exposes LRMI metadata; some use it internally.
there is no agreement on value spaces, either the terms or their meanings (e.g. for educational level: 1st Grade vs Primary 1).
The Gates funding for LRMI is now complete, and as an organization LRMI is now a task group of the Dublin Core Metadata Initiative. That provides us with the mechanisms and governance required to maintain, promote, and if necessary extend the specification. It does not mean that LRMI terms are DC terms; they’re not, they’re in a different namespace. DCMI is more than a set of RDF terms, it’s a community of experts working together, and that’s what LRMI is part of. The LRMI specification is now a community specification of DCMI, conforming to the requirements of DCMI, such as having well-maintained definitions in RDF, which align with the schema.org definitions but are independently extensible.
The planned work of the task group is shown on the group wiki, and includes:
Extending LRMI: Events? Courses?
contributing via new schema.org extension mechanism?
Recommended value vocabularies
Linked data representation of educational frameworks (alignment)
(There’s also a background interest in the use of LRMI beyond the original schema.org scenario, for example as stand-alone JSON-LD or as EPUB metadata for eTextBooks)
It’s customary to allow time for the audience to ask difficult questions of the presenter. I tried to forestall that by asking the audience’s opinion on these questions:
Does this help with the endeavour to expose lightweight linked data?
(can you get the data out of web pages?)
How do we encourage linked data representation of educational frameworks?
How much goes into schema.org (or similar) or should we just reuse existing ontologies?
Can you cope with the quality of data that can be provided at web-scale?
Reflections on the presentation
As far as I could judge from the questions that I couldn’t answer well, the weak points in the presentation, or maybe in LRMI, seem to be around gauging the level of uptake: how many pages are there out there with LRMI data on them? I don’t know. The schema.org pages for each entity show usage; for example, the Alignment Object is on between 10 and 100 domains, but I do not know the size of those domains. That also misses those services that use LRMI and do not expose it in their webpages but would expose it as linked data in some other format. I suspect uptake is less than I would like, and I would like to see more.
As presenter I was happy that even after I had talked about all that for about 45 minutes, there were people who wanted to ask me questions (the forestalling tactic didn’t work), and even after that there were people who wanted to talk to me instead of going for coffee. That seems to be a good indicator that there was interest from the workshop’s audience.
Image credits: Photo of Pam Robertson, teacher, by Vgrigas (own work) [CC-BY-SA-3.0], via Wikimedia Commons; reproduction of the Tyninghame (1320 A.D.) copy of the Declaration of Arbroath, via Wikimedia Commons. Logos (Heriot-Watt, Cetis, LRMI, SemTech etc.) are property of the respective organisations. Unless noted otherwise on the slide image, other images were created by the authors and licensed as CC-BY.
Last Friday Yahoo announced that it will retire its original service, the Yahoo directory, at the end of 2014. Perhaps the only surprise was that the Yahoo directory is still running. I don’t suppose it will be missed by many, but I noticed it going because the first article I ever wrote on learning technology was Finding Information on WWW, which I wrote for the CTI-Physics newsletter in June 1995. It was prompted by my boss at the time, Dick Bacon, saying that he thought there were lots of really useful resources on the web, but it was really difficult to find them. I suggested three approaches: social, organised collections and search, which I think stands up reasonably well today, though we’ve kind of moved on from Mailbase to Twitter. Search at the time was in its infancy, Lycos being the search engine of choice (yes, not only was this before Google, it was before Alta Vista). I still work on that question, “how do you find information on the web?” Through LRMI and schema.org we are helping search engine providers improve their products, and one of my favourite initiatives of the last few years, the Learning Registry, and specifically the kritikos project, has seen the coming together of search and social, allowing students to share what they find to be useful for their courses.
One question that we always get asked about LRMI is “who is using it?” There are two sides to this: use by search service providers and use by resource providers; this post touches on the latter.
In phase 2 of the LRMI project, various organizations were given small amounts of money to implement LRMI in their systems and workflows. Those organizations are listed on the Creative Commons web site, and Lorna is in the process of gathering together the lessons they learnt, which will be reported back shortly. Perhaps more important, at least from the point of view of sustainability, are implementations that arise spontaneously, either by organizations with learning resources to disseminate who make a conscious decision to use LRMI, or by those who, in using schema.org markup, find that one of the properties LRMI added is appropriate. Of course no one doing this is under any obligation to inform us of what they are doing, so it is harder to keep track of such use. Fortunately the Google Custom Search Engine Wilbert and I cobbled together can be used to discover such implementations. It’s a bit hit-and-miss: you need to search for common topics (Math, English) and trawl through the results for new sites, but it’s better than nothing.
Listed below (in alphabetical order) are the sites we’ve found, a link to a sample page with embedded schema.org / LRMI, a link to the Google Structured Data testing tool results for that page, and sometimes a note or two.
BBC Knowledge and Learning (Beta) BBC education resources example page, testing tool result
Comment: uses typical age range and alignments to education level and subject. The alignment object’s name and url properties are used where targetName and targetUrl should have been (see the sketch after this list).
Brokers of expertise State of California resources for educators example page, testing tool result
Comment: uses several properties, but be wary of the use of the alignment object’s name and url for the target, and of intended end user role as a property of the creative work rather than of the audience.
CTE Online career technical education. example page, testing tool result
Comment: several properties used, with the same caveats as above about the alignment object’s name and url being used for the target, and intended end user role being a property of the creative work rather than the audience.
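To spell out the recurring caveat: the name and url of the AlignmentObject itself are not the place to identify the thing being aligned to; that is what targetName and targetUrl are for, and the intended end user role belongs on an EducationalAudience rather than directly on the creative work. Here is a sketch of the intended pattern, shown as JSON-LD rather than the microdata these sites use; the resource and URLs are invented for illustration.

{
  "@context": "http://schema.org/",
  "@type": "CreativeWork",
  "name": "Fractions practice worksheet",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "educationalLevel",
    "educationalFramework": "Example state framework",
    "targetName": "Grade 4",
    "targetUrl": "http://example.org/framework/grade-4"
  },
  "audience": {
    "@type": "EducationalAudience",
    "educationalRole": "student"
  }
}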
Of course there are others using LRMI properties in their webpages that I happened not to find (t.b.h. I didn’t spend very long looking) and more who are using them to support internal business processes that Google never sees. If you know of an interesting use of LRMI from which others might learn, post a comment below (if comments are closed, contact me).
On 17th-18th June, in Bolton, Cetis had their more-or-less annual conference. One of the sessions was Lorna and me, with some help from our friends, discussing LRMI and addressing the question “What on Earth Could Justify Another Attempt at Educational Metadata?”
Lorna started with an overview of our involvement in educational metadata, from EEVL and FAILTE, through IMS Learning Resource Meta-data, RLLOMAP and the UK LOMCore to Dublin Core Education, up to ISO-MLR. She then described the origin of LRMI.
So, yes, we really love metadata, but we reached a point where making ever-more elegantly complex iterations on the same idea kind of lost its appeal. So what is it that makes LRMI so different, so appealing? I gave a technical overview (basically a summary of the recent Cetis paper What is schema.org? and my blog post explaining the LRMI alignment object).
So, the difference is that LRMI/schema.org metadata is deeply embedded in the web, to the extent that it is right in the pointy brackets of the HTML code of web pages, marking up what humans can see, which crucially is where Google and other search engines want it. (That is not to say that it cannot be useful elsewhere.) Which is great, but what about implementation: at what stage can we show that some tangible benefits are on the way? That needs webpages that carry LRMI markup and a means of searching for them. I presented a summary of the work that Wilbert and I did on building a Google Custom Search Engine and filtering Google custom searches on LRMI properties; it also pointed to some work in schema-labs on a custom search for education.
Those are first steps, proofs of concept; there seemed to be agreement that they showed potential, but obviously there needs to be more coherence if they are to work as a useful discovery service in real life. What about the other first step that needs to be taken, getting those who disseminate resources to use LRMI in their resource description pages? Well, first Lorna presented on her discussions with the organizations who received a little funding to implement LRMI as part of phase 2 of the initiative.
Then Ben Ryan of Jorum discussed his work implementing schema.org / LRMI in DSpace. The integration of LRMI into repository platforms and content management systems is key to getting it used widely across the web. I’m pleased to say that Ben didn’t report problems with the spec itself, though as always there are questions around workflow and metadata quality.
Finally I gave a short overview of some of the sites that we have found to be using LRMI because they show up in the Custom Search Engine results.
The general feeling I had from the session was that most of the people involved thought that LRMI was a sane approach: useful, realistic and manageable. One of my favourite comments during that presentation was from Jenny Gray who tweeted
Have apparently been doing #lrmi in openlearn since before it was a thing. Cant work out how!
and commented that she wasn’t sure whether what was in the OpenLearn pages was LRMI. Well, it is schema.org with properties that came from phase one of the initiative (it seems someone has been extending the work Jenny did), embedded as RDFa, which was an approach for structuring data in webpages that predated schema.org. And I think it is really promising for the adoption of LRMI that you don’t need any specialist knowledge of educational metadata standards in order to find yourself using it. With this widespread, almost accidental, adoption comes a challenge: this work isn’t happening in the highly controlled world of information experts (librarians, or semantic experts used to working with descriptive ontologies); it’s happening web-wide, where web developers and web masters will take liberties in order to say what they want to say. Martin Hepp describes the significant changes of approach involved in this move from ontologies to web ontologies in a video presentation that I cannot recommend highly enough. Thinking about the minimalism of simple Dublin Core and the EduProg of LOM and ISO MLR, I see this as the challenge of having freedom of expression while keeping coherence: is this the shape of metadata to come?