I was lucky enough to go to Florence for some of the pre-WWW2015 conference workshops because Stefan Dietze invited me to talk at the Linked Learning workshop “Learning & Education with the Web of Data”. Rather than summarize everything I saw, I would like to give brief pointers to three presentations that were personal highlights, from that workshop and the “Web-based Education Technologies” (WebET 2015) workshop that followed it. Many thanks to Stefan for organizing the workshop (and also to the Spanish company Gnoss for sponsoring it).
I’ve followed the work on Tin Can / xAPI / Experience API since its inception. Lorna and I put a section about it into our Cetis Briefing on Activity Data and Paradata, so I was especially interested in Tom De Nies’s presentation on TinCan2PROV: Exposing Interoperable Provenance of Learning Processes Through Experience API Logs. Tin Can statements are basically elaborations of “I did this,” providing more information about the who, how and what referred to by those three words. Tom has a background in provenance metadata and saw the parallel between those statements and the recording of actions by agents more generally, and specifically with the model behind the W3C PROV ontology for recording information about the entities, activities, and people involved in producing a piece of data or thing. Showing that Tin Can can be mapped to W3C PROV is of more than academic interest: the Tin Can spec provides only one binding, JSON. The first step of Tom’s work was to upgrade that to JSON-LD; the mapping to PROV then allows any of the serializations for PROV to be used (RDF/XML, N3, JSON-LD) and brings Tin Can statements into a format that can be used by semantic technologies. Tom is hopeful that the mapping is lossless; you can try it yourself at tincan2prov.org.
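To make the parallel concrete, here is a toy sketch (not Tom’s actual mapping, and with an invented statement) of how a minimal xAPI “actor-verb-object” statement might be re-expressed as PROV-style triples: the actor becomes a prov:Agent, the verb an activity, and the object an entity.

```python
def xapi_to_prov(statement):
    """Map a minimal xAPI statement dict to a list of PROV-style triples.
    This is an illustrative simplification, not the TinCan2PROV mapping."""
    agent = statement["actor"]["mbox"]
    activity = statement["verb"]["id"]
    entity = statement["object"]["id"]
    return [
        (activity, "prov:wasAssociatedWith", agent),  # who did it
        (activity, "prov:used", entity),              # what it was done to
    ]

statement = {
    "actor": {"mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "http://example.org/course/stats101"},
}
for triple in xapi_to_prov(statement):
    print(triple)
```

The real mapping has to deal with much more (results, context, attachments), which is why the question of whether it is lossless is interesting.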
I also have an increasing interest in the semantic representation of courses. There’s some interest in adding Courses to schema.org, but also, within my own department, some of us would like to explore the advantages of moving away from course descriptors as PDF documents to something that could be a little more connected with each other and the outside world. Fouad Zablith’s presentation on Interconnecting and Enriching Higher Education Programs using Linked Data was like seeing the end point of that second line of interest. The data model Fouad uses combines a model of a course with information about the concepts taught and the learning materials used to teach them. Course information is recorded using Semantic MediaWiki to produce both human-readable and linked data representations of the courses across a program of study. A bookmarklet allows information about resources that are useful for these courses to be added to the graph, importantly attached via the concept that is studied, and so available to students of any course that teaches that concept. Finally, on the topic of several courses teaching the same concepts: sometimes such repetition is deliberate, but sometimes it is unwanted.
Fouad showed how analysis of the concept-course part of the graph could be useful in surfacing cases where there were perhaps too many concepts in a course that had been previously covered elsewhere (see image, above).
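The kind of overlap analysis Fouad demonstrated can be sketched very simply: given a mapping from courses to the concepts they teach, find the concepts taught in more than one course. The course and concept names below are invented for illustration; Fouad’s system works over a linked data graph rather than Python dicts.

```python
from collections import defaultdict

# Invented example data: which concepts does each course teach?
courses = {
    "Marketing 101": {"segmentation", "supply and demand"},
    "Economics 200": {"supply and demand", "elasticity"},
    "Operations 310": {"supply and demand", "forecasting"},
}

# Invert the mapping: concept -> set of courses teaching it
concept_to_courses = defaultdict(set)
for course, concepts in courses.items():
    for concept in concepts:
        concept_to_courses[concept].add(course)

# Concepts covered by more than one course: deliberate reinforcement,
# or unwanted repetition?
repeated = {c: sorted(cs) for c, cs in concept_to_courses.items() if len(cs) > 1}
print(repeated)
```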
One view of a course (that makes especial sense to anyone who thinks about etymology) is that it is a learning pathway, and one view of a pathway is as a Directed Acyclic Graph, i.e. an ordered route through a series of resources. In the WebET workshop, Andrea Zielinski presented A Case Study on the Use of Semantic Web Technologies for Learner Guidance, which modelled such a learning pathway as a directed graph and represented this graph using OWL 2 DL. The conclusion to her paper says “The approach is applicable in the Semantic Web context, where distributed resources often are already annotated according to metadata standards and various open-source domain and user ontologies exist. Using the reasoning framework, they can be sequenced in a meaningful and user-adaptive way.” At this stage, though, the focus is on showing that the expressivity of OWL 2 DL is enough to represent a learning pathway and on testing the efficiency of querying such graphs.
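The underlying DAG idea is easy to illustrate. Zielinski’s work does this in OWL 2 DL with a reasoner; the sketch below (with invented resource names) just shows prerequisite edges being turned into a valid ordering through the resources using the standard library.

```python
from graphlib import TopologicalSorter

# A learning pathway as a DAG: each resource maps to its prerequisites.
pathway = {
    "intro": set(),
    "fractions": {"intro"},
    "percentages": {"fractions"},
    "statistics": {"fractions", "percentages"},
}

# A topological sort gives one ordered route through the resources
# that respects every prerequisite edge.
order = list(TopologicalSorter(pathway).static_order())
print(order)
```

The point of using OWL 2 DL rather than a plain graph library is that distributed, independently annotated resources can be reasoned over and sequenced adaptively per learner, not just sorted once.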
I came across an exercise that aimed to demonstrate that numbers are easier to understand when broken down and put into context; it’s one of a number of really useful resources for the general public, journalists and teachers from the Royal Statistical Society. The idea is that the large numbers associated with important government budgets (you know, a few billion here, a few billion there, pretty soon you’re dealing with large numbers) are difficult to get our heads around, whereas the same number expressed in a more familiar context, e.g. a person’s annual or weekly budget, should be easy to understand. I wondered whether that exercise would work as an in-class exercise using Socrative; it’s the sort of thing that might be a relevant ice breaker for a critical thinking course that I teach.
A brief aside: Socrative is a free online student response system which “lets teachers engage and assess their students with educational activities on tablets, laptops and smartphones”. The teacher writes some multiple choice or short-response questions for students to answer, normally in-class. I’ve used it in some classes and students seem to appreciate the opportunity to think and reflect on what they’ve been learning; I find it useful in establishing a dialogue which reflects the response from the class as a whole, not just one or two students.
I put the questions from the Royal Statistical Society into Socrative as multiple choice questions, with no feedback on whether the answer was right or wrong except for the final question, just some linking text to explain what I was asking about. I left it running in “student-paced” mode and asked friends on Facebook to try it out over the next few days. Here’s a run through of what they saw:
Socrative lets you download the results as a spreadsheet showing the responses from each person to each question. A useful way to visualise the responses is as a sankey diagram:
[I created that diagram with sankeymatic. It was quite painless, though I could have been more intelligent in how I got from the raw responses to the input format required.]
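For what it’s worth, the less-than-intelligent step I mentioned can be automated: count the transitions between consecutive answers for each respondent and emit them in SankeyMATIC’s “Source [count] Target” input format. The answers below are invented, not my friends’ actual responses.

```python
from collections import Counter

# Invented responses: one list of consecutive answers per respondent.
responses = [
    ["£10B per annum", "£200 per person per annum", "£30 per week"],
    ["£1B per annum", "£200 per person per annum", "£30 per week"],
    ["£10B per annum", "£20 per person per annum", "£3 per week"],
]

# Count each question-to-question transition across all respondents.
flows = Counter()
for answers in responses:
    for a, b in zip(answers, answers[1:]):
        flows[(a, b)] += 1

# Emit SankeyMATIC input: "Source [amount] Target", one flow per line.
for (a, b), n in sorted(flows.items()):
    print(f"{a} [{n}] {b}")
```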
So did it work? What I was hoping to see was the initial answers being all over the place but converging on the correct answer, that is, not so many choosing £10B per annum for Q1 as choosing £30 per person per week for the last question. That’s not really what I’m seeing. But I have some strange friends: a few people commented that they knew the answer for the big per annum number but varied in whether they could do the arithmetic to get to the weekly figure. Also, it’s possible that the question wording was misleading people into thinking about how much it would cost to treat a person for a week in an NHS hospital. Finally, I have some odd friends who are more interested in educational technology than in answering questions about statistics, and who might just have been looking to see how Socrative worked. So I’m still interested in trying out this question in class. Certainly Socrative worked well for this, and one thing I learnt (somewhat by accident) is that you can leave a quiz in Socrative open for responses for several months.
Today was spent at a QAA Scotland event which aimed to identify and share good practice in assessment and feedback, and to gather suggestions for feeding in to a policy summit for senior institutional managers that will be held on 14 May. I’ve never had much to do with technology for assessment, though I’ve worked with good specialists in that area, and so this was a useful event for catching up with what is going on.
The first presentation was from Gill Ferrell on electronic management of assessment. She started by summarising the Jisc assessment and feedback programmes of 2011-2014. An initial baseline survey for this programme had identified practice that could at best be described as “excellent in parts” but with causes for concern in other areas. There were wide variations in practice for no clear reason; programmes in which assessment was fragmentary rather than building a coherent picture of a student’s capabilities and progress; not much evidence of formative assessment; not much student involvement in deciding how assessment was carried out; assessments that did not reflect how people would work after they graduate; policies that were more about procedures than educational aims; and so on. Gill identified some of the excellent parts that had served as starting points for the programme (for example the REAP project from CAPLE, formerly at Strathclyde University) and she explained how the programme proceeded from there with ideas such as: projects agreeing on basic principles of what they were trying to do (the challenge was to do this in such a way that allowed scope to change and improve practice); projects involving students in setting learning objectives; encouraging discussion around feedback; changing the timing of assessment to avoid over-compartmentalized learning; shifting from summative to formative assessment; and making assessment ipsative, i.e. focussing on comparison with the student’s past performance to show what each individual was learning.
Steps 5, “marking and production of feedback”, and 8, “reflecting”, were those where most help seemed to be needed (Gill has a blog post with more details).
The challenges were all pedagogic rather than technical; there was a clear message from the programme that the electronic management of assessment and feedback was effective and efficient. So, Jisc started scoping work on the Electronic Management of Assessment. A second baseline review in Aug 2014 showed trends in the use of technology that have also been seen in similar surveys by the Heads of eLearning Forum: eSubmission (e.g. use of TurnItIn) is the most embedded use of technology in managing assessment, followed by some use of technology for feedback. Marking and exams were the areas where least was happening. The main pain points were around systems integration: systems were found to be inflexible, many were based around US assumptions of assessment practice and processes, and assessment systems, VLEs and student record systems often just didn’t talk to each other. Staff resistance to use of technology for assessment was also reported to be a problem; students were felt to be much more accepting. There was something of an urban myth that QAA wouldn’t permit certain practices, which enshrined policy and existing procedure so that innovation happened “in the gaps between policy”.
The problems Gill identified all sounded quite familiar to me, particularly the fragmentary practice and lack of systems integration. What surprised me most was how little uptake there was of computer-marked assessments and computer-set exams. My background is in the mathematical sciences, so I’ve seen innovative (i.e. going beyond MCQs) computer-marked assessments since about 1995 (see SToMP and CALM). I know it’s not appropriate for all subjects, but I was surprised it’s not used more where it is appropriate (more on that later). On computer-set exams: it’s now nearly 10 years since school pupils first sat online exams, so why is HE so far behind?
We then split into parallel sessions for some short case-study style presentations. I heard from:
Katrin Uhilg and Anna Rolinska from the University of Glasgow spoke about the use of wikis (or other collaborative authoring environments such as Google Docs) for learning-oriented assessment in translation. The tutor sets a text to be translated; students work in groups on this, but can see and provide feedback on each other’s work. They need to make informed decisions about how to provide and how to respond to feedback. I wish there had been more time to go into some of the practicalities around this.
Jane Guiller of Glasgow Caledonian had students creating interactive learning resources using Xerte. They provide support for the use of Xerte and for issues such as copyright. These were peer assessed using a rubric. Students really appreciate demonstrating a deep understanding of a topic by creating something that is different to an essay. The approach also builds and demonstrates the students’ digital literacy skills. There was a mention at the end that the resources created are released as OERs.
Lucy Golden and Shona Robertson of the University of Dundee spoke about using WordPress blogs in a distance learning course on teaching in FE. Learners were encouraged to keep a reflective blog on their progress; Lucy and Shona described how they encouraged (OK, required) the keeping of this blog through a five-step induction, and how they and the students provided feedback. These are challenges that I can relate to from asking students on one of my own courses to keep a reflective blog.
Jamie McDermott and Lori Stevenson of Glasgow Caledonian University presented on using rubrics in Grademark (on TurnItIn). The suggestion came from their learning technologist John Smith (who clearly deserves a bonus), who pointed out that they had access to this facility that would speed up marking and the provision of feedback and would help clarify the criteria for various grades. After Jamie used Grademark rubrics successfully in one module, they were implemented across a programme. Lori described the thoroughness with which they had been developed, with drafting, feedback from other staff, feedback from students and reflection. A lot of effort, but with the collateral benefits of better coherence across the programme and a better understanding by the students of what was required of them.
Each one of these four case studies contained something that I hope to use with my students.
The final plenary was Sally Jordan who teaches physics at the Open University talking about computer marked assessment. Sally demonstrated some of the features of the OU’s assessment system, for example the use of a computer algebra system to make sure that mathematically equivalent answers were marked appropriately (e.g. y = (x +2)/2 and y = x/2 + 1 may both be correct). Also the use of text analysis to mark short textual answers, allowing for “it decreases” to be marked as partially right and “it halves” to be marked as fully correct when the model answer is “it decreases by 50%”. This isn’t simple key word matching: you have to be able to distinguish between “kinetic energy converts to potential energy” and “potential energy converts to kinetic energy” as right and entirely wrong, even though they have the same words in them. These are useful for testing a student’s conceptual understanding of physics, and can be placed “close to the learning activity” so that they provide feedback at the right time.
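The OU system uses a proper computer algebra system; the toy stand-in below just checks that two answer expressions agree at a range of sample points, which is enough to show why y = (x + 2)/2 and y = x/2 + 1 should both be marked correct while y = x/2 should not. The function names and threshold are my own invention.

```python
def equivalent(f, g, samples=range(-5, 6)):
    """Crude equivalence check: do f and g agree (numerically) at all
    sample points? A real CAS would compare the expressions symbolically."""
    return all(abs(f(x) - g(x)) < 1e-9 for x in samples)

model = lambda x: (x + 2) / 2    # the model answer
student = lambda x: x / 2 + 1    # algebraically equivalent: mark correct
wrong = lambda x: x / 2          # not equivalent: mark wrong

print(equivalent(model, student))  # True
print(equivalent(model, wrong))    # False
```

Numerical sampling can be fooled by expressions that coincide at the chosen points, which is one reason the OU uses symbolic comparison instead.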
Here was the innovative automatic marking I had expected to be commonly used for appropriate subjects. But Sally also said that an analysis of computer-marked assessments in Moodle showed that 75% of the questions were plain old multiple choice questions, and probably as much as 90% were some variety of selected-response question. These lack authenticity (no patient ever says “Doctor, I’ve got one of the following four things wrong with me…”) and can be badly set so as to be guessable without prior knowledge. So why? Well, Sally had made clear that the OU is exceptional: with huge numbers of students learning at a distance there are few more cost-effective options for marking and providing feedback, even when a large amount of effort is required. The numbers of students also allowed for piloting of questions and the use of assessment analytics to sort out the most useful questions and feedback. For the rest of us, Sally suggested we could do two things:
A) run MOOCs, with peer marking, and use machine learning to infer the rules for marking automatically, or
B) talk to each other. Share the load of developing questions, share the questions (make them editable for different contexts).
So, although I haven’t worked much in assessment, I ended up feeling on familiar ground, with an argument being made for one form of Open Education or another.
A while back I went to the OER annotation summit where I learnt about hypothes.is, a tool for adding a layer of annotation on top of the web. If the idea of annotating the web sounds like one of those great ideas that has been tried a dozen times before and never worked, then you’re right and (importantly) the hypothes.is team know about it. It’s also one of the great ideas that is worth trying over and again because of its potential. Today I took a quick look at how they’re getting on, and it looks good.
I installed the Chrome extension and registered; it took about 2 minutes, so I don’t mind if you try it right now. That gives me a small icon on my browser that activates a sidebar (collapsed by default) for annotations. This lets you add comments to the page, highlight sections of text, add comments to those highlights, and view other people’s highlights and comments on that page. Annotations can be private, visible only to the annotator, or public and visible to everyone. Where annotations are public it is possible to reply to them, leading to conversations and discussions.
There’s also a WordPress plugin which adds annotation functionality to individual websites. I haven’t looked at that, but it could be a very useful addition to the WordPress for education tool kit.
I was worried that comments on specific phrases might be fragile, breaking if the page was changed, but a bit of experimenting showed they are reasonably robust, surviving small edits to the text annotated and changes to the preceding text that had the effect of shifting the annotated text down the page. I’m sure you can break it if you try but I think their fuzzy anchoring works for reasonable cases.
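This is not hypothes.is’s actual algorithm, but the fuzzy anchoring idea can be sketched with the standard library: re-locate an annotated quote in an edited page by approximate matching rather than a fixed character offset. The pages, quote and confidence threshold below are all invented.

```python
from difflib import SequenceMatcher

def anchor(quote, page):
    """Return the best-matching span for `quote` in `page`, or None if
    no sufficiently similar span survives the edits."""
    matcher = SequenceMatcher(None, quote, page, autojunk=False)
    match = matcher.find_longest_match(0, len(quote), 0, len(page))
    if match.size / len(quote) < 0.6:  # arbitrary confidence threshold
        return None
    return page[match.b:match.b + match.size]

# The annotated text survives both a preceding insertion and a small edit.
page_v2 = "Edited intro. Annotation makes the static web sociable."
print(anchor("static web social", page_v2))
```

Real fuzzy anchoring (as described on the hypothes.is blog) combines several strategies, falling back from exact offsets to quote context to fuzzy search.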
I like hypothes.is because it has the capacity to make all the static content on the web the focus of reflective and social activity for education. Whether such activities are manageable and scalable I don’t know: how many open conversations can you have going on around a single web page before everything just becomes swamped? How many annotations can you save before finding your notes becomes harder than finding the content again? If there are limits, I guess that reaching them would be a nice problem to have.
I also like hypothes.is because it is an open project. I don’t just mean that it allows the content of annotations to be shared, creating open discussion, though it does, and I like that. I don’t just mean that it works on the open web rather than within the confines of a single site, though it does, and I like that. I don’t just mean that it’s Open Source, though it is, and I like that. And I don’t just mean that it supports open standards, though it does, and I like that. What I really like is the openness in discussing the project’s goals, approaches and plans that can be found on the project wiki and blog.
A question: does WordPress have anything like the Long Term Support branches of Ubuntu?
The Cetis website is based on WordPress; we use it as a blogging platform for our blogs, as a content management system for our publications and as a bit of both for our main site. It’s important to us that our installation (that is, the WordPress core plus a variety of plugins, widgets and themes) is stable and secure. To ensure security we should keep all the components updated, which is not normally a problem, but occasionally an update of WordPress or one of the plugins causes a problem due to an incompatibility or bug. So there is a fair amount of testing involved whenever I do an update on the publications site, and for that reason I tend to do updates periodically rather than as soon as a new version of each component is released.
Last month was fairly typical: I updated to the latest version of WordPress and updated several plugins. Many of the updates added new functionality which we don’t really need, but there were also security patches that we do need; you can’t have one without the other. One of the plugins had a new dependency that broke the site; David helped me fix that. Two days later I logged in and half the plugins wanted updating again, mostly with fixes to bugs in the new functionality that I didn’t really need.
I understand that there will always be updates required to fix bugs and security issues, but the plethora of updates could be mitigated in the same way that it is for Ubuntu. Every couple of years Ubuntu is released as a Long Term Support (LTS) version. For the next few years no new features are added to this, so it lags in functionality behind the current version, but important bug fixes and security patches for existing features are back-ported from the current version.
So, my question: is there anything like the concept of LTS in the WordPress ecosystem?
In a W3C Unofficial Draft White Paper “Advancing Portable Documents for the Open Web Platform: EPUB-WEB”, published 21 Nov 2014, Markus Gylling of IDPF (curators of the EPUB standards) and Ivan Herman of W3C (curators of web standards) have highlighted the potential of a specification that brings EPUB onto the Web. Informally known as EPUB-WEB, the vision is that this specification would make “EPUB a first-class citizen of the Open Web Platform and as a result significantly reduce the complexity of deploying EPUB content into browsers, for online as well as offline consumption”.
Firstly, on the packaging format and manifest: in section 3.1 the authors note that while the zip file + XML manifest is a common pattern:
“W3C’s Web Application Working Group has, in its new charter, the task of defining a general packaging format for the Web to encompass the needs of various applications (like installing Web Applications or downloading data for local processing). It is probably advantageous for EPUB-WEB to adopt this format, thereby being compatible with what Web Browsers would implement anyway. While this general packaging format could hypothetically be compatible with the ZIP+XML manifest format used by EPUB (and also by the Open Document Format [ODF]) the broader requirements of installable applications and other types of content, and efficient incremental transmission over networks, may well imply a different and incompatible packaging format.”
Secondly, there’s a question about how you identify documents (and fragments within documents) reliably when they may be either online or offline depending on whether the user has decided to “archive” them (and I think archive here includes downloading onto an ebook reader to take on holiday): “What is the URI of the offline version of the document?” Interestingly, there is a link drawn with the W3C Annotation Working Group:
The recently formed W3C Annotation Working Group has a joint deliverable with the W3C Web Application Working Group called “Robust Anchoring”. This deliverable will provide a general framework for anchoring; and, although defined within the framework of annotations, the specification can also be used for other fragment identification use cases. Similarly, the W3C Media Fragments specification [media-frags] may prove useful to address some of the use cases.
And thirdly there is (of course) metadata. EPUB 3 has plenty of places to put your metadata. Most conventional publishing needs for metadata inside the EPUB file are covered by the range of metadata allowed in the manifest. However, there is additional potential for in-line metadata that is “agnostic to online and offline modes” and will “seamlessly support discovery and harvesting by both generic Web search engines, as well as dedicated bibliographic/archival/retailer systems”. The note points to schema.org in all but name:
The adoption of HTML as the vehicle for expressing publication-level metadata (i.e., using RDFa and/or Microdata for metadata like authors or title) would have the added benefits of better I18N support than XML or JSON formats.
And what about application to learning? Taken in conjunction with the Annotation work starting at W3C, the scope for eTextBooks online (or whatever you want to call educational use of EPUB-WEB) seems clear. One area that seems important for educational use but inadequately addressed in the draft white paper is alternative presentations that would make the material remixable and adaptable to meet individual learner needs. There is a little in the draft about presentation control and personalization, but it is rather limited: changing the font size or page layout rather than changing the learning pathway.
At last, it is official: “effective October 23, 2014, leadership and governance of the Learning Resource Metadata Initiative (LRMI) […] have transferred from the Association of Educational Publishers and Creative Commons to the Dublin Core Metadata Initiative (DCMI).”
This represents the end of LRMI as a project, and the start of it as a member of the family of stable-but-evolving educational metadata specifications, one which is maintained under the governance protocols of DCMI. This does not mean that LRMI is being merged in some way with Dublin Core’s well known metadata element set or terms; DCMI is more than its specifications, it is (as it says on its website banner, with my emphasis) a metadata community “supporting innovation in metadata design, implementation and best practices”. The longstanding high regard in which the DC Metadata Element Set and DC Terms are held is testament to the care and expertise that the DCMI community devotes to specification development and curation. LRMI will now benefit from that same care and expertise. It will also benefit from representation among other educational technology and metadata specification and standardization bodies, for example through the Digital Learning Metadata Alliance.
This does not mean that activity and development of LRMI will stop. It won’t (I hope), but new developments will be subject to careful scrutiny and well-governed procedures leading to community acceptance. The scope and pace of new developments won’t be prescribed by project proposals, work packages and deadlines agreed with funders; they will depend on people articulating needs and solutions that they are prepared to put effort into developing. The funding was necessary to get the initiative up and running, and the funders have been great in allowing the project to be responsive and agile in ascertaining and meeting the needs of key stakeholders; they never dictated how the project met its aims. The project would not have reached this successful end point had that not been the case. For any initiative to continue after the initial project funding ends is a challenge, but I believe LRMI as part of DCMI is well placed to meet that challenge. The onus is now on the community who use LRMI to take it forward, by defining new areas for building consensus within the DCMI governance framework and by bringing new resources to this effort.
Transfer to DCMI does not mean the end of the involvement of many of the individuals who have worked on LRMI in the past. The task of setting up LRMI within DCMI is being undertaken by a task group that includes Lorna M Campbell and me (extending the work that Cetis were funded to undertake in phase 3 of the project), along with Stuart Sutton, Dan Brickley, Michael Jay and Steve Midgley, all of whom have been active throughout the project. Beyond that, one of the reasons why DCMI was felt to be an appropriate home for LRMI is that there are no barriers put in the way of anyone who wants to contribute to their work on specifications, so I hope that we will see many others who have contributed to LRMI in the past joining us.
Many thanks to my colleagues at the Association of Educational Publishers, Creative Commons and the Dublin Core Metadata Initiative who have made this possible.
About once a year I go to some meeting or another on libraries and eBooks. I nearly always come back from it struck by the tension between libraries, as institutions of stability, and the rapid pace at which technology companies are driving forward eBook technology. This year’s event of that type was the Scottish Library and Information Council’s 13th annual eBook conference. The keynote from Gerald Leitner, chair of the European Bureau of Library, Information and Documentation Associations task force on eBooks was especially interesting to me in introducing the Right to eRead Campaign.
Leitner spoke about the ecosystem around eBooks and libraries and about the uncertainty and instability throughout the system. Can lending libraries compete with commercial lending of eBooks (Amazon Kindle Unlimited: £6 per month for over half a million titles)? Publishers too are threatened and are fighting, as the spat between Amazon and Hachette shows; and note, it’s not publishers who are driving the change to eBooks, it’s technology companies, notably Amazon and Apple. Libraries are at risk of being the collateral damage in this fight. And where do book lovers fit in, those who as well as reading physical books read eBooks on various mobile devices?
Leitner made the point that consumers and libraries very rarely buy eBooks; you buy a limited licence that allows you to download a copy and read it under certain restrictions. (No, like most people I have never bothered to read those restrictions, though I am aware of the limit on the devices on which I can read that copy, that I am not allowed to lend it, and that Amazon can delete copies remotely; I don’t use Apple products, but I assume they have similar terms.) A consequence of this relates to the exhaustion of rights. Under copyright law, authors have the right to decide whether and how their work is published, and publishers may have the right to sell books that contain the author’s work. But once bought, the book becomes the property of the person who bought it; the publisher’s rights are exhausted, and they can no longer forbid it being resold or lent. The right to lend and resell is provided by Article 6 of the WIPO Copyright Treaty and the EU Rental and Lending Directive (2006/115/EC). Library lending rights are written into statute and accompanied by remuneration for authors. eBooks, being intangible, licensed and not sold, are classed as services under the EU Information Society Directive (2001/29/EC), and for these there is no exhaustion of rights, no right to resell or lend, and no statutory guarantee that libraries may provide access.
The EBLIDA Right to eRead campaign is about trying to secure a right for libraries to provide access to eBooks. The argument is that without this right, access to information itself becomes privatised at the cost of an informed democracy. The campaign is asking for a statutory exemption within IP law, or mandatory fair licensing, that provides libraries with the right to acquire and a right to lend.
I work in the area commonly known as Learning Technology, or Educational Technology. I don’t have much time for trying to pin down what exactly constitutes “technology” in that context, and certainly none for considerations like “printing is technology, does that count”. But today I bought a book which does quite literally(*) illustrate advances in printing applied to learning.
The book is a reprint of Oliver Byrne’s The first six books of the elements of Euclid in which coloured diagrams and symbols are used instead of letters for the greater ease of learners, which was first published in 1847. It dispenses with the conventional referencing of lines, shapes and angles by letters used in geometry text books. So instead of:
Proposition 30: Straight lines parallel to the same straight line are also parallel to one another.
Let each of the straight lines AB and CD be parallel to EF.
I say that AB is also parallel to CD.
Let the straight line GK fall upon them. Since the straight line GK falls on the parallel straight lines AB and EF, therefore the angle AGK equals the angle GHF.
Again, since the straight line GK falls on the parallel straight lines EF and CD, therefore the angle GHF equals the angle GKD.
But the angle AGK was also proved equal to the angle GHF. Therefore the angle AGK also equals the angle GKD, and they are alternate.
Therefore AB is parallel to CD.
Therefore straight lines parallel to the same straight line are also parallel to one another.
This book presents the same proposition almost entirely pictorially, with coloured shapes and symbols standing in for the lettered references.
Colour printing of books was not common in 1847; it only became commercially viable after the invention of new printing techniques in the C19th and the mass production of cheap synthetic dyes, starting with mauvine in 1856, so this can fairly be called advanced technology for its time. Like many uses of technology to enhance learning, when colour printing of text books did become commonplace, it wasn’t used with the same imagination as shown by the pioneers.
* except, of course, that “literally” means according to the written word and this is a book of pictures. #CetisPedantry
When the Learning Resource Metadata Initiative (LRMI) technical working group started its work it focused on identifying the properties and relationships that were important for educational resources but could not be adequately expressed using schema.org as it then stood. One of those important pieces of information was the licence under which a resource was released, and so the LRMI spec from the start had the property useRightsUrl: “The URL where the owner specifies permissions for using the resource.” When schema.org adopted most of the LRMI properties, useRightsUrl was an exception: it was not adopted pending further consideration, which is not surprising really given the wide-ranging applicability of licence information beyond learning resources.
Back in June the good news came that version 1.6 of schema.org included a license property for Creative Works that does all that LRMI wanted, and more.
What does this mean for LRMI adopters?
Some adopters of LRMI have already started using useRightsUrl. Such implementations are valid LRMI but not valid schema.org, which means that they will only be understood by applications written specifically to understand LRMI, and not by general-purpose web-scale search applications. This is sub-optimal.
In passing, let me mention another complication. With schema.org you have a choice of syntax: microdata or RDFa 1.1 Lite. With RDFa there was already a mechanism for identifying a link to a licence, namely rel=”license”. Just to complicate things a little more, RDFa allows namespacing, and the term license appears in at least three widely used namespaces: HTML5, Dublin Core Terms, and the Creative Commons Rights Expression Language; hopefully this will never matter to you. To exemplify one of these options I’ll use the HTML that you get when you use the Creative Commons License Chooser (but let’s be absolutely clear, what I am writing about applies to any type of license, whether the terms be open or commercial):
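The chooser produces markup along the following lines (shown here for CC BY 4.0; the exact attributes and image URL the chooser emits may differ from this sketch):

```html
<!-- Typical Creative Commons License Chooser output: the licence link
     is marked with rel="license" so RDFa/HTML5 consumers can find it -->
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/">
  <img alt="Creative Commons Licence" style="border-width:0"
       src="https://i.creativecommons.org/l/by/4.0/88x31.png" />
</a><br />
This work is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/">
  Creative Commons Attribution 4.0 International License</a>.
```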
The good news is that all these options play nicely together, you can have the best of all worlds.
If you are already using itemprop=”useRightsUrl” to identify the link to a licence using LRMI in microdata, you can also use the license property and rel=”license”. The following LRMI microdata with a bit of RDFa thrown in works:
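A minimal sketch of that combination (the surrounding CreativeWork markup and the CC BY 4.0 URL are illustrative assumptions):

```html
<div itemscope itemtype="http://schema.org/CreativeWork">
  <!-- itemprop carries both the schema.org license property and
       LRMI's useRightsUrl; rel="license" covers RDFa/HTML5 consumers -->
  <a itemprop="license useRightsUrl" rel="license"
     href="http://creativecommons.org/licenses/by/4.0/">
    Creative Commons Attribution 4.0 International (CC BY 4.0) licence
  </a>
</div>
```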
If you are using LRMI / schema.org in RDFa, then the following is valid:
<body vocab="http://schema.org/" typeof="CreativeWork">
  ...
  <a rel="license useRightsUrl"
     href="http://creativecommons.org/licenses/by/4.0/">
    Creative Commons Attribution 4.0 International (CC BY 4.0) licence
  </a>
  ...
</body>
License does what LRMI asked for and more
In my opinion the schema.org license property is superior to the LRMI useRightsUrl for a few reasons. It does everything that LRMI wanted by way of identifying the URL of the licence under which the creative work is released, but also:
It belongs to a more widely recognised namespace, especially important if you want to generate RDF data.
I prefer the semantics of the name and definition: a license can include restrictions of use as well as grant rights and permissions.
The range, i.e. the type of value that can be provided, includes CreativeWork as well as URL.
That last point allows one to encode the name, url, description, date, accountable person and a whole host of other information about the licence (albeit at the cost of not being able to do so alongside LRMI’s useRightsUrl quite so simply).
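As a sketch of what that richer description might look like in microdata, with the licence itself described as a CreativeWork (the property choices here are illustrative, not prescribed by schema.org):

```html
<div itemscope itemtype="http://schema.org/CreativeWork">
  <!-- the value of license is itself a CreativeWork, so we can give
       it a name and url rather than just a bare link -->
  <div itemprop="license" itemscope itemtype="http://schema.org/CreativeWork">
    <a itemprop="url" href="http://creativecommons.org/licenses/by/4.0/">
      <span itemprop="name">Creative Commons Attribution 4.0 International</span>
    </a>
  </div>
</div>
```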
The inclusion in schema.org of the license property is good news for the aims of LRMI. If you use LRMI and care about licensing, you should use it to tag the information you provide about the license. If you already use LRMI’s useRightsUrl or RDFa’s rel=”license” there is no need to stop doing so.