It took a while, but we now have in LRMI a Learning Resource Type concept scheme that defines a controlled vocabulary of terms that you might use to describe the type or nature of a learning resource.
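Such concept schemes are typically published as SKOS. Here’s a minimal sketch of what reading one with rdflib looks like; the URIs and labels are illustrative placeholders of my own, not the published LRMI terms, so check the vocabulary itself for the real ones.

```python
# Minimal sketch: reading a SKOS concept scheme like the LRMI
# learningResourceType vocabulary with rdflib. URIs and labels here
# are illustrative placeholders, not the published LRMI terms.
from rdflib import Graph
from rdflib.namespace import SKOS

vocab_ttl = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/learningResourceType/> .

ex:            a skos:ConceptScheme ; skos:prefLabel "Learning Resource Type"@en .
ex:textbook    a skos:Concept ; skos:prefLabel "textbook"@en ; skos:inScheme ex: .
ex:lessonPlan  a skos:Concept ; skos:prefLabel "lesson plan"@en ; skos:inScheme ex: .
"""

g = Graph().parse(data=vocab_ttl, format="turtle")
for concept in g.subjects(SKOS.inScheme, None):
    print(concept, g.value(concept, SKOS.prefLabel))
```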
Graphical Application Profiles?
In this post I outline how a graphical representation of an application profile can be converted to SHACL for use in data validation.
DCAT AP DC TAP: a grown-up example of TAP to SHACL
I’ve described a couple of short “toy” examples as proof of concept of turning a Dublin Core Application Profile (DC TAP) into SHACL in order to validate instance data: the SHACL Person Example and a Simple Book Example; now it is time to see how the approach fares against a real-world example. I chose the EU Joinup Data Catalog Application Profile (DCAT AP) because Karen Coyle had an interest in DCAT, it is well documented (pdf) with a GitHub repo that has SHACL files, there is an Interoperability Test Bed validator for it (albeit one version behind) and I found a few test instances with known errors (again a little dated). I also found the acronym soup of DCAT AP DC TAP irresistible.
Continue reading
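To give a flavour of what those SHACL files check, here is an illustrative fragment of my own, not the official DCAT AP shapes: DCAT AP makes dct:title mandatory on every dcat:Dataset, which as a SHACL shape run through pySHACL looks roughly like this.

```python
# Illustrative fragment only, not the official DCAT AP SHACL: a shape
# requiring dct:title on every dcat:Dataset, checked with pySHACL.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/shapes/> .

ex:DatasetShape a sh:NodeShape ;
    sh:targetClass dcat:Dataset ;
    sh:property [ sh:path dct:title ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix dcat: <http://www.w3.org/ns/dcat#> .
<http://example.org/dataset/1> a dcat:Dataset .  # no dct:title, should fail
"""

conforms, _, report = validate(
    Graph().parse(data=data_ttl, format="turtle"),
    shacl_graph=Graph().parse(data=shapes_ttl, format="turtle"),
)
print(conforms)  # False
print(report)
```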
TAP to SHACL example
Last week I posted Application Profile to Validation with TAP to SHACL about converting a DCMI Tabular Application Profile (DC TAP) to SHACL in order to validate instance data. I ended by saying that I needed more examples in order to test that it worked: that is, to check not only that the SHACL is valid, but also that it validates / raises errors as expected when used with instance data.
Continue reading
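The kind of testing I have in mind, sketched below with pySHACL and minimal shapes and data invented for the purpose (none of this is taken from the post itself): a known-good instance should conform and a known-bad one should not.

```python
# Sketch of the testing idea with invented shapes and data: generated
# SHACL should pass known-good instances and flag known-bad ones.
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:BookShape a sh:NodeShape ;
    sh:targetClass ex:Book ;
    sh:property [ sh:path ex:title ; sh:minCount 1 ] .
""", format="turtle")

good = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:b1 a ex:Book ; ex:title "A Title" .
""", format="turtle")

bad = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:b2 a ex:Book .
""", format="turtle")

assert validate(good, shacl_graph=shapes)[0] is True   # conforms
assert validate(bad, shacl_graph=shapes)[0] is False   # violation raised
print("SHACL behaves as expected on both test instances")
```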
Application Profile to Validation with TAP to SHACL
Over the past couple of years I have been part of the Dublin Core Application Profile Interest Group creating the DC Tabular Application Profile (DC-TAP) specification. I described DC-TAP in a post about a year ago as a “human-friendly approach that also lends itself to machine processing”; in this post I’ll explore a little of how it lends itself to machine processing.
Continue reading
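The starting point is that a DC-TAP profile is just a CSV file, so a stock CSV reader gets you from profile to processable rows in a few lines. In this sketch the column headings are DC-TAP’s own; the two rows are an invented fragment.

```python
# A DC TAP profile is plain CSV, so machine processing can start with a
# stock CSV reader. Column headings are DC-TAP's; the rows are invented.
import csv
import io

tap_csv = """shapeID,propertyID,mandatory,repeatable,valueNodeType,valueDataType
BookShape,dct:title,true,false,literal,xsd:string
BookShape,dct:creator,true,true,IRI,
"""

for row in csv.DictReader(io.StringIO(tap_csv)):
    print(row["shapeID"], row["propertyID"], "mandatory:", row["mandatory"])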
SHACL, when two wrongs make a right
I have been working with SHACL for a few months in connexion with validating RDF instance data against the requirements of application profiles. There’s a great validation tool, created as part of the JoinUp Interoperability Test Bed, that lets you upload your SHACL rules and a data instance and tests the latter against the former. But be aware: some errors in your SHACL can lead to instance data passing the tests when it shouldn’t; this isn’t an error in the tool, just a case of blind logic: the program doing what you tell it to, regardless of whether that’s what you want it to do.
Continue reading
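Here’s a minimal sketch of the sort of trap I mean, with shapes and data invented for illustration: a typo in sh:targetClass means the shape selects no focus nodes at all, so nothing is checked and even broken data “conforms”.

```python
# Invented illustration of silent SHACL failure: a typo in
# sh:targetClass means the shape never fires, so bad data "conforms".
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Persno ;             # typo: should be ex:Person
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:p1 a ex:Person .                        # has no ex:name
""", format="turtle")

conforms, _, _ = validate(data, shacl_graph=shapes)
print(conforms)  # True: no focus nodes were selected, nothing was checked
```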
When RDF breaks records
In talking to people about modelling metadata I’ve picked up on a distinction mentioned by Stuart Sutton between entity-based modelling, typified by RDF and graphs, and record-based structures, typified by XML; however, I don’t think making this distinction alone is sufficient to explain the difference, let alone why it matters. I don’t want to get into the pros and cons of either approach here, just give a couple of examples of where something that works in a monolithic, hierarchical record falls apart when the properties and relationships for each entity are described separately and those descriptions are put into a graph. These are especially relevant when people familiar with XML or JSON start using JSON-LD. One of the great things about JSON-LD is that you can use instance data as if it were plain JSON, without really paying much regard to the “LD” part; that’s not true when designing specs, because design choices that would be fine in a JSON record will not work in a linked data graph. A small illustration of what I mean is sketched below.
Continue reading
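This is my own invented example, not one from the post: two records that are unambiguous on their own merge into a graph in which the record boundary, and anything that depended on it, is gone.

```python
# Invented illustration: two self-contained "records" about the same
# entity merge into one graph, and the record boundary disappears.
from rdflib import Graph

record_2019 = """
@prefix ex: <http://example.org/> .
ex:alice ex:employer ex:acme .      # in this record: her employer *now*
"""

record_2021 = """
@prefix ex: <http://example.org/> .
ex:alice ex:employer ex:globex .    # likewise, in this record
"""

g = Graph()
g.parse(data=record_2019, format="turtle")
g.parse(data=record_2021, format="turtle")

# The merged graph just says Alice has two employers; nothing records
# which statement came from which record, or which is current.
for s, p, o in g:
    print(s, p, o)
```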
Thoughts on IEEE ILR
I was invited to present as part of a panel for a meeting of the IEEE P1484.2 Integrated Learner Records (ILR) working group discussing issues around the “payload” of an ILR, i.e. the description of what someone has achieved. For context, I followed Kerri Lemoie, who presented on the work happening in the W3C VC-Ed Task Force on Modeling Educational Verifiable Credentials, which is currently the preferred approach. Here’s what I said:
Continue reading
Strings to things in context
As part of work to convert plain JSON records to proper RDF in JSON-LD I often want to convert a string value to a URI that identifies a thing (a real-world concrete thing or a concept).
Continue reading
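The basic trick, sketched below with names and URIs invented for illustration: a JSON-LD context can declare a property with "@type": "@vocab", so that its string values are looked up as terms in the context and expanded to URIs. I believe rdflib’s JSON-LD parser handles this, though it’s worth checking with your version.

```python
# Sketch of string-to-thing coercion via a JSON-LD context: the plain
# JSON value "physics" expands to a URI. All names/URIs are invented.
import json
from rdflib import Graph

doc = {
    "@context": {
        "ex": "http://example.org/",
        "subject": {"@id": "ex:subject", "@type": "@vocab"},
        "physics": "ex:Physics",   # term definition: string -> URI
    },
    "@id": "ex:course1",
    "subject": "physics",
}

g = Graph().parse(data=json.dumps(doc), format="json-ld")
for s, p, o in g:
    print(s, p, o)  # the object is the URI ex:Physics, not a string
```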
JDX: a schema for Job Data Exchange
[This rather long blog post describes a project that I have been involved with through consultancy with the U.S. Chamber of Commerce Foundation. Writing this post was funded through that consultancy.]
The U.S. Chamber of Commerce Foundation has recently proposed a modernized schema for job postings, the Job Data Exchange (JDX) JobSchema+, based on the work of HR Open and Schema.org. The hope is that JDX JobSchema+ will not just facilitate the exchange of data relevant to jobs, but will do so in a way that helps bridge the various other standards used by relevant systems.

The aim of JDX is to improve the usefulness of job data, including the signalling around jobs, by addressing such questions as: what jobs are available in which geographic areas? What are the requirements for working in these jobs? What are the rewards? What are the career paths? This information needs to be communicated not just between employers, their recruitment partners and potential job applicants, but also to education and training providers, so that they can create learning opportunities that give their students skills that will be valuable in their future careers.

Job seekers empowered with a greater quantity and quality of job data through job postings may secure better-fitting employment faster, and hold it for longer, thanks to improved matching. Preventing wasted time and hardship may be particularly impactful for people whose job searches are less well resourced, and for those whose limited flexibility makes them dependent on details that are often missing from postings, such as schedule, exact location and security clearance requirements; these are among the properties that JDX gives employers the opportunity to include so that they can be identified easily and quickly. In short, the data should be available to anyone involved in the talent pipeline. This broad scope poses a problem that JDX also seeks to address: different systems within the talent pipeline data ecosystem use different data standards, so how can we ensure that the signalling is intelligible across the whole ecosystem?
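As a flavour of the kind of signalling involved, here is a sketch of a posting using the schema.org JobPosting terms that JDX builds on, not the JDX JobSchema+ vocabulary itself; the property names (workHours, securityClearanceRequirement, jobLocation) are schema.org’s, and all the values are invented.

```python
# Sketch using schema.org JobPosting terms (which JDX builds on), not
# the JDX JobSchema+ vocabulary itself; all values are invented.
import json

posting = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Warehouse Operative",
    "workHours": "Mon-Fri, 06:00-14:00",     # schedule, often missing
    "securityClearanceRequirement": "None",  # check schema.org for status
    "jobLocation": {
        "@type": "Place",
        "address": {"@type": "PostalAddress",
                    "addressLocality": "Springfield"},
    },
}

print(json.dumps(posting, indent=2))
```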