Proposed strategy for semantics in RDF* and Property Graphs

Graph databases such as Neo4J are gaining in popularity. These are in many ways comparable to RDF databases (triplestores), but I will highlight three differences:

  1. The underlying datamodel in most graph databases is a Property Graph (PG). This means that information can be directly attached to edges. In RDF this can only be done indirectly via reification, or reification-like models, or named graphs.
  2. RDF is based on open standards, and comes with a standard query language (SPARQL), whereas a unified set of standards has yet to arrive for PGs.
  3. RDF has a formal semantics, and languages such as OWL can be layered on providing more expressive semantics.

RDF* (and its accompanying query language SPARQL*) is an attempt to bring PGs into RDF, thus providing an answer for points 1-2. More info can be found in this post by Olaf Hartig.

You can find more details in that post and in related docs, but briefly: RDF* adds syntax for attaching properties directly onto edges, e.g.

<<:bob foaf:friendOf :alice>> ex:certainty 0.9 .

This has a natural visual cognate:

[Figure: the statement above drawn as a friendOf edge from :bob to :alice, with the certainty value attached to the edge]

We can easily imagine building this out into a large graph of friend-of connections, or connecting other kinds of nodes, and keeping additional useful information on the edges.
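For readers coming from the property graph side, the same annotated edge might be sketched in memory as follows. This is a plain-Python illustration of the data model, not a real Neo4j API; the structure and field names are hypothetical.

```python
# A minimal property-graph sketch of the RDF* example above:
# nodes carry labels, and each edge carries its own key-value properties.
nodes = {"bob": {"label": "Person"}, "alice": {"label": "Person"}}

edges = [
    {
        "subject": "bob",
        "predicate": "friendOf",
        "object": "alice",
        "properties": {"certainty": 0.9},  # data attached directly to the edge
    }
]

def edge_properties(subject, predicate, obj):
    """Look up the properties attached to a given edge, if any."""
    for e in edges:
        if (e["subject"], e["predicate"], e["object"]) == (subject, predicate, obj):
            return e["properties"]
    return None
```

The point is that the certainty lives on the edge itself, with no auxiliary node needed.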

But what about the 3rd item, semantics?

What about semantics?

For many in both linked data/RDF and in graph database/PG camps, this is perceived as a minor concern. In fact you can often find RDF people whinging about OWL being too complex or some such. The “semantic web” has even been rebranded as “linked data”. But in fact, in the life sciences many of us have found OWL to be incredibly useful, and being able to clearly state what your graphs mean has clear advantages.

OK, but then why not just use what we have already? OWL-DL already has a mapping to RDF, and any document in RDF is automatically an RDF* document, so problem solved?

Not quite. There are two issues with continuing the status quo in the world of RDF* and PGs:

  1. The mapping of OWL to RDF can be incredibly verbose and leads to unintuitive graphs that inhibit effective computation.
  2. OWL is not the only fruit. It is great for the use cases it was designed for, but there are other modes of inference and other frameworks beyond first-order logic that people care about.

Issues with existing OWL to RDF mapping

Let’s face it, the existing mapping is pretty ugly. This is especially true for life-science ontologies, which are typically construed as relational graphs, where edges are formally SubClassOf-SomeValuesFrom axioms. See the post on obo json for more discussion of this. The basic idea is that in OWL, object properties connect individuals (e.g. my left thumb is connected to my left hand via part-of). In contrast, classes are not connected directly via object properties; rather, they are related via subClassOf and class expressions. It is not meaningful in OWL to say “finger (class) part_of hand (class)”. Instead we say “all instances of finger are part_of some x, where x is an instance of a hand”. In Manchester Syntax this has the compact form

Finger SubClassOf Part_of some Hand

This is translated to RDF as

:Finger rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty :part_of ;
    owl:someValuesFrom :Hand
] .

As an example, consider 3 classes in an anatomy ontology, finger, hand, and forelimb, all connected via part-ofs (i.e. every finger is part of some hand, and every hand is part of some forelimb). This looks sensible when we use a native OWL syntax, but when we encode it as RDF we get a monstrosity:


Fig2 (A) two axioms written in Manchester Syntax describing anatomical relationship between three structures (B) corresponding RDF following official OWL to RDF mapping, with 4 triples per existential axiom, and the introduction of two blank nodes (C) How the axioms are conceptualized by ontology developers, domain experts and how most browsers render them. The disconnect between B and C is an enduring source of confusion among many.
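To make the blow-up concrete, here is a sketch in Python of the official expansion, with triples as plain tuples and blank nodes as generated IDs. This is an illustration of the mapping's shape, not a conformant OWL serializer.

```python
import itertools

# Fresh blank-node IDs; in real RDF these would be anonymous bnodes.
_bnodes = itertools.count()

def existential_axiom_to_triples(sub_class, prop, filler):
    """Expand 'sub_class SubClassOf prop some filler' into the four triples
    (plus one fresh blank node) required by the official OWL-to-RDF mapping."""
    b = f"_:b{next(_bnodes)}"
    return [
        (sub_class, "rdfs:subClassOf", b),
        (b, "rdf:type", "owl:Restriction"),
        (b, "owl:onProperty", prop),
        (b, "owl:someValuesFrom", filler),
    ]

# The two axioms from the figure become eight triples and two blank nodes.
triples = (existential_axiom_to_triples(":Finger", ":part_of", ":Hand")
           + existential_axiom_to_triples(":Hand", ":part_of", ":Forelimb"))
```

Two whiteboard edges, eight triples: the 4x expansion is exactly what panel B of the figure shows.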

This ugliness was not the result of some kind of perverse decision by the designers of the OWL specs, it’s a necessary consequence of the existing stack which bottoms out at triples as the atomic semantic unit.

In fact, in practice many people employ some kind of simplification, bypassing the official mapping and storing the edges as simple triples, even though this is semantically invalid. We can see this for example in how Wikidata loads OBOs into its triplestore. This can cause confusion: for example, WD stores reciprocal inverse axioms (e.g. part-of, has-part) even though these are meaningless when collapsed to simple triples.

I would argue there is an implicit contract when we say we are using a graph-based formalism: the structures in our model should correspond to the kinds of graphs we draw on whiteboards when representing an ontology or knowledge graph, and to the kinds of graphs that are useful for computation. The current mapping violates that implicit contract, usually causing a lot of confusion.

It has pragmatic implications too. Writing a SPARQL query that traverses a graph like the one in (B), following certain edge types but not others (one of the most common uses of ontologies in bioinformatics), is a horrendous task!
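To see why, here is a sketch of the join logic such a query has to perform, using triples as plain tuples (illustrative only; in SPARQL, each single hop similarly becomes a multi-triple pattern threaded through a blank node).

```python
def restriction_neighbors(triples, cls, prop):
    """Under the official mapping, following one 'edge' means a three-way
    join through a blank node: cls rdfs:subClassOf ?b, ?b owl:onProperty prop,
    ?b owl:someValuesFrom ?d."""
    out = []
    for s, p, o in triples:
        if s == cls and p == "rdfs:subClassOf":
            if (o, "owl:onProperty", prop) in triples:
                out += [d for b, q, d in triples
                        if b == o and q == "owl:someValuesFrom"]
    return out

def direct_neighbors(edges, cls, prop):
    """With edges stored directly, the same query is one pattern match."""
    return [o for s, p, o in edges if s == cls and p == prop]

# The finger axiom from Fig 2B, in the official four-triple encoding:
triples = [
    (":Finger", "rdfs:subClassOf", "_:b0"),
    ("_:b0", "rdf:type", "owl:Restriction"),
    ("_:b0", "owl:onProperty", ":part_of"),
    ("_:b0", "owl:someValuesFrom", ":Hand"),
]
```

Multiply the three-way join by every hop of a path query and the pain compounds quickly.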

OWL is not the only knowledge representation language

The other reason not to stick with the status quo for semantics for RDF* and PGs is that we may want to go beyond OWL.

OWL is fantastic for the things it was designed for. In the life sciences, it is vital for automatic classification and semantic validation of large ontologies (see half of the posts in this blog site). It is incredibly useful for checking the biological validity of complex instance graphs against our encoded knowledge of the world.

However, not everything we want to say in a Knowledge Graph (KG) can be stated directly in OWL. OWL-DL is based on a fragment of first order logic (FOL); there are certainly things not in that fragment that are useful, but often we have to go outside strict FOL altogether. Much of biological knowledge is contextual and probabilistic. A lot of what we want to say is quantitative rather than qualitative.

For example, when relating a disease to a phenotype (both of which are conventionally modeled as classes, and thus not directly linkable via a property in OWL), it is usually false to say “every person with this disease has this phenotype“. We can invent all kinds of fudges for this – BFO has the concept of a disposition, but this is just a hack for not being able to state probabilistic or quantitative knowledge more directly.

A proposed path forward for semantics in Property Graphs and RDF*

RDF* provides us with an astoundingly obvious way to encode at least some fragment of OWL in a more intuitive way that preserves the natural graph-like encoding of knowledge. Rather than introduce additional blank nodes, as in the current OWL to RDF mapping, we simply push the semantics onto the edge!

Here is an example of how the axioms in the figure above might look in RDF*:

<<:finger :part-of :hand>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom .
<<:hand :part-of :forelimb>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom .

I am assuming the existence of a vocabulary called owlstar here – more on that in a moment.

In any native visualization of RDF* this will end up looking like Fig1C, with the semantics adorning the edges where they belong. For example:


Proposed owlstar mapping of an OWL subclass restriction. This is clearly simpler than the corresponding graph fragment in 2B. While the edge properties (in square brackets) may be too abstract to show an end user (or even a bioinformatician performing graph-theoretic operations), the core edge is meaningful and corresponds to how an anatomist or ordinary person might think of the relationship.

Maybe this is all pretty obvious, and many people loading bio-ontologies into either Neo4j or RDF end up treating edges as edges anyway. You can see the mapping we use in our SciGraph Neo4J OWL Loader, which is used by both Monarch Initiative and NIF Standard projects. The OLS Neo4J representation is similar. Pretty much anyone who has loaded the GO into a graph database has done the same thing, ignoring the OWL to RDF mapping. The same goes for the current wave of Knowledge Graph embedding based machine learning approaches, which typically embed a simpler graphical representation.

So problem solved? Unfortunately not: everyone is doing this differently, and is essentially throwing out OWL altogether. We lack a standard way to map OWL into Property Graphs, so everyone invents their own. The same is true for people using RDF stores: groups often have their own custom OWL mapping that is less verbose. In some cases this is semantically dubious, as is the case for the Wikidata mapping.

The simple thing is for everyone to converge on a common standard mapping, and RDF* seems a good foundation. Even if you are using plain RDF, you could follow this standard and choose to map edge properties to reified nodes, or to named graphs, or to the Wikidata model. And if you are using a graph database like Neo4J, there is a straightforward mapping to edge properties.
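As a sketch of what such a fallback might look like, here is a plain-Python rewrite of one annotated RDF* edge into classical RDF reification (the owlstar names are from the proposal above, and the helper itself is illustrative, not part of any standard):

```python
import itertools

_stmts = itertools.count()

def annotated_edge_to_reification(s, p, o, annotations):
    """Rewrite an RDF* annotated edge <<s p o>> into plain RDF using
    classical reification: a statement node described by rdf:subject,
    rdf:predicate and rdf:object, which carries the edge annotations."""
    stmt = f"_:stmt{next(_stmts)}"
    triples = [
        (s, p, o),  # keep the asserted edge itself
        (stmt, "rdf:type", "rdf:Statement"),
        (stmt, "rdf:subject", s),
        (stmt, "rdf:predicate", p),
        (stmt, "rdf:object", o),
    ]
    for ann_p, ann_o in annotations.items():
        triples.append((stmt, ann_p, ann_o))
    return triples

triples = annotated_edge_to_reification(
    ":finger", ":part-of", ":hand",
    {"owlstar:hasInterpretation": "owlstar:SubClassOfSomeValuesFrom"})
```

The named-graph and Wikidata-model variants would differ only in where the statement node lives; the key point is that all three can be generated mechanically from the same RDF* source.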

I will call this mapping OWL*, and it may look something like this:

RDF*:
<<?c ?p ?d>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom .
OWL interpretation: ?c SubClassOf ?p some ?d

RDF*:
<<?c ?p ?d>> owlstar:hasInterpretation owlstar:SubClassOfQCR ; owlstar:cardinality ?n .
OWL interpretation: ?c SubClassOf ?p exactly ?n ?d

RDF*:
<<?c ?p ?d>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom ; owlstar:subjectContextProperty ?cp ; owlstar:subjectContextFiller ?cf .
OWL interpretation: (?c and ?cp some ?cf) SubClassOf ?p some ?d

Note that the core of each of these mappings is a single edge/triple between class c, class d, and an edge label p. The first row is a standard existential restriction, common to many ontologies. The second row is for statements such as ‘hand has-part 5 fingers’, which is still essentially a link between a hand concept and a finger concept. The third is for a GCI, an advanced OWL construct which turns out to be quite intuitive and useful at the graph level, where we are essentially contextualizing the statement, e.g. in developmentally normal adult humans (context), hand has-part 5 fingers.
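The rows can be read as a deterministic translation from an annotated edge to a Manchester-syntax axiom. Here is a sketch in Python, using the owlstar property names from the examples above; all of these names are part of the proposal, not an existing standard, and the context example is hypothetical.

```python
def owlstar_edge_to_manchester(s, p, o, ann):
    """Translate one RDF* edge plus owlstar edge-properties into a
    Manchester-syntax axiom string, following the mapping table rows."""
    interp = ann.get("owlstar:hasInterpretation")
    if interp == "owlstar:SubClassOfSomeValuesFrom":
        subject = s
        # Row 3: an optional subject context turns the axiom into a GCI.
        cp = ann.get("owlstar:subjectContextProperty")
        cf = ann.get("owlstar:subjectContextFiller")
        if cp and cf:
            subject = f"({s} and {cp} some {cf})"
        return f"{subject} SubClassOf {p} some {o}"
    if interp == "owlstar:SubClassOfQCR":
        n = ann["owlstar:cardinality"]
        return f"{s} SubClassOf {p} exactly {n} {o}"
    raise ValueError(f"unknown interpretation: {interp}")
```

Crucially the inverse translation is just as mechanical, which is what makes a lossless round trip between OWL and the edge-centric form plausible.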

When it comes to a complete encoding of all of OWL, there may be decisions to be made as to when to introduce blank nodes vs. cramming as much as possible into edge properties (e.g. for logical definitions), but even having a standard way of encoding subclass-plus-quantified-restriction axioms would be a huge boon.

Bonus: Explicit deferral of semantics where required

Many biological relationships expressed in natural language in forms such as “Lmo-2 binds to Elf-2” or “crocodiles eat wildebeest” can cause formal logical modelers a great deal of trouble. See for example “Lmo-2 interacts with Elf-2”: On the Meaning of Common Statements in Biomedical Literature (also slides), which lays out the different ways these seemingly straightforward statements about classes can be modeled. This is a very impressive and rigorous work (I will have more to say on how this aligns with GO-CAM in a future post), and ends with an impressive Wall of Logic:


Dense logical axioms proposed by Schulz & Jansen for representing biological interactions

This is all well and good, but when it comes to storing the biological knowledge in a database, the majority of developers are going to expect to see this:


protein interaction represented as a single edge connecting two nodes, as represented in every protein interaction database

And this is not due to some kind of semantic laziness on their part: representing biological interactions using this graphical formalism (whether we are representing molecular interactions or ecological interactions) allows us to take advantage of powerful graph-theoretic algorithms for analyzing data, which are frankly much more useful than anything we can do with a dense FOL representation.

I am sure this fact is not lost on the authors of the paper who might even regard this as somewhat trivial, but the point is that right now we don’t have a standard way of serializing more complex semantic expressions into the right graphs. Instead we have two siloed groups, one from a formal perspective producing complex graphs with precise semantics, and the other producing useful graphs with no encoding of semantics.

RDF* gives us the perfect foundation for being able to directly represent the intuitive biological statement in a way that is immediately computationally useful, and to adorn the edges with additional triples that more precisely state the desired semantics, whether it is using the Schulz FOL or something simpler (for example, a simple some-some statement is logically valid, if inferentially weak here).

Beyond FOL

There is no reason to have only a single standard for specifying semantics for RDF* and PGs. As hinted in the initial example, there could be a vocabulary or series of vocabularies for making probabilistic assertions, either as simple assignments of probabilities or frequencies, e.g.

<<:RhinovirusInfection :has-symptom :RunnyNose>> probstar:hasFrequency 0.75 .

or more complex statements involving conditional probabilities between multiple nodes (e.g. probability of symptom given disease and age of patient), allowing encoding of ontological Bayesian networks and Markov networks.

We could also represent contextual knowledge, using a ‘that’ construct borrowed from IKL:

<<:clark_kent owl:sameAs :superman>> a ikl:that ; :believed-by :lois_lane .

which could be visually represented as:


Lois Lane believes Clark Kent is Superman. Here an edge has a link to another node, rather than simply to literals. Note that while this is possible in RDF*, in some graph databases such as Neo4j edge properties cannot point directly to nodes, only indirectly through key properties. In other hypergraph-based graph DBs a direct link is possible.

Proposed Approach

What I propose is a series of lightweight vocabularies such as my proposed OWL*, accompanied by mapping tables such as the one above. I am not sure if W3C is the correct approach, or something more bottom-up. These would work directly in concert with RDF*, and extensions could easily be provided to work with various ways to PG-ify RDF, e.g. reification, the Wikidata model, or named graphs.

The same standard could work for any PG database such as Neo4J. Of course, here we have the challenge of how best to encode IRIs in a framework that does not natively support them, but this is an orthogonal problem.

All of this would be non-invasive and unobtrusive for people already working with these technologies, as the underlying structures used to encode knowledge would likely not change, beyond additional adornments on edges. A perfect stealth standard!

It would help to have some basic tooling around this. I think the following would be straightforward and potentially very useful:

  • Implementation of the OWL* mapping of existing OWL documents to RDF* in tooling – maybe the OWLAPI, although we are increasingly looking to Python for our tooling (stay tuned to hear more on funowl).
  • This could also directly bypass RDF* and go directly to some PG representation, e.g. networkx in Python, or stored directly into Neo4J
  • Some kind of RDF* to Neo4J and SPARQL* to OpenCypher [which I assume will happen independently of anything proposed here]
  • An OWL-RL* reasoner that could demonstrate simple yet powerful and useful queries, e.g. property chaining in Wikidata

A rough sketch of this approach was posted on public-owl-dev to not much fanfare, but, umm, this may not be the right forum for this.

Glossing over the details

For a post about semantics, I am glossing over the semantics a bit, at least from a formal computer science perspective. Yes of course, there are some difficult details to be worked out regarding the extent to which existing RDF semantics can be layered on, and how to make these proposed layers compatible. I’m omitting details here to try and give as simple an overview as possible. And it also has to be said, one has to be pragmatic here. People are already making property graphs and RDF graphs conforming to the simple structures I’m describing here. Just look at Wikidata and how it handles (or rather, ignores) OWL. I’m just the messenger here, not some semantic anarchist trying to blow things up. Rather than worrying about whether such and such a fragment of FOL is decidable (which, let’s face it, is not that useful a property in practice), let’s instead focus on coming up with pragmatic standards that are compatible with the way people are already using technology!

 

 

 

 

 

 


Never mind the logix: taming the semantic anarchy of mappings in ontologies

Mappings between ontologies, or between an ontology and an ontology-like resource, are a necessary fact of life when working with ontologies. For example, GO provides mappings to external resources such as KEGG, MetaCyc, RHEA, EC, and many others. Uberon (a multi-species anatomy ontology) provides mappings to species-specific anatomy ontologies like ZFA, FMA, and also to more specialized resources such as the Allen Brain Atlases. These mappings can be used for a variety of purposes, such as data integration – data annotated using different ontologies can be ‘cross-walked’ to use a single system.

Oxo Mappings: mappings between ontologies and other resources, visualized using OxO, with UBERON mapping sets highlighted.

Ontology mapping is a problem. With N resources, each providing its own mappings to the others, we have the potential for N^2-N sets of mappings. These are expensive to produce and maintain, inherently error-prone, and frustrating for users when mappings do not globally agree. With the addition of third-party mapping providers, the number of combinations increases further.

One approach is to make an ‘uber-ontology’ that unifies the field, and do all mappings through this (reducing the number of mappings to N, and inferring pairwise mappings). But sometimes this just ends up producing another resource that needs to be mapped. And so the cycle continues.


N^2 vs Uber. With 4 ontologies, we have 12 sets of mappings (each edge denotes 2 sets of mappings, since reciprocal calls may not agree). With the Uber approach we reduce this to 4, and can infer the pairwise mappings (inferred mapping sets as dotted lines). However, the Uber may become another resource meaning we now have 20 mappings.
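The counting in the caption is easy to verify (a trivial sketch):

```python
def pairwise_mapping_sets(n):
    """Every ordered pair of distinct resources is a mapping set; reciprocal
    sets may disagree, so both directions count: n^2 - n."""
    return n * n - n

def uber_mapping_sets(n):
    """With an uber-ontology, each of the n resources maintains one mapping
    set to the uber; all pairwise mappings are then inferred."""
    return n

# 4 ontologies: 16 - 4 = 12 pairwise sets, vs 4 via the uber approach.
# If the uber itself becomes a 5th resource to map to: 25 - 5 = 20 sets.
```

The quadratic-vs-linear gap is modest at 4 resources but decisive at the scale of OBO, where dozens of ontologies overlap.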

Ideally we would have less redundancy and more cooperation, reducing the need for mappings. The OBO Foundry is based on the idea of groups coordinating and agreeing on how a domain is to be carved up, reducing redundancy, and leading to logical relationships (not mappings) between classes. For example, CHEBI and metabolic branches of GO cover different aspects of the same domain. Rather than mapping between classes, we have logical relationships, such as GO:serine biosynthesis has-output CHEBI:serine.

Even within OBO, mappings can be useful. Formally, Uberon is orthogonal to species-specific anatomy ontologies such as ZFA: classes in Uberon are formally treated as superclasses of ZFA classes, so these links are not really ‘mappings’. But for practical purposes it can help to treat them the same way we treat mappings between an OBO class and an entry in an outside resource, because people want to operate on them in the same way as they do other mappings.

Ontology mapping is a rich and active field, encompassing a large variety of techniques, leveraging lexical properties or structural properties of the ontology to automate or semi-automate mappings. See the Ontology Alignment Evaluation Initiative for more details.

I do not intend to cover alignment algorithms here, rich and interesting a topic as this is (it may be the subject of a future post). I want to deal with the more prosaic issue of how we provide mappings to users, which is not something we do a great job of in OBO. This is also tied to the issue of how ontology developers maintain mappings for their ontology, which is also something we don’t do a great job of. I want to restrict this post to the subject of how we represent mappings in the ontology files we produce for the community; mappings can also be queried via APIs, but that is another topic.

This may not be the most thrilling topic, but I bet many of you have struggled with and cursed at this issue for a while. If so, your comments are most welcome here.

There are three main ways that mappings are handled in the OWL files we produce (including obo format files; obo format is another serialization of OWL), which can cause confusion. These are: direct logical axioms, xrefs, and skos. You might ask why we don’t just pick one. The answer is that each serves overlapping but distinct purposes. Also, there are existing infrastructures and toolchains that rely on doing it one way, and we don’t want to break things. But there are probably better ways of doing things; this post is intended to spur discussion on how to do this better.

Expressing Mappings in OWL

Option 1. Direct logical axioms

OWL provides constructs that allow us to unambiguously state the relationship between two things (regardless of whether the things are in the same ontology or two different ones). If we believe that GO:0000010 (trans-hexaprenyltranstransferase activity) and RHEA:20836 are equivalent, we can write this as:

GO:0000010 owl:equivalentClass RHEA:20836

This is a very strong statement to make, so we had better be sure! Fortunately RHEA makes the semantics of each of their entries very precise, with a precise CHEBI ID with a specific structure for each participant:
[Figure: a RHEA entry, with each participant assigned a precise CHEBI ID]

If instead we believe the GO class to be broader (perhaps if the reactants were broader chemical entities) we could say

RHEA:20836 rdfs:subClassOf GO:0000010

(there is no superClassOf construct in OWL, so we must express this as the semantically equivalent structural form with the narrower class first).

In this case, the relationship is equivalence. Note that GO and RHEA curators have had many extensive discussions about the semantics of their respective resources, so we can be extra sure.

Sometimes the relationship is more nuanced, but if we understand the OWL interpretation of the respective classes we can usually write the relationship in a precise an unambiguous way. For example, the Uberon class heart is species-agnostic, and encompasses the 4 chambered heart of mammals as well as simpler structures found in other vertebrates (it doesn’t encompass things like the dorsal vessel of Drosophila, but there is a broader class of circulatory organ for such things). In contrast the Zebrafish Anatomy (ZFA) class with the same name ‘heart’ only covers Danio.

If you download the uberon OWL bridging axioms for ZFA, you will see this is precisely expressed as:

ZFA:0000114 EquivalentTo (UBERON:0000948 and part_of some NCBITaxon:7954)

(switching to Manchester syntax here for brevity)

i.e. the ZFA heart class is the same as the Uberon heart class when that heart is part of a Danio. In Uberon we call this axiom pattern a “taxon equivalence” axiom. Note that this axiom entails that the Uberon heart subsumes the ZFA heart.

Venn diagram illustrating intersection of uberon heart and all things zebrafish is the zebrafish heart
There are obvious advantages to expressing things directly as OWL logical axioms. We are being precise, and we can use OWL reasoners to both validate and to infer relationships without programming ad-hoc rules.

For example, imagine we were to make an axiom in Uberon that says every heart has two ventricles and two atria (we would not in fact do this, as Uberon is species-agnostic, and this axiom is too strong if the heart is to cover all vertebrates). ZFA may state that the ZFA class for heart has a single one of each. If we then include the bridging axiom above we will introduce an unsatisfiability. We will break ZFA’s heart. We don’t want to do this, as Uberon ❤ ZFA.

As another example, if we make a mistake and declare two distinct GO classes to be equivalent to the same RHEA class, then through the properties of transitivity and symmetry of equivalence, we infer the two GO classes to be equivalent.

Things get even more interesting when multiple ontologies are linked. Consider the following, in which the directed black arrows denote subClassOf, and the thick blue lines indicate equivalence axioms. Note that all mappings/equivalences are locally 1:1. Can you tell which entailments follow from this?

3 way equivalence sets

Answer: everything is entailed to be equivalent to everything else! It’s just one giant clique (this follows from the transitivity property of equivalence; as can be seen, anything can be connected by simply hopping along the blue lines). This is not an uncommon structure, as we often see a kind of “semantic slippage” where concepts shift slightly in concept space, leading to global collapse.
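This collapse is easy to reproduce with a union-find over the equivalence links. The sketch below uses six hypothetical classes across three ontologies, with every link locally 1:1, mirroring the figure.

```python
def equivalence_classes(entities, equiv_pairs):
    """Compute the partition induced by owl:equivalentClass assertions
    (symmetric + transitive closure) via union-find."""
    parent = {e: e for e in entities}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in equiv_pairs:
        parent[find(a)] = find(b)

    groups = {}
    for e in entities:
        groups.setdefault(find(e), set()).add(e)
    return list(groups.values())

# Three ontologies (A, B, C) with two classes each. Each ontology pair is
# linked 1:1, but the C-to-A links are 'slipped' by one class, so the
# closure merges all six classes into a single clique.
entities = ["A1", "A2", "B1", "B2", "C1", "C2"]
links = [("A1", "B1"), ("A2", "B2"),
         ("B1", "C1"), ("B2", "C2"),
         ("C1", "A2"), ("C2", "A1")]
cliques = equivalence_classes(entities, links)
```

No single link is wrong on its own; only the global closure reveals the collapse, which is why tools like ROBOT's reasoning step are needed to catch it.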

mappings between EC, GO, MetaCyc, Rhea, and KEGG

Above is another, more realistic example, in which we treat the mutual mappings between EC, GO, MetaCyc, RHEA, and KEGG as equivalences. Grey lines indicate mappings provided by individual sources. Although a mapping nominally means the two entries are the same, this cannot always be the case: as we follow links we traverse up and down the hierarchy, illustrating how ‘semantic slippage’ between similar resources leads to incoherence.

As we use ROBOT as part of the release process, we automatically detect this using the reason command, and the ontology editor can then fix the mappings.

Because equivalence means that any logical properties of one class can be substituted for the other, users can be confident in data integration processes. If we know the RHEA class has a particular CHEBI chemical as a participant, then the equivalent GO class will have the same CHEBI class as a participant. This is very powerful! We intend to use this strategy in the GO. Because RHEA is an expert curated database of Reactions, it doesn’t make sense for GO to replicate work in the leaf nodes of the GO MF hierarchy. Instead we declare the GO MF and RHEA classes as equivalent, and bring across expert curated knowledge, such as the CHEBI participants (this workflow is in progress, stay tuned).


Coming soon to GO: Axiomatization of reactions using CHEBI classes via RHEA

So why don’t we just express all mappings as OWL logical axioms and be done with it? Well, it’s not always possible to be this precise, and there may be additional pragmatic concerns. I propose that the following criteria SHOULD or MUST be satisfied when making an OWL logical axiom involving an external resource:

  1. Each entity in the external resource MUST have a URI, and that URI SHOULD be minted by the external resource itself rather than by a 3rd party.
  2. The external resource SHOULD have a canonical OWL serialization, maintained by the resource.
  3. That OWL serialization MUST be coherent and SHOULD accurately reflect the intent of the maintainer of that resource. This includes any upper ontology commitments.

The first criterion is fairly mundane but often a tripping point. You may have noticed that in the axioms above I wrote URIs in CURIE form (e.g. GO:0000010). This assumes the existence of prefix declarations in the same document, e.g.

Prefix GO: <http://purl.obolibrary.org/obo/GO_>
Prefix UBERON: <http://purl.obolibrary.org/obo/UBERON_>
Prefix ZFA: <http://purl.obolibrary.org/obo/ZFA_>
Prefix RHEA: <http://rdf.rhea-db.org/>

For any ontology that is part of OBO, or any ontology ‘born natively’ in OWL, the full URI is known. However, if we want to map to a resource like OMIM, do we use the URL that resolves to the website entry? These things often change (at one point they were NCBI URLs). Perhaps we use the identifiers.org URL? Or the n2t.net one? Unless we have consensus on these things, different groups will make different choices, and things won’t link up. It’s an annoying issue, but a very important and expensive one. It is outside the scope of this post, but important to bear in mind. See McMurry et al for more on the perils of identifiers.

The second and third criteria pertain to the semantics of the linked resource. Resources like MeSH take great care to state that they are not ontologies, so treating them as ontologies of OWL classes connected by subClassOf is not really appropriate (and gets you to some strange places). The same goes for UMLS, which contains cycles in its subClassOf graph. Even in cases where the external resource is an ontology (or believes itself to be one), can you be sure its maintainers are making the same ontological commitments as you?

This is important: in making an equivalence axiom, you are ‘injecting’ entailments into the external resource when all resources are combined (i.e. a global view). This could lead to global errors (i.e. errors that are only manifest when all resources are integrated). Or it could be seen as impolite to inject without commitment from the maintainers of the external resource.

Scenario: If I maintain an ontology of neoplasms, and I have axioms stating my neoplasms are BFO material entities, and I make equivalence axioms between my neoplasms and the neoplasm hierarchy in NCIT, I may be ignoring an explicit non-commitment about the nature of the neoplasm hierarchy in NCIT. This could lead to global errors, such as when we see that NCIT classifies Lynch syndrome in the neoplasm hierarchy (see figure). Also, if I were the NCIT maintainers, I might be a bit miffed about other people making ontological commitments on my behalf, especially if I don’t agree with them.

Example of injecting commitments. White boxes indicate current NCIT classes, arrows are OWL subClassOf edges. The yellow ontology insists the NCIT neoplasm is equivalent to its neoplasm, which is committed to be a material entity. The cyan ontology doesn’t care about neoplasm per se, and wants to make the NCIT class for generic disorder equivalent to its own genetic disease, which is committed to be a BFO disposition (BFO classes are black boxes), which is disjoint with material entity. As a result, the global ontology that results from merging these axioms is incoherent: HNCC and its subclass Lynch syndrome become unsatisfiable.

Despite these caveats, it can sometimes be really useful to ‘overstate’ and make explicit logical axioms even when the technical or semantic criteria are not met. These logical axioms can be very powerful for validation and data integration. However, I would recommend in general not distributing these overstated axioms with the main ontology. Instead they can be distributed as separate bridging axioms that must be explicitly included, with the bridge axioms and any caveats documented. An example of this is the bridge axioms from Uberon to MOD anatomy ontologies.

To be clear, this caveat does not apply to cases such as axioms that connect GO and CHEBI. First these are not even ‘mappings’ except in the broadest sense. And second, there is clarity and agreement on the semantics of the respective classes so we can hopefully be sure the axioms make sense and don’t inject unwanted inferences.

In summary, OWL logical axioms are very powerful, which can be very useful, but remember, with great power comes great responsibility.

Option 2. Use oboInOwl hasDbXref property

Before there was OWL, there was OBO-Format. And lo, OBO-Format gave us the xref. Well, not really: the xref was just an example of the long-standing tradition of database cross-references in bioinformatics. In bioinformatics we love minting new IDs. For any given gene you may have its ENSEMBL ID, its MOD or HGNC ID, its OMIM ID, its NCBI Gene/Entrez ID, and a host of other IDs in other databases. The other day I caught my cat minting gene IDs. It’s widespread. This necessitates a system of cross-references. These are rarely 1:1, since there are reasons for representations in different systems to diverge. The OBO-Format xref was for exactly the same use case. When GO started, there were already similar overlapping databases and classifications, including longstanding efforts like EC.


In the OWL serialization of OBO-Format (oboInOwl), this becomes an annotation assertion axiom using the oboInOwl:hasDbXref property. Many ontologies, such as GO, HPO, MONDO, UBERON, ZFA, DO, MP, and CHEBI, continue to use the xref as the primary way to express mappings, even though they are no longer tied to obo format for development.

Below is an example of a GO class with two xrefs, in OBO format:

[Term]
id: GO:0000010
name: trans-hexaprenyltranstransferase activity
namespace: molecular_function
def: "Catalysis of the reaction: all-trans-hexaprenyl diphosphate + isopentenyl diphosphate = all-trans-heptaprenyl diphosphate + diphosphate." [KEGG:R05612, RHEA:20836]
xref: KEGG:R05612
xref: RHEA:20836
is_a: GO:0016765 ! transferase activity, transferring alkyl or aryl (other than methyl) groups



The same thing in the OWL serialization:

<owl:Class rdf:about="http://purl.obolibrary.org/obo/GO_0000010">
    <rdfs:subClassOf rdf:resource="http://purl.obolibrary.org/obo/GO_0016765"/>
    <obo:IAO_0000115 rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Catalysis of the reaction: all-trans-hexaprenyl diphosphate + isopentenyl diphosphate = all-trans-heptaprenyl diphosphate + diphosphate.</obo:IAO_0000115>
    <oboInOwl:hasDbXref rdf:datatype="http://www.w3.org/2001/XMLSchema#string">KEGG:R05612</oboInOwl:hasDbXref>
    <oboInOwl:hasDbXref rdf:datatype="http://www.w3.org/2001/XMLSchema#string">RHEA:20836</oboInOwl:hasDbXref>
</owl:Class>


Note that the value of hasDbXref is always an OWL string literal (e.g. “RHEA:20836”). This SHOULD always be a CURIE-syntax identifier (i.e. prefixed), although note that any expansion to a full URI is generally ambiguous. The recommendation is that the prefix be registered somewhere like the GO db-xref prefixes or prefixcommons, but prefix registries may not agree on a canonical prefix (see McMurry et al.), leading to the need to repair prefixes when merging data, e.g. one group may use “MIM” and another “OMIM”.
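As a concrete illustration of this expansion-and-repair problem, here is a minimal Python sketch. The prefix map and the MIM/OMIM synonym table are illustrative assumptions, not a canonical registry:

```python
# Illustrative prefix map; a real pipeline would load this from a registry
# such as the GO db-xref prefixes or prefixcommons.
PREFIX_MAP = {
    "OMIM": "https://omim.org/entry/",
    "RHEA": "http://rdf.rhea-db.org/",
}

# Repairs for non-canonical prefixes encountered when merging sources.
PREFIX_SYNONYMS = {"MIM": "OMIM"}

def expand_curie(curie: str) -> str:
    """Expand a CURIE string like 'MIM:154700' to a full URI."""
    prefix, _, local_id = curie.partition(":")
    prefix = PREFIX_SYNONYMS.get(prefix, prefix)  # normalize, e.g. MIM to OMIM
    if prefix not in PREFIX_MAP:
        raise ValueError(f"Unregistered prefix in xref: {curie!r}")
    return PREFIX_MAP[prefix] + local_id

print(expand_curie("MIM:154700"))  # https://omim.org/entry/154700
print(expand_curie("RHEA:20836"))  # http://rdf.rhea-db.org/20836
```

This normalization step is what any tool has to do, implicitly or explicitly, whenever it merges xrefs from two sources that disagree on a canonical prefix.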

This all poses the question:

So what does xref actually mean?

The short answer is that it can mean whatever the provider wants it to mean. Often it means something like “these two things are the same”, but there is no guarantee a mapping means equivalence in the OWL sense, or is even 1:1. In fact an xref is sometimes stretched to cover other use cases. In GO, we have always xreffed between GO classes and InterPro: this means “any protein with this domain will have this function” (which is incredibly useful for functional annotation). Xrefs between GO and Reactome mean “this Reactome entry is an example of this GO class”. Some ontologies, like ORDO and MONDO, have annotations on their xrefs that attempt to provide additional metadata about the mapping, but this is not standardized. In the past, xrefs were used to connect phenotype classes to anatomy classes (e.g. for “abnormal X” terms); however, this usage has now largely been superseded by more precise logical axioms (see above) through projects like uPheno. In Uberon, an xref can connect equivalent classes, or taxon equivalents. Overall, xref is used very broadly, and can mean many things depending on unwritten rules.

This is SEMANTIC ANARCHY!

never mind the logix: picture of anarchist owl with anarchy symbol. ANARCHY IN THE ONTOLOGY [sex pistols font]

This causes some to throw their hands up in despair. However, many manage to muddle along. Usually xrefs are used consistently within an ontology for any given external resource. Ideally there is clear documentation for each set of mappings, but unfortunately this is not always the case. Many consumers of ontologies may be making errors and propagating information across xrefs that are not one-to-one or equivalent. In many scenarios this could result in erroneous propagation of gene functions, or erroneous propagation of information about a malignant neoplasm to its benign analog, which could have bad consequences.

Increasingly, ontologies publish more precise logical axioms alongside their xrefs (Uberon has always done this), but in practice the xrefs are more widely used, despite their issues.

How widely are they used? There are currently almost 1.5 million distinct hasDbXref values in OBO. 175 ontologies in OntoBee make use of hasDbXref annotations (this may be an overestimate due to imports). The ontologies with the most xrefs are PR, VTO, TTO, CHEBI, and MONDO (covering distinct proteins, taxa, and chemicals – areas where we would expect high identifier density). These have myriad uses inside multiple database pipelines and workflows, so even if a better solution than the xref is proposed, we can’t just drop xrefs, as this would break all of the things (that would be truly anarchic).

But it must also be acknowledged that xrefs are crusty and have issues, see this comment from Clement Jonquet for one example.

Option 3. Use SKOS vocabulary for mapping properties

In the traditional tale of Goldilocks and the three OWLs, Goldilocks tries three bowls of semantic porridge. The first is too strong, the second too weak, and the third one is just right. If the first bowl is OWL logical axioms, the second bowl is oboInOwl xrefs, the third bowl would be the Simple Knowledge Organization System (SKOS) mapping vocabulary.

This provides a hierarchy of mapping properties:

  • mappingRelation
    • closeMatch
      • exactMatch
    • broadMatch
    • narrowMatch
    • relatedMatch

These can be used to link SKOS concepts across different concept schemes. The exactMatch property is transitive and symmetric, but is still weaker than OWL equivalence as it lacks substitutability. The SKOS properties are axiomatized, allowing entailment. Note that broadMatch and narrowMatch are not transitive, but they entail the transitive properties skos:broaderTransitive and skos:narrowerTransitive respectively.
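To make those entailments concrete, here is a toy forward-chaining closure in Python. The triples and the compressed rule set are simplifying assumptions (in the actual SKOS axiomatization, broadMatch entails broaderTransitive via skos:broader, and the CURIEs here are made up); this is a sketch, not a SKOS reasoner:

```python
def skos_closure(triples):
    """Naive forward chaining over a small subset of SKOS entailment rules."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in inferred:
            if p == "exactMatch":
                new.add((o, "exactMatch", s))               # symmetry
                for (s2, p2, o2) in inferred:               # transitivity
                    if p2 == "exactMatch" and s2 == o and o2 != s:
                        new.add((s, "exactMatch", o2))
            if p == "broadMatch":
                new.add((s, "broaderTransitive", o))        # subproperty entailment
            if p == "broaderTransitive":
                for (s2, p2, o2) in inferred:               # transitivity
                    if p2 == "broaderTransitive" and s2 == o:
                        new.add((s, "broaderTransitive", o2))
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

facts = {("uberon:heart", "exactMatch", "zfa:heart"),
         ("zfa:heart", "broadMatch", "mesh:organ")}
closure = skos_closure(facts)
assert ("zfa:heart", "exactMatch", "uberon:heart") in closure      # by symmetry
assert ("zfa:heart", "broaderTransitive", "mesh:organ") in closure  # entailed
```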

Using SKOS mapping relations, we can map between an OBO ontology and MeSH without worrying about the lack of OWL semantics in MeSH. We can use exactMatch for 1:1 mappings, and closeMatch if we are less confident. We don’t have to worry about injecting semantics: it’s just a mapping!

Many people are like Goldilocks and find this to be just the right amount of semantics. But note that we can’t express things like our Uberon-ZFA heart relationship precisely here.

There are some other issues. SKOS doesn’t mix well with OWL, as the SKOS properties need to be object properties for the SKOS entailment rules to work, and this induces punning. See also SKOS and OWL, and the paper SKOS with OWL: Don’t be Full-ish! by Simon Jupp (I strongly approve of puns in paper titles). These outline some of the issues. However, for practical purposes I believe it is OK to mix SKOS and OWL.

It should also be noted that unlike oboInOwl xrefs, SKOS mapping relations should only be used between two URIs. This involves selecting a canonical URI for classes in a resource, which is not always easy (see notes on OWL above).

Where do we go now?

As I have hopefully shown, different representations of mappings serve different purposes. In particular, direct OWL axiomatization provides very precise semantics with powerful entailments, but its use sometimes involves overstepping and imposing ontological commitments. It also lacks a way to indicate fuzziness, e.g. when we want to express a non-1:1 mapping.

OboInOwl xrefs are somewhat surplus to requirements, given that we can express things a little more explicitly using SKOS while remaining just the right side of fuzziness. However, vast swathes of infrastructure will ignore SKOS and expect xrefs (usually in OBO format).

I want it all!

So why not include xrefs, skos AND owl direct axioms in the release of an ontology? Well we have started to do this in some cases!

In MONDO, we publish an OWL version that has OWL equivalence axioms connecting to external resources. These are left ‘dangling’. A lot of tools don’t deal with this too well, so we also make an obo version that excludes these logical axioms. However, we use the equivalence axioms in Monarch, for consistency checking and data integration.

In both the obo format and OWL editions, we include BOTH skos mappings AND xrefs, so clients can choose which of these they like. The xrefs are more popular, and are consumed in many pipelines. They are expressed as CURIE-style IDs rather than URIs, which is annoying for some purposes, but preferred for others. The skos mappings provide a bit more precision, allowing us to distinguish between close and exact mappings, and they connect full IRIs.

Note the xrefs in MONDO also communicate additional information through axiom annotations. These could potentially be put onto both the skos and the OWL axioms but we haven’t done that yet.

This is potentially confusing, so we do our best to document each product on the OBO page. We want to give a firm “service level agreement” to consumers of the different files.

For Uberon, we have always supported both xrefs and precise logical axioms (the latter downloaded from a separate file). For a while we attempted to communicate the semantics of the xref with a header in the obo file (the ‘treat-xrefs-as-X’ header tags in obo format), but no one much cared about these. Many folks just want xrefs and intuit what to do with them. We will also provide SKOS mappings in Uberon in the future.

So by being pluralistic and providing all 3 we can have our semantic cake and eat it. The downside here is that people may find the plethora of options confusing. There will need to be good documentation on which to use when. We will also need to extend tooling – e.g. add robot commands to generate the different forms, given some source of mappings and rules. This latter step is actually quite difficult due to the variety of ways in which ontology developers manage mappings in their ontologies (some may manage as xrefs; others as external TSVs; others pull them from upstream, e.g. as GO does for interpro2go).

Comments welcome!!! You can also comment on this ticket in the ontology metadata tracker.

Just give me my TSVs already

At the end of the day, a large number of users are confused by all this ontological malarkey and just want a TSV. It’s just 2 columns dude, not rocket science! Why do you people have to make it so complicated?

Unfortunately we don’t do a great job of providing TSVs in a consistent way. GO provides the mappings in a separate TSV-like format whose origins are lost in the mists of time, that is frankly a bit bonkers. Other ontologies will provide various ad-hoc TSVs of mappings but this is not done consistently across ontologies.

I feel bad about this and would really like to see a standard TSV export rolled out more universally. We have an open ticket in ROBOT; comments welcome here: https://github.com/ontodev/robot/issues/312

There are a few things that need to be decided on, e.g. do we keep it simple with two columns, include labels of concepts, or include additional metadata such as the type of mapping (e.g. a SKOS predicate)?
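One possible shape for such an export, as a Python sketch; the column names and the example rows are assumptions on my part, not an agreed ROBOT format:

```python
import csv
import io

# Hypothetical mapping records: subject, subject label, predicate, object, object label.
mappings = [
    ("UBERON:0000948", "heart", "skos:exactMatch", "ZFA:0000114", "heart"),
    ("UBERON:0000948", "heart", "skos:closeMatch", "MESH:D006321", "Heart"),
]

# Write a TSV with a header row plus one row per mapping.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerow(["subject_id", "subject_label", "predicate", "object_id", "object_label"])
writer.writerows(mappings)
tsv = buf.getvalue()
print(tsv)
```

The predicate column is what distinguishes this from a bare two-column dump: it lets a consumer filter to only the exact matches, which is precisely the information xrefs fail to carry.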


TSV? That’s so retro. This OWL is full of angle brackets. Is this 2005? The web is based on JSON

I have a post on that! https://douroucouli.wordpress.com/2016/10/04/a-developer-friendly-json-exchange-format-for-ontologies/

And there is also JSON-LD, which is semantically equivalent to any other OWL serialization.

So basically the syntax is not so relevant, the information in the JSON is the same, and we have the same choices of logical axiom, xref, or skos.
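For instance, the xref and SKOS information for our earlier GO example might be carried in a JSON-LD-style document like this (a sketch; the context, keys, and the RHEA URI expansion are my assumptions, not any particular standard):

```python
import json

# A JSON-LD-style document carrying both an xref (CURIE string literal)
# and a SKOS mapping (full URI) for the same class.
doc = {
    "@context": {
        "skos": "http://www.w3.org/2004/02/skos/core#",
        "oio": "http://www.geneontology.org/formats/oboInOwl#",
        "exactMatch": {"@id": "skos:exactMatch", "@type": "@id"},
        "hasDbXref": {"@id": "oio:hasDbXref"},
    },
    "@id": "http://purl.obolibrary.org/obo/GO_0000010",
    "hasDbXref": ["KEGG:R05612", "RHEA:20836"],
    "exactMatch": ["http://rdf.rhea-db.org/20836"],
}
print(json.dumps(doc, indent=2))
```

The choice between string-valued xrefs and URI-valued SKOS matches survives the change of syntax unchanged, which is the point: the representation decision is semantic, not syntactic.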

Summary

This is more than I intended to write on what seems like a simple matter of standardizing the representation of simple mappings. But like many things, it’s not quite so simple when you scratch beneath the surface. We have differences in how we write ID/URIs, differences in degrees of semantic strength, and a lot of legacy systems that expect things just so which always make things more tedious.

Maybe one day we won’t need mappings as everything will be OBO-ized, there will be no redundancy, and the relationship between any two classes will be explicit in the form of unambiguous axioms. Until that day it looks like we still need mappings, and there will be a need to provide a mix of xrefs, skos, and sometimes overstated OWL logical axioms.


Parting thoughts on prefixes

Converting between CURIE strings and full URIs is often necessary for integration. Usually this is done by some external piece of code, which can be annoying if you are doing everything in a declarative way in SPARQL. This is because the mapping between a CURIE and a URI is treated as syntactic by RDF tools; the CURIE isn’t a first-class entity (prefix declarations aren’t visible after parsing).

One thing I have started doing is including explicit prefix declarations using the SHACL vocabulary. Here is an example from the ENVO repo where we are mapping to non-OBO ontologies and classifications like SWEET, LTER:

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix obo: <http://purl.obolibrary.org/obo/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .

<http://purl.obolibrary.org/obo/envo/imports/prefixes.owl>
  a owl:Ontology ;
  rdfs:label "Prefix declarations"@en ;
  rdfs:comment "Prefixes used in xrefs."@en ;
  sh:declare [
    sh:prefix "SWEET" ;
    sh:namespace "http://sweetontology.net/" ;
  ] ;
  sh:declare [
    sh:prefix "LTER" ;
    sh:namespace "http://vocab.lternet.edu?tema=" ;
  ] ;
  sh:declare [
    sh:prefix "MEO" ;
    sh:namespace "http://purl.jp/bio/11/meo/" ;
  ] .


The nice thing about this is that it allows the prefixes to be introspected in SPARQL allowing interconversion between CURIE string literals and URIs. E.g. this SPARQL will generate SKOS triples from xrefs that have been annotated in a particular way:

prefix owl: <http://www.w3.org/2002/07/owl#>
prefix skos: <http://www.w3.org/2004/02/skos/core#>
prefix oio: <http://www.geneontology.org/formats/oboInOwl#>
prefix sh: <http://www.w3.org/ns/shacl#>

CONSTRUCT {
  ?c skos:exactMatch ?xuri
}
WHERE {
  ?ax owl:annotatedSource ?c ;
      owl:annotatedTarget ?x ;
      owl:annotatedProperty oio:hasDbXref ;
      oio:source "ENVO:equivalentTo" .

  bind( strbefore(?x, ":") as ?prefix )

  ?decl sh:prefix ?prefix ;
        sh:namespace ?ns .

  bind( strafter(?x, ":") as ?suffix )
  bind( uri(concat(?ns, ?suffix)) as ?xuri )
}