OntoTip: Don’t over-specify OWL definitions

This is one post in a series of tips on ontology development, see the parent post for more details.

A common mistake is to over-specify an OWL definition (a companion post will cover under-specification). While not technically wrong, over-specification loses you reasoning power, limiting your ability to auto-classify your ontology. Formally, what I mean by over-specifying here is: stating more conditions than are required for correct entailments.

One manifestation of this anti-pattern is the over-specified genus (this is where I disagree with Seppälä et al. on S3.1.1, use the genus proximus; see previous post). I will use a contrived example here, although there are many real ones. GO contains a class ‘Schwann cell differentiation’, with an OWL definition referencing ‘Schwann cell’ from the Cell Ontology (CL). I consider this logical definition to be neither over- nor under-specified:

‘Schwann cell differentiation’ EquivalentTo ‘cell differentiation’ and results-in-acquisition-of-features-of some ‘Schwann cell’

We also have a corresponding logical definition for the parent:

‘glial cell differentiation’ EquivalentTo ‘cell differentiation’ and results-in-acquisition-of-features-of some ‘glial cell’

The Cell Ontology (CL) contains the knowledge that Schwann cells are subtypes of glial cells, which allows us to infer that ‘Schwann cell differentiation’ is a subtype of ‘glial cell differentiation’. So far, so good (if you read the post on Normalization you should be nodding along). This definition does real work for us in the ontology: we infer the GO hierarchy based on the definition and classification of cells in CL. 

Now, imagine that in fact GO had an alternate OWL definition:

‘Schwann cell differentiation’ EquivalentTo ‘glial cell differentiation’ and results-in-acquisition-of-features-of some ‘Schwann cell’

This is not wrong, but it is far less useful. We want to be able to infer the glial cell parentage, rather than assert it. Asserting it violates DRY (the Don’t Repeat Yourself principle), as we implicitly repeat the assertion that Schwann cells are glial cells in GO (when the primary assertion belongs in CL). If one day the community decides that Schwann cells are not glial cells but neurons (OK, this example is not so realistic…), then we have to change this in two places. Having to change things in two places is definitely a bad thing.

I have seen this kind of genus over-specification in a number of different ontologies; it can be a side-effect of the harmful misapplication of the single-inheritance principle (see ‘Single inheritance considered harmful’, a previous post). It can also arise from tooling limitations: the NCIT neoplasm hierarchy has a number of examples of this, due to the tool originally used for authoring definitions.

Another related over-specification is including too many differentiae, which drastically limits the work a reasoner and your logical axioms can do for you. As a hypothetical example, imagine that we have a named cell type ‘hippocampal interneuron’, conventionally defined and used in the (trivial) sense of any interneuron whose soma is located in a hippocampus. Now let’s imagine that single-cell transcriptomics has shown that these cells always express genes A, B and C (OK, there are many nuances to integrating ontologies with single-cell data, but let’s make some simplifying assumptions for now).

It may be tempting to write a logical definition:

‘hippocampal interneuron’ EquivalentTo

  • interneuron AND
  • has-soma-location SOME hippocampus AND
  • expresses some A AND
  • expresses some B AND
  • expresses some C

This is not wrong per se (at least in our hypothetical world where hippocampal interneurons always express these genes), but the definition does less work for us. In particular, if we later include a cell type ‘hippocampus CA1 interneuron’, defined as any interneuron whose soma is located in the CA1 region of the hippocampus, we would like it to be classified under ‘hippocampal interneuron’. However, this will not happen unless we redundantly state the gene expression criteria for every such class, violating DRY.

The correct thing to do here is to use what is sometimes called a ‘hidden General Class Inclusion (GCI) axiom’, which is just a fancy way of saying that SubClassOf axioms (necessary conditions) can be mixed in with an equivalence axiom / logical definition:

‘hippocampal interneuron’ EquivalentTo interneuron AND has-soma-location SOME hippocampus

‘hippocampal interneuron’ SubClassOf expresses some A

‘hippocampal interneuron’ SubClassOf expresses some B

‘hippocampal interneuron’ SubClassOf expresses some C

In a later post, I will return to the concept of an axiom doing ‘work’, and provide a more formal definition that can be used to evaluate logical definitions. However, even without a formal metric, the concept of ‘work’ is intuitive to people who have experience using OWL logical definitions to derive hierarchies. These people usually test things iteratively in the reasoner as they go along, rather than simply writing an OWL definition and hoping it will work.

Another sign that you may be over-specifying logical definitions is when they are written for groups of similar classes, yet do not fit into any design pattern template.

For example, the cell differentiation branch of GO fits a standard pattern:

cell differentiation and results-in-acquisition-of-features-of some C

where C is any cell type. The over-specified definition does not fit this pattern.
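
To make the template idea concrete, here is a minimal sketch in plain Python (simple string templating, standing in for whatever design-pattern system an ontology actually uses, such as DOSDP-style templates). Every class in the branch can be generated from the one pattern, whereas the over-specified variant would need special-casing:

PATTERN = ("'cell differentiation' and "
           "results-in-acquisition-of-features-of some '{cell}'")

def differentiation_logical_definition(cell_label: str) -> str:
    """Generate the logical definition for a 'C differentiation' class."""
    return PATTERN.format(cell=cell_label)

print(differentiation_logical_definition("Schwann cell"))
print(differentiation_logical_definition("glial cell"))
# The over-specified form ("'glial cell differentiation' and ...") cannot be
# produced by this template without special-casing individual classes.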


Proposed strategy for semantics in RDF* and Property Graphs

Graph databases such as Neo4J are gaining in popularity. These are in many ways comparable to RDF databases (triplestores), but I will highlight three differences:

  1. The underlying datamodel in most graph databases is a Property Graph (PG). This means that information can be directly attached to edges. In RDF this can only be done indirectly via reification, or reification-like models, or named graphs.
  2. RDF is based on open standards, and comes with a standard query language (SPARQL), whereas a unified set of standards has yet to arrive for PGs.
  3. RDF has a formal semantics, and languages such as OWL can be layered on providing more expressive semantics.

RDF* (and its accompanying query language SPARQL*) is an attempt to bring PGs into RDF, thus providing an answer for points 1-2. More info can be found in this post by Olaf Hartig.

You can find more info in that post and in related docs, but briefly, RDF* adds syntax for attaching properties directly to edges, e.g.

<<:bob foaf:friendOf :alice>> ex:certainty 0.9 .

This has a natural visual cognate:

[Figure: the :bob foaf:friendOf :alice statement drawn as a graph, with the certainty value attached to the edge]

We can easily imagine building this out into a large graph of friend-of connections, or connecting other kinds of nodes, and keeping additional useful information on the edges.
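
As a minimal sketch of what this looks like on the property-graph side (using networkx purely as a stand-in for a PG database, with made-up people and certainty values), the extra information simply lives on the edge:

import networkx as nx

G = nx.MultiDiGraph()
# The edge itself carries the extra information; no reification is needed.
G.add_edge("bob", "alice", key="friendOf", certainty=0.9)
G.add_edge("alice", "carol", key="friendOf", certainty=0.6)

for subj, obj, pred, attrs in G.edges(keys=True, data=True):
    print(subj, pred, obj, attrs)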

But what about the 3rd item, semantics?

What about semantics?

For many in both linked data/RDF and in graph database/PG camps, this is perceived as a minor concern. In fact you can often find RDF people whinging about OWL being too complex or some such. The “semantic web” has even been rebranded as “linked data”. But in fact, in the life sciences many of us have found OWL to be incredibly useful, and being able to clearly state what your graphs mean has clear advantages.

OK, but then why not just use what we have already? OWL-DL already has a mapping to RDF, and any document in RDF is automatically an RDF* document, so problem solved?

Not quite. There are two issues with continuing the status quo in the world of RDF* and PGs:

  1. The mapping of OWL to RDF can be incredibly verbose and leads to unintuitive graphs that inhibit effective computation.
  2. OWL is not the only fruit. It is great for the use cases it was designed for, but there are other modes of inference and other frameworks beyond first-order logic that people care about.

Issues with existing OWL to RDF mapping

Let’s face it, the existing mapping is pretty ugly. This is especially true for life-science ontologies, which are typically conceived of as relational graphs, where edges are formally SubClassOf-SomeValuesFrom axioms. See the post on obo json for more discussion of this. The basic idea is that in OWL, object properties connect individuals (e.g. my left thumb is connected to my left hand via part-of). In contrast, classes are not connected directly via object properties; rather, they are related via subClassOf and class expressions. It is not meaningful in OWL to say “finger (class) part_of hand (class)”. Instead we seek to say “all instances of finger are part_of some x, where x is an instance of a hand”. In Manchester Syntax this has the compact form:

Finger SubClassOf part_of some Hand

This is translated to RDF as

:Finger rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty :part_of ;
    owl:someValuesFrom :Hand
] .

As an example, consider 3 classes in an anatomy ontology, finger, hand, and forelimb, all connected via part-ofs (i.e. every finger is part of some hand, and every hand is part of some forelimb). This looks sensible when we use a native OWL syntax, but when we encode it as RDF we get a monstrosity:


Fig 2. (A) Two axioms written in Manchester Syntax describing the anatomical relationships between three structures. (B) The corresponding RDF, following the official OWL to RDF mapping, with four triples per existential axiom and the introduction of two blank nodes. (C) How the axioms are conceptualized by ontology developers and domain experts, and how most browsers render them. The disconnect between B and C is an enduring source of confusion for many.

This ugliness was not the result of some kind of perverse decision by the designers of the OWL specs; it is a necessary consequence of the existing stack, which bottoms out at triples as the atomic semantic unit.

In fact, in practice many people employ some kind of simplification, bypassing the official mapping and storing the edges as simple triples, even though this is semantically invalid. We can see this, for example, in how Wikidata loads OBO ontologies into its triplestore. This can cause confusion: for example, Wikidata stores reciprocal inverse axioms (e.g. part-of and has-part), even though inverses are meaningless when collapsed to simple triples.

I would argue there is an implicit contract when we say we are using a graph-based formalism: the structures in our model should correspond to the kinds of graphs we draw on whiteboards when representing an ontology or knowledge graph, and to the kinds of graphs that are useful for computation. The current mapping violates that implicit contract, usually causing a lot of confusion.

It also has pragmatic implications. Writing a SPARQL query that traverses a graph like the one in (B), following certain edge types but not others (one of the most common uses of ontologies in bioinformatics), is a horrendous task!
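
To make this concrete, here is a minimal sketch using rdflib, with example classes under an assumed ex: namespace. Over the official mapping, even a single part-of hop has to match the blank restriction node and check owl:onProperty, and because property paths cannot inspect those intermediate nodes there is no clean transitive form; over an edge-per-axiom graph, the same question is a one-line property path.

from rdflib import Graph

VERBOSE_TTL = """
@prefix ex: <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:finger rdfs:subClassOf [ a owl:Restriction ;
    owl:onProperty ex:part_of ; owl:someValuesFrom ex:hand ] .
ex:hand rdfs:subClassOf [ a owl:Restriction ;
    owl:onProperty ex:part_of ; owl:someValuesFrom ex:forelimb ] .
"""

# One existential part-of hop over the official mapping: the blank restriction
# node must be matched explicitly and owl:onProperty checked at every step.
VERBOSE_QUERY = """
PREFIX ex: <http://example.org/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?part ?whole WHERE {
  ?part rdfs:subClassOf [ a owl:Restriction ;
      owl:onProperty ex:part_of ; owl:someValuesFrom ?whole ] .
}
"""

# The same question over a graph with one edge per axiom, where '+' gives the
# transitive closure for free (this would be run against the collapsed graph).
SIMPLE_QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?part ?whole WHERE { ?part ex:part_of+ ?whole . }
"""

g = Graph()
g.parse(data=VERBOSE_TTL, format="turtle")
for part, whole in g.query(VERBOSE_QUERY):
    print(part, "part_of-some", whole)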

OWL is not the only knowledge representation language

The other reason not to stick with the status quo for semantics for RDF* and PGs is that we may want to go beyond OWL.

OWL is fantastic for the things it was designed for. In the life sciences, it is vital for automatic classification and semantic validation of large ontologies (see half of the posts in this blog site). It is incredibly useful for checking the biological validity of complex instance graphs against our encoded knowledge of the world.

However, not everything we want to say in a Knowledge Graph (KG) can be stated directly in OWL. OWL-DL is based on a fragment of first order logic (FOL); there are certainly things not in that fragment that are useful, but often we have to go outside strict FOL altogether. Much of biological knowledge is contextual and probabilistic. A lot of what we want to say is quantitative rather than qualitative.

For example, when relating a disease to a phenotype (both of which are conventionally modeled as classes, and thus not directly linkable via a property in OWL), it is usually false to say “every person with this disease has this phenotype”. We can invent all kinds of fudges for this – BFO has the concept of a disposition, but this is just a hack for not being able to state probabilistic or quantitative knowledge more directly.

A proposed path forward for semantics in Property Graphs and RDF*

RDF* provides us with an astoundingly obvious way to encode at least some fragment of OWL in a more intuitive way that preserves the natural graph-like encoding of knowledge. Rather than introducing additional blank nodes, as in the current OWL to RDF mapping, we simply push the semantics onto the edge!

Here is an example of how this might look in RDF* for the axioms in the figure above:

<<:finger :part-of :hand>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom .
<<:hand :part-of :forelimb>> owlstar:hasInterpretation owlstar:SubClassOfSomeValuesFrom .

I am assuming the existence of a vocabulary called owlstar here – more on that in a moment.

In any native visualization of RDF* this will end up looking like Fig 2C, with the semantics adorning the edges where they belong. For example:


Proposed owlstar mapping of an OWL subclass restriction. This is clearly simpler than the corresponding graph fragment in Fig 2B. While the edge properties (in square brackets) may be too abstract to show an end user (or even a bioinformatician performing graph-theoretic operations), the core edge is meaningful and corresponds to how an anatomist or ordinary person might think of the relationship.

Maybe this is all pretty obvious, and many people loading bio-ontologies into either Neo4j or RDF end up treating edges as edges anyway. You can see the mapping we use in our SciGraph Neo4J OWL Loader, which is used by both Monarch Initiative and NIF Standard projects. The OLS Neo4J representation is similar. Pretty much anyone who has loaded the GO into a graph database has done the same thing, ignoring the OWL to RDF mapping. The same goes for the current wave of Knowledge Graph embedding based machine learning approaches, which typically embed a simpler graphical representation.

So problem solved? Unfortunately, everyone is doing this differently and essentially throwing OWL out altogether. We lack a standard way to map OWL into Property Graphs, so everyone invents their own. The same is true for people using RDF stores: they often have their own custom, less verbose OWL mapping. In some cases this is semantically dubious, as is the case for the Wikidata mapping.

The simplest thing would be for everyone to rally around a common standard mapping, and RDF* seems a good foundation. Even if you are using plain RDF, you could follow this standard and choose to map edge properties to reified nodes, or to named graphs, or to the Wikidata model. And if you are using a graph database like Neo4J, there is a straightforward mapping to edge properties.

I will call this mapping OWL*, and it may look something like this:

  • RDF*: <<?c ?p ?d>> owlstar:interpretation owlstar:subClassOfSomeValuesFrom .
    OWL interpretation: ?c SubClassOf ?p some ?d
  • RDF*: <<?c ?p ?d>> owlstar:interpretation owlstar:subClassOfQCR ; owlstar:cardinality ?n .
    OWL interpretation: ?c SubClassOf ?p exactly ?n ?d
  • RDF*: <<?c ?p ?d>> owlstar:interpretation owlstar:subClassOfSomeValuesFrom ; owlstar:subjectContextProperty ?cp ; owlstar:subjectContextFiller ?cf .
    OWL interpretation: (?c and ?cp some ?cf) SubClassOf ?p some ?d

Note that the core of each of these mappings is a single edge/triple between class c and class d, with edge label p. The first row is a standard existential restriction, common to many ontologies. The second row is for statements such as ‘hand has part 5 fingers’, which is still essentially a link between a hand concept and a finger concept. The third is for a GCI (General Class Inclusion axiom), an advanced OWL construct which turns out to be quite intuitive and useful at the graph level, where we are essentially contextualizing the statement: e.g. in developmentally normal adult humans (the context), a hand has-part 5 fingers.

When it comes to a complete encoding of all of OWL, there may be decisions to be made as to when to introduce blank nodes versus cramming as much as possible into edge properties (e.g. for logical definitions), but even having a standard way of encoding subclass plus quantified restrictions would be a huge boon.

Bonus: Explicit deferral of semantics where required

Many biological relationships expressed in natural language in forms such as “Lmo-2 binds to Elf-2” or “crocodiles eat wildebeest” can cause formal logical modelers a great deal of trouble. See for example “Lmo-2 interacts with Elf-2” – On the Meaning of Common Statements in Biomedical Literature (also slides), which lays out the different ways these seemingly straightforward statements about classes can be modeled. This is a very impressive and rigorous piece of work (I will have more to say on how this aligns with GO-CAM in a future post), and it ends with a formidable Wall of Logic:


Dense logical axioms proposed by Schulz & Jansen for representing biological interactions

This is all well and good, but when it comes to storing the biological knowledge in a database, the majority of developers are going to expect to see this:


A protein interaction represented as a single edge connecting two nodes, as it is represented in every protein interaction database.

And this is not due to some kind of semantic laziness on their part: representing biological interactions using this graphical formalism (whether we are representing molecular interactions or ecological ones) allows us to take advantage of powerful graph-theoretic algorithms for analyzing the data, which are frankly much more useful than anything we can do with a dense FOL representation.

I am sure this fact is not lost on the authors of the paper who might even regard this as somewhat trivial, but the point is that right now we don’t have a standard way of serializing more complex semantic expressions into the right graphs. Instead we have two siloed groups, one from a formal perspective producing complex graphs with precise semantics, and the other producing useful graphs with no encoding of semantics.

RDF* gives us the perfect foundation for being able to directly represent the intuitive biological statement in a way that is immediately computationally useful, and to adorn the edges with additional triples that more precisely state the desired semantics, whether it is using the Schulz FOL or something simpler (for example, a simple some-some statement is logically valid, if inferentially weak here).

Beyond FOL

There is no reason to have a single standard for specifying semantics for RDF* and PGs. As hinted in the initial example, there could be a vocabulary or series of vocabularies for making probabilistic assertions, either as simple assignments of probabilities or frequencies, e.g.

<<:RhinovirusInfection :has-symptom :RunnyNose>> probstar:hasFrequency 0.75 .

or more complex statements involving conditional probabilities between multiple nodes (e.g. probability of symptom given disease and age of patient), allowing encoding of ontological Bayesian networks and Markov networks.

We could also represent contextual knowledge, using a ‘that’ construct borrowed from IKL:

<<:clark_kent owl:sameAs :superman>> a ikl:that ; :believed-by :lois_lane .

which could be visually represented as:


Lois Lane believes Clark Kent is Superman. Here an edge has a link to another node, rather than simply to literals. Note that while this is possible in RDF*, in some graph databases such as Neo4j edge properties cannot point directly to nodes, only indirectly through key properties. In other hypergraph-based graph DBs a direct link is possible.

Proposed Approach

What I propose is a series of lightweight vocabularies such as my proposed OWL*, accompanied by mapping tables such as the one above. I am not sure whether the W3C process is the right route, or something more bottom-up. These would work directly in concert with RDF*, and extensions could easily be provided to work with the various ways to PG-ify RDF, e.g. reification, the Wikidata model, or named graphs.

The same standard could work for any PG database such as Neo4J. Of course, here we have the challenge of how best to encode IRIs in a framework that does not natively support them, but this is an orthogonal problem.

All of this would be non-invasive and unobtrusive to people already working with these technologies, as the underlying structures used to encode knowledge would likely not change, beyond additional adornment of edges. A perfect stealth standard!

It would help to have some basic tooling around this. I think the following would be straightforward and potentially very useful:

  • Implementation of the OWL* mapping of existing OWL documents to RDF* in tooling – maybe in the OWLAPI, although we are increasingly looking to Python for our tooling (stay tuned to hear more on funowl).
  • This could also bypass RDF* and go directly to some PG representation, e.g. networkx in Python, or store directly into Neo4J (a small illustrative example follows this list).
  • Some kind of RDF* to Neo4J and SPARQL* to OpenCypher converter [which I assume will happen independently of anything proposed here]
  • An OWL-RL* reasoner that could demonstrate simple yet powerful and useful queries, e.g. property chaining in Wikidata
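
To give a flavour of the first two bullets, here is a small illustrative example using rdflib and networkx. It only handles SubClassOf-someValuesFrom axioms, and it targets the hypothetical owlstar vocabulary proposed above; it is a sketch, not the SciGraph loader or any other existing tool.

import networkx as nx
from rdflib import Graph, RDF, RDFS, OWL

def owl_to_property_graph(path: str) -> nx.MultiDiGraph:
    """Collapse SubClassOf-someValuesFrom axioms into single property-graph edges."""
    g = Graph()
    g.parse(path, format="xml")  # most published OWL files are RDF/XML
    pg = nx.MultiDiGraph()
    for cls, restriction in g.subject_objects(RDFS.subClassOf):
        if (restriction, RDF.type, OWL.Restriction) in g:
            prop = g.value(restriction, OWL.onProperty)
            filler = g.value(restriction, OWL.someValuesFrom)
            if prop is not None and filler is not None:
                # One edge per axiom, with the semantics pushed onto the edge.
                pg.add_edge(str(cls), str(filler), key=str(prop),
                            interpretation="owlstar:SubClassOfSomeValuesFrom")
    return pg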

A rough sketch of this approach was posted on public-owl-dev to not much fanfare, but, umm, this may not be the right forum for this.

Glossing over the details

For a post about semantics, I am glossing over the semantics a bit, at least from a formal computer science perspective. Yes, of course there are some difficult details to be worked out regarding the extent to which existing RDF semantics can be layered on, and how to make these proposed layers compatible. I'm omitting details here to try and give as simple an overview as possible. It also has to be said that one has to be pragmatic here. People are already making property graphs and RDF graphs conforming to the simple structures I'm describing; just look at Wikidata and how it handles (or rather, ignores) OWL. I'm just the messenger here, not some semantic anarchist trying to blow things up. Rather than worrying about whether such and such a fragment of FOL is decidable (which, let's face it, is not that useful a property in practice), let's instead focus on coming up with pragmatic standards that are compatible with the way people are already using the technology!

OntoTip: Write simple, concise, clear, operational textual definitions

This is a post in a series of tips on ontology development, see the parent post for more details.

Ontologies contain both textual definitions (aimed primarily at humans) and logical definitions (aimed primarily at machines). There is broad agreement that textual definitions are highly important (they are an OBO principle), and the utility of logical definitions has been shown for both ontology creation/maintenance (see previous post) as well as for analytic applications. However, there has been insufficient attention paid to the crafting of definitions, and to addressing questions such as how textual and logical definitions inter-relate, leading to a lot of inconsistent practice across OBO ontologies. 


Text definitions are for consumption by biocurators and domain scientists; logical definitions are for machines. The logical definition here is shown in OWL Manchester syntax, with units written as human-readable labels in quotes. Note the correspondence between the logical and textual definitions.

Two people who have thought deeply about this are Selja Seppälä and Alan Ruttenberg. They organized the  2016 International Workshop on Definitions in Ontologies (IWOOD 2016), and I will lift a quote directly from the website here:

Definitions of terms in ontologies serve a number of purposes. For example, logical definitions allow reasoners to assist in and verify classification, lessening the development burden and enabling expressive queries. Natural language (text) definitions allow humans to understand the meaning of classes, and can help ameliorate low inter-annotator agreement. Good definitions allow for non-experts and experts in adjacent disciplines to understand unfamiliar terms making it possible to confidently use terms from external ontologies, facilitating data integration. 

Despite the importance of definitions in ontologies, developers often have little if any training in writing definitions and axioms, as shown in Selja Seppälä and Alan Ruttenberg, Survey on defining practices in ontologies: Report, July 2013. This leads to varying definition practices and inconsistent definition quality. Worse, textual and logical definitions are often left out of ontologies altogether. 

I would also state that poorly constructed textual definitions can have severe long term ramifications. They can introduce cryptic ambiguities or misunderstandings that may not be uncovered for years, at which point they necessitate expensive ontology repair and re-curation efforts. My intent in this post is not to try and impose my own stylistic quirks on everyone else, but to improve the quality of engineering in ontologies, and to improve the lives of curators using definitions for their daily work.

There is an excellent follow-up paper Guidelines for writing definitions in ontologies by Seppälä, Smith, and Ruttenberg (henceforth referred to as the SRS paper), which should be required reading for anyone who is involved in building ontologies. The authors provide a series of guidelines based on their combined ontology development expertise and empirical work on surveying usage and attitudes.

While there is potentially an aspect of personal taste and stylistic preference in crafting text, I think that their guidelines are eminently sensible and deserve further exposure and adoption. I recommend reading the full paper. Here I will look at a subset of the guidelines and give my own informal take on them. In their paper, SRS use a numbering system for their guidelines; I prefix their numbering with S, and will go through them in a different order.

I have transcribed the guidelines to a table here, with the guidelines I discuss here in bold:

S1 Conform to conventions
S1.1 Harmonize definitions
S2 Principles of good practice
S3 Use the genus differentia form
S3.1 Include exactly one genus
S3.1.1 Use the genus proximus
S3.1.2 Avoid plurals
S3.1.3 Avoid conjunctions and disjunctions
S3.1.4 Avoid categorizers
S4 Avoid use/mention confusion
S5 Include necessary, and whenever possible, jointly sufficient conditions
S5.1 Avoid encyclopedia information
S5.2 Avoid negative terms
S5.3 Avoid definitions by extension
S6 Adjust the scope
S6.1 Definition should be neither too broad nor too narrow
S6.2 Define only one thing with a single textual definition
S7 Avoid circularity
S8 Include jointly satisfiable features
S9 Use appropriate degree of generality
S9.1 Avoid generalizing expressions
S9.2 Avoid examples and lists
S9.3 Avoid indexical and dialectic terms
S9.4 Avoid subjective and evaluative statements
S10 Define abbreviations and acronyms
S11 Match text and logical definitions
S11.1 Proofread definitions

Concisely state necessary and sufficient conditions, cut the chit-chat


Listen to The Clash: cut the c**p

Combining S6.1 “A definition should be neither too broad nor too narrow” with S9.4 “avoid subjective and evaluative statements”, I would emphasize that textual definitions should concisely encapsulate necessary and sufficient conditions, avoiding weasel words, irrelevant verbiage, chit-chat and random blethering. This makes it easier for a reader to home in on the intended meaning of the class. It also encourages a standard style (S1), which can make it easier for others to write definitions when creating new classes. And it makes it easier to be consistent with the logical definition, when one is provided (S11; see below).

SRS provide this example under S9.4:

cranberry bean: Also called shell bean or shellout, and known as borlotti bean in Italy, the cranberry bean has a large, knobby beige pod splotched with red. The beans inside are cream- colored with red streaks and have a delicious nutlike flavor. Cranberry beans must be shelled before cooking. Heat diminishes their beautiful red color. They’re available fresh in the summer and dried throughout the year (FOODON_03411186)

While this text contains potentially useful information, it is not a good operational definition: it lacks easy-to-apply, objective criteria for determining what is and what is not a member of this class.

If you need to include discursive text, use either the definition gloss or a separate description field. The ‘gloss’ is the part of the text definition that comes after the first period/full-stop. A common practice in the GO is to recapitulate the definition of the differentia in the gloss. For example, the definition for ‘ectoderm development’ is

“The process whose specific outcome is the progression of the ectoderm over time, from its formation to the mature structure. In animal embryos, the ectoderm is the outer germ layer of the embryo, formed during gastrulation.”

(the embedded ‘ectoderm’ definition is the second sentence, i.e. the gloss)

This suffers from some problems: it violates DRY (if the wording of the definition of ectoderm changes, then the wording of the definition of ‘ectoderm development’ must change too). However, it provides utility, as users do not have to traverse the elements of the OWL definition to get the bigger picture. And it is marginally easier to semi-automatically update the gloss than to fix the situation where the redundant information permeates the core text definition.

When the conventions for a particular ontology allow for gloss, it is important to be consistent about how this is used, and to include only necessary and sufficient conditions before the period. Recently in GO we were puzzling over what was included and excluded in the following definition:

An apical plasma membrane part that forms a narrow enfolded luminal membrane channel, lined with numerous microvilli, that appears to extend into the cytoplasm of the cell. A specialized network of intracellular canaliculi is a characteristic feature of parietal cells of the gastric mucosa in vertebrates

It is not clear if parietal cells are included as an exemplar, or if this is intended as a necessary condition. S5.1 “avoid encyclopedic information” is excellent advice. This recommends putting examples of usage in a dedicated field. Unfortunately the practice of including examples in definitions is common because many curation tools limit which fields are shown, and examples can help curators immensely. I would therefore compromise on this advice and say that IF examples are to be included in the definition field, THEN this MUST be included in the gloss (after the necessary and sufficient conditions, separated by a period), AND it should be clearly indicated as an example. GO uses the string “An example of this {process,component,…} is found in …” to indicate an example.

Genus-differentia definitions are your friend

(S3)


In the introduction, SRS define a ‘classic definition’ as one following the genus-differentia style, i.e. “a G that D”. The precise lexical structure can be modified for readability, but the important part is to state the differentiating characteristics relative to a generic superclass.

The example in the paper is the Uberon definition of skeletal ligament: “Dense regular connective tissue connecting two or more adjacent skeletal elements”. Here the genus is “dense regular connective tissue” (which should be the name of a superclass in the ontology, though not necessarily the direct parent post-reasoning) and the differentiating characteristic is the property of connecting two or more adjacent skeletal elements (which is also expressed via relationships in the ontology). As it happens, this definition violates one of the other principles, as we shall see later.

I agree enthusiastically with S3 “Use the genus-differentia form”. (Note that this should not be confused with the elevation of single inheritance to a desired property in released ontologies; see this post.)

The genus-differentia definition should be both necessary (i.e. the genus and the characteristics hold for all instances of the class) and sufficient (i.e. anything that satisfies the genus and characteristics must be an instance of the class).

Genus-differentia definitions encourage modularity and reuse. We can construct an ontology in a modular fashion, reusing simpler concepts to fashion more complex concepts.

Genus-differentia form is an excellent way to ensure definitions are operational. The set of all genus-differentia definitions forms a decision tree: we can work up or down the tree to determine whether an observation falls into a given ontology class.
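
As a minimal illustration (hypothetical observation fields, nothing from any real pipeline), the skeletal ligament definition above reads directly as an operational test:

def is_skeletal_ligament(observation: dict) -> bool:
    """Genus first, then the differentia; each step is an objective check."""
    # Genus: dense regular connective tissue
    if not observation.get("is_dense_regular_connective_tissue", False):
        return False
    # Differentia: connects two or more adjacent skeletal elements
    return len(observation.get("connected_skeletal_elements", [])) >= 2

print(is_skeletal_ligament({"is_dense_regular_connective_tissue": True,
                            "connected_skeletal_elements": ["radius", "ulna"]}))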

I also agree with S3.1 “include exactly one genus”. SRS give the example in OBI of

recombinant vector: “A recombinant vector is created by a recombinant vector cloning process”

which omits a genus (it could be argued that a more serious issue is the practice of defining an object in terms of its creation process rather than vice versa).

In fact, omission of a genus is often observed in logical definitions too; it is usually the result of an error, and gives unintended results in reasoning. I chose the following example from CLO (reported here):

http://purl.obolibrary.org/obo/CLO_0000266 immortal uterine cervix-derived cell line cell

This is wrong because the logical definition omits the genus, so a reasoner will classify anything derived from a cervix as being a cell line!

In a rare disagreement with SRS, I have a slight issue with S3.1.1 “use the genus proximus”, i.e. use the closest parent term, but I cover this in a future post. Using the closest parent can lead to redundancy and violations of DRY. 

Avoid indexicals (S9.3)

Quoting SRS’ wording for S9.3:

Avoid indexical and deictic terms, such as ‘today’, ‘here’, and ‘this’ when they refer to (the context of ) the author of the definition or the resource itself. Such expressions often indicate the presence of a non-defining feature or a case of use/mention confusion. Most of the times, the definition can be edited and rephrased in a more general way

Here is a bad disease definition for a fictional disease (adapted from a real example): “A recently discovered disease that affects the anterior diplodocus organ…”. Don’t write definitions like this. This is obviously bad as it will become outdated and your ontology will look sad. If the date of discovery is important, include an annotation assertion for date of discovery (or better yet, a field for originating publication, which entails a date). But it’s more likely this is unnecessary verbiage that detracts from the business of precisely communicating the meaning of the class (S9.4).

Conform to conventions (S1)

As well as following natural language conventions and conventions of the domain of the ontology, it’s good to follow conventions, if not across ontologies, at least within the same ontology.

Do not replicate the name of the class in the definition

An example is a hypothetical definition for ‘nucleus’

A nucleus is a membrane-bounded organelle that …

This violates DRY and is not robust to changes in the name. Under S1.1 this is stated as “limiting the definition to the definiens”, alternatively stated as “avoid including the definiendum and copula”. If you really must include the name (the definiendum), do so consistently throughout the ontology rather than ad hoc. But I strongly recommend not doing so, and instead starting the text of the definition with the string “A <genus> that …”.

Here is another bad made-up definition for a fictional disease (based on real examples):

Spattergroit (also known as contagious purple pustulitis) is a highly contagious disease caused by…”.

Including a synonym in the definition violates DRY, and will lead to inconsistency if the synonym becomes a class in its own right. Remember, we are not writing encyclopedic descriptions, but ontology definitions. Information such as synonyms can go in dedicated fields (where they can be used computationally, and presented appropriately to the user).

S11 Match Textual and Logical Definitions

The OWL definition (aka logical definition, aka equivalence axiom), when it exists, should correspond in some broad sense to the text definition. This does not mean that it should be a literal transcription of the OWL. On the contrary, you should always avoid strange computerese conventions in text intended for humans (this includes the use of IDs in text, connecting_words_with_underscoresOrCamelCase, use of odd characters, as well as strange unwieldy grammatical forms; see S1). It does mean that if your OWL definition veers wildly from your text then you have a bad smell you need to get rid of before visitors come around.

If your OWL definition doesn’t match your text definition, it is often a sign you are writing overly clever complex Boolean logic OWL definitions that don’t correspond to how domain scientists think about the class [covered in a future post]. Or maybe you are over-axiomatizing, and you should drop your equivalence axiom since on examination it’s not right (see the over-axiomatizing principle).

SRS provide one positive example, but no negative examples. The positive example is from IDO:


Positive example from IDO: bacteremia: “An infection that has as part bacteria located in the blood.” This matches the logical definition: infection and (has_part some (infectious agent and Bacteria and (located_in some blood)))

Unfortunately, there are many cases where text and logical definitions deviate. An example reported for OBI is oral administration:

“The administration of a substance into the mouth of an organism”

The text definition above is considerably different from the logical one:

EquivalentTo (realizes some material to be added role) and (realizes some (target of material addition role and (role of some mouth)))

Use of DOSDPs (Dead Simple OWL Design Patterns) can help here, as a standard textual definition can be generated automatically for classes with OWL definitions. Another useful thing would be a tool that could spot cases where the text definition and the logical definition have veered wildly apart.
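
Such a check could start out very simple; here is a minimal sketch (the threshold is arbitrary, and the generated string just paraphrases the OBI example above) using Python's difflib to flag classes whose stored text definition has drifted far from a pattern-generated one:

from difflib import SequenceMatcher

def definition_drift(stored: str, generated: str) -> float:
    """Return similarity in [0, 1]; low values suggest the definitions have diverged."""
    return SequenceMatcher(None, stored.lower(), generated.lower()).ratio()

stored_def = "The administration of a substance into the mouth of an organism."
generated_def = ("A process that realizes a 'material to be added role' and a "
                 "'target of material addition role' borne by some mouth.")

if definition_drift(stored_def, generated_def) < 0.5:
    print("Text and logical definitions may have diverged; flag for review.")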

Summary

I was able to write this post by cribbing from the SRS paper (Seppälä et al.), which I strongly recommend reading. Even if you don’t agree with everything in the paper or in my own take, I think it’s important that the ontology community discusses some of these guidelines and reaches some kind of consensus on which principles to apply when.

Of course, there will always be an element of subjectivity and stylistic preference that will be harder to agree on. When making recommendations here there is the danger of being perceived as the ‘ontology police’. But I think there is a core set of common-sense principles that help with making ontologies more usable, consistent, and maintainable. My own experience strongly suggests that when this advice is not heeded, we end up with costly misannotation due to differing interpretations of terms, and many other issues.

I would like OBO to play more of a role in the process of coming up with these guidelines, and on evaluating their usage in existing ontologies. Stay tuned for more on this, and please provide feedback on what you think!