Debugging Ontologies using OWL Reasoning, Part 3: robot explain

This is the 3rd part in a series. See part two.

In the first part of this series, I covered the use of disjointness axioms to make it easier to detect logical errors in your ontologies, and how you could use robot reason as part of an ontology release workflow to avoid accidentally releasing incoherent ontologies.

In the second part I covered unintentional inference of equivalence axioms – something that is not inherently incoherent, yet is usually unintended, and how to configure robot to catch these.
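As a quick reminder of that configuration, robot reason can be told to fail whenever the reasoner infers an equivalence between named classes. A minimal sketch (the ontology filename is hypothetical; check robot reason --help for the flags available in your version):

```shell
# Fail if the reasoner infers that any two named classes are
# equivalent -- usually a sign of an unintended axiom.
robot reason --reasoner ELK \
  --equivalent-classes-allowed none \
  --input my-ontology.owl \
  --output reasoned.owl
```

The --equivalent-classes-allowed option also accepts asserted-only, which permits equivalence axioms you stated explicitly while still rejecting novel inferred ones.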

In both cases, the standard operating procedure was:

  • Detect the incoherency using robot
  • Diagnose the incoherency using the “explain” feature in Protege
  • Repair the problem by removing or changing offending axioms, either in your own ontology or, if you are unlucky, upstream, in which case you need to coordinate with the developer of the upstream ontology
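The detect step above is typically just a robot reason call wired into the release workflow; robot reason fails by default when any class is unsatisfiable, which blocks an incoherent release. A minimal sketch (filenames are hypothetical):

```shell
# robot reason exits with an error if the ontology is incoherent,
# so a release pipeline built on it halts before publishing.
robot reason --reasoner HermiT \
  --input my-ontology.owl \
  --output release/my-ontology-reasoned.owl
```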

In practice, repairing these issues can be very hard. This is compounded if the ontology uses complex OWL axioms that are difficult to reason over mentally, involving deep nesting and unusual features, or if the ontology has ad-hoc axioms that do not conform to design patterns. Sometimes even experienced ontology developers can be confounded by the long, complex chains of axioms in explanations.

But never fear! Help is at hand: there are many in the OBO community who can help! I always recommend making an issue in GitHub as soon as you detect an incoherency. However, you want to avoid having other people duplicate the work of diagnosing it. They may need to clone your repo, fire up Protege, wait for the reasoner to sync, and so on. You can help people help you by providing as much information up-front as possible.

Previously my recommendation was to paste a screenshot of the Protege explanation in the ticket. This helps a lot as often I can look at one of these and immediately tell what the problem is and how to fix it.

But this was highly imperfect. Screenshots are not searchable via the GitHub search interface, they are not accessible to users of screen readers, and the individual classes in the screenshot are not hyperlinked.

A relatively new feature of robot is the explain command, which allows you to generate explanations without firing up Protege. Furthermore, you can generate explanations in markdown format, and if you paste this markdown directly into a ticket it will render beautifully, with all terms clickable!
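A minimal invocation that writes a markdown explanation for every unsatisfiable class might look like the following (filenames are hypothetical; check robot explain --help for the exact flags in your version):

```shell
# Generate markdown explanations for all unsatisfiable classes,
# ready to paste directly into a GitHub issue.
robot explain --input my-ontology.owl \
  --reasoner ELK \
  -M unsatisfiability \
  --unsatisfiable all \
  --explanation explanation.md
```

Because the output is plain markdown, it is searchable, accessible, and every term renders as a clickable link once pasted into the ticket.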

A recent example was debugging an issue related to fog in ENVO. As someone who lives in the Bay Area, I have a lot of familiarity with fog.

The explanation is rendered as nested lists.

Both the relations (object properties) and classes are hyperlinked, so if you want to find out more about rime just click on it.

In this case the issue was caused by the use of the relation results in formation of with a material entity as its subject, whereas the relation is intended for processes. This was an example of a “cryptic incoherency”: it went undetected because the complete set of RO axioms was not imported into ENVO (I will cover imports and their challenges in a future post).

The robot explain command is quite flexible, as can be seen from the online help. I usually use it set to report all incoherencies (unsatisfiable classes plus inconsistencies). Sometimes if you have an unsatisfiable class high up in the hierarchy (or high up in the existential dependency graph) then all of its subclasses/dependent classes will be unsatisfiable too. In these cases it can help to hone in on the root cause, and the “mode” option can help here.
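To focus on root causes rather than the cascade of downstream unsatisfiable classes, the unsatisfiability mode can be restricted to root classes. A hedged sketch (filename hypothetical; flag values as I understand them from the robot documentation):

```shell
# Explain only the "root" unsatisfiable classes -- those that are not
# unsatisfiable merely because a superclass or dependency is.
robot explain --input my-ontology.owl \
  -M unsatisfiability \
  --unsatisfiable root \
  --explanation roots.md
```

Fixing the handful of root classes usually repairs the whole cascade at once.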

(Image: Golden Fog, San Francisco, via Wikimedia Commons)