JOWO 2019: Tutorials
The Joint Ontology Workshops take place at the Medical University of Graz, September 23-25, 2019.
This fifth edition of the Joint Ontology Workshops (JOWO 2019) includes the following tutorials (click to expand):
DOnEReCA: Data-driven ontology engineering with Relational Concept Analysis
Organizers: Petko Valtchev, Mickael Wajnberg
DOnEReCA website
Data can successfully support ontology engineering tasks such as design or maintenance, assuming it has been properly analyzed to discover possible trends and/or groups. For instance, when an ontology is designed from a relational database (RDB), a first (rough) ontology can be enhanced by the result of a conceptual clustering to reveal missing classes, and even properties, in that ontology. Similarly, when populating an existing ontology with independently created data, one might want to determine how well the data fit the ontology w.r.t. the mapping of resources to ontology classes. This warrants analysis of data descriptions to detect characteristic associations among ontology types, on one hand, and their own descriptions in terms of properties, on the other hand, which might reveal anomalous configurations.
Formal Concept Analysis (FCA) provides a knowledge discovery framework enabling both (1) conceptual clustering of data objects and (2) pattern/association discovery. It was conceived as a mathematical approach to the design of concept hierarchies (called concept lattices) from a set of observations (introduced as object x attribute tables, called formal contexts). FCA, like most data mining approaches, focuses on a single data table, whereas Linked Data are inherently multi-table, a.k.a. multi-relational. Relational Concept Analysis (RCA) is a multi-relational data mining (MRDM) method extending FCA.
To bring the mathematical strength of FCA to the realm of multi-relational data, and hence RDF and Linked Data, RCA admits a set of contexts, i.e. multiple object sorts, as well as binary relations between object sorts. To discover plausible concepts from such datasets, a propositionalization mechanism called scaling is used to refine object descriptions as per input contexts: Description Logic-inspired relational scaling operators replace inter-object links with restriction-like attributes, called relational attributes, that refer to concepts from the range context. Potential cycles in data are dealt with in an iterative fix-point computation that gradually expands the ordinary concept lattices with relational attributes. As RCA fix-point lattices reflect the refined contexts much in the same way as with FCA, clusters and patterns are drawn therefrom by existing FCA methodologies. Cycles are, in turn, resolved by expanding concept descriptions in a minimal fashion.
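The single-table FCA setting underlying RCA can be illustrated with a small sketch: given a formal context (an object x attribute table), every formal concept is a pair (extent, intent) closed under the derivation operators. The objects, attributes, and brute-force enumeration below are illustrative choices, not taken from the tutorial materials.

```python
# Toy Formal Concept Analysis: enumerate all formal concepts of a
# small formal context by closing every subset of objects.
from itertools import chain, combinations

# Formal context: which objects carry which attributes (illustrative data)
context = {
    "duck":    {"flies", "swims", "lays_eggs"},
    "eagle":   {"flies", "lays_eggs"},
    "penguin": {"swims", "lays_eggs"},
    "dog":     {"barks"},
}
attributes = set().union(*context.values())

def intent(objects):
    """Attributes shared by all given objects (empty set -> all attributes)."""
    return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

def extent(attrs):
    """Objects possessing all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (E, I) with intent(E) == I and extent(I) == E.
# Closing every object subset yields exactly the set of formal concepts.
concepts = set()
for objs in chain.from_iterable(combinations(context, r) for r in range(len(context) + 1)):
    i = intent(set(objs))
    e = extent(i)
    concepts.add((frozenset(e), frozenset(i)))

for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(e), "<->", sorted(i))
```

Ordered by extent size, the printed pairs form the concept lattice; RCA then iterates such constructions across several contexts, injecting relational attributes at each step.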
RCA has been applied to practical problems from a wide range of fields such as software engineering, hydroecology, neurology, data interlinking, and linguistics. In this tutorial, we will focus on the way RCA can support various ontology engineering tasks. First, we give the audience an understanding of the mathematical foundations of the RCA method and the algorithms used in the iterative lattice construction. We present existing tools as well as examples of RCA applications from the literature. In the second part, the focus will be on the intricate links between RCA and ontologies. We present a small number of ontology engineering scenarios and show how RCA-based tools support them through proper analysis of the data.
FOUNT: Towards a systematic methodology for foundational ontologies: properties, relations, and truthmaking
Organizers: Nicola Guarino, Giancarlo Guizzardi, Daniele Porello
Formal ontologies are increasingly used in a variety of domains such as AI, Multiagent Systems, Conceptual Modelling, Database Design, NLP and Software Engineering. Ontologies express information about a certain domain in a particular way: they intend to make the modelling choices and the assumptions of the modeller clear, justified, and sharable among the community of users.
Formal ontological analysis aims at eliciting and formalizing the implicit ontological foundations of a body of knowledge, i.e., the nature and structure of the world that justifies such knowledge, in terms of very general categories and relations. For instance, notions like object, property, relation, event, time, space, quality, modality, disposition, and so on, are at the core of formal ontological analysis. Nowadays, these general notions are systematized in foundational ontologies (such as DOLCE, BFO and UFO), which have been constructed by means of a tight confrontation with the literature in linguistics, cognitive science, logic, and analytic philosophy, and provide a well-developed theory for comprehending and justifying the modeller's ontological choices.
In this tutorial, we develop a systematic methodology for identifying ontological commitments by articulating a suitable notion of truthmakers of sentences. We apply this methodology to the identification of the truthmakers of sentences containing unary predicates (properties) and n-ary predicates (relations). We start by relying on the notion of individual quality (common to DOLCE, BFO and UFO), then, by means of the extended treatment of qualities elaborated in UFO, we expand this view towards an account of the specific ontological status of relationships.
MLwO: Semantic similarity and machine learning with ontologies
Organizers: Robert Hoehndorf, Maxat Kulmanov
Tutorial website
Ontologies have long provided a core foundation in the organization of domain knowledge and are widely applied in several domains. With hundreds of ontologies currently available and large volumes of data accessible through ontologies, there are a number of new and exciting opportunities emerging in using ontologies for data analysis and predictive analysis. This tutorial will review existing methods for computational data analysis through ontologies based on semantic similarity and introduce different methods for machine learning with ontologies that were recently developed. We will introduce knowledge graph embeddings that project ontologies (as components of knowledge graphs) into vector spaces, machine learning approaches based on random walks, and model-theoretic approaches for learning with ontologies. The tutorial will include hands-on components using Jupyter notebooks, and participants should bring their own laptop computer.
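As a flavour of the semantic-similarity side of this topic, one of the simplest families of measures compares two ontology classes by the overlap of their ancestor sets in the is-a hierarchy. The tiny hierarchy and the Jaccard measure below are illustrative choices, not the specific methods covered in the tutorial.

```python
# Toy ancestor-based semantic similarity over a small is-a hierarchy.

# child -> set of direct parents (an illustrative is-a DAG)
parents = {
    "poodle": {"dog"},
    "beagle": {"dog"},
    "dog":    {"mammal"},
    "cat":    {"mammal"},
    "mammal": {"animal"},
    "salmon": {"fish"},
    "fish":   {"animal"},
    "animal": set(),
}

def ancestors(term):
    """All ancestors of a term, including the term itself."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def jaccard_sim(a, b):
    """Jaccard overlap of ancestor sets: a simple semantic similarity."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

# Two dog breeds share dog, mammal and animal, so they score higher
# than a dog breed compared with a fish.
print(jaccard_sim("poodle", "beagle"))
print(jaccard_sim("poodle", "salmon"))
```

More refined measures (e.g. information-content-based ones) and the embedding methods discussed in the tutorial can be seen as successively richer replacements for this set-overlap score.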
SNOMED CT Tutorial
Organizers: Stefan Schulz, Yongsheng Gao, Stefan Sabutsch, Nina Sjencic
The international standard SNOMED CT, an ontology-based clinical terminology, is increasingly used to support interoperability in health care. With about 350,000 classes and a rich set of axioms conforming to the OWL EL profile, it is probably the world’s largest ontology. However, many legacy issues prevail, and collaboration with the Applied Ontology community is of great value for quality improvement and ontological well-formedness. This tutorial of 2 x 90 min will present SNOMED CT to the typical audience of JOWO, but is also open for implementers and potential users. It encompasses SNOMED CT’s architectural principles and design patterns, foundational issues like implicit and explicit upper-level assumptions, the handling of epistemic aspects, interfacing with other ontologies, SNOMED CT and natural language, formats and use cases. The tutorial is delivered by Stefan Schulz, Medical University of Graz. He has accompanied the evolution of SNOMED CT during the past 15 years, participated in several projects around SNOMED CT and served the SNOMED organisation (SNOMED International, formerly IHTSDO) in working groups and advisory committees.
TLO: Top Level Ontologies (ISO/IEC 21838)
Organizers: Barry Smith, Michael Gruninger
This tutorial will provide an introduction to ISO/IEC 21838 Top-Level Ontologies, a multi-part international standard currently in the final stages of review. Part 1 of the standard lays down the definition of ‘top-level ontology’ and a statement of the requirements to be satisfied by any ontology claiming to be conformant to this definition. Part 2 documents Basic Formal Ontology (BFO) in light of the requirements stated in Part 1. Proposals are envisaged for further parts, including a DOLCE specification, and a specification of a potential ISO upper-level ontology.
The tutorial will be in three corresponding parts.
Part 1 will describe the ISO standardization process. It will provide a detailed overview of the contents of this standard and of the process to be followed in assessing candidate top-level ontologies to be included as further Parts.
Part 2 will provide an introduction to BFO and an outline of the changes made in BFO as a result of this standardization process. These changes include a Common Logic (CL) formalization of BFO that has been proven consistent, and an OWL formalization of BFO that has been proven to be derivable from BFO-ISO-CL. It will address how BFO deals with attributes of processes, organizations, abstract entities, and capabilities, and conclude with an overview of some of the major applications of BFO in biomedicine, industry and defense.
Part 3 will present draft proposals for further parts, and provide an opportunity for discussion of the issues raised by these proposals and, more generally, by the idea of a top-level ontology as defined in this standard.
See JOWO 2019 workshops and keynotes.
Contact:
JOWO 2019 Chairs: jowo2019@gmail.com
JOWO Steering Committee: jowo.steering@gmail.com