DOGMA


DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab (Semantics Technology and Applications Research Laboratory). It is an internally funded project concerned with the more general aspects of extracting, storing, representing and browsing information.

Methodological roots

DOGMA, as a dialect of the fact-based modelling (FBM) approach, has its roots in database semantics and model theory. It adheres to the fact-based information management methodology and follows the conceptualization and 100% principles.
The DOGMA methodological principles include:
  1. Data independence: the meaning of data shall be decoupled from the data itself.
  2. Interpretation independence: unary or binary fact types shall adhere to a formal interpretation in order to store semantics; lexons themselves do not carry semantics.
  3. Multiple views on and uses of stored conceptualization. An ontology shall be scalable and extensible.
  4. Language neutrality: an ontology shall meet multilingual needs.
  5. Presentation independence: an ontology in DOGMA shall meet users' varied presentation needs. As an FBM dialect, DOGMA supports both graphical notations and textual presentation in a controlled language. Semantic decision tables, for example, are a means to visualize processes in a DOGMA commitment; SDRule-L is used to visualize and publish ontology-based decision support models.
  6. Concepts shall be validated by the stakeholders.
  7. Informal textual definitions shall be provided in case the source of the ontology is missing or incomplete.

Technical introduction

DOGMA is an ontology approach and framework that is not restricted to a particular representation language. The approach has some distinguishing characteristics that set it apart from traditional ontology approaches, such as its grounding in the linguistic representation of knowledge and the methodological separation of the domain conceptualization from the application conceptualization, which is called the ontology double articulation principle. The idea is to enhance the potential for reuse and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple < context, term1, role, co-role, term2 > declaring either:
  1. a taxonomical relationship, e.g. < G, manager, is a, subsumes, person >;
  2. a non-taxonomical relationship, e.g. < G, manager, directs, directed by, company >.
A lexon can be considered, approximately, as a combination of an RDF/OWL triple and its inverse, or as a conceptual graph style relation. The next section elaborates more on the notion of context.
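As a rough illustration, a lexon and its two reading directions could be sketched in Python as follows (a hypothetical representation for exposition only, not part of the DOGMA tooling):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Lexon:
        """A lexon: < context, term1, role, co-role, term2 >."""
        context: str   # abstract context identifier, e.g. "G"
        term1: str
        role: str      # reading direction from term1 to term2
        co_role: str   # inverse reading direction from term2 to term1
        term2: str

    # Taxonomical relationship
    is_a = Lexon("G", "manager", "is a", "subsumes", "person")
    # Non-taxonomical relationship
    directs = Lexon("G", "manager", "directs", "directed by", "company")

    # Both reading directions, analogous to an RDF/OWL triple and its inverse
    print(is_a.term1, is_a.role, is_a.term2)     # manager is a person
    print(is_a.term2, is_a.co_role, is_a.term1)  # person subsumes manager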

Language versus conceptual level

Another distinguishing characteristic of DOGMA is the explicit duality in interpretation between the language level and the conceptual level. The goal of this separation is primarily to disambiguate the lexical representation of the terms in a lexon into concept definitions, i.e. word senses taken from lexical resources such as WordNet. The meaning of the terms in a lexon depends on the context of elicitation.
For example, consider the term “capital”. If this term is elicited from a typewriter manual, it has a different meaning than when it is elicited from a book on marketing. The intuition that a context provides here is the following: a context is an abstract identifier that refers to implicit or tacit assumptions in a domain, and that maps a term to its intended meaning within these assumptions.
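A minimal sketch of such a context-dependent articulation mapping, assuming invented context identifiers and illustrative WordNet-style sense glosses:

    # Hypothetical mapping from (context, term) pairs to concept definitions.
    # The context names and sense glosses below are invented for illustration.
    articulation = {
        ("TypewriterManual", "capital"): "capital (an uppercase letter)",
        ("MarketingBook", "capital"): "capital (wealth in the form of money or assets)",
    }

    def intended_meaning(context: str, term: str) -> str:
        """Resolve a lexical term to its intended concept within a context."""
        return articulation[(context, term)]

    print(intended_meaning("TypewriterManual", "capital"))
    print(intended_meaning("MarketingBook", "capital"))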

Ontology evolution

Ontologies naturally co-evolve with their communities of use. De Leenheer therefore identified a set of primitive operators for changing ontologies. These change primitives are conditional, meaning that their applicability depends on pre- and post-conditions; this guarantees that only valid structures can be built.
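A minimal sketch of a conditional change primitive over a toy in-memory lexon store (the operator names and the checks are illustrative assumptions, not De Leenheer's actual operator catalogue):

    class LexonBase:
        """Toy lexon store used to illustrate conditional change primitives."""

        def __init__(self):
            self.lexons = set()  # (context, term1, role, co_role, term2) tuples

        def add_lexon(self, lexon):
            # Pre-condition: the lexon must not already be present.
            if lexon in self.lexons:
                raise ValueError("pre-condition violated: lexon already present")
            self.lexons.add(lexon)
            # Post-condition: the lexon is now part of the base.
            assert lexon in self.lexons

        def remove_lexon(self, lexon):
            # Pre-condition: the lexon must exist before removal.
            if lexon not in self.lexons:
                raise ValueError("pre-condition violated: lexon not present")
            self.lexons.remove(lexon)
            # Post-condition: the lexon is no longer in the base.
            assert lexon not in self.lexons

    base = LexonBase()
    base.add_lexon(("G", "manager", "is a", "subsumes", "person"))
    base.remove_lexon(("G", "manager", "is a", "subsumes", "person"))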

Context dependency types

De Leenheer and de Moor distinguished four key characteristics of context:
  1. a context packages related knowledge: it defines part of the knowledge of a particular domain,
  2. it disambiguates the lexical representation of concepts and relationships by distinguishing between language level and conceptual level,
  3. it defines context dependencies between different ontological contexts and
  4. contexts can be embedded or linked, in the sense that statements about contexts are themselves in context.
Based on this, they identified three different types of context dependencies within one ontology and between different ontologies: articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory, would be a specialisation dependency for which the dependency constraint is equivalent to the conditions for CG-specialisation.
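As a rough sketch, a specialisation dependency constraint in this spirit could be checked against a taxonomy derived from "is a" lexons; the dependency type names come from the text, while the subsumption test and the example terms are illustrative assumptions:

    from enum import Enum

    class DependencyType(Enum):
        ARTICULATION = "articulation"
        APPLICATION = "application"
        SPECIALISATION = "specialisation"

    # Toy taxonomy derived from "is a" lexons: specific term -> more general term.
    taxonomy = {"manager": "person", "company": "organisation"}

    def subsumes(general, specific):
        """True if `general` subsumes `specific` in the toy taxonomy."""
        while specific is not None:
            if specific == general:
                return True
            specific = taxonomy.get(specific)
        return False

    def valid_specialisation(general_lexon, specific_lexon):
        """Constraint in the spirit of CG-specialisation: each term of the
        dependent (specific) lexon must be subsumed by the corresponding
        term of the general lexon."""
        _, g_t1, _, _, g_t2 = general_lexon
        _, s_t1, _, _, s_t2 = specific_lexon
        return subsumes(g_t1, s_t1) and subsumes(g_t2, s_t2)

    general = ("Domain", "person", "works for", "employs", "organisation")
    specific = ("Application", "manager", "works for", "employs", "company")
    print(DependencyType.SPECIALISATION, valid_specialisation(general, specific))  # True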
Context dependencies provide a better understanding of where knowledge elements reside and how they depend on one another, and consequently make negotiation and application less vulnerable to ambiguity, and hence more practical.