Ontology alignment

Ontology alignment, or ontology matching, is the process of determining correspondences between concepts. A set of correspondences is also called an alignment. The phrase takes on slightly different meanings in computer science, cognitive science, and philosophy.

Computer Science

For computer scientists, concepts are expressed as labels for data. Historically, the need for ontology alignment arose out of the need to integrate heterogeneous databases, developed independently and thus each having their own data vocabulary. In the Semantic Web context, where many actors provide their own ontologies, ontology matching has taken on a critical role in helping heterogeneous resources to interoperate. Ontology alignment tools find classes of data that are "semantically equivalent", for example, "Truck" and "Lorry". The classes are not necessarily logically identical. According to Shvaiko and Euzenat (2005), there are three major dimensions for similarity: syntactic, external, and semantic. Coincidentally, they roughly correspond to the dimensions identified by cognitive scientists below. A number of tools and frameworks have been developed for aligning ontologies, some with inspiration from cognitive science and some independently.
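
As a concrete illustration of the syntactic dimension only, the minimal Python sketch below scores two class labels by string comparison; the labels and the scoring choice are illustrative assumptions, not any particular tool's method.

    from difflib import SequenceMatcher

    def syntactic_similarity(label_a: str, label_b: str) -> float:
        """Score two class labels in [0, 1] by string similarity alone.

        This captures only the syntactic dimension: "Truck" and "Lorry"
        score low here even though they are semantically equivalent,
        which is why matchers also use external and semantic evidence.
        """
        return SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()

    print(syntactic_similarity("Truck", "TruckVehicle"))  # higher: shared substring
    print(syntactic_similarity("Truck", "Lorry"))         # low: synonyms, little overlap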

Ontology alignment tools have generally been developed to operate on database schemas, XML schemas, taxonomies, formal languages, entity-relationship models, dictionaries, and other label frameworks. These are usually converted to a graph representation before being matched. Since the emergence of the Semantic Web, such graphs can be represented in the Resource Description Framework family of languages by triples of the form <subject, predicate, object>, as illustrated in Notation 3 syntax.
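
As an illustration, the following sketch uses the Python rdflib library to load a small Notation 3 fragment into such a triple graph; the example.org namespaces and the Truck/Lorry classes are hypothetical.

    from rdflib import Graph

    # Two tiny "ontologies" as Notation 3: each declares one class.
    n3_data = """
    @prefix ex1: <http://example.org/onto1#> .
    @prefix ex2: <http://example.org/onto2#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex1:Truck a rdfs:Class .
    ex2:Lorry a rdfs:Class .
    """

    g = Graph()
    g.parse(data=n3_data, format="n3")

    # Every statement is now available as a <subject, predicate, object> triple.
    for subject, predicate, obj in g:
        print(subject, predicate, obj)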

In this context, aligning ontologies is sometimes referred to as "ontology matching".

More Formally

Given two ontologies i = ⟨C_i, R_i, I_i, A_i⟩ and j = ⟨C_j, R_j, I_j, A_j⟩, we can define different types of (inter-ontology) relationships among their terms. Such relationships will be called, collectively, alignments, and they can be categorized along several dimensions:

  • similarity vs. logic: this is the difference between matchings, which predicate the similarity of ontology terms, and mappings, which are logical axioms, typically expressing logical equivalence or inclusion among ontology terms.
  • atomic vs. complex: whether the alignments considered are one-to-one, or can involve more terms in a query-like formulation (e.g., LAV/GAV mappings).
  • homogeneous vs. heterogeneous: do the alignments relate only terms of the same type (e.g., classes to classes, individuals to individuals, etc.), or is heterogeneity allowed in the relationship?
  • type of alignment: the semantics associated with an alignment. It can be subsumption, equivalence, disjointness, part-of, or any user-specified relationship.

Subsumption, atomic, homogeneous alignments are the building blocks for obtaining richer alignments, and they have a well-defined semantics in every description logic. Ontology matching and mapping can now be introduced more formally.

An atomic homogeneous matching is an alignment that carries a similarity degree s ∈ [0, 1] describing the similarity of two terms of the input ontologies i and j. Matchings can be either computed, by means of heuristic algorithms, or inferred from other matchings.

Formally, we can say that a matching is a triple m = ⟨t_i, t_j, s⟩, where t_i and t_j are homogeneous ontology terms and s is the similarity degree of m. A (subsumption, homogeneous, atomic) mapping is defined as a pair μ = ⟨t_i, t_j⟩, where t_i and t_j are homogeneous ontology terms.
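
Rendered as code, these definitions might look as follows; the threshold rule for promoting matchings to mappings is a common heuristic and an assumption here, not part of the definition itself.

    from typing import NamedTuple, List

    class Matching(NamedTuple):
        """A matching <t_i, t_j, s>: two homogeneous terms plus a similarity degree."""
        t_i: str
        t_j: str
        s: float  # similarity degree in [0, 1]

    class Mapping(NamedTuple):
        """A (subsumption, homogeneous, atomic) mapping <t_i, t_j>."""
        t_i: str
        t_j: str

    def mappings_from_matchings(matchings: List[Matching],
                                threshold: float = 0.8) -> List[Mapping]:
        """Heuristic: promote a matching to a logical mapping when its
        similarity degree clears a confidence threshold."""
        return [Mapping(m.t_i, m.t_j) for m in matchings if m.s >= threshold]

    matchings = [Matching("Truck", "Lorry", 0.92), Matching("Truck", "Driver", 0.15)]
    print(mappings_from_matchings(matchings))  # [Mapping(t_i='Truck', t_j='Lorry')]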

The problem of ontology alignment has recently been tackled by trying to compute matchings first, and then mappings (based on the matchings), in an automatic fashion. Systems such as X-SOM[1] and COMA++ currently obtain very high precision and recall. To compare existing approaches, the Ontology Alignment Evaluation Initiative [oaei.ontologymatching.org] runs a yearly competition that compares the best approaches against a common benchmark.

Cognitive Science

For cognitive scientists interested in ontology alignment, the "concepts" are nodes in semantic networks that reside in brains as "conceptual systems". The focal question is: if everyone has unique experiences and thus different semantic networks, then how can we ever understand each other? This question has been addressed by a model called ABSURDIST (Aligning Between Systems Using Relations Derived Inside Systems for Translation). Goldstone and Rogosky (2002) identified three major dimensions for similarity, as equations for "internal similarity, external similarity, and mutual inhibition".
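
The published model is specified by coupled update equations; as a rough illustration only, the Python sketch below captures two of the three dimensions, internal similarity and mutual inhibition, while omitting external similarity. The function name, constants, and normalizations are our own assumptions, not the model's actual equations.

    def absurdist_sketch(d1, d2, steps=200, rate=0.1, inhibition=0.5):
        """Align two 'conceptual systems' given only their internal structure.

        d1 and d2 are square matrices of within-system distances (in [0, 1])
        between concepts. Returns a correspondence matrix C where C[x][a]
        is the strength of the hypothesis 'concept x translates to concept a'.
        """
        n, m = len(d1), len(d2)
        C = [[0.5] * m for _ in range(n)]  # start all correspondences as plausible
        for _ in range(steps):
            new_C = [row[:] for row in C]
            for x in range(n):
                for a in range(m):
                    # Internal similarity: support from other pairs (y, b) whose
                    # within-system distances agree with (x, a)'s.
                    excite = sum(
                        C[y][b] * (1.0 - abs(d1[x][y] - d2[a][b]))
                        for y in range(n) for b in range(m)
                        if y != x and b != a
                    ) / max(1, (n - 1) * (m - 1))
                    # Mutual inhibition: rival correspondences for x or for a
                    # push C[x][a] down, enforcing one-to-one pressure.
                    inhibit = (sum(C[x][b] for b in range(m) if b != a) +
                               sum(C[y][a] for y in range(n) if y != x)) / max(1, n + m - 2)
                    new_C[x][a] = min(1.0, max(0.0,
                        C[x][a] + rate * (excite - inhibition * inhibit)))
            C = new_C
        return C

    # Toy usage: two systems with identical internal structure; the diagonal
    # (identity) correspondences should come to dominate each row.
    d = [[0.0, 0.2, 0.9], [0.2, 0.0, 0.7], [0.9, 0.7, 0.0]]
    C = absurdist_sketch(d, d)
    print([max(range(3), key=lambda a: C[x][a]) for x in range(3)])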

Ontology alignment is closely related to analogy formation, where "concepts" are variables in logic expressions.

Philosophy

For philosophers, much like cognitive scientists, the interest is in the nature of "understanding." The roots of discourse, however, may be traced to radical interpretation.

References

  1. Carlo A. Curino, Giorgio Orsi, and Letizia Tanca (2007). "X-SOM: A Flexible Ontology Mapper". International Workshop on Semantic Web Architectures for Enterprises (SWAE’07), in conjunction with the 18th International Conference on Database and Expert Systems Applications (DEXA’07).
