Academics | |
---|---|
Disciplines: | Natural Language Processing, Computational Linguistics, Semantics |
Umbrella Organization: | ACL-SIGLEX |

Workshop Overview | |
---|---|
Founded: | 1998 (Senseval) |
Latest: | SemEval-2010, ACL @ Uppsala, Sweden |
Upcoming: | SemEval-2012, *SEM @ Montreal, Canada |

History | |
---|---|
Senseval-1 | 1998 @ Sussex |
Senseval-2 | 2001 @ Toulouse |
Senseval-3 | 2004 @ Barcelona |
SemEval-2007 | 2007 @ Prague |
SemEval-2010 | 2010 @ Uppsala |
SemEval-2012 | 2012 @ Montreal |
SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval word sense evaluation series. The evaluations are intended to explore the nature of meaning in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.
This series of evaluations provides a mechanism for characterizing more precisely what is necessary to compute meaning. As such, the evaluations provide an emergent mechanism for identifying the problems and solutions involved in computing with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify word senses computationally, and have evolved to investigate the interrelationships among the elements in a sentence (e.g., semantic role labeling), relations between sentences (e.g., coreference), and the nature of what we are saying (semantic relations and sentiment analysis).
The purpose of the SemEval and Senseval exercises is to evaluate semantic analysis systems. "Semantic analysis" refers to a formal analysis of meaning, and "computational" refers to approaches that in principle support effective implementation[1].
The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation[2].
From the earliest days, assessing the quality of word sense disambiguation (WSD) algorithms had been primarily a matter of intrinsic evaluation, and “almost no attempts had been made to evaluate embedded WSD components”[3]. Only very recently (2006) had extrinsic evaluations begun to provide some evidence for the value of WSD in end-user applications[4]. Until 1990 or so, discussions of the sense disambiguation task focused mainly on illustrative examples rather than comprehensive evaluation. The early 1990s saw the beginnings of more systematic and rigorous intrinsic evaluations, including more formal experimentation on small sets of ambiguous words[5].
In April 1997, a workshop entitled Tagging with Lexical Semantics: Why, What, and How? was held in conjunction with the Conference on Applied Natural Language Processing[6]. At the time, there was a clear recognition that manually annotated corpora had revolutionized other areas of NLP, such as part-of-speech tagging and parsing, and that corpus-driven approaches had the potential to revolutionize automatic semantic analysis as well[7]. Kilgarriff recalled that there was “a high degree of consensus that the field needed evaluation,” and several practical proposals by Resnik and Yarowsky kicked off a discussion that led to the creation of the Senseval evaluation exercises.[8]
The framework of the SemEval/Senseval evaluation workshops emulates the Message Understanding Conferences (MUCs) and other evaluation workshops run by ARPA (the Advanced Research Projects Agency, later renamed the Defense Advanced Research Projects Agency, DARPA).
Stages of SemEval/Senseval evaluation workshops[10]
Senseval-1 and Senseval-2 focused on evaluating WSD systems for major languages for which corpora and computerized dictionaries were available. Senseval-3 looked beyond the lexeme and began evaluating systems in wider areas of semantics, such as semantic roles (technically known as theta roles in formal semantics) and logic form transformation (in which the semantics of phrases, clauses, or sentences are commonly represented in first-order logic forms); Senseval-3 also explored the performance of semantic analysis in machine translation.
As the types of computational semantic systems grew beyond the coverage of WSD, Senseval evolved into SemEval, where more aspects of computational semantic systems were evaluated. The tables below (1) reflect the workshop growth from Senseval to SemEval and (2) give an overview of which areas of computational semantics were evaluated throughout the Senseval/SemEval workshops.
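Senseval-style WSD evaluation typically scores a system with precision over the items it attempts and recall over all gold-standard items, so that a system may abstain on hard instances without being penalized twice. The sketch below illustrates this scoring scheme; the item identifiers and sense labels are invented for illustration.

```python
def score(gold, predictions):
    """Senseval-style scoring where a system may abstain (value None).

    precision = correct / attempted; recall = correct / total gold items.
    """
    attempted = {k: v for k, v in predictions.items() if v is not None}
    correct = sum(1 for k, v in attempted.items() if gold.get(k) == v)
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical gold answers and system output; the system abstains on item4.
gold = {"item1": "sense_a", "item2": "sense_b",
        "item3": "sense_a", "item4": "sense_c"}
pred = {"item1": "sense_a", "item2": "sense_a",
        "item3": "sense_a", "item4": None}

p, r = score(gold, pred)
print(round(p, 2), round(r, 2))  # → 0.67 0.5
```

A system that answers everything has equal precision and recall; abstaining trades recall for precision, which is why both numbers were reported.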
The SemEval exercises provide a mechanism for examining issues in semantic analysis of texts. The topics of interest fall short of the logical rigor that is found in formal computational semantics, attempting to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.
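Lexical-sample disambiguation of this kind is often illustrated with a simplified Lesk-style overlap method: choose the sense whose dictionary gloss shares the most words with the target word's context. The sketch below uses a toy two-sense inventory for "bank"; the sense labels and glosses are invented for illustration, not drawn from any Senseval dataset.

```python
def lesk_style_wsd(context, sense_inventory):
    """Pick the sense whose gloss has the largest word overlap with the context."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_inventory.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Toy sense inventory for the ambiguous word "bank" (invented glosses).
senses = {
    "bank/financial": "an institution that accepts deposits and lends money",
    "bank/river": "sloping land beside a body of water such as a river",
}

print(lesk_style_wsd("she sat on the sloping land beside the river", senses))
# → bank/river
```

Real systems evaluated at Senseval/SemEval are far more sophisticated (supervised classifiers, rich features, sense-tagged training data), but the overlap idea captures why sense inventories and context are the two central ingredients of the task.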
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of semantic analysis remains an outstanding research issue.
Workshop | No. of Tasks | Areas of study | Languages of Data Evaluated |
---|---|---|---|
Senseval-1 | 3 | Word Sense Disambiguation (WSD) - Lexical Sample WSD tasks | English, French, Italian |
Senseval-2 | 12 | Word Sense Disambiguation (WSD) - Lexical Sample, All Words, Translation WSD tasks | Basque, Chinese, Czech, Danish, Dutch, English, Estonian, Italian, Japanese, Korean, Spanish, Swedish |
Senseval-3 | 16 (including 2 cancelled tasks) | Logic Form Transformation, Machine Translation (MT) Evaluation, Semantic Role Labelling, WSD | Basque, Catalan, Chinese, English, Italian, Romanian, Spanish |
SemEval-2007 | 19 (including 1 cancelled task) | Cross-lingual, Frame Extraction, Information Extraction, Lexical Substitution, Lexical Sample, Metonymy, Semantic Annotation, Semantic Relations, Semantic Role Labelling, Sentiment Analysis, Time Expression, WSD | Arabic, Catalan, Chinese, English, Spanish, Turkish |
SemEval-2010 | 18 (including 1 cancelled task) | Coreference, Cross-lingual, Ellipsis, Information Extraction, Lexical Substitution, Metonymy, Noun Compounds, Parsing, Semantic Relations, Semantic Role Labeling, Sentiment Analysis, Textual Entailment, Time Expressions, WSD | Catalan, Chinese, Dutch, English, French, German, Italian, Japanese, Spanish |
The major tasks in semantic evaluation include the following areas of natural language processing. This list is expected to grow as the field progresses[11]. The following table shows the areas of studies that were involved in Senseval-1 through SemEval-2010:
Areas of Study | Senseval-1 | Senseval-2 | Senseval-3 | SemEval-2007 | SemEval-2010 |
---|---|---|---|---|---|
Coreference Resolution | | | | | ✓ |
Multi-lingual or Cross-lingual Lexical Substitution | | ✓ | | ✓ | ✓ |
Ellipsis | | | | | ✓ |
Keyphrase Extraction (Information Extraction) | | | | | ✓ |
Metonymy (Information Extraction) | | | | ✓ | ✓ |
Noun Compounds (Information Extraction) | | | | | ✓ |
Semantic Relation Identification | | | | ✓ | ✓ |
Semantic Role Labeling | | | ✓ | ✓ | ✓ |
Sentiment Analysis | | | | ✓ | ✓ |
Time Expression | | | | ✓ | ✓ |
Textual Entailment | | | | | ✓ |
Word Sense Disambiguation (Lexical Sample) | ✓ | ✓ | ✓ | ✓ | ✓ |
Word Sense Disambiguation (All-Words) | | ✓ | ✓ | ✓ | ✓ |
Word Sense Induction | | | | ✓ | ✓ |