Word sense disambiguation
From Wikipedia, the free encyclopedia
In computational linguistics, word sense disambiguation (WSD) is the problem of determining in which sense a word having a number of distinct senses is used in a given sentence. For example, consider the word bass, two distinct senses of which are:
- a type of fish
- tones of low frequency
and the sentences The bass part of the song is very moving and I went fishing for some sea bass. To a human it is obvious that the first sentence uses bass in sense 2 above and the second in sense 1. Although this seems obvious to a human, developing algorithms to replicate this human ability is a difficult task.
Difficulties
One problem with word sense disambiguation is deciding what the senses are. In cases like the word bass above, at least some senses are obviously different. In other cases, however, the different senses can be closely related (one meaning being a metaphorical or metonymic extension of another), and in such cases the division of a word into senses becomes much more difficult. Different dictionaries will provide different divisions of words into senses. One solution some researchers have used is to choose a particular dictionary and simply use its set of senses. Generally, however, research results using broad sense distinctions have been much better than those using narrow ones, so most researchers ignore fine-grained distinctions in their work.
Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, humans do not always agree on the task at hand: given a list of senses and sentences, humans will not always agree on which sense a word is used in. A computer cannot be expected to give better performance on such a task than a human (indeed, since the human serves as the standard, the computer being better than the human is incoherent), so human performance serves as an upper bound. Human performance, however, is much better on coarse-grained than on fine-grained distinctions, which is again why research on coarse-grained distinctions is most useful.
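The human upper bound described above is typically estimated by measuring how often two annotators assign the same sense. A minimal sketch, with invented annotations (raw percentage agreement is shown; chance-corrected measures also exist):

```python
# Sketch: inter-annotator agreement on a sense-tagging task.
# The two judges' sense labels below are invented for illustration.

def agreement(labels_a, labels_b):
    """Fraction of items on which two annotators chose the same sense."""
    assert len(labels_a) == len(labels_b)
    same = sum(a == b for a, b in zip(labels_a, labels_b))
    return same / len(labels_a)

judge1 = ["fish", "music", "fish", "fish", "music", "fish"]
judge2 = ["fish", "music", "fish", "music", "music", "fish"]
print(agreement(judge1, judge2))  # 5 of 6 items agree: ~0.833
```

If the judges agree only 83% of the time, a system scoring above 83% against either judge cannot be meaningfully distinguished from one at the ceiling.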
Approaches
As in all natural language processing, there are two main approaches to WSD — deep approaches and shallow approaches.
Deep approaches presume access to a comprehensive body of world knowledge. Knowledge such as "you can go fishing for a type of fish, but not for low frequency sounds" and "songs have low frequency sounds as parts, but not types of fish" is then used to determine in which sense the word is used. These approaches are not very successful in practice, mainly because such a body of knowledge does not exist in computer-readable format outside of very limited domains. But if such knowledge did exist, they would be much more accurate than the shallow approaches.
Shallow approaches do not try to understand the text. They simply consider the surrounding words, using information like "if bass has the words sea or fishing nearby, it is probably in the fish sense; if bass has the words music or song nearby, it is probably in the music sense." These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to computers' limited world knowledge. It can, though, be confused by sentences like The dogs bark at the tree, which contains the word bark near both tree and dogs.
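A nearby-word rule of this kind can be sketched in a few lines. The cue-word lists below are illustrative assumptions, not rules learned from a corpus:

```python
# Sketch: a shallow, keyword-based disambiguator for "bass".
# The cue-word sets are hand-picked for illustration only.

CUES = {
    "fish": {"sea", "fishing", "fisherman", "caught"},
    "music": {"music", "song", "play", "guitar"},
}

def disambiguate_bass(sentence):
    """Return the likely sense of 'bass' based on nearby cue words."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    scores = {sense: len(words & cues) for sense, cues in CUES.items()}
    # Pick the sense sharing the most cue words; None if no cue appears.
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(disambiguate_bass("I went fishing for some sea bass"))          # fish
print(disambiguate_bass("The bass part of the song is very moving"))  # music
```

A sentence containing no cue words at all returns no decision, which illustrates why real systems learn much larger cue sets, with weights, from sense-tagged corpora.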
These approaches normally work by defining a window of N content words around each word to be disambiguated in the corpus, and statistically analyzing those N surrounding words. Two shallow approaches used to train and then disambiguate are naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Over the last few years, however, there has not been any major improvement in the performance of any of these methods.
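The window-based naïve Bayes approach can be sketched as follows. The tiny sense-tagged corpus is invented for illustration; real systems train on large annotated corpora:

```python
# Sketch: supervised WSD with a naive Bayes classifier over a window of
# N content words. The sense-tagged training examples are invented.

from collections import Counter, defaultdict
import math

def context(words, i, n=2):
    """The up-to-n words on each side of position i."""
    return words[max(0, i - n):i] + words[i + 1:i + 1 + n]

def train(tagged_examples, n=2):
    """tagged_examples: list of (word_list, target_index, sense)."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    for words, i, sense in tagged_examples:
        sense_counts[sense] += 1
        word_counts[sense].update(context(words, i, n))
    return sense_counts, word_counts

def classify(words, i, sense_counts, word_counts, n=2):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense)."""
    total = sum(sense_counts.values())
    vocab = {w for c in word_counts.values() for w in c}
    best, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = math.log(count / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context(words, i, n):
            # Add-one smoothing so unseen context words don't zero the score.
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best, best_score = sense, score
    return best

corpus = [
    ("i went fishing for sea bass".split(), 5, "fish"),
    ("they caught a large bass in the lake".split(), 4, "fish"),
    ("the bass line of the song is moving".split(), 1, "music"),
    ("he plays bass in a jazz band".split(), 2, "music"),
]
sc, wc = train(corpus)
print(classify("fresh sea bass for dinner".split(), 2, sc, wc))  # fish
```

The classifier treats the window words as conditionally independent given the sense, which is what makes it "naïve"; decision trees and kernel methods relax that assumption in different ways.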
It is instructive to compare the word sense disambiguation problem with the problem of part-of-speech tagging. Both involve tagging words, whether with senses or with parts of speech. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 95% accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.
External links
- http://www.senseval.org (Evaluation Exercises for Word Sense Disambiguation) The de facto standard benchmarks for WSD systems.
- Word Sense Disambiguation: The State of the Art (PDF) A comprehensive overview by Nancy Ide and Jean Véronis (1998).
- A tutorial on Word Sense Disambiguation.
- www.wsdbook.org Companion website for the book Word Sense Disambiguation: Algorithms and Applications, edited by Agirre and Edmonds (2006). Covers the entire field with chapters contributed by leading researchers.