Language identification

Language identification is the process of determining which natural language given content is in. Traditionally, language identification (as practiced, for instance, in library science) has relied on manually identifying frequent words and letters known to be characteristic of particular languages. More recently, computational approaches have been applied to the problem by treating language identification as a kind of text categorization, a natural language processing task that relies on statistical methods.

Non-Computational Approaches

In the field of library science, language identification is important for categorizing materials. Because librarians often have to categorize materials in languages they are not familiar with, they sometimes rely on tables of frequent words and distinctive letters or characters to help them identify the language. While a single such word or character may not suffice to distinguish one language from another with a similar orthography, identifying several is often highly reliable.


Statistical Approaches

One statistical approach is to compare the compressibility of the text to the compressibility of texts in the known languages, a technique known as the mutual information based distance measure [1]. The same technique can also be used to empirically construct family trees of languages which closely correspond to the trees constructed using historical methods.
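A minimal sketch of this compression-based idea is given below, using Python's general-purpose zlib compressor in place of the compressors used in the literature. The tiny reference corpora, the function names and the test sentence are illustrative assumptions only; a practical system would train on much larger samples of each candidate language.

    import zlib


    def compression_distance(reference_text: str, unknown_text: str) -> int:
        """Extra bytes needed to compress the unknown text when it is
        appended to a reference corpus, compared with the reference alone."""
        reference = reference_text.encode("utf-8")
        combined = reference + unknown_text.encode("utf-8")
        return len(zlib.compress(combined)) - len(zlib.compress(reference))


    def identify_language(unknown_text: str, corpora: dict) -> str:
        """Return the language whose reference corpus 'absorbs' the unknown
        text most cheaply, i.e. yields the smallest compression increase."""
        return min(corpora,
                   key=lambda lang: compression_distance(corpora[lang], unknown_text))


    # Toy reference corpora, assumed purely for illustration.
    corpora = {
        "english": "the quick brown fox jumps over the lazy dog " * 200,
        "italian": "la volpe marrone salta velocemente sopra il cane pigro " * 200,
    }
    print(identify_language("the dog chased the fox over the hill", corpora))

The language whose corpus shares the most statistical regularities with the unknown text compresses it most cheaply and is therefore reported as the best match.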

Another technique, described by Dunning (1994), is to create a language n-gram model from a "training text" for each of the candidate languages. For any piece of text whose language needs to be identified, a similar model is built and compared with each stored model; the language whose stored model is most similar to the model of the unknown text is the most likely one.
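A minimal sketch of the n-gram idea is shown below. It trains a character-trigram count model per language and scores the unknown text under each model with add-one smoothing; this is a common simplification rather than Dunning's exact formulation, and the one-sentence training texts are assumptions chosen only to keep the example short.

    from collections import Counter
    import math


    def char_ngrams(text: str, n: int = 3):
        """Character n-grams of the text, padded with spaces at the edges."""
        text = f" {text.lower()} "
        return [text[i:i + n] for i in range(len(text) - n + 1)]


    def train_model(training_text: str, n: int = 3) -> Counter:
        """Count the character n-grams of a training text for one language."""
        return Counter(char_ngrams(training_text, n))


    def log_likelihood(text: str, model: Counter, n: int = 3) -> float:
        """Log probability of the text under an n-gram count model,
        with add-one smoothing for n-grams never seen in training."""
        total = sum(model.values())
        vocabulary = len(model) + 1
        return sum(
            math.log((model[gram] + 1) / (total + vocabulary))
            for gram in char_ngrams(text, n)
        )


    def identify(text: str, models: dict) -> str:
        """Return the language whose model gives the text the highest score."""
        return max(models, key=lambda lang: log_likelihood(text, models[lang]))


    # Toy training sentences, assumed purely for illustration.
    models = {
        "english": train_model("the quick brown fox jumps over the lazy dog"),
        "spanish": train_model("el veloz zorro marron salta sobre el perro perezoso"),
    }
    print(identify("the dogs are sleeping in the house", models))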

A related problem is the induction of a grammar for an unknown language given a parallel text in a known language, what might be called the "Rosetta Stone" problem. Kuhn's ACL paper (2004) presents techniques for this problem [2].

References

  • Benedetto, D., E. Caglioti and V. Loreto. "Language trees and zipping". Physical Review Letters, 88:4 (2002). [3], [4], [5]
  • Cilibrasi, Rudi and Paul M.B. Vitanyi. "Clustering by compression". IEEE Transactions on Information Theory, 51(4), April 2005, 1523-1545. [6]
  • Dunning, T. (1994) "Statistical Identification of Language". Technical Report MCCS 94-273, New Mexico State University, 1994.
  • Goodman, Joshua. (2002) Extended comment on "Language Trees and Zipping". Microsoft Research, Feb 21 2002. (A criticism of the data-compression approach in favor of the Naive Bayes method.) [7]
  • Poutsma, Arjen. (2001) "Applying Monte Carlo techniques to language identification". SmartHaven, Amsterdam. Presented at CLIN 2001.
  • The Economist. (2002) "The elements of style: Analysing compressed data leads to impressive results in linguistics". [8]
  • Survey of the State of the Art in Human Language Technology (1996), section 8.7: "Automatic Language Identification". [9]

External Links

  • Unknown Language Identification at Georgetown University [10]
  • Links to LID tools by Gertjan van Noord [11]
  • Implementation of an n-gram based LID tool in Python by Damir Cavar [12]