Natural language processing
From Wikipedia, the free encyclopedia
Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. It studies the problems of automated generation and understanding of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, and natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate.
Tasks and limitations
In theory, natural language processing is a very attractive method of human-computer interaction. Early systems such as SHRDLU, working in restricted "blocks worlds" with restricted vocabularies, worked extremely well, leading researchers to an optimism that soon faded when the systems were extended to more realistic situations involving real-world ambiguity and complexity.
Natural language understanding is sometimes referred to as an AI-complete problem, because natural language recognition seems to require extensive knowledge about the outside world and the ability to manipulate it. The definition of "understanding" is one of the major problems in natural language processing.
Concrete problems
Some examples of the problems faced by natural language understanding systems:
- The sentences We gave the monkeys the bananas because they were hungry and We gave the monkeys the bananas because they were over-ripe have the same surface grammatical structure. However, in one of them the word they refers to the monkeys, in the other it refers to the bananas: the sentence cannot be understood properly without knowledge of the properties and behaviour of monkeys and bananas.
- A string of words may be interpreted in myriad ways. For example, the string Time flies like an arrow can be read as:
- time moves quickly just like an arrow does;
- measure the speed of flying insects like you would measure that of an arrow, i.e. (You should) time flies like you would an arrow;
- measure the speed of flying insects like an arrow would, i.e. Time flies in the same way that an arrow would (time them);
- measure the speed of flying insects that are like arrows, i.e. Time those flies that are like arrows;
- a type of flying insect, "time-flies," enjoy arrows (compare Fruit flies like a banana).
English is particularly challenging in this regard because it has little inflectional morphology to distinguish between parts of speech.
- English and several other languages don't specify which word an adjective applies to. For example, the string "pretty little girls' school" can be read in several ways:
- Does the school look little?
- Do the girls look little?
- Do the girls look pretty?
- Does the school look pretty?
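The readings above correspond to different ways of bracketing the phrase. A minimal sketch (all names here are illustrative, not from any particular library) that enumerates every binary bracketing of a four-word phrase, showing how quickly such attachment ambiguity multiplies:

```python
def bracketings(words):
    """Enumerate every binary bracketing of a phrase.

    Each bracketing corresponds to one way the modifiers could attach.
    """
    if len(words) == 1:
        return [words[0]]
    results = []
    # Split the phrase at every position and combine sub-bracketings.
    for i in range(1, len(words)):
        for left in bracketings(words[:i]):
            for right in bracketings(words[i:]):
                results.append(f"({left} {right})")
    return results

phrase = ["pretty", "little", "girls'", "school"]
for b in bracketings(phrase):
    print(b)
```

For four words this yields five bracketings (the Catalan number C(3)), one of which, ((pretty little) (girls' school)), corresponds to "pretty" and "little" both modifying "girls' school".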
Subproblems
- Speech segmentation
- In most spoken languages, the sounds representing successive letters blend into each other, so converting the analog signal to discrete characters can be very difficult. Also, natural speech contains hardly any pauses between successive words; locating word boundaries usually must take into account grammatical and semantic constraints, as well as the context.
- Text segmentation
- Some written languages, such as Chinese, Japanese, and Thai, do not mark word boundaries, so any significant text parsing usually requires identifying word boundaries first, which is often a non-trivial task.
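A common baseline for segmenting text written without spaces is greedy longest-match against a lexicon. A minimal sketch, using a toy English example with the spaces removed (a real system would use a large lexicon for Chinese, Japanese, or Thai):

```python
def max_match(text, dictionary):
    """Greedy longest-match word segmentation.

    At each position, take the longest dictionary word that matches;
    fall back to a single character when nothing matches.
    """
    words = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest candidate first
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# Toy lexicon; the algorithm is the same for languages without spaces.
lexicon = {"the", "table", "down", "there"}
print(max_match("thetabledownthere", lexicon))  # ['the', 'table', 'down', 'there']
```

The greedy strategy is only a baseline: it fails on inputs where an early long match blocks the correct later segmentation, which is why practical segmenters add statistical scoring.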
- Word sense disambiguation
- Many words have more than one meaning; we have to select the meaning which makes the most sense in context.
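One classic way to pick the contextually best sense is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the surrounding context. A minimal sketch with a hypothetical two-sense entry for "bank" (glosses invented for illustration):

```python
def lesk(word, context, sense_glosses):
    """Pick the sense whose gloss overlaps most with the context
    (the simplified Lesk algorithm)."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Hypothetical glosses; a real system would draw them from a dictionary.
glosses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "the sloping land beside a body of water",
}
print(lesk("bank", "he sat on the bank of the river and watched the water", glosses))
# bank/river
```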
- Syntactic ambiguity
- The grammar for natural languages is ambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information. Specific problem components of syntactic ambiguity include sentence boundary disambiguation.
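Ambiguity can be made concrete by counting parse trees with a CYK-style chart. A minimal sketch over a toy grammar (in Chomsky normal form, invented for illustration) exhibiting the classic prepositional-phrase attachment ambiguity:

```python
from collections import defaultdict

def count_parses(words, binary_rules, lexical_rules):
    """CYK-style chart that counts parse trees instead of just
    recognizing, so ambiguity shows up as a count greater than 1."""
    n = len(words)
    chart = defaultdict(int)  # (start, end, symbol) -> number of trees
    for i, w in enumerate(words):
        for lhs in lexical_rules.get(w, []):
            chart[(i, i + 1, lhs)] += 1
    for span in range(2, n + 1):
        for start in range(n - span + 1):
            end = start + span
            for mid in range(start + 1, end):
                for lhs, (b, c) in binary_rules:
                    chart[(start, end, lhs)] += (
                        chart[(start, mid, b)] * chart[(mid, end, c)]
                    )
    return chart[(0, n, "S")]

# Toy grammar: the PP can attach to the verb phrase or the noun phrase.
binary = [
    ("S", ("NP", "VP")),
    ("VP", ("V", "NP")),
    ("VP", ("VP", "PP")),
    ("NP", ("NP", "PP")),
    ("NP", ("Det", "N")),
    ("PP", ("P", "NP")),
]
lexicon = {
    "I": ["NP"], "saw": ["V"], "the": ["Det"],
    "man": ["N"], "telescope": ["N"], "with": ["P"],
}
print(count_parses("I saw the man with the telescope".split(), binary, lexicon))  # 2
```

The two trees correspond to "saw with the telescope" versus "the man with the telescope"; choosing between them requires exactly the semantic and contextual information mentioned above.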
- Imperfect or irregular input
- Foreign or regional accents and vocal impediments in speech; typing or grammatical errors, OCR errors in texts.
- Speech acts and plans
- Sentences often don't mean what they literally say; for instance, a good answer to "Can you pass the salt?" is to pass the salt; in most contexts "Yes" is not a good answer, although "No" is better and "I'm afraid I can't see it" is better yet. Similarly, if a class was not offered last year, "The class was not offered last year" is a better answer to the question "How many students failed the class last year?" than "None" is.
Statistical NLP
Statistical natural language processing uses stochastic, probabilistic and statistical methods to resolve some of the difficulties discussed above, especially those which arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. The technology for statistical NLP comes mainly from machine learning and data mining, both of which are fields of artificial intelligence that involve learning from data.
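The simplest such Markov model is a bigram language model: estimate P(word | previous word) from a corpus and score competing analyses by their probability. A minimal sketch over a toy three-sentence corpus (a real system would train on millions of sentences, with smoothing for unseen bigrams):

```python
from collections import Counter

def train_bigrams(corpus):
    """Estimate bigram probabilities P(w2 | w1) by counting,
    the simplest Markov model used in statistical NLP."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    return lambda w1, w2: bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

def score(sentence, prob):
    """Probability of a sentence under the bigram model (no smoothing)."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for w1, w2 in zip(tokens, tokens[1:]):
        p *= prob(w1, w2)
    return p

corpus = ["time flies quickly", "fruit flies like a banana", "time flies"]
prob = train_bigrams(corpus)
# The model prefers word sequences it has actually seen.
print(score("time flies", prob) > score("flies time", prob))  # True
```

Disambiguation then amounts to keeping the analysis the model scores highest, rather than enumerating thousands of equally weighted parses.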
Major tasks in NLP
- Automatic summarization
- Foreign language reading aid
- Foreign language writing aid
- Information extraction
- Information retrieval
- Machine translation
- Natural language generation
- Optical character recognition
- Question answering
- Speech recognition
- Spoken dialogue management
- Text simplification
- Text to speech
- Text-proofing
Evaluation of natural language processing
The goal of NLP evaluation is to measure one or more qualities of an algorithm or a system, in order to determine whether (or to what extent) the system meets the goals of its designers or the needs of its users. Research in NLP evaluation has received considerable attention, because the definition of proper evaluation criteria is one way to specify an NLP problem precisely, thus going beyond the vagueness of tasks defined only as language understanding or language generation. A precise set of evaluation criteria, which mainly includes evaluation data and evaluation metrics, enables several teams to compare their solutions to a given NLP problem.
- History of evaluation in NLP
...
Depending on the evaluation procedures, a number of distinctions are traditionally made in NLP evaluation.
- Intrinsic vs. extrinsic evaluation
Intrinsic evaluation considers an isolated NLP system and characterizes its performance mainly with respect to a gold-standard result pre-defined by the evaluators. Extrinsic evaluation, also called evaluation in use, considers the NLP system in a more complex setting, either as an embedded component or as serving a precise function for a human user. The extrinsic performance of the system is then characterized in terms of its utility with respect to the overall task of the complex system or the human user.
- Black-box vs. glass-box evaluation
Black-box evaluation requires one to run an NLP system on a given data set and to measure a number of parameters related to the quality of the process (speed, reliability, resource consumption) and, most importantly, to the quality of the result (e.g. the accuracy of data annotation or the fidelity of a translation). Glass-box evaluation looks at the design of the system, the algorithms that are implemented, the linguistic resources it uses (e.g. vocabulary size), etc. Given the complexity of NLP problems, it is often difficult to predict performance only on the basis of glass-box evaluation, but this type of evaluation is more informative with respect to error analysis or future developments of a system.
- Automatic vs. manual evaluation
In many cases, automatic procedures can be defined to evaluate an NLP system by comparing its output with the gold-standard (or desired) one. Although the cost of producing the gold standard can be quite high, automatic evaluation can be repeated as often as needed without much additional cost (on the same input data). However, for many NLP problems the definition of a gold standard is a complex task, and can prove impossible when inter-annotator agreement is insufficient. Manual evaluation is performed by human judges, who are instructed to estimate the quality of a system, or most often of a sample of its output, based on a number of criteria. Although, thanks to their linguistic competence, human judges can be considered the reference for a number of language processing tasks, there is also considerable variation across their ratings. This is why automatic evaluation is sometimes referred to as objective, while human evaluation appears to be more subjective.
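Both halves of this distinction are easy to make concrete: automatic evaluation as accuracy against a gold standard, and inter-annotator agreement as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (all data invented for illustration):

```python
def accuracy(system, gold):
    """Automatic evaluation: fraction of outputs matching the gold standard."""
    assert len(system) == len(gold)
    return sum(s == g for s, g in zip(system, gold)) / len(gold)

def cohen_kappa(ann1, ann2):
    """Inter-annotator agreement corrected for chance; a low kappa signals
    that a reliable gold standard may be impossible to define."""
    labels = set(ann1) | set(ann2)
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Chance agreement: product of each annotator's label frequencies.
    expected = sum(
        (ann1.count(label) / n) * (ann2.count(label) / n) for label in labels
    )
    return (observed - expected) / (1 - expected)

gold = ["NOUN", "VERB", "NOUN", "ADJ", "NOUN"]
system = ["NOUN", "VERB", "VERB", "ADJ", "NOUN"]
print(accuracy(system, gold))  # 0.8

ann1 = ["pos", "pos", "neg", "pos", "neg", "neg"]
ann2 = ["pos", "neg", "neg", "pos", "neg", "pos"]
print(round(cohen_kappa(ann1, ann2), 3))  # 0.333
```

Here the two annotators agree on 4 of 6 items, but since half that agreement is expected by chance, kappa is only 0.333, well below the levels usually considered sufficient for building a gold standard.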
Organizations and conferences
- Association for Computational Linguistics
- Association for Machine Translation in the Americas
- AFNLP - Asian Federation of Natural Language Processing Associations
See also
- computational linguistics
- controlled natural language
- information retrieval
- latent semantic indexing
- lojban / loglan
- Transderivational search
- Biomedical text mining
- Computer-assisted reviewing
- Chatterbot
- the Inform 7 programming language
- The fictional universal translator
External links
Resources
- Resources for Text, Speech and Language Processing
- A comprehensive list of resources, classified by category
- Language Technology Documentation Centre in Finland (FiLT)
Implementations
- OpenNLP
- DELPH-IN: integrated technology for deep language processing
- LinguaStream: a generic platform for Natural Language Processing experimentation
- GATE - a Java Library for Text Engineering
- Natural Language Toolkit
- MARF: Modular Audio Recognition Framework for voice and statistical NLP processing
- FreeLing: an open source suite of language analyzers
- LingPipe: Java Natural Language Processing Toolkit
- The wraetlic toolkit
- Antelope framework for Microsoft .NET 2.0
- Teach Rose - Web based natural learning project