A treebank or parsed corpus is a text corpus in which each sentence has been parsed, i.e. annotated with syntactic structure. Syntactic structure is commonly represented as a tree structure, hence the name treebank. The term parsed corpus is often used interchangeably with treebank, with the emphasis on the primacy of the sentences rather than the trees.
Treebanks are often created on top of a corpus that has already been annotated with part-of-speech tags. In turn, treebanks are sometimes enhanced with semantic or other linguistic information.
Treebanks can be created completely manually, where linguists annotate each sentence with syntactic structure, or semi-automatically, where a parser assigns some syntactic structure which linguists then check and, if necessary, correct. In practice, fully checking and completing the parsing of a natural language corpus is a labour-intensive project that can take teams of graduate linguists several years. The level of annotation detail and the breadth of the linguistic sample determine the difficulty of the task and the time required to build a treebank.
Some treebanks follow a specific linguistic theory in their syntactic annotation (e.g. the BulTreeBank follows HPSG) but most try to be less theory-specific. However, two main groups can be distinguished: treebanks that annotate phrase structure (for example the Penn Treebank or ICE-GB) and those that annotate dependency structure (for example the Prague Dependency Treebank or the Quranic Arabic Dependency Treebank).
It is important to clarify the distinction between the formal representation and the file format used. Treebanks are necessarily constructed according to a particular grammar. The same grammar may be implemented by different file formats.
For example, the syntactic analysis for John loves Mary, shown in the figure on the right, may be represented by simple labelled brackets in a text file, like this (following the Penn Treebank notation):
(S (NP (NNP John)) (VP (VBZ loves) (NP (NNP Mary))) (. .))
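A bracketed string of this kind can be decoded with only a few lines of code. The following is a minimal, illustrative Python sketch using just the standard library (robust readers, such as NLTK's `Tree.fromstring`, handle many more edge cases):

```python
# Minimal sketch: read a Penn-style labelled-bracket string into nested
# (label, children) tuples; leaves are plain word strings.
import re

def parse_brackets(s):
    """Parse a labelled-bracket string into (label, children) tuples."""
    # Tokenise into "(", ")" and everything else (labels and words).
    tokens = re.findall(r"\(|\)|[^\s()]+", s)
    pos = 0

    def node():
        nonlocal pos
        pos += 1                    # consume "("
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                children.append(tokens[pos])  # a leaf word
                pos += 1
        pos += 1                    # consume ")"
        return (label, children)

    return node()

tree = parse_brackets("(S (NP (NNP John)) (VP (VBZ loves) (NP (NNP Mary))) (. .))")
print(tree[0])  # S
```

The nested-tuple form is one simple in-memory representation; treebank toolkits typically provide dedicated tree classes with traversal and display methods.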
This type of representation is popular because it is 'light' on resources and the tree structure is relatively easy to 'read' without software tools. However, as corpora become increasingly complex, other file formats may be preferred. Alternatives include treebank-specific XML schemes, numbered indentation and various types of standoff notation. For a comparison of schemes, see the Amalgam Multi-Treebank, a pico corpus of 20 sentences annotated according to different grammars and notation schemes.
Treebanks can be used in corpus linguistics for studying syntactic phenomena or in computational linguistics for training or testing parsers. Diachronic corpora can be used to study the time course of syntactic change.
The value of parsed corpora is becoming more widely understood. Introspective data have been crucial to syntactic research because introspection provides evidence not only of what is possible in a given language but also of what is not possible. Such negative evidence is, of course, not available in corpora of actual writing or speech. On the other hand, introspection about grammar is itself inevitably partial, as linguists have found when attempting to parse actual speech and writing, and it provides relatively poor information about the information structure of sentences, that is, the discourse contexts in which given syntactic constructions are licensed.
Once parsed, a corpus will contain evidence of both frequency (how common different grammatical structures are in use) and coverage (the discovery of new, unanticipated, grammatical phenomena).
Even an automatically parsed corpus that has not been corrected by human linguists is useful: it can provide evidence of rule frequency, and a parser may be improved by applying it to large amounts of text and gathering the frequencies with which its rules apply. However, only by correcting and completing a corpus by hand is it possible to identify rules that are absent from the parser's knowledge base. (As a bonus, the frequencies obtained are likely to be more accurate.)
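The rule-frequency idea can be sketched briefly. Assuming parsed sentences are held in memory as nested (label, children) tuples with word strings as leaves (one simple, hypothetical representation, not any particular treebank's format), counting productions is a short traversal:

```python
# Minimal sketch: gather production ("rule") frequencies from parsed
# sentences represented as nested (label, children) tuples.
from collections import Counter

def count_productions(tree, counts):
    """Record an 'LHS -> RHS' production for every internal node."""
    label, children = tree
    # The right-hand side is each child's label (or the word, for leaves).
    rhs = [c[0] if isinstance(c, tuple) else c for c in children]
    counts[f"{label} -> {' '.join(rhs)}"] += 1
    for c in children:
        if isinstance(c, tuple):
            count_productions(c, counts)

counts = Counter()
sentence = ("S", [("NP", [("NNP", ["John"])]),
                  ("VP", [("VBZ", ["loves"]),
                          ("NP", [("NNP", ["Mary"])])])])
count_productions(sentence, counts)
print(counts["NP -> NNP"])  # 2
```

Run over a large corpus, such counts yield the rule frequencies discussed above; a hand-corrected corpus additionally reveals productions the parser never proposed.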
Potentially, however, by far the most interesting question for theoretical linguists and psycholinguists is the interaction evidence in parsed corpora. A completed treebank can help linguists carry out experiments into how the decision to use one grammatical construction tends to influence the decision to form others. The idea here is not to improve parsing algorithms but to go to the heart of the question of linguistic choice: to try to understand how speakers and writers make decisions as they form sentences.
Interaction research is particularly fruitful as further layers of annotation, e.g. semantic, pragmatic, are added to a corpus. It is then possible to evaluate the impact of 'non-syntactic' phenomena on grammatical choices.
The parsing and exploitation of parsed corpora has become an important subdiscipline of Corpus Linguistics ever since the first large-scale treebank, The Penn Treebank, was published. Many of the theoretical criticisms of lexical corpora do not apply to parsed corpora. Results from a parsed corpus are more closely commensurate with linguistic theories. However, a new epistemological problem arises: a parsed corpus necessarily requires a particular analysis, and this analysis, and the theory behind it, may be incorrect or deficient.
One of the key ways to extract evidence from a treebank is through search tools. Search tools for parsed corpora typically depend on the annotation scheme that was applied to the corpus. User interfaces range in sophistication from expression-based query systems aimed at computer programmers to full exploration environments aimed at general linguists.
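As an illustration of the kind of structural query such tools answer, the following sketch finds every NP immediately dominated by a VP, again assuming a simple nested-tuple representation of trees (real search tools operate over the corpus's own annotation scheme, usually via a dedicated query language):

```python
# Minimal sketch of a structural query over trees stored as nested
# (label, children) tuples: find child nodes with a given label whose
# parent has another given label (immediate dominance).
def find_dominated(tree, parent_label, child_label, matches=None):
    if matches is None:
        matches = []
    label, children = tree
    for c in children:
        if isinstance(c, tuple):
            if label == parent_label and c[0] == child_label:
                matches.append(c)
            find_dominated(c, parent_label, child_label, matches)
    return matches

sentence = ("S", [("NP", [("NNP", ["John"])]),
                  ("VP", [("VBZ", ["loves"]),
                          ("NP", [("NNP", ["Mary"])])])])
hits = find_dominated(sentence, "VP", "NP")
print(len(hits))  # 1: the object NP "Mary", not the subject NP
```

Queries in practice also involve precedence, non-immediate dominance and feature constraints, which is why dedicated query languages and exploration environments exist.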
The question facing a new researcher is not only, "which corpus is relevant to my needs?" but also "how can I find the information I want in this corpus, and how do I know that the results of my experiments mean what I think they do?"
Wallis 2008[1] discusses the principles of searching treebanks in detail and reviews the state of the art (in 2006).
In addition to strictly treebank-oriented search tools, some tools for searching speech corpora also exist. These are designed to support searches over overlapping hierarchies or graph structures.