In linguistics, the poverty of the stimulus (POS) is the assertion that natural language grammar is unlearnable given the relatively limited data available to children learning a language, and therefore that this knowledge is supplemented with some sort of innate linguistic capacity. As such, the argument strikes against empiricist accounts of language acquisition and is usually construed as being in favor of linguistic nativism.
Nativists claim that humans are born with a specific representational adaptation for language that both funds and limits their competence to acquire specific types of natural languages over the course of their cognitive development and linguistic maturation. The argument is now generally used to support theories and hypotheses of generative grammar. The name was coined by Chomsky in his work Rules and Representations.[1] The thesis emerged out of several of Chomsky's writings on the issue of language acquisition. The argument has long been controversial within linguistics, forming the backbone for the theory of universal grammar.
Though Chomsky and his supporters have reiterated the argument in a variety of different manners (indeed, Pullum and Scholz (2002a) provide no fewer than 13 different "sub-arguments" that can optionally form part of a poverty-of-stimulus argument),[2] one frequent structure of the argument can be summed up as follows:
1. There are patterns in all natural languages that cannot be learned by children using positive evidence alone. (Positive evidence is the set of grammatical sentences the learner is exposed to; negative evidence is information about which strings are ungrammatical, such as explicit corrections.)
2. Children are only ever presented with positive evidence for these particular patterns.
3. Children do learn the correct grammars for their native languages.
Conclusion: Therefore, human beings must have some form of innate linguistic capacity that provides additional knowledge to language learners.
The validity of the argument itself is bitterly contested by constructionists, theorists who reject Chomskian nativism and hold that language is learned through some kind of functional, distributional analysis (Tomasello 1992). One recurring difficulty is the so-called no-negative-evidence problem: children receive little or no explicit information about which sentences are ungrammatical, yet positive evidence alone appears insufficient to rule out overly general grammars. Constructionist accounts struggle at this point, whereas nativists appeal to theories of innate linguistic constraints (Baker 1979, Jackendoff 1975).
Several patterns in language have been claimed to be unlearnable from positive evidence alone. One example is the hierarchical nature of languages. The grammars of human languages produce hierarchical tree structures and some linguists argue that human languages are also capable of infinite recursion (see Context-free grammar). For any given set of sentences generated by a hierarchical grammar capable of infinite recursion there are indefinitely many grammars that could have produced the same data, which would make learning any such language from that data impossible. Indeed, a proof by E. Mark Gold showed that any formal language that has hierarchical structure capable of infinite recursion is unlearnable from positive evidence alone,[3] in the sense that it is impossible to formulate a procedure that will discover with certainty the correct grammar given any arbitrary sequence of positive data in which each utterance occurs at least once.[4] However, this does not preclude arriving at the correct grammar from typical input sequences (rather than particularly malicious ones), or arriving at an almost perfect approximation to the correct grammar. Indeed, it has been proposed that under very mild assumptions (ergodicity and stationarity), the probability of producing a sequence that renders language learning impossible is in fact zero.[5]
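The underdetermination of grammar by finite data can be illustrated with a small sketch (a toy example, not drawn from Gold's proof; the mini-language of matched a's and b's and both "grammars" are assumptions of the illustration):

```python
# Toy illustration: two different grammars, one recursive and one a finite list,
# are both consistent with the same finite set of observed strings.
# Positive evidence alone cannot decide between them.

observed = {"ab", "aabb", "aaabbb"}          # the "positive evidence"

def recursive_grammar(max_depth):
    """Grammar A: S -> 'a' S 'b' | 'ab', enumerated up to max_depth (infinitely many strings in principle)."""
    strings, s = set(), "ab"
    for _ in range(max_depth):
        strings.add(s)
        s = "a" + s + "b"
    return strings

finite_grammar = set(observed)               # Grammar B: just a list of the observed strings

print(observed <= recursive_grammar(10))     # True: the recursive grammar fits the data
print(observed <= finite_grammar)            # True: so does the finite-list grammar
print("aaaabbbb" in recursive_grammar(10),   # True  -> the two grammars disagree only
      "aaaabbbb" in finite_grammar)          # False -> on strings that were never observed
```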
Another example of a language pattern claimed to be unlearnable from positive evidence alone is subject–auxiliary inversion in questions, i.e. the relationship between a declarative such as "You are happy" and the corresponding yes–no question "Are you happy?"
There are two hypotheses the language learner might postulate about how to form such questions: (1) the first auxiliary verb in the sentence (here: 'are') moves to the beginning of the sentence, or (2) the 'main' auxiliary verb, the one belonging to the main clause, moves to the front. In the sentence above, both rules yield the same result since there is only one auxiliary verb. But the difference becomes apparent in a sentence with two auxiliaries, such as "The man who is tall is happy": fronting the first auxiliary gives *"Is the man who tall is happy?", while fronting the main-clause auxiliary gives "Is the man who is tall happy?"
The result of rule (1) is ungrammatical while the result of rule (2) is grammatical, so rule (2) is (approximately) what we actually have in English, not rule (1). The claim, then, is first that children do not encounter sentences as complicated as this often enough to witness a case where the two hypotheses yield different results, and second that, based only on the positive evidence of the simple sentences, children could not possibly decide between (1) and (2). Moreover, even simple declarative–question pairs such as the one above are compatible with a number of incorrect rules (such as "front any auxiliary").[6] Thus, if rule (2) were not innately known to infants, we would expect half of the adult population to use (1) and half to use (2). Since that does not occur, rule (2) must be innately known. (See Pullum 1996 for the complete account and critique.)[7]
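A minimal sketch can make the two hypotheses concrete (the sentences and the hand-annotated auxiliary positions below are assumptions of this illustration, not data from the acquisition literature):

```python
# Illustrative sketch: two candidate question-formation rules applied to toy,
# hand-annotated sentences. The auxiliary positions are marked by hand here;
# identifying the *main-clause* auxiliary is exactly what requires structural
# (hierarchical) knowledge rather than purely linear knowledge.

# (words, positions of all auxiliaries in linear order, position of the main-clause auxiliary)
examples = [
    ("you are happy".split(), [1], 1),
    ("the man who is tall is happy".split(), [3, 5], 5),
]

def rule_1(words, aux_positions, main_aux):
    """Hypothesis (1): front the linearly first auxiliary."""
    i = aux_positions[0]
    return [words[i]] + words[:i] + words[i + 1:]

def rule_2(words, aux_positions, main_aux):
    """Hypothesis (2): front the main-clause (structurally highest) auxiliary."""
    i = main_aux
    return [words[i]] + words[:i] + words[i + 1:]

for words, auxes, main in examples:
    print(" ".join(words))
    print("  rule (1):", " ".join(rule_1(words, auxes, main)) + "?")
    print("  rule (2):", " ".join(rule_2(words, auxes, main)) + "?")
# On "you are happy" both rules give "are you happy?"; on the two-auxiliary
# sentence, rule (1) gives the ungrammatical "is the man who tall is happy?"
# while rule (2) gives the grammatical "is the man who is tall happy?".
```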
The last premise, that children successfully learn language, is considered to be evident in human speech. Though people occasionally make mistakes, they rarely produce ungrammatical sentences, and when they do, they generally do not flag them as such. (Ungrammatical in the descriptive sense, not the prescriptive sense.)
That many linguists accept all three of the premises is testimony to Chomsky's influence in the discipline and to the persuasiveness of the argument. Nonetheless, the argument has many critics, both inside and outside linguistics.
The soundness of the poverty of stimulus argument is widely questioned. Indeed, every one of the three premises of the argument has been questioned at some point in time. Much of the criticism comes from researchers who study language acquisition and computational linguistics. Additionally, connectionist researchers have never accepted most of Chomsky's premises, because these premises are at odds with connectionist beliefs about the structure of cognition.
The first and most common critique is that positive evidence is actually enough to learn the various patterns that linguists claim are unlearnable by positive evidence alone. A common argument is that the brain's mechanisms of statistical pattern recognition could solve many of the supposed difficulties. For example, researchers using neural networks and other statistical methods have programmed computers to learn rules such as (2) cited above, and have claimed to have successfully extracted hierarchical structures, all using positive evidence alone.[8][9] Indeed, Klein & Manning (2002)[10] report constructing a computer program that is able to retrieve 80% of all correct syntactic analyses of text in the Wall Street Journal Corpus using a statistical learning mechanism (unsupervised grammar induction), demonstrating a clear move away from "toy" grammars. In another study, a probabilistic language model with no programmed preconceptions about grammar was presented with a large number of newspaper articles. Even though the researchers had removed all articles containing the sentence "colorless green ideas sleep furiously", the model, after "reading" thousands of articles, judged that sentence to be about 10,000 times more probable than a scrambled, ungrammatical version of it. This suggests that statistical analysis without built-in preconceptions can recover general grammatical regularities with human-like accuracy.[11]
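The general idea, that a learner trained only on positive data can nonetheless prefer grammatical over scrambled word orders, can be illustrated with a minimal sketch (the tiny made-up corpus and the add-one-smoothed bigram model are assumptions of the example, not the models used in the cited studies):

```python
# Minimal sketch: a smoothed bigram language model trained only on positive
# examples, which assigns higher probability to an unseen but grammatical
# word order than to a scrambled version of it.
from collections import Counter

corpus = [
    "the dog is hungry", "the cat is sleepy", "the dog is sleepy",
    "the cat is hungry", "the bird is singing", "the dog is barking",
]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab = len(unigrams)

def prob(sentence):
    """Add-one-smoothed bigram probability of a whitespace-tokenized sentence."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        p *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
    return p

print(prob("the bird is hungry"))   # grammatical, never seen verbatim in the corpus
print(prob("hungry the is bird"))   # scrambled version: orders of magnitude lower
```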
There is also much criticism about whether negative evidence is really so rarely encountered by children. Pullum argues that learners probably do get certain kinds of negative evidence. In addition, if one allows for statistical learning, negative evidence is abundant: if a language pattern is never encountered, but its probability of being encountered would be very high were it acceptable, then the language learner might be justified in treating the absence of the pattern as negative evidence.[7] Chomsky accepts that this kind of negative evidence plays a role in language acquisition, terming it "indirect negative evidence", though he does not think that indirect negative evidence is sufficient for language acquisition to proceed without Universal Grammar.[12] However, contra this claim, Ramscar and Yarlett (2007) designed a learning model that successfully simulates the learning of irregular plurals based on such negative evidence, and confirmed the predictions of the simulation in empirical tests with young children. Ramscar and Yarlett suggest that failures of expectation function as forms of implicit negative feedback that allow children to correct their errors.[13]
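The notion of indirect negative evidence can be made concrete with a back-of-the-envelope sketch (the probabilities and counts are assumptions of the illustration, not figures from the cited work):

```python
# Illustrative calculation: if a construction would be expected to occur with
# probability p per utterance were it grammatical, then after n utterances with
# zero occurrences, the probability of that absence under the "grammatical"
# hypothesis is (1 - p) ** n, which can become small enough for a statistical
# learner to reject the hypothesis.

def prob_of_absence(p_per_utterance, n_utterances):
    """Probability of never seeing the pattern if it really were acceptable."""
    return (1 - p_per_utterance) ** n_utterances

# e.g. a pattern expected once per 200 utterances, absent from 10,000 utterances
print(prob_of_absence(1 / 200, 10_000))  # ~1.7e-22: strong indirect negative evidence
```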
As for the argument based on Gold's proof, it is not clear that human languages are truly capable of infinite recursion. Clearly, no speaker can ever in fact produce a sentence with an infinite recursive structure, and in certain cases (for example, center embedding), people are unable to comprehend sentences with only a few levels of recursion. Chomsky and his supporters have long argued that such cases are best explained by restrictions on working memory, since this provides a principled explanation for limited recursion in language use. Some critics argue that this removes the falsifiability of the premise. Returning to the big picture, it is questionable whether Gold's research actually bears on the question of natural language acquisition at all, since what Gold showed is that there are certain classes of formal languages for which some language in the class cannot be learned given positive evidence alone. It is not at all clear that natural languages fall within such a class, let alone whether natural languages are the ones that are not learnable.[14]
Finally, it has been argued that people may not learn exactly the same grammars as each other. If this is the case, then only a weak version of the third premise is true, as there would be no fully "correct" grammar to be learned. However, in many cases, poverty-of-stimulus arguments do not in fact depend on the assumption that there is only one correct grammar, but rather that there is only one correct class of grammars. For example, the poverty-of-stimulus argument from question formation depends only on the assumption that everyone learns a structure-dependent grammar.