Artificial grammar learning
From Wikipedia, the free encyclopedia
Artificial grammar learning (AGL) is a paradigm of study within cognitive psychology. It investigates the processes underlying human language learning by testing subjects' ability to learn a made-up grammar in a laboratory setting. Interest typically centres on the subjects' ability to detect patterns and statistical regularities during a training phase and then to apply that new knowledge in a testing phase. The testing phase can either use the symbols or sounds from the training phase or transfer the learned patterns to a different surface set of symbols or sounds.
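Studies in this paradigm commonly generate training strings from a small finite-state grammar and then ask subjects to judge whether new strings are grammatical. The following sketch illustrates the idea; the transition table is a hypothetical example in the style of such grammars, not the exact grammar of any particular study.

```python
import random

# Illustrative finite-state grammar: each state maps to a list of
# (symbol, next_state) transitions. This table is a made-up example
# for demonstration, not the grammar from any specific experiment.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("X", 1)],
    3: [("S", 4), ("P", 2)],
    4: [],  # accepting state
}
START, ACCEPT = 0, 4

def generate(rng=random):
    """Walk the grammar from START to ACCEPT, emitting one symbol per edge.
    Every string produced this way is grammatical by construction."""
    state, out = START, []
    while state != ACCEPT:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)

def is_grammatical(string):
    """Test whether a string can be produced by the grammar by tracking
    the set of states reachable after consuming each symbol."""
    states = {START}
    for symbol in string:
        states = {nxt for s in states
                  for (sym, nxt) in GRAMMAR[s] if sym == symbol}
        if not states:
            return False
    return ACCEPT in states
```

In a typical experiment, strings from `generate` would serve as training items, and the testing phase would mix grammatical strings with foils that `is_grammatical` rejects.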
A related problem is the induction of a grammar for an unknown language given a parallel text in a known language, sometimes called the "Rosetta Stone" problem. Kuhn (2004) presents techniques for this problem in an ACL paper [1].
Researchers who have made contributions to the empirical understanding of artificial grammar learning include Reber, Perruchet, Friederici, Cleeremans, Gomez, Gerken, van der Linden, Christiansen, Dominey, and Petersson.