Concept learning

From Wikipedia, the free encyclopedia

Concept learning, also known as concept attainment, is largely based on the work of the cognitive psychologist Jerome Bruner. Bruner, Goodnow, and Austin (1956) defined concept attainment (or concept learning) as "the search for and listing of attributes that can be used to distinguish exemplars from non-exemplars of various categories." More simply put, concepts are the mental categories that help us classify objects, events, or ideas, and each object, event, or idea has a set of common, relevant features. Thus, concept learning is a strategy that requires a learner to compare and contrast groups or categories that contain concept-relevant features with groups or categories that do not.

Concept learning also refers to a learning task in which a human or machine learner is trained to classify objects by being shown a set of example objects along with their class labels. The learner simplifies what has been observed in the examples and then applies this simplified version to future examples. Concept learning ranges in simplicity and complexity because learning takes place over many areas. The more difficult a concept is, the less likely the learner is to be able to simplify it, and therefore the less likely they are to learn it. Colloquially, this task is known as learning from examples. Most theories of concept learning are based on the storage of exemplars and avoid summarization or overt abstraction of any kind.

Types of Concepts

  1. Not a Concept. Learning through reciting something from memory (recall) or discriminating between two things that differ (discrimination) is not the same as concept learning.
  2. Concrete or Perceptual Concepts
  3. Defined (or Relational) and Associated Concepts
  4. Complex Concepts. Constructs such as schemata and scripts are examples of complex concepts. A schema is an organization of smaller concepts (or features) that is revised by situational information to assist in comprehension. A script, on the other hand, is a list of actions that a person follows to complete a desired goal. An example of a script is buying a CD: several actions must occur before the actual purchase, and the script supplies the necessary actions in the proper order for the purchase to succeed.

The Theoretical Issues

The theoretical issues underlying concept learning are those underlying induction in general. These issues are addressed in many diverse literatures, including version spaces, statistical learning theory, PAC learning, information theory, and algorithmic information theory. Some of the broad theoretical ideas are also discussed by Watanabe (1969, 1985), Solomonoff (1964a, 1964b), and Rendell (1986).

Modern Psychological Theories of Concept Learning

It is difficult to make any general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views of concepts and concept learning in philosophy speak of a process of abstraction, data compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these basic points.

Rule-Based Theories of Concept Learning

Rule-based theories of concept learning take classification data and a rule-based theory as input to a rule-based learner, in the hope of producing a more accurate model of the data (Hekanaho 1997). The majority of rule-based models that have been developed are heuristic, meaning that no rational analysis has been provided and the models are not related to statistical approaches to induction. A rational analysis for rule-based models would presume that concepts are represented as rules, and would then ask what degree of belief a rational agent should assign to each rule, given some observed examples (Goodman, Griffiths, Feldman, and Tenenbaum). Rule-based theories of concept learning focus more on perceptual learning and less on definition learning. Rules can be used in learning when the stimuli are confusable, as opposed to simple. When rules are used in learning, decisions are made on the basis of properties alone and rely on simple criteria that do not require much memory (Rouder and Ratcliff, 2006).

An example of rule-based theory:

"A radiologist using rule-based categorization would observe whether specific properties of the X-ray meet certain criteria; for example, is there an extreme difference in brightness in a suspicious region relative to the other regions? A decision is then based on this property alone" (Rouder and Ratcliff, 2006).

Prototype Theory of Concept Learning

The prototype view on concept learning holds that people abstract out the central tendency (or prototype) of the experienced examples, and use this as a basis for their categorization decisions.

Prototype theory:

On this view, people categorize based on one or more central examples of a given category, surrounded by a penumbra of decreasingly typical examples. This implies that people do not categorize based on a list of features that all correspond to a definition; rather, they categorize based on a hierarchical inventory organized by semantic similarity to the central example(s).

To illustrate this, imagine the following mental representations of the category "Sports":

The first illustration may demonstrate a mental representation if we were to categorize by definition:

Definition of Sports: an athletic activity requiring skill or physical prowess and often of a competitive nature.


[Radial illustration: the category label "Sports" sits at the center, with all members arranged around it as an unordered set: Baseball, Basketball, Football, Bowling, Skiing, Track and field, Snowboarding, Lacrosse, Rugby, Soccer, Skateboarding, Golf, Bike racing, Hockey, Surfing, Weightlifting, Tennis.]


The second illustration may demonstrate a mental representation that Prototype Theory would predict:

1. Baseball
2. Football
3. Basketball
4. Soccer
5. Hockey
6. Tennis
7. Golf
...
15. Bike racing
16. Weightlifting
17. Skateboarding
18. Snowboarding
19. Boxing
20. Wrestling
...
32. Fishing
33. Hunting
34. Hiking
35. Sky-diving
36. Bungee jumping
...
62. Cooking
63. Walking
...
82. Gatorade
83. Water
84. Protein
85. Diet

As you can see, prototype theory hypothesizes a more continuous (less discrete) form of categorization, in which the list is not limited to items that match the category's definition.
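
A minimal computational sketch of the prototype view follows; the two-feature encoding and all numbers are invented for illustration. Each category's prototype is the central tendency (mean) of its experienced examples, and a new item is assigned to the category whose prototype it most resembles:

    # A minimal sketch of prototype-based categorization. The prototype of a
    # category is the feature-wise mean of its examples; classification picks
    # the nearest prototype. Features and numbers are invented.
    from statistics import mean

    def prototype(examples):
        """Central tendency of a category: the feature-wise mean."""
        return [mean(feature) for feature in zip(*examples)]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def classify(item, prototypes):
        """Assign the item to the category with the nearest prototype."""
        return min(prototypes, key=lambda cat: distance(item, prototypes[cat]))

    # Hypothetical two-feature encoding: (physical exertion, competitiveness)
    examples = {
        "sport":     [(0.9, 0.9), (0.8, 0.95), (0.85, 0.8)],
        "non-sport": [(0.2, 0.1), (0.3, 0.05), (0.1, 0.2)],
    }
    prototypes = {cat: prototype(exs) for cat, exs in examples.items()}
    print(classify((0.7, 0.6), prototypes))  # nearer the "sport" prototype

Typicality falls out naturally from such a sketch: items close to the prototype (baseball) are clear members, while distant items (Gatorade) are marginal.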

Exemplar Theories of Concept Learning

Exemplar theory proposes the storage of specific instances (exemplars), with new objects evaluated only with respect to how closely they resemble specific known members (and nonmembers) of the category. This theory hypothesizes that learners store examples verbatim, and it views concept learning as highly simplistic: only individual properties are represented. These individual properties are not abstract and do not create rules. An example of what exemplar theory would look at is "water is wet"; the learner simply knows that some (or one, or all) stored examples of water have the property wet.

Exemplar-based theories have become more empirically popular over the years, with some evidence suggesting that human learners use exemplar-based strategies only in early learning, forming prototypes and generalizations later. An important result of exemplar models in the psychological literature has been a de-emphasis of complexity in concept learning.

One of the best-known exemplar theories of concept learning is the Generalized Context Model (GCM), Nosofsky's (1986) generalization of Medin and Schaffer's (1978) Context Model. A connectionist version of the GCM, called ALCOVE, has been developed by Kruschke (1992). The ALCOVE model addresses trial-by-trial concept learning: on each training trial, ALCOVE is presented with a stimulus, makes a prediction of the distribution of category choices, is presented with the correct classification, and then adjusts its associative weights and dimensional attention strengths. All of these models are matching models, in which the exemplar set for a category contains all of the category's exemplars.
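
The GCM's core computation can be sketched in a few lines; the exemplar coordinates and the sensitivity parameter c below are invented for illustration. Similarity to each stored exemplar decays exponentially with distance, and a category's choice probability is its summed similarity relative to all categories:

    # A minimal sketch of the Generalized Context Model's core computation.
    # Every exemplar is stored verbatim; similarity decays exponentially with
    # distance; choice probability is relative summed similarity.
    # Coordinates and the sensitivity parameter c are invented.
    import math

    def similarity(x, exemplar, c=2.0):
        dist = sum(abs(a - b) for a, b in zip(x, exemplar))  # city-block metric
        return math.exp(-c * dist)

    def category_probabilities(x, stored):
        summed = {cat: sum(similarity(x, e) for e in exemplars)
                  for cat, exemplars in stored.items()}
        total = sum(summed.values())
        return {cat: s / total for cat, s in summed.items()}

    stored = {"A": [(0.1, 0.2), (0.2, 0.1)],
              "B": [(0.8, 0.9), (0.9, 0.8)]}
    print(category_probabilities((0.15, 0.15), stored))  # strongly favors "A"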

Problems with Exemplar Theory

Exemplar models critically depend on two measures:

1. Similarity between exemplars

2. A rule to determine group membership

Sometimes it is difficult to attain or distinguish these measures.

Explanation-Based Theories of Concept Learning

The basic idea of explanation-based learning suggests that a new concept is acquired by experiencing examples of it and forming a basic outline. Put simply, by observing or receiving the qualities of a thing, the mind forms a concept that possesses and is identified by those qualities.

The original theory, proposed by Mitchell, Keller, and Kedar-Cabelli in 1986 and called explanation-based generalization, holds that learning occurs through progressive generalizing. This theory was first developed to program machines to learn. When applied to human cognition, it translates as follows: the mind actively separates information that applies to more than one thing and enters it into a broader description of a category of things. This is done by identifying sufficient conditions for a thing's fitting a category, similar to schematizing.

The revised model revolves around the integration of four mental processes: generalization, chunking, operationalization, and analogy.

- Generalization is the process by which the characteristics of a concept that are fundamental to it are recognized and labeled. For example, birds have feathers and wings; anything with feathers and wings will be identified as a 'bird' (see the sketch after this list).

- When information is grouped mentally, whether by similarity or relatedness, the group is called a chunk. Chunks can vary in size from a single item with parts to many items with many parts.

- A concept is operationalized when the mind is able to actively recognize examples of it by their characteristics and label them appropriately.

- Analogy is the recognition of similarities between potential examples.
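
As a minimal sketch of the generalization step above (the feature sets are invented for illustration), one crude way to extract sufficient conditions is to keep only the features shared by every observed positive example:

    # A minimal sketch of generalization: intersect the features of all
    # positive examples to obtain candidate sufficient conditions.
    # The feature sets are invented for illustration.
    robin   = {"feathers", "wings", "red breast"}
    sparrow = {"feathers", "wings", "brown"}
    eagle   = {"feathers", "wings", "large"}

    bird_concept = robin & sparrow & eagle      # {"feathers", "wings"}

    def is_bird(thing):
        """Anything exhibiting all generalized features is labeled a bird."""
        return bird_concept <= thing

    print(is_bird({"feathers", "wings", "blue"}))  # True
    print(is_bird({"fur", "four legs"}))           # False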

This particular theory of concept learning is relatively new, and more research is now being conducted to test it.

Bayesian Theories of Concept Learning

Bayesian theories are those that directly apply normative probability theory to achieve optimal learning. They generally base their categorization of data on the posterior probability of each category; for category i, this posterior is given by Bayes' rule:



    P(C_i \mid D) = \frac{P(D \mid C_i)\, P(C_i)}{P(D)}


where P(D | C_i) is the probability of observing the data D on the assumption that it was generated from category C_i, P(C_i) is the prior probability of category C_i, and P(D) is the marginal probability of observing the data, which usually does not enter into consideration. In general, the category with the maximum posterior P(C_i | D) is the one selected for the given data.
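
As a minimal worked sketch (the priors and likelihoods are made up for illustration), the posterior for each category can be computed directly from the rule above:

    # A minimal sketch of Bayesian categorization. Priors and likelihoods
    # are made up; P(D) is just the normalizing constant.
    priors = {"C1": 0.7, "C2": 0.3}        # P(C_i)
    likelihoods = {"C1": 0.2, "C2": 0.9}   # P(D | C_i) for the observed data D

    evidence = sum(likelihoods[c] * priors[c] for c in priors)             # P(D)
    posteriors = {c: likelihoods[c] * priors[c] / evidence for c in priors}

    print(posteriors)                           # {'C1': 0.341..., 'C2': 0.658...}
    print(max(posteriors, key=posteriors.get))  # "C2" is selected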

Bayes' theorem is important because it provides a powerful tool for understanding, manipulating, and controlling data, and it takes a larger view that is not limited to data analysis alone. The approach is subjective: it requires the assessment of prior probabilities, which also makes it very complex. However, if Bayesians can show that the accumulated evidence and the application of Bayes' law are sufficient, the work will overcome the subjectivity of the inputs involved. Bayesian inference can be used with any honestly collected data and has a major advantage because of its scientific focus.

One model that incorporates the Bayesian theory of concept learning is the ACT-R model, developed by John R. Anderson. The ACT-R model is a programming language that works to define the basic cognitive and perceptual operations that enable the human mind, producing a step-by-step simulation of human behavior. The theory rests on the idea that each task humans perform consists of a series of discrete operations. The model has been applied to learning and memory, higher-level cognition, natural language, perception and attention, human-computer interaction, education, and computer-generated forces. [1]

More information on this topic can be found on the Wikipedia page for ACT-R.

In addition to the work of John R. Anderson, Joshua Tenenbaum has contributed to the field of concept learning, studying the computational basis of human learning and inference through behavioral testing of adults, children, and machines, using tools from Bayesian statistics and probability theory, as well as geometry, graph theory, and linear algebra. Tenenbaum is working to achieve a better understanding of human learning in computational terms and to build computational systems that come closer to the capacities of human learners. [2]

Component Display Theory

M. D. Merrill's Component Display Theory (CDT) is a cognitive matrix that focuses on the interaction between two dimensions: the level of performance expected from the learner and the type of content in the material to be learned. Merrill classifies learners' levels of performance as find, use, and remember, and material content as facts, concepts, procedures, and principles. The theory also calls upon four primary presentation forms and several secondary presentation forms. The primary presentation forms are rules, examples, recall, and practice. Secondary presentation forms include prerequisites, objectives, helps, mnemonics, and feedback. A complete lesson should include a combination of these primary and secondary presentation forms, but the most effective combination varies from learner to learner and from concept to concept. Another significant aspect of the CDT model is that it allows the learner to control the instructional strategies used and adapt them to meet his or her own learning style and preference. A major goal of this model was to reduce three common errors in concept formation: over-generalization, under-generalization, and misconception.

The main principles of this theory are:

1. Instruction is most effective when all three primary learner performance levels (find, use, and remember) are present.

2. Primary presentation forms can either be presented through an explanation learning strategy or through an investigation learning strategy.

3. As long as all of the primary presentation forms are present in the instruction, the order in which they are presented does not matter.

4. Learners should have control over the number of instances or practice items that they receive.

Machine Learning Approaches to Concept Learning

This is a budding field due to recent progress in algorithms, computational power, and the expansion of information on the internet. Unlike the situation in psychology, the problem of concept learning within machine learning is not one of finding the "right" theory of concept learning, but one of finding the most effective method for a given task. As such, there has been a huge proliferation of concept learning theories. In the machine learning literature, concept learning is more typically called supervised learning or supervised classification, in contrast to unsupervised learning or unsupervised classification, in which the learner is not provided with class labels. In machine learning, algorithms in the spirit of exemplar theory are also known as instance-based learners or lazy learners.
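
As a minimal sketch of such a lazy learner (the feature vectors and labels are invented for illustration), a 1-nearest-neighbor classifier stores its training instances verbatim and defers all computation to classification time:

    # A minimal sketch of an instance-based ("lazy") learner: 1-nearest
    # neighbor. Training merely stores the labeled examples; all work is
    # deferred until a query arrives. Data are invented for illustration.
    def euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    class NearestNeighbor:
        def fit(self, examples, labels):
            self.examples, self.labels = examples, labels  # stored verbatim
            return self

        def predict(self, query):
            nearest = min(range(len(self.examples)),
                          key=lambda i: euclidean(query, self.examples[i]))
            return self.labels[nearest]

    clf = NearestNeighbor().fit([(0, 0), (0, 1), (5, 5)], ["a", "a", "b"])
    print(clf.predict((4, 4)))  # "b" -- the closest stored instance wins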

There are three important roles for machine learning.

1. Data mining: using historical data to improve decisions. An example is examining medical records and applying what is learned to medical knowledge when making a diagnosis.

2. Software applications that we cannot program by hand: examples include autonomous driving and speech recognition.

3. Self-customizing programs: an example is a newsreader that learns a reader's particular interests and highlights them when the reader visits the site.

Machine learning has an exciting future. Some future advantages include learning across full mixed-media data, learning across multiple internal databases (including the internet and newsfeeds), learning by active experimentation, learning decisions rather than predictions, and the possibility of programming languages with learning embedded.

Minimum Description Length Theories

The minimum description length principle is a formalization of Occam's razor, in which the best hypothesis for a given set of data is the one that leads to the largest compression of the data. In short, data that shows many regularities and/or patterns may be compressed without losing any important information. Applying this to learning, we can conclude that the more regularity and/or patterns we are able to find within data, the more we have learned about the data.

To illustrate this imagine the following as representations of two sets of data:

Set 1: 100110111011011001010110110010001100101101
Set 2: 011011011011011011011011011011011011011

Set 1 appears to be random, but with Set 2 we are able to detect a pattern, allowing us to describe it as "011 repeated 13 times".
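
A minimal sketch of this comparison uses a general-purpose compressor as a rough stand-in for description length (exact byte counts depend on the compressor, but the regular string should compress further):

    # A minimal sketch of the MDL intuition: the regular string (Set 2)
    # compresses to fewer bytes than the irregular one (Set 1). zlib is a
    # rough stand-in for description length; exact sizes vary by compressor.
    import zlib

    set1 = b"100110111011011001010110110010001100101101"
    set2 = b"011" * 13

    for name, data in [("Set 1", set1), ("Set 2", set2)]:
        size = len(zlib.compress(data, 9))
        print(f"{name}: {len(data)} raw bytes -> {size} compressed bytes")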

For more information see: http://learningtheory.org/articles/mdlintro.pdf

References

  1. ^ ACT-R: http://en.wikipedia.org/wiki/ACT-R
  2. ^ Joshua Tenenbaum, MIT Department of Brain and Cognitive Sciences, faculty page.