Law of effect

The law of effect is a principle of psychology described by Edward Thorndike in 1898.[1] It holds that responses that produce a satisfying or pleasant state of affairs in a particular situation are more likely to occur again in that situation. Conversely, responses that produce a discomforting, annoying or unpleasant effect are less likely to occur again in that situation.

The law is important in understanding learning, especially as it relates to operant conditioning. However, its status is controversial. Particularly in relation to animal learning, it is not obvious how to define a "satisfying state of affairs" or an "annoying state of affairs" independently of their ability to induce instrumental learning, and the law of effect has therefore been widely criticised as logically circular. In the study of operant conditioning, most psychologists have consequently adopted B. F. Skinner's proposal to define a reinforcer as any stimulus which, when presented after a response, leads to an increase in the future rate of that response. On that basis, the law of effect follows tautologically from the definition of a reinforcer.

In an influential paper published in 1970,[2] R. J. Herrnstein proposed a quantitative relationship between response rate (B) and reinforcement rate (Rf):

B = k Rf / (Rf0 + Rf)

where k and Rf0 are constants. Herrnstein proposed that this formula, which he derived from the matching law he had observed in studies of concurrent schedules of reinforcement, should be regarded as a quantification of the law of effect. While the qualitative law of effect may be a tautology, this quantitative version is not.
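
The formula describes a hyperbola: response rate rises steeply when reinforcement is scarce and levels off toward the asymptote k as reinforcement becomes plentiful, with Rf0 marking the reinforcement rate at which responding reaches half of k. A minimal numerical sketch of this behaviour is given below; the parameter values k = 100 and Rf0 = 20 are purely illustrative and are not taken from Herrnstein's data.

    def herrnstein_response_rate(rf, k=100.0, rf0=20.0):
        """Predicted response rate B = k * Rf / (Rf0 + Rf).

        k   -- asymptotic response rate (illustrative value)
        rf0 -- reinforcement rate at which B reaches k/2 (illustrative value)
        """
        return k * rf / (rf0 + rf)

    # Response rate approaches the asymptote k as reinforcement rate grows.
    for rf in (5, 20, 80, 320):
        print(f"Rf = {rf:4d}  ->  B = {herrnstein_response_rate(rf):.1f}")

Running the sketch prints response rates of roughly 20, 50, 80 and 94, illustrating the diminishing effect of additional reinforcement predicted by the quantitative law of effect.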

References

  1. Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement, 2(4), 1-109.
  2. Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243-266.