Reinforcement

This article is about the psychological concept. For reinforcement in construction materials, see Rebar. For reinforcement learning in computer science, see Reinforcement learning. For beam stiffening, see Stiffening.
Diagram of operant conditioning

In behavioral psychology, reinforcement is a consequence that will strengthen an organism's future behavior whenever that behavior is preceded by a specific antecedent stimulus. This strengthening effect may be measured as a higher frequency of behavior (e.g., pulling a lever more frequently), longer duration (e.g., pulling a lever for longer periods of time), greater magnitude (e.g., pulling a lever with greater force), or shorter latency (e.g., pulling a lever more quickly following the antecedent stimulus).

Although in many cases a reinforcing stimulus is a rewarding stimulus which is "valued" or "liked" by the individual (e.g., money received from a slot machine, the taste of the treat, the euphoria produced by an addictive drug), this is not a requirement. Indeed, reinforcement does not even require an individual to consciously perceive an effect elicited by the stimulus.[1] Furthermore, stimuli that are "rewarding" or "liked" are not always reinforcing: if an individual eats at a fast food restaurant (response) and likes the taste of the food (stimulus), but believes it is bad for their health, they may not eat it again and thus it was not reinforcing in that condition. Thus, reinforcement occurs only if there is an observable strengthening in behavior.

In most cases, reinforcement refers to an enhancement of behavior, but the term is also sometimes used to refer to an enhancement of memory. One example of this effect, called post-training reinforcement, occurs when a stimulus (e.g., food) given shortly after a training session enhances learning.[2] The stimulus can also be an emotional one. A good example is that many people can explain in detail where they were when they found out the World Trade Center was attacked.[3][4]

Reinforcement is an important part of operant or instrumental conditioning.

Addiction glossary[5][6][7]
addiction – a state characterized by compulsive engagement in rewarding stimuli, despite adverse consequences
reinforcing stimuli – stimuli that increase the probability of repeating behaviors paired with them
rewarding stimuli – stimuli that the brain interprets as intrinsically positive or as something to be approached
addictive drug – a drug that is both rewarding and reinforcing
addictive behavior – a behavior that is both rewarding and reinforcing
sensitization – an amplified response to a stimulus resulting from repeated exposure to it
drug tolerance – the diminishing effect of a drug resulting from repeated administration at a given dose
drug sensitization or reverse tolerance – the escalating effect of a drug resulting from repeated administration at a given dose
drug dependence – an adaptive state associated with a withdrawal syndrome upon cessation of repeated drug intake
physical dependence – dependence that involves persistent physical–somatic withdrawal symptoms (e.g., fatigue, delirium tremens, and/or persistent insomnia depending on substance)
psychological dependence – dependence that involves emotional–motivational withdrawal symptoms (e.g., dysphoria and anhedonia)

Introduction

B.F. Skinner was a well-known and influential researcher who articulated many of the theoretical constructs of reinforcement and behaviorism. Skinner defined reinforcers according to the change in response strength rather than to more subjective criteria, such as what is pleasurable or valuable to someone. Accordingly, activities, foods or items considered pleasant or enjoyable may not necessarily be reinforcing (because they produce no increase in the response preceding them). Stimuli, settings, and activities only fit the definition of reinforcers if the behavior that immediately precedes the potential reinforcer increases in similar situations in the future; for example, a child who receives a cookie when he or she asks for one. If the frequency of "cookie-requesting behavior" increases, the cookie can be seen as reinforcing "cookie-requesting behavior". If, however, "cookie-requesting behavior" does not increase, the cookie cannot be considered reinforcing.

The sole criterion that determines whether an item, activity, or food is reinforcing is the change in the probability of a behavior after administration of that potential reinforcer. Other theories may focus on additional factors, such as whether the person expected the behavior to work at some point, but in behavioral theory, reinforcement is defined by an increased probability of a response.

The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in special education, applied behavior analysis, and the experimental analysis of behavior.

Brief history

Much of the work regarding reinforcement began with behavioral psychologists such as Edward Thorndike, J. B. Watson and B.F. Skinner and their use of animal experiments. B.F. Skinner is famous for his work on reinforcement and believed that positive reinforcement is superior to punishment in shaping behavior.[8] At first glance, punishment can seem like just the opposite of reinforcement, yet Skinner argued that they differ immensely; he claimed that positive reinforcement results in lasting behavioral modification (long-term) whereas punishment changes behavior only temporarily (short-term) and has many detrimental side-effects. Skinner defined reinforcement as creating situations that a person likes or removing a situation he doesn't like, and punishment as removing a situation a person likes or setting up one he doesn't like.[8] Thus, the distinction was based mainly on the pleasant or aversive (unpleasant) nature of the stimulus.

Two other researchers, Azrin and Holz, expanded upon operant conditioning by focusing on the definition of punishment in their chapter in Honig’s volume on operant behavior, where they defined it as a “consequence of behavior that reduces the future probability of that behavior.”[9] Skinner’s assumptions regarding reinforcement and punishment were thus challenged throughout the 1960s, and some studies showed that positive reinforcement and punishment are equally effective in modifying behavior; the debate over whether reinforcement is more effective than punishment, or merely equally effective, continues today.[10] Edward Thorndike also did work on reinforcement in learning theory and believed that learning could occur unconsciously; that is, reinforcements or punishments could have an effect upon learning even if the person or organism is unaware of it.[11] Research on the effects of positive and negative reinforcement, alongside punishment, continues today, as those concepts apply directly to many forms of learning and behavior.

Operant conditioning

Main article: Operant conditioning

The basic definition is that a positive reinforcer adds a stimulus to increase or maintain frequency of a behavior while a negative reinforcer removes a stimulus to increase or maintain the frequency of the behavior. As mentioned above, positive and negative reinforcement are components of operant conditioning, along with positive punishment and negative punishment, all explained below:

Reinforcement

Positive reinforcement occurs when an event or stimulus is presented as a consequence of a behavior and the behavior increases.[12]:253

Negative reinforcement occurs when the rate of a behavior increases because an aversive event or stimulus is removed or prevented from happening.[12]:253

Punishment

Positive punishment occurs when a response produces a stimulus and that response decreases in probability in the future in similar circumstances.

Negative punishment occurs when a response produces the removal of a stimulus and that response decreases in probability in the future in similar circumstances.

Simply put, reinforcers serve to increase behaviors whereas punishers serve to decrease behaviors; thus, positive reinforcers are stimuli that the subject will work to attain, and negative reinforcers are stimuli that the subject will work to be rid of or to end.[13] The table below illustrates the adding and subtracting of stimuli (pleasant or aversive) in relation to reinforcement vs. punishment.

                        Pleasant Stimulus        Aversive (unpleasant) Stimulus
Adding/Presenting       Positive Reinforcement   Positive Punishment
Removing/Taking Away    Negative Punishment      Negative Reinforcement
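The table can be read as a simple two-way lookup: the type of stimulus change (presented or removed) and the direction of the change in behavior jointly determine which term applies. The following minimal sketch (written for this illustration; the names and structure are assumptions, not part of the behavioral literature) encodes that lookup:

```python
# A minimal sketch of the 2x2 operant-contingency taxonomy shown in the table
# above. "presented"/"removed" describes the stimulus change that follows the
# behavior; "increases"/"decreases" describes the future probability of the
# behavior. Names are illustrative only.

CONTINGENCIES = {
    ("presented", "increases"): "positive reinforcement",
    ("removed", "increases"): "negative reinforcement",
    ("presented", "decreases"): "positive punishment",
    ("removed", "decreases"): "negative punishment",
}

def classify(stimulus_change: str, behavior_change: str) -> str:
    """Return the operant-conditioning term for a given consequence."""
    return CONTINGENCIES[(stimulus_change, behavior_change)]

# Example: an aversive noise stops (stimulus removed) and the response that
# ended it becomes more frequent (behavior increases).
print(classify("removed", "increases"))   # -> "negative reinforcement"
```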

Further ideas and concepts:

Primary reinforcers

A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing with another stimulus to function as a reinforcer and most likely has obtained this function through evolution and its role in the species' survival.[15] Examples of primary reinforcers include sleep, food, air, water, and sex. Some primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another abhors it. Or one person may eat lots of food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.

Secondary reinforcers

A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus that functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money). An example of a secondary reinforcer would be the sound from a clicker, as used in clicker training. The sound of the clicker has been associated with praise or treats, and subsequently, the sound of the clicker may function as a reinforcer. As with primary reinforcers, an organism can experience satiation and deprivation with secondary reinforcers.

Other reinforcement terms

Natural and artificial

In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed classifying reinforcement into events that increase the frequency of an operant as a natural consequence of the behavior itself, and events that are presumed to affect frequency through human mediation, such as in a token economy where subjects are "rewarded" for certain behavior with an arbitrary token of negotiable value. In 1970, Baer and Wolf coined the term "behavior traps" for the use of natural reinforcers.[19] A behavior trap requires only a simple response to enter, yet once entered, it produces general behavior change that is difficult to resist. It is the use of a behavioral trap that increases a person's repertoire, by exposing them to the naturally occurring reinforcement of that behavior. Behavior traps have four characteristics:

As can be seen from the above, artificial reinforcement is in fact created to build or develop skills; for a skill to generalize, it is important that a behavior trap be introduced to "capture" the skill and make use of naturally occurring reinforcement to maintain or increase it. This behavior trap may simply be a social situation that will generally result from a specific behavior once it has met a certain criterion (e.g., if you use edible reinforcers to train a person to say hello and smile at people when they meet them, then after that skill has been built up, the natural reinforcers of other people smiling and of having more friendly interactions will maintain the skill, and the edibles can be faded).

Intermittent reinforcement

In one study, pigeons were more responsive to intermittent reinforcement than to continuous reinforcement.[21] In other words, pigeons were more prone to act when they could only sometimes get what they wanted. The effect was such that behavioral responses were maximized when the reward rate was 50% (in other words, when uncertainty was maximized), and gradually declined as the rate moved away from 50% in either direction.[22] R. B. Sparkman, a journalist specializing in what motivates human behavior, claims this is also true of humans, and that it may in part explain human tendencies such as gambling addiction.[23]

Schedules

When an animal's surroundings are controlled, its behavior patterns after reinforcement become predictable, even for very complex behavior patterns. A schedule of reinforcement is a rule or program that determines how and when the occurrence of a response will be followed by the delivery of the reinforcer. Schedules of reinforcement influence how an instrumental response is learned and how it is maintained by reinforcement. Continuous reinforcement, in which every response is reinforced, and extinction, in which no response is reinforced, lie at the two extremes; between these extremes is intermittent or partial reinforcement, where only some responses are reinforced.

Specific variations of intermittent reinforcement reliably induce specific patterns of response, irrespective of the species being investigated (including humans in some conditions). The orderliness and predictability of behavior under schedules of reinforcement was evidence for B.F. Skinner's claim that by using operant conditioning he could obtain "control over behavior", in a way that rendered the theoretical disputes of contemporary comparative psychology obsolete. The reliability of schedule control supported the idea that a radical behaviorist experimental analysis of behavior could be the foundation for a psychology that did not refer to mental or cognitive processes. The reliability of schedules also led to the development of applied behavior analysis as a means of controlling or altering behavior.

Many of the simpler possibilities, and some of the more complex ones, were investigated at great length by Skinner using pigeons, but new schedules continue to be defined and investigated.

Simple schedules

A chart demonstrating the different response rates of the four simple schedules of reinforcement; each hatch mark designates a reinforcer being given

Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response. The four basic simple schedules are fixed ratio (reinforcement follows a fixed number of responses), variable ratio (a number of responses that varies around an average), fixed interval (the first response after a fixed time has elapsed), and variable interval (the first response after a time that varies around an average).

Other simple schedules include:

Effects of different types of simple schedules
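These schedules produce characteristically different response patterns; for example, ratio schedules generally sustain higher response rates than interval schedules, and fixed-interval schedules produce a pause after each reinforcer followed by accelerating responding. As a rough, purely illustrative sketch of the delivery rules themselves (parameter values and function names are assumptions for this example), each rule below is evaluated once per response:

```python
import random

# Illustrative sketch only: one function per basic simple schedule, each
# deciding whether the current response earns a reinforcer. Counters and
# parameter values are assumptions made for this example.

def fixed_ratio(responses_since_reinforcer, ratio=10):
    """FR 10: reinforce every 10th response."""
    return responses_since_reinforcer >= ratio

def variable_ratio(responses_since_reinforcer, required):
    """VR: reinforce after a response count drawn around a mean; `required`
    is re-sampled (e.g. random.randint(1, 19) for a mean of 10) after each
    reinforcer."""
    return responses_since_reinforcer >= required

def fixed_interval(seconds_since_reinforcer, interval=60.0):
    """FI 60 s: reinforce the first response made after 60 seconds."""
    return seconds_since_reinforcer >= interval

def variable_interval(seconds_since_reinforcer, required_wait):
    """VI: reinforce the first response after a wait drawn around a mean;
    `required_wait` is re-sampled (e.g. random.expovariate(1/60)) after each
    reinforcer."""
    return seconds_since_reinforcer >= required_wait

# Example of re-sampling a VR requirement with a mean of 10 responses:
vr_requirement = random.randint(1, 19)
```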

Compound schedules

Compound schedules combine two or more different simple schedules in some way, using the same reinforcer for the same behavior. There are many possibilities; two of them, superimposed and concurrent schedules, are described below.

Superimposed schedules

The psychology term superimposed schedules of reinforcement refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. Reinforcers can be positive, negative, or both. An example is a person who comes home after a long day at work. The behavior of opening the front door is rewarded by a big kiss on the lips by the person's spouse and a rip in the pants from the family dog jumping enthusiastically. Another example of superimposed schedules of reinforcement is a pigeon in an experimental cage pecking at a button. The pecks deliver a hopper of grain every 20th peck, and access to water after every 200 pecks.
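As a concrete illustration of the pigeon example above, the sketch below (hypothetical code, not drawn from the experimental literature) superimposes two fixed-ratio schedules on a single stream of key pecks, so a given peck can produce grain, water, both, or neither.

```python
# Superimposed schedules as an "and" arrangement: the same response (a key
# peck) is simultaneously subject to an FR 20 schedule for grain and an
# FR 200 schedule for water. The ratios are taken from the example in the text.

def superimposed_fr(total_pecks, grain_ratio=20, water_ratio=200):
    """Return the list of consequences delivered for the `total_pecks`-th peck."""
    consequences = []
    if total_pecks % grain_ratio == 0:
        consequences.append("grain")
    if total_pecks % water_ratio == 0:
        consequences.append("water")
    return consequences

print(superimposed_fr(20))    # ['grain']
print(superimposed_fr(30))    # []
print(superimposed_fr(200))   # ['grain', 'water'] -- both schedules satisfied at once
```

Every 200th peck satisfies both schedules at once, which is what distinguishes this "and" arrangement from the "or" arrangement of concurrent schedules discussed below.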

Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B.F. Skinner and his colleagues (Ferster and Skinner, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food appears. This is a "ratio schedule". Also, a reinforcer could be delivered after an interval of time had passed following a target behavior. An example is a rat that is given a food pellet immediately following the first response that occurs after two minutes have elapsed since the last lever press. This is called an "interval schedule".

In addition, ratio schedules can deliver reinforcement following a fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time after a single response by the organism. Individual behaviors tend to generate response rates that differ depending on how the reinforcement schedule is created. Much subsequent research in many labs examined the effects of scheduling reinforcers on behavior.

If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement". Brechner (1974, 1977) introduced the concept of superimposed schedules of reinforcement in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other. In other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.

Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That is a reinforcement structure of three superimposed concurrent schedules of reinforcement.

Superimposed schedules of reinforcement can create the three classic conflict situations (approach–approach conflict, approach–avoidance conflict, and avoidance–avoidance conflict) described by Kurt Lewin (1935) and can operationalize other Lewinian situations analyzed by his force field analysis. Other examples of the use of superimposed schedules of reinforcement as an analytical tool are its application to the contingencies of rent control (Brechner, 2003) and problem of toxic waste dumping in the Los Angeles County storm drain system (Brechner, 2010).

Concurrent schedules

In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, in a two-alternative forced choice task, a pigeon in a Skinner box is faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may be linked so that behavior on one key affects the likelihood of reinforcement on the other.

It is not necessary for responses on the two schedules to be physically distinct. In an alternate way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject can respond on a second key to change between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g., the color of the main key) signals which schedule is in effect.

Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.

When both of the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates on the two schedules and the relative reinforcement rates they deliver; this was first observed by R.J. Herrnstein in 1961. The matching law is a rule for instrumental behavior which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response (relative rate of behavior = relative rate of reinforcement). Animals and humans have a tendency to prefer choice in schedules.[27]
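In symbols, writing B1 and B2 for the response rates on the two alternatives and r1 and r2 for the reinforcement rates obtained from them (notation chosen here for illustration), the matching relation described above can be stated as:

```latex
\frac{B_1}{B_1 + B_2} = \frac{r_1}{r_1 + r_2}
```

That is, the proportion of responses allocated to an alternative matches the proportion of reinforcement obtained from it.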

Shaping

Main article: Shaping (psychology)

Shaping is reinforcement of successive approximations to a desired instrumental response. In training a rat to press a lever, for example, simply turning toward the lever is reinforced at first. Then, only turning and stepping toward it is reinforced. The outcomes of one set of behaviors start the shaping process for the next set of behaviors, and the outcomes of that set prepare the shaping process for the next set, and so on. As training progresses, the response reinforced becomes progressively more like the desired behavior; each subsequent behavior becomes a closer approximation of the final behavior.[28]
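Procedurally, shaping can be thought of as a loop in which the criterion for reinforcement is gradually tightened. The sketch below is a simplified, hypothetical rendering of that loop; the callables and the success threshold are assumptions made for illustration, not a standard protocol.

```python
# Illustrative shaping loop: reinforce responses that meet the current
# criterion, then move to a stricter criterion once the learner has met the
# current one a few times. Criteria are ordered from lenient to strict,
# e.g. "turns toward lever" -> "steps toward lever" -> "presses lever".

def shape(get_response, deliver_reinforcer, criteria, successes_needed=5):
    for criterion in criteria:                 # successive approximations
        successes = 0
        while successes < successes_needed:
            response = get_response()          # observe the next response
            if criterion(response):
                deliver_reinforcer()           # reinforce the approximation
                successes += 1
    # By the final criterion, only the target behavior is being reinforced.
```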

Chaining

Main article: Chaining

Chaining involves linking discrete behaviors together in a series, such that the result of each behavior is both the reinforcement (or consequence) for the previous behavior and the stimulus (or antecedent) for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backward chaining (starting from the last behavior), and total task chaining (in which the entire behavior is taught from beginning to end, rather than as a series of steps). An example is opening a locked door: first the key is inserted, then turned, then the door is opened.

Forward chaining would teach the subject first to insert the key. Once that task is mastered, they are told to insert the key and are taught to turn it. Once that task is mastered, they are told to perform the first two steps and are then taught to open the door. Backward chaining would involve the teacher first inserting and turning the key, while the subject is taught to open the door. Once that is learned, the teacher inserts the key and the subject is taught to turn it, then open the door as the next step. Finally, the subject is taught to insert the key, then turn it and open the door. Once the first step is mastered, the entire task has been taught. Total task chaining would involve teaching the entire task as a single series, prompting through all steps. Prompts are faded (reduced) at each step as it is mastered.
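To make the two teaching orders concrete, the sketch below (illustrative only) lists which steps of the three-step door-opening chain the learner performs on each pass under forward versus backward chaining.

```python
# Forward chaining adds steps from the front of the chain; backward chaining
# adds them from the end. The step names follow the door-opening example above.

STEPS = ["insert key", "turn key", "open door"]

def forward_chaining(steps):
    """On pass i the learner performs the first i+1 steps."""
    return [steps[: i + 1] for i in range(len(steps))]

def backward_chaining(steps):
    """On pass i the learner performs the last i+1 steps (the teacher does the rest)."""
    return [steps[len(steps) - i - 1:] for i in range(len(steps))]

print(forward_chaining(STEPS))
# [['insert key'], ['insert key', 'turn key'], ['insert key', 'turn key', 'open door']]
print(backward_chaining(STEPS))
# [['open door'], ['turn key', 'open door'], ['insert key', 'turn key', 'open door']]
```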

Persuasive communication and reinforcement theory

Persuasive communication
Persuasion influences how people think, act, and feel. Persuasive skill reflects how well a person understands the concerns, position, and needs of other people. Persuasion can be classified into informal persuasion and formal persuasion.
Informal persuasion
This refers to the way in which a person interacts with colleagues and customers. Informal persuasion may be used in teams, memos, and e-mails.
Formal persuasion
This type of persuasion is used in writing customer letters and proposals, and in formal presentations to customers or colleagues.
Process of persuasion
Persuasion concerns how you influence people through your skills, experience, knowledge, leadership qualities, and team capabilities. It is an interactive process used while getting work done through others. Persuasion skills can be used, for example, in interviews (to demonstrate one's talents, skills, and expertise), with clients (to guide them toward their goals or targets), and in memos (to present ideas and views to coworkers for improving operations). Identifying resistance and maintaining a positive attitude are vital parts of persuasion.

Persuasion is a form of human interaction. It takes place when one individual expects some particular response from one or more other individuals and deliberately sets out to secure the response through the use of communication. The communicator must realize that different groups have different values.[29]:24–25

In instrumental learning situations, which involve operant behavior, the persuasive communicator will present his message and then wait for the receiver to make a correct response. As soon as the receiver makes the response, the communicator will attempt to fix the response by some appropriate reward or reinforcement.[30]

In conditional learning situations, where there is respondent behavior, the communicator presents his message so as to elicit the response he wants from the receiver, and the stimulus that originally served to elicit the response then becomes the reinforcing or rewarding element in conditioning.[29]

Mathematical models

A great deal of work has gone into building mathematical models of reinforcement. One such model is known as MPR, short for mathematical principles of reinforcement. Killeen and Sitomer are among the key researchers in this field.

Criticisms

The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement, and defines reinforcement as something that increases response strength (i.e., response strength is increased by things that increase response strength). However, the correct usage[31] of reinforcement is that something is a reinforcer because of its effect on behavior, and not the other way around. It becomes circular if one says that a particular stimulus strengthens behavior because it is a reinforcer, and does not explain why a stimulus is producing that effect on the behavior. Other definitions have been proposed, such as F.D. Sheffield's "consummatory behavior contingent on a response", but these are not broadly used in psychology.[32]

History of the terms

In the 1920s Russian physiologist Ivan Pavlov may have been the first to use the word reinforcement with respect to behavior, but (according to Dinsmoor) he used its approximate Russian cognate sparingly, and even then it referred to strengthening an already-learned but weakening response. He did not use it, as it is today, for selecting and strengthening new behaviors. Pavlov's introduction of the word extinction (in Russian) approximates today's psychological use.

In popular use, positive reinforcement is often used as a synonym for reward, with people (not behavior) thus being "reinforced", but this is contrary to the term's consistent technical usage, as it is a dimension of behavior, and not the person, that is strengthened. Negative reinforcement is often used by laypeople, and even by social scientists outside psychology, as a synonym for punishment. This is contrary to modern technical use, but it was B.F. Skinner who first used it this way in his 1938 book. By 1953, however, he followed others in employing the word punishment in that sense, and he re-cast negative reinforcement as the removal of aversive stimuli.

There are some within the field of behavior analysis[33] who have suggested that the terms "positive" and "negative" constitute an unnecessary distinction in discussing reinforcement as it is often unclear whether stimuli are being removed or presented. For example, Iwata poses the question: "...is a change in temperature more accurately characterized by the presentation of cold (heat) or the removal of heat (cold)?"[34]:363 Thus, reinforcement could be conceptualized as a pre-change condition replaced by a post-change condition that reinforces the behavior that followed the change in stimulus conditions.

Applications

Partial or intermittent negative reinforcement can create an effective climate of fear and doubt.[35]

See also

References

  1. Winkielman P., Berridge KC, and Wilbarger JL. (2005). Unconscious affective reactions to masked happy versus angry faces influence consumption behavior and judgments of value. Pers Soc Psychol Bull: 31, 121–35.
  2. Mondadori C, Waser PG, and Huston JP. (2005). Time-dependent effects of post-trial reinforcement, punishment or ECS on passive avoidance learning. Physiol Behav: 18, 1103–9. PMID 928533
  3. White NM, Gottfried JA (2011). "Reward: What Is It? How Can It Be Inferred from Behavior?". PMID 22593908.
  4. White NM. (2011). Reward: What is it? How can it be inferred from behavior. In: Neurobiology of Sensation and Reward. CRC Press PMID 22593908
  5. Malenka RC, Nestler EJ, Hyman SE (2009). "Chapter 15: Reinforcement and Addictive Disorders". In Sydor A, Brown RY. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 364–375. ISBN 9780071481274.
  6. Nestler EJ (December 2013). "Cellular basis of memory for addiction". Dialogues Clin. Neurosci. 15 (4): 431–443. PMC 3898681. PMID 24459410.
  7. "Glossary of Terms". Mount Sinai School of Medicine. Department of Neuroscience. Retrieved 9 February 2015.
  8. 8.0 8.1 Skinner, B.F. (1948). Walden Two. Toronto: The Macmillan Company.
  9. Honig, Werner (1966). Operant Behavior: Areas of Research and Application. New York: Meredith Publishing Company. p. 381.
  10. Domjan, W. (2003). Aversive control: Avoidance and punishment. In: The Principles of Learning and Behavior. CA: Thompson Learning. p. 302.
  11. Shanks, David (2010). "Learning: From Association to Cognition". Annual Review of Psychology (61): 273–301. doi:10.1146/annurev.psych.093008.100519.
  12. 12.0 12.1 12.2 Flora, Stephen (2004). The Power of Reinforcement. Albany: State University of New York Press.
  13. D'Amato, M. R. (1969). Melvin H. Marx, ed. Learning Processes: Instrumental Conditioning. Toronto: The Macmillan Company.
  14. Harter, J. K. (2002). C. L. Keyes, ed. Well-Being in the Workplace and its Relationship to Business Outcomes: A Review of the Gallup Studies. Washington D.C.: American Psychological Association.
  15. Skinner, B.F. (1974). About Behaviorism
  16. 16.0 16.1 16.2 16.3 16.4 16.5 16.6 Miltenberger, R. G. "Behavioral Modification: Principles and Procedures". Thomson/Wadsworth, 2008.
  17. Tucker, M.; Sigafoos, J. & Bushell, H. (1998). Use of noncontingent reinforcement in the treatment of challenging behavior. Behavior Modification, 22, 529–47.
  18. Poling, A. & Normand, M. (1999). Noncontingent reinforcement: an inappropriate description of time-based schedules that reduce behavior. Journal of Applied Behavior Analysis, 32, 237–8.
  19. Baer and Wolf, 1970, The entry into natural communities of reinforcement. In R. Ulrich, T. Stachnik, & J. Mabry (eds.), Control of human behavior (Vol. 2, pp. 319–24). Gleenview, IL: Scott, Foresman.
  20. Kohler & Greenwood, 1986, Toward a technology of generalization: The identification of natural contingencies of reinforcement. The Behavior Analyst, 9, 19–26.
  21. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1333219/
  22. Zeiler, MD (March 1972). "Fixed-interval behavior: effects of percentage reinforcement.". Journal of the Experimental Analysis of Behavior 17 (2): 177–89. doi:10.1901/jeab.1972.17-177. PMID 16811580.
  23. Sparkman, R. B. (1979). The Art of Manipulation. Doubleday Publishing. p. 34. ISBN 0385270070.
  24. Derenne, A. & Flannery, K.A. (2007). Within Session FR Pausing. The Behavior Analyst Today, 8(2), 175–86 BAO
  25. McSweeney, F.K.; Murphy, E.S. & Kowal, B.P. (2001) Dynamic Changes in Reinforcer Value: Some Misconceptions and Why You Should Care. The Behavior Analyst Today, 2(4), 341–7 BAO
  26. Iversen, I.H. & Lattal, K.A. Experimental Analysis of Behavior. 1991, Elsevier, Amsterdam.
  27. Toby L. Martin, C.T. Yu, Garry L. Martin & Daniela Fazzio (2006): On Choice, Preference, and Preference For Choice. The Behavior Analyst Today, 7(2), 234–48 BAO
  28. Schacter, Daniel L., Daniel T. Gilbert, and Daniel M. Wegner. "Chapter 7: Learning." Psychology. Second Edition. N.p.: Worth, Incorporated, 2011. 284–85.
  29. 29.0 29.1 Bettinghaus, Erwin P., Persuasive Communication, Holt, Rinehart and Winston, Inc., 1968
  30. Skinner, B.F., The Behavior of Organisms. An Experimental Analysis, New York: Appleton-Century-Crofts. 1938
  31. Epstein, L.H. 1982. Skinner for the Classroom. Champaign, IL: Research Press
  32. Franco J. Vaccarino, Bernard B. Schiff & Stephen E. Glickman (1989). Biological view of reinforcement. in Stephen B. Klein and Robert Mowrer. Contemporary learning theories: Instrumental conditioning theory and the impact of biological constraints on learning. Hillsdale, NJ, Lawrence Erlbaum Associates
  33. Michael, J. (1975, 2005). Positive and negative reinforcement, a distinction that is no longer necessary; or a better way to talk about bad things. Journal of Organizational Behavior Management, 24, 207–22.
  34. Iwata, B.A. (1987). Negative reinforcement in applied behavior analysis: an emerging technology. Journal of Applied Behavior Analysis, 20, 361–78.
  35. Braiker, Harriet B. (2004). Who's Pulling Your Strings ? How to Break The Cycle of Manipulation. ISBN 0-07-144672-9.

Further reading

External links