Notebook, 1993-

Notes for a Perspective on Art Education -- Notes on Child Development

Notes from: Coon, Dennis. Introduction to Psychology, Exploration and Application. St. Paul: West Publishing Company, 1989.

The Brain, Biology, and Behavior - Sensation & reality - Perceiving the World - States of Consciousness

Conditioning & Learning - Cognition & Creativity - Artificial Intelligence - Enhancing Creativity

Emotion - Health, Stress & Coping - ANS Effects

Theories of Personality - Dimensions of Personality - From Birth to Death - Child Development

Conditioning and Learning

Learning. Any relatively permanent change in behavior that can be attributed to experience (often brought about through reinforcement). Note that this excludes temporary changes caused by motivation, fatigue, maturation, disease, injury, or drugs. Each of these can alter behavior, but none qualifies as learning.

Species-specific behavior. Patterned behavior that is exhibited by all normal members of a particular species (an FAP, for example).

Reflex. An innate, automatic response to a stimulus; for example, an eye blink, knee jerk, or dilation of the pupil.

FAP. Fixed action pattern. An instinctual chain of movements found in almost all members of a species. A genetically programmed sequence of actions that occur mechanically and almost universally in species.

Innate behavior. A genetically programmed or inborn behavior pattern.

Reinforcement. Refers to any event that increases chances that a response will occur again.

Antecedent. Events that come before a response.

Consequence. Events that follow a response.

Classical conditioning (also called Pavlovian conditioning and respondent conditioning). A learned automatic reflex response. (A dog salivates at the sight of a person who does nothing but bring it food.) Antecedent events become associated with one another: A stimulus that does not produce a response is linked with one that does. Learning is evident when the new stimulus also begins to elicit (bring forth) responses.

Classical conditioning is passive and involuntary. It simply "happens to" the learner when a CS and US are associated.

Classical conditioning depends on reflex responses --a dependable, inborn stimulus-and-response connection. Pain causes reflex withdrawal of various parts of the body. The pupil of the eye reflexively narrows in response to bright lights. Various foods cause salivation. It is entirely possible for humans to associate any of these --or other --reflex responses with a new stimulus.

In addition, more complex emotional, or "gut," responses may be conditioned to new stimuli. Many involuntary, autonomic nervous system responses ("Fight-or-flight" reflexes) are linked with new stimuli and situations by classical conditioning.

Phobia. A fear that persists even when no realistic danger exists. People with fears of animals, water, heights, thunder, fire, bugs, or whatever can often trace the fear to a time when they were frightened, injured, upset, or in pain while exposed to the feared object or stimulus. Reactions of this type are called conditioned emotional responses.

Conditioned emotional responses (CERs) are often broadened into phobias by stimulus generalization. CERs can be learned indirectly, a fact that adds to their effect on us. Children who learn to fear thunder by watching as their parents react to it have undergone similar conditioning.

Vicarious classical conditioning. Occurs when we observe the emotional reactions of another person to a stimulus and thereby learn to respond emotionally to the same stimulus. Such learning probably affects feelings in many situations. The movie 'Jaws' made ocean swimming a conditioned fear stimulus for many viewers. If movies affect us, we might expect the emotions of parents, friends, and relatives to have even more impact. The emotional attitudes we develop toward certain types of food, political parties, minority groups, escalators --whatever --are probably conditioned not only by direct experience but vicariously as well.

Desensitization. A therapy now widely used to extinguish or countercondition fears, anxieties, and phobias.

Neutral stimulus (NS). A stimulus that does not evoke a response, such as a bell.

Conditioned stimulus (CS). A stimulus to which one has learned to respond.

Unconditioned stimulus (US). Something one does not need to learn a response to (the food) --typically produces a reflex response or "built in," unconditioned (nonlearned) response (UR).

BEFORE CONDITIONING:
US (puff of air) → UR (eye blink)
NS (horn) → no effect

AFTER CONDITIONING:
CS (horn) → CR (eye blink)

Acquisition. Training. A conditioned response must be reinforced or strengthened during acquisition.

Higher-order conditioning. A well-learned CS is used to reinforce further learning; learning can be extended one or more steps beyond the original conditioned stimulus. Advertisers try to use this effect by pairing images that evoke good feelings (such as people smiling and having fun) with pictures of their products. Obviously, they hope that you will learn, by association, to feel good when you see their products.

Extinction. Removing the reinforcement. If the bell continues to ring but is not followed by food or juice, the salivating will be inhibited or suppressed. It is clear that classical conditioning can be weakened by removing reinforcement. Several extinction sessions may be necessary to completely reverse conditioning.

Spontaneous recovery. If the stimulus returns the next day --if the bell is rung again --the child might respond at first. Expectation takes time to extinguish.

Stimulus generalization. Other stimuli similar to the conditioned stimulus may also trigger a response; a telephone or doorbell may trigger the same response as the CS. Generalization extends learning to new settings and similar situations --otherwise, we wouldn't be so adaptable. Stimulus generalization applies, for example, to learning that things similar to matches are dangerous --lighters, fireplaces, stoves, and so forth.

Discrimination. Learning to respond differently to similar stimuli --to the bell but not the buzzer; the generalized response to the buzzer is extinguished.

Stimulus discrimination. Most of us quickly learn to discriminate voice tones associated with pain from those associated with praise or affection.

Operant conditioning. Involves learning that is affected by consequences. Each time a response is made, it may be followed by reinforcement, punishment, or nothing. These results determine whether a response is likely to be made again.

Reinforcement is used to alter the frequency of responses, the performance of responses, or to mold them into new patterns.

The learner actively "operates on" the environment. Thus, operant conditioning refers mainly to learning voluntary responses. Waving your hand in class to get a teacher's attention is a learned operant response. How do we learn to associate responses with their consequences?

Law of effect. Acts followed by reinforcement tend to be repeated. Learning is strengthened each time a response is followed by a satisfying state of affairs.

Operant reinforcer. Any event that follows a response and increases its probability.

Conditioning chamber (also called a Skinner box). A chamber kept barren of everything except what is needed for conditioning, so that the desired response is likely to occur.

Response contingent. Operant reinforcement must be given only after desired responses. Contingent reinforcement also affects the performance of responses.

Shaping. Gradual molding of responses to a final desired pattern. We can reward responses that come closer and closer to the final desired pattern until it occurs. Successive approximations are reinforced.

Successive approximations. Ever closer matches (to the desired responses).

Biological constraints. Limits. Some responses are easier to learn than others.

Instinctive drift. Learned responses tend to "drift" towards innate ones. Laws of learning operate within a framework of biological limits and possibilities.

Operant extinction. If a learned response is not reinforced, it gradually drops out of behavior. It takes time, though.

Spontaneous recovery. A brief return of learned behavior recently extinguished. Seems to be very adaptive --"just checking to see if the rules have changed...."

Primary reinforcers. Natural or not learned. Apply almost universally to a particular species. They are usually of a biological nature and produce comfort, end discomfort, or fill an immediate physical need. Food, water, and sex are obvious primary reinforcers.

Intra-cranial stimulation (ICS). Direct stimulation of "pleasure centers" in the brain through permanent implantation of tiny electrodes in specific areas. A rat "wired for pleasure" can be trained to press the bar in a Skinner box to deliver electrical stimulation to its own brain, and it will ignore food, water, and sex in favor of bar pressing.

Secondary reinforcers. Learned reinforcers, such as money, praise, attention, approval, success, affection, grades, and similar rewards. How do they gain an ability to promote learning? Some are simply associated with a primary reinforcer: although a tone produces no food, its repeated pairing with food makes it a secondary reinforcer.

Tokens. Secondary reinforcers that can be exchanged for primary reinforcers --money that can buy food, etc. A major advantage of tokens is that they do not lose reinforcing value as quickly as primary reinforcers do (one's hunger can be satiated). The goal is to provide an immediate, tangible reward as an incentive for learning. Tokens may be exchanged for food, desired goods, special privileges, or trips to movies, amusement parks, and so forth.

Generalized reinforcers. When money, a token, or some other secondary reinforcer becomes largely independent of its link to primary reinforcers (hence hoarding money even when hungry). In addition, it can lead to other secondary reinforcers, such as prestige, attention, approval, status, or power.

Prepotent responses. Frequent, preferred responses. Discovering what will serve as a reinforcer is useful for changing one's own behavior: a frequent response can be used to reinforce an infrequent one --requiring oneself to take out the trash before listening to music, which one does frequently and enjoys.

Premack principle (Idea advanced by David Premack - 1965) Any frequent (or "prepotent") response can be used to reinforce an infrequent response.

Delay of reinforcement. Reinforcement has its greatest effect on operant learning when the time lapse between a response and its consequences is short.

Response chaining. A single reinforcer can often maintain a long chain of responses. The long series of events necessary to prepare a meal, for instance, is rewarded by the final eating. A violin maker may spend three months carrying out thousands of steps for the final reward of hearing the first note from an instrument.

Superstitious behavior. Developing unnecessary responses --attaching them to the reinforcement. They appear to pay off. Crediting things with good fortune. Magic. Rituals to produce abundant crops, bring rain, ward off illness....

In daily life, learning is usually based on partial reinforcement: most of our responses are not consistently rewarded. Reinforcement may be given in a number of patterns, each of which affects responding. There are specific effects and there is a general effect.

Serendipity (n): the discovery of one thing while looking for another.

Partial reinforcement effect. Responses acquired by partial reinforcement are highly resistant to extinction. Since you have developed the expectation that any play may be "the one," it will be hard to resist just one more play... and one more... and one more. Also, partial reinforcement includes long periods of nonreward, so it is harder to discriminate between periods of reinforcement and extinction.

Schedules of reinforcement. A rule or plan for determining which responses will be reinforced.

Continuous reinforcement. A schedule in which every response is followed by a reinforcer.

Fixed Ratio schedule (FR). The ratio of reinforcers to responses is fixed; FR-2 means that every other response is rewarded. FR-3 means every third. Fixed ratio schedules produce very high rates of response. When a fixed number of items must be produced for a set amount of pay, work output is high. A rat will quickly perform the 10 responses in an FR-10, eat, then perform the next 10.

Variable Ratio schedule (VR). A slight variation on FR: on a VR-4 schedule, reinforcement comes on the average every fourth response. Since reinforcement is less predictable, VR schedules produce greater resistance to extinction than FR schedules do. One good shot in ten may be all that's needed to create a sports fanatic. Produces very high rates of response.

Fixed Interval schedule (FI). Reinforcement is given for the first correct response made after a fixed amount of time has passed --on an FI-30 schedule, one must wait the 30 seconds before a response brings reinforcement, regardless of the correctness and frequency of responses in the meantime. Produces moderate response rates. Animals on this schedule develop a keen sense of the passage of time.

Variable Interval schedule (VI). A variation on FI, with varied amounts of time between intervals. Produces slow, steady rates of response and tremendous resistance to extinction. Success at fishing and at getting through to a busy telephone number are on VI schedules.
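The schedules above are, at bottom, simple decision rules about which response earns a reinforcer. As a minimal illustrative sketch (the function names and parameters are my own, not from Coon), the ratio and interval rules might be written out like this:

```python
import random

def fixed_ratio(n, size=10):
    """FR-10: every 10th response is reinforced."""
    return n % size == 0

def variable_ratio(size=4, rng=random.random):
    """VR-4: each response has a 1-in-4 chance, so on average
    every fourth response is reinforced (unpredictably)."""
    return rng() < 1.0 / size

def fixed_interval(t, last_reward_t, interval=30):
    """FI-30: the first response made 30+ seconds after the last
    reinforcer pays off; responses before that earn nothing."""
    return t - last_reward_t >= interval

# FR-2 rewards every other response:
print([fixed_ratio(n, size=2) for n in range(1, 7)])
# → [False, True, False, True, False, True]
```

A variable interval schedule would be the same as `fixed_interval` but with `interval` redrawn at random after each reinforcer, which is what makes the payoff moment unpredictable and extinction so slow.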

Many of the stimuli we encounter each day act like stop or go signals that guide responding. In other words, antecedent stimuli (events that come before a response) also affect operant conditioning. Responses tend to come under the control of stimuli present in a specific situation: stimuli signal what consequence will follow if a response is made, and so influence when and where a response will be made --asking for something when the person is in a good mood, for example.

Generalization in operant conditioning.

Discrimination in operant conditioning.




