What are Classical and Operant Conditioning? (Include the main figures and concepts of each learning theory. Give examples.) Thanks.
© BrainMass Inc. brainmass.com October 9, 2019, 4:08 pm
Classical and Operant Conditioning
1. Classical Conditioning
One important type of learning, Classical Conditioning, was actually discovered accidentally by Ivan Pavlov (1849-1936). Pavlov was a Russian physiologist who discovered this phenomenon while doing research on digestion. His research was aimed at better understanding the digestive patterns in dogs.
During his experiments, he would put meat powder in the mouths of dogs who had tubes inserted into various organs to measure bodily responses. What he discovered was that the dogs began to salivate before the meat powder was presented to them. Then, the dogs began to salivate as soon as the person feeding them would enter the room.
He soon began to gain interest in this phenomenon and abandoned his digestion research in favor of his now famous Classical Conditioning study.
Basically, the findings support the idea that we develop responses to certain stimuli that are not naturally occurring. When we touch a hot stove, our reflex pulls our hand back. It does this instinctually, no learning involved; it is merely a survival instinct. But why, then, do some people, after getting burned, pull their hands back even when the stove is not turned on? Pavlov discovered that we make associations that cause us to generalize our response from one stimulus onto a neutral stimulus it is paired with. In other words, hot burner = ouch, stove = burner, therefore stove = ouch.
Pavlov began pairing a bell sound with the meat powder and found that even when the meat powder was not presented, the dog would eventually begin to salivate after hearing the bell. Since the meat powder naturally results in salivation, these two variables are called the unconditioned stimulus (UCS) and the unconditioned response (UCR), respectively. The bell and salivation are not naturally occurring; the dog was conditioned to respond to the bell. Therefore, the bell is considered the conditioned stimulus (CS), and the salivation to the bell, the conditioned response (CR).
Many of our behaviors today are shaped by the pairing of stimuli. You may have noticed that certain stimuli, such as the smell of a cologne or perfume, a particular song, or a specific day of the year, evoke fairly intense emotions. It's not that the smell or the song is the cause of the emotion, but rather what that smell or song has been paired with: perhaps an ex-boyfriend or ex-girlfriend, the death of a loved one, or maybe the day you met your current husband or wife. We make these associations all the time and often don't realize the power that these connections, or pairings, have on us. But, in fact, we have been classically conditioned.
2. Operant Conditioning
The scientific study of operant conditioning dates from the beginning of the twentieth century with the work of Edward L. Thorndike in the U.S. and C. Lloyd Morgan in the U.K. (http://www.scholarpedia.org/article/Operant_conditioning). B.F. Skinner is another well-known figure.
Another type of learning, very similar to that discussed above, is called Operant Conditioning. The term "Operant" refers to how an organism operates on the environment, and hence, operant conditioning comes from how we respond to what is presented to us in our environment. It can be thought of as learning due to the natural consequences of our actions.
Let's explain that a little further.
The classic study of Operant Conditioning involved a cat that was placed in a box with only one way out; a specific area of the box had to be pressed for the door to open. The cat initially tries to get out of the box because freedom is reinforcing. In its attempts to escape, the cat eventually triggers that area of the box and the door opens. The cat is now free. Once placed in the box again, the cat will naturally try to remember what it did to escape the previous time and will once again find the area to press. The more often the cat is placed back in the box, the quicker it will press that area for its freedom. It has learned, through natural consequences, how to gain the reinforcing freedom.
We learn this way every day in our lives. Imagine the last time you made a mistake; you most likely remember that mistake and do things differently when the situation comes up again. In that sense, you've learned to act differently based on the natural consequences of your previous actions. The same holds true for positive actions. If something you did results in a positive outcome, you are likely to do that same activity again.
The following excerpt and the article excerpt at the end of this response further detail the theories.
The scientific study of operant conditioning dates from the beginning of the twentieth century with the work of Edward L. Thorndike in the U.S. and C. Lloyd Morgan in the U.K.
Graduate student Thorndike's early experimental work, looking at cats escaping from puzzle boxes in William James' basement at Harvard, led to his famous "Law of Effect":
Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal...will, other things being equal, be more firmly connected with the situation...; those which are accompanied or closely followed by discomfort...will have their connections with the situation weakened...The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond. (Thorndike, 1911, p. 244)
Thorndike soon gave up work with animals and became an influential educator at Columbia Teachers College. But the Law of Effect, which is a compact statement of the principle of operant reinforcement, was taken up by what became the dominant movement in American psychology in the first half of the twentieth century: Behaviorism.
The founder of behaviorism was John B. Watson at Johns Hopkins University. His successors soon split into two schools: Clark Hull at Yale and Kenneth Spence at Iowa were neo-behaviorists. They sought mathematical laws for learned behavior. For example, by looking at the performance of groups of rats learning simple tasks such as discriminating the correct arm in a T-maze, they were led to the idea of an exponential learning curve and a learning principle of the form V(t+1) = A(1-V(t)), where V is response strength, A is a learning parameter less than one, and t is a small time step.
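To make the quoted learning rule concrete, here is a minimal Python sketch that simply iterates V(t+1) = A(1 - V(t)). The function name, the starting strength V(0) = 0, and the trial count are illustrative assumptions, not details from the original experiments:

```python
def learning_curve(A, v0=0.0, trials=20):
    """Iterate the linear learning rule V(t+1) = A * (1 - V(t)).

    A: learning parameter (assumed 0 < A < 1, as in the text).
    v0: initial response strength (an illustrative assumption).
    Returns the list of successive response strengths.
    """
    v = v0
    history = [v]
    for _ in range(trials):
        v = A * (1.0 - v)
        history.append(v)
    return history

# With A < 1 the values converge geometrically toward the fixed point
# V* = A / (1 + A), giving the negatively accelerated, exponential-looking
# learning curve the neo-behaviorists observed in group data.
curve = learning_curve(A=0.5)
```

Plotting `curve` against trial number shows the rapid early gains and flattening asymptote characteristic of such learning curves.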
Soon, B. F. Skinner, at Harvard, reacted against Hullian experimental methods (group designs and statistical analysis) and theoretical emphasis, proposing instead his radical a-theoretical behaviorism. The best account of Skinner's method, approach and early findings can be found in a readable article -- "A case history in scientific method" -- that he contributed to an otherwise almost forgotten multi-volume project "Psychology: A Study of a Science" organized on positivist principles by editor Sigmund Koch. (A third major behaviorist figure, Edward Chace Tolman, on the West coast, was close to what would now be called a cognitive psychologist and stood rather above the fray.)
Skinner opposed Hullian theory and devised experimental methods that allowed learning animals to be treated much like physiological preparations. He had his own theory, but it was much less elaborate than Hull's and (with one notable exception) he neither derived nor explicitly tested predictions from it in the usual scientific way. Skinner's 'theory' was more an organizing framework than a true theory. It was nevertheless valuable because it introduced an important distinction between reflexive behavior, which Skinner termed elicited by a stimulus, and operant behavior, which he called emitted because when it first occurs (i.e., before it can be reinforced) it is not (he believed) tied to any stimulus.
The view of operant behavior as a repertoire of emitted acts from which one is selected by reinforcement immediately forged a link with the dominant idea in biology: Charles Darwin's natural selection, according to which adaptation arises via selection from a population that contains many heritable variants, some more effective - more likely to reproduce - than others. Skinner and several others noted this connection, which has become the dominant view of operant conditioning. Reinforcement is the selective agent, acting via temporal contiguity (the sooner the reinforcer follows the response, the greater its effect), frequency (the more often these pairings occur, the better) and contingency (how well the target response predicts the reinforcer). It is also true that some reinforcers are innately more effective with some responses - flight is more easily conditioned as an escape response in pigeons than pecking, for example.
Contingency is easiest to describe by example. Suppose we reinforce with a food pellet every 5th occurrence of some arbitrary response such as lever pressing by a hungry lab rat. The rat presses at a certain rate, say 10 ...
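The every-5th-response arrangement described here is what behaviorists call a fixed-ratio (FR-5) schedule, and its bookkeeping can be sketched in a few lines of Python. The function name and the press counts below are illustrative assumptions:

```python
def reinforcers_earned(presses, ratio=5):
    """On a fixed-ratio schedule, every `ratio`-th response is reinforced,
    so the pellet count is the press count floor-divided by the ratio."""
    return presses // ratio

# A rat that presses the lever 10 times on an FR-5 schedule earns 2 pellets;
# 4 presses earn none, because the 5th response has not yet occurred.
pellets = reinforcers_earned(10, ratio=5)
```

Under such a schedule the response strongly predicts the reinforcer (high contingency): food arrives only because of, and immediately after, the rat's own pressing.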
Classical and Operant Conditioning are overviewed, including the main figures and concepts of each learning theory. Examples and an article are provided for further expansion of the two concepts.