Famous Psychology Experiments


1 Famous Psychology Experiments
This presentation will take a look at some famous experiments in psychology. The experiments you’ll learn about fall under three broad categories: learning, social psychology, and brain disorders. Before we discuss specific experiments, let’s review some of the basics of psychological experimentation.

2 Ivan Pavlov: Classical Conditioning Experiments on Dogs
Ivan Pavlov was a Russian scientist who specialized in studying the digestive system. He earned Russia’s first Nobel Prize in 1904. Pavlov became famous for his experiments in classical conditioning. In the 1890s and early 1900s, he conducted these experiments on dogs, whose digestive systems he had been studying. The next few slides will demonstrate these experiments and will explain how classical conditioning works.
(Image caption: Smarty Pants: Nobel Prize Dog)

3 Pavlov’s Methodology and Results
Present an external (neutral) stimulus (a bell) immediately before giving food; the order is important.
Results: after a few trials, the dog salivates upon hearing the bell. This works with other stimuli as well.
Pavlov placed a dog in a small room and harnessed it so it couldn’t move around. He then attached a collection device to the dog’s mouth to measure how much the dog salivated. He began the experiment by ringing a bell and then immediately placing some food into the dog’s mouth. He measured the amount of saliva the dog secreted after each feeding. After repeating this bell-and-feeding sequence several times, Pavlov found that the dog would begin to salivate before it actually received the food, confirming his hypothesis. Pavlov achieved similar results when he tried other external stimuli, including a buzzer, a light, and touching the dog’s leg.

4 Classical Conditioning Components
CS (conditioned stimulus): a learned trigger (initially neutral)
UCS (unconditioned stimulus): automatically triggers a response
UCR (unconditioned response): a naturally occurring response
CR (conditioned response): a learned response
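To make the four terms concrete, here is a short illustrative snippet (an addition to the original slides, not part of them) that maps each component onto Pavlov’s bell-and-food setup described above.

```python
# Illustrative mapping of Pavlov's bell-and-food experiment onto the four
# classical-conditioning components (the wording of each example is ours).
components = {
    "UCS (unconditioned stimulus)": "food placed in the dog's mouth",
    "UCR (unconditioned response)": "salivation in response to the food",
    "CS (conditioned stimulus)": "the bell (initially a neutral stimulus)",
    "CR (conditioned response)": "salivation in response to the bell alone",
}

for term, example in components.items():
    print(f"{term}: {example}")
```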

5 Continuing Pavlov’s Experiment
Other aspects of classical conditioning:
Acquisition: learning the CS + UCS pairing; making the association
Extinction: the CR is suppressed (not eliminated)
Spontaneous recovery: after extinction and the passage of time, the CR recurs without the UCS
Generalization: the CR occurs to stimuli that are similar to the original CS
Discrimination: the CR occurs to one particular stimulus only
Over the next few decades, Pavlov continued his experiment to learn more about how the dogs became conditioned to the external stimuli. He identified the following additional aspects of classical conditioning:
Acquisition: There should be very little time between the presentation of the external stimulus (the bell) and the food. Also, if the food is presented before the bell rings, the dog does not become conditioned to salivate upon hearing the bell.
Extinction: Once the dog has been conditioned, its conditioned salivation response will not last forever. If the bell is repeatedly presented without food, the conditioned response (CR) gradually becomes less pronounced until it becomes “extinct.”
Spontaneous Recovery: Interestingly, after Pavlov extinguished the conditioned response by not providing food with the bell, a few hours later the dogs would again salivate, at a weakened level, upon hearing a bell, even though no food was presented.
Generalization: The dogs salivated upon hearing the sound of bells that were similar to, but not the same as, the one to which Pavlov had conditioned them to respond.
Discrimination: Pavlov was also able to train his dogs to discriminate one sound from another and to respond to only one type of bell.
Pavlov’s studies have had a significant impact on theories of learning in humans. Think about how each of Pavlov’s findings might relate to the ways in which people learn and react to their environment.
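The acquisition and extinction curves described above can be sketched in a few lines of code. The sketch below is an illustrative addition: it uses the much later Rescorla-Wagner learning rule as a stand-in for Pavlov’s qualitative findings, and every number in it is made up.

```python
# Minimal sketch: acquisition and extinction of a conditioned response.
# Not Pavlov's own method; the Rescorla-Wagner update rule is used here only
# to show the shape of the curves, and all parameters are illustrative.

def conditioning(trials, strength=0.0, rate=0.3):
    """Track associative strength (bell -> salivation) across trials.

    Each entry in `trials` is True if the bell (CS) is followed by food (UCS),
    or False if the bell is presented alone (an extinction trial).
    """
    history = []
    for food_follows in trials:
        target = 1.0 if food_follows else 0.0   # strength the pairing can support
        strength += rate * (target - strength)  # move a step toward that target
        history.append(round(strength, 3))
    return history

# 10 acquisition trials (bell + food), then 10 extinction trials (bell alone).
acquisition = conditioning([True] * 10)
extinction = conditioning([False] * 10, strength=acquisition[-1])

print("Acquisition:", acquisition)  # strength climbs toward 1.0 (CR appears)
print("Extinction: ", extinction)   # strength decays toward 0.0 (CR fades)
```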

6 John Watson and Rosalie Rayner: Hypothesis, Methodology, Results
Conditioned fear into an infant: presented a rat immediately followed by a loud noise, startling the baby (rat + loud noise = fear). After a few trials, Albert was afraid of the rat.
Following up on Pavlov’s work, in 1920 John Watson and Rosalie Rayner hypothesized that humans could be conditioned to have certain fears. In particular, they hypothesized that a human child could be conditioned to fear a rat. The child they studied was an 11-month-old boy named Albert B., or “Little Albert.” Before the experiment, Little Albert was not afraid of rats, but he was afraid of loud noises. Watson and Rayner began the experiment by showing Little Albert a white rat. As Albert reached for the rat, the experimenters pounded a hammer directly behind his head, startling Albert. After doing this several times, Albert became frightened and began to cry simply upon seeing the rat, without any accompanying noise. Watson and Rayner had therefore successfully conditioned him to fear the rat. A few days after conditioning Little Albert to fear the rat, Watson and Rayner found that Albert had generalized his fear to other furry things, including a rabbit, a dog, a sealskin coat, and a Santa Claus mask. He did not express fear when exposed to non-furry toys.
Albert generalized his fears to other furry objects.

7 Mary Cover Jones: Colleague of Watson
Deconditioned three-year-old Peter from his fears by gradually moving a rabbit (and other feared objects) closer to him while he was eating.
In 1925, Mary Cover Jones (a colleague of Watson) hypothesized that she would be able to decondition a three-year-old boy named Peter from some of his fears, which included feathers, cotton, frogs, fish, rats, rabbits, and mechanical toys. She began by bringing a caged rabbit into the same room where Peter was having a snack in his highchair. The rabbit was far enough away that it did not bother Peter. The next day, she brought the rabbit closer until Peter began to show distress. On subsequent days, the rabbit was moved closer and closer to Peter’s highchair, but only to the point at which Peter became afraid, at which time the session would end for the day. Eventually, Peter was able to pet the rabbit, having been deconditioned from his fear. Jones was able to decondition most of Peter’s other fears in this manner. Watson and Rayner’s experiment with Little Albert showed that people can be conditioned to fear specific types of objects. Conversely, Jones suggested that people could be deconditioned from their fears. Similar methods are used today to help people overcome phobias.

8 B.F. Skinner and Operant Conditioning
B.F. Skinner had an enormous influence on psychology in general and on the field of psychology known as behaviorism in particular. His key theories were published in the early 1950s. As Pavlov’s experiments showed, classical conditioning involves associating a neutral external stimulus with a response that is generally automatic (such as salivating). Skinner’s research revealed the power of operant conditioning, which involves learning how to operate on one’s environment to elicit a particular stimulus (a reward) or to avoid a punishment. In operant conditioning, the subject controls his or her response. You will learn how this works in the next few slides.
Classical conditioning involves an automatic response to a stimulus.
Operant conditioning involves learning how to control one’s response to elicit a reward or avoid a punishment.

9 The “Skinner Box”: Skinner’s Hypothesis, Methodology, and Results
Rats placed in “Skinner boxes”
Shaping: guiding behavior by rewarding the rat as it gets closer and closer to the bar; food is a reinforcer.
Skinner hypothesized that rats could be trained to perform specific behaviors in order to receive a food reward. He placed the rats into what is technically called an “operant chamber” but became more commonly known as a “Skinner box.” The soundproof glass box contained a bar or a key that the rat could press down to receive food. This bar or key was hooked up to an instrument that recorded how many times the rat pressed it. Skinner used a process called “shaping” to teach the rats to press the bar for food. For example, if a rat approached the bar, he might initially give it a pellet of food as a reward for getting close to the bar. Skinner would then gradually require the rat to get closer to the bar before giving it food. Eventually, the rat learned that it had to press the bar in order to get any food. The food in this case is referred to as a “reinforcer,” since it reinforces the rat’s behavior of stepping closer to and eventually pressing the bar.
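The shaping procedure just described boils down to: reinforce any behavior that meets the current criterion, then tighten the criterion toward the target behavior. The sketch below is an illustrative addition showing only the experimenter’s side of that loop; the rat’s behavior is stubbed out as random noise, and all names and numbers are hypothetical.

```python
import random

# Illustrative sketch of shaping by successive approximation: reinforce any
# behavior that meets the current criterion, then tighten the criterion.
# The rat's behavior is a random stand-in; all numbers are hypothetical.

TARGET = 0.0        # distance 0.0 means the rat is actually pressing the bar
criterion = 1.0     # at first, merely approaching the bar earns food

def observed_distance():
    """Stand-in for watching the rat: how far it is from the bar (0.0 to 1.0)."""
    return random.random()

pellets = 0
for trial in range(500):
    if observed_distance() <= criterion:  # close enough under the current criterion
        pellets += 1                       # deliver a food pellet (the reinforcer)
        criterion *= 0.9                   # demand a closer approximation next time
    if criterion <= 0.05:                  # criterion has shrunk to "press the bar"
        break

print(f"Delivered {pellets} pellets; final criterion = {criterion:.3f}")
```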

10 Basic Types of Reinforcement
Reinforcer: any event that increases or strengthens the behavior it follows.
Primary reinforcer: innately satisfying; not learned (e.g., food).
Secondary reinforcer: gains its power through association with primary reinforcers; learned (e.g., good grades).
Positive reinforcement: strengthens a response by presenting a desirable stimulus after the response (e.g., praise, money).
Pass out handout!

11 Reinforcement Schedules
Fixed-ratio: In a fixed-ratio schedule, behavior is reinforced after a set number of responses. For example, an animal might receive food every ten times it presses the bar. After receiving its food reward, the rat presses the bar rapidly until it receives another one.
Variable-ratio: Reinforcement is provided after a variable number of responses. Sometimes the animal receives food after two responses, sometimes after twenty; the number of times the rat has to press the bar varies. Animals press the bar frequently because they know they’ll get more food the more they press.
Fixed-interval: Reinforcement is based on a time schedule. As the time for another reward draws near, the animal will press the bar more often.
Variable-interval: Reinforcement is provided from time to time at a variable rate but is not dependent on how many times the rat presses the bar. The animal tends to press the bar at a slow but steady rate since it has no idea how long it will have to wait for its reward.
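For readers who find the four schedules easier to compare side by side, here is a minimal illustrative sketch (an addition, not Skinner’s apparatus) that reduces each schedule to a small rule deciding whether a given bar press earns food. All numbers (every tenth press, 60 seconds, and so on) are hypothetical.

```python
import random

# Illustrative rules for the four reinforcement schedules described above.
# Each function answers: does this bar press earn a food reward?

def fixed_ratio(press_count, n=10):
    """Reinforce every n-th response."""
    return press_count % n == 0

def variable_ratio(mean_presses=10):
    """Reinforce each response with probability 1/mean_presses, so the
    required number of presses varies unpredictably around the mean."""
    return random.random() < 1 / mean_presses

def fixed_interval(seconds_since_last_reward, interval=60):
    """Reinforce the first response made after a fixed interval has elapsed."""
    return seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, required_wait):
    """Reinforce the first response made after an unpredictable wait;
    required_wait would be redrawn (e.g., random.uniform(30, 90)) after each reward."""
    return seconds_since_last_reward >= required_wait

# Example: on a fixed-ratio 10 schedule, 100 presses earn exactly 10 rewards.
rewards = sum(fixed_ratio(press, n=10) for press in range(1, 101))
print(rewards)  # 10
```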

12 What is Negative Reinforcement?
Write down a word or phrase that means the same thing as Negative Reinforcement.

13 Negative Reinforcement and Punishment
Negative reinforcement: removing an unpleasant stimulus.
Punishment: (1) introducing an unpleasant stimulus, or (2) withholding a pleasant stimulus.
In operant conditioning, negative reinforcement and punishment are two different things that can be easy to confuse. The food rewards Skinner used are known as positive reinforcement. Negative reinforcement, on the other hand, occurs when an unpleasant stimulus is removed. For example, in one experiment Skinner would play a loud noise inside the box; to make the noise stop, the rat would have to press the bar. Punishment involves either the introduction of an unpleasant stimulus or the removal of a pleasant stimulus. When Skinner gave a rat a painful electric shock after it pushed the bar, the rat learned not to push the bar. Another example of punishment would be a parent withholding dessert from a child who misbehaves at the dinner table.
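Because this distinction is so easy to confuse, here is a small illustrative sketch (not from the slides) that labels a consequence based on two questions: is the stimulus pleasant or unpleasant, and is it being added or removed?

```python
# Illustrative classifier for operant-conditioning consequences:
# reinforcement makes a behavior more likely, punishment makes it less likely,
# and the "negative" in negative reinforcement means a stimulus is removed.

def classify(stimulus_is_pleasant: bool, stimulus_is_added: bool) -> str:
    """Label an operant-conditioning consequence."""
    if stimulus_is_added:
        return ("positive reinforcement" if stimulus_is_pleasant
                else "punishment (introducing an unpleasant stimulus)")
    return ("punishment (withholding a pleasant stimulus)" if stimulus_is_pleasant
            else "negative reinforcement (removing an unpleasant stimulus)")

# Examples from the slide:
print(classify(stimulus_is_pleasant=False, stimulus_is_added=False))  # noise stops when the bar is pressed
print(classify(stimulus_is_pleasant=False, stimulus_is_added=True))   # electric shock after pressing the bar
print(classify(stimulus_is_pleasant=True, stimulus_is_added=False))   # dessert withheld after misbehaving
print(classify(stimulus_is_pleasant=True, stimulus_is_added=True))    # food pellet after pressing the bar
```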

14 Big Bang Theory Operant Conditioning
What kind of reinforcement was used?

15 Law of Effect-Thorndike
Reinforced behaviors are strengthened; punished behaviors are weakened.

16 Rates and Types of Reinforcement: Additional Experiments
Fixed-ratio: reinforcement is given after a fixed number of responses (sales). Produces a high response rate.
Variable-ratio: reinforcement is given after an unpredictable number of responses (gambling). Produces a high response rate.
Fixed-interval: reinforcement is given after a fixed amount of time (mail).
Variable-interval: reinforcement is given after an unpredictable amount of time ( ).
Predictability matters.
Skinner and his associates tried these reinforcement schedules with animals such as rats and pigeons. Can you think of examples of these types of reinforcement in people’s lives? What about gambling? What about factory work in which people are paid by the number of items they produce? What about jobs that pay hourly wages?

17 Skinner’s Importance
Education: programmed instruction
Work
Parenting
Personal goals
Skinner believed that humans learn behavior through reinforcement, much as rats learn to press a bar when that behavior is reinforced with food. His contributions to the fields of psychology and education therefore focused on learned behaviors and reinforcement. For example, Skinner and his followers promoted the use of machines to teach students concepts in small incremental steps, giving them rewards for right answers; this method is called “programmed instruction.” Skinner emphasized the importance of receiving feedback for each step (e.g., for each math problem in a sequence) before going on to the next one. Skinner’s behaviorist ideas have also been used in the workplace to give workers added incentives and to provide immediate reinforcement for good work. Operant conditioning also appears commonly as a parenting technique, as when parents reinforce good behaviors and try to extinguish negative behaviors. It’s also possible to use Skinner’s techniques to accomplish personal goals. Imagine that you want to spend more time on your psychology homework but never seem to get around to it or feel that you can’t get organized. You could begin by observing how much studying you currently do for this class. You’d then set a goal to study a certain number of minutes or hours more each evening and to reinforce that behavior with a reward (such as candy or time playing a video game). Over time, your study skills would hopefully become more natural and you wouldn’t always need the reward.
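The programmed-instruction idea described above follows a simple loop: present material in a small step, require an active response, and give immediate feedback before moving on. The sketch below is an illustrative addition; the drill questions and function names are hypothetical, not Skinner’s teaching machines.

```python
# Minimal sketch of a programmed-instruction loop: small steps, an active
# response at each step, and immediate feedback before moving on.
# The drill items and names are hypothetical.

DRILL = [
    ("A reinforcer makes the behavior it follows more or less likely? ", "more"),
    ("Removing an unpleasant stimulus is reinforcement or punishment? ", "reinforcement"),
]

def run_drill(items):
    for prompt, expected in items:
        while True:
            answer = input(prompt).strip().lower()
            if answer == expected:
                print("Correct!")              # immediate feedback reinforces the step
                break
            print("Not quite; try again.")     # the learner repeats the step until correct

if __name__ == "__main__":
    run_drill(DRILL)
```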

18 Rodney Atkins Watching You
What does this song reveal about the nature of human learning? There is a handout of the lyrics.
Click on Rodney Atkins – “Watching You” to play the video. After the video, have the kids underline the parts of the song that show examples of how we learn. Have them first share with their DPs (desk partners) and jot down answers to the questions on the blue handout. Follow up with a large-group discussion.

19 Brain Development: Mirror Neurons
Neural firing in response to observation; wired to be empathic.
Observational learning is a large part of how we learn how to behave. Part of what makes this principle of learning so strong is how our brains are wired. For example, you see a stranger stub her toe and you immediately flinch in sympathy, or you notice a friend wrinkle up his face in disgust while tasting some food and suddenly your own stomach recoils at the thought of eating. This ability to instinctively and immediately understand what other people are experiencing has long baffled neuroscientists, but recent research now suggests a fascinating explanation: brain cells called mirror neurons.
In the early 1990s, Italian researchers made an astonishing and quite unexpected discovery. They had implanted electrodes in the brains of several macaque monkeys to study the animals’ brain activity during different motor actions, including the clutching of food. One day, as a researcher reached for his own food, he noticed neurons begin to fire in the monkeys’ motor cortex—the same area that showed activity when the animals made a similar hand movement. How could this be happening when the monkeys were sitting still and merely watching him?
During the ensuing two decades, this unexpected discovery of mirror neurons—a special class of brain cells that fire not only when an individual performs an action, but also when the individual observes someone else make the same movement—has radically altered the way we think about our brains and ourselves, particularly our social selves. Before the discovery of mirror neurons, scientists generally believed that our brains use logical thought processes to interpret and predict other people’s actions. Now, however, many have come to believe that we understand others not by thinking, but by feeling. For mirror neurons appear to let us “simulate” not just other people’s actions, but the intentions and emotions behind those actions. When you see someone smile, for example, your mirror neurons for smiling fire up, too, creating a sensation in your own mind of the feeling associated with smiling. You don’t have to think about what the other person intends by smiling. You experience the meaning immediately and effortlessly.
Mirror neurons have helped to explain our ability to learn through observational experiences and have helped us understand more about the following:
New insight into how and why we develop empathy for others.
More knowledge about autism, schizophrenia, and other brain disorders characterized by poor social interactions.
A new theory about the evolution of language.
New therapies for helping stroke victims regain lost movement.

20 Observational Learning
What is the impact of prosocial modeling and of antisocial modeling? ( )

21 Albert Bandura: Hypothesis
The experiments of Pavlov, Watson and Rayner, and Skinner showed that both animals and humans can be conditioned to have certain responses and behaviors. This conditioning represents just one type of learning. Albert Bandura believed that we also learn from observing and imitating others and that, in fact, learning can be much easier when done in this social context. He hypothesized that children could learn aggressive behavior by observing adults behaving aggressively. His key theories were published in the early 1960s.
Believed we learn through observation and imitation.
Hypothesized that children would imitate aggressive behavior they observed.

22 Bandura’s Methodology
Bandura had preschool children watch films in which adults punched inflatable dolls, called Bobo dolls, while yelling such things as, “Pow—right in the nose!” The children were divided into three groups. Each group saw the same film but with a different ending. The three endings could be categorized as follows:
Aggression-rewarded: the adult was praised and received treats at the end of the film; he was rewarded for his aggressive behavior toward the Bobo doll.
Aggression-punished: the adult was punished by being called a bully, swatted, and made to cower.
No consequences: the adult was neither rewarded nor punished for his aggressive behavior toward the Bobo doll.
Children watched films of adults beating Bobo dolls.
Three groups: aggression-rewarded, aggression-punished, no consequences.
Children then went into rooms with toys that they were told not to play with.

23 Bandura’s Results
(Chart: Effect of Observed Consequence on Imitative Behavior)
Immediately after watching the film, the children in all groups were taken into a room with toys but told not to play with them. They were then taken into another room with a Bobo doll and other toys, much as they’d seen in the film. By this time, the children were generally frustrated that they hadn’t been able to play with any of the toys. Researchers observed their behavior and found that the children who had watched the aggression-rewarded and the no-consequences films were equally likely to behave aggressively toward the Bobo doll. Children who had watched the aggression-punished film, however, imitated the adults’ aggression significantly fewer times than children in the other two groups.
Children in the aggression-punished group expressed the fewest aggressive behaviors toward the Bobo dolls.
Children in the other two groups expressed an equal number of aggressive behaviors and were more aggressive than children in the aggression-punished group.

24 Bandura’s Experiment, continued
Viewing aggressive behavior + rewards for imitation = aggressive behavior
The experiment continued when an adult researcher told the children that they’d be rewarded with stickers and juice if they could imitate the adult they had seen in the film. After this promise, the children in all three groups exhibited an equal number of aggressive behaviors toward the Bobo dolls. They had apparently learned the aggressive adult’s behaviors but, in the case of the aggression-punished group, had suppressed imitation for fear of punishment.
Children were promised rewards for imitating the adult in the film.
Now, all three groups were equally aggressive.
The children had learned the aggressive behavior from the film, but those who saw the adult being punished had been less likely to act it out until the rewards were offered.

25 Bandura’s Social Learning Theory
We learn by observation and imitation.
Relates to the effects of violence and other images on TV and in the movies.
Children imitate good and neutral behaviors as well as bad ones.
Bandura’s results led him to develop his social learning theory, which states that we learn by observation and imitation. Bandura’s theory has numerous implications. For example, subsequent research has indicated that children imitate the things they see on television and in movies, including acts of aggression and violence. Research indicates that children who have watched a violent TV program or movie might not imitate the actual behavior they observed, such as killing or beating, but they may be more likely to express generally aggressive behaviors such as hitting a sibling. On the other hand, children imitate good or neutral behavior as well, such as performing nice deeds for other people or assembling a puzzle. Think about the ways in which you, either now or at a younger age, have learned through observation and imitation.

