1 Unit 6: Learning

2 Unit 06 - Overview How We Learn and Classical Conditioning; Operant Conditioning; Operant Conditioning’s Applications, and Comparison to Classical Conditioning; Biology, Cognition, and Learning; Learning by Observation. Click on any of the above hyperlinks to go to that section in the presentation.

3 Module 26: How We Learn and Classical Conditioning

4 How Do We Learn?

5 Learning Habituation Stimulus Associative learning –Classical conditioning –Operant conditioning –Cognitive learning Observational learning

6 Definition: Learning is the process of acquiring new and relatively enduring information or behaviors. Ex. - It is through learning that we humans are able to adapt to our environments.

7 How Do We Learn? We learn by association; our minds naturally connect events that occur in sequence. Associative Learning - learning that two events occur together: two stimuli, or a response and its consequence. Habituation is an organism’s decreasing response to a stimulus with repeated exposure to it. More than 2,000 years ago, Aristotle suggested this law of association; about 200 years ago, Locke and Hume reiterated it.

8 How Do We Learn? Classical Conditioning

14 How Do We Learn? Operant Conditioning

18 How Do We Learn? Conditioning is not the only form of learning. Through cognitive learning we acquire mental information that guides our behavior. – Ex. observational learning.

19 Classical Conditioning

20 Classical Conditioning Pavlov’s Experiments Ivan Pavlov –Background –Experimental procedure

21 Classical Conditioning - Ivan Pavlov (1849-1936)  Ideas of classical conditioning originate from old philosophical theories.  However, it was the Russian physiologist Ivan Pavlov who elucidated classical conditioning.  His work provided a basis for later behaviorists such as John Watson and B. F. Skinner.  Pavlov studied digestive secretions and won the Nobel Prize in 1904.

22 Pavlov - Classical Conditioning  The organism comes to associate two stimuli.  A neutral stimulus that signals an unconditioned stimulus begins to produce a response that anticipates and prepares for the unconditioned stimulus.

23 Classical Conditioning Pavlov’s Experiments Parts of Classical Conditioning –Neutral stimulus (NS) –Unconditioned stimulus (US) –Unconditioned response (UR) –Conditioned stimulus (CS) –Conditioned response (CR)

24 Classical Conditioning  Unconditioned Stimulus (US): a stimulus that unconditionally (automatically and naturally) triggers a response.  Unconditioned Response (UR): an unlearned, naturally occurring response to the unconditioned stimulus, such as salivation when food is in the mouth.  Conditioned Stimulus (CS): an originally irrelevant stimulus that, after association with an unconditioned stimulus, comes to trigger a conditioned response.  Conditioned Response (CR): a learned response to a previously neutral conditioned stimulus.

25 Classical Conditioning Pavlov’s Experiments

30 Acquisition  Acquisition is the initial stage in classical conditioning, in which an association between a neutral stimulus and an unconditioned stimulus takes place. 1. In most cases, for conditioning to occur, the neutral stimulus needs to come before the unconditioned stimulus. 2. The time in between the two stimuli should be about half a second.

31 Classical Conditioning

33 The larger lesson: Conditioning helps an animal survive and reproduce by responding to cues that help it gain food, avoid dangers, locate mates, and produce offspring (Hollis, 1997).

34 Classical Conditioning Acquisition Higher-order conditioning

35 Higher-Order Conditioning Higher-order conditioning a procedure in which the conditioned stimulus in one conditioning experience is paired with a new neutral stimulus, creating a second (often weaker) conditioned stimulus. For example, an animal that has learned that a tone predicts food might then learn that a light predicts the tone and begin responding to the light alone. (Also called second-order conditioning.)

36 Classical Conditioning Extinction and Spontaneous Recovery Extinction  When the US (food) does not follow the CS (tone), the CR (salivation) begins to decrease and eventually extinguishes.

37 Extinction & Spontaneous Recovery Spontaneous recovery After a rest period, an extinguished CR (salivation) spontaneously recovers, but if the CS (tone) persists alone, the CR becomes extinct again.

38 Classical Conditioning Generalization  Generalization - the tendency to respond to stimuli similar to the CS. Toddlers taught to fear moving cars in the street respond similarly to trucks and motorcycles.

39 Classical Conditioning Discrimination  Discrimination - the learned ability to distinguish between a conditioned stimulus and other stimuli that do not signal an unconditioned stimulus.  Being able to recognize these differences is adaptive.  Confronted by a guard dog, your heart may race; confronted by a guide dog, it probably will not.

40 Classical Conditioning Pavlov’s Legacy Classical conditioning applies to other organisms Showed how to study a topic scientifically

41 Classical Conditioning Classical conditioning –Ivan Pavlov –John B. Watson –Behaviorism

42 Classical Conditioning Pavlov’s Legacy: Applications of Classical Conditioning John Watson and Baby Albert

43 Little Albert In Watson and Rayner’s experiments, “Little Albert” learned to fear a white rat after repeatedly experiencing a loud noise as the rat was presented. In this experiment, what was the US? The UR? The NS? The CS? The CR? ANSWER: The US was the loud noise; the UR was the fear response; the NS was the rat before it was paired with the noise; the CS was the rat after pairing; the CR was fear.

44 Applications of Classical Conditioning - John B. Watson  Watson used classical conditioning procedures to develop advertising campaigns for a number of organizations, including Maxwell House, making the “coffee break” an American custom.

45 Applications of Classical Conditioning 1. Alcoholics may be conditioned (aversively) by reversing their positive associations with alcohol. 2. Through classical conditioning, a drug (plus its taste) that affects the immune response may cause the taste of the drug alone to invoke the immune response.

46 Module 27: Operant Conditioning

47 What is Operant Conditioning? A type of learning in which behavior is strengthened if followed by a reinforcer or diminished if followed by a punisher. Reinforcer - a condition (involving either the presentation or removal of a stimulus) that occurs after a response and strengthens that response.

48 Operant Conditioning Classical Conditioning –Respondent behavior Operant conditioning –Actions associated with consequences –Operant behavior

49 Skinner’s Experiments

50 Edward Thorndike’s Law of Effect This law states that rewarded behavior is likely to occur again. B.F. Skinner –Behavioral technology –Behavior control –These principles also enabled him to teach pigeons such unpigeon-like behaviors as walking in a figure 8, playing Ping-Pong, and keeping a missile on course by pecking at a screen target.

51 Skinner’s Experiments Operant Chamber (Skinner Box) Reinforcement  The operant chamber, or Skinner box, comes with a bar or key that an animal manipulates to obtain a reinforcer like food or water.  The bar or key is connected to devices that record the animal’s response. Inside the box, the rat presses a bar for a food reward. Outside, a measuring device (not shown here) records the animal’s accumulated responses.

52 Shaping  Shaping is the operant conditioning procedure in which reinforcers guide behavior toward the desired target behavior through successive approximations. Using a method of successive approximations, with a food reward for each small step (hopping up on the piano bench, putting her paws on the keys, actually making sounds), this dog was taught to “play” the piano, and now does so frequently.
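
(Illustrative aside, not from the slides: shaping can be pictured as a simple loop that reinforces any response meeting the current criterion and then moves the criterion a step closer to the target. The Python sketch below is a hypothetical toy model; the responses, step size, and target value are invented for illustration.)

    import random

    def shape(target, step=1.0, trials=30):
        """Toy shaping loop: reinforce responses that meet the current criterion,
        then raise the criterion toward the target (successive approximations)."""
        criterion, best = 0.0, 0.0
        for t in range(trials):
            response = best + random.uniform(-1.0, 2.0)   # behavior varies around its best level so far
            if response >= criterion:                     # meets the current approximation of the target
                print(f"trial {t}: response {response:.1f} reinforced (criterion {criterion:.1f})")
                best = max(best, response)
                criterion = min(criterion + step, target) # raise the bar toward the target behavior
            else:
                print(f"trial {t}: response {response:.1f} not reinforced")

    shape(target=10.0)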

53 Skinner’s Experiments Shaping Behavior –Discriminative stimulus – in operant conditioning, a stimulus that elicits a response after association with reinforcement (in contrast to related stimuli not associated with reinforcement)

54-58 Skinner’s Experiments Types of Reinforcers Reinforcer (always increases a behavior) –Positive reinforcement –Negative reinforcement

59 Types of Reinforcers

60 Skinner’s Experiments Types of Reinforcers: Primary and Secondary Reinforcers Primary reinforcer Conditioned reinforcer –Secondary reinforcer Immediate vs delayed reinforcers

61 Primary & Secondary Reinforcers 1. Primary Reinforcer: an innately reinforcing stimulus like food or drink; primary reinforcers are unlearned. 2. Conditioned/Secondary Reinforcer: a learned reinforcer that gets its reinforcing power through association with a primary reinforcer. Ex. - for a rat in a Skinner box, a light that signals food delivery.

62 Immediate & Delayed Reinforcers 1. Immediate Reinforcer: a reinforcer that occurs instantly after a behavior, e.g., a rat gets a food pellet for a bar press. 2. Delayed Reinforcer: a reinforcer that is delayed in time for a certain behavior, e.g., a paycheck that comes at the end of the week. We may be inclined to go for small immediate reinforcers (watching TV) rather than large delayed reinforcers (getting an A in a course), which require consistent study.

63 Skinner’s Experiments Reinforcement Schedules Continuous reinforcement Partial (intermittent) reinforcement Schedules –Fixed-ratio schedule –Variable-ratio schedule –Fixed-interval schedule –Variable-interval schedule

64 Reinforcement Schedules 1. Continuous Reinforcement: reinforces the desired response each time it occurs. – Learning occurs rapidly, which makes this the best choice for mastering a behavior, but extinction also occurs rapidly. 2. Partial (intermittent) Reinforcement: reinforces a response only part of the time. – Learning is slower to appear, but resistance to extinction is greater than with continuous reinforcement. – Gambling machines and lottery tickets reward gamblers the same way.

65 Ratio Schedules 1. Fixed-ratio schedule: reinforces behavior after a set number of responses; response rate is usually high (2nd highest).  Ex. Coffee shops may reward us with a free drink after every 10 purchased. 2. Variable-ratio schedule: provides reinforcers after a seemingly unpredictable number of responses. – This is what slot-machine players and fly-casting anglers experience (unpredictable reinforcement), and what makes gambling and fly fishing so hard to extinguish even when both are getting nothing for something. Response rate is the highest.
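
(Illustrative aside, not from the slides: one way to see the difference between fixed-ratio and variable-ratio reinforcement is to simulate which responses get reinforced. A minimal Python sketch follows; the ratio of 10 and the 200 responses are arbitrary assumptions, not values from the text.)

    import random

    def fixed_ratio(responses, ratio=10):
        """Reinforce every ratio-th response (e.g., a free drink after every 10 purchases)."""
        return [(i + 1) % ratio == 0 for i in range(responses)]

    def variable_ratio(responses, mean_ratio=10):
        """Reinforce after an unpredictable number of responses (slot-machine style)."""
        outcomes, next_payoff = [], random.randint(1, 2 * mean_ratio)
        for i in range(1, responses + 1):
            if i >= next_payoff:
                outcomes.append(True)
                next_payoff = i + random.randint(1, 2 * mean_ratio)
            else:
                outcomes.append(False)
        return outcomes

    print("fixed-ratio reinforcers:   ", sum(fixed_ratio(200)))    # always 20 (200 / 10)
    print("variable-ratio reinforcers:", sum(variable_ratio(200))) # varies from run to run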

66 Interval Schedules Time is of the essence … 1. Fixed-interval schedule: reinforces a response only after a specified time has elapsed; the time period between rewards remains constant. Results in the lowest response rate.  People check more frequently for the mail as the delivery time approaches.  A hungry child jiggles the Jell-O more often to see if it has set.  Pigeons peck keys more rapidly as the time for reinforcement draws nearer. 2. Variable-interval schedule: reinforces the first response after varying time intervals. Most unpredictable of all; produces slow, steady responding. (e.g., pop quiz; fishing; rechecking e-mail or Facebook)

67 Skinner’s Experiments Reinforcement Schedules

77 Reinforcement vs Punishment Reinforcement increases a behavior; punishment does the opposite. Punishment decreases a behavior!

78 Skinner’s Experiments Punishment - 2 Types  Defined: an event that tends to decrease the behavior that it follows.  Positive punishment: presenting an unpleasant (aversive) stimulus (spanking) after a response/behavior. The aversive stimulus decreases the chances that the response/behavior will recur.  Negative punishment: removing a reinforcing stimulus (child’s allowance) after a response/behavior. This removal decreases the chance that the response/behavior will recur.

79-83 Skinner’s Experiments Punishment –Positive punishment –Negative punishment

84 Skinner’s Experiments Punishment Negatives of using punishment –Punished behavior is suppressed, not forgotten –Punishment teaches discrimination –Punishment can teach fear –Physical punishment may increase aggression

85 Skinner’s Legacy

86 Controversies surrounding Skinner’s Operant Conditioning

87 Module 28: Operant Conditioning’s Applications, and Comparison to Classical Conditioning

88 Application of Operant Conditioning

89 At school - electronic adaptive quizzing. In sports - reinforcement principles can enhance athletic performance. At work - rewards are more likely to increase productivity. At home - in children, reinforcing good behavior increases its occurrence, and ignoring unwanted behavior decreases its occurrence. For self-improvement.

90 Contrasting Classical and Operant Conditioning

107 Module 29: Biology, Cognition, and Learning

108 Biological Constraints on Conditioning

113 Biological Constraints on Conditioning Limits on Classical Conditioning John Garcia –Conditioned Taste Aversion –Biologically primed associations Natural Selection and Learning –Genetic predisposition

114 Biological Predispositions  Pavlov and Watson believed that the laws of learning were similar for all animals.  However, behaviorists later suggested that learning is constrained by an animal’s biology.  Behaviorist John Garcia was among those who challenged the prevailing idea that any association can be learned equally well.  Each species is biologically prepared to learn associations that enhance its survival:  humans’ fear of spiders and snakes; rats’ aversion to tastes associated with nausea.  Training that attempts to override biological constraints will probably not endure, because animals will revert to predisposed patterns.

115 Biological Constraints on Conditioning Limits on Classical Conditioning

118 Even humans can develop classically conditioned nausea.

119 Biological Constraints on Conditioning Limits on Operant Conditioning Naturally adaptive behaviors - biological constraints predispose organisms to learn associations that are naturally adaptive. Instinctive drift - when animals revert to their biologically predisposed patterns. Natural athletes - animals can most easily learn and retain behaviors that draw on their biological predispositions, such as horses’ inborn ability to move around obstacles with speed and agility.

120 Cognition’s Influence on Conditioning

121 Cognition’s Influence on Conditioning Cognitive Processes and Classical Conditioning Predictability of an event –Expectancy Stimulus associations

122 Cognitive Processes  Early behaviorists believed that the learned behaviors of various animals could be reduced to mindless mechanisms.  However, later behaviorists (Robert Rescorla and Allan Wagner) suggested that animals learn the predictability of a stimulus, meaning they learn an expectancy or awareness of a stimulus (Rescorla, 1988).  Organisms learn what to expect.  This principle helps explain why classical conditioning treatments that ignore cognition often have limited success.
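
(Illustrative aside, not from the slides: the claim that organisms "learn what to expect" can be made concrete with a simple error-correcting update in the spirit of Rescorla and Wagner's work, though their actual model is more elaborate. In the hypothetical Python sketch below, the learning rate of 0.2 and the 20-trial blocks are arbitrary assumptions.)

    def update(expectancy, us_present, learning_rate=0.2):
        """Nudge the learned expectancy that the US will follow the CS toward what actually happened."""
        target = 1.0 if us_present else 0.0
        return expectancy + learning_rate * (target - expectancy)

    expectancy = 0.0
    for trial in range(20):                     # acquisition: tone is repeatedly followed by food
        expectancy = update(expectancy, us_present=True)
    print("after acquisition:", round(expectancy, 2))   # rises close to 1.0

    for trial in range(20):                     # extinction: tone keeps occurring without food
        expectancy = update(expectancy, us_present=False)
    print("after extinction:", round(expectancy, 2))    # falls back near 0.0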

123 Cognition’s Influence on Conditioning Cognitive Processes and Operant Conditioning Latent learning –Cognitive map Insight Intrinsic motivation Extrinsic motivation

124 Latent Learning: Studies Tolman (1948) allowed his rats to wander freely about a maze for several hours, during which they received no reinforcement at all. Yet, despite the lack of reinforcement (which behaviorists supposed to be essential for maze learning), the rats later were able to navigate the maze for a food reward more quickly than rats that had never seen the maze before. Tolman called this latent learning.

125 Cognition & Operant Conditioning  Evidence of cognitive processes during operant learning comes from rats exploring a maze, in which they navigate the maze without an obvious reward.  Rats seem to develop cognitive maps, or mental representations, of the layout of the maze (environment).  Walking through your house in the dark or giving directions to someone are examples of using cognitive maps.  Cognitive maps were proposed by Edward Tolman (1886-1959).  A cognitive map, Tolman argued, was the only way to account for a rat quickly selecting an alternative route in a maze when the preferred path to the goal was blocked.  This supported his claim that learning was mental, not merely behavioral (as Skinner held).

126 Influences on Conditioning

131 Learning and Personal Control

132 Coping Problem-focused coping Emotion-focused coping

133 Learning and Personal Control We need to learn to cope with the problems in our lives by alleviating the stress they cause with emotional, cognitive, or behavioral methods. Problem-focused coping: attempting to alleviate stress directly, by changing the stressor or the way we interact with that stressor. Emotion-focused coping: attempting to alleviate stress by avoiding or ignoring a stressor and attending to emotional needs related to one’s stress reaction. – We turn to emotion-focused coping when we cannot, or believe we cannot, change a situation.

134 Learning and Personal Control Learned Helplessness Learned helplessness (Martin Seligman) - the hopelessness and passive resignation an animal or human learns when unable to avoid repeated aversive events.

135 Learning and Personal Control Learned Helplessness Learned helplessness (Martin Seligman) Ex. A famous study of elderly nursing home residents with little perceived control over their activities found that they declined faster and died sooner than those given more control (Rodin, 1986).

138 Learning and Personal Control Learned Helplessness: Internal Versus External Locus of Control External locus of control - the perception that chance or outside forces beyond our personal control determine our fate. Internal locus of control - the perception that you control your own fate. “Internals” have achieved more in school and work, acted more independently, enjoyed better health, and felt less depressed than did “externals” (Lefcourt, 1982; Ng et al., 2006). Moreover, they were better at delaying gratification and coping with various stressors, including marital problems (Miller & Monge, 1986).

139 Learning and Personal Control Learned Helplessness: Depleting and Strengthening Self-Control Self-control - the ability to control impulses and delay short-term gratification for greater long-term rewards. Self-control often fluctuates. Like a muscle, self-control temporarily weakens after an exertion, replenishes with rest, and becomes stronger with exercise. Exercising willpower temporarily depletes the mental energy needed for self-control on other tasks (Gailliott & Baumeister). In one experiment, hungry people who had resisted the temptation to eat chocolate chip cookies abandoned a tedious task sooner than those who had not resisted the cookies.

140 Module 30: Learning by Observation

141 Mirrors and Imitation in the Brain

142 Observational learning –Social learning –Modeling –Bandura’s Bobo Doll Experiment

143 Learning By Observation Observational Learning (also called social learning) - higher animals, especially humans, learn without direct experience, by watching and imitating others. Modeling - the process of observing and imitating a specific behavior. – We learn our native languages and various other specific behaviors by observing and imitating others: fads, fashions, habits…

144 Mirrors and Imitation in the Brain Picture this scene from an experiment by Albert Bandura, the pioneering researcher of observational learning (Bandura et al., 1961): A preschool child works on a drawing. An adult in another part of the room is building with Tinkertoys. As the child watches, the adult gets up and for nearly 10 minutes pounds, kicks, and throws around the room a large inflated Bobo doll, yelling, “Sock him in the nose.… Hit him down.… Kick him.”

145 Mirrors and Imitation in the Brain

146 By watching a model, we experience vicarious reinforcement or vicarious punishment, and we learn to anticipate a behavior’s consequences in situations like those we are observing.

147 Mirrors and Imitation in the Brain Mirror neurons - frontal lobe neurons that some scientists believe fire when performing certain actions or when observing another doing so. The brain’s mirroring of another’s action may enable imitation and empathy.

148 Mirrors and Imitation in the Brain Cognitive imitation Monkey A (far left) watched Monkey B touch four pictures on a display screen in a certain order to gain a banana. Monkey A learned to imitate that order, even when shown the same pictures in a different configuration (Subiaul et al., 2004).

149 Applications of Observational Learning

150 Applications of Observational Learning Prosocial versus Antisocial Effects Prosocial effects - positive, helpful effects. People who exemplify nonviolent, helpful behavior can also prompt similar behavior in others. India’s Mahatma Gandhi and America’s Martin Luther King, Jr., both drew on the power of modeling, making nonviolent action a powerful force for social change in both countries. Parents are also powerful models.

151 Antisocial Effects Observational learning may also have antisocial effects. This helps us understand why abusive parents might have aggressive children, and why many men who beat their wives had wife-battering fathers (Stith et al., 2000). While watching TV and videos, children may “learn” that bullying is an effective way to control others, that free and easy sex brings pleasure without later misery or disease, or that men should be tough and women gentle. Researchers studied more than 400 third- to fifth-graders. After controlling for existing differences in hostility and aggression, the researchers reported increased aggression in those heavily exposed to violent TV, videos, and video games (Gentile et al., 2004).

152 The End

153 Teacher Information Types of Files – This presentation has been saved as a “basic” PowerPoint file. While this file format placed a few limitations on the presentation, it ensured the file would be compatible with the many versions of PowerPoint teachers use. To add functionality to the presentation, teachers may want to save the file for their specific version of PowerPoint. Animation – Once again, to ensure compatibility with all versions of PowerPoint, none of the slides are animated. To increase student interest, it is suggested teachers animate the slides wherever possible. Adding slides to this presentation – Teachers are encouraged to adapt this presentation to their personal teaching style. To help keep a sense of continuity, blank slides which can be copied and pasted to a specific location in the presentation follow this “Teacher Information” section.

154 Teacher Information Unit Coding – Just as Myers’ Psychology for AP 2e is color coded to the College Board AP Psychology Course Description (Acorn Book) Units, so are these PowerPoints. The primary background color of each slide indicates the specific textbook unit. Psychology’s History and Approaches Research Methods Biological Bases of Behavior Sensation and Perception States of Consciousness Learning Cognition Motivation, Emotion, and Stress Developmental Psychology Personality Testing and Individual Differences Abnormal Psychology Treatment of Abnormal Behavior Social Psychology

155 Teacher Information Hyperlink Slides - This presentation contains two types of hyperlinks. Hyperlinks can be identified by the text being underlined and a different color (usually purple). – Unit subsections hyperlinks: Immediately after the unit title and module title slide, a page can be found listing all of the unit’s subsections. While in slide show mode, clicking on any of these hyperlinks will take the user directly to the beginning of that subsection. – Bold print term hyperlinks: Every bold print term from the unit is included in this presentation as a hyperlink. While in slide show mode, clicking on any of the hyperlinks will take the user to a slide containing the formal definition of the term. Clicking on the “arrow” in the bottom left corner of the definition slide will take the user back to the original point in the presentation. These hyperlinks were included for teachers who want students to see or copy down the exact definition as stated in the text. Most teachers prefer the definitions not be included to prevent students from only “copying down what is on the screen” and not actively listening to the presentation. For teachers who continually use the Bold Print Term Hyperlinks option, please contact the author using the email address on the next slide to learn a technique to expedite returning to the original point in the presentation.

156 Teacher Information Continuity slides – Throughout this presentation there are slides, usually of graphics or tables, that build on one another. These are included for three purposes. By presenting information in small chunks, students will find it easier to process and remember the concepts. By continually changing slides, students will stay interested in the presentation. To facilitate class discussion and critical thinking. Students should be encouraged to think about “what might come next” in the series of slides. Please feel free to contact me at kkorek@germantown.k12.wi.us with any questions, concerns, suggestions, etc. regarding these presentations. Kent Korek Germantown High School Germantown, WI 53022 262-253-3400 kkorek@germantown.k12.wi.us

157 Division title (red print) subdivision title ( blue print) xxx –xxx

158 Division title (red print in text) subdivision title ( blue print in text) Use this slide to add a table, chart, clip art, picture, diagram, or video clip. Delete this box when finished

159 Definition Slide = add definition here

160 Definition Slides

161 Learning = the process of acquiring new and relatively enduring information or behaviors.

162 Habituation = an organism’s decreasing response to a stimulus with repeated exposure to it.

163 Associative Learning = learning that certain events occur together. The events may be two stimuli (as in classical conditioning) or a response and its consequence (as in operant conditioning).

164 Stimulus = any event or situation that evokes a response.

165 Cognitive Learning = the acquisition of mental information, whether by observing events, by watching others, or through language.

166 Classical Conditioning = a type of learning in which one learns to link two or more stimuli and anticipate events.

167 Behaviorism = the view that psychology (1) should be an objective science that (2) studies behavior without reference to mental processes. Most research psychologists today agree with (1) but not with (2).

168 Neutral Stimulus = in classical conditioning, a stimulus that elicits no response before conditioning.

169 Unconditioned Response (UR) = in classical conditioning, an unlearned, naturally occurring response to the unconditioned stimulus (US), such as salivation when food is in the mouth.

170 Unconditioned Stimulus (US) = in classical conditioning, a stimulus that unconditionally – naturally and automatically – triggers a response (UR).

171 Conditioned Response (CR) = in classical conditioning, a learned response to a previously neutral (but now conditioned) stimulus (CS).

172 Conditioned Stimulus (CS) = in classical conditioning, an originally irrelevant stimulus that, after association with an unconditioned stimulus (US), comes to trigger a conditioned response (CR).

173 Acquisition = in classical conditioning, the initial stage, when one links a neutral stimulus and an unconditioned stimulus so that the neutral stimulus begins triggering the conditioned response. In operant conditioning, the strengthening of a reinforced response.

174 Higher-Order Conditioning = a procedure in which the conditioned stimulus in one conditioning experience is paired with a new neutral stimulus, creating a second (often weaker) conditioned stimulus. For example, an animal that has learned that a tone predicts food might then learn that a light predicts the tone and begin responding to the light alone. (Also called second-order conditioning.)

175 Extinction = the diminishing of a conditioned response; occurs in classical conditioning when an unconditioned stimulus (US) does not follow a conditioned stimulus (CS); occurs in operant conditioning when a response is no longer reinforced.

176 Spontaneous Recovery = the reappearance, after a pause, of an extinguished conditioned response.

177 Generalization = the tendency, once a response has been conditioned, for stimuli similar to the conditioned stimulus to elicit similar responses.

178 Discrimination = in classical conditioning, the learned ability to distinguish between a conditioned stimulus and stimuli that do not signal an unconditioned stimulus.

179 Operant Conditioning = a type of learning in which behavior is strengthened if followed by a reinforcer or diminished if followed by a punisher.

180 Law of Effect = Thorndike’s principle that behaviors followed by favorable consequences become more likely, and that behaviors followed by unfavorable consequences become less likely.

181 Operant Chamber = in operant conditioning research, a chamber (also known as a Skinner Box) containing a bar or key that an animal can manipulate to obtain a food or water reinforcer; attached devices record the animal’s rate of bar pressing or key pecking.

182 Reinforcement = in operant conditioning, any event that strengthens the behavior it follows.

183 Shaping = an operant conditioning procedure in which reinforcers guide behavior toward closer and closer approximations of the desired behavior.

184 Discriminative Stimulus = in operant conditioning, a stimulus that elicits a response after association with reinforcement (in contrast to related stimuli not associated with reinforcement).

185 Positive Reinforcement = increasing behaviors by presenting positive reinforcers. A positive reinforcer is any stimulus that, when presented after a response, strengthens the response.

186 Negative Reinforcement = increases behaviors by stopping or reducing negative stimuli, such as shock. A negative reinforcer is any stimulus that, when removed after a response, strengthens the response. Note: negative reinforcement is NOT punishment.

187 Primary Reinforcer = an innately reinforcing stimulus, such as one that satisfies a biological need.

188 Conditioned Reinforcer = a stimulus that gains its reinforcing power through its association with a primary reinforcer; also known as a secondary reinforcer.

189 Reinforcement Schedule = a pattern that defines how often a desired response will be reinforced.

190 Continuous Reinforcement = reinforcing the desired response every time it occurs.

191 Partial (intermittent) Reinforcement = reinforcing a response only part of the time; results in slower acquisition of a response but much greater resistance to extinction than does continuous reinforcement.

192 Fixed-Ratio Schedule = in operant conditioning, a reinforcement schedule that reinforces a response only after a specific number of responses.

193 Variable-Ratio Schedule = in operant conditioning, a reinforcement schedule that reinforces a response after an unpredictable number of responses.

194 Fixed-Interval Schedule = in operant conditioning, a reinforcement schedule that reinforces a response only after a specific time has elapsed.

195 Variable-Interval Schedule = in operant conditioning, a reinforcement schedule that reinforces a response at unpredictable time intervals.

196 Punishment = an event that tends to decrease the behavior that it follows.

197 Biofeedback = a system for electronically recording, amplifying, and feeding back information regarding a subtle physiological state, such as blood pressure or muscle tension.

198 Respondent Behavior = behavior that occurs as an automatic response to some stimulus.

199 Operant Behavior = behavior that operates on the environment, producing consequences.

200 Cognitive Map = a mental representation of the layout of one’s environment. For example, after exploring a maze, rats act as if they have learned a cognitive map of it.

201 Latent Learning = learning that occurs but is not apparent until there is an incentive to demonstrate it.

202 Insight = a sudden realization of a problem’s solution.

203 Intrinsic Motivation = a desire to perform a behavior effectively for its own sake.

204 Extrinsic Motivation = a desire to perform a behavior to receive promised rewards or avoid threatened punishment.

205 Coping = alleviating stress using emotional, cognitive, or behavioral methods.

206 Problem-Focused Coping = attempting to alleviate stress directly – by changing the stressor or the way we interact with that stressor.

207 Emotion-Focused Coping = attempting to alleviate stress by avoiding or ignoring a stressor and attending to emotional needs related to one’s stress reaction.

208 Learned Helplessness = the hopelessness and passive resignation an animal or human learns when unable to avoid repeated aversive events.

209 External Locus of Control = the perception that chance or outside forces beyond our personal control determine our fate.

210 Internal Locus of Control = the perception that you control your own fate.

211 Self-Control = the ability to control impulses and delay short-term gratification for greater long-term rewards.

212 Observational Learning = learning by observing others. Also called social learning.

213 Modeling = the process of observing and imitating a specific behavior.

214 Mirror Neurons = frontal lobe neurons that some scientists believe fire when performing certain actions or when observing another doing so. The brain’s mirroring of another’s action may enable imitation and empathy.

215 Prosocial Behavior = positive, constructive, helpful behavior. The opposite of antisocial behavior.

