Quiz #3 Last class, we talked about 6 techniques for self-control. Name and briefly describe 2 of those techniques. 1

Schedules of Reinforcement 2

Schedule of Reinforcement Delivery of reinforcement Continuous reinforcement (CRF) Fairly consistent patterns of behaviour Cumulative recorder 3

Cumulative Record Use a cumulative recorder No response: flat line Response: slope Cumulative record 4

Cumulative Recorder (figure: paper strip, pen, roller) 5

Recording Responses 6

The Accumulation of the Cumulative Record (figure: VI 25 record) 7
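The flat-line/slope description above can be made concrete with a short sketch. The helper below is illustrative (its name and shape are assumptions, not course material): it tallies a cumulative record from hypothetical response timestamps, so flat stretches mean no responding and steep stretches mean a high response rate.

```python
# Sketch: building a cumulative record from response timestamps.
# All names here are illustrative, not from the slides.

def cumulative_record(response_times, t_max, step=1.0):
    """Return (time, cumulative response count) pairs.

    A flat stretch means no responding; a steep stretch means
    a high response rate -- exactly what the recorder's pen draws.
    """
    points = []
    t = 0.0
    while t <= t_max:
        # Count every response that has occurred up to time t.
        count = sum(1 for rt in response_times if rt <= t)
        points.append((t, count))
        t += step
    return points

# Three quick responses early, then a long pause, then one more:
record = cumulative_record([1.0, 1.5, 2.0, 8.0], t_max=10)
```

Plotting `record` would reproduce the classic shape: a steep rise over the first two seconds, a flat line until t = 8, then one final step.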

Schedules: 4 Basic: Fixed Ratio Variable Ratio Fixed Interval Variable Interval Others and mixes (concurrent) 8

Fixed Ratio (FR) N responses required; e.g., FR 25 CRF = FR 1 Rise-and-run Postreinforcement pause Steady, rapid rate of response Ratio strain 9 (figure labels: no responses, responses, reinforcement, "pen" resetting, slope)

Variable Ratio (VR) Varies around mean number of responses; e.g., VR 25 Rapid, steady rate of response Short, if any postreinforcement pause Longer schedule --> longer pause Never know which response will be reinforced 10

Fixed Interval (FI) Depends on time; e.g., FI 25 Postreinforcement pause Scalloping Time estimation Clock doesn’t start until reinforcer given 11

Variable Interval (VI) Varies around mean time; e.g., VI 25 Steady, moderate response rate Don’t know when time has elapsed Clock doesn’t start until reinforcer given 12
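The four basic schedules above can be sketched as simple reinforcement predicates: each answers "given what has happened since the last reinforcer, is the next one due?" Everything here (function names, the use of a Gaussian spread for the variable schedules) is an assumption for illustration, not material from the slides.

```python
import random

def fixed_ratio(n):
    # FR n: reinforce after exactly n responses; FR 1 is CRF.
    return lambda responses, elapsed: responses >= n

def variable_ratio(mean_n, rng=random):
    # VR mean_n: the required count varies around mean_n
    # (Gaussian spread is an arbitrary modelling choice).
    required = max(1, round(rng.gauss(mean_n, mean_n / 4)))
    return lambda responses, elapsed: responses >= required

def fixed_interval(t):
    # FI t: the first response after t seconds is reinforced.
    return lambda responses, elapsed: elapsed >= t and responses >= 1

def variable_interval(mean_t, rng=random):
    # VI mean_t: like FI, but the interval varies around mean_t.
    required = max(0.0, rng.gauss(mean_t, mean_t / 4))
    return lambda responses, elapsed: elapsed >= required and responses >= 1

fr25 = fixed_ratio(25)
# The 24th response is not reinforced; the 25th is.
```

Note that the ratio schedules ignore elapsed time and the interval schedules still require at least one response: time alone never delivers the reinforcer (that would be a noncontingent FT/VT schedule, covered below).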

Response Rates 13

Duration Schedules Continuous responding for some time period to receive reinforcement Fixed duration (FD) Period of duration is a set time period Variable duration (VD) Period of duration varies around a mean 14

Differential Rate Schedules Differential reinforcement of low rates (DRL) Reinforcement only if X amount of time has passed since last response Sometimes "superstitious behaviours" occur Differential reinforcement of high rates (DRH) Reinforcement only if more than X responses in a set time Or, reinforcement if less than X amount of time has passed since last response 15
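The two differential-rate contingencies can be sketched the same way as the basic schedules; the names and signatures below are hypothetical, chosen only to make the time-versus-count logic explicit.

```python
# Illustrative sketch of the differential-rate contingencies
# (names are assumptions, not from the slides).

def drl(min_gap):
    # DRL: a response is reinforced only if at least min_gap seconds
    # have passed since the previous response (a *low* response rate).
    return lambda seconds_since_last: seconds_since_last >= min_gap

def drh(min_responses, window):
    # DRH: reinforcement only if more than min_responses occur
    # within `window` seconds (a *high* response rate).
    return lambda responses, elapsed: elapsed <= window and responses > min_responses
```

The contrast with FR/FI is that here the *rate* of responding is the reinforced dimension, not the count or the interval alone.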

Noncontingent Schedules Reinforcement delivery not contingent upon a response, but on passage of time Fixed time (FT) Reinforcer given after set time elapses Variable time (VT) Reinforcer given after some time varying around a mean 16

Stretching the Ratio Increasing the number of responses e.g., FR 5 --> FR 50 Extinction problem Use shaping Increase in gradual increments e.g., FR 5, FR 8, FR 14, FR 21, FR 35, FR 50 “Low” or “high” schedules 17

Extinction CRF (FR 1) easier to extinguish than intermittent schedules (anything but FR 1) Partial reinforcement effect (PRE) High schedules harder to extinguish than low Variable schedules harder to extinguish than fixed 18

Discrimination Hypothesis Difficult to discriminate between extinction and intermittent schedule High schedules more like extinction than low schedules e.g., CRF vs. FR 50 19

Frustration Hypothesis Non-reinforcement for response is frustrating On CRF every response reinforced, so no frustration Frustration grows continually during extinction Stop responding, stop frustration (neg. reinf.) Any intermittent schedule: always some nonreinforced responses Responding leads to reinforcer (pos. reinf.) Frustration = S+ for reinforcement 20

Sequential Hypothesis Response followed by reinf. or nonreinf. On intermittent schedules, nonreinforced responses are S+ for eventual delivery of reinforcer High schedules increase resistance to extinction because many nonreinforced responses in a row lead to reinforcement Extinction similar to high schedule 21

Response Unit Hypothesis Think in terms of behavioural “units” FR1: 1 response = 1 unit --> reinforcement FR2: 2 responses = 1 unit --> reinforcement Not “response-failure, response-reinforcer” but “response-response-reinforcer” Says PRE is an artifact 22

Mowrer & Jones (1945) Response unit hypothesis More responses in extinction on higher schedules disappears when considered as behavioural units (figure: number of responses/units during extinction across FR 1, FR 2, FR 3, FR 4, contrasting absolute number of responses with number of behavioural units)
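The recount behind the response unit hypothesis is simple arithmetic. The extinction counts below are invented solely to show how dividing by unit size can flatten the apparent partial reinforcement effect; they are not Mowrer and Jones's actual data.

```python
# Hypothetical illustration of the response-unit recount:
# divide the absolute number of extinction responses by the
# unit size (the FR value). Numbers are invented.
extinction_responses = {1: 40, 2: 70, 3: 96, 4: 120}

units = {fr: total / fr for fr, total in extinction_responses.items()}
# Absolute responses rise with the FR schedule (40 -> 120),
# but counted as behavioural units (responses / FR) the
# differences shrink (40 -> 30), so the PRE looks like an artifact
# of counting responses instead of units.
```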

Complex Schedules Multiple Mixed Chain Tandem Cooperative 24

Choice Two-key procedure Concurrent schedules of reinforcement Each key associated with separate schedule Distribution of time and behaviour The measure of choice and preference 25

Concurrent Ratio Schedules Two ratio schedules Schedule that gives most rapid reinforcement chosen exclusively Rarely used in choice studies 26

Concurrent Interval Schedules Maximize reinforcement Must shift between alternatives Allows for study of choice behaviour 27

Interval Schedules FI-FI Steady-state responding Less useful/interesting VI-VI Not steady-state responding Respond to both alternatives Sensitive to rate of reinforcement Most commonly used to study choice 28

Alternation and the Changeover Response Maximize reinforcers from both alternatives Frequent shifting becomes reinforcing Simple alternation Concurrent superstition 29

Changeover Delay COD Prevents rapid switching Time delay after “changeover” before reinforcement possible 30

Herrnstein’s (1961) Experiment Concurrent VI-VI schedules Overall rates of reinforcement held constant 40 reinforcers/hour between two alternatives 31

The Matching Law The proportion of responses directed toward one alternative should equal the proportion of reinforcers delivered by that alternative. 32

Key 1: VI 3-min, Rein/hour = 20, Resp/hour = 2000. Key 2: VI 3-min, Rein/hour = 20, Resp/hour = 2000. Proportional rate of reinforcement (R1 = reinf. on key 1, R2 = reinf. on key 2): R1/(R1+R2) = 20/(20+20) = 0.5. Proportional rate of response (B1 = resp. on key 1, B2 = resp. on key 2): B1/(B1+B2) = 2000/(2000+2000) = 0.5. MATCH!!!

Key 1: VI 9-min, Rein/hour = 6.7, Resp/hour = 250. Key 2: VI 1.8-min, Rein/hour = 33.3, Resp/hour = 3000. Proportional rate of reinforcement: R1/(R1+R2) = 6.7/(6.7+33.3) = 0.17. Proportional rate of response: B1/(B1+B2) = 250/(250+3000) = 0.08. NO MATCH (but close…)
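The two worked examples can be checked with a few lines of arithmetic; this sketch uses only the numbers from the slides (the helper name is the only invention).

```python
def proportion(x1, x2):
    # Proportion of the total accounted for by the first alternative.
    return x1 / (x1 + x2)

# Equal VI 3-min schedules: response and reinforcement
# proportions match exactly.
assert proportion(20, 20) == proportion(2000, 2000) == 0.5

# VI 9-min vs VI 1.8-min: reinforcement proportion is about 0.17,
# response proportion about 0.08 -- close, but not a perfect match
# (the birds slightly underrespond on the lean key).
reinf = proportion(6.7, 33.3)   # approx. 0.17
resp = proportion(250, 3000)    # approx. 0.08
```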

Bias Spend more time on one alternative than predicted Side preferences Biological predispositions Quality and amount 35

Varying Quality of Reinforcers Q1: quality of first reinforcer Q2: quality of second reinforcer 36

Varying Amount of Reinforcers A1: amount of first reinforcer A2: amount of second reinforcer 37

Combining Qualities and Amounts 38
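One common way to combine these factors (an addition here, not taken from the slides; textbooks vary in notation) is a ratio form of the matching law in which relative reinforcement rate, quality, and amount multiply:

```latex
% Ratio form of the matching law extended with reinforcer
% quality (Q) and amount (A), using the R and B symbols
% defined in the worked examples above.
\frac{B_1}{B_2} \;=\; \frac{R_1}{R_2} \cdot \frac{Q_1}{Q_2} \cdot \frac{A_1}{A_2}
```

On this reading, a leaner schedule can still win the animal's behaviour if its reinforcer is sufficiently better in quality or amount, which is how bias effects enter the matching framework.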

Applications Gambling Reinforcement history Economics Value of reinforcer and stretching the ratio Malingering 39