
1 Today we will focus on two basic models of learning, classical and operant conditioning. Specific topics include: * Pavlov's classical conditioning and how it might be implemented in the nervous system. * Thorndike's law of effect and its evolution into Skinner's operant conditioning. * A model: how operant conditioning operates together with basic motivations as an adaptive mechanism. * Uses and abuses of operant conditioning. * If there is time, we will weigh the contributions of behaviorism against its "overclaiming".

2 7.3 Apparatus for salivary conditioning Here is an early version of Pavlov’s apparatus for classical conditioning of the salivary response. The dog was held in a harness; sounds or lights acted as conditioned stimuli (CS), and meat powder in a dish was the unconditioned stimulus (US). The conditioned response (CR) was assessed with the aid of a tube connected to an opening in one of the animal’s salivary glands.

3 Bettman/Corbis 7.2 Ivan Petrovich Pavlov (1849–1936) Pavlov (center) in his laboratory, with some colleagues and his experimental subject.

5 The CS as a Signal Evidence suggests that the CS serves as a signal for upcoming events. Learning is less likely if the CS is simultaneous with the US or follows it. Learning occurs only if there is some contingency between the CS and US; contiguity alone, in the absence of contingency, is not enough. Nor is contingency by itself sufficient to produce learning: the new CS must provide new predictive value. This is demonstrated by the phenomenon of blocking, in which an animal does not form an association between a new CS and a US if the new CS merely accompanies a previously learned CS (and therefore provides no new information). Some theorists argue that an animal will be most likely to learn the relationship between a new CS and US if the US is unexpected or surprising.
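The "surprise" idea in the last sentence is formalized by the Rescorla–Wagner model, a standard account of conditioning not named in these slides. A minimal Python sketch (cue names, learning rate, and trial counts are invented for the example) shows how blocking falls out of error-driven learning:

```python
# Rescorla-Wagner sketch (illustrative, not from the slides): associative
# strength changes in proportion to prediction error, so a new CS paired
# with an already-predicted US gains almost no strength -- blocking.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (set of present CSs, US present?). Returns strengths."""
    V = {}
    for cues, us in trials:
        total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if us else 0.0) - total   # the "surprise" term
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Phase 1: light alone predicts food; Phase 2: light + tone predict food.
blocked = rescorla_wagner([({"light"}, True)] * 20
                          + [({"light", "tone"}, True)] * 20)
# Control: tone trained from the start, with no pre-trained light.
control = rescorla_wagner([({"light", "tone"}, True)] * 20)

# The blocked tone stays near zero; the control tone gains real strength.
print(blocked["tone"], control["tone"])
```

Because the light already predicts the US perfectly in Phase 2, the error term is near zero and the tone learns nothing, matching the behavioral result.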

10 (photos) Archives of the History of American Psychology/University of Akron

11 Neural Basis of Learning
The neural bases for learning involve diverse mechanisms, such as:
* presynaptic facilitation
* postsynaptic changes
* long-term potentiation (LTP)
* creation of new synapses
* growth of new dendritic spines
In all cases, learning depends on neural plasticity: the capacity of neurons to change the way they function as a consequence of experience. Learning can produce an increase in the amount of neurotransmitter released by the presynaptic neuron, as demonstrated even in creatures with very simple nervous systems, such as the marine mollusk Aplysia. Learning can also increase the sensitivity of the postsynaptic neuron, as seen in a particularly important form of neural plasticity, long-term potentiation (LTP). LTP increases the responsiveness of a neuron for long periods of time; it is a postsynaptic mechanism, with effects on the receiving side. If neurons A and B are both connected to neuron C, and A and B are repeatedly activated together, C will come to detect the correlation between them, eventually becoming more sensitive to the activity of both. A third form of neural plasticity is structural change: learning can lead to the creation of entirely new synapses. The creation of these synapses depends on the growth of new dendritic spines, which are the "receiving stations" for most synapses.
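The correlation detection described here (A and B firing together and strengthening their influence on C) follows a Hebbian rule. A toy Python sketch, not from the slides, with the learning rate and activity values chosen arbitrarily for illustration:

```python
# Hebbian sketch (illustrative): when presynaptic neurons A and B are
# active together with postsynaptic neuron C, the synapses A->C and B->C
# are strengthened -- a toy analogue of the LTP mechanism above.

def hebbian_update(weights, pre_activity, post_activity, rate=0.1):
    """Strengthen each synapse in proportion to pre * post activity."""
    return [w + rate * pre * post_activity
            for w, pre in zip(weights, pre_activity)]

w = [0.1, 0.1]            # synaptic strengths: A->C, B->C
for _ in range(10):       # A and B repeatedly active together
    pre = [1.0, 1.0]
    post = sum(p * wi for p, wi in zip(pre, w))  # C's summed response
    w = hebbian_update(w, pre, post)

print(w)  # both synapses have grown: C is now more sensitive to A and B
```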

14 New York Public Library
7.16 Edward L. Thorndike (1874–1949) An important early behaviorist, Thorndike was the first to formulate the law of effect.

15 7.17 Puzzle box This box is much like those used by Edward Thorndike. By stepping on a lever attached to a rope, the animal releases the latch and so unlocks the door.

18 Nina Leen/Time Pix/Getty Images
7.20 B. F. Skinner (1904–1990) Unmistakably the most influential of the learning theorists, Skinner made a sharp distinction between classical and operant conditioning.

19 7.21 Animals in operant chambers This rat is trained to press a lever for food reinforcement. Reinforcement consists of a few seconds’ access to a feeder located near the response lever.

22 Schedules of Reinforcement
Partial reinforcement: the response is reinforced only some of the time, according to a schedule of reinforcement:
* Ratio schedules: reinforcement is delivered after a set number of responses; the ratio may be fixed or variable.
* Interval schedules: reinforcement is delivered for the first response made after a given interval since the last reinforcement; the interval may be fixed or variable.
An important issue in extrinsic reinforcement is the schedule: in real life, responses are rarely reinforced every time they occur. In the laboratory, such partial-reinforcement conditions are created by means of various reinforcement schedules (fixed ratio, variable ratio, fixed interval, and variable interval). Responses acquired under partial reinforcement are much more resistant to extinction than those acquired under continuous reinforcement.
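The four schedule types are rule-like enough to sketch in code. The following Python is an illustration, not from the slides; the function names, ratios, and intervals are invented for the example. Each function decides whether a given response earns a reinforcer:

```python
# Sketch of the four reinforcement schedules: ratio schedules depend on
# response counts, interval schedules on time since the last reinforcer.
import random

def fixed_ratio(n_responses, ratio=5):
    """Reinforce every `ratio`-th response (e.g., piecework pay)."""
    return n_responses % ratio == 0

def variable_ratio(mean_ratio=5):
    """Reinforce each response with probability 1/mean_ratio (slot machine)."""
    return random.random() < 1.0 / mean_ratio

def fixed_interval(seconds_since_last, interval=60):
    """Reinforce the first response after `interval` seconds (mail delivery)."""
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, mean_interval=60):
    """Like fixed_interval, but the required wait varies around a mean."""
    required = random.uniform(0.5 * mean_interval, 1.5 * mean_interval)
    return seconds_since_last >= required

print(fixed_ratio(10))     # True: the 10th response on a ratio of 5
print(fixed_interval(30))  # False: only 30 s since the last reinforcer
```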

23 Julian Cotton/International Stock/Image State
7.24 Ratio reinforcement (A) Workers in a garment factory are usually paid a certain amount for each item of clothing completed, so they’re rewarded on a fixed-ratio schedule of reinforcement. (B) By law, slot machines pay out on a certain percentage of tries. But these machines pay out at random, so that people feeding coins to a machine are rewarded on a variable-ratio reinforcement schedule.

24 Brownie Harris/Corbis

25 Marc Romanelli/Getty Images
7.25 Interval reinforcement (A) The behavior of checking the mailbox isn’t rewarded until the mail has actually been delivered. Then, the first check of the mailbox after this delivery will be rewarded (by the person actually getting some mail!). This is a fixed-interval schedule of reinforcement. (B) When a wolf prowls through a meadow, the rodents it seeks will all retreat into hiding; so, if the wolf returns soon, he’ll find no prey. Eventually, though, the rodents will come back, and then another hunting trip by the wolf will pay off—with dinner. Therefore, a return visit by the wolf will be rewarded only after some time has passed, and hence this is an interval schedule. But the rodents may sometimes return a little sooner, and sometimes a little later, and so this is a variable-interval schedule.

28 Courtesy Psychology Department, University of California, Berkeley
7.26 Edward C. Tolman (1886–1959) Tolman was an early advocate for the idea that learning involves a change in knowledge rather than a change in overt behavior.

30 Variations in Learning: Behavioral Challenges
* Learned helplessness
* Observational learning
* Biologically biased learning
* Belongingness/expectations in learning

32 7.28 Mirror neurons Panel A shows the responses of a neuron in a monkey’s motor cortex when the animal breaks a peanut. Panel B shows the remarkably similar pattern of activity when the monkey watches someone else open a peanut.

33 7.31 The specificity of taste aversion Rats that had been shocked associated the shock with the lights and sounds (but not the taste) that had accompanied the painful experience. Rats that had become ill associated the illness with a taste (but not with the lights and sounds).

34 Conclusions: Broad Applications Coupled with Severe Limitations of the Behaviorist Viewpoint
Next: The Power Law of Practice: Learning just keeps going and going and going….

