The Levels of Processing Model (LOP) (Craik & Lockhart, 1972)


MODELS OF MEMORY: The Levels of Processing Model (LOP) (Craik & Lockhart, 1972)

MODELS OF MEMORY The IB Syllabus says: Evaluate two models or theories of one cognitive process with reference to research studies. For this section we will be evaluating TWO MODELS OF MEMORY. These have been developed by cognitive psychologists to explain how memory works. The two main models of memory we will be studying for this topic are: Multi-store Model (Atkinson & Shiffrin, 1968); Levels of Processing (Craik & Lockhart, 1972)

Principles Demonstrated in Research into Models of Memory: Mental processes can and should be scientifically investigated. Models of psychological functions can be proposed. Cognitive processes actively organize and manipulate the information we receive; humans are not passive responders to their environment. (Soft determinism.)

Levels of Processing (Craik & Lockhart, 1972) This model was proposed as an alternative to the multi-store model. Craik & Lockhart rejected the idea of separate memory structures put forward by Atkinson & Shiffrin. The model emphasizes memory processes rather than structure, unlike the MSM. The LOP model is based on the idea that the strength of a memory trace is determined by how the original information was processed.

LOP: Shallow & Deep Processing The model proposed that there are different levels of processing that influence how much is remembered. Shallow processing – the first stage of processing – involves recognising the stimulus in terms of its physical appearance or structure, e.g. the shape of the letters a word is written in. Acoustic encoding sits between structural and semantic encoding – it is deeper than structural processing, but shallower than semantic processing. Deep processing – the deepest level of processing – involves encoding the input in terms of its meaning (semantics). The model assumes that shallow processing will lead to weak short-term retention and deep processing will enable long-term retention.

Levels of processing (shallow to deep): Structural (looks like) → Acoustic (sounds like) → Semantic (means). Shallow processing produces a weak memory trace, leading to short-term retention; deep processing produces a strong memory trace, leading to long-term retention.

Experimental research in Cognitive Psychology Your task is to work in groups to partially replicate one of the following studies. Research Support: Elias & Perfetti (1973) (Dillon, Alessandra, Zach); Hyde and Jenkins (1973) (Krystal, Klaire, HK). Refuting Research: Tyler et al (1979) (Nikki, Micah, Audrey); Palmere et al. (1983) (Katalyna, Christina, Louis). You will gather data from this class, so you need to: Identify the IV & the DV and the design of your study – think about the controls you will put in place (fill out one key study sheet template per group). Have a standardized procedure – a script for how the experiment is going to be run and what the experimenters are going to say (make sure you get informed consent and do a debriefing). Create the materials needed – e.g. a PowerPoint with word lists or numbers. Collect the data, analyze it, and report it back to the class.
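For the analysis step, a minimal sketch of how your class recall data could be compared between two orienting-task conditions. The scores below are invented for illustration, and an independent-samples t-test is just one reasonable choice of analysis; any test appropriate to your design can be substituted:

```python
from statistics import mean, stdev

# Hypothetical recall scores (words recalled out of 20) for two groups.
semantic = [14, 12, 15, 13, 16, 11, 14, 15]   # deep (semantic) task group
rhyming = [8, 9, 7, 10, 6, 9, 8, 7]           # shallow (acoustic) task group

def pooled_t(a, b):
    """Independent-samples t statistic, assuming equal variances."""
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

print(f"semantic mean = {mean(semantic):.2f}")
print(f"rhyming mean  = {mean(rhyming):.2f}")
print(f"t statistic   = {pooled_t(semantic, rhyming):.2f}")
```

A positive t statistic here would indicate higher mean recall in the semantic condition, which is the pattern the LOP model predicts; you would then look up (or compute) the p-value to decide whether the difference is significant.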

LOP: Maintenance & Elaborative Rehearsal The model also proposed that different ways of rehearsing influence how well we remember: Rehearsing material simply by rote repetition is called maintenance rehearsal and is regarded as shallow processing. Making links with semantic (meaning-based) associations is called elaborative rehearsal and is seen as deep processing. The assumption of the model is that shallow processing will give rise to weak short-term retention and deep processing will ensure strong, lasting retention.

Elias & Perfetti (1973) experiment on acoustic & semantic encoding AIM: Elias & Perfetti (1973) aimed to investigate encoding and memory. PROCEDURE: They gave PPs a number of different tasks to perform on each word in a list, such as finding another word that rhymes (acoustic task) or finding a word that means the same or something similar, i.e. a synonym (semantic task). The rhyming task involved only acoustic coding and hence was a shallow level of processing. The synonym task involved semantic coding and hence was a deep level of processing.

The participants were not told that they would be asked to recall the words, but nevertheless they did remember some of the words when subsequently tested. This is called incidental learning, as opposed to intentional or deliberate learning. FINDINGS & CONCLUSIONS: The PPs recalled significantly more words following the synonym task (semantic) than following the rhyming task (acoustic), suggesting that deeper levels of processing lead to better recall and thus supporting the LOP model. EVALUATION: ecological validity / experimental research.

Hyde and Jenkins (1973) experiment on the effect of the way in which words are processed on recall AIM: To investigate the effects of shallow & deep processing on recall. PROCEDURE: Hyde and Jenkins (1973) auditorily presented lists of 24 words and asked different groups of participants to perform one of the following so-called orienting tasks:
♦ rating the words for pleasantness
♦ estimating the frequency with which each word is used in the English language
♦ detecting the occurrence of the letters 'e' and 'g' in any of the words
♦ deciding the part of speech appropriate to each word (e.g. noun, adjective)
♦ deciding whether the words fitted into a particular sentence frame.

Rating the words for pleasantness (e.g. is “donkey” a pleasant word?) (semantic)
Estimating the frequency with which each word is used in the English language (e.g. how often does “donkey” appear in the English language?) (semantic)
Detecting the occurrence of the letters “e” & “g” in the list words (e.g. is there an “e” or a “g” in the word “donkey”?) (structural)
Deciding the part of speech appropriate to each word (e.g. is “donkey” a verb, noun or an adjective?) (structural)
Deciding whether the words fitted into particular sentences (e.g. does the word “donkey” fit into the following sentence > “I went to the doctor and showed him my ............”) (semantic)

Five groups of participants performed one of these tasks without knowing that they were going to be asked to recall the words (incidental learning groups). An additional five groups of participants performed the tasks but were told that they should learn the words (intentional learning groups). Finally, there was a control group of participants who were instructed to learn the words but did not do the tasks.

FINDINGS & CONCLUSIONS After testing all the participants for recall of the original word list Hyde and Jenkins found that there were minimal differences in the number of items correctly recalled between the intentional learning groups and the incidental learning groups. This finding is predicted by Craik and Lockhart and supports LOP because they believe that retention is simply a byproduct of processing and so intention to learn is unnecessary for learning to occur.

In addition, Hyde & Jenkins found that the pleasantness-rating and frequency-of-usage tasks produced the best recall: recall was significantly better for words which had been analysed semantically (deep), i.e. rated for pleasantness or for frequency, than for words which had been processed more superficially (shallow – structural), i.e. detecting 'e' and 'g'. This is also in line with the LOP model, because semantic analysis is assumed to be a deeper level of processing than structural (shallow) analysis; these tasks involved semantic processing, whereas the other tasks did not.

One interesting finding was that incidental learners performed just as well as intentional learners in all tasks – this suggests that it is the nature of the processing that determines how much you will remember, rather than the intention to learn. Bear this in mind when you are revising – the more processing you perform on the information (e.g. quizzes, essays, spider diagrams etc.), the more likely you are to remember it. EVALUATION: It is not totally clear what level of processing is used for the different tasks. Is it really the depth of processing, or the amount of effort that people put into processing, that determines recall? Ecological validity, experimental method, applicability?

The Criticisms/Limitations of the LOP Model It is usually the case that deeper levels of processing do lead to better recall. However, there is an argument about whether it is the depth of processing or the amount of processing effort that produces the result – see Tyler et al (1979). Also, the MSM and the research that supports it can be used as a counter-claim in evaluation of the LOP, since the LOP fails to recognize that there are indeed two separate stores of memory.

Tyler et al (1979) experiment on the effect of cognitive effort on recall AIM: Tyler et al (1979) investigated the effects of cognitive effort on memory. PROCEDURE: They gave participants two sets of anagrams to solve – easy ones, such as DOCTRO, or difficult ones, such as TREBUT. Afterwards, participants were given an unexpected test for recall of the anagram words.

FINDINGS & CONCLUSIONS: Although the processing level was the same (participants were processing on the basis of meaning in both conditions), participants remembered more of the difficult anagram words than the easy ones. So Tyler et al concluded that retention is a function of processing effort, not processing depth. KEY EVALUATION POINT: Craik and Lockhart themselves (1986) have since suggested that factors such as elaboration and distinctiveness are also important in determining retention; this idea has been supported by research. For example, Hunt and Elliott (1980) found that people recalled words with distinctive sequences of tall and short letters better than words with less distinctive arrangements of letters.

Palmere et al. (1983) experiment on the effect of elaboration on recall AIM: Palmere et al. (1983) studied the effects of elaboration on recall. PROCEDURE: They made up a 32-paragraph description of a fictitious African nation. Eight paragraphs consisted of a sentence containing a main idea, followed by three sentences each providing an example of the main theme; eight paragraphs consisted of one main sentence followed by two supplementary sentences; eight paragraphs consisted of one main sentence followed by a single supplementary sentence; the remaining eight paragraphs consisted of a single main sentence with no supplementary information.

FINDINGS & CONCLUSIONS: Recall of the main ideas varied as a function of the amount of elaboration (extra information given). Significantly more main ideas were recalled from the elaborated paragraphs than from the single-sentence paragraphs. This kind of evidence suggests that the effects of processing on retention are not as simple as first proposed by the levels of processing model. EVALUATION: This suggests that elaboration is important – and Craik & Lockhart (1986) did update their model to include elaboration & distinctiveness as having a major influence on retention.

General evaluative points relating to the research Another problem is that participants typically spend a longer time processing the deeper or more difficult tasks. So, it could be that the results are partly due to more time being spent on the material. The type of processing, the amount of effort & the length of time spent on processing tend to be confounded. Deeper processing goes with more effort and more time, so it is difficult to know which factor influences the results.

Associated with the previous point, it is often difficult with many of the tasks used in levels of processing studies to be sure what the level of processing actually is. For example, in the study by Hyde & Jenkins (described above) they assumed that judging a word’s frequency involved thinking of its meaning, but it is not altogether clear why this should be so. Also, they argued that the task of deciding the part of speech to which a word belongs is a shallow processing task - but other researchers claim that the task involves deep or semantic processing. So, a major problem is the lack of any independent measure of processing depth. How deep is deep?

A major problem with the LOP is circularity, i.e. there is no independent definition of depth. The model predicts that deep processing will lead to better retention; researchers then conclude that, because retention is better after certain orienting tasks, those tasks must, by definition, involve deep processing. The model is therefore descriptive rather than explanatory.

Eysenck (1978) claims: “In view of the vagueness with which depth is defined, there is danger of using retention-test performance to provide information about the depth of processing and then using the ... depth of processing to ‘explain’ the retention-test performance, a self-defeating exercise in circularity”. What he means is that if a person performs well on a test of recall after performing a particular task, then some researchers will claim that they must have performed a deep level of processing on the information in order to remember it – a circular argument. Another objection is that levels of processing theory does not really explain why deeper levels of processing are more effective – it is descriptive rather than explanatory.

Eysenck (1990) claims that the model describes rather than explains what is happening. However, more recent studies have clarified this point: it appears that deeper coding produces better retention because it is more elaborate. Elaborative encoding enriches the memory representation of an item by activating many aspects of its meaning and linking it into the pre-existing network of semantic associations. Deep, semantic coding tends to be more elaborated than shallow, physical coding, and this is probably why it works better.

General evaluative points for LOP model of memory