Searching the truth: Visual search for abstract, well-learned objects
Denis Cousineau, Université de Montréal
This talk will be available at www.mapageweb.umontreal.ca/cousined
How do we find a target?
Visual search: a basic proficiency… yet very little understood…
Two models of visual search:
- Serial search: the famous 2:1 ratio of mean slopes (target-absent vs. target-present), based on the MEAN response times.
- Parallel search: flat performance; unlimited capacity.
Some problems with these models:
- The dichotomy is difficult to reconcile with progressive transitions.
- Mean performances are not very diagnostic: mimicking (Townsend, 1990). Standard deviations can also be mimicked…
- The 2:1 ratio depends heavily on the stopping rule: how do we stop searching?
Standard model: Serial Self-Terminating Search (SSTS).
Implicitly, a random-order visual search model.
Experiment 1
Methodology: visual search task.
34 sessions of training, 10 sessions of test; 4 subjects; consistent mapping. Targets had to be learned. [Target and distractor stimuli shown as figures.]
Trial sequence: fixation point; circles indicating where the stimuli will appear; test display. Reaction time measured from stimulus presentation.
Mean results:
A seems to be perfectly serial; B is the least “serial”. Yet, we will see that B is nearly identical to A. None of them is a random-order serial search.
Results of target-present RT distributions: A and B are the most similar!
Modeling the modes of the distributions:
- The D = 1 condition could be modeled with a normal distribution with parameters (μ, σ²).
- The D = 2 condition should be the same as the D = 1 condition, except shifted by μ and with variance doubled.
- In general, the i-th mode has parameters (iμ, iσ²).
The modes are pooled: a “mixture of distributions”
- with mixture parameters equal to 1/D, according to SSTS;
- with free mixture parameters, the unrestricted model.
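The two candidate densities are easy to write down (a minimal sketch with placeholder parameter values, not the fitted ones; any residual base time is omitted for brevity): under SSTS the i-th mode is normal with mean iμ and variance iσ² and each mode carries weight 1/D, while the unrestricted model keeps the same modes but frees the weights.

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def ssts_mixture_pdf(x, d, mu, sigma2):
    """SSTS: i-th mode is N(i*mu, i*sigma2), each with mixture weight 1/d."""
    return sum(normal_pdf(x, i * mu, i * sigma2) for i in range(1, d + 1)) / d

def free_mixture_pdf(x, weights, mu, sigma2):
    """Unrestricted model: same modes, but the mixture weights are free."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * normal_pdf(x, i * mu, i * sigma2)
               for i, w in enumerate(weights, start=1))

# Placeholder parameters for illustration only:
mu, sigma2, d = 100.0, 400.0, 4
print(ssts_mixture_pdf(250.0, d, mu, sigma2))
# A last mode "underrepresented" relative to 1/D = 0.25:
print(free_mixture_pdf(250.0, [0.35, 0.30, 0.25, 0.10], mu, sigma2))
```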
Results of target-present RT distributions: for all participants, the mixture parameters are not equal to 1/D. The last mode is underrepresented. Errors?
Results of target-absent RT distributions:
B performs early termination. A does not, yet her mixture parameters (p) are not equal. C terminates early too often compared to his error rate.
In sum:
1. Regarding the exhaustivity prediction: the participants sometimes stop earlier than predicted by an exhaustive search. This predicts errors, but too many errors are predicted.
2. Regarding the random-order prediction: the participants are serial… but they are not random.
Seriality is one process going on, but there must be a second process which biases the search itinerary so that targets will be visited earlier than by chance.
A new model of visual search: m-Sr-STS, the Mostly Serial, Roughly Self-Terminating Search.
Essentially a two-stage model (Wolfe, 1994; Chun & Wolfe, 1996; Cousineau & Larochelle, 2004). The pre-attentive module outputs probabilities.
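One way to read the two stages (a hypothetical sketch; this particular weighting scheme is an assumption, not the model's fitted form): the pre-attentive module assigns each item a probability of being the target, and the serial stage visits items in an order biased by those probabilities, so targets tend to be inspected earlier than under a random order.

```python
import random

def biased_visit_order(salience, rng):
    """Draw a visit order by sampling items without replacement,
    weighted by pre-attentive probabilities (higher = visited sooner)."""
    items = list(range(len(salience)))
    weights = list(salience)
    order = []
    while items:
        i = rng.choices(range(len(items)), weights=weights)[0]
        order.append(items.pop(i))
        weights.pop(i)
    return order

rng = random.Random(0)
# Item 2 gets a high pre-attentive probability; it is usually visited first,
# unlike a random-order serial search where every item is first 1/4 of the time.
salience = [0.1, 0.1, 0.6, 0.2]
first = sum(biased_visit_order(salience, rng)[0] == 2 for _ in range(10000))
print(first / 10000)
```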
Yet, there is still some magic left…
Unbeknownst to the participants, one conjunction of features was diagnostic while another was irrelevant [stimulus figures omitted]. The pre-attentive module could drive attention to the stimuli having those conjunctions of features: a parallel search for conjunctions. It should be an impossible feat according to Treisman (1980), Wolfe (1994) and many others.
Let’s concentrate on the decision mechanism of the Mostly Serial, Roughly Self-Terminating Search. The pre-attentive module outputs probabilities. What is “recognizing a target”? How does cycling occur?
Experiment 2
Methodology: same–different task.
Well-trained participants (10 hours to reach asymptote, then 5 hours of testing). The display size D is fixed at 1; the stimuli vary in complexity C. [Example stimuli shown as figures.]
Mean “Same” response times:
Saying “Same” is very fast but affected by C (20 ms per spike). Linearity is not found using characters instead of complex stimuli. Parallel, limited-capacity models comply with such results: e.g., a template-matching process?
Mean “Different” response times:
A main effect of the number of differences but no effect of complexity! This suggests that responding “Different” requires the localization of at least one difference; a parallel search for a difference benefits from the presence of many differences.
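The benefit from many differences is exactly what a parallel race predicts: if each difference is detected after an independent random time, the response waits only for the fastest detection, and the minimum of n finishing times shrinks as n grows. A toy sketch (exponential finishing times and the timing constants are assumptions for illustration):

```python
import random

def different_rt(n_differences, rng, mean_ms=200.0, base_ms=300.0):
    """'Different' RT = base time + the fastest of n parallel detections,
    each detection time drawn from an exponential distribution."""
    fastest = min(rng.expovariate(1.0 / mean_ms) for _ in range(n_differences))
    return base_ms + fastest

rng = random.Random(0)
for n in (1, 2, 4):
    mean_rt = sum(different_rt(n, rng) for _ in range(20000)) / 20000
    print(f"{n} difference(s): ~{mean_rt:.0f} ms")
# Analytically, the minimum of n exponentials has mean mean_ms / n,
# so the expected means are 500, 400, and 350 ms here.
```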
A revised possible explanation: there might be two distinct processes, one for confirming the sameness, one for establishing the “differenceness”. How do they relate to one another? In succession?
Slow “Same” vs. fast “Different” in the C = 4 condition: the two conditions are very close (mean difference of 13 ms). Do they follow each other in time? Again, let’s look at the distributions.
Distributions of RT for “Same” and (very) “Different” responses at C = 4: the slow “Different” responses are faster (by 4 ms) than the slow “Same” responses. One process cannot operate *after* the other.
Revised revised architecture: “No” may not be an option for a neural decision mechanism…
In conclusion…
Visual search is a proficiency (1/2). Proficiencies are an amalgam of processes:
- a parallel pre-attentive process that outputs probabilities;
- serial deployment of central attention;
- a stopping rule which can end prematurely;
- a unitary (template-matching?) recognition process;
- a unitary (find-a-difference) rejection process.
In sum, the SSTS architecture was all wrong.
Visual search is a proficiency (2/2). Processes are univocal (from the French univoque: one and only one meaning, one and only one semantic content, but also one and only one voice).
As an example: if a “not-face” is presented to a face-recognition module, does it “know” that it is not a face, or does it remain “silent” by omitting to respond? What would be a brain which detects objects (of many kinds) and their negation? What would be the EEG of such a system?
Negation is not part of the neural-process toolbox: it is not “to be or not to be” but “to be and to un-be”. “NO” branches should be forbidden in psychology.
Methodological consideration: distribution analyses rock!
Mean results can be interpreted in so many ways that they cannot reject any model at all. We have been stuck with a fruitless dichotomy for over 40 years because we were unable to make the data speak. Anyone with a serious model should implement it using distributions or remain quiet. Distribution modeling and testing is not difficult (it can be learned in 3 hours), as long as you know Matlab or Mathematica…
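Distribution-level testing also takes only a few lines in Python (a generic illustration on simulated data, not the talk's analysis): fit each candidate model by maximum likelihood on the same data set and compare log-likelihoods, which a comparison of means could never do.

```python
import math
import random

def normal_logpdf(x, mean, sd):
    return -0.5 * ((x - mean) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def loglik_one_mode(data, mean, sd):
    """Log-likelihood of a single-normal (unimodal) model."""
    return sum(normal_logpdf(x, mean, sd) for x in data)

def loglik_two_modes(data, w, m1, m2, sd):
    """Log-likelihood of a two-mode mixture with free weight w."""
    return sum(math.log(w * math.exp(normal_logpdf(x, m1, sd))
                        + (1 - w) * math.exp(normal_logpdf(x, m2, sd)))
               for x in data)

# Simulate bimodal RTs: two serial steps, visited with unequal probability.
rng = random.Random(0)
data = [rng.gauss(400, 30) if rng.random() < 0.7 else rng.gauss(500, 30)
        for _ in range(2000)]

# Closed-form MLE for the unimodal model:
mean = sum(data) / len(data)
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))
ll_one = loglik_one_mode(data, mean, sd)

# Grid search the free mixture weight (component parameters assumed known here):
best_w, ll_two = max(((w / 100, loglik_two_modes(data, w / 100, 400, 500, 30))
                      for w in range(1, 100)), key=lambda t: t[1])
print(best_w, ll_two > ll_one)  # the two-mode model wins on bimodal data
```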
Thank you. This talk will be available at www.mapageweb.umontreal.ca/cousined