Slide 1: CT2WS at Georgetown: Letting your brain be all that it can be
Maximilian Riesenhuber, http://maxlab.neuro.georgetown.edu (5/4/09)
Slide 2: The underlying computational model of object recognition in cortex: feedforward and fast
References: Riesenhuber & Poggio, Nature Neuroscience, 1999 & 2000; Freedman et al., Science, 2001; Riesenhuber & Poggio, CONB, 2002; Freedman et al., J Neurosci, 2003; Jiang et al., Neuron, 2006; Jiang et al., Neuron, 2007; Glezer et al., Neuron, 2009.
Prediction: Feedforward processing allows object recognition even at ultra-short presentation durations.
Slide 3: Experimental paradigm: ultrashort image presentation to probe the temporal limits of object detection
Trial timeline: fixation (500 ms), target/distractor stimulus (17 or 33 ms), blank (500 ms), response (max 4000 ms). Behavioral performance is at ceiling.
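A minimal sketch of this trial structure (the deck does not include experiment code; the phase order and durations follow the slide, and the display/response functions are illustrative placeholders):

```python
import random
import time

def run_trial(is_target):
    """One trial: fixation -> ultrashort stimulus -> blank -> response."""
    phases = [
        ("fixation", 500),
        ("stimulus", random.choice([17, 33])),  # 17/33 ms = one or two frames at 60 Hz
        ("blank", 500),
    ]
    for name, dur_ms in phases:
        present(name, is_target)                 # placeholder for actual display code
        time.sleep(dur_ms / 1000.0)              # schematic timing; real code syncs to refresh
    return collect_response(max_wait_ms=4000)    # response window capped at 4000 ms

def present(phase, is_target):
    tag = " (target)" if phase == "stimulus" and is_target else ""
    print(f"showing {phase}{tag}")

def collect_response(max_wait_ms):
    # Placeholder: a real experiment would poll a response device here.
    return None

run_trial(is_target=True)
```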
Slide 4: So, how does the brain do it? And how can we visualize the results?
Slide 5: Mean target signals
Slide 6: Distractor signals
Slide 7: Mean differential signals (targets minus distractors)
Slide 8: Plot signals as a color image
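A sketch of the preceding analysis steps and this visualization, assuming the EEG has already been epoched into trials x channels x time arrays (all names and dimensions here are illustrative; the deck does not show its analysis code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative synthetic epochs: (trials, channels, time samples), e.g. 1 s at 250 Hz.
rng = np.random.default_rng(0)
n_trials, n_chan, n_time = 100, 32, 250
target_epochs = rng.normal(size=(n_trials, n_chan, n_time))
distractor_epochs = rng.normal(size=(n_trials, n_chan, n_time))

# Mean target and distractor signals, then the differential signal.
mean_target = target_epochs.mean(axis=0)          # (channels, time)
mean_distractor = distractor_epochs.mean(axis=0)
differential = mean_target - mean_distractor      # targets minus distractors

# Plot the channels-by-time matrix as a color image.
plt.imshow(differential, aspect="auto", cmap="RdBu_r",
           extent=[0, n_time * 4, n_chan, 0])     # x axis in ms (4 ms/sample at 250 Hz)
plt.xlabel("Time (ms)")
plt.ylabel("Channel")
plt.colorbar(label="Differential amplitude (targets - distractors)")
plt.show()
```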
Slides 9-11: EEG signals correspond well to the model
Slide 12: Putative feedback signals from task circuits provide additional information for reading out decision-related signals
Slide 13: Goal 1: Use a hybrid brain-machine system to better exploit the brain's temporal processing bandwidth and avoid response bottlenecks
Approach: identify robust target-related signals in high-rate RSVP streams.
Slide 14: From single images to RSVP
Paradigm: Rapid Serial Visual Presentation (RSVP) at 12 Hz (83 ms per image). Two targets (T1 and T2) are embedded in a stream of distractors (D), with T2 at a specific lag after T1; at lag 2 the stream reads ... D D T1 D T2 D ... Subjects report the number of targets at the end of the stream. (A schedule sketch follows.)
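A minimal sketch of how such a stream schedule could be built (stream length and target placement are illustrative; 83 ms per item corresponds to the 12 Hz rate, and `lag` follows the deck's convention that lag 3 means T1+D+D+T2):

```python
import random

def make_rsvp_stream(n_items, lag, frame_ms=83):
    """12 Hz RSVP schedule: distractors (D) plus T1 and T2 separated by `lag`."""
    t1_pos = random.randrange(2, n_items - lag - 2)  # keep targets away from the edges
    t2_pos = t1_pos + lag                            # lag 2 -> one distractor in between
    stream = []
    for i in range(n_items):
        label = "T1" if i == t1_pos else "T2" if i == t2_pos else "D"
        stream.append((label, i * frame_ms))         # (item, onset time in ms)
    return stream

# Example: 12-item stream with T2 at lag 2 after T1.
for label, onset_ms in make_rsvp_stream(12, lag=2):
    print(f"{onset_ms:5d} ms: {label}")
```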
Slides 15-16: RSVP example movie
Slide 17: Predictions for behavior in RSVP streams based on single-image EEG signals
For CT2WS, we are interested in the RSVP scenario, i.e., continuous image streams. Predictions:
–Consecutive targets will be hard to identify because their target-related neural responses will overlap.
–Successive targets at intermediate lags (T1 followed by T2) will be hard to identify because T1-related feedback signals will corrupt the T2-related feedforward activation.
These are two different mechanisms that impair target detection in RSVP.
Slide 18: Hypotheses borne out in behavior: the Attentional Blink
General finding across hundreds of studies: subjects are impaired at detecting targets in close succession.
Slide 19: EEG signals confirm the hypotheses
Comparing EEG signals for T1-only (85% correct) with Lag 1, i.e., T1+T2 (62% correct): little signal difference in visual areas.
Slide 20: EEG signals confirm the hypotheses
Comparing EEG signals for T1-only (86% correct) with Lag 3, i.e., T1+D+D+T2 (44% correct): in "blinkers", T1-related feedback signals interfere with T2-related feedforward signals.
Slide 21: However, we find that the Attentional Blink can be reduced by increasing subject vigilance!
Subjects were told about the Attentional Blink and instructed to "pay attention after you see the first target, because the second target can come immediately after and look like it is part of the first."
Slide 22: Strong T2 signal in a vigilant subject (83% correct)
Slide 23: Comparing signals before and after the vigilance instruction: an attentional boost!
Vigilance appears to increase performance by reducing interference at the neural level: T1-related feedback decreases and T2-related feedforward signals increase.
Slide 24: GU Goal 2: Exploit the brain's parallel processing capabilities
Parallel processing along the visual cortical hierarchy offers the potential to present and process multiple images simultaneously.
Slides 25-26: Proof of principle: quad-stream presentation (48 images/s)
Slide 27: EEG shows a robust detection signal
The signal washes out in visual areas because the target position varies across streams.
Slide 28: Quad-stream performance follows the parallel-processing prediction (N=4)
Slide 29: From "bench to bread truck"
Two main Phase I results from the GU project can be translated to the NPU in Phase II:
–High RSVP rates: ≥12 Hz
–Multiple concurrent image streams
Slide 30: Robust EEG-based classification performance at 12 Hz
Slide 31: Boost classification performance through bagging
Already-high classification performance can be increased further by combining signals from repeated presentations (bagging). Here: two presentations; target vs. distractor at 12 Hz, single stream; N=3. (A sketch follows.)
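A minimal sketch of the combination step (score arrays and threshold are illustrative; the slide's "bagging" is read here as averaging target-vs-distractor evidence across repeated presentations of the same image, not classical bootstrap aggregation):

```python
import numpy as np

def bagged_scores(scores_per_rep):
    """Average classifier scores across repeated presentations of each image.

    scores_per_rep: (n_repeats, n_images) array, one target-vs-distractor
    score per image and presentation.
    """
    return np.asarray(scores_per_rep).mean(axis=0)

# Illustrative numbers: two presentations (as on the slide) of five images;
# scores > 0 lean "target".
rep1 = np.array([0.9, -0.4,  0.1, -0.8, 0.6])
rep2 = np.array([0.7, -0.6, -0.3, -0.5, 0.8])
combined = bagged_scores([rep1, rep2])
is_target = combined > 0.0                 # illustrative decision threshold
print(combined, is_target)
```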
Slide 32: From single to multiple streams: quad-stream performance
Classification performance again follows behavior.
Slide 33: Bagging also works for the quad-stream case
Slide 34: GU results suggest the potential for a flexible speed/accuracy trade-off
The number of streams and repeats per image can be adjusted to operational demands (e.g., a quick overview of a new scenario vs. point control). The number of streams and the presentation frame rate can also be adjusted flexibly to the user's state.
Slide 35: Implications of GU findings for throughput
In the Yuma test, there were about 768 ROIs (24 × 32) for 32 targets.
–System tested at Yuma: TIME = (24 × 32 × 3.29 × 5) / (10 × 1) = 1263 s ≈ 21 min
–Double bagging at 12 Hz, optimized for accuracy: TIME = (24 × 32 × 2) / (12 × 1) = 128 s ≈ 2.1 min
–12 Hz with four parallel streams, optimized for speed: TIME = (24 × 32 × 1) / (12 × 4) = 16 s ≈ 0.3 min
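A small sketch reproducing these numbers (the generic form TIME = images × presentations / (rate × streams) is inferred from the slide's three cases; the 3.29 × 5 presentation factor in the Yuma baseline is carried over from the slide as-is):

```python
def review_time_s(n_images, presentations_per_image, rate_hz, n_streams):
    """Seconds to review all images: total presentations divided by throughput."""
    return n_images * presentations_per_image / (rate_hz * n_streams)

n_images = 24 * 32  # ~768 ROIs in the Yuma test

# Yuma baseline: 3.29 x 5 presentations per image at 10 Hz, one stream (~21 min).
print(review_time_s(n_images, 3.29 * 5, 10, 1) / 60)
# Double bagging at 12 Hz, one stream: accuracy-optimized (~2.1 min).
print(review_time_s(n_images, 2, 12, 1) / 60)
# One presentation at 12 Hz, four parallel streams: speed-optimized (~0.3 min).
print(review_time_s(n_images, 1, 12, 4) / 60)
```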
Slide 36: GU Phase II plans
–"Boost mode": selectively boost image-related signals through attentional cueing.
–Explore the limits of temporal and parallel processing: 12 Hz and quad streams were a convenient choice, but they are not the limit!
–Monitor ongoing activity (e.g., in active exploration/binocular mode): make sure nothing is missed.
–"Smart cueing": decrease neural signal interference (temporal as well as spatial) in high-throughput scenarios by sorting ROIs to reduce interference along the cortical processing hierarchy.
–Optimize classification by bagging specialized classifiers (e.g., for incorrect trials).