ACT-R/S: Extending ACT-R to make big predictions
Christian Schunn, Tony Harrison, Xiaohui Kong, Lelyn Saner, Melanie Shoup, Mike Knepp, … University of Pittsburgh

Approach
Combine functional analysis (computational level, Marr; knowledge level, Newell; rational level, Anderson) with neuroscience understanding (most elaborated at the level of gross structure) to build a spatial cognitive architecture for problem solving.

Need for 3 Systems
Computational considerations:
– Some tasks need to ignore size, orientation, and location
– Some tasks need highly metric 3D part representations
– Some tasks need relative 3D locations of blob objects

ACT-R/S: Three Visuospatial Systems
– Visual: object identification (the traditional “what” system)
– Configural: navigation (the traditional “where” system)
– Manipulative: grasping & tracking

[Figure: visual input of a nearby chair, shown alongside its Visual, Manipulative, and Configural representations]

Allocentric vs. egocentric representations
– All ACT-R/S representations are inherently egocentric => allocentric viewpoints must be inferred (computed)
– Q: What about data suggestive of allocentric representations?

Configural System Representation

Configural Buffer
– Landmark chunks (Triangle-T1 … Triangle-TN, Circle-T1 … Circle-TN), each holding Vectors and an Identity-tag
– Configural-relation chunks (Circ-Tri-T1 … Circ-Tri-TN), each holding Triangle-ID, Circle-ID, delta-heading, delta-pitch, triangle-range, and circle-range slots
– Fed and updated by the Path Integrator (see the data-structure sketch below)
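Read as a data structure, the buffer contents above might be sketched as follows. This is a minimal illustration in Python; the slot names come from the slide, but the field types and class layout are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ConfiguralChunk:
    """One attended landmark in the configural buffer, e.g. Triangle-T1."""
    identity_tag: str                          # Identity-tag slot
    vectors: Tuple[Tuple[float, float], ...]   # egocentric (bearing, range) vectors (assumed encoding)

@dataclass
class ConfiguralRelation:
    """Relation between two landmark chunks, e.g. Circ-Tri-T1,
    kept current by the path integrator as the agent moves."""
    triangle_id: str       # Triangle-ID slot
    circle_id: str         # Circle-ID slot
    delta_heading: float   # heading difference between the two landmarks
    delta_pitch: float
    triangle_range: float
    circle_range: float
```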

“Place-cells”
– Pyramidal cells in rodent hippocampus (CA1/CA3)
– Fire maximally with respect to the rodent’s location, regardless of orientation
– Span many modalities (aural, olfactory, visual, haptic & vestibular)
– Stable across time
– Plot cell-firing rate across space
[Figure: firing-rate map of a single place-cell, from Muller, 1984]

“Place-cells” (the not-so-pretty picture)
Cell firing within a rat is also correlated with:
– Goal (Shapiro & Eichenbaum, 1999)
– Direction of travel (O’Keefe, 1999)
– Duration in the environment (Ludvig, 1999)
– Relative configuration of landmarks (Tanila, Shapiro & Eichenbaum, 1997; Fenton, Csizmadia, & Muller, 2000)
[Figure from Burgess, Jackson, Hartley & O’Keefe, 2000]

ACT-R/S and “Place-cells”
– A configural representation (vectors to a single landmark) supports lowest-level navigation, but by itself defines an infinite set of locations
– A configural relationship (between two landmarks) establishes a unique location in space (illustrated in the sketch below)
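To see why a relation between two landmark readings pins down a position while a single range does not, here is a small geometric sketch. It is not the slides' algorithm; the landmark positions, function name, and the use of a clockwise/counter-clockwise flag to stand in for the stored heading difference are illustrative assumptions.

```python
import math

def locate_from_relation(p1, p2, r1, r2, second_is_clockwise):
    """Recover an allocentric (x, y) position from a configural-relation-style
    reading: ranges r1 and r2 to two landmarks at known positions p1 and p2,
    plus whether the second landmark lies clockwise of the first as seen by
    the observer. One range alone defines a circle of possible locations (an
    infinite set); two ranges leave two candidates; the heading sign picks one."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                                     # inconsistent reading
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)          # offset along the p1->p2 axis
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))           # offset perpendicular to it
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    cand = [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

    def second_looks_ccw(obs):
        ox, oy = obs        # sign of the cross product of the two landmark bearings
        return (x1 - ox) * (y2 - oy) - (y1 - oy) * (x2 - ox) > 0

    return cand[1] if second_looks_ccw(cand[0]) == second_is_clockwise else cand[0]

# Hypothetical example: triangle landmark at (0, 0), circle landmark at (4, 0)
print(locate_from_relation((0.0, 0.0), (4.0, 0.0), 2.5, 2.5, second_is_clockwise=True))
```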

[Diagram: Egocentric Representation vs. Allocentric Interpretation. The egocentric side shows the landmark chunks Triangle-TN and Circle-TN (Vectors, Identity-tag) and the relation chunk Circ-Tri-TN (Triangle-ID, Circle-ID, delta-heading, delta-pitch, triangle-range, circle-range); the allocentric side shows the location they jointly imply.]

Foraging Model
– Virtual rat searching for food
– Square environment with each wall as a landmark (obstacle free)
– When no food is available, the rat free-roams or returns to a previously successful location
– Food is placed semi-randomly to force the rat to cover the entire environment multiple times
– Record activation across time and space for preselected configural relationships, adding Gaussian noise (sketched below)
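A minimal sketch of the recording step, assuming a unit-square arena and NumPy; the grid size, noise level, and function name are illustrative choices, not taken from the slides.

```python
import numpy as np

GRID = 32  # spatial bins per side of the square arena

def record_activation_map(trajectory, activations, arena_size=1.0, noise_sd=0.1):
    """Accumulate (noisy) activation of one preselected configural-relation chunk
    into a spatial grid, analogous to a place-field firing-rate map.
    `trajectory` is a sequence of (x, y) positions and `activations` the chunk's
    activation at the matching time steps."""
    rng = np.random.default_rng(0)
    total = np.zeros((GRID, GRID))
    visits = np.zeros((GRID, GRID))
    for (x, y), a in zip(trajectory, activations):
        i = min(int(x / arena_size * GRID), GRID - 1)
        j = min(int(y / arena_size * GRID), GRID - 1)
        total[i, j] += a + rng.normal(0.0, noise_sd)   # Gaussian noise, per the slide
        visits[i, j] += 1
    # Mean activation per visited bin; unvisited bins stay at zero
    return np.divide(total, visits, out=np.zeros_like(total), where=visits > 0)
```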

“Single-Chunk” Recording
– Multiple passes through the same region will reactivate the configural-relation chunk
– Stable fields are a function of regularities in the learned attending pattern
– Multi-modal peaks are likewise influenced by goal (same landmarks, different order)

What about humans?
– Small-scale orientation and navigation data typically report egocentric representations (Diwadkar & McNamara, 1997; Roskos-Ewoldsen, McNamara, Shelton, & Carr, 1998; Shelton & McNamara, 1997)
– One famous counter-example: Mou & McNamara, 2002

Mou & McNamara (2002)
– Subjects study a layout of objects from 315°, but study it as if from the intrinsic axis (0°): A-B, C-D-E, F-G
– Testing asks subjects to imagine standing at X, looking at Y, and then to point to Z
– Plot pointing error as a function of imagined heading (X-Y): 0°, 90°, 180°, and 270° show much lower error!
[Figure: object layout (A-G) with the 0° intrinsic axis and the 315° viewing position]

Zero-parameter egocentric prediction
1. The hierarchical task analysis of training and testing
– But extra boost from encoding configuration chunks (egocentric vectors, as in ACT-R/S)
2. Count the number of times any specific chunk will be accessed
3. Compute the probability of successful retrieval of chunks (location, facing, pointing) using basic ACT-R chunk learning and retrieval functions, default parameters, and a 10-minute delay (see the sketch below)
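Step 3 can be read off the standard ACT-R base-level learning and retrieval-probability equations. The sketch below uses common default-style parameter values (d = 0.5, τ = 0.0, s = 0.25) and made-up presentation times purely for illustration; it is not the authors' model code.

```python
import math

def base_level_activation(presentation_times, now, d=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j)^(-d))."""
    return math.log(sum((now - t) ** (-d) for t in presentation_times))

def retrieval_probability(activation, tau=0.0, s=0.25):
    """Probability of successful retrieval, given threshold tau and noise s:
    P = 1 / (1 + exp((tau - A) / s))."""
    return 1.0 / (1.0 + math.exp((tau - activation) / s))

# Hypothetical chunk accessed three times during study, tested 10 minutes later
study_accesses = [0.0, 30.0, 60.0]        # seconds at which the chunk was used
test_time = 60.0 + 10 * 60                # 10-minute retention interval
A = base_level_activation(study_accesses, test_time)
print(f"activation = {A:.2f}, P(retrieval) = {retrieval_probability(A):.3f}")
```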

Modeling Frames of Reference Data (Exp 1)
– Zero-parameter prediction
– Playing with the noise parameter(s) and retrieval threshold (τ) improves the absolute fit (RMSE)
– All (reasonable) parameter values produce a similar qualitative fit

More data
– Mats on the floor that emphasize an allocentric frame of reference: no effect (as predicted)
– Square vs. round room: no effect (as predicted)
– Training order from egocentric vs. allocentric orientation: big effect (as predicted)

Mou & McNamara (2002) Exp 2
[Figure: data vs. model pointing error by training order (“Allocentric” vs. “Egocentric”); model-data correlations r = .85 and r = .62]