When do honey bees use snapshots during navigation?
By Frank Bartlett

Presentation transcript:

When do honey bees use snapshots during navigation? By Frank Bartlett
► Bees and wasps learn information about visual landmarks near the goal
– Edge orientation (Srinivasan et al., 1994)
– Color (von Frisch, 1967; Cheng et al., 1986)
– Size (Cartwright & Collett, 1979; Ronacher, 1998)
– Spatial relationships among multiple landmarks (Cartwright & Collett, 1983)
► How is this information subsequently used over successive visits?
– Snapshot template matching (Cartwright & Collett, 1983)
[Figure: Niko Tinbergen (1938), departing and returning wasp]

What is snapshot navigation?
► The view of the landmarks is memorized from the goal
► Upon return, the bee steers her flight by sequentially matching her memory to the environment
[Figure: after Cartwright & Collett, 1983]

Experiments revealing the contents of snapshot memories
► When a single landmark is present, bees rely on retinal image size.
► When multiple landmarks are available, bees rely on the inter-landmark angles (the spaces between landmarks).
[Figures: training configurations from Cartwright & Collett, 1983; a single landmark at a set distance from the goal, and three landmarks equidistant from the goal]
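
To make the two cues concrete, here is a minimal numeric sketch in Python (not from the talk; all dimensions are hypothetical). A landmark's retinal size falls off with distance as 2·atan(r/d), so enlarging the landmark shifts the spot that restores the trained retinal size outward, whereas the bearings among several landmarks depend only on where the bee is standing:

```python
import math

def retinal_size(radius, distance):
    """Angular size (degrees) a cylindrical landmark of a given
    radius subtends on the retina at a given distance."""
    return math.degrees(2.0 * math.atan(radius / distance))

def bearing(landmark, pos):
    """Compass bearing (degrees) of a landmark seen from pos."""
    return math.degrees(math.atan2(landmark[1] - pos[1],
                                   landmark[0] - pos[0]))

# Hypothetical training: a 4 cm landmark learned from 20 cm away.
print(retinal_size(4.0, 20.0))   # ~22.6 deg, the stored retinal size
# A doubled landmark restores that retinal size at double the range,
# which is where the single-landmark tests find bees searching.
print(retinal_size(8.0, 40.0))   # ~22.6 deg again

# With multiple landmarks, the angle between them is fixed by the
# bee's position alone, so resizing the cylinders leaves the
# matching spot unchanged.
a, b, spot = (0.0, 50.0), (50.0, 0.0), (10.0, 10.0)
print(abs(bearing(a, spot) - bearing(b, spot)))  # inter-landmark angle
```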

Testing the snapshot hypothesis in a small-scale arena environment
► The snapshot hypothesis makes accurate predictions about where insects should spend their time searching for the goal.
– Can we replicate these findings?
► The hypothesis also generates predictions of flight paths to the goal from more distant locations.
– Do steering commands generated by snapshot matching predict honey bee flight behavior en route to a familiar goal? This has not been tested explicitly.

Methods
► Training: bees visit an initial landmark configuration (60+ visits)
► Testing: track with the original configuration and other landmark manipulations
► The camera records bee position and body-axis orientation at 60 Hz
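
As an aside on what a 60 Hz track yields, a hedged sketch (not the lab's actual analysis code; the sample track is invented): consecutive position samples are 1/60 s apart, so frame-to-frame displacement gives flight speed independently of the recorded body-axis angle:

```python
import math

FPS = 60.0  # camera frame rate from the methods above

def speeds(track):
    """Per-frame flight speed from (x, y) samples taken at FPS Hz."""
    return [math.hypot(x1 - x0, y1 - y0) * FPS
            for (x0, y0), (x1, y1) in zip(track, track[1:])]

# Hypothetical 4-frame track in cm; real values come from the tracker.
# Body-axis orientation is recorded separately, since a flying bee's
# heading need not match her direction of travel (lateral flights).
track = [(0.0, 0.0), (0.5, 0.1), (1.1, 0.3), (1.8, 0.6)]
print(speeds(track))  # cm/s between consecutive frames
```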

Search distributions: replication of Cartwright & Collett (1983)
► When a single landmark is present, bees rely on retinal image size.
► When multiple landmarks are available, bees rely on the inter-landmark angles.
► These results are consistent with previous studies.
[Figure: search distributions for the training configuration and a 2x landmark test]

Flight paths to the goal location: model vs. bee flights
► Model: predictions were generated in Matlab based on the algorithm provided by Cartwright & Collett (1983)
► Bees appear to be attracted to the nearest landmark and use it as a beacon, even over very short distances
[Figure panels: model-predicted paths and recorded bee flights]
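
The Matlab implementation itself is not shown in the talk; the Python sketch below only conveys the flavor of the Cartwright & Collett (1983) matching rule under strong simplifying assumptions (cylinders reduced to a bearing and an angular size on a one-dimensional retina, naive nearest-bearing pairing). Each matched feature contributes a tangential unit vector that rotates it toward its remembered bearing and a radial unit vector that corrects its apparent size; the summed vector steers the next step:

```python
import math

def view(landmarks, pos):
    """Retinal image at pos: (bearing, angular size) per landmark.
    Landmarks are (x, y, radius) cylinders seen from above."""
    img = []
    for lx, ly, r in landmarks:
        dx, dy = lx - pos[0], ly - pos[1]
        dist = math.hypot(dx, dy)
        img.append((math.atan2(dy, dx), 2.0 * math.atan(r / dist)))
    return img

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def steering(snapshot, current):
    """One matching step: pair each remembered feature with the
    nearest-bearing current feature, then sum a tangential unit
    vector (rotates the feature toward its remembered bearing) and a
    radial unit vector (approach if the feature looks too small,
    retreat if it looks too big)."""
    sx = sy = 0.0
    for mem_b, mem_s in snapshot:
        cur_b, cur_s = min(current, key=lambda f: abs(wrap(f[0] - mem_b)))
        t = math.copysign(1.0, wrap(mem_b - cur_b))
        r = math.copysign(1.0, mem_s - cur_s)
        # Moving along (sin b, -cos b) increases a landmark's bearing.
        sx += t * math.sin(cur_b) + r * math.cos(cur_b)
        sy += t * -math.cos(cur_b) + r * math.sin(cur_b)
    return sx, sy

# Toy run: three cylinders, snapshot stored at the goal, bee released
# two array-radii away; the summed vector is followed in small steps.
landmarks = [(0.0, 60.0, 4.0), (52.0, -30.0, 4.0), (-52.0, -30.0, 4.0)]
snap = view(landmarks, (0.0, 0.0))
bee = (120.0, 40.0)
for _ in range(400):
    vx, vy = steering(snap, view(landmarks, bee))
    norm = math.hypot(vx, vy) or 1.0
    bee = (bee[0] + 2.0 * vx / norm, bee[1] + 2.0 * vy / norm)
print(bee)  # should wander in close to the goal at (0, 0)
```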

Conclusions
► Search at the goal
– Consistent with previous findings
► Steering from more remote locations using template matching
– Bee flights not consistent with model predictions
– Strong role of beacons
► Consistent with other results (Fry & Wehner, 2005; Collett & Baron, 1994), but extended to shorter distances and more complex arrays
► Beacon selection probably driven by visual salience

Acknowledgments
NSF IGERT, Fred Dyer, Steven Fry, Mike Mack, Chris Speilburg, Yoav Littman, Jenny Jones, Lora Bramlett, Kourtney Trudgen, Lauren Davenport

Short-range visual navigation in flying hymenopterans
► Bees and wasps learn information about visual landmarks near the goal
– Edge orientation (Srinivasan et al., 1994)
– Color (von Frisch, 1967; Cheng et al., 1986)
– Size (Cartwright & Collett, 1979; Ronacher, 1998)
– Spatial relationships among multiple landmarks (Cartwright & Collett, 1983)
► How is this information represented and subsequently used over successive visits?
– Snapshot template matching (Cartwright & Collett, 1983)
[Figure: Niko Tinbergen (1938)]

How is this information learned? The turn-back-and-look
► Motion parallax cues allow bees to distinguish nearby landmarks from distant landmarks (Lehrer, 1993)
► Believed to aid in the selection and learning of the landmarks near a goal
[Figure: first-visit vs. tenth-visit flight paths, from Lehrer, 1993]

What is snapshot navigation?
► The view of the landmarks is memorized from the goal
► Insect visual memory is thought to comprise a two-dimensional "snapshot" that encodes the retinotopic sizes and positions of landmarks and the gaps between them
► The bee sequentially matches her memory to the environment upon return
[Figure: after Cartwright & Collett, 1983]
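
As a toy illustration of that retinotopic encoding (again not from the talk: it assumes cylindrical landmarks, collapses the panorama onto a one-dimensional ring of sectors, and ignores elevation), a snapshot can be stored as dark landmark sectors separated by light gaps:

```python
import math

def sector_snapshot(landmarks, pos, n=72):
    """Divide the 360-degree panorama at pos into n retinal sectors
    and mark each dark (covered by a landmark) or light (a gap)."""
    retina = ["light"] * n
    for lx, ly, r in landmarks:
        dx, dy = lx - pos[0], ly - pos[1]
        center = math.atan2(dy, dx)                # landmark bearing
        half = math.atan(r / math.hypot(dx, dy))   # half angular width
        for k in range(n):
            a = -math.pi + (k + 0.5) * 2.0 * math.pi / n
            diff = (a - center + math.pi) % (2.0 * math.pi) - math.pi
            if abs(diff) <= half:
                retina[k] = "dark"
    return retina

# Two hypothetical cylinders seen from the goal; '#' = dark, '.' = gap.
snap = sector_snapshot([(0.0, 60.0, 8.0), (52.0, -30.0, 8.0)], (0.0, 0.0))
print("".join("#" if s == "dark" else "." for s in snap))
```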

Model predictions vs. flight behavior: pushed off course
► After their course is diverted by the novel landmark, bees again use the next nearest landmark as a beacon to guide flight
[Figure panels: model-predicted and recorded flight paths]

Finding the match
► Near the goal, bees prefer to maintain a southern-facing body axis
– The snapshot is probably anchored to the retina (Collett & Baron, 1994)
► Bees perform bouts of lateral flight during their return to the goal
– Probably to help bring their memory into register with their current view (Collett & Rees, 1997); a toy sketch of this registration idea follows below
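
A minimal sketch of that registration idea (hypothetical point landmarks and coordinates; a real bee presumably evaluates the match continuously rather than over a discrete scan): sliding sideways sweeps the retinal bearings of nearby landmarks, and the lateral position that minimizes the mismatch with the stored bearings is the one that brings memory and view into register:

```python
import math

def bearings(points, pos):
    """Retinal bearings (radians) of point landmarks seen from pos."""
    return [math.atan2(py - pos[1], px - pos[0]) for px, py in points]

def mismatch(memory, current):
    """Total absolute bearing error, pairing features in fixed order."""
    return sum(abs((m - c + math.pi) % (2.0 * math.pi) - math.pi)
               for m, c in zip(memory, current))

points = [(0.0, 60.0), (52.0, -30.0), (-52.0, -30.0)]
memory = bearings(points, (0.0, 0.0))   # stored while at the goal
bee = (35.0, 0.0)                       # returning bee, displaced sideways

# Sweep a bout of lateral positions; the one minimizing the mismatch
# is where the stored view and the current view come into register.
scan = [bee[0] + dx for dx in range(-50, 51, 5)]
best_x = min(scan, key=lambda x: mismatch(memory, bearings(points, (x, bee[1]))))
print(best_x)  # -> 0.0, i.e. back over the goal's x position
```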

Model predictions vs. flight behavior: middle landmark removed
[Figure panels: model-predicted and recorded flight paths]

Model predictions vs. flight behavior: farthest landmark removed
[Figure panels: model-predicted and recorded flight paths]

Model predictions vs. flight behavior: nearest landmark removed
[Figure panels: model-predicted and recorded flight paths]

Fixed body axis and scanning flights
► Bees preferred a southern-facing body-axis orientation during their first pass through the goal region
► Bees rarely performed lateral scanning flights near the landmark; circling flights were the norm

Snapshot overview
► Insects memorize a visual template or "snapshot" of landmarks they experience at important locations of their environment
► The memory encodes the sizes and retinal locations of landmarks
► Insects sequentially match this template to the environment upon return while maintaining consistent body alignment
► Lateral scanning movement may aid the matching process

Testing the snapshot hypothesis in a small-scale arena environment
► Honey bee flight behavior during other visual navigation experiments in our apparatus appeared inconsistent with snapshot guidance.
► We investigated elements of snapshot navigation in a carefully controlled arena environment:
– Snapshot predictions of search behavior near the goal location (tested by Cartwright & Collett, 1979, 1982; Cheng, 1999)
– Predictions of flight paths to the goal from distances of up to two meters (largely untested)
– Consistent body-axis orientation near the goal (Collett & Baron, 1994)
– Lateral scanning flights near the goal (Collett & Rees, 1997)