Augmenting spatial awareness with the haptic radar

Presentation transcript:

Augmenting spatial awareness with the haptic radar
Álvaro Cassinelli, Carson Reynolds & Masatoshi Ishikawa
The University of Tokyo

Time to (re)grow antennae on people (and machines)? Concept & Motivation
- Antennae, hairs and cilia precede eyes in evolutionary development
- Efficient for short-range spatial awareness (unambiguous, computationally inexpensive)
- Robust (insensitive to illumination & background)
- Easily configurable (hairs at strategic locations) and potentially omni-directional
- Today's MOEMS technology enables mass-produced, tiny opto-mechanical devices...
=> Develop insect-like artificial antennae.

Speaker notes: Antennae precede eyes in evolutionary development; they provide a simple and efficient method for perceiving space. As sensors they present unique advantages: direct, computationally inexpensive range perception that is insensitive to illumination conditions. The aim is to refigure hair and antennae as a useful sensorial modality for people and machines. Conclusion: a wearable, augmenting sensing device, a double skin. For the people who were here yesterday and heard Professor Inami's talk: this clearly goes in the direction of "X-men computing". Time to (re)grow antennae on people (and machines)?

An opto-mechanical hair?
- The hair shaft is a steerable beam of light (a laser radar)
- Modular but interconnected structure (artificial skin)
- Local range-to-tactile stimulation
- Active scanning of the surroundings:
  - Proprioception-based
  - Automatic sweeping of the surroundings to extract important features (inspired by animal whiskers' motion, the two-point touch technique, etc.)
- Infrared or ultrasound rangefinder sensors can be used too, but they won't be mobile.

Optical hair module structure (diagram)
- Artificial hair module: a MOEMS-based laser radar with a steerable laser beam probing the external world, plus an electronic driver & interface
- Input: real-time 3D measures, tracking, surface roughness, etc.
- Output: tactile output (for humans); electronic input/output (for electronic devices, robots, etc.); laser display and laser cueing
- Interconnection network (to other modules)
A sketch of this interface follows.
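To make the module's input/output concrete, here is a minimal sketch of one hair module's interface in Python. All names (HairModule, sense, actuate) and default values are hypothetical illustrations, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class HairModule:
    """One artificial-hair module: a steerable range sensor
    coupled to a local tactile stimulator (names hypothetical)."""
    azimuth: float = 0.0      # beam steering angle, radians
    elevation: float = 0.0    # beam elevation angle, radians
    max_range_m: float = 2.0  # assumed laser-radar reach

    def sense(self) -> float:
        """Range to the nearest obstacle along the beam, in meters.
        A real module would query the MOEMS laser radar here."""
        raise NotImplementedError

    def actuate(self, intensity: float) -> None:
        """Drive the tactile stimulator (0.0 = off, 1.0 = max).
        A real module would set the vibration-motor drive here."""
        raise NotImplementedError
```

Each module would also expose its readings over the interconnection network, so neighboring modules, a host robot, or other electronic devices can consume the same range data.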

Possible applications
Input:
- Augmented spatial awareness & sensing
- Electronic travel aid for the visually impaired
- Augmented spatial awareness for motorcycle drivers and workers in hazardous environments
- Collision avoidance (robotic limbs, vehicles, etc.)
- Augmented sensing & tele-sensing (texture, speed)
... but also output, as human-machine interface technology:
- "Hairy electronics": a versatile human-computer interface
- Sensitive spaces: human-aware "hairy" architecture
- Display: laser-based vectorial graphics, laser annotation of the surroundings (augmented reality, attentional cues)

Speaker notes: Thanks to its reconfigurability, the proposed concept (a module formed by coupling a range detector and a tactile stimulator) can target very different areas.

Laser-based module: feasibility*
- Concept ("hairy electronics")... and its opto-mechanical implementation
- MEMS galvano-mirrors
- Smart Laser Tracking principle (can work as an antenna sweeping mode)
(*) "The Smart Laser Scanner", SIGCHI 2005

The haptic radar as a travel aid
A few fundamental questions:
- New sensorial modality: how easy is it to appropriate? (Would it be like re-exercising an atrophied one?) Is there a reflex reaction to the range-to-tactile translation?
- Is the brain capable of intuitively integrating data from eyes on the back, the front, the sides...?
Prototype characteristics and limitations:
- Configuration studied: a headband with a few modules
- Limitation: non-mobile beam
- Two prototypes built: one without rangefinders (simulated maze exploration), another with rangefinders (but short range)

Speaker notes: I am going to show a proof of principle of the modular range-to-tactile translation system. Simultaneous, "full-horizon" spatial awareness becomes possible: to my knowledge, this has never been explored before. Can we learn to integrate information from "eyes on the back (as well as on the sides)"? Is it possible to form and visualize a coherent 3D model of the world (with some reasonable training), or will 360 degrees of awareness always impose a cognitive overload (i.e., we would have to selectively pay attention to the "back", as if we were looking there using a car's rear-view mirrors)? My guess is that it is possible, because we already integrate such information from objects behind us (for instance, through the skin or the hairs). The range is very short, but this is good news: the training is not targeted at creating an entirely new modality, but rather at exercising an existing (perhaps atrophied?) one. With no mobile beam, the user relies on body proprioception to give meaning to the tactile cues.

(a) Haptic radar simulator
Q: How do participants deal with 360° of spatial awareness without previous training?
Simulator features:
- Six actuators & LED indicators
- No range sensors (controlled virtual space)
- Adjustable horizon of visibility
- Perception modalities: proximity / open-space (see the sketch below)
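A minimal sketch of how these two modalities might map virtual range to vibration intensity, as I read the slide (the linear mapping and the clamping are assumptions, not the authors' code):

```python
def vibration(distance_m: float, horizon_m: float,
              mode: str = "proximity") -> float:
    """Map a range reading to a vibration intensity in [0, 1].

    distance_m: range to the nearest virtual obstacle in this direction
    horizon_m:  the adjustable 'horizon of visibility'; anything
                farther away is ignored
    mode:       'proximity'  -> vibrate more when obstacles are near
                'open-space' -> vibrate more when the direction is clear
    """
    closeness = 1.0 - min(distance_m, horizon_m) / horizon_m
    return closeness if mode == "proximity" else 1.0 - closeness
```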

(a) Simulator demo

Simulator discussion
- Orientation is rapidly lost => add a compass?
- An interactive horizon of visibility is a necessary feature
- The "proximity feel" mode is disturbing if many actuators vibrate at the same time => compute a center of gravity
- The "open-space" perception mode is interesting, but counterintuitive (needs training)
- A continuous range-to-vibration function is not easy to interpret => discretize it (3 or 4 levels)
- Too few actuators/sensors (annoying jumping effect)
- The vibrators need to be calibrated to produce the same perceived effect (motor characteristics differ, as does skin sensitivity at each site)
A sketch of the center-of-gravity and discretization fixes follows.
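The two proposed fixes could look like this (a sketch under my own assumptions: six evenly spaced actuators, and a circular weighted mean as the center of gravity):

```python
import math

N_ACTUATORS = 6  # evenly spaced around the headband

def discretize(intensity: float, levels: int = 4) -> float:
    """Quantize a continuous intensity in [0, 1] into a few steps
    (the slide suggests 3 or 4 levels are easier to interpret)."""
    return round(intensity * (levels - 1)) / (levels - 1)

def center_of_gravity(intensities: list[float]) -> float:
    """Collapse simultaneous stimuli into one direction (radians):
    a circular weighted mean over the actuator directions, so only
    one 'combined' stimulus needs to be rendered."""
    x = sum(w * math.cos(2 * math.pi * i / N_ACTUATORS)
            for i, w in enumerate(intensities))
    y = sum(w * math.sin(2 * math.pi * i / N_ACTUATORS)
            for i, w in enumerate(intensities))
    return math.atan2(y, x)
```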

(b) Prototype with sensors
Q: Can participants avoid an unseen object approaching from behind?
Prototype features:
- Six sensors & vibrators
- Non-steerable "hairs" (infrared sensors)
- Max range 80 cm (arm's reach)
A sketch of the resulting control loop follows.
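With non-steerable hairs, the control loop reduces to pairing each sensor with its co-located vibrator. This sketch assumes hypothetical driver calls (read_ir_cm, set_motor) and a 50 Hz update rate; it is an illustration, not the actual firmware:

```python
import time

MAX_RANGE_CM = 80.0  # the infrared sensors' useful reach

def read_ir_cm(channel: int) -> float:
    """Hypothetical driver call: range from IR sensor `channel`, in cm."""
    raise NotImplementedError

def set_motor(channel: int, duty: float) -> None:
    """Hypothetical driver call: vibration-motor drive in [0, 1]."""
    raise NotImplementedError

def control_loop() -> None:
    """Each sensor drives its co-located vibrator: the closer the
    obstacle, the stronger the vibration. This is the linear mapping
    the Discussion slide later criticizes as too simplistic."""
    while True:
        for ch in range(6):
            distance = read_ir_cm(ch)
            set_motor(ch, max(0.0, 1.0 - distance / MAX_RANGE_CM))
        time.sleep(0.02)  # ~50 Hz update (assumed)
```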

Experiment design / results
- Hypothesis: participants can avoid an unseen object approaching from behind
- N = 10 participants, each with 3 trials
- In 26 of 30 trials, participants moved to avoid the unseen stimulus (p = 1.26×10^-5)
- In a follow-up questionnaire, participants rated the system as more of a help (p = 0.005), easy (p = 0.005), and intuitive (p = 0.005)

(b) Collision avoidance demo

Discussion: immediate problems & possible improvements
- Range detection too short (1 meter max) [the next prototype will use ultrasound sensors (up to 6 meters), then laser rangefinders]
- Simultaneous stimuli are confusing [activate only one actuator at any time, perhaps on the opposite side (showing the direction of a clear path)]
- Low spatial resolution of the actuators [more vibrators / different actuators]
- Variable motor characteristics [individual calibration]
- The linear range-to-tactile function is too simplistic [log scale / discrete levels; see the sketch below]
- The effect of rotation is confusing in the simulator [head tracking]
- The sense of direction is rapidly lost when there is no "reference background" [use the "interactive horizon" technique & add a compass cue]

Speaker notes: Note the importance of pre-processing information in order to REDUCE THE AMOUNT OF INFORMATION. Compare the works of Leslie Kay (Sonicguide), Tony Heyes (Sonic Pathfinder) and Allan Dodds (http://www.seeingwithsound.com/sensub.htm): "Heyes' approach is rather different from Kay's in that the Sonic Pathfinder deliberately supplies only the minimum but most relevant information for travel needed by the user, whereas Kay strives for more information-rich sonar-based displays." Both systems present information as sound signals...
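Two of the proposed fixes, sketched together; the working range constants and the winner-take-all rule are my illustration, not the authors' design:

```python
import math

def log_intensity(distance_m: float,
                  d_min: float = 0.1, d_max: float = 6.0) -> float:
    """Logarithmic range-to-vibration mapping: equal *ratios* of
    distance give equal steps of intensity (d_min/d_max assumed)."""
    d = min(max(distance_m, d_min), d_max)
    return 1.0 - math.log(d / d_min) / math.log(d_max / d_min)

def winner_take_all(distances: list[float]) -> list[float]:
    """Keep only the actuator facing the nearest obstacle active,
    so simultaneous stimuli no longer confuse the wearer."""
    nearest = min(range(len(distances)), key=lambda i: distances[i])
    return [log_intensity(d) if i == nearest else 0.0
            for i, d in enumerate(distances)]
```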

Future research directions
- Ultrasound sensors (more range: up to 6 meters)
- MEMS-based steerable laser beams (automatic sweeping)
- Evaluate other tactile actuators (skin stretch?) and tactile signals (e.g., tactons)
- More compact MEMS device modules for higher density
- A grid network of interconnected modules
- More comprehensive experiments: Can participants navigate through crowds? Can participants predict whether an object will hit them? In the long term, is there habituation to the vibration stimulus?

Speaker notes: Run fMRI tests with this device to see how the brain learns to decode this information (using pneumatic actuators, as in: Zappe AC, Maucher T, Meier K, Scheiber C, "Evaluation of a pneumatically driven tactile stimulator device for vision substitution during fMRI studies", Magn Reson Med. 2004 Apr; 51(4): 828-34. http://www.hubmed.org/display.cgi?uids=15065257). Other interesting research directions: study the haptic radar for the rehabilitation of patients with hemispatial neglect; use the optical hair to write on / annotate objects in the surroundings.

Questions?

Proof of principle: laser annotation
- Laser display
- Visual cues (attentional cueing, augmented reality)
- Screen-less display
- Retinal display?

Expected (final) module performance
- Module size: roughly 3×2 cm², including laser, micromirrors and microcontroller electronics
- Sampling rate: kilohertz range
- Range measurement: using just intensity and a modulated laser diode, up to one or two meters
- Angular "sweeping" speed: depends on the selected micromirror (e.g., 500 rad/s for MEL-ARI devices)
- Power: to be studied (at least 200 mW/module...)
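The intensity-only ranging mentioned above could be sketched as follows, assuming the returned intensity of the modulated beam falls off roughly with the square of distance for a diffuse target; the inverse-square model and the calibration constant are illustrative assumptions, not a measured characteristic of the module:

```python
def range_from_intensity(i_return: float, i_background: float,
                         k_cal: float) -> float:
    """Estimate range (m) from returned laser intensity alone.

    Assumes I ≈ k / d² for a diffusely reflecting target; k_cal is
    obtained by calibrating against a target at a known distance.
    Modulating the diode lets the ambient term i_background be
    measured and subtracted.
    """
    signal = max(i_return - i_background, 1e-9)  # avoid divide-by-zero
    return (k_cal / signal) ** 0.5
```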

Smart laser scanning principle
- The laser excursion is intelligently confined to the area of interest
- Simplest laser trajectory for tracking: a circular laser "saccade"
- Fast! (kHz range)
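One way to read the circular-saccade idea in code (a simplified sketch: the on_target photodetector callback and the update rule are my assumptions, and the real controller runs at kHz rates on dedicated hardware):

```python
import math

def saccade_step(cx: float, cy: float, radius: float,
                 on_target, n_samples: int = 32):
    """One circular 'saccade' around (cx, cy): sample the reflected
    light along a small circle, then nudge the circle toward the
    samples that still see the target, keeping the laser excursion
    confined to the object being tracked."""
    dx = dy = 0.0
    hits = 0
    for k in range(n_samples):
        theta = 2 * math.pi * k / n_samples
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        if on_target(x, y):          # photodetector sees a reflection
            dx += math.cos(theta)
            dy += math.sin(theta)
            hits += 1
    if hits:                         # shift toward the target's side
        cx += radius * dx / hits
        cy += radius * dy / hits
    return cx, cy
```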