Presentation transcript:

Implementation of an Autonomous Navigation System
오세영, 이진수
Department of Electronic and Electrical Engineering & Brain Research Center, Pohang University of Science and Technology (POSTECH)

Research Objectives
 Landmark-based navigation using a vision sensor
 Environment recognition and path planning based on intelligent sensor fusion

Research Topics
 Visual Servoing
 Stochastic Map Building
 Multisensor Integration & Fusion

Research Topics – Visual Servoing
1. Feature Space Control Law (Image Based Visual Servoing)
2. Cartesian Control Law (Position Based Visual Servoing)

1. Image Based Visual Servoing (IBVS)
Control loop: Desired Image → (+/−) → Feature Space Control Law → Motion Control → Robot → Image Feature Extraction, fed back to the comparator
 Advantages
- No need for camera calibration or an image-to-workspace transform
- A fuzzy logic controller is used for motion control
 Disadvantage
- No guarantee of an efficient trajectory in the workspace
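To make the feature-space control law concrete, here is a minimal sketch of the classical IBVS velocity law v = −λ·L⁺(s − s*) for normalized point features, where L is the stacked interaction matrix. This illustrates the generic technique, not the fuzzy-logic motion controller of this work; the function names, gain, and per-point depth inputs are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of one normalized image point
    (x, y) at depth Z, for a 6-DOF camera velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y,  -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * L^+ (s - s*), one interaction
    matrix stacked per tracked point."""
    s = np.asarray(features, dtype=float)          # shape (N, 2)
    error = (s - np.asarray(desired)).reshape(-1)  # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    return -gain * np.linalg.pinv(L) @ error       # camera velocity screw
```

The error is regulated directly in image space, which is why no image-to-workspace transform is needed; the Cartesian path is left uncontrolled, hence the disadvantage listed above.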

Research Topics – Visual Servoing
 Temporary Desired Feature Method
- Pre-defined path navigation: extracted image features (x, y) are compared against temporary desired features produced by a neural network, and the error drives the robot from START to GOAL
[Figure: the control loop, and the resulting START-to-GOAL trajectories on the image space and on the work space]
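The neural network that generates the temporary desired features is specific to this work and not reproduced here; as a hypothetical stand-in, the sketch below produces intermediate desired features by interpolating along the image-space path, which captures the idea of servoing through waypoints instead of jumping straight to the goal feature.

```python
import numpy as np

def temporary_desired_features(start, goal, n_steps):
    """Intermediate desired feature vectors between the start and goal
    image features; the robot servos to each waypoint in turn."""
    alphas = np.linspace(0.0, 1.0, n_steps + 1)[1:]
    return [(1.0 - a) * np.asarray(start) + a * np.asarray(goal)
            for a in alphas]
```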

2. Position Based Visual Servoing (PBVS)
Control loop: Desired Pose → (+/−) → Cartesian Control Law → Motion Control → Robot → Image Feature Extraction → Image to Workspace Transform, fed back to the comparator
 Image to Workspace Transform (2D-2D mapping)
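For a camera observing the floor plane, a 2D-2D image-to-workspace mapping is commonly a planar homography, and the Cartesian control law can be a simple pose regulator. The sketch below assumes both (a pre-calibrated matrix H and illustrative gains); it shows one standard realization, not necessarily the controller used in this work.

```python
import numpy as np

def image_to_workspace(H, u, v):
    """Map an image pixel (u, v) to floor-plane coordinates with a
    pre-calibrated 3x3 homography H (the 2D-2D mapping of the slide)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def cartesian_control(pose, goal, k_v=0.5, k_w=1.0):
    """One common Cartesian law for a planar robot at pose (x, y, theta):
    drive toward the goal position recovered in the workspace."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    rho = np.hypot(dx, dy)                            # distance to goal
    alpha = np.arctan2(dy, dx) - pose[2]              # bearing error
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho * np.cos(alpha), k_w * alpha     # (linear, angular)
```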

Research Topics – Visual Servoing
 Landmark Prediction Method
- Predicts the landmark position from odometry data when the landmark is out of the image boundary
- Also useful when the detected landmark is uncertain due to noise disturbance
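A minimal sketch of the idea, assuming the landmark's last known global position, a pose from odometry, and hypothetical pinhole intrinsics fx and cx: the odometry pose places the landmark in the robot frame, and projection predicts where it should appear, or reports that it has left the image.

```python
import numpy as np

def landmark_in_robot_frame(landmark_xy, pose):
    """Expected landmark position in the robot frame, from the odometry
    pose (x, y, theta) and the landmark's last known global position."""
    x, y, th = pose
    dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
    c, s = np.cos(th), np.sin(th)
    return np.array([c * dx + s * dy,       # forward component
                     -s * dx + c * dy])     # leftward component

def predicted_pixel(landmark_robot, fx=500.0, cx=320.0, width=640):
    """Pinhole projection onto the image row; None if out of the boundary."""
    fwd, left = landmark_robot
    if fwd <= 0.0:
        return None                         # behind the camera
    u = cx - fx * left / fwd
    return u if 0.0 <= u < width else None
```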

Stochastic Map Building
Two main methods of environment representation:
 Occupancy grid representation
- Mainly uses ultrasonic sensors, which are cheap and easy to use
- High memory requirements in large real environments
- Cannot be used directly for position estimation
 Geometric primitive representation
- Mainly uses a 2D laser rangefinder
- Extracts line or circle primitives
A 2D laser rangefinder is an optical sensor that scans its surroundings with infrared laser beams.

Research Topics – Stochastic Map Building
1. Sensor Data from the 2D Laser Rangefinder
- Provides denser scans and more accurate measurements
- The measurements yield line features and some clusters
- However, the lines may not be clear while the robot moves, so a stochastic feature representation is needed

Research Topics – Stochastic Map Building
2. Stochastic Feature Extraction
 Sensor data clustering
- Groups the data and separates regions by checking distances: if the distance between two consecutive scan points exceeds a threshold, the points belong to different clusters
 The iterative end point fit (IEPF) method
- Two connecting walls are discriminated into two cluster regions
- Recursively splits a set of points C into two subsets C1 and C2 (a sketch of both steps follows below)
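A minimal sketch of both steps under assumed thresholds: gap-based clustering of the ordered scan, followed by recursive IEPF splitting of each cluster at the point farthest from the chord between its endpoints.

```python
import numpy as np

def cluster_scan(points, gap):
    """Split an ordered (N, 2) scan wherever two consecutive points are
    farther apart than `gap`: they then belong to different clusters."""
    clusters, current = [], [points[0]]
    for p, q in zip(points[:-1], points[1:]):
        if np.hypot(*(q - p)) > gap:
            clusters.append(np.array(current))
            current = []
        current.append(q)
    clusters.append(np.array(current))
    return clusters

def iepf(points, threshold):
    """Iterative end point fit: recursively split C into C1 and C2 at the
    point farthest from the line joining the first and last points."""
    if len(points) <= 2:
        return [points]
    a, b = points[0], points[-1]
    d = b - a
    dists = np.abs(d[1] * (points[:, 0] - a[0]) -
                   d[0] * (points[:, 1] - a[1])) / np.hypot(*d)
    k = int(np.argmax(dists))
    if dists[k] < threshold:
        return [points]                     # one straight segment
    return iepf(points[:k + 1], threshold) + iepf(points[k:], threshold)
```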

Research Topics – Stochastic Map Building
 Conversion of measured points
- The position of each measured point w.r.t. the global coordinate frame is determined from the measured distance, the sensor bearing, and the robot position
[Figure: the scanning area of the rangefinder around the robot]
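In the usual notation (mine, not the slide's), a point measured at range d_j and sensor bearing φ_j from a robot at pose (x_r, y_r, θ_r) lands at x_j = x_r + d_j·cos(θ_r + φ_j), y_j = y_r + d_j·sin(θ_r + φ_j). A minimal sketch:

```python
import numpy as np

def scan_to_global(ranges, bearings, pose):
    """Measured points in the global frame from ranges d_j, sensor
    bearings phi_j, and the robot pose (x_r, y_r, theta_r)."""
    x_r, y_r, th_r = pose
    d = np.asarray(ranges, dtype=float)
    ang = th_r + np.asarray(bearings, dtype=float)
    return np.column_stack([x_r + d * np.cos(ang),
                            y_r + d * np.sin(ang)])
```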

Research Topics – Stochastic Map Building
 Feature Extraction
- Each cluster region Ci is represented by: the parameters of its line expression, the mean vector of the object positions in Ci, and the vector of eigenvalues (of the position covariance)
[Figure: a cluster of points with its fitted line in the x-y plane]

Research Topics – Stochastic Map Building
 Linear Regression
- Intermediate regression parameters are computed to represent the cluster by the general parameters, where λ_max denotes the larger eigenvalue of the cluster covariance and λ_min its smaller eigenvalue

Research Topics – Stochastic Map Building
 The eigenvalues indicate how the object positions in the cluster are scattered about the mean:
- Case A (λ_max ≫ λ_min): the object positions are strongly aligned, as commonly found for obstacles such as walls
- Case B (λ_max ≈ λ_min): the object positions are widely scattered, as commonly found for tiny obstacles located close together
[Figure: example clusters for Case A and Case B]

Research Topics – Stochastic Map Building
- Finally, the line parameters are determined from the cluster mean and the eigen-decomposition of its covariance, as sketched below.
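A minimal sketch of the per-cluster statistics assembled over the last few slides: the mean vector, the covariance eigenvalues (λ_max, λ_min), and a total-least-squares line through the mean along the principal eigenvector. This is a standard reconstruction of line fitting by eigen-decomposition, not necessarily the authors' exact parameterization.

```python
import numpy as np

def cluster_line_features(cluster):
    """Mean, eigenvalues (largest first), and line parameters (slope a,
    intercept b of y = a*x + b) for one (N, 2) cluster of points."""
    mean = cluster.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(cluster.T))  # ascending order
    v = evecs[:, -1]                  # principal direction (lambda_max)
    a = v[1] / v[0]                   # note: undefined for vertical lines
    b = mean[1] - a * mean[0]         # the line passes through the mean
    return mean, evals[::-1], (a, b)
```

A ratio λ_max ≫ λ_min then flags the wall-like Case A of the previous slide.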

Research Topics – Stochastic Map Building
3. Mobile Robot System
- The ALiVE2 mobile robot system is equipped with a 2D laser rangefinder.

Multisensor Integration & Fusion
Multisensor integration is the synergistic use of information from multiple sensory devices to assist a system in accomplishing a task. Redundant, complementary, or more timely information makes the system's information more reliable and accurate.

Research Topics – Multisensor Integration & Fusion
Multisensor Fusion
 Signal-level fusion can be used in real-time applications and can be considered just an additional step in the overall processing of the signals (see the sketch below)
 Pixel-level fusion can improve the performance of many image-processing tasks such as segmentation
 Feature- and symbol-level fusion can provide an object recognition system with additional features that increase its recognition capability
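As a concrete instance of signal-level fusion of redundant sensors, the sketch below applies inverse-variance weighting, the maximum-likelihood way to combine independent unbiased measurements of the same quantity; the fused variance never exceeds the best single sensor's, which is the reliability gain described above. The sensor readings in the usage comment are made up.

```python
import numpy as np

def fuse_redundant(values, variances):
    """Inverse-variance (maximum-likelihood) fusion of redundant
    measurements of the same quantity from independent sensors."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)   # fused estimate and its variance

# e.g., a laser (variance 0.01) and a sonar (variance 0.25) range reading:
# fuse_redundant([2.02, 2.30], [0.01, 0.25]) -> (~2.03, ~0.0096)
```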

Research Topics – Multisensor Integration & Fusion
Multisensor Fusion Diagram
[Figure: fusion hierarchy rising from signal-level fusion through feature-level fusion to decision-level fusion]

Research Topics – Multisensor Integration & Fusion
Future Work
 IBVS + PBVS
 Stochastic Map Building: real implementation is in progress
 Multisensor Integration: modeling of the human sensor fusion process, and real H/W experiments