Se-Young Oh, Jin-Soo Lee, Dept. of Electronic and Electrical Engineering, Brain Research Center, POSTECH

1 Implementation of an Autonomous Navigation System
Se-Young Oh, Jin-Soo Lee
Dept. of Electronic and Electrical Engineering, Brain Research Center, POSTECH

2 Research Goals
Landmark-based navigation using a vision sensor
Environment recognition and path planning based on intelligent sensor fusion

3 Research Topics
Visual Servoing
Stochastic Map Building
Multisensor Integration & Fusion

4 Research Topics – Visual Servoing
1. Feature Space Control Law (Image-Based Visual Servoing)
2. Cartesian Control Law (Position-Based Visual Servoing)

5 1. Image-Based Visual Servoing (IBVS)
Control loop: Desired Image → (+/−) → Feature Space Control Law → Motion Control → Robot → Image Feature Extraction → (feedback to −)
Advantages
- No need for camera calibration or an image-to-workspace transform
- A fuzzy logic controller is used for motion control
Disadvantage
- No guarantee of an efficient trajectory in the workspace
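The IBVS loop on slide 5 can be sketched as a feedback law acting purely on image features. This is a minimal illustration, not the slide's actual controller: a proportional gain stands in for the fuzzy logic motion controller, and all names and numbers are assumptions.

```python
# Minimal IBVS loop sketch: the error between desired and measured image
# features drives motion directly in image space, so no camera calibration
# or image-to-workspace transform is needed.

def feature_space_control(desired, measured, gain=0.5):
    """Return a motion command proportional to the image-feature error."""
    return [gain * (d - m) for d, m in zip(desired, measured)]

# One pass of the feedback loop: compute the error-driven command from the
# extracted feature, then let a simulated robot step reduce the error.
desired = [320.0, 240.0]    # desired feature location (image centre)
measured = [300.0, 260.0]   # feature extracted from the current image

command = feature_space_control(desired, measured)
measured = [m + c for m, c in zip(measured, command)]  # simulated robot step
```

Because the loop closes on image features alone, it converges in image space even with an uncalibrated camera, which is exactly the advantage (and the workspace-trajectory disadvantage) the slide lists.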

6 Research Topics – Visual Servoing
Temporary Desired Feature Method
- The robot navigates a pre-defined path from START to GOAL: image features (x, y) are extracted, compared against a temporary desired feature on the path, and the error is fed to a neural network that generates the motion command.
- The resulting trajectory is shown both in image space and in work space.

7 2. Position-Based Visual Servoing (PBVS)
Control loop: Desired Pose → (+/−) → Cartesian Control Law → Motion Control → Robot → Image Feature Extraction → Image-to-Workspace Transform → (feedback to −)
The image-to-workspace transform is a 2D-2D mapping.
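The PBVS pipeline above can be sketched in two steps: map the image feature into workspace coordinates, then apply a Cartesian control law on the pose error. The slide only says the transform is a "2D-2D mapping", so the affine form, matrix values, and gain below are illustrative assumptions.

```python
# PBVS sketch: image feature -> workspace position -> Cartesian pose error.

def image_to_workspace(u, v, A=((0.01, 0.0), (0.0, 0.01)), t=(1.0, 2.0)):
    """Map an image feature (u, v) to workspace coordinates via an assumed
    precalibrated 2D-2D affine transform (A, t)."""
    x = A[0][0] * u + A[0][1] * v + t[0]
    y = A[1][0] * u + A[1][1] * v + t[1]
    return x, y

def cartesian_control(desired_pose, current_pose, gain=1.0):
    """Cartesian control law: command proportional to the pose error."""
    return tuple(gain * (d - c) for d, c in zip(desired_pose, current_pose))

current = image_to_workspace(320, 240)          # estimated robot position
command = cartesian_control((5.0, 5.0), current)  # drive toward desired pose
```

Unlike IBVS, the error here lives in the workspace, so the commanded trajectory is efficient in Cartesian space, at the cost of needing the calibrated transform.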

8 Research Topics – Visual Servoing
Landmark prediction method: when the landmark is out of the image boundary, its position is predicted using odometry data.
- Also useful when the detected landmark is uncertain due to noise disturbance.
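The prediction idea on slide 8 amounts to dead reckoning: re-express the last known landmark position in the robot frame after the odometry increment. The function below is a sketch under that interpretation; variable names and numbers are assumptions, not the slide's notation.

```python
import math

# Predict where a landmark lies relative to the robot after an odometry
# step, so it can be tracked while outside the image (or while detections
# are too noisy to trust).

def predict_landmark(rel_xy, dx, dy, dtheta):
    """Landmark position in the new robot frame, given the robot moved by
    (dx, dy) and rotated by dtheta, all expressed in the old robot frame."""
    # Translate into the moved frame, then rotate by -dtheta.
    tx, ty = rel_xy[0] - dx, rel_xy[1] - dy
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * tx - s * ty, s * tx + c * ty)

# Robot drives 1 m straight ahead: a landmark that was 3 m in front
# should now be predicted 2 m in front.
pred = predict_landmark((3.0, 0.0), 1.0, 0.0, 0.0)
```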

9 Stochastic Map Building
Two main methods of environment representation:
Occupancy grid representation
- Mainly uses ultrasonic sensors, which are cheap and easy to use
- High memory requirements in a large real environment
- Cannot be used directly for position estimation
Geometric primitive representation
- Mainly uses a 2D laser rangefinder
- Extracts line or circle primitives
2D laser rangefinder: an optical sensor that scans its surroundings with infrared laser beams

10 Research Topics – Stochastic Map Building
Sensor data from the 2D laser rangefinder
- It provides denser scans and more accurate measurements.
- The measurements yield line features and some clusters.
- However, the lines may not be clear while the robot moves, so a stochastic feature is needed.

11 2. Stochastic Feature Extraction
Research Topics – Stochastic Map Building
Sensor data clustering
- Group the data and separate the regions by checking the distance between consecutive points: if the distance exceeds a threshold, the two points are placed in different clusters.
The iterative end point fit (IEPF) method
- Two connecting walls are discriminated into two clustering regions.
- Recursively splits a set of points C into two subsets C1 and C2.
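The two steps on slide 11 can be sketched directly: gap-based clustering of the scan, then IEPF splitting each cluster at the point farthest from the chord joining its endpoints. The thresholds and the recursion structure below are assumptions chosen for illustration.

```python
import math

def cluster(points, d_max=0.5):
    """Split an ordered scan into clusters wherever the gap between
    consecutive points exceeds d_max."""
    clusters = [[points[0]]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > d_max:   # large gap -> start a new cluster
            clusters.append([])
        clusters[-1].append(q)
    return clusters

def iepf(points, split_threshold=0.2):
    """Iterative end point fit: recursively split C into C1 and C2 at the
    point farthest from the line through the endpoints."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1) or 1.0
    dists = [abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / chord
             for x, y in points]
    k = max(range(len(points)), key=dists.__getitem__)
    if dists[k] <= split_threshold or not 0 < k < len(points) - 1:
        return [points]               # acceptably straight: one segment
    return iepf(points[:k + 1]) + iepf(points[k:])   # C -> C1, C2

# An L-shaped corner (two connecting walls) is discriminated into two segments.
corner = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
segments = iepf(corner)
```

Note that the corner point is shared by both subsets, so the two fitted walls meet at the split point.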

12 Research Topics – Stochastic Map Building
Conversion of measured points
- The position of each measured point w.r.t. the global coordinate frame is determined from the measured distance, the sensor bearing, and the robot position.
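The conversion on slide 12 is a polar-to-Cartesian change of frame. Since the slide's original notation was not preserved, the names below follow its verbal description: range `d`, sensor bearing `phi`, and robot pose `(x_r, y_r, theta_r)`.

```python
import math

def measured_point_global(robot_pose, d, phi):
    """Global position of a range reading d at sensor bearing phi, taken
    from a robot at pose (x_r, y_r, theta_r) in the global frame."""
    x_r, y_r, theta_r = robot_pose
    return (x_r + d * math.cos(theta_r + phi),
            y_r + d * math.sin(theta_r + phi))

# Robot at (1, 1) facing +x; a 2 m reading at bearing 90 degrees lands
# 2 m to the robot's left, i.e. at (1, 3) in the global frame.
p = measured_point_global((1.0, 1.0, 0.0), 2.0, math.pi / 2)
```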

13 Research Topics – Stochastic Map Building
Feature Extraction
Each cluster region Ci is represented by:
- the parameters of the line expression
- the mean vector of the object positions in Ci
- the vector of eigenvalues

14 Research Topics – Stochastic Map Building
Linear Regression
- Intermediate parameters are used to represent the cluster by general parameters, distinguishing its larger eigenvalue from its smaller one.
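The cluster summary on slides 13-15 (mean vector plus eigenvalues) can be computed in closed form for 2D points. The slide's formulas were not preserved in the transcript, so this is a standard reconstruction: the eigenvalues are those of the cluster's 2x2 covariance matrix, with the larger one measuring spread along the fitted line and the smaller one spread across it.

```python
import math

def cluster_statistics(points):
    """Return the mean vector and the (larger, smaller) eigenvalues of the
    2x2 covariance matrix of a cluster of 2D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form eigenvalues of the symmetric matrix [[sxx, sxy], [sxy, syy]].
    mean_ev = (sxx + syy) / 2
    delta = math.hypot((sxx - syy) / 2, sxy)
    return (mx, my), (mean_ev + delta, mean_ev - delta)

# Perfectly collinear points: the smaller eigenvalue vanishes, the
# wall-like case (Case A) on slide 15.
mean, (lam1, lam2) = cluster_statistics([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```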

15 Research Topics – Stochastic Map Building
The eigenvalues indicate how the object positions in the cluster are scattered about the mean.
(Case A) The object positions are closely aligned, as commonly found in obstacles such as walls.
(Case B) The object positions are widely scattered, as commonly found with tiny obstacles located close together.

16 Research Topics – Stochastic Map Building
Finally, the parameters are determined from these intermediate quantities.

17 3. Mobile Robot System
Research Topics – Stochastic Map Building
The ALiVE2 mobile robot system is equipped with a 2D laser rangefinder.

18 Multisensor Integration & Fusion
Multisensor integration is the synergistic use of information provided by multiple sensory devices to assist a system in accomplishing a task. Redundant, complementary, or more timely information makes the resulting system more reliable and accurate.

19 Research Topics – Multisensor Integration & Fusion
Multisensor Fusion
- Signal-level fusion can be used in real-time applications and can be considered just an additional step in the overall processing of the signals.
- Pixel-level fusion can be used to improve the performance of many image-processing tasks such as segmentation.
- Feature- and symbol-level fusion can provide an object recognition system with additional features that increase its recognition capability.
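As an illustration of the signal level in the list above, two noisy readings of the same quantity can be fused by inverse-variance weighting, a standard choice at this level. The sensors, values, and variances below are made-up assumptions, not from the slides.

```python
def fuse(readings):
    """Fuse (value, variance) pairs by inverse-variance weighting.
    Returns the fused value and its (reduced) variance."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# A coarse ultrasonic reading and a precise laser reading of one distance:
# the fused estimate leans toward the laser and is more certain than either.
fused, var = fuse([(2.10, 0.04), (2.00, 0.01)])
```

The fused variance is always smaller than the smallest input variance, which is the "more reliable and accurate" gain slide 18 describes.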

20 Multisensor Fusion Diagram
Research Topics – Multisensor Integration & Fusion
Signal-Level Fusion → Feature-Level Fusion → Decision-Level Fusion

21 Future Work
Research Topics – Multisensor Integration & Fusion
- IBVS + PBVS
- Stochastic Map Building: real implementation is in progress.
- Multisensor Integration: modeling of the human sensor fusion process; real H/W experiments.

