Autonomous Vehicle Competition


Autonomous Vehicle Competition Laksono Kurnianggoro laksono@islab.ulsan.ac.kr 18 July 2014

Activities
Last activities:
- Went to Seoul to test the AVC car.
- Examined the image moments.
- Modified the car tracking method.

IECON Registration

Image Moments

$m_{ij} = \sum_{x,y} I(x,y)\, x^i y^j$

$\bar{x} = \frac{m_{10}}{m_{00}}; \quad \bar{y} = \frac{m_{01}}{m_{00}}$
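As a sketch, the moments above can be computed directly with NumPy (the helper names here are illustrative, not from the original code):

```python
import numpy as np

def raw_moment(img, i, j):
    """Raw image moment m_ij = sum over (x, y) of I(x, y) * x^i * y^j."""
    h, w = img.shape
    x = np.arange(w).reshape(1, w)   # column index plays the role of x
    y = np.arange(h).reshape(h, 1)   # row index plays the role of y
    return float(np.sum(img * (x ** i) * (y ** j)))

def centroid(img):
    """Center of mass (x_bar, y_bar) = (m10/m00, m01/m00)."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A single bright pixel at (x=3, y=1): the centroid is that pixel.
img = np.zeros((4, 5))
img[1, 3] = 1.0
print(centroid(img))  # -> (3.0, 1.0)
```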

Problem with Car Detection Dataset
Some parts of the dataset are background, which affects the tracking result.
(Figures: detected region; detected region in HSV; back-projection image.)

Solution
Either retrain the SVM on a better dataset (constructing the new dataset takes time), OR modify the tracking model: use a smaller window size for the histogram computation.
(Figures: detected region; region used in histogram computation; result.)

Appendix

Part 1: Calibration of Camera and LRF
Relation between a point in the camera's and the LRF's frames of reference:

$P_{LRF} = \Phi P_{cam} + \Delta$  (1)

where:
- $\Phi$ is the rotation from camera to LRF.
- $\Delta$ is the translation from camera to LRF.

The 3D location of $P_{cam}$ is directly known; however, it lies on the calibration plane (checkerboard).
(Figure: calibration system setup.)

Point-to-Plane Constraint
Using a camera calibration method, the checkerboard's normal and its distance to the camera can be found.
The constraint: in the camera coordinate system, every point $P$ on the checkerboard is constrained by its point-to-plane distance to an imaginary plane through the camera's position, parallel to the checkerboard.
With unit normal $\hat{n}$ and distance $d$, points on the board satisfy $-\hat{n} \cdot P = d$. Defining $N = -d\,\hat{n}$, the constraint becomes

$N \cdot P = d^2$  (2)

(Figure: camera position, checkerboard, and the imaginary plane parallel to the checkerboard.)

Formulation of the Calibration System
Main relations:

$N \cdot P_{cam} = d^2$  (2)
$P_{cam} = \Phi^{-1}(P_{LRF} - \Delta)$  (1)

Substituting (1) into (2):

$N \cdot \Phi^{-1}(P_{LRF} - \Delta) = d^2$  (3)

Simplification: all laser points lie in the scan plane $Y = 0$, so in homogeneous form $P_{LRF} = [X; Z; 1]$. Formula (3) can then be rewritten as

$N \cdot H P_{LRF} = d^2$  (4)

where $H = \Phi^{-1} [\,e_1 \;\; e_3 \;\; -\Delta\,]$, with $e_1 = [1;0;0]$ and $e_3 = [0;0;1]$.

Knowing $N$, $d$, and $P_{LRF}$, $H$ can be computed.

Decomposing H into Rotation and Translation
Where $H_i$ is the $i^{th}$ column of matrix $H$:

$\Phi = [H_1,\; -H_1 \times H_2,\; H_2]^\top$
$\Delta = -[H_1,\; -H_1 \times H_2,\; H_2]^\top H_3 = -\Phi H_3$
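A minimal NumPy sketch of this decomposition, assuming $H = \Phi^{-1}[e_1, e_3, -\Delta]$ has already been estimated from (4):

```python
import numpy as np

def decompose_H(H):
    """Recover rotation Phi and translation Delta from H = Phi^-1 [e1, e3, -Delta]."""
    H1, H2, H3 = H[:, 0], H[:, 1], H[:, 2]
    Phi = np.stack([H1, -np.cross(H1, H2), H2])  # the rows of Phi
    Delta = -Phi @ H3
    return Phi, Delta

# Round-trip check with a synthetic rotation and translation.
Phi_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
Delta_true = np.array([0.1, -0.2, 0.5])
e1, e3 = np.eye(3)[:, 0], np.eye(3)[:, 2]
H = Phi_true.T @ np.column_stack([e1, e3, -Delta_true])
Phi, Delta = decompose_H(H)
print(np.allclose(Phi, Phi_true), np.allclose(Delta, Delta_true))  # True True
```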

Result
(Figures: camera's image; laser's visualization.)

Calibration in AVC’s Car

Conversion of a 3D Point to an Image Point
Input: Points3D, $K$, $kc$, where

$K = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$

Normalization (for each point $i$, $1 \le i \le nPoints$):
$p_i.x = Points3D_i.x / Points3D_i.z$; $p_i.y = Points3D_i.y / Points3D_i.z$

Distortion radius:
$r_i^2 = p_i.x^2 + p_i.y^2$; $r_i^4 = r_i^2 r_i^2$; $r_i^6 = r_i^4 r_i^2$

Radial distortion:
$cdist_i = 1.0 + kc_0 r_i^2 + kc_1 r_i^4 + kc_4 r_i^6$
$p\_rd_i = p_i \cdot cdist_i$

Tangential distortion:
$a1_i = 2.0\, p_i.x\, p_i.y$
$a2_i = r_i^2 + 2.0\, p_i.x^2$
$a3_i = r_i^2 + 2.0\, p_i.y^2$
$\Delta x_i = kc_2 a1_i + kc_3 a2_i$
$\Delta y_i = kc_2 a3_i + kc_3 a1_i$
$p\_td_i.x = p\_rd_i.x + \Delta x_i$
$p\_td_i.y = p\_rd_i.y + \Delta y_i$

Skew ($\alpha = \gamma / f_x$):
$p\_s_i.x = p\_td_i.x + \alpha\, p\_td_i.y$
$p\_s_i.y = p\_td_i.y$

Points on the image:
$p2d.x = p\_s_i.x \cdot f_x + c_x$
$p2d.y = p\_s_i.y \cdot f_y + c_y$
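The steps above follow the standard pinhole-plus-distortion (Plumb Bob) camera model; a NumPy sketch for a single point (the helper name is hypothetical):

```python
import numpy as np

def project_point(P, K, kc):
    """Project a 3D point in the camera frame to pixel coordinates,
    applying radial distortion, tangential distortion, and skew.
    K = [[fx, gamma, cx], [0, fy, cy], [0, 0, 1]]; kc = (k1, k2, p1, p2, k3)."""
    fx, gamma, cx = K[0, 0], K[0, 1], K[0, 2]
    fy, cy = K[1, 1], K[1, 2]
    x, y = P[0] / P[2], P[1] / P[2]                            # normalization
    r2 = x * x + y * y
    cdist = 1.0 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3   # radial
    a1 = 2.0 * x * y
    a2 = r2 + 2.0 * x * x
    a3 = r2 + 2.0 * y * y
    xd = x * cdist + kc[2] * a1 + kc[3] * a2                   # tangential
    yd = y * cdist + kc[2] * a3 + kc[3] * a1
    xs = xd + (gamma / fx) * yd                                # skew
    return np.array([xs * fx + cx, yd * fy + cy])

# With zero distortion and no skew this reduces to the plain pinhole projection.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
p = project_point(np.array([1.0, 0.5, 2.0]), K, np.zeros(5))
print(p)  # -> [570. 365.]
```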

Part 2: Registration Problem
In the dataset, the laser was not registered properly to the camera.
Solution: use several sliding windows.
(Figure: red boxes are the sliding windows; cyan is the detected car.)

Part 4: Predicting the Bounding Box Position
When the result of car detection is available, the displacement (dx, dy) between the previous and current detections is computed; the bounding box position in the next frame is then predicted by applying that displacement.
(Figures: frames n and n+1, showing the previous and current detections and the prediction.)
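A minimal sketch of this constant-velocity prediction (the function name and box representation are illustrative, not from the original code):

```python
def predict_box(prev_box, curr_box):
    """Predict the next (x, y, w, h) box by shifting the current box
    by the displacement (dx, dy) observed between the previous and
    current detections."""
    dx = curr_box[0] - prev_box[0]
    dy = curr_box[1] - prev_box[1]
    x, y, w, h = curr_box
    return (x + dx, y + dy, w, h)

# The box moved +5 px in x between frames; predict it keeps moving.
print(predict_box((10, 20, 40, 30), (15, 20, 40, 30)))  # -> (20, 20, 40, 30)
```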

Part 5: Simple Clustering Method
Exploit the properties of the LRF:
- An invalid scan is reported as 0 meters.
- If the difference between two consecutive scan readings is more than a threshold, they belong to different clusters.
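A sketch of this threshold clustering over one scan (the gap threshold value is an assumed parameter, not from the original):

```python
def cluster_scan(ranges, gap=0.5):
    """Split consecutive LRF range readings into clusters.
    Readings of 0 m are invalid and are skipped; a jump larger than
    `gap` between consecutive valid readings starts a new cluster."""
    clusters, current = [], []
    prev = None
    for i, r in enumerate(ranges):
        if r == 0.0:  # invalid scan
            continue
        if prev is not None and abs(r - prev) > gap:
            clusters.append(current)
            current = []
        current.append((i, r))
        prev = r
    if current:
        clusters.append(current)
    return clusters

# Two nearby surfaces at ~2 m and ~5 m, with one invalid reading.
scan = [2.0, 2.1, 0.0, 2.2, 5.0, 5.1]
print([[i for i, _ in c] for c in cluster_scan(scan)])  # -> [[0, 1, 3], [4, 5]]
```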

Part 6: Tracking the Laser Data
Assumption: an object in the new frame should not have moved far from its position in the previous frame.
(Figure: green line marks the previous position; object candidate.)
Video: http://youtu.be/zZVkiQCUFuc
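Under that assumption, tracking reduces to picking, in the new frame, the cluster centroid nearest to the previous position (a sketch; `max_move` is an assumed gating distance):

```python
import math

def track_nearest(prev_pos, centroids, max_move=1.0):
    """Return the cluster centroid closest to the previous object
    position, or None if every candidate moved farther than `max_move`."""
    best, best_d = None, max_move
    for c in centroids:
        d = math.dist(prev_pos, c)
        if d < best_d:
            best, best_d = c, d
    return best

# The first cluster barely moved; the second is a different object.
print(track_nearest((1.0, 2.0), [(1.2, 2.1), (4.0, 0.0)]))  # -> (1.2, 2.1)
```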

Merging the LRF Tracking and Image Tracking
Detection pipeline: the laser data are clustered, filtered, and registered to the camera using the calibration data; the resulting candidates are used to choose car clusters, which are classified by the SVM, and the highest-confidence region gives the detected car regions.
Tracking loop: pick the LRF cluster closest to the predictor. If a cluster is picked, track it with the laser (LMS tracking adds the tracked cluster) and update the predictor; otherwise, fall back to image tracking, use the image detection as reference, and pick the corresponding LRF cluster.

Output Data from the Tracking Program
The cluster length is added to the output. Output fields:
box_x, box_y, box_w, box_h, laser_x, laser_y, length
(Figure: visualization of the laser data.)

Mean-Shift Tracking Model
1. Compute the histogram of the model patch and normalize it.
2. Observation: change each pixel value of the observed image to the value of its corresponding histogram bin, producing the back-projection image.
3. Run mean-shift on the back-projection image.
(Figures: histogram example; back-projection image example.)
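A grayscale sketch of the back-projection step, under the assumption of pixel values in [0, 256) (the original uses an HSV histogram, but the mechanism is the same):

```python
import numpy as np

def backproject(model_patch, image, bins=8):
    """Build a normalized histogram of the model patch, then map every
    image pixel to the value of its histogram bin."""
    hist, _ = np.histogram(model_patch, bins=bins, range=(0, 256))
    hist = hist / hist.max()                                  # normalize to [0, 1]
    idx = np.clip(image // (256 // bins), 0, bins - 1).astype(int)
    return hist[idx]

patch = np.full((4, 4), 40, dtype=np.uint8)                   # model is uniformly dark
img = np.array([[40, 200], [41, 250]], dtype=np.uint8)
print(backproject(patch, img))  # model-like pixels -> 1.0, others -> 0.0
```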

Mean-Shift
Find the center of gravity of the values inside a given window, move the window there, and repeat until convergence.
(Figures: iteration #1; iteration #2.)
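The iteration can be sketched as repeatedly recentering a window on the center of gravity of the back-projection values it covers (a minimal version; parameter values are assumptions):

```python
import numpy as np

def mean_shift(bp, window, max_iter=20, eps=0.5):
    """Shift an (x, y, w, h) window toward the center of gravity of the
    back-projection image `bp` until the move is below `eps` pixels."""
    x, y, w, h = window
    for _ in range(max_iter):
        roi = bp[y:y + h, x:x + w]
        m00 = roi.sum()
        if m00 == 0:
            break
        cx = (roi.sum(axis=0) * np.arange(w)).sum() / m00  # m10 / m00 in the window
        cy = (roi.sum(axis=1) * np.arange(h)).sum() / m00  # m01 / m00 in the window
        nx = int(round(x + cx - w / 2))
        ny = int(round(y + cy - h / 2))
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break
        x, y = nx, ny
    return x, y, w, h

# A blob of mass near (x~13.5, y~11.5); the window slides toward it.
bp = np.zeros((20, 20))
bp[10:14, 12:16] = 1.0
print(mean_shift(bp, (8, 8, 8, 8)))
```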

Mean-Shift in the Back-Projection Image
The mass center is computed using moments:

$m_{ij} = \sum_{x,y} I(x,y)\, x^i y^j$, where $\bar{x} = \frac{m_{10}}{m_{00}}$; $\bar{y} = \frac{m_{01}}{m_{00}}$

(Video: illustration of the mean-shift iterations on the back-projection image sequence.)