MPR Intersection Experiment


MPR Intersection Experiment Beijing, 2011 Spring Jointly by PKU and HEUDIASYC

MPR Intersection Experiment Motivation/Purpose

Scenario Design (figure: intersection layout with laser, camera and host vehicle positions; S–N axis)

Host V: POSS, carrying GPS/IMU, two Flea2 cameras composing a stereo pair, a horizontal laser scanner and an obliquely mounted laser scanner. More?

Scenario Design (figure: intersection map between Haidian Gymnasium (S) and Peking Univ. (N), near an overhead bridge and the ChangChun Dorm. of Peking Univ.) Laser1: SW, NPC4; Laser2: NW, NPC1; Laser3: NN, NPC5; Server: NPC7 & AP

Scenario Design (figure: host vehicle trajectories on the intersection map, between Haidian Gymnasium (S), Peking Univ. (N), the overhead bridge and the ChangChun Dorm. of Peking Univ.)

Intersection Experiment
Time and Place: 05/23/2011, 13:00-17:00; a T-shaped intersection west of PKU
Experiment Facilities: Sensors
Intersection: 3 ground lasers; 1 ground mono camera
Vehicle: 2 lasers, horizontal and obliquely downward at the front of the car; stereo camera (2 mono cameras); GPS

Cont. Experiment Facilities: Computers
Intersection: 3 laptops as laser data collection clients; 1 laptop as the laser data collection server
Vehicle: 1 PC for laser and GPS data collection; 1 PC for video data collection
Other Facilities: batteries, cables, wireless AP, patch boards, tripod, etc.

Experimental Procedure: Preparation
Vehicle Sensors: stereo camera, horizontal laser and oblique laser
Calibration (online): mono camera + horizontal laser, from correspondences <(u, v), (x, y, z=1.0)> between image pixels and laser points; the laser measures (x, y) on its scan plane, while z is unknown and fixed at 1.0 by convention
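Because the online calibration pairs image pixels (u, v) with horizontal-laser points constrained to a single plane, one 3x3 homography relates the two. A minimal sketch of estimating it with the standard DLT, assuming at least four well-spread correspondences (the function names are illustrative, not from the experiment code):

```python
import numpy as np

def fit_homography(laser_xy, pixels_uv):
    """Estimate a 3x3 homography H with pixel ~ H @ (x, y, 1) from
    >= 4 laser/pixel correspondences via the DLT (SVD null space)."""
    A = []
    for (x, y), (u, v) in zip(laser_xy, pixels_uv):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, x, y):
    """Map a laser point on the scan plane to pixel coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice the correspondences would come from land markers visible both in the camera image and in the laser scan; with noisy markers a normalized DLT or a RANSAC wrapper would be preferable.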

Cont. Intersection Sensors: Ground Lasers and Video Camera
The lasers must be horizontal
Time Synchronization
All computers should be in one network (the PC on POSS and the laptops at the intersection)
Do time synchronization
Calibration
Ground lasers: natural objects or land markers
Ground lasers + video camera: land markers
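The time-synchronization step could follow an NTP-style exchange over the shared network. A sketch under the assumption of a symmetric network delay between a laptop and the server (the timestamps and function names are hypothetical, not the protocol actually used in the experiment):

```python
def clock_offset(t0, t1, t2):
    """NTP-style offset of the server clock relative to the client,
    assuming a symmetric network delay: the client sends a request at
    t0 (client clock), the server stamps it t1 (server clock), and the
    reply arrives at t2 (client clock)."""
    return t1 - (t0 + t2) / 2.0

def to_server_time(local_ts, offset):
    """Shift a locally recorded sensor timestamp onto the server clock."""
    return local_ts + offset
```

Applying the offset to every laser and GPS timestamp puts all streams on the server's clock, which is what the later fusion steps need.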

Ground Laser + Video Calibration (figure: six numbered land markers, 1–6, in the laser and video views)

Ground Laser + Video Calibration

Experiment Procedure: Measurements
Start the measurements of the ground lasers and video camera.
Start the measurements of the on-vehicle lasers and cameras.
Let the host vehicle drive across the intersection.

Experiment Data: Intersection
Ground Laser Data: *.lms1, *.lms2, *.lms3
Ground Video: exp20110523_westgate_1.avi; exp20110523_westgate_1.log (start time of the video)
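Since the .log file stores only the start time of the video, matching a laser timestamp to a video frame needs the frame rate as well. A sketch assuming a constant frame rate (the 25 fps default is an assumption, not a recorded parameter of the cameras):

```python
def frame_index(laser_ts, video_start_ts, fps=25.0):
    """Index of the video frame nearest to a (synchronized) laser
    timestamp, assuming the video runs at a constant frame rate.
    Timestamps are in seconds on a common clock."""
    if laser_ts < video_start_ts:
        raise ValueError("timestamp precedes the video start")
    return round((laser_ts - video_start_ts) * fps)
```

This only works after the time-synchronization step, since the laser timestamps and the video start time must be on the same clock.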

Experiment Data: Vehicle
Lasers: *.lms1 (horizontal laser data); *.lms2 (oblique laser data)
Stereo Camera: *.avi (stereo video); *.txt (start time of the corresponding video)
GPS Data: *.pos

Experiment Data: Primary Data Processing
Time Synchronization: laser data, GPS data; correct the start time of the video
Offline Calibration: intersection ground laser + video
Laser and Video Data Fusion: use the calibration results to project laser points onto the video images (intersection and vehicle)
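Projecting laser points onto the video images with the calibration results amounts to a standard pinhole projection. A sketch assuming the offline calibration yields an intrinsic matrix K and extrinsics (R, t) taking the laser frame to the camera frame (the interface is illustrative):

```python
import numpy as np

def project_points(pts_laser, K, R, t):
    """Project 3-D laser points (N x 3, laser frame) to pixel
    coordinates (M x 2) via pinhole projection with intrinsics K
    and extrinsics (R, t); points behind the camera are dropped."""
    cam = (R @ pts_laser.T).T + t          # laser frame -> camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of camera
    uvw = (K @ cam.T).T                    # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide
```

Overlaying the returned pixel coordinates on the corresponding (time-matched) video frame gives the fused view shown in the fusion slides.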

Experiment Data: On-Vehicle Data Fusion
Incorrect calibration result: we did not notice that the ground was not horizontal while doing the calibration.
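The tilt that spoiled the on-vehicle calibration can be checked after the fact by fitting a plane to laser returns from the ground. A sketch using an ordinary least-squares fit (the function name and interface are illustrative; a robust fit would be needed with clutter in the scan):

```python
import numpy as np

def ground_tilt_deg(pts):
    """Fit a plane z = a*x + b*y + c to ground points (N x 3) by least
    squares and return its tilt from horizontal, in degrees."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])       # plane normal (up to sign)
    cos_tilt = normal[2] / np.linalg.norm(normal)
    return np.degrees(np.arccos(cos_tilt))
```

A tilt noticeably above zero on the calibration site would have flagged the problem before the data collection run.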

Experiment Data: Intersection Data Fusion
The alignment is not very accurate.

Research Tasks
What perceptual algorithms do we study? TASK 1: Multimodal perception
Multimodal sensor fusion-based object detection, recognition and tracking
Navigable space detection (road geometry, boundaries, lanes)
Static object detection: signs, trees, facades
Moving/movable object detection and tracking: cars, cycles, pedestrians
Multimodal data constrained SLAM (GPS, GIS)
Data representation and vicinity dynamic maps
What knowledge do we need for reasoning? TASK 2: Reasoning and scene understanding
An open comparative dataset for testing cross-cultural robustness in traffic scene understanding
Learning for scene semantics and moving object behaviors
Traffic situation awareness with uncertainty, scene semantics and information fusion
Task 2.1: Navigable space detection (road geometry, boundaries, lanes)
Task 2.2: Static object detection: signs, trees, facades
Task 2.3: Moving/movable object detection and tracking: cars, cycles, pedestrians
Task 2.4: On-road SLAM