
Grand Challenge for Computer Science DARPA AGV Race 2005 Srini Vason

Web Resources

AGV Grand Challenge - Oct 8, 2005
Traverse a distance of 200 miles through rough terrain from Barstow, CA to Las Vegas, NV in 10 hours or less using an autonomous ground vehicle. No contact with the vehicle is allowed, and a single failure in waypoint following results in disqualification. The route is given only two hours before the start of the race. Best result so far is 7 miles by CMU in 20 min.

GC March 13, 2004
- Waypoint file - spreadsheet
- Video of waypoints created from aerial photographs and overlaying waypoints
- Video of vehicles at the starting line (15 vehicles started)
- Videos of cyberrider AGV under manual operation in the Barstow, CA OHA

[Figure: Sensors and their Locations on the Autonomous Ground Vehicle (AGV)]
Legend: Ti - trinocular vision cameras; Pi - panorama cameras; Si - ultrasound sensors (sonar); Ri - radar; ODi - odor and dust sensors; WGHi - water depth and ground hardness sensors; IRi - infrared sensors.

[Diagram: Real-time Integrated Sensors and GN&C Processing]
Inputs: video image streams, radar and sonar data streams, ladar data, GPS sensor data and map data, GPS error updates, OD and WGH sensor data, compass data, wind and environment data, vehicle sensor data, RDDF, E-stop, and run/pause/stop commands.
Processing: image stabilization, image-to-road-model correspondence (road models database), stationary obstacle detection, moving object detection, obstacle tracking, other-vehicle tracking, path planning and path tracking, steering/brake/cruise-control correction computation, and actuator control data determination.
Output: commands to the AGV actuators.

[Diagram: Fusion architecture] Ladar, radar, GPS, digital compass, vision, and wheel encoders feed the fusion module, which uses a map database and a road model database; the driver module takes the waypoint list and pause/run/stop commands.

[Diagram: AGV State Diagram]
States: Start, Initialize, Ready, Running/moving, Pause, Stopped, Powerdown, Diagnostics, Fault.
Events/transitions: Ignition_on, Poweron, Boot_failure, Getset (start), Cruise (run), Pause, Stop, Shutdown, E_pause, E_stop.

Challenges
- Low-cost navigation sensors (GPS, vision, hardness detector, compass, ladar, radar, sonar): construction, calibration, and maintenance
- Sensor processing in real time (a few hundred TOPS) and stabilization of sensors
- Synchronization of diverse sensor processing systems and hierarchical data fusion
- Ambient (surrounding) awareness framework
- Automatic steering, throttle, and brake control

Challenges (contd.)
- Stop-and-go adaptive cruise control
- Elimination of shadows from video images in real time after filtering, rectification, and LoG filtering
- 3D terrain profile construction from diverse sensors in real time
- Path finding between waypoints, path following, lane marker detection, and lane following
- Perceptive passing
- Stationary vegetation and solid obstacle detection on roads and trails
- Moving object detection and collision avoidance

Challenges (contd.)
- Pothole, gutter, washboard, barbed wire, and fence post detection on trails
- Scene analysis to adjust gazing angle for sensors
- Cliff, canyon, switchback, and hill detection
- Fault detection and recovery in real time
- AGV survival in harsh environments (rough roads, stream crossings, blowing wind and sand)
- Experimental setup, testing, and measurement in harsh environments
- Navigation during switchbacks, hill climbing, and descending

Road / Trail Following - cases
- Path or trail following
- Trail center line following
- Road following (right side of road)
- Road following with yellow divider line
- Sharp turns in roads
- Switchbacks in roads
- Road following in rolling hills

[Figure: Sensors and their Locations on the ROBOT]
Legend: T5, T6 - edge detection cameras; T7 - stereo vision cameras; C3, C4 - edge detection cameras; Si - ultrasound sensors (sonar); Ri - radar; Wi - wheel speed sensors; IRi - ladar/laser sensors; DGPS - differential GPS; DC - digital compass.

Experiments
- Robots with sensors attached to them - DREAMbot
- Campus-level experiments using robots

[Figure: Sensors and their Locations on the Autonomous Ground Vehicle (AGV), repeated from the earlier slide]

Video Cameras
- Low-cost camera array
- Use it for edge detection
- Use it for depth determination using stereo cameras
- 3D object recognition
- 3D terrain profile construction
- Surrounding construction
- Predictive passing

[Figure: Cameras used in Edge Detection - front view of robot and road coverage by cameras]
L-IBot camera (left), R-IBot camera (right), C-PtGrey camera (center), and ladar.

Edge Detection - input/output
Edge detection is implemented in the vision module. The input to edge detection is an image grabbed by the camera. The color image (.ppm) is converted to a grayscale image (.pgm), and the .pgm image is then processed with the edge detection algorithm. The output is either a binary image, in which 1 marks places containing an edge and 0 marks places containing no edge, or a packet containing edge vectors represented by their end points. The output image (compressed or vectorized) is sent to the fusion module (cruise control node) in a UDP packet by a Unix program that outputs UDP packets 10 times a second.
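A minimal sketch of this input/output path, assuming OpenCV and NumPy are available; the file name, fusion-node address, and bit-packing scheme are illustrative stand-ins, not the values used on the actual vehicle.

```python
import socket
import time

import cv2
import numpy as np

FUSION_ADDR = ("127.0.0.1", 9000)   # assumed address of the fusion (cruise control) node

def edge_packet(ppm_path):
    """Grab a color frame (.ppm), convert to grayscale, run edge detection,
    and pack the binary edge image into a byte string for one UDP packet."""
    color = cv2.imread(ppm_path)                    # color image (.ppm)
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)  # grayscale (.pgm equivalent)
    edges = cv2.Canny(gray, 50, 150)                # 255 where an edge was found, 0 elsewhere
    binary = (edges > 0).astype(np.uint8)           # 1 = edge, 0 = no edge
    return np.packbits(binary).tobytes()            # simple bit-packed "compressed" form

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:                                         # emit packets 10 times a second
    sock.sendto(edge_packet("frame.ppm"), FUSION_ADDR)
    time.sleep(0.1)
```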

Traversal Grid (0.75 m x 25 m)
A traversal grid is formed in front of the ROBOT for navigation. The grid is divided into an array of 160 rows and 12 columns at the bottom and 120 rows and 6 columns at the top. Each element in the lower part of the array represents a 6.25 cm square; each element in the upper part represents a 12.5 cm square. Each element in the array gets a 1 if an edge is found, and a 0 if no edge is found, in the binary image constructed from the compressed or vectorized data output by an edge detection video camera.

[Figure: Traversal grid geometry - 0.75 m wide and 25 m long; the near portion uses 6.25 cm x 6.25 cm squares and the far 15 m uses 12.50 cm x 12.50 cm squares]
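A sketch of how the two-resolution traversal grid could be represented, using the dimensions from the slide (160 x 12 cells of 6.25 cm near the robot, 120 x 6 cells of 12.5 cm farther out); the array layout and the cell-marking routine are illustrative assumptions.

```python
import numpy as np

# Lower (near) part: 160 rows x 12 cols of 6.25 cm cells -> 10 m x 0.75 m
# Upper (far)  part: 120 rows x  6 cols of 12.5 cm cells -> 15 m x 0.75 m
lower = np.zeros((160, 12), dtype=np.uint8)
upper = np.zeros((120, 6), dtype=np.uint8)

def mark_edge(distance_m, lateral_m):
    """Set the grid cell covering a detected edge point.
    distance_m: forward distance from the robot (0..25 m),
    lateral_m:  offset from the grid's left boundary (0..0.75 m)."""
    if distance_m < 10.0:                       # near field, 6.25 cm cells
        row = int(distance_m / 0.0625)
        col = int(lateral_m / 0.0625)
        lower[min(row, 159), min(col, 11)] = 1
    elif distance_m < 25.0:                     # far field, 12.5 cm cells
        row = int((distance_m - 10.0) / 0.125)
        col = int(lateral_m / 0.125)
        upper[min(row, 119), min(col, 5)] = 1
```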

Navigation Scenario (waypoints outside trail)
The robot should pass through each waypoint and stay within the lateral boundary (LB) specified by the circle. The waypoint may not be in the road; in that case the robot should go along the road and stay within the LB area (which will be regarded as passing the waypoint). Generally the road is straight between every two neighboring waypoints, but the robot should still not go off the road.
[Figure: waypoints (X) along a road and the robot's path]

Some tips on using intermediate points
- Determine a road or trail near the two waypoints to be traversed.
- If the map database contains intermediate points for traversing between the waypoints, use them.
- If no intermediate points are found in the map database, follow the road and stay within the LB.
- Otherwise, traverse by staying within the LB (which might require leaving the road).
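These tips can be read as a simple decision procedure. A hedged sketch follows; the map-database interface (nearest_road, intermediate_points) and the returned mode labels are invented for illustration and are not part of the original design.

```python
def plan_segment(wp_a, wp_b, map_db):
    """Decide how to drive from waypoint wp_a to wp_b.
    map_db is assumed to offer lookups for nearby roads and stored
    intermediate points; both methods are hypothetical."""
    intermediates = map_db.intermediate_points(wp_a, wp_b)
    if intermediates:                    # stored intermediate points exist: use them
        return {"mode": "intermediate_points", "points": intermediates}
    road = map_db.nearest_road(wp_a, wp_b)
    if road is not None:                 # otherwise follow the road, staying within the LB
        return {"mode": "road_following", "road": road}
    # last resort: head for the waypoint directly, staying inside the lateral boundary
    return {"mode": "direct_within_lb"}
```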

Navigation Algorithm using Road Edge
The fusion module receives UDP packets from a vision node. It decompresses the packets and reconstructs the binary edge image matrix. Apply the traversal grid:
– Select the partial matrix in the edge matrix based on the current heading. The partial matrix has double the width of the traversal grid (1.5 m wide, 25 m long).
– The inner matrix represents the traversal grid (0.75 m wide) and has the same length as the partial matrix. It should be free of obstacles for the robot to move forward.
Check if there is a 1 in the inner matrix. If so, there is an obstacle in front and the robot needs to go around it using steering control.
[Figure: image area, partial matrix, and inner matrix extending along the current heading]
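A sketch of the partial/inner matrix selection and the obstacle check described above, assuming the binary edge matrix has already been reconstructed from the UDP packets; the column geometry and the heading-column parameter are illustrative.

```python
import numpy as np

def select_partial_matrix(edge_matrix, heading_col, grid_cols=12):
    """Cut the partial matrix (twice the traversal-grid width, 1.5 m) out of
    the edge matrix, centered on the column of the current heading."""
    left = max(heading_col - grid_cols, 0)
    return edge_matrix[:, left:left + 2 * grid_cols]

def inner_matrix(partial, grid_cols=12):
    """The inner matrix is the traversal grid itself, centered in the partial matrix."""
    offset = (partial.shape[1] - grid_cols) // 2
    return partial[:, offset:offset + grid_cols]

def obstacle_ahead(partial, grid_cols=12):
    """A 1 anywhere in the inner matrix means something blocks the path ahead."""
    return bool(inner_matrix(partial, grid_cols).any())
```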

Algorithm (contd.)
If there is a '1' in the inner matrix but none in the rest of the partial matrix, rotate the inner matrix (left or right) just enough to avoid the cell containing the '1'. This is an obstacle detection step to see if there is room for avoiding the obstacle. (Ignore points outside the partial matrix when rotating.)
If the rotated inner matrix still resides in the partial matrix, the robot makes a turn by the same angle the inner matrix was rotated. If not, slow the robot (for further detection and making new decisions).
If there is no '1' in the inner matrix, proceed straight without changing the steering angle.

Algorithm (contd.)
If an edge is detected in the partial matrix, the robot should make a slight turn to keep away from the edge.
[Figure: detected edge relative to the current heading]
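Combining the last three slides, the avoidance decision could be sketched as follows. The "rotation" of the inner matrix is approximated here as a lateral shift of the inner window within the partial matrix, which is an assumption rather than the authors' exact geometry; the shift limit is also illustrative.

```python
def avoidance_decision(partial, grid_cols=12, max_shift=6):
    """Return ('straight', 0), ('turn', shift), or ('slow', 0) from the partial matrix.
    A positive shift means steer right, a negative shift means steer left."""
    offset = (partial.shape[1] - grid_cols) // 2
    if not partial[:, offset:offset + grid_cols].any():
        return ("straight", 0)               # nothing in the inner matrix: keep heading
    # try shifting the inner window left/right to find an obstacle-free corridor
    for shift in range(1, max_shift + 1):
        for s in (-shift, shift):
            lo = offset + s
            if 0 <= lo and lo + grid_cols <= partial.shape[1]:
                if not partial[:, lo:lo + grid_cols].any():
                    return ("turn", s)       # turn by the same amount the window was shifted
    return ("slow", 0)                       # no free corridor: slow down and re-evaluate
```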

Case discussion
The robot starts at WPT1 and goes forward to the next waypoint (WPT2). The vision module needs to look ahead (toward WPT3) to check the condition of the trail (or to check the intermediate point to the current target waypoint). The look-ahead allows calculation of the expected turning angle when reaching the current target waypoint, and it also gives a hint that the road will turn. When the angle is calculated, the robot switches to one of the following cases:
- The front is clear and the next waypoint is far away: go straight.
- < 1.7°: make a small correction to the direction.
- 1.7° to 5°: slow down and make a medium correction to the direction.
- 5° to 10°: slow down and prepare for a turn in the road.
- 10° to 20°: go really slow and prepare for a sharp turn.
- 20° to 90°: stop and prepare for a left or right turn.
- Switchback in road:
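The angle bands above map directly onto a small dispatch function; a sketch, with illustrative threshold handling and action labels. The final branch, which treats angles beyond 90° as the truncated "switchback in road" case, is an assumption.

```python
def heading_case(angle_deg, front_clear, waypoint_far):
    """Map the expected turning angle at the next waypoint to a driving action."""
    a = abs(angle_deg)
    if front_clear and waypoint_far:
        return "go_straight"
    if a < 1.7:
        return "small_correction"
    if a < 5:
        return "slow_medium_correction"
    if a < 10:
        return "slow_prepare_turn"
    if a < 20:
        return "very_slow_sharp_turn"
    if a <= 90:
        return "stop_prepare_left_or_right_turn"
    return "switchback_handling"     # assumed: treat >90 degrees as a switchback in the road
```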

Special situations - preprocessing
The edge detection algorithm should filter out small objects in the road by Wiener filtering the image. If there is a big object in the road, the robot may first check whether it is a road turn: it may check whether there is an edge in the partial area or whether there are hints contained in the waypoints. If it is a road turn, the robot slows down and looks to both sides to find the road. Otherwise, it should prepare to avoid the object.
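A small preprocessing sketch using SciPy's Wiener filter to suppress small objects and noise before edge detection; the 5x5 window size is illustrative, not a value from the slides.

```python
import numpy as np
from scipy.signal import wiener

def preprocess(gray):
    """Wiener-filter the grayscale image so small objects and noise in the road
    are smoothed out before edge detection."""
    filtered = wiener(gray.astype(np.float64), (5, 5))   # 5x5 adaptive Wiener window
    return np.clip(filtered, 0, 255).astype(np.uint8)

# usage sketch (assumes OpenCV): edges = cv2.Canny(preprocess(gray_frame), 50, 150)
```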

Vision Module Design Steps (Edge Camera)
1. Acquire the image.
2. Preprocess the image: filter out small objects, shadows, and trail markings.
3. Apply an edge detection scheme (Canny or Sobel).
4. Compress the binary image, packetize, and communicate to the fusion module.

Vision Module Design Steps (Center Camera)
1. Acquire the image.
2. Enhance the path edge.
3. Enhance the center line.
4. Preprocess the image: filter out small objects, shadows, and trail markings (make sure that the center line shows clearly).
5. Apply an edge detection scheme (Canny or Sobel).
6. Compress the binary image, packetize, and communicate to the fusion module.

Data Fusion using CMV
In the fusion module, data from the different sensors are merged using the confidence measure vector (CMV) at each waypoint or intermediate point.
– Initialize the confidence measure vector.
– Merge values for the traversal grid (edge detection, center line).
– Follow the center line (the center camera gets a high confidence measure).
– Follow the path (medium value for the center camera and high value for edge detection).

Certainty Matrix (CM)
Points in the traversal grid are covered by different sensors, and the certainty of the measurements at the grid points changes for each sensor. The certainty matrix specifies how certain each measurement at the grid points is. Each sensor has a certainty matrix for the partial matrix. The certainty value can range from 0 to 255: values 0 to 20 indicate low certainty and values 200 to 255 indicate high certainty in the measurements.

CM for T5 (left camera)
The camera's sweet spot is between rows 11 and 100.
- Rows 1 to 10: cols. 1 to 6: 150; cols. 7 to 12: 50
- Rows 11 to 100: cols. 1 to 6: 200; cols. 7 to 12: 70
- Rows 101 to 130: cols. 1 to 6: 150; cols. 7 to 12: 50
- Rows 131 to 160: cols. 1 to 6: 20; cols. 7 to 12: 20
- Rows 161 to 200: cols. 1 to 6: 10; cols. 7 to 12: 0
- Rows 201 to 280: cols. 1 to 6: 5; cols. 7 to 12: 0
All rows and columns to the left of column 1 in the partial matrix have the value 150.

CM for T6 (right camera)
The camera's sweet spot is between rows 11 and 100.
- Rows 1 to 10: cols. 1 to 6: 50; cols. 7 to 12: 150
- Rows 11 to 100: cols. 1 to 6: 70; cols. 7 to 12: 200
- Rows 101 to 130: cols. 1 to 6: 50; cols. 7 to 12: 150
- Rows 131 to 160: cols. 1 to 6: 20; cols. 7 to 12: 20
- Rows 161 to 200: cols. 1 to 6: 0; cols. 7 to 12: 10
- Rows 201 to 280: cols. 1 to 6: 0; cols. 7 to 12: 5
All rows and columns to the right of column 12 in the partial matrix have the value 150.

CM for T7 (center camera)
The camera's sweet spot is between rows 11 and 100. The stereo cameras also allow depth map calculation.
- Rows 1 to 10: cols. 3 to 10: 150; rest of the cols.: 50
- Rows 11 to 100: cols. 3 to 10: 200; rest of the cols.: 70
- Rows 101 to 130: cols. 3 to 10: 150; rest of the cols.: 50
- Rows 131 to 160: cols. 3 to 10: 20; rest of the cols.: 20
- Rows 161 to 200: cols. 3 to 10: 10; rest of the cols.: 0
- Rows 201 to 280: cols. 3 to 6: 5; rest of the cols.: 0
All rows and columns to the left of column 1 and to the right of column 12 in the partial matrix have the value 0.
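The row/column bands above can be encoded directly as an array. A sketch of the T5 (left camera) certainty matrix follows; the 280 x 12 shape simply follows the row and column ranges quoted on the slides and is otherwise an assumption.

```python
import numpy as np

def certainty_matrix_t5():
    """Certainty matrix for the left edge camera (T5), built from the row/column
    bands on the slide; indices are 0-based versions of the 1-based slide values."""
    cm = np.zeros((280, 12), dtype=np.uint8)
    bands = [
        (0, 10, 150, 50),      # rows 1-10
        (10, 100, 200, 70),    # rows 11-100 (the camera's sweet spot)
        (100, 130, 150, 50),   # rows 101-130
        (130, 160, 20, 20),    # rows 131-160
        (160, 200, 10, 0),     # rows 161-200
        (200, 280, 5, 0),      # rows 201-280
    ]
    for r0, r1, left_val, right_val in bands:
        cm[r0:r1, 0:6] = left_val    # columns 1-6
        cm[r0:r1, 6:12] = right_val  # columns 7-12
    return cm
```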

Merging Data from Sensors
- For each sensor, use the CM to calculate values for each point in the traversal grid.
- Form the data structure MTA, merged_traversal_array(k, i, j), which contains, for the kth sensor, the value for the square on the ith row and jth column of the 0.75 m x 25 m traversal grid. The value can be 0 to 255.
- Find the CMV for the current waypoint traversal by consulting the waypoint database, along with the threshold value (T).
- Calculate the goodness measure for the traversal grid using the CMV. This calculation is shown as NMTA.

Normalized MTA (NMTA)
Let CMV = [v1, v2, ..., vn], where n is the total number of sensors.
MTA[l, i, j] = v_l * MTA[l, i, j]
NMTA[i, j] = (1 / (255 * n)) * Σ_l MTA[l, i, j]
Use the threshold T on each element of NMTA to come up with a binary version of NMTA.
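A direct NumPy rendering of the CMV weighting, normalization, and thresholding above; the sensor count, threshold value, and array shapes in the usage line are illustrative.

```python
import numpy as np

def normalized_mta(mta, cmv, threshold):
    """mta: array of shape (n_sensors, rows, cols) with values 0..255.
    cmv: confidence measure vector of length n_sensors.
    Returns the binary NMTA after weighting, normalizing, and thresholding."""
    cmv = np.asarray(cmv, dtype=float)
    n = cmv.shape[0]
    weighted = cmv[:, None, None] * mta           # MTA[l,i,j] = v_l * MTA[l,i,j]
    nmta = weighted.sum(axis=0) / (255.0 * n)     # NMTA[i,j] = (1/(255*n)) * sum_l MTA[l,i,j]
    return (nmta >= threshold).astype(np.uint8)   # apply threshold T

# usage sketch:
# binary = normalized_mta(np.zeros((3, 280, 12)), [1.0, 0.5, 0.8], 0.3)
```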

Fusion Module Design (path following) (1)
The left camera gets camera data input:
1. Overlay the camera axes (XY for the camera coordinate frame and X'Y' for the robot coordinate frame).
[Figure: camera heading (XY axes) versus robot heading (X'Y' axes)]

Fusion Module Design (path following) (2)
2. Overlay the path as determined by GPS coordinates (the desired path). That is, given the current GPS coordinates and compass reading, locate the target waypoint position and calculate the correct desired heading.
[Figure: current heading versus desired heading in the X'Y' robot frame]
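Step 2 amounts to a bearing computation. A sketch, assuming the GPS gives latitude/longitude in degrees and the compass gives the current heading clockwise from north; the function names are illustrative.

```python
import math

def desired_heading(cur_lat, cur_lon, wpt_lat, wpt_lon):
    """Bearing (degrees clockwise from north) from the current GPS fix to the target waypoint."""
    lat1, lat2 = math.radians(cur_lat), math.radians(wpt_lat)
    dlon = math.radians(wpt_lon - cur_lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def heading_error(compass_deg, cur_lat, cur_lon, wpt_lat, wpt_lon):
    """Signed difference between desired and current heading, wrapped to [-180, 180)."""
    err = desired_heading(cur_lat, cur_lon, wpt_lat, wpt_lon) - compass_deg
    return (err + 180.0) % 360.0 - 180.0
```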

Fusion Module Design (path following) (3)
3. Overlay the grid boundary (inner matrix).
4. Overlay twice the grid boundary (partial matrix).
[Figure: inner and partial matrix boundaries overlaid on the current and desired headings]

Fusion Module Design (path following) (4)
5. Look for obstacles in the grid.
   a. Ignore shadows and path markings (e.g. "bikes", "no bikes").
   b. Use ladar data.
   c. Merge data.
6. Check for interference with the road edge; apply a steering correction.
7. Turn: slow the robot and then make the turn. Start path following again.
8. Repeat steps 1 to 7 for each sample. This is useful when making a left turn.
[Figure: current heading versus desired heading]

Fusion Module Design (center line following) (1)
The input image comes from the center camera after edge detection. The center line is the path to be followed.
- Overlay the grid on the image.
- Look for interference between the partial matrix and the path edge.
- Steering correction: line up with the center line.
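A sketch of the lining-up step: estimate the detected center line's column in the rows closest to the robot and issue a proportional steering correction. The gain, the number of near rows, and the pixel-to-degree assumption are illustrative, not values from the slides.

```python
import numpy as np

def centerline_correction(edge_img, gain_deg_per_px=0.05):
    """edge_img: binary image from the center camera after edge detection.
    Returns a steering correction in degrees (positive = steer right)."""
    near_rows = edge_img[-40:, :]                 # look at the rows closest to the robot
    cols = np.nonzero(near_rows)[1]
    if cols.size == 0:
        return 0.0                                # no center line visible: hold heading
    line_col = float(np.median(cols))             # robust estimate of the line's column
    offset_px = line_col - edge_img.shape[1] / 2.0
    return gain_deg_per_px * offset_px            # proportional correction toward the line
```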

Fusion Module Design (path following) (5)
Right camera:
– Do the same sequence of steps as for the left camera.
– This is useful when making a right turn.