GCAPS Team Design Review
CPE 450, Section 1
January 22, 2008
Nick Hebner, Kooper Frahm, Ryan Weiss
Introduction ► General Problem Statement In order to create an autonomous transportation system, both position and orientation information is required. This project entails the development of a reliable position data acquisition system and a means of translating that data into useful state information. The vehicle must self-localize based on its knowledge of its surroundings.
Introduction ► Client: Chris Clark
Cal Poly professor
Specialized experience with Artificial Intelligence, Autonomous Mobile Robots, and Multi-Robot Systems
Founded the Cal Poly Autonomous Transportation System Project (CPATS)
Introduction ► Related Work
Precise Vehicle Localization Using Multiple Sensors and Natural Landmarks ► GPS, GIS mapping, visual landmarks, Kalman filter
A Visual Positioning System for Vehicle or Mobile Robot Navigation ► Stereo vision, feature extraction/tracking
2D Map-Building and Localization in Outdoor Environments ► Laser landmark detection, Kalman filter, GPS
Formal Product Definition ► Need Statement Current vehicles are often prone to failure when their operators make mistakes. Of the many possible systems that could aid operators, one of the most basic and important is a self-localization system that models the state of the vehicle at all times. Ultimately, transportation systems may be operated entirely by computers, providing a safer and more efficient ride. Among the requirements for such a system is the ability to know its current position and use that information to transport people efficiently.
Formal Product Definition ► Objectives
Self-Localization. The system needs to know where it is at all times in relation to its surroundings.
Sensor and Image-Processing Algorithm. The system will implement a mechanism for making decisions and signaling when changes are necessary. The algorithm must process data received from the image processing unit in conjunction with data from the wheel encoder and translate it into meaningful action.
State Estimation. The system needs to track its position (x, y) and heading (θ) around a small loop while providing live feedback; a minimal sketch of this state follows below.
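A minimal sketch of the state being estimated, as a plain data structure; the field names and units are illustrative assumptions, not taken from the project code.

```python
from dataclasses import dataclass
import math

@dataclass
class VehicleState:
    """Pose of the cart in the plane (assumed units: metres and radians)."""
    x: float = 0.0      # position along the x axis
    y: float = 0.0      # position along the y axis
    theta: float = 0.0  # heading, counter-clockwise from the +x axis

    def distance_to(self, other: "VehicleState") -> float:
        """Straight-line distance between two poses, ignoring heading."""
        return math.hypot(other.x - self.x, other.y - self.y)
```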
Formal Product Definition ► Requirements Marketing ► The system should be able to distinguish unique markings to help with self-localization. ► The system should be able to determine the distance traveled between any given markings, as well as the total distance traveled. ► The system should be fully autonomous, meaning it is run entirely by a machine in conjunction with a computer, without any human interaction.
Formal Product Definition ► Requirements Engineering ► The system will use a Particle Filter (fusion system) to make positional determinations based on image processing and wheel encoding. ► The system must support image detection as a primary means of determining position. ► The system must support wheel encoding to provide a distance and vector change to all possible “particles” generated by the Particle Filter.
Formal Product Definition ► Requirements Engineering (cont.) ► The system will have an onboard computer that controls all functions of golf-cart operation. ► The system will have an onboard camera with autofocus. ► The system will have wheel encoders, both incremental and absolute, to provide distance and angle measurements, respectively.
Formal Product Definition ► Requirements Constraints ► A vision system is to be used to detect visual landmarks that are used by the fusion system in determining position. ► A second sensor system (encoding) needs to provide distance and angle measurements for all particles in the system. ► Position must be known within an accuracy of 10 cm. ► The system must navigate and determine its position while moving around a circle.
Formal Product Definition ► Criteria
Positional detection error (smaller = better)
Frequency of positional updates (higher = better)
Computational time (lower = better)
Artificial landmark dimensions (smaller = better)
Deployment cost, time (smaller = better)
Deployment cost, money (smaller = better)
Production cost (smaller = better)
Design ► Overview of Team Process
Problem Statement
Requirements Specification
Design
Implementation
Testing
Maintenance
Final Report
Design ► System Architecture
Design ► Particle Filter The fusion system that combines data from the image and encoding systems to determine location. ► Odometry The system that measures the distance and angle of the cart by dead reckoning and passes the data to the fusion algorithm. Dead reckoning estimates the current position by advancing a known previous position by a heading and a distance traveled.
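A minimal sketch of the dead-reckoning update described above, assuming the odometry system reports a travelled distance and a heading change per step; the function name and the midpoint-heading approximation are illustrative choices, not the project's actual implementation.

```python
import math

def dead_reckon(x, y, theta, distance, delta_theta):
    """Advance a previous pose (x, y, theta) by a travelled distance and heading change.

    Uses the midpoint heading so short arcs are approximated a little better
    than stepping with the old heading alone.
    """
    mid_heading = theta + delta_theta / 2.0
    x_new = x + distance * math.cos(mid_heading)
    y_new = y + distance * math.sin(mid_heading)
    return x_new, y_new, theta + delta_theta
```

Each particle in the filter can be advanced with the same update, using noisy copies of the measured distance and heading change.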
Design ► Image Processing The system that takes images and uses a technique called Principal Component Analysis (PCA) to determine position and sends the data to the fusion system. The pose of a landmark is estimated from its similarity to known landmark poses in the training images.
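A rough sketch of this appearance-based matching step, assuming the training images have already been flattened into rows of a matrix and that each row has a known landmark pose; the function names and the plain SVD-based PCA are assumptions, not the project's actual code.

```python
import numpy as np

def build_eigenspace(training, n_components=20):
    """training: (num_images, num_pixels) array of flattened, equal-size images."""
    mean = training.mean(axis=0)
    centred = training - mean
    # Right singular vectors of the centred data are the principal components.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]          # (n_components, num_pixels)
    coords = centred @ basis.T         # training images projected into the eigenspace
    return mean, basis, coords

def estimate_pose(image, mean, basis, coords, poses):
    """Project a live image into the eigenspace and return the pose of the
    closest training image (nearest neighbour by Euclidean distance)."""
    proj = (image.ravel() - mean) @ basis.T
    best = int(np.argmin(np.linalg.norm(coords - proj, axis=1)))
    return poses[best]
```

A refinement could interpolate between the two closest training poses, but the nearest-neighbour version above illustrates the idea.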
Design ► Hardware Design The camera is interfaced to the computer by USB. Images are captured from the camera using OpenCV. The encoder is interfaced to a microcontroller which constantly polls for changes. Updates are sent at intervals to the host.
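A minimal sketch of the camera side of this setup using OpenCV's Python bindings; the device index and file name are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)      # assumed device index for the USB camera
if not cap.isOpened():
    raise RuntimeError("could not open the USB camera")

ok, frame = cap.read()         # grab one frame as a BGR image array
if ok:
    cv2.imwrite("frame.png", frame)  # save the frame for inspection
cap.release()
```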
Design ► Software Algorithms PCA – used for determining position from pose estimation. ► Available through OpenCV and/or Matlab. Dead Reckoning ► The algorithm uses distance traveled and wheel angle to calculate changes in position. ► Wheel angle -> Turning Radius
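A sketch of the wheel-angle-to-turning-radius relation noted above, using a simple bicycle model; the wheelbase value is a placeholder, not the cart's measured dimension.

```python
import math

WHEELBASE_M = 1.65  # placeholder wheelbase in metres, not the cart's measured value

def turning_radius(steer_angle_rad):
    """Bicycle-model turning radius from the front-wheel steering angle."""
    if abs(steer_angle_rad) < 1e-6:
        return math.inf               # straight ahead: effectively infinite radius
    return WHEELBASE_M / math.tan(steer_angle_rad)

def heading_change(distance_m, steer_angle_rad):
    """Change in heading after travelling distance_m along the arc."""
    radius = turning_radius(steer_angle_rad)
    return 0.0 if math.isinf(radius) else distance_m / radius
```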
Design ► Software Algorithms (cont.) Particle Filtering (loop over time step t)
1. For i = 1 … N
2.   Pick x_{t-1}[i] from X_{t-1}
3.   Draw x_t[i] with probability p(x_t[i] | x_{t-1}[i], o_t)
4.   Calculate w_t[i] = p(z_t | x_t[i])
5.   Add x_t[i] to X_t^temp
6. For j = 1 … N
7.   Draw x_t[j] from X_t^temp with probability w_t[j]
8.   Add x_t[j] to X_t
Steps 1 through 5 are the prediction steps; steps 6 through 8 are the correction steps.
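A compact sketch of the loop above in Python, assuming a motion_model that samples from p(x_t | x_{t-1}, o_t) and a measurement_likelihood that evaluates p(z_t | x_t); both callables are hypothetical stand-ins for the project's actual models.

```python
import random

def particle_filter_step(particles, odometry, measurement,
                         motion_model, measurement_likelihood):
    """One prediction/correction cycle over N particles.

    particles: list of states x_{t-1}[i]
    motion_model(x_prev, odometry) -> a sampled x_t[i]
    measurement_likelihood(z, x)   -> the weight w_t[i]
    """
    # Prediction (steps 1-5): propagate each particle and weight it.
    predicted, weights = [], []
    for x_prev in particles:
        x_t = motion_model(x_prev, odometry)
        predicted.append(x_t)
        weights.append(measurement_likelihood(measurement, x_t))

    # Correction (steps 6-8): resample N particles with probability
    # proportional to weight (assumes at least one weight is nonzero).
    return random.choices(predicted, weights=weights, k=len(particles))
```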
Design ► Mechanical Design Wheel encoders must be fastened to the test vehicle for appropriate measurement. The mechanical design still requires work: the parts have not yet arrived, and the method of fastening the encoders and the camera has not been settled.
Design ► Components
Image Processing – receives image data from the camera and processes it into position information. Small markings will be used as landmarks for the system to locate itself and ensure proper operation.
Landmarks – small markings on the faces of curbs or on the street that the system uses to determine its location.
Wheel Encoder – provides mechanically derived data to the Particle Filter. The encoder counts the rotations of the wheel and accurately passes this information to the fusion algorithm. It will be mounted in one of the wheel hubs to enable accurate counting.
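A sketch of the tick-to-distance conversion implied by the wheel-encoder description; the counts-per-revolution and wheel radius are placeholder values, not measured ones.

```python
import math

TICKS_PER_REV = 360      # placeholder encoder resolution
WHEEL_RADIUS_M = 0.22    # placeholder wheel radius in metres

def ticks_to_distance(tick_count):
    """Convert an incremental encoder count into distance travelled, in metres."""
    revolutions = tick_count / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M
```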
Design ► Components (cont.)
Laptop PC – provides the processing capability for all of the necessary components. The platform will be an x86 architecture running a Linux operating system.
Processing Algorithm – the main processing of the system will be accomplished with a Particle Filter.
Test Plan
► PCA Test: synthesized images; input a live image and compare it to the correlated training image
► Straight-Line Test: initialize the golf cart's position; flat, featureless road; mid-day, typical lighting; no GUI; measure and compare
► Feature-Rich Roads: the road may contain natural features and deformations
► Curved and Circular Paths: realistic roads; U-turns
Budget and Justification ► The budget has not been determined, and therefore cannot be justified.
Bill of Materials
► Logitech QuickCam Pro (for notebooks): $57.06
► Quadrature Wheel Encoder Modules: $57.94
► Total: $115.00
References
► U. Scheunert, H. Cramer, and G. Wanielik, "Precise Vehicle Localization Using Multiple Sensors and Natural Landmarks."
► H.-Y. Lin and J.-H. Lin, "A Visual Positioning System for Vehicle or Mobile Robot Navigation."
► R. Madhavan and H. F. Durrant-Whyte, "2D Map-Building and Localization in Outdoor Environments."
► H. Murase and S. K. Nayar, "Visual Learning and Recognition of 3-D Objects from Appearance."
Gantt Chart