1 Vision-based Landing of an Unmanned Air Vehicle
Omid Shakernia, Department of EECS, UC Berkeley

2 Applications of Vision-based Control
UCAV X-45, Predator, Global Hawk, SR-71, Fire Scout
UCAVs will use vision for dogfights, localizing targets, etc.
Vision for takeoff / landing and obstacle avoidance
Formation control and relative positioning for mid-air refueling

3 Goal: Autonomous landing on a ship deck
Challenges: hostile environments, ground effect, pitching deck, high winds, etc.
Why vision? It is a passive sensor, and it observes relative motion.

4 Simulation: Vision in the loop

5 Vision-Based Landing of a UAV
Motion estimation algorithms: linear, nonlinear, and multiple-view; error: 5 cm translation, 4° rotation
Real-time vision system: customized software on off-the-shelf hardware
Vision in the control loop: landing on a stationary deck, tracking of a pitching deck

I will present a real-time vision system for pose estimation in the problem of landing a VTOL UAV onto a target. The vision system consists of off-the-shelf hardware and custom software, runs at the frame rate of 30 Hz, and, as the experiments will show, gives quite accurate results. After motivating the problem, I will describe the system hardware (an off-the-shelf camera, framegrabber, PC, etc.), the customized vision system software (image processing, feature extraction and correspondence matching, and linear/nonlinear motion estimation), and finally our flight test results. This project is part of a larger research effort at Berkeley that includes pursuit-evasion games, low-level helicopter control, and hybrid system synthesis.

6 Vision-based Motion Estimation
[Figure: pinhole camera, image plane, feature points on the landing target, and the current pose]
We use a calibrated pinhole camera model. Setup: the $x_i$ are measured image points and the $q_i$ are 3-D feature points (the corners of the landing target). We want to recover the current pose $[R\ p]$ from a set of correspondence matches. Note that the feature labeling problem is important here: each corner feature $x_i$ must be labeled according to its match $q_i$.
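For concreteness, the calibrated pinhole model relating these quantities can be written out as follows (standard notation; the explicit homogeneous coordinates are my spelling-out of the slide's setup):

$$\lambda_i\, x_i = [R\ \ p]\, q_i, \qquad x_i = (u_i, v_i, 1)^T, \quad q_i \in \mathbb{R}^4 \text{ homogeneous}, \quad \lambda_i > 0 \text{ the depth of } q_i.$$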

7 Pose Estimation: Linear Optimization
Pinhole camera: $\lambda_i\, x_i = [R\ p]\, q_i$
Epipolar constraint (planar version): $\widehat{x_i}\,[R\ p]\, q_i = 0$
Planar constraint: choose the target frame so that $q_i(3) = 0$; drop the 3rd component of $q_i$ and the 3rd column of $[R\ p]$ to get $\widehat{x_i}\,[r_1\ r_2\ p]\,\bar{q}_i = 0$, which is linear in $[r_1\ r_2\ p]$
More than 4 feature points: solve linearly for $[r_1\ r_2\ p]$
Project $[r_1\ r_2\ r_1 \times r_2]$ onto $SO(3)$ to recover $R$

This is (as far as we know) a novel model-based pose estimation algorithm. We know the 3-D feature points, and from the measured image points we try to recover $[R\ p]$ using the pinhole camera model. We have the planar version of the epipolar constraint (the cross product of $x$ with $[R\ p]\,q$ is zero). For the planar constraint, we define the coordinate frame so that $q_i(3) = 0$ for all feature points, and drop the 3rd component of $q$ and the 3rd column of $[R\ p]$ to get an equation linear in $[r_1\ r_2\ p]$. We compute $R$ by projection onto $SO(3)$, and the scale on $p$ by averaging the scales on $r_1$ and $r_2$. This is a linear estimator, so it can be quite biased; however, its solution is close enough to the true one to initialize the nonlinear algorithm that follows.
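A minimal sketch of this linear step (the flight code was C++ with LAPACK; Eigen, the function names, and the sign handling here are my illustrative assumptions, not the original implementation):

```cpp
// Linear planar pose: stack x_i^ [r1 r2 p] (q1, q2, 1)^T = 0 over all points
// and take the null vector of the stacked system via SVD.
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix3d skew(const Eigen::Vector3d& x) {
    Eigen::Matrix3d X;
    X <<      0, -x.z(),  x.y(),
          x.z(),      0, -x.x(),
         -x.y(),  x.x(),      0;
    return X;
}

// img: calibrated homogeneous image points x_i; pts: planar target points (q1, q2).
void linearPose(const std::vector<Eigen::Vector3d>& img,
                const std::vector<Eigen::Vector2d>& pts,
                Eigen::Matrix3d& R, Eigen::Vector3d& p) {
    const int n = static_cast<int>(img.size());
    Eigen::MatrixXd A(3 * n, 9);                  // unknown: vec([r1 r2 p])
    for (int i = 0; i < n; ++i) {
        Eigen::Matrix3d X = skew(img[i]);
        A.block<3,3>(3*i, 0) = pts[i](0) * X;     // multiplies r1
        A.block<3,3>(3*i, 3) = pts[i](1) * X;     // multiplies r2
        A.block<3,3>(3*i, 6) = X;                 // multiplies p
    }
    // Null vector of A (up to scale) = stacked [r1; r2; p].
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
    Eigen::VectorXd v = svd.matrixV().col(8);
    Eigen::Vector3d r1 = v.segment<3>(0), r2 = v.segment<3>(3);
    double scale = 0.5 * (r1.norm() + r2.norm()); // average the scales on r1, r2
    r1 /= scale; r2 /= scale;
    p = v.segment<3>(6) / scale;
    // Project [r1 r2 r1xr2] onto SO(3) to get a proper rotation.
    Eigen::Matrix3d Q; Q << r1, r2, r1.cross(r2);
    Eigen::JacobiSVD<Eigen::Matrix3d> rsvd(Q, Eigen::ComputeFullU | Eigen::ComputeFullV);
    R = rsvd.matrixU() * rsvd.matrixV().transpose();
    if (R.determinant() < 0)
        R = rsvd.matrixU() * Eigen::Vector3d(1, 1, -1).asDiagonal()
                           * rsvd.matrixV().transpose();
    // A full implementation would also fix the overall sign of v so depths are positive.
}
```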

8 Pose Estimation: Nonlinear Refinement
Objective: minimize the re-projection error
Parameterize the rotation by ZYX Euler angles
Minimize by Newton-Raphson iteration: $\beta_{n+1} = \beta_n - k_n\,(D_\beta G)^\dagger\, G(q, x, \beta_n)$
Initialize with the linear algorithm

Here we minimize the re-projection error by nonlinear optimization. $(D_\beta G)^\dagger$ is the Moore-Penrose pseudo-inverse of the Jacobian of $G$ with respect to the motion parameters $\beta$; the Jacobian was computed symbolically in the estimation parameters and is evaluated at run time. $k_n$ is an adaptive step size that ensures $\|G(q, x, \beta)\|$ goes to zero monotonically. The linear and nonlinear algorithms complement each other nicely: the linear one is noisy but lands in the ballpark of the correct pose, while the nonlinear one gives good estimates but needs a good initialization or else gets stuck in local minima. We therefore initialize the iteration with the estimate from the linear algorithm.
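A sketch of that damped iteration (the talk's system used a symbolically computed Jacobian; a finite-difference Jacobian stands in here, and all names, tolerances, and the backtracking rule are assumptions):

```cpp
// Damped Newton-Raphson on the reprojection residual G(beta),
// beta = (ZYX Euler angles, p).
#include <Eigen/Dense>
#include <functional>

using Residual = std::function<Eigen::VectorXd(const Eigen::VectorXd&)>;

Eigen::VectorXd refinePose(const Residual& G, Eigen::VectorXd beta,
                           int maxIter = 50, double tol = 1e-9) {
    const double h = 1e-6;                       // finite-difference step
    for (int it = 0; it < maxIter; ++it) {
        Eigen::VectorXd g = G(beta);
        if (g.norm() < tol) break;
        Eigen::MatrixXd J(g.size(), beta.size());
        for (int j = 0; j < beta.size(); ++j) {  // numerical Jacobian D_beta G
            Eigen::VectorXd b = beta; b(j) += h;
            J.col(j) = (G(b) - g) / h;
        }
        // Step through the Moore-Penrose pseudo-inverse of the Jacobian.
        Eigen::VectorXd step =
            J.completeOrthogonalDecomposition().pseudoInverse() * g;
        double k = 1.0;                          // adaptive step size k_n:
        while (k > 1e-4 && G(beta - k * step).norm() >= g.norm())
            k *= 0.5;                            // backtrack until |G| decreases
        beta -= k * step;
    }
    return beta;
}
```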

9 Multiple-View Motion Estimation
Pinhole camera: $\lambda_i^j\, x_i^j = [R_i\ p_i]\, q^j$ for point $j$ in view $i$
Multiple-view matrix for a point seen in views $1, \dots, m$ (motions relative to view 1):
$$M_j = \begin{bmatrix} \widehat{x_2^j}\,R_2\,x_1^j & \widehat{x_2^j}\,p_2 \\ \vdots & \vdots \\ \widehat{x_m^j}\,R_m\,x_1^j & \widehat{x_m^j}\,p_m \end{bmatrix}$$
Rank deficiency constraint: $\mathrm{rank}(M_j) \le 1$, i.e. $M_j\,[\lambda_1^j\ \ 1]^T = 0$, where $\lambda_1^j$ is the depth of point $j$ in view 1. The setup is as before: measured image points $x_i^j$, with pose $[R_i\ p_i]$ per view to be recovered.
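A small sketch of this constraint (my Eigen-based illustration, reusing the skew() helper from the linear-pose sketch): build $M_j$ from the motions and image points; for a true correspondence its second singular value vanishes, and its null vector gives the depth.

```cpp
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix3d skew(const Eigen::Vector3d& x) {
    return (Eigen::Matrix3d() <<      0, -x.z(),  x.y(),
                                  x.z(),      0, -x.x(),
                                 -x.y(),  x.x(),      0).finished();
}

// Multiple-view matrix M_j for one point: x1 in view 1; x[i], R[i], p[i]
// for views 2..m. rank(M_j) <= 1 for an actual correspondence.
Eigen::MatrixXd multiViewMatrix(const Eigen::Vector3d& x1,
                                const std::vector<Eigen::Vector3d>& x,
                                const std::vector<Eigen::Matrix3d>& R,
                                const std::vector<Eigen::Vector3d>& p) {
    const int m = static_cast<int>(x.size());
    Eigen::MatrixXd M(3 * m, 2);
    for (int i = 0; i < m; ++i) {
        M.block<3,1>(3*i, 0) = skew(x[i]) * R[i] * x1;
        M.block<3,1>(3*i, 1) = skew(x[i]) * p[i];
    }
    return M;
}

// The null vector of M_j is proportional to (lambda_1, 1), so the depth in
// view 1 is the ratio of the last right singular vector's components.
double depthFromM(const Eigen::MatrixXd& M) {
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeFullV);
    Eigen::Vector2d v = svd.matrixV().col(1);
    return v(0) / v(1);
}
```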

10 Multiple-View Motion Estimation
$n$ points in $m$ views
Equivalent to finding $\lambda$ and $[R_i\ p_i]$ such that $M_j\,[\lambda_1^j\ \ 1]^T = 0$ for all points $j$
Initialize with the two-view linear solution
Least-squares solution for the depths $\lambda_1^j$ given the motions
Use the $\lambda_1^j$ to linearly solve for $[R_i\ p_i]$
Iterate until convergence

Now we have multiple points in multiple images; the depth step and the motion step alternate, as spelled out below.
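Spelling out the two alternating steps (my derivation from the rank constraint above, not text from the slides): solving $M_j\,[\lambda_1^j\ \ 1]^T = 0$ in the least-squares sense gives the closed-form depth

$$\lambda_1^j \;=\; -\,\frac{\sum_{i=2}^{m} \big(\widehat{x_i^j}\,R_i\,x_1^j\big)^{T}\,\widehat{x_i^j}\,p_i}{\sum_{i=2}^{m} \big\|\widehat{x_i^j}\,R_i\,x_1^j\big\|^{2}},$$

and with the depths fixed, each constraint $\widehat{x_i^j}\big(\lambda_1^j R_i\, x_1^j + p_i\big) = 0$ is linear in the entries of $(R_i, p_i)$, so the motions can be re-estimated by one linear solve per view.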

11 Real-time Vision System
Ampro embedded Little Board PC: Pentium 233 MHz running Linux; 440 MB flashdisk HD, robust to vibration; runs the motion estimation algorithm and controls the pan/tilt/zoom camera
Motion estimation algorithms: written and optimized in C++ using LAPACK; estimate relative position and orientation at 30 Hz

The vision system hardware consists of off-the-shelf components: the Little Board computer running Linux, and a pan/tilt/zoom camera, a popular Sony camera designed for video tele-conferencing (several mobile ground robotics projects also use it). It was not designed for the harsh environment of a rotorcraft UAV with its serious body vibrations, but we put it there anyway. The framegrabber is an Imagenation PXC200 based on the Brooktree Bt848 chipset. The UAV test-bed is a Yamaha R-50 model helicopter, which carries a navigation computer (running QNX; handles low-level control, guidance, and navigation) and an INS/GPS: a NovAtel GPS (2 cm accuracy) with the Boeing DQI-NP INS/GPS integration system. As you might imagine, this research is part of a larger project in multi-agent robotics with ground and aerial vehicles at UC Berkeley; for example, tomorrow my colleague Rene Vidal will describe a project on multi-agent pursuit-evasion games between UAVs and ground vehicles, in which the same UAV and vision system test-bed were used to locate and track evaders. The software is written in C++ and runs in real time.
[Figure: UAV, pan/tilt camera, onboard computer]

12 Hardware Configuration
[Diagram: on-board UAV vision system — vision computer (vision algorithm, frame grabber, camera) connected by RS232 to the navigation system — navigation computer (control & navigation, INS/GPS); WaveLAN links to the ground]

This is the hardware configuration of the on-board system; the figure shows the interconnections between the vision and navigation systems. The vision and navigation computers communicate through an RS232 serial link. In this talk we use that link only to access the "ground truth" pose from the INS/GPS for evaluating the vision estimates; once vision is in the control loop, the link will carry control commands to the navigation computer. The vision computer controls the PTZ camera through a serial link and issues PTZ commands based on the images captured by the frame grabber. Both computers also have wireless ethernet (WaveLAN) links to ground stations for monitoring.

13 Feature Extraction
Pipeline: acquire image, threshold/histogram, segmentation, target detection, corner detection, correspondence

Now I will discuss the vision system software, starting with the low-level image processing.
Landing target design: we are free to design the target to simplify the computer vision tasks of target recognition, feature extraction (corner detection), and correspondence matching (labeling). This particular target was designed on the consideration that squares give the best corners, and the asymmetry introduced by the large square makes feature labeling easy: each corner of the target can be uniquely identified.
Image acquisition: the picture shows the landing target as seen from the on-board camera.
Histogram/threshold: compute the histogram and threshold the image to get a binary image; the threshold is a fixed percentage between the minimum and maximum gray levels.
Segmentation: two consecutive passes of a flood-fill connected-components labeling algorithm. The first pass identifies the background as the largest black component; target classification then identifies the landmark as the single foreground region with 6 white regions and 1 black region.
Corner detection: we need to detect the corners of a 4-sided polygon in a binary image, and we know exactly how many corners we want: no more, no less. This well-structured setting avoids the computational cost (and unknown number of resulting corners) of a general-purpose corner detector. We use the invariant that convexity is preserved under perspective projection: choose the 2 points farthest from the vertical line through the polygon's center of gravity; choose the 3rd point as the one farthest from the line connecting the first two corners; choose the 4th point as the one with maximal distance from the triangle defined by the first 3 (a sketch of this step follows below).
Feature labeling: the counterclockwise order of vectors is invariant under Euclidean motion and perspective projection. First we uniquely identify the 6 squares: start by labeling square D (lower-left corner); for each square, compute the vectors from its center to the centers of the other squares; square D is the one with two pairs of collinear vectors. Order the remaining squares counterclockwise (and by distance) from D, and order the corners within each square counterclockwise in the same manner.
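A minimal sketch of that convexity-based corner pick (my illustration; the point type, the extreme-left/right reading of "farthest from the vertical line", and the opposite-side reading of "farthest from the triangle" are assumptions, not the original code):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Signed area term: > 0 if c lies left of the directed line a -> b;
// |side| is proportional to the distance from c to that line.
static double side(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Pick the 4 corners of a convex quadrilateral from its boundary pixels.
std::vector<Pt> pickCorners(const std::vector<Pt>& poly) {
    // Corners 1, 2: the points farthest from the vertical line through the
    // center of gravity, i.e. the extreme-left and extreme-right points.
    Pt c1 = poly[0], c2 = poly[0];
    for (const Pt& p : poly) {
        if (p.x < c1.x) c1 = p;
        if (p.x > c2.x) c2 = p;
    }
    // Corner 3: the point farthest from the line c1-c2.
    Pt c3 = poly[0];
    for (const Pt& p : poly)
        if (std::abs(side(c1, c2, p)) > std::abs(side(c1, c2, c3))) c3 = p;
    // Corner 4: farthest from the triangle (c1, c2, c3). For a convex quad
    // whose extreme points are c1, c2, that is the point farthest from the
    // line c1-c2 on the side opposite c3.
    double s3 = side(c1, c2, c3) > 0 ? 1.0 : -1.0;
    Pt c4 = poly[0]; double best = 0;
    for (const Pt& p : poly) {
        double d = -s3 * side(c1, c2, p);    // positive on the opposite side
        if (d > best) { best = d; c4 = p; }
    }
    return {c1, c2, c3, c4};
}
```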

14 Camera Control
Pan/tilt to keep features in the image center
Prevents features from leaving the field of view
Increased field of view; increased range of motion of the UAV

The goal of the pan/tilt camera is to track the landing target and keep it in the field of view: we simply issue pan and tilt commands, according to the geometry in the figure, to keep the target centered (one plausible form of that command is sketched below). This Sony PTZ camera was not designed for the harsh vibrations of the helicopter environment, and jitter of the camera head relative to the base is a problem; the next step is a vibration-isolating mount for the camera to reduce the head jitter.
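A sketch of the centering geometry (not the original control law; the focal-length parameterization and names are assumptions):

```cpp
#include <cmath>

// Given the target centroid (u, v) in pixels, the image center (u0, v0), and
// the focal length f in pixels, compute pan/tilt increments that recenter
// the target in the image.
void panTiltCommand(double u, double v, double u0, double v0, double f,
                    double& dPan, double& dTilt) {
    dPan  = std::atan2(u - u0, f);   // horizontal angular offset of the target
    dTilt = std::atan2(v0 - v, f);   // vertical offset (image y points down)
}
```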

15 Comparing Vision with INS/GPS
[Plots at the ground station: vision estimates vs. INS/GPS]
The linear multi-view algorithm performs as well as the nonlinear optimization-based two-view algorithm
The linear multi-view algorithm is more globally robust than the nonlinear one

16 Motion Estimation in Real Flight Tests

17 Landing on Stationary Target

18 Tracking Pitching Target

19 Conclusions
Contributions:
Vision-based motion estimation (5 cm accuracy)
Real-time vision system in the control loop
Demonstrated proof-of-concept prototype: the first vision-based UAV landing
Extensions:
Dynamic vision: filtering motion estimates; symmetry-based motion estimation
Fixed-wing UAVs: vision-based landing on runways
Modeling and prediction of ship deck motion; landing gear that grabs the ship deck
Unstructured environments: recognizing good landing spots (grassy field, roof top, etc.)

