PixelLaser: Range Scans from Image Segmentation
Nicole Lesperance ’11, Michael Leece ’11, Steve Matsumoto ’12, Max Korbel ’13, Kenny Lei ’15, Zach Dodds
REU
Inspiration
Horswill (Polly) ’94
Saxena (rccar) ’05
Related Work
C. Plagemann et al., ICRA 2008: learning range scans from omnicam images [figures: platform, omnicam images, error results]
Motivation
Laser range finder: expensive, less information (180 rays of distances)
Camera: cheap, more information (640 columns of pixels, each of which we can get a range from)
But it’s harder (for a computer) to extract data from an image than from a laser range scan. Which is where we come in!
Overall Strategy
Classify → Segment → Scan → Maps! (in pictorial form!)
obstacle vs. ground (and localization)
Training and Classifying
Hand-segmented images → filters → features, stored in kd-trees
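The training step above can be sketched as follows. This is a minimal illustration, assuming hand-labeled images where each pixel is marked ground (1) or obstacle (0); the patch size, the mean-RGB feature, and the helper names are illustrative choices, not the project's actual code.

```python
import numpy as np
from scipy.spatial import cKDTree

PATCH = 8  # hypothetical patch size in pixels

def patch_features(img, r, c):
    """Mean RGB of the PATCH x PATCH patch whose top-left corner is (r, c)."""
    patch = img[r:r+PATCH, c:c+PATCH]
    return patch.reshape(-1, 3).mean(axis=0)

def build_classifier(img, labels):
    """Collect one feature vector per patch, split by class, build kd-trees."""
    feats, cls = [], []
    for r in range(0, img.shape[0] - PATCH, PATCH):
        for c in range(0, img.shape[1] - PATCH, PATCH):
            feats.append(patch_features(img, r, c))
            # the majority label of the patch decides its class
            cls.append(int(labels[r:r+PATCH, c:c+PATCH].mean() > 0.5))
    feats, cls = np.array(feats), np.array(cls)
    # one kd-tree per class: 0 = obstacle, 1 = traversable ground
    return {k: cKDTree(feats[cls == k]) for k in (0, 1) if (cls == k).any()}
```

Classifying a new patch then reduces to querying each class's tree and comparing nearest-neighbor distances.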
Nearest-Neighbor Classification
RGB alone doesn’t work well…
Nearest Neighbors
RGB + texture produces better segmentations
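One way to see why adding texture helps: carpet and a gray wall can share a mean color but differ in local variation. A minimal sketch of the idea, using per-channel standard deviation as a stand-in texture measure (the project's actual filter bank is not specified here):

```python
import numpy as np

def rgb_texture_feature(patch):
    """6-D feature: mean RGB plus per-channel std (a crude texture proxy)."""
    flat = patch.reshape(-1, 3)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def nn_classify(feature, train_feats, train_labels):
    """Brute-force 1-nearest-neighbor over stored training features."""
    dists = np.linalg.norm(train_feats - feature, axis=1)
    return train_labels[int(np.argmin(dists))]
```

A smooth mid-gray patch and a noisy patch have nearly the same mean RGB, yet the std components separate them cleanly.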
Classifier Variations
Many variations were tried, but overall accuracy for reasonable versions hovered around 95%
3-tree classifier: used above, below, and line categories
Genetic algorithm: attempted to find optimal weight values, but overfit to the set of images used to evolve the classifier; the weight set did not generalize to other environments
Different environments: the classifier was tested in environments of varying difficulty
Overall Accuracy Results
Regardless of the environment, the classifier performs consistently
[Charts: overall accuracy (%) in the Hallway and Library Downstairs environments]
FLANN (Fast Library for Approximate Nearest Neighbors)
C++ k-NN implementation with variable precision levels
Decreased kd-tree lookup time by two orders of magnitude
Allows more complex real-time segmentation algorithms
Allows the robot to wander autonomously in real time using only the segmenter
Marius Muja: our hero
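The lookup pattern FLANN accelerates can be sketched with SciPy's `cKDTree` as a stand-in (FLANN itself is a separate C++ library with Python bindings; the dataset sizes below are arbitrary). The kd-tree answers the same queries as brute force while touching far fewer points:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train = rng.random((10000, 6))   # stored patch features
query = rng.random((100, 6))     # features from a new camera frame

tree = cKDTree(train)
dists, idx = tree.query(query, k=1)  # one tree lookup per query point

# brute force for comparison: same neighbors, far more work per query
brute = np.argmin(((query[:, None, :] - train[None, :, :]) ** 2).sum(-1), axis=1)
assert (idx == brute).all()
```

FLANN additionally lets you trade exactness for speed (approximate search with a tunable `checks` budget), which is what makes real-time per-frame classification feasible.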
Now Arriving at… Segmentation!
Uses the classifier’s output to draw the line between the ground and obstacles
Sends that line to the scanner to produce laser-style range scans
Classify → Segment → Scan → Map
Initial approaches: (1) bottom-up, (2) patch pairs, (3) multiresolution
Transition Segmentation
Uses transition strength rather than raw classification
Assigns transition strength based on the certainty that the patch below is traversable and the patch above is not
Takes the difference of the “above strength” of the top patch and the “below strength” of the bottom one
Issue: misclassifications can produce incorrectly strong transitions
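The transition-strength computation above can be sketched in a few lines, assuming the classifier outputs a per-patch ground certainty in [0, 1] laid out as a (rows, cols) grid (that layout is an assumption here):

```python
import numpy as np

def transition_strengths(ground_certainty):
    """Per-column transition strength between vertically adjacent patches.

    ground_certainty[r, c] in [0, 1]: classifier confidence that patch
    (r, c) is traversable ground. Row 0 is the top of the image, so a
    strong ground/obstacle boundary has low certainty on top of high
    certainty below.
    """
    below = ground_certainty[1:, :]   # the lower patch of each pair
    above = ground_certainty[:-1, :]  # the upper patch of each pair
    return below - above              # large where ground sits under obstacle
```

A single misclassified patch creates a spurious strong transition in its column, which is exactly the failure mode the smoothing and seam-carving slides address.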
Transitions Using a Line Tree
The strongest transitions are found, and the area best matching the line category is chosen
Seam Carving
Start at one end of the image and find the strongest transition
Assign a cost to deviating from the line height of the previous column
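The seam-carving idea is a left-to-right dynamic program: each column picks a boundary row, rewarding strong transitions while penalizing jumps from the previous column. A minimal sketch, where the deviation penalty `dev_cost` is an illustrative value rather than the project's tuned one:

```python
import numpy as np

def carve_boundary(strength, dev_cost=0.5):
    """Find one boundary row per column by dynamic programming.

    strength: (rows, cols) transition-strength map.
    """
    rows, cols = strength.shape
    score = np.zeros((rows, cols))
    back = np.zeros((rows, cols), dtype=int)
    score[:, 0] = strength[:, 0]
    r_idx = np.arange(rows)
    for c in range(1, cols):
        # cand[prev, cur]: best score ending at row `cur` coming from `prev`
        cand = score[:, c-1][:, None] - dev_cost * np.abs(r_idx[:, None] - r_idx[None, :])
        back[:, c] = cand.argmax(axis=0)
        score[:, c] = cand.max(axis=0) + strength[:, c]
    # trace the best path back from the last column
    path = np.zeros(cols, dtype=int)
    path[-1] = int(score[:, -1].argmax())
    for c in range(cols - 1, 0, -1):
        path[c-1] = back[path[c], c]
    return path
```

Because a large jump costs `dev_cost` per row, a single spuriously strong transition in one column cannot pull the line far from its neighbors.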
Smoothing Certainty Locally
Calculate certainties for all patches in the image
Apply an averaging filter to the certainties to suppress rogue misclassifications
Look for the strongest transition in each column and draw the line
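The steps above can be sketched directly: a box filter over the certainty grid, then a per-column search for the strongest certainty jump. The filter size is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_certainty(cert, size=3):
    """Local box average over the patch-certainty grid; a single rogue
    misclassified patch is pulled toward its neighbors' values."""
    return uniform_filter(cert, size=size, mode="nearest")

def column_boundaries(cert):
    """Row of the strongest top-to-bottom certainty jump in each column."""
    return np.diff(cert, axis=0).argmax(axis=0)
```

On an all-ground region with one misclassified patch, the smoothed value at the bad patch moves most of the way back toward its neighbors, so it no longer wins the per-column transition search.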
Edge Detection
Global image edge detection is not useful, but “snapping” to the nearest edge within the strongest transition area reduces segmentation error
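The snapping step can be illustrated per column: keep the coarse boundary estimate, then move it to the strongest intensity gradient within a small window around it. The window size and the simple difference gradient are illustrative stand-ins for whatever edge measure the project used:

```python
import numpy as np

def snap_to_edge(column_intensity, guess_row, window=5):
    """Move a boundary estimate to the strongest vertical intensity
    gradient within +/- window rows of the initial guess."""
    lo = max(guess_row - window, 0)
    hi = min(guess_row + window + 1, len(column_intensity) - 1)
    grad = np.abs(np.diff(column_intensity[lo:hi + 1]))
    return lo + int(grad.argmax())
```

This keeps the search local, so a strong but distant edge (a doorframe, a poster) cannot hijack the boundary the way a global edge detector would.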
Results
[Charts: average pixel error per column and average distance error per column (inches) for the Multi-resolution, Smooth, Transition, and Line methods across the Sprague Lab, Sprague First Floor, Libra, and Hallways environments]
Results: Moving
[Charts: the same methods and environments, average pixel error per column and average distance error per column (inches), measured while the robot is in motion]
From Segmentation to Scan
Each column’s segmentation gives a pixel row number; a row-to-range map converts that row to a range, turning the segmentation into a scan
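One common way to build such a row-to-range map, assuming a flat ground plane and a pinhole camera at known height and tilt (all camera parameters below are hypothetical, not the robot's calibration):

```python
import math

def row_to_range(row, height_m=0.3, tilt_rad=0.4, f_pix=500.0, cy=240.0):
    """Ground-plane range (meters) for an image row.

    height_m: camera height above the floor; tilt_rad: downward tilt;
    f_pix: focal length in pixels; cy: principal-point row.
    Rows increase downward, so larger rows look at nearer floor.
    """
    # angle below horizontal of the ray through this row
    angle = tilt_rad + math.atan((row - cy) / f_pix)
    if angle <= 0:
        return float("inf")  # ray at or above the horizon: no ground hit
    return height_m / math.tan(angle)
```

In practice the map could equally be learned empirically by placing targets at known distances and recording the boundary row, which sidesteps calibration entirely.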
Scan Matching
Match two consecutive laser scans to find the transformation between them
Can be used to correct poor odometry
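A crude way to see what scan matching computes: grid-search over candidate translations and rotations for the one that best aligns the new scan's points with the previous scan's. Real matchers use ICP or correlative search instead; this brute-force sketch, with arbitrary search ranges, only illustrates the objective:

```python
import numpy as np

def match_scans(ref, cur,
                trans=np.linspace(-0.2, 0.2, 9),
                rots=np.linspace(-0.2, 0.2, 9)):
    """Find (dx, dy, dtheta) minimizing mean nearest-point distance
    when scan `cur` (N x 2 points) is transformed onto `ref`."""
    best, best_err = (0.0, 0.0, 0.0), np.inf
    for th in rots:
        c, s = np.cos(th), np.sin(th)
        rotated = cur @ np.array([[c, -s], [s, c]]).T
        for dx in trans:
            for dy in trans:
                moved = rotated + [dx, dy]
                # distance from each moved point to its nearest ref point
                d = np.sqrt(((moved[:, None, :] - ref[None, :, :]) ** 2).sum(-1))
                err = d.min(axis=1).mean()
                if err < best_err:
                    best_err, best = err, (dx, dy, th)
    return best
```

The recovered transformation is exactly the odometry correction: it says how the robot actually moved between the two scans.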
SLAM Map Building
Combine all laser scans to build maps using CoreSLAM (simultaneous localization and mapping)
Monte Carlo Localization
Create a random list of possible robot locations
Monte Carlo Localization cycle:
Move: shift all points by the change in odometry
Sense: compare the laser scan from the segmented image with the scans expected at the possible locations
Resample: redistribute points according to the probability at each location
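The Move / Sense / Resample cycle above is a particle filter. A toy sketch on (x, y) particles, where `expected_fn` stands in for ray-casting a scan from the map at a candidate pose, and the noise and weighting constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcl_step(particles, odom_delta, measured, expected_fn, noise=0.05):
    """One Move / Sense / Resample cycle on an N x 2 array of particles.

    expected_fn(p): the range scan the map predicts at particle p
    (a stand-in for ray-casting against the known map).
    """
    # Move: apply the odometry change plus motion noise
    particles = particles + odom_delta + rng.normal(0, noise, particles.shape)
    # Sense: weight each particle by scan agreement
    errs = np.array([np.abs(expected_fn(p) - measured).mean() for p in particles])
    weights = np.exp(-errs / 0.1)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

In a one-wall toy world the cloud collapses onto the true pose after a few cycles, which is the behavior the localization slides depict.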
Point-and-Click Navigation
Given a known map, we want to click on a location in the map and have the robot navigate there
The current implementation works only when there is a straight, unobstructed path between the current location and the destination (no path-planning algorithm yet)
Remaining considerations include navigating around obstacles and correcting odometry
Autonomous Wandering
Future Work
Segmenting with a projected laser level line, possibly also usable for automated training
Stabilizing the camera: keeping the horizon constant while moving is almost impossible
Increasing the wandering speed: a more stable camera would help with this
Autonomous mapping
Questions?
Segmentation: classification + confidence → segmentations