A Versatile Depalletizer of Boxes Based on Range Imagery. Dimitrios Katsoulas*, Lothar Bergen*, Lambis Tassakos**. *University of Freiburg, **Inos Automation-software GmbH.

Introduction  Definition of the depalletizing problem.  Our research focuses on depalletizing objects in distribution centers (boxes, box-like objects, sacks).  This contribution concerns depalletizing of boxes.  Our system deals with pallets containing piled boxes of various dimensions.  Our system is model-based.

Related Work (1)  Camera-based systems.  Range-imagery-based systems: Katsoulas et al. 2002; Kristensen et al.; Baerveldt 1993; Vayda et al. 1990.

Related Work (2)  Chen et al. 1989: Model-based. Hypothesis generation and verification framework. Idea: object vertices incorporate the constraints necessary for unique determination of the pose transform. Vertices are represented via surface normals and the vertex point. The vertex point is determined via the intersection of surfaces; 3 surfaces need to be exposed for accurate computation of the vertex point.
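The vertex-point computation in Chen et al.'s representation amounts to intersecting three planes, i.e. solving a 3×3 linear system built from the surface normals. A minimal sketch (the plane parameters below are illustrative, not taken from the paper):

```python
import numpy as np

# Three exposed surfaces, each a plane n_i . x = d_i.
# Stacking the normals gives N x = d, solved directly for the vertex point.
N = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # surface normals (one per row)
d = np.array([0.2, 0.3, 0.5])    # plane offsets (illustrative values)
vertex = np.linalg.solve(N, d)
```

This also makes Chen et al.'s limitation visible: with fewer than three exposed surfaces, `N` is rank-deficient and the system has no unique solution.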

Our Approach  Data acquisition via a time-of-flight laser sensor mounted on the hand of the robot.  Accurate detection of box edges.  Vertex representation via box edges.  Hypothesis generation and verification framework triggered by detected vertices.

The System in Operation

Edge Map Creation via Scan Line Approximation  Edge points are detected by approximating the rows and columns of the range image with linear segments.  The algorithm uses information from long line segments to compute the edge points.  It is more accurate than local approaches.  It is fast.
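The scan-line approximation can be sketched as recursively splitting each row (or column) of the range image into linear segments and taking the breakpoints between segments as edge points. The split criterion below (maximum chord deviation, Douglas-Peucker style) and the tolerance value are assumptions, not the paper's exact formulation:

```python
import numpy as np

def split_into_segments(profile, tol=0.5):
    """Recursively approximate a 1D depth profile with linear segments.
    Returns the breakpoint indices; breakpoints between adjacent
    segments are the edge-point candidates."""
    def split(lo, hi):
        if hi - lo < 2:
            return [lo, hi]
        # Deviation of each point from the chord joining the endpoints.
        x = np.arange(lo, hi + 1)
        chord = profile[lo] + (profile[hi] - profile[lo]) * (x - lo) / (hi - lo)
        dev = np.abs(profile[lo:hi + 1] - chord)
        k = int(np.argmax(dev))
        if dev[k] <= tol:
            return [lo, hi]          # segment is linear enough: stop
        left = split(lo, lo + k)
        right = split(lo + k, hi)
        return left[:-1] + right     # merge, dropping the duplicated breakpoint
    return split(0, len(profile) - 1)

# A depth step of 20 units produces breakpoints at the discontinuity:
row = np.concatenate([np.full(20, 100.0), np.full(20, 80.0)])
breaks = split_into_segments(row)
```

Because each breakpoint is supported by the long segments on either side of it, the localisation is less noise-sensitive than purely local gradient operators, which is the accuracy argument the slide makes.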

Vertex Detection (1)  Accurate detection of 3D lines corresponding to edges of boxes.  Grouping of compatible box edges to yield vertices. Representation of a vertex as a triplet (feature set).  Two-step 3D line detection (inspired by the dynamic generalized Hough transform): Get a rough approximation of the position of the lines in space. Exploit the sparsity of lines in 3D space to refine the roughly computed line parameters.

Vertex Detection (2) – 3D Line Detection  2D connected components (CCs) are determined.  3D line segments are fitted to the points defined by the CCs.  Points in the vicinity of the segments are considered.  Robust calculation of the line's direction vector.  Robust calculation of the line's starting point.

Vertex Detection (3) – 3D Line Detection  Calculation of the direction vector: Selection of random pairs of points and calculation of difference vectors. Accumulation of the coordinates of the difference vectors in three 1D accumulators. The maxima of the accumulators are the desired parameters, provided that the standard deviation of the accumulators is below a threshold.  A similar technique is used for calculating the starting point.

Object Recognition (1)  Based on hypothesis generation and verification.  A scene vertex is aligned to a model box vertex.  This alignment produces a transform which brings the model into the scene; this is equivalent to hypothesis creation.  Verification of the hypothesis is performed by examining the positions of neighbouring vertices.

Object Recognition (2) – Model Database  The model database contains triplets of the form (model local feature set) which express model vertices.  6 feature sets per model are stored in the model DB.

Object Recognition (3)

Hypothesis Generation  If a scene vertex is matched to a model vertex, a box location hypothesis is generated as follows: Rotation Matrix: Translation Vector:
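The rotation-matrix and translation-vector formulas were given on the slide as images. One standard way to realise this alignment, sketched under the assumption that a vertex is represented by its corner point plus unit edge directions, is the SVD (Kabsch) solution; the paper may use a closed form specific to vertex triplets:

```python
import numpy as np

def vertex_alignment(model_pt, model_dirs, scene_pt, scene_dirs):
    """Rigid transform (R, t) aligning a model vertex to a scene vertex.
    model_dirs/scene_dirs: corresponding unit edge directions (rows).
    Standard Kabsch/SVD rotation fit, then t = scene_pt - R @ model_pt."""
    M = np.asarray(scene_dirs).T @ np.asarray(model_dirs)    # 3x3 covariance
    U, _, Vt = np.linalg.svd(M)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det(R)=+1
    R = U @ S @ Vt
    t = np.asarray(scene_pt) - R @ np.asarray(model_pt)
    return R, t

# Example: scene vertex = model vertex rotated 90 degrees about z, then shifted.
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
model_dirs = np.array([[1., 0., 0.], [0., 1., 0.]])   # two visible edge directions
scene_dirs = model_dirs @ Rz.T
R, t = vertex_alignment(np.zeros(3), model_dirs,
                        np.array([1., 2., 3.]), scene_dirs)
```

Two non-parallel edge directions already pin down the rotation uniquely, which is why a single matched vertex suffices to generate a full pose hypothesis.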

Hypothesis Verification (1)  For every adjacent scene vertex, its position in the model coordinate system is determined.  If one of the vertices of the model which created the hypothesis is close to the back-projected adjacent vertex, it is considered compatible with the scene vertex which created the hypothesis.  Verification depends on the number of compatible scene vertices found.
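The back-projection check can be sketched as follows; the box dimensions and the distance tolerance are illustrative (the tolerance is chosen to match the 2 cm translation accuracy reported later), not values from the paper:

```python
import numpy as np

def count_compatible(R, t, adjacent_scene_pts, model_vertex_pts, tol=0.02):
    """Back-project each adjacent scene vertex into the model frame and
    count those lying within `tol` of some model vertex point."""
    compatible = 0
    for p in adjacent_scene_pts:
        q = R.T @ (np.asarray(p) - t)                          # scene -> model
        if np.linalg.norm(model_vertex_pts - q, axis=1).min() < tol:
            compatible += 1
    return compatible

# Model box of 0.3 x 0.2 x 0.1 m: its 8 corner points in model coordinates.
dims = np.array([0.3, 0.2, 0.1])
corners = np.array([[x, y, z] for x in (0, dims[0])
                              for y in (0, dims[1])
                              for z in (0, dims[2])])
R, t = np.eye(3), np.array([1.0, 1.0, 0.0])                    # hypothesised pose
scene_vertices = [t + corners[3] + 0.005,                      # near a corner
                  t + np.array([0.5, 0.5, 0.5])]               # spurious vertex
n = count_compatible(R, t, scene_vertices, corners)
```

The final verification decision then thresholds `n`, the number of compatible adjacent vertices.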

Hypothesis Verification (2)

Hypothesis Verification (3)  Verification criterion when only one surface is exposed: detection of one compatible vertex which shares no common edge with the hypothesis-generating vertex.  This safely verifies the occurrence of a box side (surface).  Not a problem, since we are looking for graspable surfaces.  However, the set of constraints necessary for verification needs to be further reduced.

Hypothesis Verification (4)

Hypothesis Verification (5)

Hypothesis Verification (6)

Plane Fitting Test  Idea: the border points of the hypothesised surface can be accurately computed.  3D range points lying on the hypothesised surface segment are extracted.  A plane is fitted to the 3D points.  If the fitting error is below a threshold, the hypothesis is considered verified.
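The fitting step can be sketched as a least-squares plane fit via SVD of the centred point cloud; the acceptance threshold below is an assumed value, not the one used in the system:

```python
import numpy as np

def plane_fit_error(points):
    """RMS distance of 3D points to their least-squares plane.
    The singular vector for the smallest singular value of the centred
    cloud is the plane normal; residuals are projections onto it."""
    P = np.asarray(points, dtype=float)
    centred = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    normal = Vt[-1]                       # rows of Vt are sorted by singular value
    return float(np.sqrt(np.mean((centred @ normal) ** 2)))

# Points extracted from the hypothesised surface segment (here: an ideal
# planar grid); accept the hypothesis if the fit error is small enough.
pts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
verified = plane_fit_error(pts) < 0.005   # threshold assumed
```

A low residual confirms that the range points inside the hypothesised border actually form one planar box side, rather than spanning a depth discontinuity.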

Hypothesis Verification (7)

Discussion on the Plane Fitting Test  It is implemented via polygon rasterization.  It reduces the number of scene vertices that need to be detected for safe verification.  It can be used as a verification method when no compatible scene vertices are detected.  It ensures that a detected box surface is graspable.  It enables even more accurate grasping.

Experimental Results (1) – Intensity Image

Experimental Results (1) – Edge Map

Experimental Results (1) – Robust Lines

Experimental Results (1) – Vertices

Experimental Results (1) – Detected Boxes

Experimental Results (2) – Intensity Image

Experimental Results (2) – Edge Map

Experimental Results (2) – Robust Lines

Experimental Results (2) – Vertices

Experimental Results (2) – Detected Boxes

System Advantages (1)  Computational efficiency: fast vertex detection, fast hypothesis generation and verification. More than one box is detected per scan. Less than 15 seconds per box.  Accuracy: accurate hypothesis generation due to robust detection of vertices. 4 degrees in orientation, 2 cm in translation.  Robustness: robust verification criteria. No false identifications.

System Advantages (2)  Independence from lighting conditions: employment of a time-of-flight laser sensor.  Versatility: deals with both layered and jumbled configurations.  Ease of installation: sensor on the hand of the robot.  Low cost: cost of the sensor ~3000 Euro.  Simplicity.

Problems  The system fails when the objects are placed very close to each other in distinct layers, because no edge points or vertices can be detected.  Solution: an additional sensor (e.g. an intensity camera) to resolve the correct orientation.

Future Work  Exploit vertices detected in previous scans to make the recognition process faster.  Depalletizing of non-rigid box-like objects and piled sacks.