Overview of Our Sensors For Robotics


Machine vision / Computer vision
• To recover useful information about a scene from its 2-D projections
• To take images as inputs and produce other types of outputs (object shape, object contour, etc.)
• Geometry + Measurement + Interpretation
• To create a model of the real world from images

Topics
• Computer vision system
• Image enhancement
• Image analysis
• Pattern classification

Related fields
• Image processing: transformation of images into other images (image compression, image enhancement); useful in the early stages of a machine vision system
• Computer graphics
• Pattern recognition
• Artificial intelligence
• Psychophysics

Vision system hardware

Image Processing System

Image Representation

Image: a two-dimensional array of pixels. The indices [i, j] of a pixel are integer values that specify its row and column in the array of pixel values.
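A minimal sketch of this representation in C (the 320 x 240 dimensions are an arbitrary choice for illustration):

```c
#include <stdint.h>
#include <stdio.h>

#define ROWS 240
#define COLS 320

/* An image as a two-dimensional array of pixels: image[i][j] holds the
 * intensity value at row i, column j. */
static uint8_t image[ROWS][COLS];

int main(void) {
    int i = 10, j = 20;                 /* integer indices [i, j] */
    image[i][j] = 128;                  /* set one pixel value */
    printf("pixel [%d,%d] = %u\n", i, j, image[i][j]);
    return 0;
}
```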

Sampling, pixels and quantization
The real image is sampled at a finite number of points.
• Sampling rate: the image resolution, i.e. how many pixels the digital image will have, e.g. 640 x 480, 320 x 240, etc.
• Pixel: each image sample; at the sample point, the image intensity is stored as an integer value.

Quantization
Each sample is represented with the finite word size of the computer, which determines how many intensity levels can be used to represent the intensity value at each sample point, e.g. 2^8 = 256 or 2^5 = 32 levels.
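A sketch of re-quantizing an 8-bit sample down to fewer levels (the level counts above are the slide's examples; the function itself is illustrative):

```c
#include <stdint.h>

/* Quantize an 8-bit sample (2^8 = 256 levels) down to 'levels' intensity
 * levels, e.g. levels = 32 (2^5). Each sample is mapped to the lower
 * edge of its quantization bin. */
uint8_t quantize(uint8_t sample, int levels) {
    int step = 256 / levels;            /* width of one quantization bin */
    return (uint8_t)((sample / step) * step);
}
```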

Color models
• Color models for images: RGB, CMY
• Color models for video: YIQ, YUV (YCbCr)
• Relationship between color models: linear transformations, as sketched below.
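For instance, a sketch of the common BT.601 full-range RGB to Y'CbCr conversion (video standards differ in scaling and offsets, so treat the exact coefficients as one variant):

```c
#include <stdint.h>

/* BT.601 full-range RGB -> Y'CbCr (one common variant). */
void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                  uint8_t *y, uint8_t *cb, uint8_t *cr) {
    *y  = (uint8_t)( 0.299 * r + 0.587 * g + 0.114 * b);
    *cb = (uint8_t)(-0.169 * r - 0.331 * g + 0.500 * b + 128.0);
    *cr = (uint8_t)( 0.500 * r - 0.419 * g - 0.081 * b + 128.0);
}
```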

Digital Cameras

Digital Cameras
Technology:
• CCD (charge-coupled device)
• CMOS (complementary metal oxide semiconductor)
Resolution:
• from 60 x 80 black/white up to several megapixels in 32-bit color
However: the embedded system has to have the computing power to deal with this large amount of data!
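To see how large, a back-of-the-envelope calculation (assuming a 640 x 480 camera with 24-bit RGB at 30 frames per second; the numbers are illustrative):

```c
#include <stdio.h>

int main(void) {
    const long width = 640, height = 480;
    const long bytes_per_pixel = 3;    /* 24-bit RGB */
    const long fps = 30;
    long bytes_per_frame  = width * height * bytes_per_pixel; /* 921,600 B  */
    long bytes_per_second = bytes_per_frame * fps;            /* ~27.6 MB/s */
    printf("%ld bytes/frame, %ld bytes/s\n", bytes_per_frame, bytes_per_second);
    return 0;
}
```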

Vision (camera + framegrabber)

Digital Cameras
Performance of an embedded system: 10%-50% of a standard PC.

Interfacing Digital Cameras to the CPU
• Completely depends on the sensor chip's specs
• Many sensors provide several different interfacing protocols: versatile in hardware design, but the software gets very complicated
• Typically: 8-bit parallel (or 4-bit, 16-bit, or serial)
• Numerous control signals required

Interfacing Digital Cameras to the CPU
Digital camera sensors are very complex units; in many respects they are themselves similar to an embedded controller chip.
• Some sensors buffer camera data and allow slow reading via handshake (ideal for slow microprocessors).
• Most sensors send the full image as a stream after a start signal (the CPU must be fast enough to read it, or use a hardware buffer or DMA).
We will not go into further details in this course; however, we will consider camera access routines.

Simplified diagram of camera to CPU interface

Problem with Digital Cameras
• Every pixel from the camera causes an interrupt
• Interrupt service routines take a long time, since they need to store register contents on the stack
• Everything is slowed down
Solution
• Use a RAM buffer for the image and read the full image with a single interrupt

Idea
• Use a FIFO as the image data buffer
• A FIFO is similar to dual-ported RAM; it is required since there is no synchronization between camera and CPU
• When the FIFO is half full, an interrupt is generated
• The interrupt service routine then reads the FIFO until it is empty, assuming the delay is small enough to avoid FIFO overrun (see the sketch below)
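A minimal sketch of such a half-full interrupt handler; fifo_empty(), fifo_read() and the buffer layout are hypothetical names, since the real register map depends entirely on the sensor and glue logic used:

```c
#include <stdint.h>
#include <stdbool.h>

#define IMG_W 160
#define IMG_H 120

static uint8_t image[IMG_H * IMG_W];  /* destination frame buffer in RAM */
static volatile uint32_t write_pos;   /* next byte position in the frame */

/* Hypothetical hardware-access helpers; the real ones depend on the
 * sensor chip and the glue logic between camera and CPU. */
extern bool fifo_empty(void);
extern uint8_t fifo_read(void);

/* Called when the FIFO signals "half full": drain it completely so the
 * camera can keep streaming without overrunning the buffer. */
void fifo_half_full_isr(void) {
    while (!fifo_empty()) {
        image[write_pos] = fifo_read();
        if (++write_pos >= (uint32_t)(IMG_W * IMG_H))
            write_pos = 0;            /* frame complete: wrap for next frame */
    }
}
```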

Bayer Pattern

De-Mosaic

Conversion in Digital Cameras
Bayer pattern:
• Output format of most digital cameras
• Note: the 2x2 pattern is not spatially located in a single point!
• Can be simply converted to RGB by dropping one green byte: 160x120 Bayer → 80x60 RGB (see the sketch below)
• Can be better converted using a "demosaicing" technique: 160x120 Bayer → 160x120 RGB
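A sketch of the simple drop-one-green conversion; the GRBG block layout assumed here is only one of several Bayer arrangements (RGGB, BGGR, ...), so check the sensor's datasheet:

```c
#include <stdint.h>

/* Naive Bayer -> RGB: each 2x2 Bayer block (assumed layout  G R / B G )
 * collapses into one RGB pixel, discarding the second green sample.
 * 160x120 Bayer -> 80x60 RGB, as described above. */
void bayer_to_rgb_naive(const uint8_t bayer[120][160],
                        uint8_t rgb[60][80][3]) {
    for (int y = 0; y < 60; y++) {
        for (int x = 0; x < 80; x++) {
            rgb[y][x][0] = bayer[2 * y][2 * x + 1];  /* R                */
            rgb[y][x][1] = bayer[2 * y][2 * x];      /* G (first of two) */
            rgb[y][x][2] = bayer[2 * y + 1][2 * x];  /* B                */
        }
    }
}
```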

CMUCAM2+ Camera (www.seattlerobotics.com)
This camera can do a lot of processing:
• Track user-defined color blobs at up to 50 fps (frames per second)
• Track motion using frame differencing at 26 fps
• Find the centroid of any tracking data
• Gather mean color and variance data
• Gather a 28-bin histogram of each color channel
• Manipulate horizontally pixel-differenced images
• Arbitrary image windowing
• Adjust the camera's image properties

• Dump a raw image
• Up to 160 x 255 resolution
• Support multiple baud rates
• Control 5 servo outputs
• Slave parallel image-processing mode off of a single camera bus
• Automatically use servos to do two-axis color tracking
• B/W analog video output (PAL or NTSC)
• Flexible output packet customization
• Multiple-pass image processing on a buffered image

Vision Guided Robotics and Applications in Industry and Medicine

Contents
• Robotics in General
• Industrial Robotics
• Medical Robotics
• What can Computer Vision do for Robotics?
• Vision Sensors
• Issues / Problems
• Visual Servoing
• Application Examples
• Summary

Industrial Robot vs Human
Robot advantages: strength, accuracy, speed, does not tire, does repetitive tasks, can measure.
Human advantages: intelligence, flexibility, adaptability, skill, can learn, can estimate.
The robot needs vision.

Industrial Robot Requirements: accuracy, tool quality, robustness, strength, speed, price, production cost, maintenance, production quality.

Medical (Surgical) Robot Requirements: safety, accuracy, reliability, tool quality, price, maintenance, man-machine interface.

What can Computer Vision do for (industrial and medical) Robotics?
• Accurate robot-object positioning
• Keeping relative position under movement
• Visualization / teaching / telerobotics
• Performing measurements
• Object recognition
• Registration
• Visual servoing

Vision Sensors
• Single perspective camera
• Multiple perspective cameras (e.g. stereo camera pair)
• Laser scanner
• Omnidirectional camera
• Structured light sensor

Vision Sensors Single Perspective Camera Single projection

Vision Sensors Multiple Perspective Cameras (e.g. Stereo Camera Pair)

Vision Sensors Laser Scanner

Vision Sensors Omnidirectional Camera

Vision Sensors Structured Light Sensor (Figures from PRIP, TU Vienna)

Issues/Problems of Vision Guided Robotics
• Measurement frequency
• Measurement uncertainty
• Occlusion, camera positioning
• Sensor dimensions

Visual Servoing The vision system operates in a closed control loop. Better accuracy than "look and move" systems. Figures from S. Hutchinson: A Tutorial on Visual Servo Control

Visual Servoing Example: Maintaining relative Object Position Figures from P. Wunsch and G. Hirzinger. Real-Time Visual Tracking of 3-D Objects with Dynamic Handling of Occlusion

Camera Configurations for Visual Servoing: end-effector mounted vs. fixed. Figures from S. Hutchinson: A Tutorial on Visual Servo Control

Visual Servoing Architectures Figures from S.Hutchinson: A Tutorial on Visual Servo Control

Position-Based vs Image-Based Control in Visual Servoing
Position based:
• Alignment in the target coordinate system
• The 3-D structure of the target is reconstructed
• The end-effector is tracked
• Sensitive to calibration errors
• Sensitive to reconstruction errors
Image based:
• Alignment in image coordinates
• No explicit reconstruction necessary
• Insensitive to calibration errors
• Only special problems solvable
• Depends on initial pose
• Depends on selected features

EOL and ECL Control in Visual Servoing
• EOL: endpoint open-loop; only the target is observed by the camera
• ECL: endpoint closed-loop; the target as well as the end-effector are observed by the camera

Visual Servoing: Position-Based Algorithm
Example: point alignment.
1. Estimation of relative pose
2. Computation of error between current pose and target pose
3. Movement of robot

Visual Servoing: Position-Based Point Alignment
Goal: bring e to 0 by moving p1, where
e = |p2m - p1m|, u = k(p2m - p1m)
The measured positions pxm are subject to the following measurement errors: sensor position, sensor calibration, sensor measurement error. They are independent of the following errors: end-effector position, target position. (A sketch of the control law follows below.)
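A minimal sketch of this proportional control law, assuming 3-D point measurements; the gain value is an illustrative choice, not from the slides:

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Alignment error e = |p2m - p1m|; the controller drives this toward 0. */
double alignment_error(Vec3 p1m, Vec3 p2m) {
    double dx = p2m.x - p1m.x, dy = p2m.y - p1m.y, dz = p2m.z - p1m.z;
    return sqrt(dx * dx + dy * dy + dz * dz);
}

/* One control step u = k * (p2m - p1m): a velocity command that moves
 * the end-effector point p1 toward the target point p2. */
Vec3 position_based_step(Vec3 p1m, Vec3 p2m, double k /* e.g. 0.1 */) {
    Vec3 u = { k * (p2m.x - p1m.x),
               k * (p2m.y - p1m.y),
               k * (p2m.z - p1m.z) };
    return u;
}
```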

Visual Servoing: Image-Based Point Alignment
Goal: bring e to 0 by moving p1, where
e = |u1m - v1m| + |u2m - v2m|
(u and v denote the images of p1 and p2 in the two cameras c1 and c2.)
The measured image points uxm, vxm are subject only to sensor measurement error; they are independent of the following measurement errors: sensor position, end-effector position, sensor calibration, target position. (A sketch follows below.)
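A sketch of the image-based error; the reading of the symbols (uXm = measured image of p1 in camera X, vXm = measured image of p2) follows the slide's figure labels:

```c
#include <math.h>

typedef struct { double u, v; } Px;   /* a point in image coordinates */

/* Image-based alignment error e = |u1m - v1m| + |u2m - v2m|:
 * the sum of the image-plane distances in the two cameras. */
double image_based_error(Px u1m, Px v1m, Px u2m, Px v2m) {
    double e1 = hypot(u1m.u - v1m.u, u1m.v - v1m.v);  /* camera 1 */
    double e2 = hypot(u2m.u - v2m.u, u2m.v - v2m.v);  /* camera 2 */
    return e1 + e2;   /* drive to 0 by moving p1 */
}
```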

Visual Servoing Example: Laparoscopy. Figures from A. Krupa: Autonomous 3-D Positioning of Surgical Instruments in Robotized Laparoscopic Surgery Using Visual Servoing

Registration of CAD models to scene features. Figures from P. Wunsch: Registration of CAD-Models to Images by Iterative Inverse Perspective Matching

Summary on Tracking and Servoing
• Computer vision provides accurate and versatile measurements for robotic manipulators
• With current general-purpose hardware, depth and pose measurements can be performed in real time
• In industrial robotics, vision systems are deployed in a fully automated way
• In medicine, computer vision can make more intelligent "surgical assistants" possible

Omnidirectional Vision Systems: CABOTO
• Robot's task: building a topological map of an unknown environment
• Sensor: omnidirectional vision system
• Work's aim: prove the effectiveness of omnidirectional sensors for the Spatial Semantic Hierarchy (SSH)

Spatial Semantic Hierarchy
A model of the human knowledge of large spaces. Layers:
• Sensory level: interface with the robot's sensory system
• Control level: control laws, transitions of state, distinctiveness measures
• Causal level: views, actions, distinct places; abstracts the discrete from the continuous
• Topological level: minimal set of places, paths and regions
• Metrical level: distance, direction, shape; useful, but seldom essential

Tracking Instrument tracking in laparoscopy Figures from Wei: A Real-time Visual Servoing System for Laparoscopic Surgery

Omnidirectional Camera Composed of: a standard color camera, a convex mirror, and a Perspex cylinder.

Pros and Cons
Advantages: wide field of vision, high speed, vertical lines, rotational invariance.
Disadvantages: low resolution, distortions, low readability.

Omnidirectional Vision and SSH
A view is an omnidirectional image. When exploring around the block, the robot should discriminate between "turns" and "travels"; we need an effective distinctiveness measure.

Assumptions for the vision system:
• Man-made environment
• Flat, horizontal floor
• Wall and object surfaces are vertical
• Static objects
• Constant lighting
• The robot translates or rotates
• No encoders

Features and Events
Feature: vertical edges.
Events: a new edge appears; an edge disappears; two edges are 180° apart; two pairs of edges are 180° apart. (A detection sketch follows below.)
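A minimal sketch of detecting the "two edges 180° apart" event from edge bearings in the omnidirectional image; the tolerance parameter is an illustrative assumption:

```c
#include <math.h>
#include <stdbool.h>

/* True if two vertical edges, seen at bearings a and b (radians) in the
 * omnidirectional image, lie approximately 180 degrees apart. */
bool edges_opposite(double a, double b, double eps) {
    double d = fmod(fabs(a - b), 2.0 * M_PI);   /* wrapped difference */
    return fabs(d - M_PI) < eps;
}

/* Scan all edge bearings for one "two edges 180 degrees apart" event. */
bool has_opposite_pair(const double *bearing, int n, double eps) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (edges_opposite(bearing[i], bearing[j], eps))
                return true;
    return false;
}
```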

Experiments
Tasks of the Caboto robot: navigation; map building.
Techniques: edge detection; colour marking.

Caboto’s Images

Results: correct tracking of edges; recognition of actions; calculation of the turn angle; path segmentation.

Mirror Design
The mirror shape should depend on the robot's task! Design a custom mirror profile that maximises resolution in the regions of interest (ROIs).

The new mirror

Conclusion on the Omnivision Camera
• The omnidirectional vision sensor is a good sensor for map building with the SSH
• The motion of the robot was estimated without active vision
• The use of a mirror designed for this application will improve the system

Omnidirectional Cameras
• Compound-eye camera (from Univ. of Maryland, College Park)
• Panoramic cameras (from Apple)
• Omnidirectional cameras (from University of Picardie, France)

Student info
• A percentage of lab marks can be deducted if rules and regulations are not followed, e.g. by not cleaning up your bench or not sliding your chairs back underneath the bench top.
• For more technical information on boards, devices and sensors, check out my web page at www.site.uottawa.ca/~alan
• Students are responsible for their own extra parts, e.g. if you want to add a sensor or device that the dept. doesn't have, you are responsible for the purchase and delivery of that part; only on rare occasions has the school purchased those parts.
• Backpacks off bench tops.
• TAs will have student numbers based on station numbers.
• An important issue regarding the design of a new project is to do a current analysis before the start of your design.
• Set up a leader among your team so that you are better organized.
• Do not wait; start your project now! Prepare yourself before coming to the lab.
• "It doesn't work!" Ask yourself: is it software or hardware? Use the scope to troubleshoot.
• Fuses keep on blowing? Stop and do some investigation.
• Do not cut any servo, battery or other device wire connectors. If you must, please come and see me.
• No design may exceed 50 volts, e.g. do not work with 120 volts AC.
• I can give you what I have regarding recycled metal, wood and plastic pieces, and make some cuts or holes with my band saw and drill press for you, but PLEASE DO NOT ask to borrow my tools. If you need a task done with a special tool that I have, then I shall do it for you.

Problems for students
1. Hardware and software components of a vision system for a mobile robot.
2. Image representation for intelligent processing.
3. Sampling, pixels and quantization.
4. Color models.
5. Types of digital cameras.
6. Interfacing digital cameras to a CPU.
7. Problems with cameras.
8. Bayer patterns and conversion.
9. What is good about the CMUCAM?
10. Use of vision in industrial robots.
11. Use of multiple-perspective cameras.
12. Use of omnivision cameras.
13. Types of visual servoing.
14. Applications of visual servoing.
15. Visual servoing in surgery.
16. Explain tracking applications of vision.

References
Dr. Gaurav Sukhatme; Thomas Braunl; Students 2002, class 479; Alan Stewart; E. Menegatti, M. Wright, E. Pagello.
Photos, text and schematics information: www.acroname.com, www.lynxmotion.com, www.drrobot.com