Autonomous Mobile Robots CPE 470/670 Lecture 6 Instructor: Monica Nicolescu

CPE 470/670 - Lecture 7, Slide 2: Review – Sensors
–Photocells
–Reflective optosensors
–Infrared sensors
–Break-beam sensors
–Shaft encoders
–Sonar sensors

CPE 470/670 - Lecture 7, Slide 3: Laser Sensing
High-accuracy sensor
Lasers use light time-of-flight
Light is emitted in a narrow beam (3 mm) rather than a cone, which provides higher resolution
SICK LMS200
–360 readings over a 180-degree arc, at 10 Hz
Disadvantages:
–cost, weight, power
–mostly 2D

Global Positioning System (GPS)
CPE 470/670 - Lecture 7, Slide 4
General information
–At least 24 satellites orbit the Earth every 12 hours at a height of km
–4 satellites are located in each of 6 orbital planes, oriented 60 degrees from each other
How does it work?
–Each satellite transmits its location and the current time
–GPS receivers use the arrival times and the satellites' locations to infer their own current location
–GPS receivers need readings from at least 4 satellites

Global Positioning System (GPS)
Challenges
–Time synchronization between the satellites and the GPS receiver
–Real-time updates of the satellites' locations
–Precise measurement of time-of-flight
CPE 470/670 - Lecture 7, Slide 5
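Setting aside the clock-bias challenge listed above, the position inference itself can be sketched as a least-squares problem: given each satellite's location and the range implied by time-of-flight, solve for the receiver position. The satellite coordinates below are made up for illustration.

```python
import numpy as np

# Illustrative satellite positions (km) and a receiver on the Earth's surface.
# Real GPS must also solve for the receiver clock bias; it is omitted here.
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0,   610.0, 18390.0]])
true_pos = np.array([0.0, 0.0, 6370.0])

# Ranges the receiver would infer from signal time-of-flight.
ranges = np.linalg.norm(sats - true_pos, axis=1)

# Gauss-Newton: repeatedly linearize the range equations |x - s_i| = r_i
# around the current estimate and solve the resulting least-squares step.
x = np.zeros(3)
for _ in range(10):
    diffs = x - sats
    est = np.linalg.norm(diffs, axis=1)
    jac = diffs / est[:, None]                # d|x - s_i|/dx
    step, *_ = np.linalg.lstsq(jac, ranges - est, rcond=None)
    x = x + step
```

With four (or more) satellites in view, the iteration converges to the receiver position; this is why a GPS receiver needs at least four readings once the clock bias is included as an extra unknown.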

CPE 470/670 - Lecture 7, Slide 6: Visual Sensing
Cameras try to model biological eyes
Machine vision is a highly difficult research area
–Reconstruction
–What is that? Who is that? Where is that?
Robotics requires answers related to achieving goals
–It is not usually necessary to reconstruct the entire world
Applications
–Security, robotics (mapping, navigation)

CPE 470/670 - Lecture 7, Slide 7: Principles of Cameras
Cameras have many similarities with the human eye
–Light goes through an opening (iris – lens) and hits the image plane (retina)
–The retina is covered with light-sensitive elements (rods and cones – silicon circuits)
–Only objects at a particular range are in focus (fovea) – depth of field
–512x512 pixels (camera) vs. 120x10^6 rods and 6x10^6 cones (eye)
–Brightness is proportional to the amount of light reflected from the objects
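The camera geometry implied above can be captured by the standard pinhole model: a scene point (X, Y, Z) in camera coordinates, with Z the distance along the optical axis, lands at (f·X/Z, f·Y/Z) on the image plane. The focal length below is an illustrative value, not from any particular camera.

```python
# Pinhole-camera projection sketch. f is an illustrative focal length (m).
def project(point, f=0.035):
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

near = project((1.0, 1.0, 2.0))  # the same point twice as close...
far = project((1.0, 1.0, 4.0))   # ...projects twice as far from the center
```

This inverse dependence on Z is why nearby objects appear larger, and it is the geometric basis for the stereo-vision depth computation later in the lecture.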

CPE 470/670 - Lecture 7, Slide 8: Image Brightness
Brightness depends on
–the reflectance of the surface patch
–the position and distribution of the light sources in the environment
–the amount of light reflected from other objects in the scene onto the surface patch
Two types of reflection
–Specular (smooth surfaces)
–Diffuse (rough surfaces)
Accounting for these properties is necessary for correct object reconstruction → complex computation
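The diffuse case above is commonly modeled as Lambertian reflection: brightness is proportional to the surface reflectance (albedo) times the cosine of the angle between the surface normal and the light direction. The albedo and vectors below are illustrative.

```python
# Diffuse (Lambertian) reflection sketch. normal and light_dir are unit vectors.
def diffuse_brightness(albedo, normal, light_dir):
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, cos_angle)  # no light from behind the surface

head_on = diffuse_brightness(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # brightest
grazing = diffuse_brightness(0.8, (0.0, 0.0, 1.0), (1.0, 0.0, 0.0))  # no light
```

Specular reflection additionally depends on the viewing direction, which is one reason recovering surface properties from brightness alone is computationally hard.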

CPE 470/670 - Lecture 7, Slide 9: Early Vision
The retina is attached to numerous rods and cones which, in turn, are attached to nerve cells (neurons)
The nerves process the information; they perform "early vision" and pass the results on through the brain for "higher-level" vision processing
The typical first step ("early vision") is edge detection, i.e., finding all the edges in the image
Suppose we have a b&w camera with a 512 x 512 pixel image
Each pixel has an intensity level between white and black
How do we find an object in the image? Do we even know whether there is one?

CPE 470/670 - Lecture 7, Slide 10: Edge Detection
Edge = a curve in the image across which there is a change in brightness
Finding edges
–Differentiate the image and look for areas where the magnitude of the derivative is large
Difficulties
–Edges are not the only sources of brightness changes: shadows and noise produce them too
Smoothing
–Filter the image using convolution
–Use filters of various orientations
Segmentation: extract objects from the lines
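The differentiate-by-convolution step above can be sketched with a Sobel kernel, a standard discrete approximation of the horizontal image derivative. The 5x5 image here is a toy example with a single vertical brightness edge.

```python
import numpy as np

# Toy grayscale image: dark left half, bright right half (one vertical edge).
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Sobel kernel approximating the horizontal derivative (responds to vertical edges).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

grad = convolve2d(img, sobel_x)
# |grad| is zero in the flat region and large only near the brightness edge.
```

In practice the image is smoothed first (e.g., with a Gaussian filter) so that noise does not also produce large derivative magnitudes.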

CPE 470/670 - Lecture 7, Slide 11: Model-Based Vision
Compare the current image with images of similar objects (models) stored in memory
Models provide prior information about the objects
Storing models
–Line drawings
–Several views of the same object
–Repeatable features (two eyes, a nose, a mouth)
Difficulties
–Translation, orientation and scale
–The object in the image is not known in advance
–Occlusion
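A minimal sketch of the compare-against-a-stored-model idea is template matching: slide the model over the image and score each placement by sum of squared differences (SSD). The arrays are toy data; a real system would also have to handle the rotation, scale, and occlusion difficulties listed above.

```python
import numpy as np

def find_model(image, model):
    """Return the (row, col) placement where the stored model best matches."""
    mh, mw = model.shape
    best_score, best_pos = float("inf"), None
    for i in range(image.shape[0] - mh + 1):
        for j in range(image.shape[1] - mw + 1):
            score = np.sum((image[i:i + mh, j:j + mw] - model) ** 2)
            if score < best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

model = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
image = np.zeros((5, 5))
image[1:3, 2:4] = model  # embed the model at row 1, column 2
```

Here `find_model(image, model)` recovers the embedded location; with several stored views per object, the same scoring loop runs once per view.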

CPE 470/670 - Lecture 7, Slide 12: Vision from Motion
Take advantage of motion to facilitate vision
A static system can detect moving objects
–Subtract two consecutive images from each other → what remains is the movement between frames
A moving system can detect static objects
–At consecutive time steps, continuous objects move as one
–The exact movement of the camera must be known
Robots are typically moving themselves
–Need to account for the robot's own movement
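The image-subtraction case for a static camera can be sketched directly: in the toy frames below, a bright "object" shifts one pixel to the right between two consecutive frames, and the pixels that changed mark the motion.

```python
import numpy as np

# Two toy consecutive frames from a static camera.
frame1 = np.zeros((4, 4))
frame1[1:3, 0:2] = 1.0   # object occupies columns 0-1
frame2 = np.zeros((4, 4))
frame2[1:3, 1:3] = 1.0   # object has moved to columns 1-2

# Pixels whose brightness changed between frames reveal the movement.
motion_mask = np.abs(frame2 - frame1) > 0.5
```

Note that the overlapping part of the object produces no difference, which is why frame differencing finds the moving edges rather than the whole object; for a moving camera, the camera's own motion must first be subtracted out.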

CPE 470/670 - Lecture 7, Slide 13: Stereo Vision
3D information can be computed from two images
Compute the relative positions of the cameras
Compute the disparity
–the displacement of a point in 2D between the two images
Disparity is inversely proportional to the actual distance in 3D
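For an idealized rig with two parallel cameras, the inverse relationship above has a closed form: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. The numbers below are illustrative, not from any real rig.

```python
# Depth from stereo disparity for an idealized parallel-camera rig: Z = f*B/d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

z_near = depth_from_disparity(700.0, 0.12, 42.0)  # large disparity -> close point
z_far = depth_from_disparity(700.0, 0.12, 7.0)    # small disparity -> distant point
```

The hard part in practice is the correspondence problem: finding which pixel in one image matches a given pixel in the other, so that d can be measured at all.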

CPE 470/670 - Lecture 7, Slide 14: Biological Vision
Similar visual strategies are used in nature
Model-based vision is essential for object/people recognition
Vestibulo-ocular reflex
–The eyes stay fixed while the head/body moves, to stabilize the image
Stereo vision
–Typical in carnivores
Human vision is particularly good at recognizing shadows, textures, contours, and other shapes

CPE 470/670 - Lecture 7, Slide 15: Vision for Robots
If complete scene reconstruction is not needed, we can simplify the problem based on the task requirements:
–Use color
–Use a combination of color and movement
–Use small images
–Combine other sensors with vision
–Use knowledge about the environment

CPE 470/670 - Lecture 7, Slide 16: Examples of Vision-Based Navigation
Running QRIO
Sony Aibo – obstacle avoidance

CPE 470/670 - Lecture 7, Slide 17: Feedback Control
Feedback control = having a system achieve and maintain a desired state by continuously comparing its current and desired states, then adjusting the current state to minimize the difference
Also called closed-loop control
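The compare-then-adjust cycle in the definition above can be sketched in a few lines; the gain and step count below are illustrative choices, not part of the definition.

```python
# Minimal closed-loop sketch: each step, measure the error between the current
# state and the desired state (setpoint), then adjust to reduce it.
def closed_loop(state, setpoint, gain=0.5, steps=20):
    for _ in range(steps):
        error = setpoint - state      # compare current vs. desired state
        state += gain * error         # adjust to shrink the difference
    return state
```

With this gain the remaining error halves every step, so the state converges to the setpoint; too large a gain would instead make the system overshoot and oscillate, which is exactly what the HandyBug example later in the lecture exhibits.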

CPE 470/670 - Lecture 7, Slide 18: Goal State
Goal-driven behavior is used in both control theory and AI
Goals in AI
–Achievement goals: states that the system is trying to reach
–Maintenance goals: states that need to be maintained
Control theory is mostly focused on maintenance goals
Goal states can be:
–Internal: monitor the battery power level
–External: get to a particular location in the environment
–Combinations of both: balance a pole

CPE 470/670 - Lecture 7, Slide 19: Error
Error = the difference between the current state and the desired state of the system
The controller has to minimize the error at all times
Zero/non-zero error:
–Tells only whether there is an error or not
–The least information we could have
Magnitude of error:
–The distance to the goal state
Direction of error:
–Which way to go to minimize the error
Control is much easier if we know both magnitude and direction

CPE 470/670 - Lecture 7, Slide 20: A Robotic Example
Use feedback to design a wall-following robot
What sensors to use, and what information will they provide?
–Contact: the least information
–IR: indicates a possible wall, but not its distance
–Sonar, laser: would provide distance
–Bend sensor: would provide distance
Control:
If distance-to-wall is right, keep going
If distance-to-wall is larger than desired, turn toward the wall; else turn away from the wall
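The three-way control rule above maps directly onto code. The desired distance and tolerance band below are made-up values; a real robot would tune them to its sensor.

```python
# Sketch of the wall-following rule. DESIRED and TOLERANCE are illustrative.
DESIRED = 0.5     # desired distance to the wall (m)
TOLERANCE = 0.05  # band counted as the "right" distance (m)

def wall_follow_action(distance_to_wall):
    if abs(distance_to_wall - DESIRED) <= TOLERANCE:
        return "forward"                 # right distance: keep going
    if distance_to_wall > DESIRED:
        return "turn toward wall"        # too far from the wall
    return "turn away from wall"         # too close to the wall
```

Because the rule only uses the sign of the error (too close vs. too far), not its magnitude, the robot switches abruptly between turning directions; this is the source of the oscillation shown on the next slide.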

CPE 470/670 - Lecture 7, Slide 21: Simple Feedback Control
HandyBug oscillates around the setpoint (goal value)
–Never goes straight
–Sharp turns

void left() {
    motor(RIGHT_MOTOR, 100);  /* drive right wheel only */
    motor(LEFT_MOTOR, 0);
}

void right() {
    motor(LEFT_MOTOR, 100);   /* drive left wheel only */
    motor(RIGHT_MOTOR, 0);
}

CPE 470/670 - Lecture 7, Slide 22: Readings
M. Matarić: Chapter 10