Robotics and Perception

Robotics and Perception CMSC498F, CMSC828K Lecture 7: Robots in ROS

Representing robot state
A robot consists of many moving and static parts, e.g. base, wheels, cameras, manipulators, grippers etc. Each part can be associated with a coordinate frame. Robot state can be represented by the poses of these frames.

Our robot has many static and moving parts, and we want to control it. Let's take a simple example: the robot is moving towards some goal, and we are using proportional control. To use it, we have to know the position of the robot relative to the goal point in the real world. Another example: a robot with a gripper and a camera has found a bottle of water using the camera, and now we want to grasp it. In order to move the gripper to the bottle, we have to know where the gripper is right now. In other words, if we want to control the robot, we need to answer the question: what is the current state of the robot? This includes both its state relative to the real world and its own internal state. Since each of the robot's moving parts can be associated with a coordinate frame, we can represent the state of the robot using the relative poses between these frames.

Representing robot state
A robot consists of many moving and static parts, e.g. base, wheels, cameras, manipulators, grippers etc. Each part can be associated with a coordinate frame. Robot state can be represented by the poses of these frames. More sophisticated control schemes may require time derivatives of the poses, i.e. velocity, acceleration and jerk.

What other information about these coordinate frames might we need? Consider the same example of a robot moving towards a goal. Say we are far away from the goal point and the robot is not moving. Proportional control will tell it to immediately start moving at maximum speed. Not good! We need to know the velocity of the robot relative to the map to make sure that the robot moves smoothly.
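The notes above mention proportional control towards a goal; the following is a minimal sketch of that idea, assuming a hypothetical gain, goal distance and speed cap, and the commonly used /cmd_vel velocity topic:

#!/usr/bin/env python
# Sketch only: proportional control of forward speed towards a goal.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('p_controller_sketch')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.sleep(1.0)              # give the publisher time to connect

Kp = 0.5                      # proportional gain (assumed value)
max_speed = 0.3               # m/s, cap so a distant goal doesn't command full speed
distance_to_goal = 2.0        # m, in reality computed from the robot's state

cmd = Twist()
cmd.linear.x = min(Kp * distance_to_goal, max_speed)
pub.publish(cmd)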

tf
tf is a ROS package that keeps track of multiple coordinate frames over time. It can answer questions like:
What is the current pose of the base frame of the robot in the map frame?
What is the pose of the object in my gripper relative to my base?
Where was the head frame relative to the world frame, 5 seconds ago?

tf tree
[Diagram, built up over several slides: a tree of coordinate frames rooted at /map, with /robot/base_link as its child and /robot/camera, /robot/gripper (among others) below that. Tree edges are forward transforms; traversing an edge in the opposite direction uses the inverse transform. The transform T? between any two frames, e.g. /robot/camera and /robot/gripper, is obtained by chaining forward and inverse transforms along the path connecting them through the tree.]

tf
Individual nodes publish their transform data on the /tf topic. Transform listeners subscribe to /tf and aggregate all of the transform information of the system.
Transform listener class: tf.TransformListener()
Look up a transform between two frames at a given time: lookupTransform(target_frame, source_frame, time)
Wait until a transform between two frames becomes available: waitForTransform(target_frame, source_frame, time, timeout)
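On the publishing side, a node typically uses tf.TransformBroadcaster. A minimal sketch, assuming the /robot/base_link and /robot/camera frame names from the tree above and an arbitrary fixed pose:

#!/usr/bin/env python
# Sketch: broadcast a fixed transform at 10 Hz.
import rospy
import tf

rospy.init_node('tf_broadcaster_sketch')
br = tf.TransformBroadcaster()
rate = rospy.Rate(10.0)
while not rospy.is_shutdown():
    br.sendTransform((0.1, 0.0, 0.3),        # translation (x, y, z) in meters (assumed values)
                     (0.0, 0.0, 0.0, 1.0),   # rotation quaternion (x, y, z, w), identity here
                     rospy.Time.now(),
                     '/robot/camera',        # child frame
                     '/robot/base_link')     # parent frame
    rate.sleep()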

tf
Pose message [geometry_msgs/Pose]:
geometry_msgs/Point position
  float64 x
  float64 y
  float64 z
geometry_msgs/Quaternion orientation
  float64 x
  float64 y
  float64 z
  float64 w
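Filling in such a message from Python looks like this (arbitrary position values, identity orientation):

from geometry_msgs.msg import Pose

pose = Pose()
pose.position.x = 1.0       # example values
pose.position.y = 2.0
pose.position.z = 0.0
pose.orientation.w = 1.0    # identity rotation (x = y = z = 0 by default)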

Orientation representation: Euler angles
Euler angles represent orientation as rotations around the Z, Y and X axes:
α ∊ [-π, π) rotation around Z axis (yaw)
β ∊ [-π/2, π/2) rotation around Y axis (pitch)
γ ∊ [-π, π) rotation around X axis (roll)
Advantages: intuitive, compact representation
Disadvantages: gimbal lock
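In ROS, the conversion from Euler angles to a quaternion is available in tf.transformations; a quick example (hypothetical angle values, default roll-pitch-yaw axis order):

import tf
# quaternion_from_euler(roll, pitch, yaw) returns a quaternion as (x, y, z, w)
q = tf.transformations.quaternion_from_euler(0.0, 0.0, 1.57)  # ~90 degrees of yaw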

Orientation representation: Quaternions
Any rotation in 3D can be represented as a rotation by an angle θ around a unit vector u (the Euler axis). This axis-angle representation can be "easily" converted to a quaternion:
q = (x, y, z, w) = (u_x sin(θ/2), u_y sin(θ/2), u_z sin(θ/2), cos(θ/2))
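The same conversion is available directly in tf.transformations; a small example with an arbitrary axis and angle:

import tf
from math import pi
# quaternion_about_axis(angle, axis) returns a quaternion as (x, y, z, w)
q = tf.transformations.quaternion_about_axis(pi / 2, (0, 0, 1))  # 90 degrees around Z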

Orientation representation: Quaternions
Rotation is represented with 4 numbers: x, y, z, w. Quaternions avoid gimbal lock!
Conversion from quaternions to Euler angles in ROS:
(alpha, beta, gamma) = tf.transformations.euler_from_quaternion([x, y, z, w])

tf listener example
#!/usr/bin/env python
import rospy
import tf

def tfListener():
    # Initialize node
    rospy.init_node('tf_listener', anonymous=True)

    # Initialize transform listener
    listener = tf.TransformListener()

    # Wait up to 10 seconds for the transform to become available
    listener.waitForTransform("/odom", "/base_footprint", rospy.Time(0), rospy.Duration(10.0))

    # Get the latest available transform
    (position, orientationQ) = listener.lookupTransform("/odom", "/base_footprint", rospy.Time(0))

    # Convert orientation from quaternion to Euler angles
    orientationE = tf.transformations.euler_from_quaternion(orientationQ)
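In a control loop, the lookup is usually wrapped in a try/except, since tf raises an exception when the transform is not (yet) available. A minimal sketch, continuing the example above:

try:
    (position, orientationQ) = listener.lookupTransform("/odom", "/base_footprint", rospy.Time(0))
except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
    pass  # transform not available yet: skip this cycle and try again later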

tf command line tools
Print the transform between two frames: rosrun tf tf_echo source_frame_id target_frame_id
Print update information about transforms: rosrun tf tf_monitor (all frames) or rosrun tf tf_monitor source_frame_id target_frame_id
Generate a PDF document (frames.pdf) of the current transform tree: rosrun tf view_frames

Rviz
3D visualization tool for ROS. Can visualize:
robot state (coordinate frames, robot model)
sensor data (image, laser scan, point cloud)
environment map, robot path
Crucial tool for debugging. Controlled using a GUI.
Running rviz: rosrun rviz rviz

Turtlebot robot
An open source robotic platform for research and education. Consists of:
Kobuki base
Asus Xtion Pro RGB-D sensor
Netbook
Mounting hardware
Specialized ROS packages that implement basic functions

Show the robot. Kobuki base: wheels, odometry, batteries, external power, bumper sensors, cliff sensors. RGB-D sensor: main perception source. Netbook: the brain of the robot that connects to the hardware and runs ROS.
Online tutorials: http://wiki.ros.org/turtlebot/Tutorials/indigo

ROS networking
Running a robot requires two computers running ROS:
Robot laptop: roscore, drivers
Control laptop: debugging, high-level control
The robot laptop is controlled remotely using SSH. To set up communication between them, both must be on the same network and know each other's IP addresses. See Piazza for detailed instructions on network setup.
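In practice this is configured with ROS environment variables; a typical sketch with hypothetical addresses (the actual values depend on your network):

export ROS_MASTER_URI=http://192.168.1.2:11311   # robot laptop's IP, where roscore runs
export ROS_IP=192.168.1.3                        # this machine's own IP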

Available robots
Finn, Gunter, BMO, Jake

Turtlebot demonstration

Gazebo
Gazebo is a 3D dynamic simulator. It simulates the robot and its environment, can generate robot sensor data, and uses a physics engine to simulate the interaction of the robot and the environment. Essential tool for robot algorithm development.
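For the Turtlebot used in this course, the simulated robot is typically started with the standard Turtlebot Gazebo launch file (package and launch-file names assumed from the stock Indigo Turtlebot packages):

roslaunch turtlebot_gazebo turtlebot_world.launch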

Gazebo demonstration

The end! Questions?