Inverting the Jacobian and Manipulability


Inverting the Jacobian and Manipulability Howie Choset (with notes copied from Rob Platt)

The Inverse Problem Quick review of linear algebra. Forward kinematics is in general many-to-one (if there are more joint angles than workspace DOF) and nonlinear. Inverse kinematics is in general one-to-many (we have a choice of solutions) and nonlinear. Forward differential kinematics is linear, and many-to-one or not depending on the number of joint DOF versus workspace DOF. The inverse differential problem is "easier" because it is linear, but the same many-to-one issue comes up.

The Inverse Problem Quick review of linear algebra

Cartesian control Let's control the position (not orientation) of the three-link arm's end effector. We can use the same strategy that we used before:

Cartesian control (block diagram: joint controller with a joint position sensor in the loop). However, inverting the Jacobian only works if the Jacobian is square and full rank: all rows/columns are linearly independent, or equivalently the columns span Cartesian space, or equivalently the determinant is not zero.
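A minimal numpy sketch of this square, full-rank case; jacobian(q) is a hypothetical helper standing in for the arm model, not something defined on the slides.

import numpy as np

def cartesian_velocity_control(q, x_dot_desired):
    # Only valid when the Jacobian is square and full rank.
    J = jacobian(q)                                  # assumed helper returning the 3x3 position Jacobian
    if np.isclose(np.linalg.det(J), 0.0):
        raise ValueError("Jacobian is singular; plain inversion is not possible")
    return np.linalg.solve(J, x_dot_desired)         # q_dot = J^-1 * x_dot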

Cartesian control What if you want to control the two-dimensional position of a three-link manipulator? Two equations in three variables each… This is an under-constrained system of equations: there are multiple solutions, i.e., multiple joint-angle velocities that realize the same end-effector velocity.

Generalized inverse If the Jacobian is not a square matrix (or is not full rank), then the inverse doesn't exist… what next? We have ẋ = J q̇. We are looking for a matrix J⁺ such that q̇ = J⁺ ẋ.

Generalized inverse Two cases: underconstrained manipulator (redundant) and overconstrained. Generalized inverse: for the underconstrained manipulator, given ẋ, find any vector q̇ such that ẋ = J q̇; for the overconstrained manipulator, given ẋ, find any vector q̇ such that ||ẋ − J q̇|| is minimized.

Jacobian Pseudoinverse: Redundant manipulator Pseudoinverse definition (underconstrained): given a desired twist ẋ, find a vector of joint velocities q̇ that satisfies ẋ = J q̇ while minimizing ||q̇||². Minimize joint velocities: minimize ½ q̇ᵀq̇ subject to ẋ = J q̇. Use the Lagrange multiplier method: at a constrained minimum, q̇ = Jᵀλ for some multiplier λ. This condition must be met when ||q̇||² is at a minimum subject to ẋ = J q̇.

Jacobian Pseudoinverse: Redundant manipulator Minimize ½ q̇ᵀq̇ subject to ẋ = J q̇.

Jacobian Pseudoinverse: Redundant manipulator I won't say why, but if J is full (row) rank, then J Jᵀ is invertible. So, the pseudoinverse q̇ = J⁺ ẋ = Jᵀ (J Jᵀ)⁻¹ ẋ calculates the vector of joint velocities that satisfies ẋ = J q̇ while minimizing the squared magnitude of joint velocity (||q̇||²). Therefore, the pseudoinverse calculates the least-squares (minimum-norm) solution.
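A sketch of the Lagrange-multiplier derivation behind these slides (the standard argument, reconstructed rather than copied from the original equations):

\[
\begin{aligned}
&\min_{\dot q}\ \tfrac{1}{2}\,\dot q^{\top}\dot q \quad \text{s.t.}\quad \dot x = J\dot q,
\qquad
L(\dot q,\lambda) = \tfrac{1}{2}\,\dot q^{\top}\dot q + \lambda^{\top}(\dot x - J\dot q),\\
&\frac{\partial L}{\partial \dot q} = \dot q - J^{\top}\lambda = 0
\;\Rightarrow\; \dot q = J^{\top}\lambda,
\qquad
\dot x = J J^{\top}\lambda
\;\Rightarrow\; \lambda = (J J^{\top})^{-1}\dot x,\\
&\dot q = J^{\top}(J J^{\top})^{-1}\dot x \equiv J^{+}\dot x .
\end{aligned}
\]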

Calculating the pseudoinverse The pseudoinverse can be calculated using two different equations depending upon the number of rows and columns. Underconstrained case (more columns than rows, m < n): J⁺ = Jᵀ (J Jᵀ)⁻¹. Overconstrained case (more rows than columns, n < m): J⁺ = (Jᵀ J)⁻¹ Jᵀ. Equal number of rows and columns (n = m): J⁺ = J⁻¹. These equations can only be used if the Jacobian is full rank; otherwise, use singular value decomposition (SVD).
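A minimal numpy sketch of these three cases, assuming J is the m×n Jacobian and is full rank:

import numpy as np

def pseudoinverse(J):
    # Assumes J is full rank; for the rank-deficient case use the SVD version on the next slide.
    m, n = J.shape
    if m < n:                                    # underconstrained: J+ = J^T (J J^T)^-1
        return J.T @ np.linalg.inv(J @ J.T)
    if m > n:                                    # overconstrained: J+ = (J^T J)^-1 J^T
        return np.linalg.inv(J.T @ J) @ J.T
    return np.linalg.inv(J)                      # square and full rank: J+ = J^-1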

Calculating the pseudoinverse using SVD Singular value decomposition decomposes a matrix as J = U Σ Vᵀ, where Σ is a diagonal matrix of singular values σ₁ ≥ σ₂ ≥ … ≥ 0. The pseudoinverse is then J⁺ = V Σ⁺ Uᵀ, where Σ⁺ inverts the nonzero singular values and leaves the zero ones at zero.
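A sketch of the SVD route, which works even when the Jacobian loses rank:

import numpy as np

def pseudoinverse_svd(J, tol=1e-9):
    U, s, Vt = np.linalg.svd(J)                              # J = U * Sigma * V^T
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])  # invert only nonzero singular values
    Sigma_plus = np.zeros((J.shape[1], J.shape[0]))
    Sigma_plus[:len(s_inv), :len(s_inv)] = np.diag(s_inv)
    return Vt.T @ Sigma_plus @ U.T                           # J+ = V * Sigma+ * U^T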

Properties of the pseudoinverse Moore-Penrose conditions: (1) A A⁺ A = A, (2) A⁺ A A⁺ = A⁺, (3) (A A⁺)ᵀ = A A⁺, (4) (A⁺ A)ᵀ = A⁺ A. Generalized inverse: satisfies condition 1. Reflexive generalized inverse: satisfies conditions 1 and 2. Pseudoinverse: satisfies all four conditions. Other useful properties of the pseudoinverse include, for example, (A⁺)⁺ = A and (Aᵀ)⁺ = (A⁺)ᵀ.

Controlling Cartesian Position (block diagram: joint controller with a joint position sensor in the loop). Procedure for controlling position: (1) calculate the position error Δx = x_ref − x; (2) multiply by a scaling factor, ẋ = k Δx; (3) multiply by the velocity Jacobian pseudoinverse, q̇ = J_v⁺ ẋ.
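A sketch of that loop in numpy; forward_kinematics(q) and jacobian(q) are hypothetical helpers standing in for the arm model used on the slides, and the 6-row Jacobian layout is an assumption.

import numpy as np

K = 1.0      # proportional scaling factor
DT = 0.01    # control period in seconds

def cartesian_position_step(q, x_ref):
    x = forward_kinematics(q)              # current end-effector position (assumed helper)
    x_dot = K * (x_ref - x)                # scaled position error used as a desired velocity
    J_v = jacobian(q)[:3, :]               # position (velocity) rows of the Jacobian (assumed helper)
    q_dot = np.linalg.pinv(J_v) @ x_dot    # pseudoinverse gives the least-squares joint velocities
    return q + q_dot * DT                  # integrate one control step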

Controlling Cartesian Orientation How does this strategy work for orientation control? Suppose you want to reach an orientation R_ref and your current orientation is R. You've calculated a difference between them. How do you turn this difference into a desired angular velocity ω to feed into q̇ = J_ω⁺ ω?

Controlling Cartesian Orientation You can't do this: convert the difference to ZYZ Euler angles, multiply the Euler angles by a scaling factor, and pretend that they are an angular velocity. Remember that in general the time derivative of the Euler angles is not the angular velocity: ω ≠ (φ̇, θ̇, ψ̇).

The Analytical Jacobian If you really want to work with the derivative of the Euler angles instead of the angular velocity, you have to convert the angular velocity Jacobian to the "analytical" Jacobian: J_a(q) = T(φ)⁻¹ J_ω(q), where ω = T(φ) φ̇ for ZYZ Euler angles φ. Gimbal lock: by using an analytical Jacobian instead of the angular velocity Jacobian, you introduce the gimbal lock problems we talked about earlier into the Jacobian; this essentially adds "singularities" (we'll talk more about that in a bit…)

Controlling Cartesian Orientation The easiest way to handle this Cartesian orientation problem is to represent the error in axis-angle format (an axis-angle "delta rotation"). Procedure for controlling rotation: (1) represent the rotation error ΔR = R_ref Rᵀ in axis-angle format (ω̂, θ); (2) multiply by a scaling factor, ω = k θ ω̂; (3) multiply by the angular velocity Jacobian pseudoinverse, q̇ = J_ω⁺ ω.
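A sketch of extracting that axis-angle error from rotation matrices; the delta-rotation convention ΔR = R_ref Rᵀ is an assumption, and this is a reconstruction rather than the slides' own code.

import numpy as np

def axis_angle_error(R_ref, R):
    dR = R_ref @ R.T                                     # delta rotation taking R to R_ref
    cos_theta = (np.trace(dR) - 1.0) / 2.0
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)                               # no rotation error
    axis = np.array([dR[2, 1] - dR[1, 2],
                     dR[0, 2] - dR[2, 0],
                     dR[1, 0] - dR[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis                                  # rotation-vector form of the error (not robust near theta = pi)

# Usage sketch: omega = k * axis_angle_error(R_ref, R); q_dot = np.linalg.pinv(J_w) @ omega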

Controlling Cartesian Orientation Why does axis-angle work? Remember Rodrigues' formula from before: R = I + sin θ [ω̂]× + (1 − cos θ) [ω̂]×², the rotation by angle θ about axis ω̂. Compare this to the definition of angular velocity, Ṙ = [ω]× R. The solution to this first-order differential equation is R(t) = e^([ω]× t) R(0). Therefore, the angular velocity gets integrated into an axis-angle representation.

Jacobian Transpose Control The story of Cartesian control so far:

Jacobian Transpose Control Here's another approach. Start with a squared position error function (assume the poses are represented as row vectors): e(q) = ½ ||x_ref − x(q)||². Position error: Δx = x_ref − x(q). Gradient descent: take steps proportional to the error, in the direction of the negative gradient, q̇ = α Jᵀ Δx.
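A sketch of one Jacobian-transpose step, again assuming the hypothetical forward_kinematics(q) and jacobian(q) helpers from the earlier sketch:

import numpy as np

def jacobian_transpose_step(q, x_ref, alpha=0.1):
    dx = x_ref - forward_kinematics(q)       # position error (assumed helper)
    J_v = jacobian(q)[:3, :]                 # position rows of the Jacobian (assumed helper)
    return q + alpha * (J_v.T @ dx)          # step along the negative gradient of the squared error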

Jacobian Transpose Control The same approach can be used to control orientation. Orientation error: the axis-angle representation of the orientation of the reference pose, expressed in the current end-effector reference frame.

Jacobian Transpose Control So, evidently, this is the gradient of the squared error function that Jacobian transpose control descends. Gradient descent always follows the steepest gradient.

Jacobian Transpose v Pseudoinverse What gives? Which is more direct, the Jacobian pseudoinverse or the transpose? They do different things. Transpose: move toward a reference pose as quickly as possible; a one-dimensional goal (a squared distance metric). Pseudoinverse: move along a least-squares reference twist trajectory; a six-dimensional goal (or whatever the dimension of the relevant twist is).

Jacobian Transpose v Pseudoinverse The pseudoinverse moves the end effector in a straight-line path toward the goal pose using the least-squares joint velocities: the goal is specified in terms of the reference twist, and the manipulator follows a straight-line path in Cartesian space. The transpose moves the end effector toward the goal position: in general, not a straight-line path in Cartesian space; instead, the transpose follows the gradient in joint space.

Using the Jacobian for Statics Up until now, we've used the Jacobian in the twist equation, ẋ = J q̇. Interestingly, you can also use the Jacobian in a statics equation: τ = Jᵀ F, where τ is the vector of joint torques and F is the Cartesian wrench (force and moment/torque).
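A one-line numpy sketch of that statics relationship; F is assumed to be a wrench vector (force stacked on moment) expressed consistently with the Jacobian.

import numpy as np

def joint_torques_for_wrench(J, F):
    # Static equilibrium: joint torques that balance a Cartesian wrench at the end effector.
    return J.T @ F                      # tau = J^T F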

When J is not invertible: Case 1 (underconstrained, more columns than rows): the pseudoinverse solution q̇ = Jᵀ (J Jᵀ)⁻¹ ẋ minimizes ||q̇||.

When J is not invertible: Case 2 (overconstrained, more rows than columns): the pseudoinverse solution minimizes the error ||ẋ − J q̇||.

Singularities The Jacobian has full rank for most configurations; configurations where it loses rank are singularities.

Singularities

Manipulability A measure derived from the Jacobian. Configuration dependent. Roughly, a measure of how far the arm is from a singularity. Requires norms: scalar, vector, and matrix norms.

Manipulability Ellipsoid Can we characterize how close we are to a singularity? Yes: imagine the possible instantaneous motions are described by an ellipsoid in Cartesian space (figure: the arm can't move much along the ellipsoid's short axis but can move a lot along its long axis).

Manipulability Ellipsoid The manipulability ellipsoid is an ellipsoid in Cartesian space corresponding to the twists that unit joint velocities can generate: take a unit sphere in joint-velocity space (||q̇|| = 1) and project it through ẋ = J q̇ into Cartesian space; the image is the space of feasible Cartesian velocities.

Manipulability Ellipsoid You can calculate the directions and magnitudes of the principal axes of the ellipsoid by taking the eigenvectors and eigenvalues of J Jᵀ; the lengths of the axes are the square roots of the eigenvalues. Yoshikawa's manipulability measure: w = √det(J Jᵀ). You try to maximize this measure; it is maximized in isotropic configurations. This measures the volume of the ellipsoid.
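A numpy sketch of the ellipsoid's principal axes and Yoshikawa's measure for a given Jacobian:

import numpy as np

def manipulability(J):
    A = J @ J.T                                          # matrix defining the velocity ellipsoid
    eigvals, eigvecs = np.linalg.eigh(A)                 # principal directions are the eigenvectors
    axis_lengths = np.sqrt(np.maximum(eigvals, 0.0))     # axis lengths are square roots of the eigenvalues
    w = np.sqrt(max(np.linalg.det(A), 0.0))              # Yoshikawa's measure, proportional to ellipsoid volume
    return w, axis_lengths, eigvecs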

Manipulability Ellipsoid Another characterization of the manipulability ellipsoid: the ratio of the largest eigenvalue to the smallest eigenvalue. Let λ_max be the largest eigenvalue of J Jᵀ and let λ_min be the smallest. Then the condition number of the ellipsoid is κ = λ_max / λ_min. The closer the condition number is to one, the more isotropic the ellipsoid is.
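The corresponding sketch for the condition number (it blows up as the arm approaches a singularity):

import numpy as np

def ellipsoid_condition_number(J):
    eigvals = np.linalg.eigvalsh(J @ J.T)     # eigenvalues of the velocity ellipsoid matrix
    return eigvals.max() / eigvals.min()      # equals 1 in an isotropic configuration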

Manipulability Ellipsoid (figures: an isotropic manipulability ellipsoid versus a NOT isotropic manipulability ellipsoid)

Force Manipulability Ellipsoid You can also calculate a manipulability ellipsoid for force: a unit sphere in the space of joint torques (||τ|| = 1) maps, through τ = Jᵀ F, to the space of feasible Cartesian wrenches.

Manipulability Ellipsoid Principal axes of the force manipulability ellipsoid: the eigenvectors and eigenvalues of (J Jᵀ)⁻¹. This matrix has the same eigenvectors as J Jᵀ, but the eigenvalues of the force and velocity ellipsoids are reciprocals (1/λᵢ versus λᵢ). Therefore, the shortest principal axes of the velocity ellipsoid are the longest principal axes of the force ellipsoid and vice versa…

Velocity and force manipulability are orthogonal! (figure: force ellipsoid and velocity ellipsoid) This is known as force/velocity duality: you can apply the largest forces in the same directions where your maximum velocity is smallest, and your maximum velocity is greatest in the directions where you can only apply the smallest forces.

Manipulability Ellipsoid: Example Solve for the principal axes of the manipulability ellipsoid for the planar two-link manipulator with unit-length links at the given joint configuration. Principal axes:
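A numpy sketch of this kind of worked example; the slide's specific joint configuration is not reproduced here, so q = (0, π/2) below is an arbitrary illustrative choice.

import numpy as np

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    # Position Jacobian of a planar two-link (2R) arm with link lengths l1, l2.
    return np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])

J = planar_2link_jacobian(0.0, np.pi / 2)
eigvals, eigvecs = np.linalg.eigh(J @ J.T)
print("principal axis directions (columns):\n", eigvecs)
print("principal axis lengths:", np.sqrt(eigvals))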

Eigenvalues and Vectors Eigenvectors are the directions that a linear mapping merely scales: A x = s x, where A is a matrix and s is a scalar. Solving det(A − sI) = 0 gives the eigenvalues; substituting each eigenvalue back gives the corresponding eigenvectors. Ellipsoid analogy: the eigenvectors point along the ellipsoid's principal axes.

Singular Values Eigenvalues are only defined for square matrices (remember the determinant). For a non-square Jacobian, the square matrix J Jᵀ has eigenvalues and eigenvectors; the singular values of J are their square roots.

Singular Value Decomposition