1
Calibration and Learning
ECE 383 / MEMS 442: Introduction to Robotics
Kris Hauser
2
Agenda
- Calibration
- Ground truth / calibration rigs
- Camera intrinsic / extrinsic calibration
- Identifiability
- Linear and nonlinear least squares
- Brief overview of robot learning
Reading: CVAA 6.3
3
General comments
Purpose: to determine an accurate mathematical model of physical quantities
- The first step to making robots work
- An unsexy process… but one that helps you crush your competitors!
4
General comments (continued)
Process:
- Establish some ground truth with trusted measurements
- Develop a parametric model relating the quantities of interest and observations to ground-truth quantities
- Gather observations
- Optimize the quantities of interest (i.e., the parameters of the model) to minimize the error between the model's predictions and the ground truth
5
General comments (continued)
Identifiability: can we possibly estimate the quantities of interest from the observations?
6
General comments (continued)
Data acquisition issues: reliability of observations, coverage of the operating regime
7
General comments (continued)
Optimization issues: dimensionality, smoothness, constraints, local minima, nuisance parameters
8
Camera Intrinsic Parameter Calibration
Determine the camera's intrinsic parameters, which define the mapping from image pixels to an idealized pinhole camera:
- Focal length
- Field of view
- Pixel dimensions
- Radial distortion
Calibration usually uses distinctive calibration patterns such as checkerboards.
Ground truth: the 90° angles of the checkerboard and the cell width.
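Below is a minimal sketch (not from the slides) of one common way to do this, using OpenCV's checkerboard calibration. The file-name pattern, board dimensions, and cell width are placeholder assumptions.

```python
# Sketch of checkerboard intrinsic calibration with OpenCV.
# Assumes images named calib_*.png of a 9x6 inner-corner checkerboard with 25 mm cells.
import glob
import numpy as np
import cv2

pattern = (9, 6)          # inner corners per row, column (placeholder)
square = 0.025            # cell width in meters -- the ground truth

# 3D corner coordinates in the board frame (z = 0 plane)
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.png"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Optimize focal lengths, principal point, and distortion to minimize reprojection error
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```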
9
Extrinsic Parameter Calibration
Extrinsic parameters: the position and orientation of the camera frame with respect to some given coordinate frame
- Origin: coordinates of the focal point
- +Z direction: viewing direction (forward in the camera's POV)
- +X direction: right in the camera's POV
- +Y direction: down in the camera's POV
The other coordinate frame could be the robot's body, the world, or another camera.
Calibration: use rigs with distinctive patterns (checkerboards), determine feature matches for camera-to-camera calibration, or measure directly.
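One possible sketch (again not from the slides) of rig-based extrinsic calibration, using OpenCV's solvePnP on known rig points. The intrinsics K and dist, the rig coordinates, and the detected pixel locations are all placeholder assumptions.

```python
# Sketch of extrinsic calibration from known 3D-2D correspondences.
import numpy as np
import cv2

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # placeholder intrinsics
dist = np.zeros(5)                                            # placeholder distortion

# Rig points in the world frame (meters) and their detected pixel locations (placeholders)
world_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float64)
pixel_pts = np.array([[320, 240], [400, 242], [398, 318], [322, 316]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(world_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                 # rotation: world frame -> camera frame
cam_origin_world = (-R.T @ tvec).ravel()   # camera focal point expressed in the world frame
print("Camera position in world frame:", cam_origin_world)
```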
10
Kinematic Parameter Calibration
11
Dynamic Parameter Calibration
12
General principles: the simple case
13
[Plot: f(x) versus x]
When might we want this?
14
Example: Torque curves
The maximum torque of a motor is not constant as a function of RPM:
- Peak torque rating
- Continuous torque rating (no overheating)
Generally, electric motor torque curves are decreasing.
15
Handling noise / overfitting
When observations are noisy, it is often better to choose a simpler curve and avoid fitting the data exactly.
There are inherent tradeoffs between fitting and overfitting, but some automated methods for deciding exist (model selection); see the sketch below.
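A sketch of one such automated recipe: k-fold cross-validation over polynomial degree. The synthetic data and candidate degrees are illustrative, not from the slides.

```python
# Pick a polynomial degree by cross-validation instead of fitting the noise exactly.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)   # noisy observations

def cv_error(degree, folds=5):
    """Mean held-out squared error for a polynomial fit of the given degree."""
    idx = np.arange(x.size)
    rng.shuffle(idx)
    errs = []
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)          # least-squares fit
        errs.append(np.mean((np.polyval(coeffs, x[fold]) - y[fold]) ** 2))
    return np.mean(errs)

for d in [1, 3, 5, 9, 15]:
    print(f"degree {d:2d}: CV error {cv_error(d):.3f}")   # high degrees overfit
```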
19
Parametric model fitting
20
Examples
21
Parametric model fitting
22
Linear Least-Squares
Model: f(x; θ) = x · θ
The value of θ that minimizes E(θ) is θ = [Σ_i x^(i) y^(i)] / [Σ_i (x^(i))²]
Derivation:
E(θ) = Σ_i (x^(i) θ − y^(i))² = Σ_i ((x^(i))² θ² − 2 x^(i) y^(i) θ + (y^(i))²)
E'(θ) = 0 => Σ_i (2 (x^(i))² θ − 2 x^(i) y^(i)) = 0 => θ = [Σ_i x^(i) y^(i)] / [Σ_i (x^(i))²]
[Plot: data points (x^(i), y^(i)) and the fitted line f(x, θ)]
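A quick numerical check of this closed form on made-up data (the numbers are illustrative only):

```python
# theta = sum(x*y) / sum(x*x) for the through-the-origin model f(x) = x * theta
import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = 3.0 * x + np.array([0.02, -0.05, 0.01, 0.04, -0.03])   # roughly y = 3x plus noise

theta = np.sum(x * y) / np.sum(x * x)
print(theta)   # close to 3.0
```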
23
Linear Least-Squares with constant offset
Model: f(x; θ_0, θ_1) = θ_0 + θ_1 x
E(θ_0, θ_1) = Σ_i (θ_0 + θ_1 x^(i) − y^(i))² = Σ_i (θ_0² + θ_1² (x^(i))² + (y^(i))² + 2 θ_0 θ_1 x^(i) − 2 θ_0 y^(i) − 2 θ_1 x^(i) y^(i))
At the minimum, dE/dθ_0(θ_0*, θ_1*) = 0 and dE/dθ_1(θ_0*, θ_1*) = 0, so:
0 = 2 Σ_i (θ_0* + θ_1* x^(i) − y^(i))
0 = 2 Σ_i x^(i) (θ_0* + θ_1* x^(i) − y^(i))
Verify the solution:
θ_0* = (1/N) Σ_i (y^(i) − θ_1* x^(i))
θ_1* = [N Σ_i x^(i) y^(i) − (Σ_i x^(i))(Σ_i y^(i))] / [N Σ_i (x^(i))² − (Σ_i x^(i))²]
[Plot: data points (x^(i), y^(i)) and the fitted line f(x, θ)]
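A sketch verifying the two-parameter closed form against numpy's polyfit (illustrative data; polyfit returns the slope first, then the intercept, for a degree-1 fit):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])     # roughly y = 1 + 2x
N = x.size

theta1 = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / (N * np.sum(x**2) - np.sum(x)**2)
theta0 = np.mean(y - theta1 * x)
print(theta0, theta1)
print(np.polyfit(x, y, 1))                  # [theta1, theta0], should match
```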
24
Multi-Dimensional Least-Squares
Let x include attributes (x_1, …, x_N)
Let θ include coefficients (θ_1, …, θ_N)
Model: f(x, θ) = x_1 θ_1 + … + x_N θ_N
[Plot: data points and the fitted model f(x, θ)]
25
Multi-Dimensional Least-Squares
f(x, θ) = x_1 θ_1 + … + x_N θ_N
The best θ is given by θ = (A^T A)^(-1) A^T b, where A is the matrix with the x^(i) as rows and b is the vector of the y^(i).
26
Multi-Dimensional Least-Squares (continued)
In θ = (A^T A)^(-1) A^T b, A^T A is an n × n matrix (note the relation to the covariance matrix) and A^T b is an n-dimensional vector.
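A numeric sketch of the matrix form on illustrative data. In practice np.linalg.lstsq is preferred over forming (A^T A)^(-1) explicitly; both routes are shown here for comparison.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.0],
              [4.0, 2.5]])          # each row is one observation x^(i)
b = np.array([5.0, 4.5, 7.0, 10.5]) # the corresponding y^(i)

theta_normal = np.linalg.inv(A.T @ A) @ (A.T @ b)    # closed form from the slide
theta_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # numerically safer equivalent
print(theta_normal, theta_lstsq)
```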
27
Nonlinear Least-Squares
- E.g., quadratic: f(x, θ) = θ_0 + x θ_1 + x² θ_2
- E.g., exponential: f(x, θ) = exp(θ_0 + x θ_1)
- Any combination: f(x, θ) = exp(θ_0 + x θ_1) + θ_2 + x θ_3
Fitting can be done using gradient descent; see the sketch below.
[Plot: linear, quadratic, and other fits of f(x) versus x]
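The slide suggests gradient descent; as one readily available alternative, this sketch fits the exponential model with SciPy's curve_fit, a gradient-based (Levenberg-Marquardt-style) least-squares solver. Data and initial guess are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, t0, t1):
    return np.exp(t0 + t1 * x)       # the exponential model f(x, theta)

x = np.linspace(0, 2, 20)
y = np.exp(0.5 + 1.2 * x) * (1 + 0.05 * np.random.default_rng(1).standard_normal(x.size))

theta, cov = curve_fit(f, x, y, p0=[0.0, 1.0])
print(theta)    # roughly [0.5, 1.2]
```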
28
Aside: Feature Transforms
A common model is a weighted sum of nonlinear functions f_1(x), …, f_N(x); the model is linear in the feature space.
- Polynomial: g(x, θ) = θ_0 + x θ_1 + … + x^d θ_d
- In general: g(x, θ) = f_1(x) θ_1 + … + f_N(x) θ_N
The least-squares fit can be solved exactly by considering a transformed dataset (x', y) with x' = (f_1(x), …, f_N(x)).
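A sketch of the feature-transform trick: map x to x' = (f_1(x), …, f_N(x)) and solve ordinary linear least squares in the transformed space. The cubic-polynomial feature set and data are illustrative.

```python
import numpy as np

x = np.linspace(-1, 1, 30)
y = 2 - x + 0.5 * x**3 + 0.05 * np.random.default_rng(2).standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x, x**2, x**3])   # features f_k(x) = x^k
theta, *_ = np.linalg.lstsq(X, y, rcond=None)           # exact linear least-squares solution
print(theta)    # roughly [2, -1, 0, 0.5]
```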
29
Dynamic System Identification (System ID)
31
Identifiable vs non-identifiable
32
More complex calibrations with nuisance parameters
33
Break
34
Quiz
Jianqiao is planning to calibrate a depth sensor's intrinsic parameters. He has a calibration rig including 5 distinctive points with known relative positions, and plans to wave it around. Do you think the FOV, depth scaling, and distortion parameters are identifiable? Why or why not? What other information might you need?
35
Robot Learning
Robot learning covers much of what we just talked about, but:
- More powerful machine learning techniques are available, e.g., probabilistic graphical models, regression trees, locally weighted regression, neural networks
- Specialized features are used, e.g., visual features, point cloud features
- Python library: scikit-learn
36
Robot Learning (continued)
Exception: often, we want to learn a controller, not just a model. A small regression sketch with scikit-learn follows.
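The slide names scikit-learn; this sketch fits one of the listed model families (a regression-tree ensemble) to synthetic state/observation data. The model choice and data are illustrative, not from the slides.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 4))    # e.g., joint angles / velocities (made up)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```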
37
Topics in Robot Learning
Goal: to generate a control policy u(x) that "performs well" from data (rather than from deliberation)
Approaches:
- Learning from Demonstration (LfD): human teacher
- Reinforcement Learning (RL): self-taught
- Hybrid approaches, e.g., apprenticeship learning
38
Learning from Demonstration
Given several observations of u = π(x) from a human teacher, the robot can just copy what it has seen before (see the sketch below).
- Were your demonstrations (and model) sufficiently general?
- Careful definition of the state is needed to remove invariants; state/time warping
- Acquiring demonstrations is time consuming
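One naive instantiation of "copy what you've seen" is nearest-neighbor replay of demonstrated actions. The demonstration arrays below are synthetic placeholders; real systems need the careful state definition and warping noted above.

```python
import numpy as np
from scipy.spatial import cKDTree

demo_states = np.random.default_rng(4).uniform(-1, 1, size=(200, 3))  # x from the teacher
demo_actions = np.tanh(demo_states @ np.array([1.0, -0.5, 0.2]))      # observed u = pi(x)

tree = cKDTree(demo_states)

def policy(x):
    """Return the action recorded at the nearest demonstrated state."""
    _, idx = tree.query(x)
    return demo_actions[idx]

print(policy(np.array([0.1, -0.2, 0.3])))
```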
39
Reinforcement Learning
A very similar problem to optimal control, but:
- Uses feedback from experience rather than deliberative planning
- Only solves for the optimal policy in the neighborhood of states that are near-optimal
- Uses a cost function to penalize / reward certain states (usually expressed as a reward function R(x))
Concept: the robot starts with a policy u = π(x) and begins executing it. It reduces the preference for high-cost actions and increases the preference for low-cost actions at a given state (see the toy sketch below).
Issues:
- How to represent policies in a high-dimensional space?
- How to use information from one state to affect nearby states?
- Policy gradient methods
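A toy illustration (not from the slides) of the "shift preferences toward rewarding actions" idea: a REINFORCE-style policy-gradient update for a softmax policy on a 3-armed bandit. The environment, step size, and baseline are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
mean_reward = np.array([0.2, 0.5, 0.9])    # hidden quality of each action (toy environment)
prefs = np.zeros(3)                        # policy parameters: action preferences
baseline, alpha = 0.0, 0.1

for t in range(2000):
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()                               # softmax policy pi(a)
    a = rng.choice(3, p=probs)
    r = mean_reward[a] + 0.1 * rng.standard_normal()   # sampled reward
    baseline += 0.01 * (r - baseline)                  # running average reward
    grad = -probs
    grad[a] += 1.0                                     # d log pi(a) / d prefs for a softmax
    prefs += alpha * (r - baseline) * grad             # REINFORCE update with baseline

print(probs)   # probability mass should concentrate on the best action (index 2)
```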
40
Apprenticeship Learning
Often it is hard to define a reward function by hand.
- The robot's policy is bootstrapped by a human teacher
- The robot's reward function is bootstrapped by a human teacher (inverse reinforcement learning)
- The robot then improves its policy according to the learned reward via reinforcement learning
41
Almost done…