
1 Lab 4
1. Get an image into a ROS node
2. Find all the orange pixels (HSV color space suggested)
3. Identify the midpoint of all the orange pixels
4. Explore the findContours and SimpleBlobDetector functions in OpenCV. Give a 1-2 sentence description of both, and then use one of them
5. Extra credit: use a video rather than an image
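A minimal sketch of steps 1-3, assuming ROS 1 with rospy and cv_bridge; the topic name /camera/image_raw and the HSV bounds used for "orange" are assumptions to adjust for your camera and lighting:

```python
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Step 1: convert the ROS Image message to an OpenCV BGR image
    bgr = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

    # Step 2: threshold "orange" in HSV (bounds are a guess; tune for your lighting)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 100, 100), (20, 255, 255))

    # Step 3: midpoint of all orange pixels = centroid of the mask
    ys, xs = np.nonzero(mask)
    if len(xs) > 0:
        cx, cy = xs.mean(), ys.mean()
        rospy.loginfo("orange midpoint: (%.1f, %.1f)", cx, cy)

rospy.init_node("orange_finder")
rospy.Subscriber("/camera/image_raw", Image, image_callback)
rospy.spin()
```

For step 4, findContours and SimpleBlobDetector could then be run on the same binary mask instead of the raw centroid computation.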

2

3 Matching: compare the local map (what the robot currently senses, e.g. an obstacle) against the global map. Where am I on the global map? Examine different possible robot positions.

4 General approach: A: action, S: pose, O: observation. The pose at time t depends on the previous pose, the action taken, and the current observation.
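The slide's graphical model isn't in the transcript; written out as the standard recursive Bayes filter (with bel for the belief over the pose, a for actions, o for observations, and η a normalizer), the idea is:

```latex
\mathrm{bel}(s_t) \;=\; \eta\, P(o_t \mid s_t) \sum_{s_{t-1}} P(s_t \mid a_{t-1}, s_{t-1})\, \mathrm{bel}(s_{t-1})
```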

5
1. Uniform prior
2. Observation: see pillar
3. Action: move right
4. Observation: see pillar

6 Localization cycle: start from an initial belief, then alternate Sense (gain information) and Move (lose information).

7 Bayes Formula
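The formula itself was an image on the slide; for reference, Bayes' rule is:

```latex
P(x \mid z) \;=\; \frac{P(z \mid x)\, P(x)}{P(z)} \;=\; \eta\, P(z \mid x)\, P(x)
```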

8 Simple Example of State Estimation

9 Example: P(z|open) = 0.6, P(z|¬open) = 0.3, P(open) = P(¬open) = 0.5
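Working this example through Bayes' rule (the slide's computation was an image; this is just the arithmetic implied by the numbers above):

```latex
P(\mathrm{open} \mid z)
= \frac{P(z \mid \mathrm{open})\,P(\mathrm{open})}
       {P(z \mid \mathrm{open})\,P(\mathrm{open}) + P(z \mid \neg\mathrm{open})\,P(\neg\mathrm{open})}
= \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5}
= \frac{0.30}{0.45} \approx 0.67
```

So the measurement raises the probability that the door is open from 0.5 to about 0.67.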

10

11 Actions. Often the world is dynamic, since
– actions carried out by the robot,
– actions carried out by other agents,
– or just the time passing by
change the world. How can we incorporate such actions?

12 Typical Actions Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty. (Can you think of an exception?)

13 Modeling Actions
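The body of this slide is missing from the transcript; the standard way to model an action u is by a conditional probability over outcomes:

```latex
P(x' \mid u, x) \;=\; \text{probability that executing action } u \text{ in state } x \text{ leads to state } x'
```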

14 Example: Closing the door

15 For u = “close door”: if the door is open, the action “close door” succeeds in 90% of all cases.
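Written as an action model (the slide's equations were images; the first two lines follow from the 90% figure, and the last two encode the natural assumption that closing an already-closed door leaves it closed):

```latex
P(\mathrm{closed} \mid u, \mathrm{open}) = 0.9 \qquad P(\mathrm{open} \mid u, \mathrm{open}) = 0.1
```
```latex
P(\mathrm{closed} \mid u, \mathrm{closed}) = 1 \qquad P(\mathrm{open} \mid u, \mathrm{closed}) = 0
```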

16 Integrating the Outcome of Actions. Applied to the status of the door, given that we just (tried to) close it?
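The general update (not shown in the transcript) sums the action model over the current belief; in the discrete case:

```latex
P(x' \mid u) \;=\; \sum_{x} P(x' \mid u, x)\, P(x)
```

The next slide instantiates this for the door.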

17 Integrating the Outcome of Actions: P(closed | u) = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)

18 Example: The Resulting Belief

19 OK… but then what’s the chance that it’s still open?

20 Example: The Resulting Belief

21 Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.

22 Example

23 (Figure: probability distribution P(s) plotted over states s)

24 Example

25 How does the probability distribution change if the robot now senses a wall?

26 Example 2: uniform prior of 0.2 in each cell.

27 Example 2: uniform prior of 0.2 per cell. Robot senses yellow. The probability of cells that match the observation should go up; the probability of cells that don't match should go down.

28 Example 2: Robot senses yellow. States that match the observation: multiply the prior (0.2) by 0.6, giving 0.12. States that don't match the observation: multiply the prior (0.2) by 0.2, giving 0.04.

29 Example 2: The probability distribution no longer sums to 1! The unnormalized values (0.12 for matching cells, 0.04 for the rest) sum to 0.36. Normalize (divide by the total): matching cells become 0.12/0.36 ≈ 0.333 and non-matching cells become 0.04/0.36 ≈ 0.111.
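A minimal sketch of this sense update, assuming a 5-cell world in which two cells are yellow (the world layout is an assumption; only the 0.6/0.2 multipliers and the resulting 0.333/0.111 values come from the slides):

```python
# Discrete Bayes (histogram) filter: measurement update.
def sense(p, world, measurement, p_hit=0.6, p_miss=0.2):
    # Multiply each cell's prior by 0.6 if it matches the observation, 0.2 otherwise,
    # then renormalize so the distribution sums to 1 again.
    q = [prob * (p_hit if color == measurement else p_miss)
         for prob, color in zip(p, world)]
    total = sum(q)  # 0.36 in the slide's example
    return [x / total for x in q]

p = [0.2] * 5                                             # uniform prior
world = ['green', 'yellow', 'yellow', 'green', 'green']   # assumed layout
print(sense(p, world, 'yellow'))   # matching cells -> 0.333..., others -> 0.111...
```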

30 Nondeterministic Robot Motion: the robot can now move Left and Right. (Current belief from the previous example: 0.111, 0.333, 0.111, …)

31 Nondeterministic Robot Motion: the robot can now move Left and Right. When executing “move x steps to the right” (or left): 0.8: move x steps, 0.1: move x-1 steps, 0.1: move x+1 steps.

32 Nondeterministic Robot Motion, “Right 2”: starting from the belief [0, 1, 0, 0, 0] and executing “move 2 steps to the right” (0.8: move x steps, 0.1: move x-1 steps, 0.1: move x+1 steps) gives [0, 0, 0.1, 0.8, 0.1].

33 Nondeterministic Robot Motion, “Right 2”: starting from the belief [0, 0.5, 0, 0.5, 0] and executing “move 2 steps to the right” in the same way (0.8 / 0.1 / 0.1, with wrap-around at the edges) gives [0.4, 0.05, 0.05, 0.4, (0.05+0.05) = 0.1].
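A minimal sketch of this motion update, assuming a cyclic (wrap-around) world, which is what reproduces the numbers on the slide above:

```python
# Discrete Bayes filter: motion update (convolution with the 0.8/0.1/0.1 motion model).
def move(p, steps):
    n = len(p)
    q = [0.0] * n
    for i in range(n):
        # Probability of landing in cell i: the robot came from exactly `steps` back (0.8),
        # overshot by one step (0.1), or undershot by one step (0.1). Indices wrap around.
        q[i] = (0.8 * p[(i - steps) % n]
                + 0.1 * p[(i - steps - 1) % n]
                + 0.1 * p[(i - steps + 1) % n])
    return q

print(move([0, 1, 0, 0, 0], 2))        # [0, 0, 0.1, 0.8, 0.1]
print(move([0, 0.5, 0, 0.5, 0], 2))    # [0.4, 0.05, 0.05, 0.4, 0.1]
```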

34 Nondeterministic Robot Motion: starting from the belief [0, 0.5, 0, 0.5, 0], what is the probability distribution after 1000 moves?

35 Example

36 Right

37 Example

38 Right

39 Kalman Filter Model: the belief is represented by a Gaussian (variance σ²).

40 Kalman Filter cycle: start from an initial belief, a Gaussian with mean μ and variance σ². Sense (gain information) via Bayes' rule, i.e. multiplication of Gaussians; Move (lose information) via convolution, i.e. addition of means and variances.

41 Measurement Example: prior Gaussian (μ, σ²) and measurement Gaussian (v, r²).

42 Measurement Example: prior (μ, σ²) and measurement (v, r²).

43 Measurement Example: prior (μ, σ²), measurement (v, r²). What is the new variance σ²'?

44 Measurement Example: prior (μ, σ²), measurement (v, r²), posterior (μ', σ²'). To calculate the posterior, multiply the two Gaussian densities and renormalize so the result integrates to 1. Conveniently, the product of two Gaussian densities is itself (up to normalization) a Gaussian.

45 Multiplying Two Gaussians (prior μ, σ²; measurement v, r²):
1. Write out the product of the two exponentials (ignoring the constant term in front).
2. To find the mean, maximize this function (i.e., take the derivative).
3. Set the derivative equal to 0 and solve for x.
4. Do something similar for the variance.
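The resulting formulas were images on the slide; the standard result of carrying out these steps is:

```latex
\mu' = \frac{r^2 \mu + \sigma^2 v}{\sigma^2 + r^2},
\qquad
\sigma'^2 = \frac{\sigma^2 r^2}{\sigma^2 + r^2} = \frac{1}{\tfrac{1}{\sigma^2} + \tfrac{1}{r^2}}
```

Note that σ²' is smaller than both σ² and r²: a measurement always sharpens the belief.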

46 Another point of view… The prior (μ, σ²) is p(x), the measurement (v, r²) is p(z|x), and the result (μ', σ²') is p(x|z).

47 Example

48

49 μ' = 12.4, σ²' = 1.6
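The inputs for this example aren't in the transcript; as a sanity check, one set of values consistent with the result above, for instance a prior of μ = 10, σ² = 8 and a measurement of v = 13, r² = 2 (an assumption, not necessarily the slide's numbers), plugs into the formulas from slide 45 as:

```latex
\mu' = \frac{2 \cdot 10 + 8 \cdot 13}{8 + 2} = \frac{124}{10} = 12.4,
\qquad
\sigma'^2 = \frac{8 \cdot 2}{8 + 2} = 1.6
```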

50 Kalman Filter cycle, continued: the Move step loses information; it is a convolution, i.e. an addition of means and variances, applied to the Gaussian (μ, σ²).

51 Motion Update: for a motion u with motion-noise model (variance r²): μ' = μ + u, σ²' = σ² + r².

52 Motion Update: for motion u with noise model r²: μ' = μ + u, σ²' = σ² + r².

53 Motion Update (numeric example): μ' = μ + u = 18, σ²' = σ² + r² = 10.

54 Kalman Filter summary: the belief is a Gaussian (μ, σ²); the Sense step multiplies Gaussians, and the Move step shifts and spreads them: μ' = μ + u, σ²' = σ² + r².
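A minimal 1-D sketch putting the two steps together; the function names and the numeric inputs in the example calls are assumptions, chosen to be consistent with the results quoted on slides 49 and 53:

```python
# 1-D Kalman filter: measurement update (multiply Gaussians) and motion update (add).
def measurement_update(mu, sigma2, v, r2):
    # Product of the prior N(mu, sigma2) and the measurement likelihood N(v, r2).
    new_mu = (r2 * mu + sigma2 * v) / (sigma2 + r2)
    new_sigma2 = (sigma2 * r2) / (sigma2 + r2)
    return new_mu, new_sigma2

def motion_update(mu, sigma2, u, r2):
    # Shift the mean by the motion u; the motion noise r2 adds to the variance.
    return mu + u, sigma2 + r2

# Example calls with assumed inputs:
print(measurement_update(10.0, 8.0, 13.0, 2.0))  # -> (12.4, 1.6)
print(motion_update(8.0, 4.0, 10.0, 6.0))        # -> (18.0, 10.0)
```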

55 Kalman Filter in Multiple Dimensions 2D Gaussian
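The multivariate equations themselves aren't in the transcript; for reference, the standard n-dimensional Kalman filter (state estimate x with covariance P, motion model F with noise Q, measurement model H with noise R) is:

```latex
\text{Predict:}\quad x' = F x + u, \qquad P' = F P F^{\top} + Q
```
```latex
\text{Update:}\quad K = P' H^{\top} (H P' H^{\top} + R)^{-1}, \qquad
x \leftarrow x' + K\,(z - H x'), \qquad P \leftarrow (I - K H)\, P'
```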

