Presentation on theme: "LEGO Mindstorms NXT SOURCES: Carnegie Mellon Gabriel J. Ferrer Dacta"— Presentation transcript:

1 LEGO Mindstorms NXT SOURCES: Carnegie Mellon Gabriel J. Ferrer Dacta
Timothy Friez, Miha Štajdohar, Anjum Gupta. Group: Roanne Manzano, Eric Tsai, Jacob Robison

2 Introductory programming robotics projects
Developed for a zero-prerequisite course; most students are not ECE or CS majors
4 hours per week: 2 meeting times, 2 hours each
Students build the robot outside class

3 Beginning activities Bridge Tower LEGO Man Organizing Pieces
Naming Pieces Programming Robot People Robots by instructions

4 Teaching Ideas
Teach mini-lessons as necessary
Gears: power vs. speed
Transmission of energy/motion
Using fasteners
Worm gears
Building with bricks vs. building machines
[Images: pieces that spin and pieces that don't]

5 Project 1: Motors and Sensors (1)
Introduce motors
Drive with both motors forward for a fixed time
Drive with one motor to turn
Drive with opposing motors to spin
Introduce subroutines: low-level motor commands get tiresome
Simple tasks: program a path (using time delays) to drive through the doorway
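A minimal RobotC-style sketch of the kind of motor subroutine this project introduces; the wiring (left wheel on motorB, right wheel on motorC) and the powers and times are assumptions.

// Minimal sketch (assumed wiring: left wheel on motorB, right wheel on motorC).
void drive(int leftPower, int rightPower, int ms)
{
    motor[motorB] = leftPower;     // set left wheel power (-100..100)
    motor[motorC] = rightPower;    // set right wheel power
    wait1Msec(ms);                 // keep driving for the given time
    motor[motorB] = 0;             // stop both wheels
    motor[motorC] = 0;
}

task main()
{
    drive( 75,  75, 2000);   // forward for 2 seconds
    drive( 75,   0, 800);    // turn by driving one wheel only
    drive( 75, -75, 600);    // spin in place with opposing motors
}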

6 First Project (2)
Introduce the touch sensor
With if statements: must touch the sensor at exactly the right time
With while loops: the sensor is constantly monitored
Interesting problem: students try to put code in the loop body, e.g. set the motor power on each iteration
Causes confusion rather than harm
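One way the "drive until the bumper is pressed" task might look, with the touch sensor assumed on port S1; note that the while loop body can stay empty, which is exactly the point students tend to miss.

// Sketch: drive forward until the touch sensor (assumed on port S1) is pressed.
task main()
{
    motor[motorB] = 60;                 // set the power once, before the loop...
    motor[motorC] = 60;
    while (SensorValue[S1] == 0)        // ...then just poll the sensor
    {
        // empty body: nothing needs to happen on each iteration
    }
    motor[motorB] = 0;                  // stop when the bumper is hit
    motor[motorC] = 0;
}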

7 First Project (3) Combine infinite loops with conditionals
Enables programming of alternating behaviors:
Front touch sensor hit => go backward
Back touch sensor hit => go forward
Braitenberg vehicles and state-machine-based robots
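A sketch of such an alternating behavior, assuming the front touch sensor on S1 and the back touch sensor on S2.

// Alternating behavior: front bumper (assumed S1) reverses the robot,
// back bumper (assumed S2) sends it forward again. Runs forever.
task main()
{
    int power = 60;                      // current drive power (sign = direction)
    while (true)
    {
        if (SensorValue[S1] == 1)        // front touch sensor hit => back up
            power = -60;
        else if (SensorValue[S2] == 1)   // back touch sensor hit => go forward
            power = 60;
        motor[motorB] = power;
        motor[motorC] = power;
    }
}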

8 Project 2: Mobile robot and rotation sensors (1)
Physics of rotational motion
Introduction of the rotation sensors (built into the motors)
Balance wheel power: if left counts < right counts, increase left wheel power
Race through obstacle course
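A sketch of that balancing rule using the built-in encoders; the base power, correction step, and loop delay are assumptions.

// Keep the robot straight by comparing encoder counts on each loop iteration.
task main()
{
    int base = 60;                       // nominal power for both wheels
    int left;
    int right;
    nMotorEncoder[motorB] = 0;           // reset both rotation counters
    nMotorEncoder[motorC] = 0;
    while (true)
    {
        left  = nMotorEncoder[motorB];
        right = nMotorEncoder[motorC];
        if (left < right)                // left wheel is lagging...
        {
            motor[motorB] = base + 5;    // ...so give it a little more power
            motor[motorC] = base - 5;
        }
        else if (left > right)           // right wheel is lagging
        {
            motor[motorB] = base - 5;
            motor[motorC] = base + 5;
        }
        else                             // balanced: drive both at base power
        {
            motor[motorB] = base;
            motor[motorC] = base;
        }
        wait1Msec(20);                   // small delay between corrections
    }
}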

9 Second Project (2)
if (/* Write a condition to put here */)
{
    nxtDisplayTextLine(2, "Drifting left");
}
else if (/* Write a condition to put here */)
{
    nxtDisplayTextLine(2, "Drifting right");
}
else
{
    nxtDisplayTextLine(2, "Not drifting");
}
Complete this code with various conditions and various motions
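Purely as an illustration (not the intended student solution), one way the conditions might be filled in, reusing the encoder counts from the previous slide; the 5-count dead band is an arbitrary assumption.

// Illustration only: detect drift by comparing encoder counts (assumed dead band of 5).
if (nMotorEncoder[motorB] < nMotorEncoder[motorC] - 5)
{
    nxtDisplayTextLine(2, "Drifting left");
}
else if (nMotorEncoder[motorB] > nMotorEncoder[motorC] + 5)
{
    nxtDisplayTextLine(2, "Drifting right");
}
else
{
    nxtDisplayTextLine(2, "Not drifting");
}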

10 Project 3 Line Following

11 Line Following Use light sensors to follow a line in the least time
Design and programming challenge Uses looping or repeating programs Robots appear to be ‘thinking’

12 The "line following" project
Objectives:
Build a mobile robot and program it to follow a line
Make the robot go "as fast as possible"
Challenges:
Different lines (wide, thin, continuous, with gaps, sharp turns, line crossings, etc.)
Control algorithms for 1, 2, and 3 sensors
Real-time, changing environment
Learning, adaptation
Fault tolerance, error recovery
In the simple "line following" project, a mobile robot has to follow a line (a thin guiding line or a wide one, like a road) as fast as possible. The line has a different color than the background, which can be detected by 1, 2, or 3 light-intensity sensors. Students easily find several control algorithms for the robot, but they also have to consider real-time constraints: a fast-moving robot may not have enough time to read the sensors, process the data, and steer. Going fast means the line is sometimes lost (in some experiments we deliberately set up situations where the line is not continuous, has small gaps, or turns very sharply), and the controlling program has to deal with these situations with robust, fault-tolerant, error-recovering techniques, or even learning. Stopping and going back to where the line was last seen is the simplest recovery. A more sophisticated controller does not stop immediately when the line is lost but continues in the estimated line direction (the program remembers on which side the line was last seen and steers that way), hoping the line will be found again. If the line is not found within a certain time, the controller may switch to an "active search" mode, looking for the line in both directions in larger and larger circles. Lighting conditions change, and the apparent line color changes (the robot's shadow falls on the line), so the robot reduces speed or turns more sharply. There are also real-time limitations and problems in communication.
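A minimal single-sensor follower that students might start from; the light-sensor port (S3), threshold, and powers are assumptions, and the recovery behaviors described above would be layered on top of this loop.

// Minimal single-sensor line follower: curve one way over the dark line,
// the other way over the light background. Light sensor assumed on S3.
task main()
{
    int threshold = 45;                   // midway between line and background readings
    while (true)
    {
        if (SensorValue[S3] < threshold)  // dark: over the line
        {
            motor[motorB] = 60;
            motor[motorC] = 30;
        }
        else                              // light: off the line, steer back toward it
        {
            motor[motorB] = 30;
            motor[motorC] = 60;
        }
        wait1Msec(10);
    }
}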

13 Different control algorithms for different lines (large and thin line)
[Diagrams: RCX robots following wide and thin lines]

14 Different control algorithms for 1 and 3 sensors…
[Diagrams: RCX robots with 1 and 3 light sensors]

15 Techniques and knowledge used (1)
Real-time constraints appear when the robot goes "as fast as possible":
Sensor reading and information-processing speed
Motor and robot inertia, wheel slipping…
Fault-tolerant, error-recovery techniques are used when:
Sensor values are unreliable
The surface is irregular
The line is lost…

16 Techniques and knowledge used (2)
Initial calibration and adaptation are used in the "changing environment":
Changes in the light intensity of the line (room lamps, robot shadow, …)
Battery charge…
"Learning" techniques can be used to determine:
How fast the robot can go (acceleration on long straight lines)
How sharply the robot should turn
How to avoid endless repetitions:
Stop, go back to where the line was last seen (and try again…)
Continue (and hope) to find the line
Continue and search for the line
Time limits…
Adaptive threshold and variable-speed turning based on the amount of turning needed
Acceleration
…memorize the track on a closed circuit…
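A sketch of the initial-calibration idea: sample the light sensor over the line and over the background, then take the midpoint as the threshold. Port S3, the prompts, and the timings are assumptions.

// Initial calibration: read the light sensor (assumed on S3) over the line,
// then over the background, and use the midpoint as an adaptive threshold.
int calibrateThreshold()
{
    int onLine;
    int onFloor;
    nxtDisplayTextLine(0, "Put on LINE");
    wait1Msec(3000);                      // time to position the robot by hand
    onLine = SensorValue[S3];
    nxtDisplayTextLine(0, "Put on FLOOR");
    wait1Msec(3000);
    onFloor = SensorValue[S3];
    return (onLine + onFloor) / 2;        // threshold adapts to lighting and battery level
}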

17 Educational benefits of the "line following" project
Students confronted, used, and learned:
Real-time constraints
Robust, fault-tolerant control algorithms
Error-recovery techniques
Robot learning and adaptation to a changing environment
Programming languages
State machines
Multitasking

18 The Challenges

19 Project 4: Drawing robot
Pen-drawer First project with an effector Builds upon lessons from previous projects Limitations of rotation sensors Slippage problematic Most helpful with a limit switch Shapes (Square, Circle) Word ("LEGO")

20 Pen-Drawer Robot

21 Pen-Drawer Robot

22 Project 5: Finding objects (1)
Light sensor Find a line Sonar sensor Find an object Find free space

23 Fifth Project (2) Begin with following a line edge
Robot follows a circular track Always turns right when track lost Traversal is one-way Alternative strategy Robot scans both directions when track lost Each pair of scans increases in size
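A sketch of the alternative strategy in RobotC-style code; the light-sensor port (S3), powers, threshold, and sweep timings are all assumptions.

// When the line edge is lost, scan right, then left, in sweeps that grow
// each time the pair of scans fails. Light sensor assumed on S3.
bool scanForLine(int threshold)
{
    int sweepMs = 200;                       // duration of the first sweep
    int t;
    while (sweepMs <= 3200)                  // widen a few times, then give up
    {
        motor[motorB] = 40;                  // pivot right...
        motor[motorC] = -40;
        for (t = 0; t < sweepMs; t += 10)    // ...polling the sensor as we go
        {
            if (SensorValue[S3] < threshold) return true;
            wait1Msec(10);
        }
        motor[motorB] = -40;                 // pivot left for twice as long,
        motor[motorC] = 40;                  // covering the other side and beyond
        for (t = 0; t < 2 * sweepMs; t += 10)
        {
            if (SensorValue[S3] < threshold) return true;
            wait1Msec(10);
        }
        sweepMs *= 2;                        // each pair of scans increases in size
    }
    return false;                            // line not found; caller decides what to do
}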

24 Fifth Project (3) Once scanning works, replace light sensor reading with sonar reading Scan when distance is short Finds free space Scan when distance is long Follow a moving object

25 Light Sensor/Sonar Robot

26 Other Projects with mobile robots
“Theseus”: store the path (from line following) in an array; backtrack when the array fills
Robotic forklift: finds, retrieves, delivers an object
Perimeter security robot: implemented using the RCX, 2 light sensors, 2 touch sensors
Wall-following robot: build a rotating mount for the sonar
Quantum Braitenberg robots of Arushi Raghuvanshi, maze robots of Stefan Gebauer, and fuzzy robots of Chris Brawn

27 Robot Forklift

28 Gearing the motors

29 Project 6: Fuzzy Logic Implement a fuzzy expert system for the robot to perform a task Students given code for using fuzzy logic to balance wheel encoder counts Students write fuzzy experts that: Avoid an obstacle while wandering Maintain a fixed distance from an object

30 Fuzzy Rules for Balancing Rotation Counts
Inference rules:
biasRight => leftSlow
biasLeft => rightSlow
biasNone => leftFast
biasNone => rightFast
Inference is trivial for this case
Fuzzy membership/defuzzification is more interesting

31 Fuzzy Membership Functions
Disparity = leftCount - rightCount
biasLeft is 1.0 up to -100, decreasing linearly to 0.0 at 0
biasRight is the reverse
biasNone is 0.0 up to -50, 1.0 at 0, falling back to 0.0 at 50

32 Defuzzification
Use representative values: Slow = 0, Fast = 100
Left wheel: (leftSlow * repSlow + leftFast * repFast) / (leftSlow + leftFast)
Right wheel is symmetric
Defuzzified values are motor power levels
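A compact C sketch of the membership functions and left-wheel defuzzification just described; the breakpoints and representative values come from these slides, while the function names and the fallback when no rule fires are assumptions.

// Piecewise-linear membership functions over disparity = leftCount - rightCount,
// using the breakpoints from the membership-function slide.
float biasLeft(float d)
{
    if (d <= -100) return 1.0;
    if (d >= 0)    return 0.0;
    return -d / 100.0;                         // 1.0 at -100, falling to 0.0 at 0
}

float biasRight(float d) { return biasLeft(-d); }   // mirror image of biasLeft

float biasNone(float d)
{
    if (d <= -50 || d >= 50) return 0.0;
    if (d < 0) return (d + 50) / 50.0;         // rises from 0.0 at -50 to 1.0 at 0
    return (50 - d) / 50.0;                    // falls back to 0.0 at +50
}

// Defuzzification for the left wheel with representative values Slow = 0, Fast = 100.
// Rule strengths come from the inference rules: biasRight => leftSlow, biasNone => leftFast.
int leftPower(float d)
{
    float leftSlow = biasRight(d);
    float leftFast = biasNone(d);
    float repSlow  = 0.0;
    float repFast  = 100.0;
    if (leftSlow + leftFast == 0.0) return 100;      // no rule fired; keep full speed
    return (int)((leftSlow * repSlow + leftFast * repFast) / (leftSlow + leftFast));
}

The right wheel is computed symmetrically, swapping the roles of biasLeft and biasRight.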

33 Project 7. Q-Learning
Discrete sets of states and actions
States form an N-dimensional array, unfolded into one dimension in practice
Individual actions selected on each time step
Q-values: 2D array (indexed by state and action)
Expected rewards for performing actions
Example Q-value table: rows are states (happy, unhappy, angry, hungry, bored), columns are actions (action1 = strike, action2, action3, action4); the one entry shown is 0.3 for state happy under action1

34 Q-Learning Main Loop Select action Change motor speeds
Inspect sensor values Calculate updated state Calculate reward Update Q values Set “old state” to be the updated state

35 Calculating the State (Motors)
For each motor: 100% power 93.75% power 87.5% power Six motor states

36 Calculating the State (Sensors)
No disparity: STRAIGHT
Left/Right disparity 1-5: LEFT_1, RIGHT_1
6-12: LEFT_2, RIGHT_2
13+: LEFT_3, RIGHT_3
Seven total sensor states
63 states overall
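A sketch of how these bands might be computed and folded into a single index, assuming three power levels per motor (3 x 3 motor combinations x 7 sensor states = 63); the enum layout and helper names are illustrative.

// Map the encoder disparity (leftCount - rightCount) onto the seven sensor
// states named above, then fold sensor and motor states into one index 0..62.
typedef enum {
    LEFT_3, LEFT_2, LEFT_1, STRAIGHT, RIGHT_1, RIGHT_2, RIGHT_3
} SensorState;   // seven total sensor states

SensorState sensorState(int disparity)
{
    int mag = disparity < 0 ? -disparity : disparity;
    if (mag == 0)       return STRAIGHT;
    if (mag <= 5)       return (disparity < 0) ? LEFT_1 : RIGHT_1;
    if (mag <= 12)      return (disparity < 0) ? LEFT_2 : RIGHT_2;
    return (disparity < 0) ? LEFT_3 : RIGHT_3;
}

// Unfold (sensor state, left power level, right power level) into one dimension.
// leftLevel and rightLevel are each 0..2, giving 7 * 3 * 3 = 63 states.
int stateIndex(SensorState s, int leftLevel, int rightLevel)
{
    return ((int)s * 3 + leftLevel) * 3 + rightLevel;
}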

37 Action Set for Balancing Rotation Counts
MAINTAIN Both motors unchanged UP_LEFT, UP_RIGHT Accelerate motor by one motor state DOWN_LEFT, DOWN_RIGHT Decelerate motor by one motor state Five total actions

38 Action Selection Determine whether action is random
Determined with probability epsilon If random: Select uniformly from action set If not random: Visit each array entry for the current state Select action with maximum Q-value from current state
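A sketch of this selection step in plain C; NUM_ACTIONS matches the five actions on the previous slide, and rand()/RAND_MAX is an assumption standing in for whatever random facility the robot code actually uses.

#include <stdlib.h>

#define NUM_ACTIONS 5    // MAINTAIN, UP_LEFT, UP_RIGHT, DOWN_LEFT, DOWN_RIGHT

// Epsilon-greedy selection: with probability epsilon pick a uniformly random
// action, otherwise pick the action with the largest Q-value for this state.
int selectAction(float q[][NUM_ACTIONS], int state, float epsilon)
{
    if ((float)rand() / RAND_MAX < epsilon)
        return rand() % NUM_ACTIONS;          // exploratory random action

    int best = 0;
    for (int a = 1; a < NUM_ACTIONS; a++)     // visit each entry for the current state
        if (q[state][a] > q[state][best])
            best = a;
    return best;                              // greedy action
}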

39 Calculating Reward No disparity => highest value
Reward decreases with increasing disparity
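A possible shape for that reward, as a sketch only; the 1/100 scaling is an arbitrary assumption.

// Reward sketch for the wheel balancer: highest with no disparity,
// decreasing as the disparity between encoder counts grows.
float disparityReward(int disparity)
{
    int mag = disparity < 0 ? -disparity : disparity;
    return 1.0 - mag / 100.0;     // 1.0 when balanced, smaller as disparity increases
}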

40 Updating Q-values
Q[oldState][action] = Q[oldState][action] + learningRate * (reward + discount * maxQ(currentState) - Q[oldState][action])
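The same update written as a small C function; maxQ is the helper implied by the formula, and NUM_ACTIONS again refers to the five-action set.

#define NUM_ACTIONS 5     // same action set as on slide 37

// Largest Q-value reachable from a state (the max over its actions).
float maxQ(float q[][NUM_ACTIONS], int state)
{
    float best = q[state][0];
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (q[state][a] > best) best = q[state][a];
    return best;
}

// One Q-learning update, exactly the formula above.
void updateQ(float q[][NUM_ACTIONS], int oldState, int action, int currentState,
             float reward, float learningRate, float discount)
{
    q[oldState][action] += learningRate *
        (reward + discount * maxQ(q, currentState) - q[oldState][action]);
}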

41 Student Exercises Assess performance of wheel-balancer
Experiment with different constants Learning rate Discount Epsilon Alternative reward function Based on change in disparity

42 Learning to Avoid Obstacles
Robot equipped with sonar and touch sensor Hitting the touch sensor is penalized Most successful formulation: Reward increases with speed Big penalty for touch sensor
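A sketch of that formulation; the function name, the speed scaling, and the -10 penalty are assumptions for illustration.

// Reward sketch for obstacle avoidance: reward increases with forward speed,
// with a big penalty whenever the touch sensor is hit.
float obstacleReward(int leftPower, int rightPower, bool touched)
{
    if (touched)
        return -10.0;                          // big penalty for hitting the touch sensor
    return (leftPower + rightPower) / 200.0;   // 0..1 as forward speed increases
}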

43 Other classroom possibilities
Operating systems: inspect, document, and modify firmware
Programming languages: develop interpreters/compilers; NBC is an excellent target language
Supplementary labs for CS1/CS2

44 Project 8. Sumo and similar fighting competitions

45 The Tug O’ War
Robots pull on opposite ends of a 2-foot string
There are limits on mass, motors, and certain wheels
Teaches integrity, torque, gearing, friction
Good challenge for beginners
Very little programming

46 Drag Race Least amount of time to cross a set distance
Straight, light, fast designs
Teaches gearing, efficiency
Nice contrast to Tug O’ War
Little programming

47 Sprint Rally Cross the table and return, attempting to stay within the designated path. Challenging programming Possibly uses sensors Teaches precision, programming logic, prediction

48 Sumo-Autonomous Robots push each other out of the ring
A ‘real’ competition
Requires light sensors
Encourages efficient, robust designs
Power isn’t everything
Designs must anticipate unknown opponents

49 Sumo-Remote Uses another RCX or tethered sensors to control
Do not use Mindstorms remote Like BattleBots Still requires programming Driver skill is a factor

50 Other Challenge Possibilities
Weight lifting, obstacle course, tightrope walking, soccer, maze navigation, dancing, golf, bipedal locomotion, tractor pull, and many more
Cooperative robots
Component design
Time-limited robot design
See the website, find more on the internet, or create your own
Create specific rules; predict loopholes

51 Final Notes
Slides available on-line:
Make sure to check back for updates and support. Join the robotc.net forums – a useful community website for getting all other FIRST-related questions answered. Any questions: post to the forums, or email me at

