Outline
– Biological metaphor
– Biological generalization
– How AI applied this
– Ramifications for HRI
– How the resulting AI architecture relates to automation and control theory
Biological Intelligence*
– “Upper brain” or cortex: reasoning over information about goals
– “Middle brain”: converting sensor data into information
– Spinal cord and “lower brain”: skills and responses
*An amazingly sweeping generalization for the purpose of metaphor
Programming Modules (Primitives): SENSE, PLAN, ACT, LEARN
Early AI Robotics: SENSE → PLAN → ACT
– Seemed to capture cognitive notions such as the “action–perception cycle”
Early Problem with SENSE → PLAN → ACT
– In practice: the world model was intractable
– In theory: it ignored Gibson (direct perception), reflexes, …
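The SENSE → PLAN → ACT cycle, and why full world modeling became the bottleneck, can be sketched as follows (all names are illustrative, not from any real system):

```python
# A minimal sketch of the hierarchical SENSE -> PLAN -> ACT cycle.
# All function names here are illustrative, not from any library.

def sense(world):
    """Build a complete symbolic world model from raw readings.
    In practice this step was the bottleneck: intractable to keep
    a full, current model of an open world."""
    return {"obstacles": world.get("obstacles", []),
            "position": world.get("position", (0, 0))}

def plan(model, goal):
    """Reason over the world model to produce a sequence of actions."""
    # Trivial stand-in planner: step one unit toward the goal.
    x, y = model["position"]
    gx, gy = goal
    return [("move", (gx > x) - (gx < x), (gy > y) - (gy < y))]

def act(world, actions):
    """Execute the plan open-loop; nothing is re-sensed until next cycle."""
    for _, dx, dy in actions:
        x, y = world["position"]
        world["position"] = (x + dx, y + dy)
    return world

world = {"position": (0, 0)}
goal = (2, 0)
for _ in range(3):            # every single step waits on a full re-plan
    model = sense(world)
    world = act(world, plan(model, goal))
print(world["position"])      # arrives at the goal, slowly
```

The point of the sketch is structural: nothing moves until sensing and planning both complete, which is exactly the latency problem the reactive critique targeted.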
Two Major “Loops”: 1 – Reflexes, Reactive, Direct Perception
– Spinal cord and “lower brain”: skills and responses, behaviors
Two Major “Loops”: 2 – Deliberative, Uses Symbols/Representations…
– “Upper brain” or cortex: reasoning over information about goals
– Spinal cord and “lower brain”: skills and responses
Plus Perception to Symbols (Abstraction, Models, Explicit Representation)
– “Upper brain” or cortex: reasoning over information about goals
– “Middle brain”: converting sensor data into information
– Spinal cord and “lower brain”: skills and responses
AI Architecture Using the Biological Metaphor
– Deliberative Layer: “upper brain” or cortex, reasoning over information about goals
– “Middle brain”: converting sensor data into information
– Reactive (or Behavioral) Layer: spinal cord and “lower brain”, skills and responses
Behavioral Layer
– SENSE–ACT couplings are “behaviors”
– Behaviors are independent and run in parallel; the output is emergent
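A minimal sketch of how independent SENSE–ACT behaviors can run on the same percept and yield an emergent command by vector summation (a potential-fields-style combination; all names are hypothetical):

```python
# Sketch of a behavioral (reactive) layer: each behavior is an
# independent SENSE-ACT coupling; the robot's overall motion "emerges"
# from summing their outputs. Names are illustrative only.

def move_to_goal(percept):
    """Attractive response toward the goal."""
    gx, gy = percept["goal"]
    x, y = percept["position"]
    return (gx - x, gy - y)

def avoid_obstacle(percept):
    """Repulsive response away from the obstacle."""
    ox, oy = percept["obstacle"]
    x, y = percept["position"]
    return (x - ox, y - oy)

behaviors = [move_to_goal, avoid_obstacle]

def emergent_action(percept):
    # Each behavior runs independently on the same percept; the vector
    # sum is the emergent command. No single behavior "knows" the
    # resulting trajectory, which is why correctness proofs are hard.
    vx = sum(b(percept)[0] for b in behaviors)
    vy = sum(b(percept)[1] for b in behaviors)
    return (vx, vy)

percept = {"position": (0, 0), "goal": (4, 0), "obstacle": (1, 1)}
print(emergent_action(percept))   # goal pull plus obstacle push, combined
```

Adding, removing, or reweighting a behavior changes the emergent result without touching the others, which is the appeal and the hazard of the approach.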
Just the Behavioral Layer…
HRI Ramifications
– Overall action is EMERGENT: a product of the interaction of multiple behaviors and their responses to stimuli
– Not amenable to proofs or traditional guarantees of correctness/safety
– Behaviors-only implementations aren’t optimal
Deliberative Layer
– PLAN, then instantiate and monitor SENSE–ACT behaviors
– Reuse sensing channels but create task-specific representations
(Introduction to AI Robotics, R. Murphy, MIT Press 2000; for second edition)
Example
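As a toy example of the hybrid arrangement, the deliberative layer below turns a mission into behavior instances and monitors them while each behavior does the reactive work (an illustrative sketch, not the book’s implementation):

```python
# Sketch of a hybrid deliberative/reactive architecture: the deliberative
# layer PLANs once, instantiates SENSE-ACT behaviors for the current
# task, then monitors them. All names are illustrative.

def make_behavior(target):
    """Instantiate a SENSE-ACT coupling parameterized for one subtask."""
    def behavior(position):
        x, y = position
        tx, ty = target
        # Reactive step: move one unit toward this behavior's target.
        return ((tx > x) - (tx < x), (ty > y) - (ty < y))
    return behavior

def plan(waypoints):
    """Deliberative step: turn a mission into behavior instances."""
    return [make_behavior(w) for w in waypoints]

def run(position, waypoints, max_steps=20):
    for target, behavior in zip(waypoints, plan(waypoints)):
        while position != target and max_steps > 0:  # local monitoring
            dx, dy = behavior(position)              # reactive SENSE-ACT
            position = (position[0] + dx, position[1] + dy)
            max_steps -= 1
    return position

print(run((0, 0), [(2, 0), (2, 2)]))   # visit two waypoints in order
```

Note the division of labor: planning happens once per mission, while the instantiated behaviors run every cycle.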
Don’t Know How to Do the Symbol Grounding Problem / World Models
HRI Ramifications
Robots are good at computer optimization and large-data-set types of problems:
– Planning and “search”
– Allocation
Robots are not good at converting what is in the real world into symbols (which are required for deliberative functions):
– Recognition is hard
– Gesturing and giving directions relative to objects is hard because there is no perceptual common ground
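The kind of “search” problem robots handle well can be illustrated with a shortest path on a small grid (a minimal sketch; breadth-first search stands in for more capable planners):

```python
# Sketch of the sort of "search" problem robots are good at:
# breadth-first search for a shortest path on a grid. Illustrative only.
from collections import deque

def shortest_path_length(start, goal, blocked, size=5):
    """BFS over a size x size grid; returns step count, or None."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        if (x, y) == goal:
            return dist
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < size and 0 <= ny < size
                    and (nx, ny) not in blocked
                    and (nx, ny) not in visited):
                visited.add((nx, ny))
                frontier.append(((nx, ny), dist + 1))
    return None  # goal unreachable

print(shortest_path_length((0, 0), (2, 2), blocked={(1, 1)}))
```

The contrast in the slide is exactly this: the search itself is mechanical once the grid, the obstacles, and the goal are given as symbols; producing those symbols from raw perception is the hard part.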
What About LEARN?
– Primitives: SENSE, PLAN, ACT, LEARN
What About Other People?
HRI Ramifications
– Robots don’t learn; if they do, it is extremely limited and local to a particular activity or situation
– Natural language understanding remains elusive, so robots are unlikely to communicate with any depth
How AI Relates to Factory Automation
Deliberative Layer:
– Upper level is mission generation & monitoring, but world modeling & monitoring is hard (situation awareness)
– Lower level is selection of behaviors to accomplish the task (instantiation) & local monitoring
(Figure: deliberative layer with plan and world model, generating/monitoring and selecting/implementing, over parallel sense–act couplings)
Control Theory is “Lower Level” but Doesn’t Necessarily Capture It All
Reactive (fly-by-wire, inner-loop control, behaviors):
– Tightly coupled with sensing, so very fast
– Many concurrent stimulus–response behaviors, strung together with simple scripting via FSAs
– Action is generated by a sensed or internal stimulus
– No awareness, no mission monitoring
– Models are of the vehicle, not the “larger” world
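The “simple scripting with FSA” idea can be sketched as a transition table over behaviors, where sensed events move the robot from one behavior to the next (states and events here are hypothetical):

```python
# Sketch of sequencing reactive behaviors with a finite state automaton
# (FSA): each state names a behavior; sensed events trigger transitions.
# States and events are illustrative only.

# Transition table: (current_state, event) -> next_state
transitions = {
    ("SEARCH", "target_seen"): "APPROACH",
    ("APPROACH", "target_reached"): "GRASP",
    ("APPROACH", "target_lost"): "SEARCH",
    ("GRASP", "grasp_done"): "DONE",
}

def step(state, event):
    """Advance the FSA; unknown events leave the behavior unchanged."""
    return transitions.get((state, event), state)

state = "SEARCH"
for event in ["noise", "target_seen", "target_reached", "grasp_done"]:
    state = step(state, event)
print(state)
```

Note what the FSA does not contain: no world model, no mission monitoring, just which behavior runs next, which is why the slide calls it scripting rather than deliberation.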
Consider Time Scales/Horizons
– Reactive sense–act behaviors: PRESENT; very fast, parallel
– Behavior selection & implementation: PRESENT + PAST; fast
– Mission generation & monitoring (plan, world model): PRESENT + PAST + FUTURE; slow
Recap…
Automation is closed world, autonomy is open world
– Automation fails in the open world; autonomy fails too
– Humans are the more adaptive member of the JCS
A simple biological analogy for AI in robotics
– Behaviors: easy
– Advanced cognitive functions: easy
– Connecting perception with symbols: hard
Next: Case Studies
Teleoperation
– Is always the backup control regime
– The operator is mediated and only human
– Robot manufacturers cheap out and assume the human can figure it out