Back to the BlocksWorld: Learning New Actions through Situated Human-Robot Dialogue
Presented by Yuqian Jiang, 2/27/2019
PROBLEM
Learn new actions through situated human-robot dialogue, in a simplified blocks world.
Image source: https://goo.gl/images/nS1JgX
PROBLEM
How does a robot learn the action "stack" from a dialogue, given that it knows only the primitive actions open gripper, close gripper, and move?
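To make the setting concrete, here is a minimal sketch (all names are illustrative assumptions, not the paper's implementation): the robot's repertoire contains only three primitives, and a newly taught action like "stack" is stored as a desired goal state over predicates rather than a fixed primitive sequence, so a planner can realize it in new situations.

```python
# Illustrative sketch of the problem setting; names are assumptions.
PRIMITIVES = ["open_gripper", "close_gripper", "move"]

# A taught action is represented by its goal state (a conjunction of
# predicates with variables), not by a memorized primitive sequence.
learned_actions = {
    "stack": {"goal": [("on", "?x", "?y")]},
}
```

Representing "stack" as a goal lets the robot plan a different primitive sequence whenever the blocks start in a different configuration.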
MOTIVATION
When robots work side-by-side with humans, they can learn new tasks from their human partners through dialogue.
Challenges:
- Human language is discrete and symbolic, while the robot's representation is continuous.
- How should new knowledge be represented so it can generalize?
- How should the human teach new actions?
RELATED WORK
- Following natural language instructions (Kollar et al., 2010; Tellex et al., 2011; Chen et al., 2010)
- Learning by demonstration (Cakmak et al., 2010)
- Connecting language with lower-level control systems (Kress-Gazit et al., 2008; Siskind, 1999; Matuszek et al., 2012)
- Using dialogue for action learning (Cantrell et al., 2012; Mohan et al., 2013)
METHOD
A dialogue system for action learning
Intent Recognizer: classifies each utterance as a command or a confirmation.
Semantic Processor: implemented using Combinatory Categorial Grammar (CCG); extracts the action and object properties.
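A hypothetical illustration of what the semantic processor produces for the running example sentence — a real system would obtain this from a CCG parse; the dictionary keys and role names here are assumptions for illustration only.

```python
# Hard-coded target semantic representation for the example utterance;
# not the paper's actual CCG parser output.
def parse(utterance):
    assert utterance == "stack the blue block on the red block on your right."
    return {
        "action": "stack",
        "theme": {"type": "block", "color": "blue"},
        "destination": {"type": "block", "color": "red",
                        "location": "right_of_robot"},
    }
```

The key point is that the parse separates the action name from the property descriptions of each object, which the grounding step below consumes.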
“stack the blue block on the red block on your right.”
Perception Modules: build, from the camera image and the robot's internal status, a conjunction of predicates representing the environment.
Reference Solver: grounds objects in the semantic representation to objects in the robot's perception.
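A minimal sketch of reference resolution, assuming perceived objects are attribute dictionaries and a description matches when all of its properties hold; the object format and matching rule are assumptions, not the paper's algorithm.

```python
# Perceived objects (illustrative): each is a bundle of properties
# extracted by the perception modules.
perceived = [
    {"id": "o1", "type": "block", "color": "red",  "x": 0.6},
    {"id": "o2", "type": "block", "color": "blue", "x": 0.1},
    {"id": "o3", "type": "block", "color": "red",  "x": 0.2},
]

def resolve(description, objects):
    """Return ids of perceived objects whose properties all match."""
    return [o["id"] for o in objects
            if all(o.get(k) == v for k, v in description.items())]
```

Note that "the red block" alone is ambiguous here (two matches), which is exactly why the utterance adds the spatial constraint "on your right."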
“stack the blue block on the red block on your right.”
Dialogue Manager: a dialogue policy decides the dialogue acts based on the current state.
Language Generator: produces responses from pre-defined templates.
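A toy dialogue policy in the spirit of this slide: map features of the current dialogue state to a dialogue act. The state features and act names are illustrative assumptions, not the paper's actual policy.

```python
# Illustrative state-to-act mapping for the dialogue manager.
def dialogue_act(state):
    if state.get("unknown_action"):
        return "ask_instruction"    # e.g. "I don't know how; please teach me."
    if state.get("ambiguous_reference"):
        return "ask_clarification"  # e.g. "Which red block do you mean?"
    return "confirm_and_execute"
```

Each act is then rendered into a surface utterance by the template-based language generator.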
ACTION MODULES
- Action knowledge
- Action execution
- Action learning
ACTION LEARNING
- If an action is not in the knowledge base, ask for instructions.
- Follow the instructions.
- Extract a goal state describing the action's effects.
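The extraction step can be sketched as a state diff, assuming the world state is a set of grounded predicates: after the robot follows the instructions, the predicates that newly hold are taken as the taught action's effects. Predicate names and the diff rule are assumptions for illustration.

```python
# Goal-state extraction as a set difference over predicate states.
def extract_goal(state_before, state_after):
    """Effects = predicates added by the demonstrated action."""
    return state_after - state_before

before = {("on", "B1", "table"), ("on", "B2", "table"),
          ("clear", "B1"), ("clear", "B2")}
after  = {("on", "B1", "B2"), ("on", "B2", "table"), ("clear", "B1")}
```

On this example, teaching "stack B1 on B2" yields the single effect ("on", "B1", "B2"), which can later be lifted to variables and re-planned for other blocks.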
EXPERIMENTS
Teach five new actions: Pickup, Grab, Drop, ClearTop, Stack.
Two teaching strategies: step-by-step instructions vs. one-shot instructions (e.g., "pick up the blue block and put it on top of the red block").
Five participants (more will be recruited).
RESULTS: Teaching Completion
All failed teaching dialogues used one-shot instructions.
RESULTS: Teaching Duration
Step-by-step teaching dialogues take longer than one-shot dialogues.
RESULTS: Execution
Actions taught with step-by-step instructions generalize better.
CONCLUSION
An approach for learning new actions from human-robot dialogue:
- Built on top of a layered planning/execution system
- Integrated with language and perception modules
- Succeeds in generalizing to new situations in the blocks world
CRITIQUE
Simplified domain with only three low-level actions:
- Cannot learn high-level actions that cannot be sequenced from these low-level actions.
- Cannot learn actions involving objects that cannot be grounded.
- Is it really learning a new action, or just a new word that describes a goal achievable with existing actions?
CRITIQUE
Only action effects are learned, not preconditions:
- The experiments do test situations that violate preconditions, such as picking up a block that has another block on top.
- But these cases succeed only because the preconditions of the underlying primitive actions are already modeled.
CRITIQUE
Evaluation:
- Nothing surprising about the collaborative/non-collaborative results.
- Would prefer more detail on the other modules of the system, and evaluation of their robustness.
CRITIQUE
Revisiting the challenges:
✔ Human language is discrete and symbolic, while the robot's representation is continuous.
? How should new knowledge be represented so it can generalize?
? How should the human teach new actions?