1
On the Use of Intelligent Agents as Partners in Training Systems for Complex Tasks* Thomas R. Ioerger, Joe Sims, Richard Volz Department of Computer Science Texas A&M University Judson Workman, Wayne Shebilske Department of Psychology Wright State University *Funds provided by a MURI grant through DoD/AFOSR.
2
Complex Tasks, and the Need for New Training Methods
Complex tasks (e.g. operating machinery)
–multiple cognitive components (memory, perceptual, motor, reasoning/inference...)
–novices feel overwhelmed
–limitations of part-task training
–automaticity vs. attention management
Role for intelligent agents?
–can place agents in simulation environments
–need guiding principles to promote learning
3
Previous Work: Partner-Based Training
AIM (Active Interlocked Modeling; Shebilske, 1992)
–trainees work in pairs (AIM-Dyad)
–each trainee performs part of the task; together they complete the whole task
importance of context (integration of responses)
can produce equal training, 100% efficiency gain
co-presence/social variables not required
–trainees placed in separate rooms
correlation with intelligence of partner
–Bandura, 1986: "modeling"
4
Automating the Partner with an Intelligent Agent
Hypothesis: Would the training be as effective if the partner were played by an intelligent agent?
Important prerequisite: a CTA (cognitive task analysis)
–a hierarchical task decomposition allows functions to be divided in a "natural" way between human and agent partners (see the sketch below)
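For concreteness, here is a minimal C sketch of how such a division of labor might be represented. The struct, the sub-task names, and the human/agent assignments are our own illustrative assumptions (echoing the joystick/mouse split used later in the experiments), not the authors' actual CTA.

    /* Hypothetical sketch (not the authors' code) of a CTA-style task
     * decomposition in which each leaf sub-task is assigned to the human
     * trainee or to the partner agent. */
    #include <stdio.h>

    typedef enum { HUMAN, AGENT } Performer;

    typedef struct Task {
        const char *name;
        Performer   who;
    } Task;

    int main(void)
    {
        /* leaves of a two-branch decomposition of Space Fortress */
        Task subtasks[] = {
            { "navigate ship (joystick)",    HUMAN },
            { "aim and fire at fortress",    HUMAN },
            { "identify mines (IFF, mouse)", AGENT },
            { "select bonuses (mouse)",      AGENT },
        };
        for (size_t i = 0; i < sizeof subtasks / sizeof subtasks[0]; i++)
            printf("%-28s -> %s\n", subtasks[i].name,
                   subtasks[i].who == HUMAN ? "human" : "agent");
        return 0;
    }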
5
Space Fortress: Laboratory Task
Representative of complex tasks
–has similar perceptual, motor, attention, memory, and decision-making demands as flying a fighter jet
–continuous control: navigation with joystick, 2nd-order thrust control (see the sketch below)
–discrete events: firing missiles, making bonus selections with mouse
–must learn rules for when to fire, boundaries...
Large body of previous studies/data
–Multiple Emphasis on Components (MEC) protocol
–transfers to operational setting (attention mgmt)
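To illustrate what "2nd-order thrust control" means here: the joystick commands thrust (acceleration), which integrates into velocity and then position, so the frictionless ship keeps drifting until counter-thrust is applied. The C sketch below is our own illustration, not the game's source; the names and the omission of screen wrap-around are simplifications.

    /* Minimal sketch of second-order control: thrust changes velocity,
     * velocity changes position, and with no friction the ship drifts
     * until the player applies counter-thrust.  Illustrative only. */
    #include <math.h>

    typedef struct {
        double x, y;     /* position */
        double vx, vy;   /* velocity */
    } Ship;

    void update_ship(Ship *s, double thrust, double heading_rad, double dt)
    {
        s->vx += thrust * cos(heading_rad) * dt;  /* thrust -> velocity   */
        s->vy += thrust * sin(heading_rad) * dt;
        s->x  += s->vx * dt;                      /* velocity -> position */
        s->y  += s->vy * dt;                      /* no friction term     */
    }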
6
[Screenshot of the Space Fortress display: ship, fortress, mine, fortress shot, bonus-available indicator, and PMI, with the score panel (PNTS, CNTRL, VLCTY, VLNER, IFF, INTRVL, SPEED, SHOTS) and labels for the mouse buttons and joystick]
7
Implementation of a Partner Agent
Implemented decision-making procedures for automating mouse and joystick
Added if-then-else rules in C source code
–emulate decision-making with rules
Agent is simple, but satisfies criteria:
–situated, goal-oriented, autonomous
First version of agent played too "perfectly"
Made it play "realistically" by adding some delays and imprecision (e.g. in aiming); a sketch follows below
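The flavor of those if-then-else rules, together with the added delays and aiming imprecision, might look like the following. The threshold, the rand()-based jitter, and the function names are our assumptions for illustration, not the original C source.

    /* Illustrative firing logic with "human-like" imperfection
     * (reaction delay, aiming jitter).  Parameters are placeholders. */
    #include <stdlib.h>

    #define VULN_THRESHOLD 10        /* illustrative rule parameter */

    typedef enum { HOLD_FIRE, FIRE_SINGLE, FIRE_DOUBLE } FireAction;

    static double aim_jitter_deg(double max_err)
    {
        /* uniform aiming error in [-max_err, +max_err] degrees */
        return ((double)rand() / RAND_MAX) * 2.0 * max_err - max_err;
    }

    FireAction decide_fire(int vulnerability, int ms_since_last_shot,
                           int reaction_delay_ms, double *aim_error_deg)
    {
        if (ms_since_last_shot < reaction_delay_ms)
            return HOLD_FIRE;                 /* simulate reaction time */

        *aim_error_deg = aim_jitter_deg(3.0); /* imperfect aim */

        if (vulnerability >= VULN_THRESHOLD)
            return FIRE_DOUBLE;               /* finish off the fortress */
        return FIRE_SINGLE;                   /* keep raising vulnerability */
    }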
8
Agent Finite-State Diagrams
[Two state diagrams, not reproduced: "Handling the Fortress" and "Handling Mines"]
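Since the diagrams themselves are not reproduced, here is a hedged C reconstruction of what a mine-handling state machine of this kind could look like. The state names and transitions are our guesses at the diagram's content, not the authors' implementation.

    /* Sketch of a mine-handling finite-state machine in the spirit of the
     * slide's "Handling Mines" diagram.  States and transitions are our
     * reconstruction, not the original design. */
    typedef enum {
        WAIT_FOR_MINE,     /* no mine on screen                    */
        IDENTIFY_MINE,     /* read IFF letter: friend or foe?      */
        TAG_FOE,           /* foe: perform the IFF keypresses      */
        SHOOT_MINE,        /* aim at the mine and fire             */
        RESUME_FORTRESS    /* mine gone: back to fortress handling */
    } MineState;

    MineState mine_step(MineState s, int mine_present, int mine_is_foe,
                        int iff_done, int mine_gone)
    {
        switch (s) {
        case WAIT_FOR_MINE:   return mine_present ? IDENTIFY_MINE : WAIT_FOR_MINE;
        case IDENTIFY_MINE:   return mine_is_foe  ? TAG_FOE       : SHOOT_MINE;
        case TAG_FOE:         return iff_done     ? SHOOT_MINE    : TAG_FOE;
        case SHOOT_MINE:      return mine_gone    ? RESUME_FORTRESS : SHOOT_MINE;
        case RESUME_FORTRESS: return WAIT_FOR_MINE;
        }
        return WAIT_FOR_MINE;
    }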
9
Experiment 1
Hypothesis: Training with the agent improves final scores
Protocol:
–10 sessions of 10 3-minute trials each (over 4 days)
–each session 1/2 hour: 8 practice trials, 2 test trials
Groups:
–Control (standard instructions + practice)
–Partner Agent (instructions + practice; mouse and joystick alternated between trainee and agent)
Participants:
–40 male undergraduates at WSU
–<20 hrs/wk playing video games
10
Results of Expt 1
*Difference in final scores was significant at the p < 0.05 level by paired t-test (df = 38): t = 2.33 > 2.04
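Written out, the reported test is just the standard comparison of the observed statistic against the critical value quoted on the slide (roughly the two-tailed 5% cutoff at df = 38):

    t_{\mathrm{obs}} = 2.33 \;>\; t_{0.05,\,38} \approx 2.04
    \quad\Rightarrow\quad \text{reject } H_0:\ \text{no difference in final scores}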
11
Breakdown of Scores
12
Effect of Level of Simulated Expertise of Agent?
Results of Expt 1 raise a follow-up question: What is the effect of the level of expertise simulated by the agent?
The agent can be made more or less accurate.
Recall: correlation with partner's intelligence
Is it better to train with an expert? Or perhaps with a partner of matching skill level?
–novices might have trouble comprehending experts' strategies while struggling to keep up
13
Experiment 2
Hypothesis: Different skill levels of the agent affect trainees' performance improvement
Similar design as Expt 1, except 4 groups:
–Control, Novice agent, Intermediate agent, Expert agent
Adjust skill level of agent by fine-tuning randomness parameters (shot timing, aiming accuracy, IFF mistakes); see the sketch below
Skill levels gauged to target groups (empirically determined):
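One plausible way to expose those randomness parameters is a small struct of per-level knobs, as sketched below in C. The field names and preset numbers are illustrative placeholders, not the empirically calibrated values used in the experiment.

    /* Hypothetical skill-level parameterization.  The three knobs mirror
     * the slide (shot timing, aiming accuracy, IFF mistakes); the preset
     * values are placeholders, not the calibrated ones. */
    typedef struct {
        int    shot_delay_ms;     /* extra delay before each shot     */
        double aim_error_deg;     /* max random aiming error          */
        double iff_mistake_prob;  /* chance of mis-identifying a mine */
    } AgentSkill;

    static const AgentSkill NOVICE       = { 600, 8.0, 0.20 };
    static const AgentSkill INTERMEDIATE = { 350, 4.0, 0.08 };
    static const AgentSkill EXPERT       = { 150, 1.5, 0.01 };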
14
Results of Expt 2
Conclusion: Training with an expert partner agent is best.
15
Lessons Learned for Future Applications
Principled approach to using agents in training systems: as partners (cognitive benefits)
Requires CTA
–best if high degree of decoupling
–if greater interaction, agent might have to "cooperate" with human by interpreting and responding to apparent strategies
Desiderata for Agent:
–Correctness
–Consistency (necessary for modeling)
–Realism (how to simulate human "errors"?)
–Exploration (errors lead to unusual situations)