1
Continual Coordination through Shared Activities
Brad Clement & Tony Barrett
Artificial Intelligence Group
Jet Propulsion Laboratory, California Institute of Technology
{bclement,barrett}@aig.jpl.nasa.gov
http://www-aig.jpl.nasa.gov/
All rights reserved, California Institute of Technology © 2002
2
Outline
– motivation
– shared activity model
– SHAC algorithm
– protocol development
– consensus windows
– Mars scenario
– conclusion & future work
3
Why Decentralized Planning?
Why plan?
– near-term actions can affect subsequent ones in achieving longer-term goals
Why decentralize?
– competing objectives (self-interest)
– control is already distributed
– communication constraints/costs (bandwidth, delay, privacy)
– computation constraints (parallel processing)
– robustness to failure?
4
Motivation for Space
Over 40 multi-spacecraft missions proposed!
– Autonomous single-spacecraft missions have not yet reached maturity.
– How can we cost-effectively manage multiple spacecraft?
[Figure: mission programs including Earth Observing System, Sun-Earth Connections, Origins Program, Structure & Evolution of the Universe, Mars Network, and NMP]
5
Motivation for Space
Considerable ground operations effort and cost go into coordinating mission plans for interacting missions.
Human collaboration can be error-prone and slow to react.
Automating this coordination reduces operations costs and increases science return.
Coordination can be automated on the ground or on board the spacecraft.
6
Prior Work
Prior work treats decentralized planning as an offline, collaborative problem:
– planners collaborate on resolving state conflicts and ignore communication costs
Space missions present real-time problems with self-interested agents:
– scientists compete for instrument/spacecraft use
– missions compete for bandwidth to Earth
– remote explorers may need to respond to dynamics autonomously
7
Problems
How should planning agents communicate with each other?
How can they coordinate joint actions during execution?
How can coordination algorithms be developed efficiently?
8
Problems
How should planning agents communicate with each other?
– shared activities
How can they coordinate joint actions during execution?
– continual coordination algorithm
– consensus window
How can coordination algorithms be developed efficiently?
– protocol classes that manipulate shared activities
9
Shared Activity Coordination (SHAC)
– software for getting separate planners to interact
– communication language
– continual coordination algorithm
– framework for defining and implementing automated interactions between planning systems (a.k.a. coordination protocols)
– testbed for evaluating protocols
– agents react as a team to unexpected events
10
Distributed Constrained Optimization
Optimize a function of variable assignments with both local and non-local constraints.
[Figure: Analyst, Planner, Executive, and Control in a loop]
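In standard distributed constrained optimization terms (a formulation assumed here, not given on the slide), each agent i owns variables x_i and the team solves:

\max_{x_1,\dots,x_n} \; \sum_i f_i(x_i)
\quad \text{s.t.} \quad x_i \in D_i \ \text{(local constraints)}, \qquad c_k(x_{i_1},\dots,x_{i_m}) \ \text{(non-local constraints spanning agents)}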
11
Shared Activity Coordination
Shared activities implement team plans, joint actions, and shared states/resources.
[Figure: several Executive/Planner pairs connected through shared activities]
12
Shared Activity Model
parameters (string, integer, etc.)
– constraints (e.g., agent4 allows start_time in [0,20] or [40,50])
decompositions (shared subplans)
permissions: to modify parameters, move, add, delete, choose decomposition, constrain
roles: maps each agent to a local activity
protocols: defined for each role; a protocol can
– change constraints
– change permissions
– change roles, including adding/removing agents assigned to the activity
(See the sketch of this structure below.)
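A minimal Python sketch of this shared-activity structure; the class and field names (SharedActivity, Permission, and so on) are hypothetical, not taken from the SHAC implementation:

from dataclasses import dataclass, field
from enum import Enum, auto

class Permission(Enum):
    # operations an agent may be allowed to perform on a shared activity
    MODIFY_PARAMETERS = auto()
    MOVE = auto()
    ADD = auto()
    DELETE = auto()
    CHOOSE_DECOMPOSITION = auto()
    CONSTRAIN = auto()

@dataclass
class SharedActivity:
    name: str
    # parameter name -> value (string, integer, etc.)
    parameters: dict = field(default_factory=dict)
    # agent -> {parameter name -> allowed intervals},
    # e.g. constraints["agent4"]["start_time"] = [(0, 20), (40, 50)]
    constraints: dict = field(default_factory=dict)
    # shared subplans
    decompositions: list = field(default_factory=list)
    # agent -> set of Permission values
    permissions: dict = field(default_factory=dict)
    # agent -> name of that agent's corresponding local activity (its role)
    roles: dict = field(default_factory=dict)
    # agent -> protocol object run for that agent's role
    protocols: dict = field(default_factory=dict)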
13
SHAC Algorithm
Given: a plan with multiple activities, including a set of shared_activities, and a projection of the plan into the future.
1. Revise the projection using the currently perceived state and any newly added goal activities.
2. Alter the plan and projection while honoring the constraints and permissions of shared_activities.
3. Release relevant near-term activities of the plan to the real-time execution system.
4. For each shared activity in shared_activities:
– if outside the consensus window, apply each associated protocol to modify the activity
– else apply a simple consensus_protocol
5. Communicate changes in shared_activities.
6. Update shared_activities based on received communications.
7. Go to 1.
(A sketch of one iteration follows.)
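A sketch of one iteration of this loop in Python; the helpers (revise_projection, replan, release_near_term, within_consensus_window, and the comms object) are hypothetical stand-ins for planner-specific machinery:

def shac_step(plan, projection, shared_activities, comms, consensus_protocol):
    # 1. Fold the currently perceived state and any new goals into the projection.
    projection = revise_projection(projection, perceive_state(), new_goals())
    # 2. Replan while honoring constraints and permissions on shared activities.
    plan, projection = replan(plan, projection, shared_activities)
    # 3. Hand near-term activities to the real-time execution system.
    release_near_term(plan)
    # 4. Outside the consensus window, run each protocol; inside it, seek consensus.
    for act in shared_activities:
        if within_consensus_window(act):
            consensus_protocol.apply(act)
        else:
            for protocol in act.protocols.values():
                protocol.apply(act)
    # 5 and 6. Exchange shared-activity changes with the other agents.
    comms.send_changes(shared_activities)
    comms.apply_received_changes(shared_activities)
    return plan, projection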
14
Protocol Capabilities
Default protocol class:
– joint intention
– mutual belief
– resource sharing
– active/passive roles
– master/slave roles
Extending protocol classes:
1. modify permissions
2. modify local parameter constraints
3. add/delete sharing agents
4. change roles of sharing agents
15
Example Protocol Classes
Protocols extend four operations:
1. modify permissions
2. modify local parameter constraints
3. add/delete sharing agents
4. change roles of sharing agents
Examples (operations used in parentheses):
– Argumentation (1, 2)
– Delegation (3)
– Asynchronous weak commitment (1, 2)
– Constraint-based conflict resolution (2, 4)
– Round robin (1)
– Centralized conflict delegator (extends delegation)
(A base-class sketch follows.)
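A minimal sketch of a protocol base class exposing those four extension points; the method names are hypothetical, patterned on the Delegation and AWC pseudocode on the following slides:

class Protocol:
    # Subclasses override only the hooks they need.
    def apply(self, act):
        # Each SHAC cycle, a protocol may exercise any of the four operations.
        self.modify_permissions(act)
        self.modify_constraints(act)
        self.modify_sharing_agents(act)
        self.modify_roles(act)

    def modify_permissions(self, act):      # 1. modify permissions
        pass
    def modify_constraints(self, act):      # 2. modify local parameter constraints
        pass
    def modify_sharing_agents(self, act):   # 3. add/delete sharing agents
        pass
    def modify_roles(self, act):            # 4. change roles of sharing agents
        pass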
16
Delegation Protocol (variant used in MISUS)
Delegation::modifyRoles()
– if roles does not contain exactly 1 subordinate:
  choose a subordinate to whom to delegate the activity
  add the subordinate to roles
Subordination::modifyRoles()
– if conflicts involving the activity cannot be resolved:
  remove self from roles
(Sketch below.)
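A sketch of both roles in Python, building on the Protocol base class above; choose_subordinate and can_resolve_conflicts are hypothetical stand-ins for planner-specific logic:

class Delegation(Protocol):
    def modify_roles(self, act):
        # Keep exactly one subordinate assigned to the activity.
        subs = [a for a, role in act.roles.items() if role == "subordinate"]
        if len(subs) != 1:
            chosen = choose_subordinate(act)   # planner-specific choice
            act.roles[chosen] = "subordinate"

class Subordination(Protocol):
    def __init__(self, self_agent):
        self.self_agent = self_agent

    def modify_roles(self, act):
        # Give the activity back if its conflicts cannot be resolved locally.
        if not can_resolve_conflicts(act):
            act.roles.pop(self.self_agent, None)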
17
Round Robin
Modify permissions:
– if self has modification permissions:
  update time_elapsed
  – if finished planning or time_elapsed > threshold:
    remove self's modification permissions (e.g., move and delete)
    add modification permissions for the next agent
    set time_elapsed to 0
(Sketch below.)
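A sketch of the round-robin token passing; cycle_time and finished_planning are hypothetical helpers, and holding the MOVE permission stands in for "has modification permissions":

class RoundRobin(Protocol):
    def __init__(self, self_agent, agent_order, threshold):
        self.self_agent = self_agent
        self.agent_order = agent_order   # fixed turn order of the sharing agents
        self.threshold = threshold
        self.time_elapsed = 0.0

    def modify_permissions(self, act):
        perms = act.permissions.setdefault(self.self_agent, set())
        if Permission.MOVE in perms:     # we currently hold the token
            self.time_elapsed += cycle_time()
            if finished_planning(act) or self.time_elapsed > self.threshold:
                # Pass modification rights (e.g., move and delete) to the next agent.
                perms -= {Permission.MOVE, Permission.DELETE}
                i = self.agent_order.index(self.self_agent)
                nxt = self.agent_order[(i + 1) % len(self.agent_order)]
                act.permissions.setdefault(nxt, set()).update(
                    {Permission.MOVE, Permission.DELETE})
                self.time_elapsed = 0.0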
18
Constraint-Based Conflict Resolution
Modify parameter constraints:
– if conflicts involving the shared activity cannot be resolved:
  update parameter constraints to describe locally consistent values
Modify roles:
– if consensus is reached on constraints or time_elapsed > threshold:
  switch to a role for solution convergence (e.g., argumentation, voting, highest rank decides)
(Sketch below.)
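A sketch of this protocol, again with hypothetical helpers (can_resolve_conflicts, locally_consistent_values, constraints_agree, cycle_time):

class ConstraintBased(Protocol):
    def __init__(self, self_agent, threshold):
        self.self_agent = self_agent
        self.threshold = threshold
        self.time_elapsed = 0.0

    def modify_constraints(self, act):
        # Publish the parameter values this agent can live with.
        if not can_resolve_conflicts(act):
            act.constraints[self.self_agent] = locally_consistent_values(act)

    def modify_roles(self, act):
        self.time_elapsed += cycle_time()
        # Fall back to a convergence role once constraints agree or time runs out.
        if constraints_agree(act) or self.time_elapsed > self.threshold:
            act.roles[self.self_agent] = "convergence"  # e.g., voting, highest rank decides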
19
Asynchronous Weak Commitment
AWC::modifyPermissions()
– if self has the highest priority:
  remove self's modification permissions (add, move, delete)
– else:
  give self modification permissions
AWC::modifyConstraints()
– if local conflicts and conflicts with the constraints of higher-ranking agents cannot be resolved:
  set own rank to the highest rank plus one
  generate parameter constraints (a no-good) describing locally consistent values
(Sketch below.)
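A sketch of AWC over shared activities; the rank bookkeeping and conflict tests (has_highest_priority, higher_ranked_constraints, highest_rank, can_resolve_conflicts) are hypothetical:

class AWC(Protocol):
    def __init__(self, self_agent):
        self.self_agent = self_agent
        self.rank = 0

    def modify_permissions(self, act):
        mods = {Permission.ADD, Permission.MOVE, Permission.DELETE}
        perms = act.permissions.setdefault(self.self_agent, set())
        if has_highest_priority(act, self.self_agent):
            perms -= mods   # the highest-priority agent holds its plan fixed
        else:
            perms |= mods   # lower-priority agents adapt their plans

    def modify_constraints(self, act):
        higher = higher_ranked_constraints(act, self.rank)
        if not can_resolve_conflicts(act, higher):
            # Escalate: jump above everyone and publish a no-good.
            self.rank = highest_rank(act) + 1
            act.constraints[self.self_agent] = locally_consistent_values(act)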
20
Mars Scenario
MER depends on orbiters to route data.
Orbiters have their own agendas but agree to relay MER data based on priority.
Orbiters schedule comm requests and respond with downlink times.
MER delegates to the orbiter with the best downlink time or plans to send directly to Earth.
[Figure (B. Clement): SHAC connecting MGS, MEX, Odyssey, MER A, and MER B]
21
SHAC Model File

shared_activity comm_a {
  datasize;
  critical_size;
  high_size;
  response_time;
  start_time;
  roles = comm_orbiter_a by mera,
          move_mer_data by mgs,
          move_mer_data by odyssey,
          move_mer_data by mex;
  protocols = mera MarsDelegation,
              mgs Subordination,
              mex Subordination,
              odyssey Subordination;
  permissions = comm_orbiter_time_a (all),
                move_mer_data (place, detail, connect, disconnect, response_time);
};
shared_activity comm_b { ... };

agent mera {
  planner = AspenPlannerInterface(80, 20, comm_windows,
                                  odyssey, odyssey_view_sv0, in,
                                  mgs, mgs_view_sv0, in,
                                  mex, mex_view_sv0, in);
  communication = SocketCommunication("../dist-coord/mars-ports.txt");
  communicator = AspenCommunicator;
};
agent merb { ... };
agent odyssey {
  planner = AspenPlannerInterface(60, 20);
  communication = SocketCommunication("../dist-coord/mars-ports.txt");
  communicator = AspenCommunicator;
};
agent mgs { ... };
agent mex { ... };

protocol MarsDelegation();
protocol Subordination();
22
Reaction to Unexpected Events
If a communication fails, or new comm is requested, the planner must either schedule later in open comm slots or swap comm slots to send earlier.
MER can only estimate orbiter downlink times.
MER and the orbiters must be in agreement, so near-term changes may be impossible.
[Figure: MER Planner and Executive with comm opportunities to MGS, Odyssey, MEX, and the DSN]
23
Communication Window-Based Protocols
Problem: when should transmit commitments be made on communication?
If too early, chances to optimize are lost.
If too late, de-commitments could result in lengthy returns to Earth.
[Figure: MER Planner and Executive with comm opportunities to MGS, Odyssey, MEX, and the DSN]
24
Communication Window-Based Protocols
Immediate requests may not be fulfilled during communication because replanning is needed.
By starting a request in the prior comm window, consensus can be reached more easily because there is time to replan between comm windows.
Would a single consensus window be better because of dependencies among the orbiters?
[Figure: comm opportunities to MGS, Odyssey, MEX, and the DSN]
25
Communication Window-Based Protocols
A single consensus window spans to the next comm opportunity. This forces the orbiter to commit to requests on the fly: either it optimistically assumes the orbiter can replan immediately while leaving time to transfer the image, or resources are reserved up front for on-the-fly communication and storage (potentially restricting the flexibility of operations).
[Figure: comm opportunities to MGS, Odyssey, MEX, and the DSN]
26
Communication Window-Based Protocols
Separate consensus windows for each orbiter span to its next comm opportunity.
[Figure: comm opportunities to MGS, Odyssey, MEX, and the DSN]
27
Communication Window-Based Protocols
Separate consensus windows span two comm opportunities ahead.
Immediate requests may not be fulfilled during communication because replanning is needed.
By starting a request in the prior comm window, consensus can be reached more easily because there is time to replan between comm windows.
[Figure: comm opportunities to MGS, Odyssey, MEX, and the DSN]
28
Communication Window-Based Protocols
A single consensus window spans two comm opportunities ahead. This may be better because of dependencies among the orbiters.
[Figure: comm opportunities to MGS, Odyssey, MEX, and the DSN]
(A sketch of deriving such windows follows.)
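A small sketch of deriving these windows from a list of upcoming communication opportunities; the function name and the lookahead parameter are illustrative, matching the "next opportunity" vs. "two ahead" variants above:

def consensus_window(now, comm_opportunities, lookahead=1):
    # Window runs from now until the lookahead-th upcoming comm opportunity
    # (lookahead=1: next opportunity; lookahead=2: two ahead).
    upcoming = sorted(t for t in comm_opportunities if t > now)
    if len(upcoming) < lookahead:
        return None   # not enough scheduled opportunities
    return (now, upcoming[lookahead - 1])

# Separate windows per orbiter vs. a single shared window:
# per_orbiter = {orb: consensus_window(now, opps) for orb, opps in schedule.items()}
# shared = consensus_window(now, [t for opps in schedule.values() for t in opps],
#                           lookahead=2)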
29
Computing Consensus Windows
[Animated figure spanning slides 29 through 41: Agents A, B, and C each have a timeline with an execute point and a consensus window around two shared activities (1 and 2). Successive frames show votes being collected and windows resolving, with decisions made by "highest rank decides" or by voting or auction.]
42
SHAC Design
[Figure spanning slides 42 and 43: SHAC design diagrams; no recoverable text.]
44
Mars Scenario
[Animated timeline figure spanning slides 44 through 47: MER A and Odyssey activity timelines. MER steps through no pending request, request, wait for uplink, must wait, critical pancam, traverse, science activities, comm odyssey, and comm earth (through Odyssey or direct); Odyssey steps through no pending request, odyssey received, comm earth, downlink critical data, and uplink from DSN.]
48
Related Work
– WITS and VOLT provide collaborative interfaces for visualization and data entry
– Plan merging (Georgeff; Ephrati & Rosenschein; Durfee et al.)
– Divide & conquer collaborative planning (Corkill; Lansky)
– DSIPE (desJardins & Wolverton)
– TAEMS & GPGP (Decker & Lesser)
– SharedPlans (Grosz et al.)
– TEAMCORE & TOP (Tambe & Pynadath)
[Table comparing these systems and SHAC on: resolving conflicts, optimization, resources, decentralized search, process automation, and self-interest]
49
Single vs. Multiple ASPENs
ASPEN (single, centralized):
1. Users access separate GUIs to a central ASPEN
2. Different views by hiding activities & timelines
3. User access is uniform
4. Updates are instantaneous
5. Must develop custom interfaces to other tools
SHAC (multiple planners):
1. Users each have a planner interface
2. Different views from different models
3. Protocols model a coordination process: heterogeneous and adaptive access
4. Updates are controlled
5. Provides a planner-independent interface
50
Ground Planning Techsat-21
Problem: multiple groups need to collaborate on building command sequences.
Almost all (if not all) missions have this problem:
– Hubble Space Telescope (one spacecraft, one instrument, many scientists)
– scientists collaborate with each other
– scientists collaborate with operations staff
– missions collaborate with each other (e.g., Mars, Earth orbiters, Great Observatories)
[Figure: Techsat-21 ground data system, including payload ground stations (e.g., Datalynx, USN; X-band and S-band), spacecraft ground station (RSC; L-band commands via AFSCN), mission planning with ASPEN, commanding with SCL, flight dynamics, payload ops workstation, TT&C workstations, data center, PTF, and R/T MOC]
51
Collaborating ASPENs
Centralized approach
advantages:
– sequence uplinks must be centralized anyway
– decisions are simplified with a central authority
disadvantages:
– planning decisions are centralized
– bottleneck for communication and problem solving
– central point of failure (with a proper backup procedure, not a big issue)
[Figure: ASPENs arranged around a central MPW]
52
Collaborating ASPENs
Decentralized approach
advantages:
– heterogeneous planning capabilities
– conflicts are localized and resolved by subgroup
– parallel problem solving
disadvantages:
– distributed problem solving is more complex and may require increased communication
53
Proposed Solution
Hybrid approach
– central authoritative schedule (e.g., MPW)
– heterogeneous planning
– hierarchical activities providing scheduling at different levels of abstraction
– SHAC custom interface
– conflict resolution regulated by the central authority and localized to subgroups
Example planning interactions with MPW: new activities, rejected activities, rescheduled activities, confirmation, schedule updates, removed activities, local constraints.
(A sketch of these message kinds follows.)
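A sketch of the message kinds implied by those interactions, as a hypothetical Python enum (the names are illustrative, not from SHAC):

from enum import Enum, auto

class PlanMessage(Enum):
    # Messages exchanged between a planner and the central schedule (MPW).
    NEW_ACTIVITIES = auto()          # propose additions to the schedule
    REJECTED_ACTIVITIES = auto()     # the authority refuses a proposal
    RESCHEDULED_ACTIVITIES = auto()  # the authority moves proposed activities
    REMOVED_ACTIVITIES = auto()      # deletions from the schedule
    CONFIRMATION = auto()            # a proposal is accepted as-is
    SCHEDULE_UPDATES = auto()        # authoritative schedule broadcast
    LOCAL_CONSTRAINTS = auto()       # constraints a planner must honor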
54
Example: How It Would Work
Planning Phase: ASPEN + ShAC
Users plan activities in parallel:
– add goals to the schedule,
– automated scheduling using ASPEN,
– explore alternative schedules.
Users command ShAC (depending on scheduling permissions) to:
– input constraints on activities,
– send out plan changes, rejections, and constraints,
– receive changes (schedule updates) from others,
– incorporate (enforce) constraints from others.
ShAC can automate subsets of the above commands, taking the user out of the loop if desired.
[Figure: new activities, rescheduled activities, deleted activities, and constraints flowing between MPW and the planners, with some accepted and some rescheduled]
55
Example: How It Would Work
Planning Phase: non-ASPEN + ShAC
The process could be the same with:
– translation between planning languages, or
– manual encoding of activity changes and reports giving feedback.
ShAC enforces constraints and permissions but notifies with reports.
[Figure: Planning Tool X and ShAC exchanging activity changes through a translation layer, with reports on accepted, rejected, and modified activities, plus constraints and permission changes]
56
Contributions
– communication language for distributed planning
– general algorithm for continual coordination
– framework for developing coordination protocols
– planner-independent interface
57
Current & Future Work
– evaluate protocols (including DCSP) for a few domains
– distributed network scheduling
– abstraction techniques for limiting communication and preserving flexibility
– find a customer