Modeling Maze Navigation
Consider a stationary robot and a mobile robot moving toward a goal in a maze. We can model the utility of sharing the stationary robot's information as the change in path cost when the mobile robot plans with and without it. From this we can estimate the performance of information sharing in a large team of robots performing this task.

Conclusions and Future Work
In this work, we establish an upper bound on the average-case performance of information sharing in large teams and show that, in certain circumstances, random heuristic policies can achieve a significant portion of that performance. In domains where the network and utility distributions resemble these cases, random information-sharing policies may therefore offer an efficient and robust solution.

Scale Invariance of Performance
The performance of the random policies relative to the omniscient lookahead policy appears to remain constant as the network size grows. This invariance is particularly desirable when designing for massive multiagent applications: since the complexity of maintaining knowledge of a team increases with its size, it suggests that random policies may be more competitive in larger networks.
[Figure: performance as network size increases]

Acknowledgements
This research has been funded in part by AFOSR grant FA9550-07-1-0039 and AFOSR MURI grant FA9550-08-1-0356. This material is based upon work supported under a National Science Foundation Graduate Research Fellowship.

Optimality of Random Policies
The performance of several information-sharing methods is tested in an abstract simulator in which a network of agents is assigned utilities for a given piece of information drawn from a specified distribution. Under certain conditions, the performance gap between the lookahead and random policies is relatively small, suggesting that the random policies distribute information efficiently.

Information Sharing in Large Teams
Emerging applications require the cooperation of thousands of members (humans, robots, and agents). These systems generate and consume massive amounts of information. How can we share information effectively in these domains?

The Utility of Information
In this work, utility is defined as the increase in team performance associated with an agent receiving a piece of information, minus the cost of disseminating that information. The objective of an information-sharing algorithm is to maximize this utility over all the information generated by the team.
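The following minimal sketch ties this utility definition to the maze-navigation model above: it measures the utility of sharing as the reduction in the mobile robot's planned path cost once the stationary robot's map information is incorporated, minus a communication cost. The grid encoding, BFS planner, and comm_cost parameter are illustrative assumptions, not details of the original system.

```python
# Minimal sketch (assumptions: 4-connected grid maze, 0 = free / 1 = blocked,
# the shared information reveals free cells the mobile robot believed were blocked).
from collections import deque

def path_cost(maze, start, goal):
    """Breadth-first-search path length on a grid; None if the goal is unreachable."""
    rows, cols = len(maze), len(maze[0])
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), cost = frontier.popleft()
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and maze[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, cost + 1))
    return None

def sharing_utility(believed_maze, updated_maze, start, goal, comm_cost=1.0):
    """Change in planned path cost with vs. without the shared map, minus comm cost.
    Assumes the goal is reachable in both maps."""
    without_info = path_cost(believed_maze, start, goal)
    with_info = path_cost(updated_maze, start, goal)
    return (without_info - with_info) - comm_cost
```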
Information Sharing Algorithms
We consider two extremes of information-sharing algorithm design in addressing this problem: using all possible knowledge of team utility, and using no knowledge of team utility. Illustrative code sketches of the bound and policies appear after this list.

1. Order Statistic Bound: Given a probability distribution of utility, the optimal sequence of transmissions can be modeled as a descent through the order statistics of the team members, assuming the team members constitute i.i.d. samples from the utility distribution.

2. Lookahead Policy: Taking advantage of all available knowledge of agent utility and network properties, this approach directly solves the above maximization with an exhaustive search over all network paths of a fixed length.

3. Random Walk: Ignoring all available knowledge of agent utility, this algorithm passes information from neighbor to neighbor at random, performing a random walk across the network.

4. Random Self-Avoiding Walk: A straightforward improvement on the random walk, this policy carries a history of visited nodes along with the information; visited agents are excluded from selection when routing, approximating a self-avoiding walk over the network.

5. Random Trail: An alternative approach is to maintain, at each agent and for each active piece of information, a local history of previously used incoming and outgoing network connections; previously used connections are then excluded from selection when routing.

[Figures: information flow from an information source through the agent network (utility vs. communication); performance gap between lookahead and random policies under normal and exponential utility distributions; example on a scale-free network with normally distributed utility; sampled utility of a shared map; estimated utility of map sharing in 1000-agent maze navigation]
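As a rough illustration of the order-statistic bound (item 1), the following Monte Carlo sketch draws i.i.d. team utilities, descends through them in sorted order, and records the best running net value. The sampling function, team size, and fixed per-transmission cost are assumptions for illustration, not parameters from the poster.

```python
# Monte Carlo sketch of the order-statistic bound: with i.i.d. utilities, an optimal
# transmission sequence can do no better than collecting the largest order statistics
# first, paying a fixed cost per transmission.
import random

def order_statistic_bound(sample_utility, team_size, comm_cost, trials=1000):
    """Estimate the expected net utility of an ideal descent through the order statistics."""
    total = 0.0
    for _ in range(trials):
        utilities = sorted((sample_utility() for _ in range(team_size)), reverse=True)
        best = running = 0.0
        for u in utilities:
            running += u - comm_cost
            best = max(best, running)  # stop where the running net utility peaks
        total += best
    return total / trials

# Example: normally distributed utility, as in the experiments described above.
print(order_statistic_bound(lambda: random.gauss(1.0, 1.0), team_size=100, comm_cost=0.5))
```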
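A minimal sketch of the lookahead policy (item 2), assuming the network is given as an adjacency dict, each agent's utility for the information is known, and communication incurs a fixed per-hop cost. The sketch restricts the exhaustive search to simple paths, which may differ from the original implementation's path definition.

```python
def lookahead_path(graph, utility, source, length, hop_cost):
    """Exhaustively search simple paths of up to a fixed length from the source,
    returning the path with the highest total utility minus communication cost."""
    best_path, best_value = [source], 0.0

    def extend(path, value):
        nonlocal best_path, best_value
        if value > best_value:
            best_path, best_value = list(path), value
        if len(path) - 1 == length:
            return
        for nxt in graph[path[-1]]:
            if nxt not in path:  # simple-path restriction for this sketch
                path.append(nxt)
                extend(path, value + utility[nxt] - hop_cost)
                path.pop()

    extend([source], 0.0)
    return best_path
```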
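Sketches of the three random policies (items 3-5) under the same illustrative graph representation. For brevity, the random trail's per-agent connection histories are collapsed into a single set here, whereas the poster describes local histories stored at each agent.

```python
import random

def random_walk(graph, source, hops):
    """Item 3: pass the information to a uniformly random neighbor at each step."""
    path = [source]
    for _ in range(hops):
        path.append(random.choice(graph[path[-1]]))
    return path

def self_avoiding_walk(graph, source, hops):
    """Item 4: carry a history of visited agents and exclude them when routing."""
    path, visited = [source], {source}
    for _ in range(hops):
        options = [n for n in graph[path[-1]] if n not in visited]
        if not options:
            break  # every neighbor has already received the information
        path.append(random.choice(options))
        visited.add(path[-1])
    return path

def random_trail(graph, source, hops):
    """Item 5: exclude previously used connections (edges) rather than visited agents."""
    path, used = [source], set()
    for _ in range(hops):
        here = path[-1]
        options = [n for n in graph[here] if (here, n) not in used and (n, here) not in used]
        if not options:
            break  # all of this agent's connections have carried this information
        nxt = random.choice(options)
        used.add((here, nxt))
        path.append(nxt)
    return path
```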