
Project title: Hierarchical awareness in distributed agents (start 1 Jan. 2005)
Project team members: dr. Bram Bakker (postdoc, UvA), dr.ir. Leon Kester (TNO-FEL), prof.dr.ir. Frans Groen (UvA)

Project objective: To develop scalable, theoretically sound methods for constructing and updating hierarchical state representations of complex distributed systems.

Methods/approach: Criteria for hierarchical state representations:
1) information content for actions is high;
2) the Markov property holds;
3) uncertainty is low;
4) …
Develop algorithms that enforce/approximate these criteria, based on MDPs, POMDPs, HPOMDPs, DBNs, and Utile Distinction methods.
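As a concrete illustration of criteria 2 and 3, the standard POMDP belief-state filter is sketched below: tracking a belief over hidden states restores the Markov property in belief space, and the belief's entropy gives a simple uncertainty measure. This is a generic toy example with made-up numbers, not the project's own algorithm; the 2-state model, matrices `T` and `O`, and function names are all illustrative.

```python
import math

# Hypothetical toy POMDP: states {0, 1}, a single action, observations {0, 1}.
T = [[0.9, 0.1],   # T[s][s2] = P(s2 | s, a): transition model
     [0.2, 0.8]]
O = [[0.8, 0.2],   # O[s2][o] = P(o | s2): observation model
     [0.3, 0.7]]

def belief_update(b, o):
    """One POMDP filtering step: predict through T, correct with O, normalize."""
    predicted = [sum(b[s] * T[s][s2] for s in range(2)) for s2 in range(2)]
    unnorm = [predicted[s2] * O[s2][o] for s2 in range(2)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

def entropy(b):
    """Belief entropy in bits: a simple measure of uncertainty (criterion 3)."""
    return -sum(p * math.log2(p) for p in b if p > 0)

# After a sequence of observations, the belief summarizes the whole history,
# so the process over beliefs is Markov (criterion 2).
b = [0.5, 0.5]
for o in [0, 0, 1]:
    b = belief_update(b, o)
```

The belief vector plays the role of the "state representation" whose information content, Markov property, and uncertainty the criteria above refer to.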

[Slide diagram: a layered POMDP control system. Each layer performs state estimation and action selection; the stack is coupled to the environment through sensors and effectors.]
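The per-layer loop in the diagram (state estimation followed by action selection) can be sketched with a QMDP-style rule: pick the action with the highest expected value under the current belief. The Q-table, belief values, and QMDP choice here are illustrative assumptions, not the project's actual design.

```python
def select_action(belief, Q):
    """QMDP-style selection: maximize expected Q under the belief over states."""
    n_actions = len(Q[0])
    scores = [sum(b * Q[s][a] for s, b in enumerate(belief))
              for a in range(n_actions)]
    return max(range(n_actions), key=lambda a: scores[a])

# Hypothetical Q-table: Q[s][a] for 2 states x 2 actions; each action is
# best in exactly one state, so the belief decides which action wins.
Q = [[1.0, 0.0],
     [0.0, 1.0]]
action = select_action([0.9, 0.1], Q)  # belief favors state 0
```

In a layered architecture, each control layer would run this estimate-then-act cycle at its own level of abstraction.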

Intended project results: Algorithms that automatically construct and maintain a hierarchy of state representations.

Milestones and intended results 2005:
- Define desired characteristics of hierarchical representations
- Develop a first algorithm, e.g. based on Utile Distinction
- Test in a simple traffic management simulation
- Set up collaborations, e.g. within ESA and with CDM
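The Utile Distinction idea mentioned in the milestones (after McCallum's utile distinctions) refines a state only when the distinction helps predict utility, i.e. when returns observed under the candidate sub-states differ markedly. The threshold-based check below is a simplified stand-in for a proper statistical test, with made-up return samples.

```python
from statistics import mean

def is_utile_distinction(returns_a, returns_b, threshold=0.5):
    """Split a state if mean returns under the two histories differ enough."""
    return abs(mean(returns_a) - mean(returns_b)) > threshold

# Returns observed when the agent reaches the same coarse state via two
# different histories (values are illustrative):
hist_a = [1.0, 0.9, 1.1]
hist_b = [0.1, 0.0, 0.2]
split = is_utile_distinction(hist_a, hist_b)  # distinguishing histories pays off
```

A real implementation would replace the threshold with a significance test and use the resulting splits to grow the hierarchical state representation.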