
1 Softbot Planning
Christophe Bisciglia, 5/16/2003

2 Agenda
Issues with Softbot Planning
Representation (KPC & SADL)
PUCCINI (Observation/Verification Links)
What's Next?

3 Challenges to Softbots
What types of traditional planners might we try to use?
Open world: how do you represent information? Is it complete?
Uncertainty: can it be modeled probabilistically? Should it be?
Action and goal representation: what are "actions" and "goals" for softbots?
Sanity: how do you ensure reasonable plans?

4 Motivating Examples
Personal Agents:
Find me a few recent articles on "The Matrix Reloaded"
Find all .java files under my home directory, compile them, and make a JAR
Larger-Scale Agents:
Comparison shopping agents
Intelligent spiders

5 A Softbot's World
Can you make open/closed world assumptions?
How can you describe information? Bounded? Complete? Correct?
A softbot's world is neither open nor closed; it's somewhere in the middle. This means we need a more expressive way to describe the world, generally called "unbounded incomplete information": massively incomplete, but generally assumed correct.
How do you determine something is false? Sensing actions only confirm existence. Or do they?

7 Given unbounded, incomplete information
How do you figure out "the truth"? What assumptions can be made about quality? Is assuming truth reasonable?
How does assuming incomplete but correct information limit domains? Good/bad examples?
Figuring out truth: sensing actions. Consider a project Cody Kwok did a few years back called "Mulder": it doesn't assume any one page is correct, but relies on the assumption that "the truth is out there."
Domains: good — Unix, wsj.com, reputable web services. Bad — the Internet as a whole? Unstable networks like KaZaA?

8 Local Closed World Information (LCW)
Make the Closed World Assumption about local areas of the world.
How can this idea be used to determine "Φ is F"?
When is something still unknown?

9 LCW - Formally
Set of ground literals D_M; set of LCW formulas D_F.
LCW formulas are of the form LCW(Φ). E.g., LCW(parent.dir(f, /tex)) means D_M contains all files in /tex.
Φ(θ) is the LCW formula Φ with the variable substitution θ applied.
Φ(θ) ∈ D_M ⇒ Truth-Value(Φ(θ)) ∈ {T, F}
Φ(θ) ∉ D_M ⇒ Truth-Value(Φ(θ)) ∈ {F, U}
Φ(θ) ∉ D_M and LCW(Φ) ∈ D_F ⇒ Φ(θ) is F
Φ(θ) ∉ D_M and LCW(Φ) ∉ D_F ⇒ Φ(θ) is U
Mention that θ is just a variable substitution; encourage the class not to get too confused by it.
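A minimal sketch of these truth-value rules in Python, assuming a toy representation (literals as tuples, LCW patterns with ?-variables; names are hypothetical, not PUCCINI's implementation). Only positive literals are modeled, so the {T,F} / {F,U} cases collapse to single values.

```python
# Literals are tuples such as ("parent.dir", "aips.tex", "/papers");
# variables in LCW patterns start with "?".

def unifies(ground, pattern):
    """Check whether a ground literal matches an LCW pattern,
    binding ?-variables consistently."""
    if len(ground) != len(pattern) or ground[0] != pattern[0]:
        return False
    bindings = {}
    for g, p in zip(ground[1:], pattern[1:]):
        if p.startswith("?"):
            if bindings.setdefault(p, g) != g:
                return False        # same variable bound two ways
        elif p != g:
            return False            # constant mismatch
    return True

def truth_value(ground, DM, DF):
    """Return 'T', 'F', or 'U' for a ground literal Phi(theta)."""
    if ground in DM:
        return "T"                  # Phi(theta) in DM
    if any(unifies(ground, lcw) for lcw in DF):
        return "F"                  # covered by an LCW formula but absent
    return "U"                      # no local closure: unknown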

10 LCW Example
Consider the following:
D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b)}
D_F = {}
What do we know about the world?
What happens if the planner executes: ls -a /papers
Results: aips.tex 241 b, TPSreport.tex 187 b
New state:
D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b), parent.dir(TPSreport.tex, /papers), length(TPSreport.tex, 187 b)}
D_F = {parent.dir(f, /papers) ∧ length(f, l)}

11 LCW Example Continued
State:
D_M = {parent.dir(aips.tex, /papers), length(aips.tex, 241 b), parent.dir(TPSreport.tex, /papers), length(TPSreport.tex, 187 b)}
D_F = {parent.dir(f, /papers) ∧ length(f, l)}
How do we conclude:
parent.dir(aips.tex, /papers) ∧ length(aips.tex, 241 b)?
parent.dir(AAAI.tex, /papers) ∧ length(AAAI.tex, 921 b)?
parent.dir(memo.tex, /memos) ∧ length(memo.tex, 71 b)?
The first is contained in D_M, so true. The second is not in D_M but unifies with the LCW formula in D_F, so false. The third is neither in D_M nor unified with anything in D_F, so unknown.
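Running the toy truth_value sketch from slide 9 on this example reproduces the three answers (D_F is simplified here to just the parent.dir conjunct):

```python
DM = {("parent.dir", "aips.tex", "/papers"),
      ("length", "aips.tex", "241b"),
      ("parent.dir", "TPSreport.tex", "/papers"),
      ("length", "TPSreport.tex", "187b")}
DF = [("parent.dir", "?f", "/papers")]   # LCW: all files in /papers are known

print(truth_value(("parent.dir", "aips.tex", "/papers"), DM, DF))   # T: in DM
print(truth_value(("parent.dir", "AAAI.tex", "/papers"), DM, DF))   # F: unifies with LCW, absent
print(truth_value(("parent.dir", "memo.tex", "/memos"), DM, DF))    # U: no LCW on /memos
```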

12 LCW and Universal Quantification (briefly)
How could LCW be used for universally quantified effects?
Example: compress all files in /papers
∀f: parent.dir(f, /papers) ⇒ satisfy(compressed(f))
Plan (sketched in code below):
Obtain LCW(parent.dir(f, /papers))
For each f in D_M where parent.dir(f, /papers) is true, compress f
How do we know this works? Subgoal on obtaining LCW, then satisfy the goal for each relevant entry in D_M: if we have LCW on /papers and a file is not in D_M, it doesn't exist.
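A minimal sketch of this two-step plan for the Unix domain, assuming a POSIX gzip on the path (the directory name is the slide's example):

```python
import os
import subprocess

def compress_all(directory="/papers"):
    """Sketch of the slide's plan: sense to obtain LCW, then act."""
    # Step 1: listing the directory establishes LCW(parent.dir(f, dir)):
    # DM now provably contains every file in the directory.
    files = os.listdir(directory)
    # Step 2: satisfy compressed(f) for each matching entry in DM.
    # With LCW, any file not listed provably does not exist, so this
    # loop achieves the universally quantified goal.
    for name in files:
        path = os.path.join(directory, name)
        if os.path.isfile(path) and not name.endswith(".gz"):
            subprocess.run(["gzip", path], check=True)
```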

13 LCW Pros & Cons
Pros: allows the agent to make local conclusions
Prevents redundant sensing — how? What about "mv aips.tex foo" when we have LCW on foo? Do we need to re-sense? (See the sketch below.)
Universal quantification
Others?
Cons: bookkeeping. Basic idea: if an addition to D_M doesn't fully unify with a formula in D_F (some conjunct is still unknown), that LCW formula no longer holds and must be retracted.
Others?
Conclusion: overall, LCW is great for softbot planning.
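A sketch of the "mv" bookkeeping question, reusing the tuple representation from slide 9 (hypothetical semantics): because the planner knows exactly which literal mv adds, it can update D_M in place, and LCW on the destination directory survives without re-sensing.

```python
def current_dir(f, DM):
    """Look up f's parent directory in DM, if known."""
    for lit in DM:
        if lit[0] == "parent.dir" and lit[1] == f:
            return lit[2]
    return None

def move_file(f, dest_dir, DM):
    """Apply mv's fully-known effect directly to DM. The contents of
    dest_dir remain completely known, so LCW there is preserved; only
    a partially-known addition would force retracting an LCW formula."""
    src = current_dir(f, DM)
    if src is not None:
        DM.discard(("parent.dir", f, src))
    DM.add(("parent.dir", f, dest_dir))
```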

14 Knowledge Representation
Classical Knowledge Pre-Conditions (KPCs)
Require a priori knowledge that an action causes some effect
Can't really "check if X is the case" — consider the safe-combination example
Problems with KPCs: "representational handcuffs" for sensing
How do you represent "I may or may not see X" — and then plan accordingly?
Why not just build contingent plans? Would POMDPs work?
Before displaying, ask the class: what are the KPCs? What do you need to know before you execute an action? How do you represent a completely unknown result? It's not like contingent planning: you may or may not see an infinite number of things. What does this do to POMDPs? The state space explodes to 2^(far too big a number).

15 SADL = UWL + ADL
Designed to represent sensing actions and information goals
Eliminates knowledge pre-conditions
Generalizes causal links
Categorizes effects and goals
Runtime variables (preceded by !)

16 SADL Actions
Causal Actions: actions that change the world. E.g., mv, rm, gzip, etc.
Observational Actions: actions that report the state of the world. E.g., ls, finger, date, etc.
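A toy encoding of this taxonomy (hypothetical names, not SADL's actual syntax), tagging each effect as causal or observational:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Effect:
    kind: str       # "cause" (changes the world) or "observe" (reports it)
    literal: tuple

@dataclass
class Action:
    name: str
    effects: List[Effect]

# Causal action: changes the world.
rm = Action("rm", [Effect("cause", ("deleted", "?f"))])

# Observational action: reports the world's state without changing it.
# "!f" marks a run-time variable, bound only when ls actually executes.
ls = Action("ls", [Effect("observe", ("parent.dir", "!f", "?dir"))])
```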

17 SADL Goals
Satisfy Goals: traditional goals — satisfy by any means possible.
Initially Goals: similar, but refer to when the goal was given to the agent, not when it is achieved. initially(p, !tv) means that by the time the plan is complete, the agent should know whether p was true when it started.
What do initially goals allow? Combined with satisfy goals, they can express "tidiness": modify p at will, but restore it before the plan is completed.
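A toy rendering of the "tidiness" idiom (hypothetical syntax, not SADL's actual grammar): an initially goal records p's truth value at plan start in a run-time variable, and a satisfy goal restores it by the end.

```python
# Goal list for "tidy" behavior on compressed(aips.tex):
tidiness_goal = [
    # Learn the value p had when the goal was given; !tv is a
    # run-time variable bound during execution.
    ("initially", ("compressed", "aips.tex"), "!tv"),
    # The plan may compress/uncompress the file freely in between...
    # ...but must restore p to the recorded value by plan completion.
    ("satisfy", ("compressed", "aips.tex"), "!tv"),
]
```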

18 SADL Goals continued… Hands-Off Goals
Prohibit the agent from modifying the fluents involved.
What does this do for us? Consider this example:
Goal: delete the core file
Plan: mv TPS-report.tex core; rm core
Remember, agents are very resourceful.
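A sketch of how a hands-off constraint might prune the sneaky plan, reusing the Action/Effect encoding from slide 16 (a hypothetical check, not PUCCINI's actual mechanism):

```python
# Fluents the agent is forbidden to modify.
HANDS_OFF = {("name", "TPS-report.tex")}

def violates_hands_off(action):
    """Reject any action whose *causal* effects touch a hands-off
    fluent; observational effects remain allowed."""
    return any(e.kind == "cause" and e.literal in HANDS_OFF
               for e in action.effects)

# The sneaky rename modifies TPS-report.tex's name, so it is pruned.
sneaky_mv = Action("mv TPS-report.tex core",
                   [Effect("cause", ("name", "TPS-report.tex"))])
assert violates_hands_off(sneaky_mv)
```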

19 General Causal Links
Two types discussed in the PUCCINI paper:
Observation Links: A_e —e,p→ A_p. The observe effect e from A_e supports the precondition p of A_p.
Verification Links: A_p ←p,e— A_e. Action A_p needs p to be verified by the observe effect e from A_e.
What's the difference? The two are very similar, but the ordering of producer and consumer is reversed.
What happens if we remove the ordering constraint, as the paper suggests? The planner doesn't have to commit to observing p beforehand or verifying it afterward. What does this do to the search space? It creates more possible plans, hence increasing complexity, but in practice this doesn't seem to hurt.
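A toy encoding of the two link types (hypothetical names): both share the same producer/consumer/proposition fields, and relaxing the ordering constraint amounts to leaving the order field open until search decides.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    producer: str         # Ae, the action whose observe effect is e
    consumer: str         # Ap, the action whose precondition is p
    prop: str             # the proposition p
    order: Optional[str]  # "obs": Ae before Ap (observation link);
                          # "ver": Ae after Ap (verification link);
                          # None: commitment deferred, as the paper suggests

# Without the ordering constraint, the planner need not commit yet:
link = Link(producer="ls", consumer="gzip",
            prop="parent.dir(f, /papers)", order=None)
```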

20 What next?
This planner is a few years old; what new technologies might be used? What assumptions could we relax?
New technologies: plan graphs, heuristics.
Assumptions: how could we deal with The Onion (sources that aren't correct)?
Parag asked about using verification links in other domains.

21 In Conclusion…
LCW is a compromise between open and closed world assumptions.
LCW prevents redundant sensing.
LCW facilitates universal quantification.
SADL is great when you need to describe sensing and ensure reasonable plans.
Generalizing causal links gives the planner more options without greatly increasing complexity.

