Knowledge Representation and Reasoning


Knowledge Representation and Reasoning
Master of Science in Artificial Intelligence, 2009-2011
University "Politehnica" of Bucharest, Department of Computer Science
Fall 2009 - Adina Magda Florea
http://turing.cs.pub.ro/krr_09 curs.cs.pub.ro

Lecture 9
Soar: an architecture for human cognition - cont.

Lecture outline
- Soar representation
- Solving a goal
- Chunking in Soar
- Extended Soar architecture

1. Soar representation
- Problem space (ps): a set of states and a set of operators
- A goal has 3 slots: problem space, state, operator
- Context = ps, state, operator, goal
- Goals can have sub-goals (and associated contexts) => goal-subgoal hierarchy

Soar representation
- Objects have a unique identifier, generated at the time the object was created
- Objects have augmentations - further descriptions of the object
- Augmentation = triple (identifier, attribute, value)
- All objects in a context are connected via augmentations => identifiers of objects act as nodes of a semantic network in which the augmentations are the links
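The triple representation above can be sketched in a few lines of Python. This is a toy illustration, not Soar's implementation; the class, function, and attribute names (`WorkingMemory`, `new_id`, `"binding"`, `"tile"`) are invented for the example:

```python
from collections import defaultdict

_counters = defaultdict(int)

def new_id(prefix):
    """Generate a unique identifier at object-creation time, e.g. S1, S2."""
    _counters[prefix] += 1
    return f"{prefix}{_counters[prefix]}"

class WorkingMemory:
    """Stores objects purely as (identifier, attribute, value) triples."""
    def __init__(self):
        self.augmentations = set()

    def add(self, identifier, attribute, value):
        self.augmentations.add((identifier, attribute, value))

    def describe(self, identifier):
        """All augmentations of one object: its (attribute, value) pairs."""
        return {(a, v) for (i, a, v) in self.augmentations if i == identifier}

# Identifiers act as nodes; an augmentation whose value is another
# identifier is a link in the semantic network.
wm = WorkingMemory()
s1 = new_id("S")                 # a state object, "S1"
b1 = new_id("B")                 # a (hypothetical) binding object, "B1"
wm.add(s1, "binding", b1)        # link S1 -> B1
wm.add(b1, "tile", 8)            # B1 describes tile 8
```

Because values can themselves be identifiers, following augmentations from any object in a context reaches the other objects it is connected to, exactly the semantic-network reading given above.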

2. Solving a goal
- Select a problem space (ps) for the goal
- Select an initial state
- Select an operator to apply to the state
- Enter the elaboration phase = collect all available knowledge relevant to the current situation => explicit search control knowledge
- Information added during elaboration:
  - existing objects have their descriptions elaborated via augmentations (e.g., addition of an evaluation to a state)
  - data structures are created - preferences

Solving a goal
- Repeat the elaboration phase until quiescence
- Use a fixed decision procedure to interpret the preferences
- Apply the operator
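The elaborate-until-quiescence / decide / apply loop can be sketched as follows. This is a toy model under simplifying assumptions (state as a set of tuples, rules as functions that return items to add, a single chosen operator); none of these names come from the real Soar code base:

```python
def decision_cycle(state, elaboration_rules, decision_procedure, operators):
    """One decision cycle: elaborate to quiescence, decide, apply."""
    # 1. Elaboration phase: fire rules until no rule adds anything new
    #    (quiescence). Rules may add preferences or elaborations.
    changed = True
    while changed:
        changed = False
        for rule in elaboration_rules:
            for item in rule(state):
                if item not in state:
                    state.add(item)
                    changed = True
    # 2. A fixed decision procedure interprets the accumulated preferences.
    chosen = decision_procedure(state)
    # 3. Apply the selected operator to produce the next state.
    return operators[chosen](state)

# Toy usage: one rule that prefers 'left' whenever tile 8 is right of the blank.
rules = [lambda s: {("prefer", "left")} if ("blank-right-of", 8) in s else set()]
decide = lambda s: next(op for (tag, op) in s if tag == "prefer")
ops = {"left": lambda s: s | {("applied", "left")}}

result = decision_cycle({("blank-right-of", 8)}, rules, decide, ops)
```

The key point the sketch preserves is that elaboration is monotonic and run to quiescence before the (separate, fixed) decision procedure looks at the preferences.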

Solving a goal by sub-goaling
- During a decision an impasse may occur (see previous lecture)
- Soar responds to an impasse by creating a sub-goal and an associated context
- Again select a ps, an initial state, an operator, elaborate, etc.
- If another impasse is reached, another sub-goal is created to resolve it - a hierarchy of sub-goals
- Universal sub-goaling in Soar = the capability to generate sub-goals automatically in response to impasses, and to create impasses for all aspects of problem-solving behavior

Sub-goaling
- Because all goals are generated as responses to impasses, and each goal has at most one impasse at a time, the goals (and associated contexts) in WM are structured as a stack -> the context stack
- A sub-goal terminates when its impasse is resolved; termination:
  - pops the context stack
  - removes from WM all augmentations created in that sub-goal which are not connected to a prior context, either directly or indirectly (by a chain of augmentations)
  - removes from WM preferences for objects not matched in prior contexts
- The augmentations and preferences that are NOT removed are the results of the sub-goal
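The clean-up rule for augmentations on sub-goal termination is a reachability computation, which a short sketch makes concrete. This is a simplified model (preferences are ignored, and the function names are invented for the example):

```python
def subgoal_results(created, prior_ids):
    """created: (id, attr, value) triples added during the sub-goal.
    prior_ids: identifiers belonging to prior (surviving) contexts.
    Returns the triples that remain in WM -- the sub-goal's results:
    those connected to a prior context directly or by a chain of links."""
    reachable = set(prior_ids)
    changed = True
    while changed:                       # transitive closure over the links
        changed = False
        for (i, a, v) in created:
            if i in reachable and v not in reachable:
                reachable.add(v)         # the value becomes reachable too
                changed = True
    return [(i, a, v) for (i, a, v) in created if i in reachable]

kept = subgoal_results(
    [("S1", "evaluation", "E1"),     # hangs off a prior-context state -> kept
     ("E1", "value", "success"),     # reachable through E1 -> kept
     ("X9", "scratch", "tmp")],      # local to the sub-goal -> removed
    prior_ids={"S1"})
```

Everything not reachable from a prior context is garbage-collected with the sub-goal; what survives is precisely the result that chunking (Section 3) will later summarize.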

Sub-goaling
Besides the 3 types of impasses already discussed:
(1) A state no-change impasse - no operators are proposed for the current state
(2) An operator tie impasse - multiple operators are proposed, but there are insufficient preferences to select between them
(3) An operator conflict impasse - multiple operators are proposed, and the preferences conflict
there is a 4th type of impasse:
(4) An operator no-change impasse - an operator has been selected, but there are no rules to apply it
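The four cases can be captured in a small classifier. This is a deliberately simplified model: real Soar preferences are much richer than the two boolean flags used here, and the function and its parameters are invented for the illustration:

```python
def classify_impasse(proposed, prefs_decide, prefs_conflict,
                     op_selected, op_has_rules):
    """proposed: list of proposed operators.
    prefs_decide: True if the preferences suffice to pick one operator.
    prefs_conflict: True if the preferences contradict each other.
    op_selected / op_has_rules: whether an operator was selected and
    whether any rule can apply it. Returns the impasse type, or None."""
    if op_selected and not op_has_rules:
        return "operator no-change"      # (4) selected, but no rule applies it
    if not proposed:
        return "state no-change"         # (1) nothing proposed for the state
    if len(proposed) > 1 and prefs_conflict:
        return "operator conflict"       # (3) the preferences contradict
    if len(proposed) > 1 and not prefs_decide:
        return "operator tie"            # (2) not enough preferences to choose
    return None                          # no impasse: the decision proceeds
```

Whatever the type, the architectural response is the same: create a sub-goal (and context) whose purpose is to resolve that impasse.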

3. Chunking in Soar
Problem solving in Soar can support learning:
- When new knowledge is needed: when an impasse occurs
- What to learn: the information discovered / obtained when the impasse is resolved = the problem solving within the sub-goal
- When to acquire the new knowledge: when the sub-goal is achieved because its impasse has been resolved

An example of problem solving
Eight Puzzle
- A single general operator to move an adjacent tile into the empty cell
- For a given state, an instance of this operator is created for each cell adjacent to the empty cell: up, down, left, right
- To encode this, we need productions that:
  - propose a problem space (ps)
  - create the initial state of the ps
  - implement the operators of the ps
  - detect the desired (final) state
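Instantiating the single general move operator per adjacent cell can be sketched as below; the board encoding (a 3x3 tuple of tuples with 0 for the blank) and the direction names are assumptions of the example, not Soar's encoding:

```python
def proposed_moves(board):
    """One operator instance per cell adjacent to the blank.
    Returns {direction_name: (row, col) of the tile that slides in}."""
    (r, c) = next((r, c) for r in range(3) for c in range(3)
                  if board[r][c] == 0)           # locate the blank
    names = {(-1, 0): "up", (1, 0): "down", (0, -1): "left", (0, 1): "right"}
    moves = {}
    for (dr, dc), name in names.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:          # the adjacent cell exists
            moves[name] = (nr, nc)
    return moves

# Blank in the centre: all four operator instances are proposed.
centre = proposed_moves(((2, 3, 1), (8, 0, 4), (7, 6, 5)))
```

With the blank in a corner only two instances are proposed, which is why the number of candidate operators, and hence the chance of a tie impasse, varies from state to state.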

An example of problem solving
- If no knowledge is available, a depth-first search occurs as a result of the default processing of tie impasses
- Operator selection -> tie impasse -> sub-goal for this impasse -> another tie impasse -> another sub-goal -> the search deepens
- If search control knowledge is added for the operators -> informed search

Another way to control search
- Decompose the goal into sub-goals = means-ends analysis
- Question: are the sub-goals non-serializable? = tasks for which there exists no ordering of the sub-goals such that successive sub-goals can be achieved without undoing what was accomplished earlier
- Non-serializable goals can be tractable if they are serially decomposable = if there is an ordering p1, ..., pk, ... such that solving pk depends only on p1, ..., pk-1
- See also linear vs non-linear planning

Another way to control search
We can do this for the Eight Puzzle
Operators:
(1) get the blank into its correct position in Sf
(2) get the blank and the first tile into their correct positions in Sf
(3) get the blank and the first 2 tiles into their correct positions in Sf
...
- Note: while a tile is being placed, earlier placements may be temporarily undone, but they are redone by the time the operator completes - this is what serial decomposability permits
- This is not direct search control knowledge but knowledge about how to structure and decompose the puzzle

Another way to control search
We can use 2 problem spaces:
- eight-puzzle - the set of 4 operators (up, down, left, right)
- eight-puzzle-sd - the set of nine operators corresponding to the nine sub-goals
The ordering of the sub-goals is encoded as search control knowledge that creates preferences for the operators

A part of a problem-solving trace for the Eight Puzzle
(initial state Si: 2 3 1 / 8 _ 4 / 7 6 5; desired state Sf: 1 2 3 / 8 _ 4 / 7 6 5)

 1  G1 solve eight-puzzle
 2  P1 eight-puzzle-sd
 3  S1
 4  O1 place-blank
 5  ==> G2 (resolve-no-change)
 6  P2 eight-puzzle
 7  S1
 8  ==> G3 (resolve-tie operator)
 9  P3 tie
10  S2 (left, up, down)
11  O5 evaluate-object (O2 (left))
12  ==> G4 (resolve-no-change)
13  P2 eight-puzzle
14  S1
15  O2 left
16  S3
17  O2 left
18  S4
19  S4   (returns a preference to select left in S1/G2)
20  O8 place-1

Explanations
(1) G1 - an identifier representing the goal in WM; the current goal is initialized and problem solving starts
(2) Problem solving begins in the eight-puzzle-sd ps (P1)
(3) The first state S1 is created in P1
(4) Preferences are generated that order the operators so that place-blank (O1) is selected
(5) A no-change impasse leads to a sub-goal (G2) to implement place-blank (the impasse will be resolved when the blank is in the desired position)
(6) place-blank is implemented as a search in the eight-puzzle ps (P2) for a state with the blank in the correct position
(7) The initial state is selected in P2
(8) A tie impasse arises in selecting among left, up, down (a resolve-tie-impasse sub-goal G3 is automatically generated)
(9) The tie ps (P3) is selected

Explanations
(10) For the tie ps (P3), the states are sets of objects to be evaluated, and the operators evaluate those objects so that preferences can be created
(11) Suppose one of these evaluate-object operators (O5) is selected to evaluate the operator that moves the 8 tile to the left
(12) A no-change impasse occurs and sub-goal G4 is generated, because there are no productions to compute an evaluation of the left operator; default search control knowledge attempts to implement the evaluate-object operator (O5) by applying the left operator to S1
(13)-(16) The left operator is applied and the search reaches S3

Each tile is moved into place by one operator in the eight-puzzle-sd problem space The sub-goals that implement these operators are generated in the eight-puzzle ps

How are chunks created
Basic idea
Learn Chunk C1:
  if the task is "Do a plan for [ont(a), ont(b), free(b)] -> [on(a,b)]"
  then the result is { pick(a), stack(a,b) }
Generalize:
  Task: [ont(X), ont(Y), free(Y)] -> [on(X,Y)]
  Result: { pick(X), stack(X,Y) }
Store generalized Chunk C1:
  if the task is "Do a plan for [ont(X), ont(Y), free(Y)] -> [on(X,Y)]"
  then the result is { pick(X), stack(X,Y) }
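The generalization step above, replacing each constant consistently by a variable, can be sketched directly. This is a toy version: real chunking variabilizes identifiers (see "Identifier variabilization" below), not arbitrary constants, and the term encoding here is invented for the example:

```python
def generalize(chunk, constants):
    """Replace each listed constant by a fresh variable, using the same
    variable for every occurrence of the same constant."""
    mapping = {c: f"X{n}" for n, c in enumerate(constants, start=1)}

    def subst(term):
        if isinstance(term, tuple):               # recurse into compound terms
            return tuple(subst(t) for t in term)
        return mapping.get(term, term)            # constants not listed stay

    return subst(chunk)

# The blocks-world chunk from the slide, with 'a' -> X1 and 'b' -> X2.
chunk = (("ont", "a"), ("ont", "b"), ("free", "b"),
         ("result", "stack", "a", "b"))
general = generalize(chunk, ["a", "b"])
```

Consistency matters: both occurrences of `b` must become the same variable X2, otherwise the stored chunk would fire in situations the original sub-goal never covered.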

How are chunks created
- Chunking creates rules that summarize the processing of a sub-goal
- In the future, this processing can be replaced by a rule application
- When a sub-goal is generated, a learning episode begins which can lead to the creation of a chunk; when the sub-goal terminates, the chunk can be created
- A chunk is a rule or a set of rules

How are chunks created
- Chunk conditions - aspects of the situation that existed prior to the goal and were examined during the processing of the goal
- Chunk action - the result of the goal
- When a goal terminates, the relevant WMEs are converted into the conditions and actions of one or more production rules

Collecting conditions and actions
- Conditions = the WMEs that were matched by productions that fired in the goal (or one of its sub-goals) and that existed before the goal was created
- This collection of WMEs is maintained for each goal in a reference-list (which implies that, for every rule application, Soar must determine which goal in the stack the firing is associated with)
- Actions = based on the results of the goal
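The condition-collection rule is simple to state as code. This sketch compresses the reference-list bookkeeping into two sets, with WMEs as triples and all names invented for the illustration:

```python
def chunk_conditions(matched_wmes, pre_goal_wmes):
    """Chunk conditions = the WMEs matched by rules firing in the goal
    that already existed before the goal was created. WMEs matched but
    created inside the goal are intermediate results, not conditions."""
    return [w for w in matched_wmes if w in pre_goal_wmes]

# WM contents before the goal was created (the reference point).
pre = {("S1", "tile", 8), ("S1", "blank", "center")}

# WMEs matched by productions that fired while the goal was active.
matched = [("S1", "tile", 8),            # existed before the goal -> condition
           ("E1", "value", "success")]   # created inside the goal -> not one

conds = chunk_conditions(matched, pre)
```

The actions of the chunk then come from the other side of the boundary: the results of the goal, i.e. the WMEs that survive its termination (as in the sub-goaling clean-up shown earlier).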

WMEs during chunking (figure)
The figure shows the working-memory elements before and during the sub-goal: the tie among operators O2, O3, O4 in context G2/P3/S2, the evaluate-object operator O5, the no-change impasse creating the eight-puzzle context G4/P2/S1, and the resulting state S3 with evaluation E1 marked SUCCESS.

Collecting conditions and actions
Chunk learned = the conditions test that:
- there is an evaluate-object operator which evaluates the operator that moves the blank into its desired location, and
- all tiles are either in their desired positions or irrelevant for the current eight-puzzle-sd operator

Identifier variabilization
- All identifiers are replaced by variables; constants are left unchanged (e.g., evaluate-object, eight-puzzle)
- All occurrences of a single identifier are replaced with the same variable, and all occurrences of different identifiers are replaced with different variables
- Conditions are added to ensure that two different variables cannot match the same identifier
- This may cause over-specialization
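The variabilization rule can be sketched as a pass over the chunk's triples. The identifier test used below (an upper-case letter followed by digits, like G4 or O5) is an assumption for the example; the added inequality tests of real Soar, which keep distinct variables from matching the same identifier, are noted but not implemented:

```python
def variabilize(triples, is_identifier):
    """Replace every identifier by a variable: equal identifiers share a
    variable, distinct identifiers get distinct variables. Constants
    (attributes and non-identifier values) are left unchanged."""
    mapping, counter = {}, [0]

    def var_for(ident):
        if ident not in mapping:
            counter[0] += 1
            mapping[ident] = f"<v{counter[0]}>"
        return mapping[ident]

    out = []
    for (i, a, v) in triples:
        vi = var_for(i) if is_identifier(i) else i
        vv = var_for(v) if is_identifier(v) else v
        out.append((vi, a, vv))
    # Real Soar also adds tests that no two distinct variables bind the
    # same identifier -- omitted here, and a source of over-specialization.
    return out, mapping

conds = [("G4", "operator", "O5"), ("O5", "name", "evaluate-object")]
new_conds, mapping = variabilize(
    conds,
    lambda t: isinstance(t, str) and t[:1].isupper() and t[1:].isdigit())
```

Note that the constant evaluate-object survives untouched while G4 and O5 become variables, with O5 mapped consistently across both conditions.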

Chunk optimization
- Remove conditions from the chunk that provide no constraint on the match process: a condition is removed if it has a variable in the value field of its augmentation that is not bound elsewhere in the rule (in another condition or in the action)
- Apply a condition-ordering algorithm to take advantage of the Rete-network matcher used in Soar

4. Extended Soar architecture