Presentation transcript:

CIS 490 / 730: Artificial Intelligence
Lecture 4 of 42 (Wednesday, 30 August 2006)
Uninformed Search: DLS, Bidirectional, B&B, SMA, IDA
William H. Hsu, Department of Computing and Information Sciences, Kansas State University
Reading for Next Class: Sections 3.5 – 3.7, p. 81 – 88, Russell & Norvig 2nd edition; Section 4.1, Russell & Norvig 2nd edition

Lecture Outline
Reading for Next Class: Sections 3.5 – 3.7, 4.1, R&N 2e
This Week: Search, Chapters 3 – 4
- State spaces
- Graph search examples
- Basic search frameworks: discrete and continuous
Coping with Time and Space Limitations of Uninformed Search
- Depth-limited and memory-bounded search
- Iterative deepening
- Bidirectional search
Intro to Heuristic Search
- What is a heuristic?
- Relationship to optimization, static evaluation, bias in learning
- Desired properties and applications of heuristics
Friday, Monday: Heuristic Search, Chapter 4

General Search Algorithm: Review
function General-Search (problem, strategy) returns a solution or failure
- initialize the search tree using the initial state of problem
- loop do
  - if there are no candidates for expansion then return failure
  - choose a leaf node for expansion according to strategy
  - if the node contains a goal state then return the corresponding solution
  - else expand the node and add the resulting nodes to the search tree
- end
Note: strategy is passed in as a downward function argument (funarg).
Implementation of General-Search (sketch below)
- Rest of Chapter 3 and Chapter 4, R&N
- See also: Ginsberg (handout in CIS library today); Rich and Knight; Nilsson, Principles of Artificial Intelligence
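A minimal Python sketch of this skeleton, assuming a hypothetical problem object with initial_state, goal_test(state), and expand(state) (none of these names come from the slides). The point is that the queueing function alone fixes the strategy, as the funarg note says.

    def general_search(problem, queueing_fn):
        """General-Search skeleton: the strategy is whatever queueing_fn does."""
        frontier = [problem.initial_state]      # search tree kept implicitly as a frontier of states
        while frontier:                          # no candidates for expansion -> failure
            state = frontier.pop(0)              # leaf chosen by how queueing_fn ordered the frontier
            if problem.goal_test(state):
                return state                     # corresponding solution
            frontier = queueing_fn(frontier, list(problem.expand(state)))  # expand, add successors
        return None                              # failure

    # Two strategies expressed purely as queueing functions:
    bfs_queueing = lambda frontier, successors: frontier + successors   # FIFO insertion -> breadth-first
    dfs_queueing = lambda frontier, successors: successors + frontier   # LIFO insertion -> depth-first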

Iterative Deepening Search: Review
Intuitive Idea
- Search incrementally: repeat depth-limited search with limits l = 0, 1, 2, …
- Anytime algorithm: return value on demand
Analysis
- Solution depth (in levels from root, i.e., edge depth): d
- b^i nodes generated at level i, and at least this many nodes to test
- Total: Σ_{i=0..d} b^i = 1 + b + b^2 + … + b^d = Θ(b^d)
- Worst-case space complexity: Θ(bd)
Properties
- Convergence: suppose b, l finite and l ≥ d
- Complete: guaranteed to find a solution
- Optimal: guaranteed to find a minimum-depth solution (why?)
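A sketch of iterative deepening in Python, under the same hypothetical problem interface as above (initial_state, goal_test, expand). The outer loop re-runs depth-limited search with limits 0, 1, 2, …, which is what keeps space at Θ(bd) while time stays Θ(b^d).

    def depth_limited_search(problem, state, limit):
        """Recursive depth-limited search; returns a path to a goal or None (cutoff/failure)."""
        if problem.goal_test(state):
            return [state]
        if limit == 0:
            return None
        for child in problem.expand(state):
            result = depth_limited_search(problem, child, limit - 1)
            if result is not None:
                return [state] + result
        return None

    def iterative_deepening_search(problem, max_depth=50):
        """Anytime flavor: each completed limit yields the shallowest answer found so far."""
        for limit in range(max_depth + 1):
            result = depth_limited_search(problem, problem.initial_state, limit)
            if result is not None:
                return result    # first hit is at minimum depth, hence the optimality claim above
        return None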

Bidirectional Search: Review
Intuitive Idea
- Search "from both ends"
- Caveat: what does it mean to "search backwards from the solution"?
Analysis
- Solution depth (in levels from root, i.e., edge depth): d
- b^i nodes generated at level i, and at least this many nodes to test
- Total: Σ_{i=0..d/2} b^i = 1 + b + b^2 + … + b^{d/2} = Θ(b^{d/2})
- Worst-case space complexity: Θ(b^{d/2})
Properties
- Convergence: suppose b, l finite and l ≥ d
- Complete: guaranteed to find a solution
- Optimal: guaranteed to find a minimum-depth solution
- Worst-case time complexity is the square root of that of BFS
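A bidirectional BFS sketch in Python. It assumes states are hashable, that there is a single explicit goal_state, and that a predecessors(state) operation exists alongside expand(state), which is exactly the "searching backwards from the solution" caveat. All names here are illustrative, not from the slides.

    from collections import deque

    def bidirectional_search(problem):
        """Alternate one BFS layer forward from the start and one backward from the goal."""
        start, goal = problem.initial_state, problem.goal_state
        if start == goal:
            return start
        seen_fwd, seen_bwd = {start}, {goal}
        frontier_fwd, frontier_bwd = deque([start]), deque([goal])
        while frontier_fwd and frontier_bwd:
            for _ in range(len(frontier_fwd)):           # one forward layer
                state = frontier_fwd.popleft()
                for child in problem.expand(state):
                    if child in seen_bwd:
                        return child                      # frontiers meet after ~b^(d/2) nodes per side
                    if child not in seen_fwd:
                        seen_fwd.add(child)
                        frontier_fwd.append(child)
            for _ in range(len(frontier_bwd)):           # one backward layer
                state = frontier_bwd.popleft()
                for parent in problem.predecessors(state):
                    if parent in seen_fwd:
                        return parent
                    if parent not in seen_bwd:
                        seen_bwd.add(parent)
                        frontier_bwd.append(parent)
        return None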

Informed (Heuristic) Search: Review
Previously: Uninformed (Blind) Search
- No heuristics: only g(n) used
- Breadth-first search (BFS) and variants: uniform-cost, bidirectional
- Depth-first search (DFS) and variants: depth-limited, iterative deepening
Heuristic Search
- Based on h(n) – estimated cost of path to goal ("remaining path cost")
- h – heuristic function
- g: node → R; h: node → R; f: node → R
- Using h
  - h only: greedy (aka myopic) informed search
  - f = g + h: (some) hill-climbing, A/A*
Branch and Bound Search
- Originates from operations research (OR)
- Special case of heuristic search: treat as h(n) = 0
- Sort candidates by g(n)
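To make the three orderings concrete, a minimal sketch; the node's path_cost field stands in for g(n), and both names are assumptions rather than anything defined on the slides.

    def greedy_eval(node, h):                  # h only: greedy best-first search
        return h(node.state)

    def astar_eval(node, h):                   # f = g + h: A/A* (and some hill-climbing variants)
        return node.path_cost + h(node.state)

    def branch_and_bound_eval(node, h=None):   # treat h as 0: sort candidates by g alone
        return node.path_cost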

Best-First Search [1]: Evaluation Function
Recall: General-Search
Applying Knowledge
- In problem representation (state space specification)
- At Insert(), aka Queueing-Fn(): determines the node to expand next
Knowledge Representation (KR)
- Expressing knowledge symbolically/numerically
- Objective
- Initial state
- State space (operators, successor function)
- Goal test: h(n) – part of the (heuristic) evaluation function

Best-First Search [2]: Characterization of Algorithm Family
Best-First: Family of Algorithms
- Justification: using only g does not direct the search toward the goal
- Nodes are ordered so that the node with the best evaluation-function value (e.g., h) is expanded first
- Best-first: any algorithm with this property (NB: not just those using h alone)
Note on "Best"
- Refers to the "apparent best node", based on the evaluation function applied to the current frontier
- Discussion: when is best-first not really best?

Best-First Search [3]: Implementation
function Best-First-Search (problem, Eval-Fn) returns solution sequence
- inputs: problem, a specification of the problem (structure or class); Eval-Fn, an evaluation function
- Queueing-Fn ← function that orders nodes by Eval-Fn
  - Compare: Sort with comparator function <
  - Functional abstraction
- return General-Search (problem, Queueing-Fn)
Implementation
- Recall: priority queue specification
- Eval-Fn: node → R
- Queueing-Fn = Sort-By: node list → node list
- Rest of design follows General-Search
Issues
- General family of greedy (aka myopic, i.e., nearsighted) algorithms
- Discussion: What guarantees do we want on h(n)? What preferences?
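A Python sketch of Best-First-Search in which the Queueing-Fn is realized by a priority queue (heapq) ordered by Eval-Fn. The problem interface is the same hypothetical one used in the earlier sketches, eval_fn is applied directly to states for simplicity, and the counter is only a tie-breaker so the heap never has to compare states.

    import heapq
    import itertools

    def best_first_search(problem, eval_fn):
        """General-Search with nodes expanded in order of lowest eval_fn value."""
        tie = itertools.count()
        start = problem.initial_state
        frontier = [(eval_fn(start), next(tie), start, [start])]   # (value, tie-breaker, state, path)
        expanded = set()
        while frontier:
            _, _, state, path = heapq.heappop(frontier)            # apparent best node on the frontier
            if problem.goal_test(state):
                return path                                         # solution sequence
            if state in expanded:
                continue
            expanded.add(state)
            for child in problem.expand(state):
                heapq.heappush(frontier, (eval_fn(child), next(tie), child, path + [child]))
        return None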

Heuristic Search [1]: Terminology
Heuristic Function
- Definition: h(n) = estimated cost of the cheapest path from the state at node n to a goal state
- Requirements for h
  - In general, any magnitude (ordered measure, admits comparison)
  - h(n) = 0 iff n is a goal
  - For A/A*, iterative improvement: want h to have the same type as g
  - Return type must admit addition
  - Problem-specific (domain-specific)
Typical Heuristics
- Graph search in Euclidean space: h_SLD(n) = straight-line distance to the goal
- Discussion (important): Why is this good?
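As a concrete instance of h_SLD, a small sketch that builds the heuristic from a table of city coordinates; the coords dictionary is hypothetical, and the slide's point is exactly that such coordinates must be known.

    import math

    def make_sld_heuristic(coords, goal):
        """Return h_SLD for a fixed goal, given coords: city -> (x, y)."""
        gx, gy = coords[goal]
        def h_sld(city):
            x, y = coords[city]
            return math.hypot(x - gx, y - gy)   # Euclidean straight-line distance; 0 at the goal itself
        return h_sld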

Heuristic Search [2]: Background
Origins of Term
- Heuriskein – to find (to discover)
- Heureka ("I have found it") – attributed to Archimedes
Usage of Term
- Mathematical logic in problem solving – Pólya [1957]
  - Methods for discovering, inventing problem-solving techniques
  - Mathematical proof derivation techniques
- Psychology: "rules of thumb" used by humans in problem-solving
- Pervasive through the history of AI
  - e.g., Stanford Heuristic Programming Project
  - One origin of rule-based (expert) systems
General Concept of Heuristic (A Modern View)
- Standard (rule, quantitative measure) used to reduce search
- "As opposed to exhaustive blind search"
- Compare (later): inductive bias in machine learning

Greedy Search [1]: A Best-First Algorithm
function Greedy-Search (problem) returns solution or failure
- // recall: solution has type Option
- return Best-First-Search (problem, h)
Example of the Straight-Line Distance (SLD) Heuristic: Figure 4.2 R&N
- Can only be calculated if city locations (coordinates) are known
- Discussion: Why is h_SLD useful?
  - Underestimate
  - Close estimate
Example: Figure 4.3 R&N (worked check in the sketch below)
- Is the solution optimal? Why or why not?
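A small worked check of the Figure 4.3 discussion; the step costs and h_SLD values below are reproduced from the familiar Romania example in R&N and should be treated as assumptions of this sketch rather than data from the slides.

    # Greedy best-first from Arad: h_SLD prefers Fagaras (176) over Rimnicu Vilcea (193),
    # so it reaches Bucharest via Fagaras, but that path is not the cheapest one.
    step = {("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99, ("Fagaras", "Bucharest"): 211,
            ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
            ("Pitesti", "Bucharest"): 101}
    h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
             "Pitesti": 100, "Bucharest": 0}

    path_cost = lambda p: sum(step[(a, b)] for a, b in zip(p, p[1:]))
    greedy_path = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
    optimal_path = ["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]
    assert path_cost(greedy_path) == 450     # what greedy returns
    assert path_cost(optimal_path) == 418    # the actual optimum: greedy is not optimal here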

Greedy Search [2]: Properties
Similar to DFS
- Prefers a single path to the goal
- Backtracks
Same Drawbacks as DFS?
- Not optimal: the first solution found is not necessarily the best
  - Discussion: How is this problem mitigated by the quality of h?
- Not complete; also does not consider the cumulative cost "so far" (g)
Worst-Case Time Complexity: O(b^m) – Why?
Worst-Case Space Complexity: O(b^m) – Why?

Next Topic: Informed (Heuristic) Search
Branch-and-Bound Search
Heuristics for the General-Search Function of Problem-Solving-Agent
- Informed (heuristic) search: heuristic definition, development process
- Best-First Search
  - Greedy
  - A/A*
  - Admissibility property
- Developing good heuristics
  - Humans
  - Intelligent systems (automatic derivation): case studies and principles
Constraint Satisfaction Heuristics
This Week: More Search Basics
- Memory-bounded, iterative improvement (gradient, Monte Carlo search)
- Introduction to game tree search

Terminology
State Space Search
Goal-Directed Reasoning, Planning
Search Types: Uninformed ("Blind") vs. Informed ("Heuristic")
Basic Search Algorithms
- British Museum (depth-first aka DFS), iterative-deepening DFS
- Breadth-First aka BFS, depth-limited, uniform-cost
- Bidirectional
- Branch-and-Bound
Properties of Search
- Soundness: returned candidate path satisfies specification
- Completeness: finds a path if one exists
- Optimality: (usually means) achieves minimal online path cost, i.e., returns the cheapest solution path
- Optimal efficiency: (usually means) minimal offline (search) cost among algorithms with the same guarantees

Summary Points
Reading for Next Class: Sections 3.2 – 3.4, R&N 2e
This Week: Search, Chapters 3 – 4
- State spaces
- Graph search examples
- Basic search frameworks: discrete and continuous
Uninformed ("Blind") and Informed ("Heuristic") Search
- Cost functions: online vs. offline
- Time and space complexity
- Heuristics: examples from graph search, constraint satisfaction
Relation to Intelligent Systems Concepts
- Knowledge representation: evaluation functions, macros
- Planning, reasoning, learning
Next Week: Heuristic Search, Chapter 4; Constraints, Chapter 5
Later: Goal-Directed Reasoning, Planning (Chapter 11)