COMP 8620 Advanced Topics in AI


COMP 8620 Advanced Topics in AI Lecturers: Philip Kilby & Jinbo Huang 6/12/2018

Part 1: Search. Lecturer: Dr Philip Kilby (Philip.Kilby@nicta.com.au), Weeks 1-7 (Week 7 is assignment seminars). Part 2: Probabilistic Reasoning with Bayesian networks. Lecturer: Dr Jinbo Huang (Jinbo.Huang@nicta.com.au), Weeks 8-13.

Course Outline – Part 1: 1-2 Introduction, Systematic Search; 3-4 Linear Programming / Integer Programming / Branch and Bound; 5-6 Neighbourhood-based methods: Simulated Annealing, Tabu Search, Genetic Algorithms; 7-8 Constraint Programming (Guest lecturer: Jason Li); 9-10 Case Studies: TSP, Planning (Jussi Rintanen); 11 Dynamic Programming; 12 Qualitative Reasoning (Guest lecturer: Jason Li); 13-14 Seminars (Guest lecturers: You lot!). Search – Lecture 1-2

Assessment: 40% Exam, 60% Assignment. Search: 1 x Assignment (next week) 15%; 1 x Seminar 15%. Reasoning: 3 x Assignments (10% each) (tentative).

What is Search? Search finds a solution to a problem. Picture a landscape of solutions: some are "good", and lots (and lots and lots) are "bad". How do we find the good ones?

What is Search? The landscape can be continuous or discrete (we will deal only with discrete). The landscape can be (approximated by?) a tree, or (approximated by?) a graph.

What is Search? Example: find X and Y with X + 2 * Y < 3, X ∈ [0,5], Y ∈ [1,5]. Candidate assignments such as X=4, Y=2 or X=0, Y=4 can be tested directly; alternatively the search can branch on the values of X (X=0, X=1, X=2, X=3, ...), on the values of Y (Y=1, Y=2, Y=3, Y=4, ...), or on a split such as X=0 versus X>0.

What is Search? Different flavours: find a solution (Satisfaction/Decision), or find the best solution (Combinatorial Optimisation). Decision problems come in two variants: the search variant (find a solution, or show none exists) and the decision variant (does a solution exist?).

What is Search? Constructive Search: find a solution by construction. Local Search: given a solution, explore the "neighbourhood" of that solution to find other (possibly better) solutions.

What is Search? Anytime algorithm (for optimisation problems): after a (short) startup time, an answer is available; the longer the algorithm runs, the better the solution; a quality guarantee may be available (e.g. the solution is within 5% of optimal). One-shot algorithm: the answer is only available at completion.

What is Search? A problem is defined by: an initial state; a successor function (an action leading from one state to another); a goal test; a path cost. A solution is a sequence of actions leading from the start state to a goal state.

Example: Romania [road map of Romania with step costs in km]

Problem formulation. A problem is defined by four items: the initial state, e.g. "at Arad"; the successor function S(x), a set of action–state pairs, e.g. S(Arad) = {<Arad → Zerind, at Zerind>, . . .}; the goal test, e.g. x = "at Bucharest", which can be implicit, e.g. HasAirport(x); the path cost (additive), e.g. sum of distances, number of actions executed, etc., where c(x, a, y) is the step cost, assumed to be ≥ 0.
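These four components might be sketched as a small Python class; `RouteProblem` and the road-map fragment below are illustrative names and data (a subset of the Romania map), not part of the lecture code.

```python
# A minimal sketch of the four-component problem definition:
# initial state, successor function, goal test, step cost.
# ROADS is an illustrative fragment of the Romania map, not the full map.
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Zerind": [("Arad", 75)],
    "Timisoara": [("Arad", 118)],
    "Bucharest": [],
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def successors(self, state):
        # Action-state pairs; the action here is simply "go <city>".
        return [("go " + city, city) for city, _ in ROADS[state]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        # Road distance of the single step from state to result.
        return dict(ROADS[state])[result]
```

For example, `RouteProblem("Arad", "Bucharest")` gives the formulation used in the Romania example.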

Tree Search

function Tree-Search (problem, fringe) returns a solution, or failure
  fringe ← Insert (Make-Node (Initial-State (problem)), fringe)
  loop do
    if Is-Empty (fringe) then return failure
    node ← Remove-Front (fringe)
    if Goal-Test (problem, State[node]) then return node
    fringe ← InsertAll (Expand (node, problem), fringe)

function Expand (node, problem) returns a set of nodes
  successors ← the empty set
  for each (action, result) in Successor-Fn (problem, State[node]) do
    s ← a new Node
    Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
    Path-Cost[s] ← Path-Cost[node] + Step-Cost (State[node], action, result)
    Depth[s] ← Depth[node] + 1
    successors ← Insert (s, successors)
  return successors
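The pseudocode above might be rendered in Python roughly as follows; `Node`, `tree_search`, and the toy problem are illustrative names, and the fringe is popped FIFO, which makes this particular instance breadth-first search.

```python
from collections import deque

# Tiny illustrative state space: each state's successor states.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

class ToyProblem:
    initial = "A"
    def successors(self, state):
        return [("go " + s, s) for s in GRAPH[state]]
    def goal_test(self, state):
        return state == "D"
    def step_cost(self, state, action, result):
        return 1

class Node:
    # Bookkeeping mirrors Expand: parent, action, path cost, depth.
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost, self.depth = action, path_cost, depth

def tree_search(problem, fringe):
    # Popping from the left of a deque gives a FIFO fringe (BFS).
    fringe.append(Node(problem.initial))
    while fringe:
        node = fringe.popleft()
        if problem.goal_test(node.state):
            return node
        for action, result in problem.successors(node.state):
            step = problem.step_cost(node.state, action, result)
            fringe.append(Node(result, node, action,
                               node.path_cost + step, node.depth + 1))
    return None  # failure
```

Swapping `popleft()` for `pop()` turns the fringe LIFO and the search depth-first.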

Breadth-first Search [animation over a binary tree of nodes A-O]: expand the shallowest unexpanded node first, so the tree is explored level by level: first the root, then all its children, then all nodes at depth 2, and so on, until the goal is found (marked !! in the final frame).

Depth-first Search [animation over the same tree]: expand the deepest unexpanded node first, following one branch all the way down to a leaf before backtracking to the most recent node with unexplored children, until the goal is found.

Iterative Deepening DFS [animation over the same tree]: run depth-first search with a depth limit of 0, then 1, then 2, and so on, restarting from the root after each iteration, until the goal is found. Shallow nodes are re-expanded on every iteration, but the total work is dominated by the deepest level.

Uniform cost search: expand nodes in order of path cost. Identical to breadth-first search if all step costs are equal. The number of nodes expanded depends on ε > 0, the smallest action cost. (If ε = 0, uniform-cost search may loop forever.)
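Expanding in order of path cost amounts to a priority queue ordered on g(n); a sketch, with illustrative names:

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    # neighbours(state) -> iterable of (next_state, step_cost) pairs.
    # Expands states in order of path cost g(n); with all step costs
    # equal this visits states in the same order as BFS.
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in explored:
            continue  # stale queue entry with a worse cost
        explored.add(state)
        for nxt, step in neighbours(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None  # no path
```

On a graph with edges A-B (cost 1), A-C (cost 4), B-C (cost 1), the search reaches C via B at cost 2 rather than directly at cost 4.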

Bi-directional Search: extend BFS from the start and goal states simultaneously. b^(d/2) + b^(d/2) << b^d.

Bi-directional Search restrictions: can only be used if there is one (or a few) known goal nodes, and only if actions are reversible, i.e. if we can efficiently calculate Pred(s) for a state s, where Pred(s) = { s′ : s ∈ Succ(s′) }.
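Assuming unit step costs, bidirectional breadth-first search might be sketched as follows; `succ` and `pred` correspond to Succ(s) and Pred(s) above, and all names are illustrative.

```python
from collections import deque

def expand_layer(queue, dist, neighbours):
    # Expand one full BFS layer, recording distances for new states.
    nxt = deque()
    for s in queue:
        for t in neighbours(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                nxt.append(t)
    return nxt

def bidirectional_bfs(start, goal, succ, pred):
    # BFS from both ends; returns the length of a shortest path, or None.
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        # Always advance the smaller frontier: roughly b^(d/2) work each side.
        if len(q_f) <= len(q_b):
            q_f = expand_layer(q_f, dist_f, succ)
        else:
            q_b = expand_layer(q_b, dist_b, pred)
        meet = [s for s in dist_f if s in dist_b]
        if meet:
            return min(dist_f[s] + dist_b[s] for s in meet)
    return None
```

On an undirected graph, `succ` and `pred` are the same function; on a directed graph, `pred` must walk edges backwards.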

Summary

Criterion   BFS          DFS      IDDFS    Uniform Cost   Bi-directional
Complete    Yes          No       Yes      Yes            Yes
Time        O(b^(d+1))   O(b^m)   O(b^d)   O(b^(C*/ε))    O(b^(d/2))
Space       O(b^(d+1))   O(bm)    O(bd)    O(b^(C*/ε))    O(b^(d/2))
Optimal     Yes*         No       Yes*     Yes            Yes*

(* for unit step costs.) b = branching factor; m = max depth of tree; d = depth of goal node; C* = cost of goal; ε = min action cost.

Repeated State Detection: test for previously-seen states. Essentially, the search graph then mimics the state graph.

Repeated State Detection Can make exponential improvement in time Can make DFS complete  DFS now has potentially exponential space requirement  Have to be careful to guarantee optimality In practice, almost always use repeated state detection 6/12/2018 Search – Lecture 1-2

A* Search. Idea: avoid expanding paths that are already expensive. Evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n and h(n) = estimated cost from n to the closest goal, so f(n) = estimated total cost of the path through n to the goal. A* search uses an admissible heuristic, i.e. h(n) ≤ h*(n), where h*(n) is the true cost from n. (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.) E.g., the straight-line distance hSLD(n) never overestimates the actual road distance. When h(n) is admissible, f(n) never overestimates the total cost of the shortest path through n to the goal. Theorem: A* search is optimal.
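The evaluation function can be implemented with a priority queue ordered on f(n) = g(n) + h(n). The sketch below uses illustrative names, a fragment of the Romania road map, and the straight-line distances to Bucharest from the worked example.

```python
import heapq

ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Oradea", 151), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Craiova", 138), ("Bucharest", 101)],
    "Craiova": [("Rimnicu Vilcea", 146), ("Pitesti", 138)],
    "Bucharest": [],
}
# Straight-line distances to Bucharest (the admissible heuristic hSLD).
HSLD = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329,
        "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
        "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def astar(start, goal, neighbours, h):
    # Pop in order of f = g + h; with admissible h, the first goal
    # popped lies on an optimal path.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step in neighbours(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

Run on Arad to Bucharest, this reproduces the expansion order of the worked example and returns the cost-418 route via Sibiu, Rimnicu Vilcea and Pitesti rather than the cost-450 route via Fagaras.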

A* Example (Romania, start Arad, goal Bucharest; each node shows f = g + h):
Expand Arad (366 = 0+366) → Sibiu (393 = 140+253), Timisoara (447 = 118+329), Zerind (449 = 75+374).
Expand Sibiu (393) → Arad (646 = 280+366), Fagaras (415 = 239+176), Oradea (671 = 291+380), Rimnicu Vilcea (413 = 220+193).
Expand Rimnicu Vilcea (413) → Craiova (526 = 366+160), Pitesti (417 = 317+100), Sibiu (553 = 300+253).
Expand Fagaras (415) → Sibiu (591 = 338+253), Bucharest (450 = 450+0).
Expand Pitesti (417) → Bucharest (418 = 418+0), Craiova (615 = 455+160), Rimnicu Vilcea (607 = 414+193).
Bucharest (418) is now the cheapest node on the fringe, so A* returns the path Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest with cost 418.

A* Search: optimal; no other optimal algorithm expands fewer nodes (modulo the order of expansion of nodes with the same cost); but it has an exponential space requirement.

Greedy Search: use the (always admissible) heuristic h(n) = 0 in f(n) = g(n) + h(n), i.e. always expand the cheapest node found so far. (With h ≡ 0, A* reduces to uniform-cost search.)

Variants of A* Search: many variants of A* have been developed. Weighted A*: f(n) = g(n) + W h(n). Not exact, but faster; the answer is within a factor W of optimal. [Figures contrast the set of nodes expanded by A* with the narrower set expanded by Weighted A*.]

Variants of A* Search: Iterative Deepening A* (IDA*) incrementally increases a bound on the f-cost, re-running a depth-first search within each bound. Memory-bounded A* imposes a bound on memory, discarding the nodes with the largest f(n) if necessary. Others to come in the seminars…
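IDA* might be sketched as follows: a depth-first probe bounded on f = g + h, with the bound raised after each failed pass to the smallest f-value that exceeded it. All names are illustrative, and the memory used is linear in the length of the current path.

```python
def ida_star(start, goal_test, neighbours, h):
    # neighbours(state) -> iterable of (next_state, step_cost) pairs.
    bound = h(start)
    path = [start]

    def probe(g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f            # exceeded bound: candidate for the next bound
        if goal_test(state):
            return "FOUND"
        minimum = float("inf")
        for nxt, step in neighbours(state):
            if nxt not in path:  # avoid cycles along the current path
                path.append(nxt)
                t = probe(g + step, bound)
                if t == "FOUND":
                    return t
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = probe(0, bound)
        if t == "FOUND":
            return path
        if t == float("inf"):
            return None          # no goal reachable
        bound = t                # deepen the f-bound, not the depth
```

Each pass repeats the work of the previous one, mirroring the trade-off of iterative deepening DFS.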

Limited Discrepancy Search: requires a heuristic to order the choices (e.g. an admissible heuristic, though an underestimate is not required). If the heuristic were always right, we could simply take the choice it recommends at every node. LDS allows the heuristic to make mistakes: k-LDS allows at most k mistakes ("discrepancies").
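A simplified recursive sketch of LDS on a binary tree, assuming an illustrative `heuristic_child_first` function that returns the heuristic's preferred child first. (Real LDS implementations iterate k = 0, 1, 2, ... so that paths with fewer discrepancies are tried first.)

```python
def lds(node, heuristic_child_first, goal_test, k):
    # Explore only paths that disagree with the heuristic's preferred
    # choice at most k times.
    # heuristic_child_first(node) -> (preferred, other) children, or None at a leaf.
    if goal_test(node):
        return node
    children = heuristic_child_first(node)
    if children is None:
        return None
    preferred, other = children
    # Following the heuristic costs no discrepancies.
    found = lds(preferred, heuristic_child_first, goal_test, k)
    if found is not None:
        return found
    # Going against it costs one discrepancy, if any remain.
    if k > 0:
        return lds(other, heuristic_child_first, goal_test, k - 1)
    return None
```

For example, in a depth-2 binary tree a leaf reached by disagreeing with the heuristic at both levels needs k = 2, so 1-LDS will miss it.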

Limited Discrepancy Search, k = 1 [animation over a binary tree]: the search first follows the heuristic's preferred child at every node, then explores each path that disagrees with the heuristic exactly once (the branches marked 1 in the frames), covering every leaf reachable with at most one discrepancy.

Next week…: (Integer) Linear Programming and Branch & Bound.