COMP 8620 Advanced Topics in AI
Lecturers: Philip Kilby & Jinbo Huang
Part 1: Search. Lecturer: Dr Philip Kilby, Weeks 1-7 (Week 7 is assignment seminars)
Part 2: Probabilistic Reasoning with Bayesian Networks. Lecturer: Dr Jinbo Huang, Weeks 8-13
3
Course Outline – Part 1 1-2 Introduction Systematic Search
3-4 Linear Programming / Integer Programming / Branch and Bound 5-6 Neighbourhood-based methods: Simulated Annealing Tabu Search Genetic Algorithms 7-8 Constraint Programming (Guest lecturer: Jason Li) 7-8 Constraint Programming (Guest lecturer: Jason Li) 9-10 Case Studies: TSP Planning (Jussi Rintanen) 11 Dynamic Programming 12 Qualitative Reasoning (Guest lecturer: Jason Li) 13-14 Seminars (Guest Lecturers: You lot!) 6/12/2018 Search – Lecture 1-2
Assessment
  40% Exam
  60% Assignment
    Search: 1 x Assignment (next week) 15%; 1 x Seminar 15%
    Reasoning: 3 x Assignments (10% each) (tentative)
What is Search?
Search finds a solution to a problem. In the landscape of solutions, some are "good"; lots (and lots and lots) are "bad". How do we find the good ones?
What is Search?
The landscape can be continuous or discrete (we will only deal with discrete). The landscape can be (approximated by?) a tree, or by a graph.
What is Search?
Example: X + 2*Y < 3, with X ∈ [0,5], Y ∈ [1,5].
[A sequence of slides shows this example pictorially: candidate assignments in the landscape (e.g. X=4, Y=2 and X=0, Y=4), a search tree branching first on X (X=0, X=1, X=2, X=3) and then on Y (Y=1, Y=2, Y=3, Y=4), and a two-way split into X=0 versus X>0.]
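To make the example concrete, here is a minimal sketch (ours, not from the slides) that enumerates the whole discrete landscape and keeps the satisfying assignments:

    # Enumerate the finite landscape of X + 2*Y < 3, X in [0,5], Y in [1,5],
    # keeping only the assignments that satisfy the constraint.
    solutions = [(x, y)
                 for x in range(0, 6)   # X in [0,5]
                 for y in range(1, 6)   # Y in [1,5]
                 if x + 2 * y < 3]
    print(solutions)                    # [(0, 1)]: the only satisfying assignment

Systematic search does the same job without visiting every point, by pruning branches (such as X>0) that can contain no solution.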
What is Search?
Different flavours:
  Find a solution (Satisfaction/Decision)
  Find the best solution (Combinatorial Optimisation)
Decision problems:
  Search variant: find a solution (or show none exists)
  Decision variant: does a solution exist?
What is Search?
Constructive Search: find a solution by construction.
Local Search: given a solution, explore the "neighbourhood" of that solution to find other (possibly better) solutions.
What is Search?
Anytime algorithm (for optimisation problems):
  After a (short) startup time, an answer is available
  The longer the algorithm is run, the better the solution
  A quality guarantee may be available (e.g. the solution is within 5% of optimal)
One-shot algorithm:
  Answer only available at completion
What is Search?
A problem is defined by:
  Initial state
  Successor function (an action leading from one state to another)
  Goal test
  Path cost
A solution is a sequence of actions leading from the start state to a goal state.
Example: Romania
[Map of Romania showing cities and road distances.]
Problem formulation
A problem is defined by four items:
  initial state: e.g., "at Arad"
  successor function S(x): a set of action–state pairs, e.g., S(Arad) = {<Arad → Zerind, at Zerind>, ...}
  goal test: e.g., x = "at Bucharest"; can be implicit, e.g., HasAirport(x)
  path cost (additive): e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
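The four items map directly onto a small interface. The following Python sketch is illustrative only; the class and method names are ours, not the lecture's:

    class Problem:
        """The four-item problem definition: initial state, successor
        function, goal test, and (additive) path cost."""
        def __init__(self, initial_state):
            self.initial_state = initial_state      # e.g. "Arad"

        def successors(self, state):
            """S(x): an iterable of (action, result_state) pairs."""
            raise NotImplementedError

        def goal_test(self, state):
            """True if state is a goal, e.g. state == "Bucharest"."""
            raise NotImplementedError

        def step_cost(self, state, action, result):
            """c(x, a, y), assumed >= 0; path cost is the sum of step costs."""
            raise NotImplementedError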
Tree Search

    function Tree-Search(problem, fringe) returns a solution, or failure
        fringe ← Insert(Make-Node(Initial-State(problem)), fringe)
        loop do
            if Is-Empty(fringe) then return failure
            node ← Remove-Front(fringe)
            if Goal-Test(problem, State[node]) then return node
            fringe ← Insert-All(Expand(node, problem), fringe)

    function Expand(node, problem) returns a set of nodes
        successors ← the empty set
        for each (action, result) in Successor-Fn(problem, State[node]) do
            s ← a new Node
            Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
            Path-Cost[s] ← Path-Cost[node] + Step-Cost(State[node], action, result)
            Depth[s] ← Depth[node] + 1
            successors ← Insert(s, successors)
        return successors
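The pseudocode translates almost line for line into Python. This is a hedged sketch built on the Problem interface above (the names are ours); note that the fringe discipline, FIFO versus LIFO, is the only thing that distinguishes the searches on the following slides:

    class Node:
        """A search-tree node: a state plus bookkeeping."""
        def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
            self.state, self.parent, self.action = state, parent, action
            self.path_cost, self.depth = path_cost, depth

    def expand(node, problem):
        """One child Node per (action, result) successor pair."""
        return [Node(result, node, action,
                     node.path_cost + problem.step_cost(node.state, action, result),
                     node.depth + 1)
                for action, result in problem.successors(node.state)]

    def tree_search(problem, pop_front=True):
        """Tree-Search: pop_front=True is a FIFO fringe, False is LIFO."""
        fringe = [Node(problem.initial_state)]
        while fringe:
            node = fringe.pop(0) if pop_front else fringe.pop()
            if problem.goal_test(node.state):
                return node
            fringe.extend(expand(node, problem))
        return None    # failure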
Breadth-first Search
[Animation over fourteen slides: a tree with nodes A, B, C, D, E, F, G, H, I, L, M, O is expanded level by level, shallowest nodes first, until the goal node is reached (marked "!!").]
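In terms of the tree_search sketch above, breadth-first search is simply the FIFO fringe; using a deque (our choice, not the slides') makes front pops cheap:

    from collections import deque

    def breadth_first_search(problem):
        """Breadth-first: a FIFO fringe expands shallowest nodes first."""
        fringe = deque([Node(problem.initial_state)])
        while fringe:
            node = fringe.popleft()
            if problem.goal_test(node.state):
                return node
            fringe.extend(expand(node, problem))
        return None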
Depth-first Search
[Animation over fourteen slides: the same tree (nodes A, B, C, D, E, F, G, H, I, L, M, O) is explored deepest node first, backtracking from dead ends, until the goal node is reached (marked "!!"). In tree_search terms, this is the LIFO fringe.]
Iterative Deepening DFS
[Animation over twenty-eight slides: depth-first search is re-run with increasing depth limits over the tree (nodes B, C, D, E, F, G, H, I, L, M, O below the root). Each iteration re-explores the shallow levels and goes one level deeper, until the goal node is reached (marked "!!").]
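A sketch of the idea in the same style as the code above (the depth-limit loop and the helper are ours):

    def depth_limited_search(problem, node, limit):
        """DFS that refuses to go below the given depth limit."""
        if problem.goal_test(node.state):
            return node
        if limit == 0:
            return None                     # cutoff
        for child in expand(node, problem):
            result = depth_limited_search(problem, child, limit - 1)
            if result is not None:
                return result
        return None

    def iterative_deepening_search(problem, max_depth=50):
        """Re-run depth-limited DFS with limits 0, 1, 2, ..."""
        for limit in range(max_depth + 1):
            result = depth_limited_search(problem, Node(problem.initial_state), limit)
            if result is not None:
                return result
        return None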
Uniform Cost Search
Expand nodes in order of path cost.
Identical to Breadth-first Search if all step costs are equal.
The number of nodes expanded depends on ε > 0, the smallest action cost. (If ε = 0, Uniform Cost Search may loop forever.)
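A heap-based sketch in the same style (heapq and the tie-breaking counter are our choices):

    import heapq
    from itertools import count

    def uniform_cost_search(problem):
        """Expand nodes in order of path cost g(n)."""
        tie = count()      # tie-breaker so the heap never compares Nodes
        frontier = [(0, next(tie), Node(problem.initial_state))]
        while frontier:
            g, _, node = heapq.heappop(frontier)
            if problem.goal_test(node.state):   # test on expansion, not generation
                return node
            for child in expand(node, problem):
                heapq.heappush(frontier, (child.path_cost, next(tie), child))
        return None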
Bi-directional Search
Extend BFS from the Start and Goal states simultaneously:
b^(d/2) + b^(d/2) << b^d
Bi-directional Search
Restrictions:
  Can only be used with 1 (or a few) known goal nodes
  Can only be used if actions are reversible, i.e. if we can efficiently calculate Pred(s) for a state s, where Pred(s) is the set of states from which some action leads to s
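A sketch of the idea for a single known goal state; the predecessors() method is hypothetical and stands in for Pred(s):

    from collections import deque

    def bidirectional_search(problem, goal_state):
        """Alternate one BFS layer from the start and one from the goal;
        returns the state where the two frontiers meet (a sketch)."""
        if problem.initial_state == goal_state:
            return goal_state
        seen_fwd, seen_bwd = {problem.initial_state}, {goal_state}
        front_fwd, front_bwd = deque(seen_fwd), deque(seen_bwd)
        while front_fwd and front_bwd:
            for _ in range(len(front_fwd)):          # one layer forward
                s = front_fwd.popleft()
                for _, t in problem.successors(s):
                    if t in seen_bwd:
                        return t                     # frontiers meet
                    if t not in seen_fwd:
                        seen_fwd.add(t); front_fwd.append(t)
            for _ in range(len(front_bwd)):          # one layer backward
                s = front_bwd.popleft()
                for t in problem.predecessors(s):    # Pred(s): hypothetical method
                    if t in seen_fwd:
                        return t
                    if t not in seen_bwd:
                        seen_bwd.add(t); front_bwd.append(t)
        return None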
Summary

    Criterion   BFS                 DFS     IDDFS               Uniform Cost   Bi-directional
    Complete    Yes                 No      Yes                 Yes (ε > 0)    Yes
    Time        O(b^(d+1))          O(b^m)  O(b^d)              O(b^(C*/ε))    O(b^(d/2))
    Space       O(b^(d+1))          O(bm)   O(bd)               O(b^(C*/ε))    O(b^(d/2))
    Optimal     Yes, for unit cost  No      Yes, for unit cost  Yes            Yes, for unit cost

b = branching factor; m = max depth of tree; d = depth of goal node
C* = cost of goal; ε = min action cost
Repeated State Detection
Test for previously-seen states.
Essentially, the search graph then mimics the state graph.
Repeated State Detection
  Can give an exponential improvement in time
  Can make DFS complete
  DFS then has a potentially exponential space requirement
  Have to be careful to guarantee optimality
In practice, repeated state detection is almost always used.
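As a sketch, repeated-state detection is one extra set on top of the tree_search code above:

    def graph_search(problem, pop_front=True):
        """tree_search plus an explored set: never expand a state twice."""
        fringe = [Node(problem.initial_state)]
        explored = set()
        while fringe:
            node = fringe.pop(0) if pop_front else fringe.pop()
            if problem.goal_test(node.state):
                return node
            if node.state in explored:
                continue                    # previously-seen state: skip
            explored.add(node.state)
            fringe.extend(expand(node, problem))
        return None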
A* Search
Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n):
  g(n) = cost so far to reach n
  h(n) = estimated cost from n to the closest goal
  f(n) = estimated total cost of the path through n to a goal
A* search uses an admissible heuristic, i.e. h(n) ≤ h*(n), where h*(n) is the true cost from n. (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.)
E.g., the straight-line distance h_SLD(n) never overestimates the actual road distance.
When h(n) is admissible, f(n) never overestimates the total cost of the shortest path through n to the goal.
Theorem: A* search is optimal.
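A sketch in the same style as the earlier code, with the heuristic h supplied by the caller:

    import heapq
    from itertools import count

    def a_star_search(problem, h):
        """Expand in order of f(n) = g(n) + h(n); optimal when h is
        admissible (this tree-search version needs no further assumptions)."""
        tie = count()
        start = Node(problem.initial_state)
        frontier = [(h(start.state), next(tie), start)]
        while frontier:
            f, _, node = heapq.heappop(frontier)
            if problem.goal_test(node.state):
                return node
            for child in expand(node, problem):
                heapq.heappush(frontier,
                               (child.path_cost + h(child.state), next(tie), child))
        return None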
A* Example (Romania: start Arad, goal Bucharest; each node is labelled f = g + h)
[Animation over thirteen slides, tracing the expansion order:
  Arad 366=0+366
  Expand Arad: Sibiu 393=140+253, Timisoara 447=118+329, Zerind 449=75+374
  Expand Sibiu: Arad 646, Fagaras 415, Oradea 671, Rimnicu Vilcea 413
  Expand Rimnicu Vilcea: Craiova 526, Pitesti 417, Sibiu 553
  Expand Fagaras: Sibiu 591, Bucharest 450=450+0
  Expand Pitesti: Bucharest 418=418+0, Craiova 615, Rimnicu Vilcea 607
The goal Bucharest is returned with f = 418 (via Sibiu, Rimnicu Vilcea, Pitesti), beating the Fagaras route's 450.]
A* Search
  Optimal: no other optimal algorithm expands fewer nodes (modulo the order of expansion of nodes with the same cost)
  Exponential space requirement
Greedy Search
Use (the always admissible) h(n) = 0 in f(n) = g(n) + h(n), i.e. always expand the cheapest node found so far.
Variants of A* Search
Many variants of A* have been developed.
Weighted A*: f(n) = g(n) + W·h(n)
  Not exact, but faster
  Answer is within a factor W of optimal
[Two diagrams contrast the regions of nodes expanded by A* and by Weighted A*.]
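In code, Weighted A* can be a one-line wrapper around the a_star_search sketch above: inflating the heuristic by W is the same as using f = g + W·h (the wrapper and the default W are our illustration):

    def weighted_a_star(problem, h, W=1.5):
        """A* with f(n) = g(n) + W*h(n); solution cost is within a
        factor W of optimal."""
        return a_star_search(problem, lambda s: W * h(s))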
Variants of A* Search
Iterative Deepening A* (IDA*):
  Incrementally increase the cut-off (a bound on f-cost, where IDDFS bounds depth)
Memory-bounded A*:
  Impose a bound on memory
  Discard nodes with the largest f(x) if necessary
Others to come in the seminars…
Limited Discrepancy Search
Requires a heuristic to order the choices (e.g. an admissible heuristic, though an underestimate is not required).
If the heuristic were always right, we could simply take the choice it gives at every node.
LDS allows the heuristic to make mistakes: k-LDS allows k mistakes ("discrepancies"). A sketch in code follows the walkthrough below.
Limited Discrepancy Search (k = 1)
[Animation over eleven slides: a binary tree with nodes B through O below the root; each edge that goes against the heuristic's preferred choice is labelled "1". The search first follows the heuristic everywhere, then revisits paths that spend the single allowed discrepancy at each depth in turn.]
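A recursive sketch of k-LDS over a binary tree; the children() ordering convention and all names here are our assumptions:

    def lds(node, k, goal_test, children):
        """k-LDS: following the heuristic's preferred child is free;
        taking the other child costs one of the k allowed discrepancies."""
        if goal_test(node):
            return node
        kids = children(node)           # (preferred, other), heuristic order
        if not kids:
            return None
        preferred, other = kids
        result = lds(preferred, k, goal_test, children)
        if result is None and k > 0:
            result = lds(other, k - 1, goal_test, children)
        return result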
Next week…
(Integer) Linear Programming and Branch & Bound