CSE 473 Artificial Intelligence Review
Logistics
- Project reports and homework 5 due Monday 5pm
- Project demos Monday 5pm in 002
- Exam next Wed 8:30-10:30, regular classroom
- Closed book; covers all of the quarter’s material, with emphasis on material not covered on the midterm
Defining AI
Two dimensions: thought vs. behavior, and human-like vs. rational.
- Think, human-like: systems that think like humans
- Think, rational: systems that think rationally
- Act, human-like: systems that act like humans
- Act, rational: systems that act rationally
Goals of this Course
To introduce you to a set of key paradigms & techniques, and to teach you to identify when & how to use:
- Heuristic search
- Constraint satisfaction
- Logical inference
- Bayesian inference
- Policy construction
- Machine learning
Theme I: Problem Spaces & Search
How do you specify a problem space? What are the two kinds of search?
Learning as Search
- Decision trees
- Structure learning in Bayesian networks
- Unsupervised clustering
Theme II: In the Knowledge Lies the Power
Adding knowledge to search.
Heuristics
How do you generate them? What makes a heuristic admissible?
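As a refresher (not from the slides), here is a minimal A* sketch showing where an admissible heuristic plugs into search; `neighbors` and `h` are assumed callables supplied by the problem space:

```python
import heapq
import itertools

def astar(start, goal, neighbors, h):
    """A* search with f(n) = g(n) + h(n). If h never overestimates the
    true remaining cost (admissible), the first goal popped is optimal."""
    tie = itertools.count()                      # breaks ties in the heap
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None                                  # no path to the goal
```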
Propositional Logic vs. First-Order Logic
- Ontology: facts (P, Q) vs. objects, properties, relations
- Syntax: atomic sentences, connectives vs. variables & quantification; sentences have structure: terms, e.g. father-of(mother-of(X))
- Semantics: truth tables vs. interpretations (much more complicated)
- Inference algorithm: DPLL, GSAT (fast in practice) vs. unification, forward/backward chaining, Prolog, theorem proving
- Complexity: NP-complete vs. semi-decidable
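For the propositional side, a minimal truth-table entailment check (a generic textbook routine, not code from the course):

```python
import itertools

def tt_entails(kb, query, symbols):
    """KB |= query iff query is true in every model where KB is true.
    kb and query are predicates over a model dict {symbol: bool}."""
    for values in itertools.product((True, False), repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

# Example: {P, P -> Q} entails Q.
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
print(tt_entails(kb, lambda m: m["Q"], ["P", "Q"]))   # True
```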
Planning
Problem-solving algorithms that operate on explicit propositional representations of states and actions, and make use of specific heuristics.
- State-space search: forward (progression) / backward (regression) search
- Partial-order planners: search the space of plans from goal to start, adding actions to achieve goals
- GraphPlan: generates a planning graph to guide backward search for a plan
- SATplan: converts the planning problem into propositional axioms; uses a SAT solver to find a plan
Probabilistic Representations
How do we encode knowledge here? In the knowledge lies the power.
Uncertainty
- Joint distribution
- Prior & conditional probability
- Bayes rule
- [Conditional] independence
- Bayes nets (propositional; hot topic: extensions to FOL)

Example network: Earthquake and Burglary are parents of Alarm; Alarm is the parent of Nbr1Calls and Nbr2Calls; Earthquake is also the parent of Radio.
  Pr(B=t) = 0.05, Pr(B=f) = 0.95
  Pr(A | E,B): e,b: 0.9 (0.1); e,¬b: 0.2 (0.8); ¬e,b: 0.85 (0.15); ¬e,¬b: 0.01 (0.99)
Inference: P(B | J=true, M=true)
Network: Earthquake and Burglary are parents of Alarm; Alarm is the parent of JohnCalls and MaryCalls.
P(b | j,m) = α Σ_{e,a} P(b,j,m,e,a)
Exploiting the factored joint:
P(b | j,m) = α P(b) Σ_e P(e) Σ_a P(a | b,e) P(j | a) P(m | a)
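A sketch of this computation by enumeration. P(B) and P(A|E,B) are taken from the slide above; P(E), P(J|A), and P(M|A) are illustrative placeholders, since those tables are not reproduced here:

```python
import itertools

# CPTs: P_B and P_A come from the slides; the rest are assumed values
# for illustration only.
P_B = {True: 0.05, False: 0.95}
P_E = {True: 0.002, False: 0.998}                  # assumed prior
P_A = {(True, True): 0.9, (True, False): 0.2,      # keyed by (e, b)
       (False, True): 0.85, (False, False): 0.01}
P_J = {True: 0.9, False: 0.05}                     # P(J=t | A=a), assumed
P_M = {True: 0.7, False: 0.01}                     # P(M=t | A=a), assumed

def p_b_given_jm():
    """P(B | J=true, M=true): sum out E and A, then normalize (alpha)."""
    unnorm = {}
    for b in (True, False):
        total = 0.0
        for e, a in itertools.product((True, False), repeat=2):
            p_a = P_A[(e, b)] if a else 1 - P_A[(e, b)]
            total += P_B[b] * P_E[e] * p_a * P_J[a] * P_M[a]
        unnorm[b] = total
    z = sum(unnorm.values())
    return {b: p / z for b, p in unnorm.items()}

print(p_b_given_jm())
```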
Localization as a Dynamic Bayes Net
[Figure: DBN with map m; poses x_0, x_1, …, x_t; controls u_1, u_2, …, u_t; observations z_1, z_2, …, z_t]
Markov Assumption Helps!
[Figure: x_t depends only on x_{t-1} and u_t; z_t depends only on x_t]
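Under these assumptions the localization posterior reduces to the standard recursive Bayes filter (not spelled out on the slide):

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | x_{t-1}, u_t) Bel(x_{t-1}) dx_{t-1}

where η is a normalizing constant.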
Representations for Bayesian Robot Localization
- Kalman filters (late ’80s?): Gaussians, approximately linear models; position tracking
- Discrete approaches (’95):
  - Topological representation (’95): uncertainty handling (POMDPs); occasional global localization, recovery
  - Grid-based, metric representation (’96): global localization, recovery
- Particle filters (’99): sample-based representation; global localization, recovery
- Multi-hypothesis (’00): multiple Kalman filters; global localization, recovery
(The original slide laid these out on a timeline spanning the AI and robotics communities.)
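As a concrete example of the sample-based representation, a minimal particle filter update; `motion_sample` (draws from P(x’|x,u)) and `obs_likelihood` (evaluates P(z|x)) are assumed callables:

```python
import numpy as np

def particle_filter_step(particles, u, z, motion_sample, obs_likelihood):
    """One Bayes-filter update on a sample-based belief."""
    rng = np.random.default_rng()
    # Prediction: push each particle through the motion model P(x'|x,u).
    particles = np.array([motion_sample(x, u, rng) for x in particles])
    # Correction: weight each particle by the observation likelihood P(z|x').
    w = np.array([obs_likelihood(z, x) for x in particles])
    w /= w.sum()
    # Resampling: draw an equally weighted particle set for the next step.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```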
Sensor Board: Data Stream
(Courtesy G. Borriello)
Example Evaluation Run
- Decision stump classifiers (at 4 Hz)
- HMM with the stump probabilities as inputs (using a 15-second sliding window with 5-second overlap)
- Ground truth for a continuous hour-and-a-half segment of data
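The slides don’t spell out the HMM step; here is a generic Viterbi decoder sketch, assuming the per-frame classifier probabilities serve as emission scores:

```python
import numpy as np

def viterbi(obs_probs, trans, prior):
    """Most likely state sequence for an HMM whose emission scores are
    per-frame classifier probabilities (obs_probs: T x n array)."""
    T, n = obs_probs.shape
    logp = np.log(prior) + np.log(obs_probs[0])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(obs_probs[t])
    states = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):                # follow backpointers
        states.append(int(back[t][states[-1]]))
    return states[::-1]
```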
Specifying an MDP
- S = set of states (|S| = n)
- A = set of actions (|A| = m)
- Pr = transition function Pr(s,a,s’), represented by a set of m n×n stochastic matrices, each defining a distribution over S×S
- R(s) = bounded, real-valued reward function, represented by an n-vector
Max Bellman Backup, Value Iteration
Back up the values V_n of the successor states through each action a_1, a_2, a_3, … available at state s:
Q_{n+1}(s,a) = R(s) + γ Σ_{s’} Pr(s,a,s’) V_n(s’)
V_{n+1}(s) = max_a Q_{n+1}(s,a)
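A minimal value-iteration sketch matching the MDP specification above; γ and the stopping threshold are illustrative choices:

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, eps=1e-6):
    """T: list of m (n x n) stochastic matrices, T[a][s, s'] = Pr(s,a,s');
    R: n-vector of state rewards. Returns values and a greedy policy."""
    V = np.zeros(len(R))
    while True:
        # Q[a, s] = R(s) + gamma * sum_{s'} Pr(s,a,s') * V_n(s')
        Q = np.array([R + gamma * (T[a] @ V) for a in range(len(T))])
        V_new = Q.max(axis=0)            # V_{n+1}(s) = max_a Q_{n+1}(s,a)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=0)
        V = V_new
```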
Stochastic, Fully Observable
Why is Learning Possible?
Experience alone never justifies any conclusion about any unseen instance.
Learning occurs when PREJUDICE meets DATA!
Inductive Learning Method
Construct/adjust h to agree with f on the training set (h is consistent if it agrees with f on all examples).
E.g., curve fitting.
Ockham’s razor: prefer the simplest hypothesis consistent with the data.
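A small curve-fitting illustration of Ockham’s razor (my example, not the course’s): on noisy samples of a line, a low-degree fit generalizes better than a high-degree one:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)   # noisy samples of f

x_test = np.linspace(0, 1, 100)
for degree in (1, 7):
    coeffs = np.polyfit(x, y, degree)                # hypothesis h
    mse = np.mean((np.polyval(coeffs, x_test) - (2 * x_test + 1)) ** 2)
    print(f"degree {degree}: test MSE = {mse:.5f}")  # simpler h wins
```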
Decision Tree Overfitting
[Figure: accuracy (roughly 0.6-0.9) vs. number of nodes in the decision tree, plotted on training data and on test data]
[Figure: Anemia Group vs. Control Group]
And More
- Specific search & CSP algorithms
- Adversary search
- Inference in propositional & first-order logic
- Lots of details