AI Adjacent Fields
- Philosophy: logic, methods of reasoning; mind as physical system; foundations of learning, language, rationality
- Mathematics: formal representation and proof; algorithms, computation, (un)decidability, (in)tractability; probability and statistics
- Psychology: adaptation; phenomena of perception and motor control; experimental techniques (psychophysics, etc.)
- Economics: formal theory of rational decisions
- Linguistics: knowledge representation, grammar
- Neuroscience: physical substrate for mental activity
- Control theory: homeostatic systems, stability; simple optimal agent designs
This slide deck courtesy of Dan Klein at UC Berkeley

How Much of AI is Math?
- A lot, but not right away
- Understanding probabilities will help you a great deal
- In later weeks, there will be many more equations
- Countdown to math: from the 1st lecture to MDPs (math)

Reflex Agents
- Choose action based on current percept (and maybe memory)
- May have memory or a model of the world's current state
- Do not consider the future consequences of their actions
- Act on how the world IS
- Can a reflex agent be rational?

Goal-Based Agents
- Plan ahead; ask "what if"
- Decisions based on (hypothesized) consequences of actions
- Must have a model of how the world evolves in response to actions
- Act on how the world WOULD BE

Search Problems
- A search problem consists of:
  - A state space
  - A transition function (arcs carry actions and costs, e.g. "N", 1.0 and "E", 1.0)
  - A start state and a goal test
- A solution is a sequence of actions (a plan) which transforms the start state into a goal state
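A minimal sketch of this abstraction in code; the class and method names below are illustrative, not something the slides prescribe:

```python
class SearchProblem:
    """Abstract search problem: states, a transition function, a start state, a goal test."""

    def get_start_state(self):
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def get_successors(self, state):
        """Return (action, next_state, step_cost) triples reachable from state."""
        raise NotImplementedError
```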

Example: Romania
- State space: cities
- Transition function: go to an adjacent city, with cost = distance
- Start state: Arad
- Goal test: is state == Bucharest?
- Solution?
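As a concrete sketch, a fragment of the standard AIMA Romania road map can back the illustrative SearchProblem interface above (distances in km from the textbook map; only a few cities shown):

```python
# Fragment of the standard AIMA Romania road map (distances in km).
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

def successors(city):
    """Transition function: (action, next_state, cost) for each adjacent city."""
    return [("go-" + nxt, nxt, cost) for nxt, cost in ROADS.get(city, {}).items()]

def is_goal(city):
    return city == "Bucharest"
```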

State Space Graphs
- A state space graph is a mathematical representation of a search problem
- For every search problem, there's a corresponding state space graph
- The successor function is represented by arcs
- We can rarely build this graph in memory (so we don't)
[Figure: a ridiculously tiny search graph (nodes S, a–h, p–r, G) for a tiny search problem]

State Space Sizes?
- Search problem: eat all of the food
- Pacman positions: 10 x 12 = 120
- Food count: 30
- Ghost positions: 12
- Pacman facing: up, down, left, right
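As a rough back-of-the-envelope count, assuming, say, two ghosts and that the quantities above vary independently (the slide itself leaves the answer open):

```python
pacman_positions = 120        # 10 x 12 grid
facings = 4                   # up, down, left, right
food_configs = 2 ** 30        # each of the 30 food pellets is eaten or not
ghost_positions = 12          # possible positions per ghost
num_ghosts = 2                # assumption: two ghosts on this board

world_states = pacman_positions * facings * food_configs * ghost_positions ** num_ghosts
print(world_states)           # 120 * 4 * 144 * 2**30, roughly 7.4e13 world states
```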

Search Trees
- A search tree is a "what if" tree of plans and their outcomes
- The start state is the root node
- Children correspond to successors
- Nodes contain states and correspond to PLANS to reach those states
- For most problems, we can never actually build the whole tree
(Tree arcs are labeled with actions and costs, e.g. "N", 1.0 and "E", 1.0)

Another Search Tree
- Search:
  - Expand out possible plans
  - Maintain a fringe of unexpanded plans
  - Try to expand as few tree nodes as possible

General Tree Search
- Important ideas:
  - Fringe
  - Expansion
  - Exploration strategy
- Main question: which fringe nodes to explore?
- Detailed pseudocode is in the book!
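A minimal sketch of the generic loop, in terms of the illustrative SearchProblem interface above; the fringe's ordering is what distinguishes the strategies that follow:

```python
def tree_search(problem, fringe):
    """Generic tree search: the fringe data structure determines the strategy."""
    fringe.push((problem.get_start_state(), []))    # fringe holds (state, plan) pairs
    while not fringe.is_empty():
        state, plan = fringe.pop()                  # choose a fringe node to expand
        if problem.is_goal(state):
            return plan                             # plan = sequence of actions
        for action, next_state, _cost in problem.get_successors(state):
            fringe.push((next_state, plan + [action]))
    return None                                     # no solution found
```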

Example: Tree Search
[Figure: stepping through tree search on the tiny example graph (S, a–h, p–r, G)]

State Graphs vs. Search Trees
- We construct both on demand, and we construct as little as possible
- Each NODE in the search tree is an entire PATH in the problem graph
[Figure: the tiny state space graph alongside the search tree it unrolls into]

States vs. Nodes
- Nodes in state space graphs are problem states
  - Represent an abstracted state of the world
  - Have successors, can be goal / non-goal, can have multiple predecessors
- Nodes in search trees are plans
  - Represent a plan (sequence of actions) which results in the node's state
  - Have a problem state and one parent, a path length, a depth, and a cost
- The same problem state may be achieved by multiple search tree nodes
[Figure: problem states vs. search nodes, showing parent links, actions, and depths 5 and 6]
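A search tree node might be represented roughly like this (an illustrative sketch, not code from the slides):

```python
class Node:
    """A search tree node: a problem state plus the plan that reached it."""

    def __init__(self, state, parent=None, action=None, step_cost=0.0):
        self.state = state
        self.parent = parent          # exactly one parent (None for the root)
        self.action = action          # action taken to get here from the parent
        self.path_cost = (parent.path_cost if parent else 0.0) + step_cost
        self.depth = (parent.depth + 1) if parent else 0

    def plan(self):
        """Recover the sequence of actions from the root to this node."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```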

Review: Depth First Search
- Strategy: expand the deepest node first
- Implementation: the fringe is a LIFO stack
[Figure: DFS expansion order on the tiny example graph]
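Plugging a LIFO stack into the generic loop gives DFS; a minimal sketch using the illustrative helpers above:

```python
def depth_first_search(problem):
    """Tree-search DFS: the fringe is a LIFO stack (a plain Python list)."""
    fringe = [(problem.get_start_state(), [])]      # stack of (state, plan) pairs
    while fringe:
        state, plan = fringe.pop()                  # pop() takes the deepest node first
        if problem.is_goal(state):
            return plan
        for action, next_state, _cost in problem.get_successors(state):
            fringe.append((next_state, plan + [action]))
    return None
```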

Review: Breadth First Search
- Strategy: expand the shallowest node first
- Implementation: the fringe is a FIFO queue
[Figure: BFS on the tiny example graph, expanding in search tiers of increasing depth]
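The same loop with a FIFO queue is BFS (again a sketch in terms of the illustrative helpers):

```python
from collections import deque

def breadth_first_search(problem):
    """Tree-search BFS: the fringe is a FIFO queue."""
    fringe = deque([(problem.get_start_state(), [])])
    while fringe:
        state, plan = fringe.popleft()              # popleft() takes the shallowest node first
        if problem.is_goal(state):
            return plan
        for action, next_state, _cost in problem.get_successors(state):
            fringe.append((next_state, plan + [action]))
    return None
```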

Search Algorithm Properties
- Complete? Guaranteed to find a solution if one exists?
- Optimal? Guaranteed to find the least-cost path?
- Time complexity?
- Space complexity?
Variables:
- n: number of states in the problem
- b: the average branching factor (average number of successors)
- C*: cost of the least-cost solution
- s: depth of the shallowest solution
- m: max depth of the search tree

DFS
- Infinite paths make DFS incomplete...
- How can we fix this?

Algorithm                   Complete   Optimal   Time       Space
DFS (Depth First Search)    N          N         Infinite   Infinite
(Without infinite paths these would be O(b^LMAX) time and O(LMAX) space, but cycles make LMAX unbounded.)
[Figure: a tiny graph with a cycle among START, a, and b before GOAL]

DFS
- With cycle checking, DFS is complete.*
- When is DFS optimal?

Algorithm                Complete   Optimal   Time          Space
DFS w/ Path Checking     Y          N         O(b^(m+1))    O(bm)

[Figure: a search tree with 1 node, b nodes, b^2 nodes, ..., b^m nodes across m tiers]
* Or graph search – next lecture.

BFS
- When is BFS optimal?

Algorithm                Complete   Optimal   Time          Space
DFS w/ Path Checking     Y          N         O(b^(m+1))    O(bm)
BFS                      Y          N*        O(b^(s+1))    O(b^s)

[Figure: the same tree; BFS explores only the top s tiers, roughly b^s nodes]

Comparisons
- When will BFS outperform DFS?
- When will DFS outperform BFS?

Iterative Deepening
Iterative deepening uses DFS as a subroutine:
1. Do a DFS which only searches for paths of length 1 or less.
2. If "1" failed, do a DFS which only searches paths of length 2 or less.
3. If "2" failed, do a DFS which only searches paths of length 3 or less.
...and so on.

Algorithm                Complete   Optimal   Time          Space
DFS w/ Path Checking     Y          N         O(b^(m+1))    O(bm)
BFS                      Y          N*        O(b^(s+1))    O(b^s)
ID                       Y          N*        O(b^(s+1))    O(bs)
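A minimal sketch of that subroutine structure (illustrative names, plain tree search without cycle checking):

```python
def depth_limited_dfs(problem, state, plan, limit):
    """DFS that only considers plans of length <= limit."""
    if problem.is_goal(state):
        return plan
    if limit == 0:
        return None
    for action, next_state, _cost in problem.get_successors(state):
        result = depth_limited_dfs(problem, next_state, plan + [action], limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening(problem, max_depth=50):
    """Run depth-limited DFS with limits 1, 2, 3, ... until a plan is found."""
    for limit in range(1, max_depth + 1):
        result = depth_limited_dfs(problem, problem.get_start_state(), [], limit)
        if result is not None:
            return result
    return None
```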

Costs on Actions
- Notice that BFS finds the shortest path in terms of number of transitions
- It does not find the least-cost path
- We will quickly cover an algorithm which does find the least-cost path
[Figure: the tiny example graph (START, a–h, p–r, GOAL) with costs on its edges]

Uniform Cost Search
- Expand the cheapest node first
- Implementation: the fringe is a priority queue
[Figure: UCS expansion on the tiny example graph, shown as cost contours]

Priority Queue Refresher
- A priority queue is a data structure in which you can insert and retrieve (key, value) pairs with the following operations:
  - pq.push(key, value): inserts (key, value) into the queue
  - pq.pop(): returns the key with the lowest value, and removes it from the queue
- You can decrease a key's priority by pushing it again
- Unlike a regular queue, insertions aren't constant time, usually O(log n)
- We'll need priority queues for cost-sensitive search methods
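One common way to get that interface in Python is a thin wrapper around heapq (an illustrative sketch; the slides don't prescribe an implementation):

```python
import heapq

class PriorityQueue:
    """Min-priority queue: pop() returns the key with the lowest value."""

    def __init__(self):
        self._heap = []
        self._count = 0            # tie-breaker so heapq never compares the keys themselves

    def push(self, key, value):
        heapq.heappush(self._heap, (value, self._count, key))
        self._count += 1

    def pop(self):
        value, _count, key = heapq.heappop(self._heap)
        return key

    def is_empty(self):
        return not self._heap
```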

Uniform Cost Search

Algorithm                Complete   Optimal   Time           Space
DFS w/ Path Checking     Y          N         O(b^(m+1))     O(bm)
BFS                      Y          N*        O(b^(s+1))     O(b^s)
UCS                      Y*         Y         O(b^(C*/ε))    O(b^(C*/ε))

[Figure: the search tree explored in roughly C*/ε cost tiers, where ε is the minimum action cost]
* UCS can fail if actions can get arbitrarily cheap
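Keying the priority queue on the path cost g gives UCS; a tree-search sketch using the illustrative helpers above:

```python
def uniform_cost_search(problem):
    """Expand the cheapest node first: the fringe is a priority queue on path cost."""
    fringe = PriorityQueue()
    fringe.push((problem.get_start_state(), [], 0.0), 0.0)   # key = (state, plan, g), value = g
    while not fringe.is_empty():
        state, plan, g = fringe.pop()
        if problem.is_goal(state):
            return plan
        for action, next_state, step_cost in problem.get_successors(state):
            fringe.push((next_state, plan + [action], g + step_cost), g + step_cost)
    return None
```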

Uniform Cost Search
- What will UCS do for this graph?
- What does this mean for completeness?
[Figure: a small graph with nodes START, a, b, and GOAL]

Uniform Cost Issues
- Remember: UCS explores increasing cost contours
- The good: UCS is complete and optimal!
- The bad:
  - Explores options in every "direction"
  - No information about goal location
[Figure: cost contours c1 <= c2 <= c3 expanding from the Start, with the Goal off to one side]

Search Heuristics
- Any estimate of how close a state is to a goal
- Designed for a particular search problem
- Examples: Manhattan distance, Euclidean distance
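For a grid world like Pacman's, those two example heuristics might look like this (an illustrative sketch; states are assumed to be (x, y) positions):

```python
import math

def manhattan_distance(state, goal):
    """Sum of horizontal and vertical offsets: natural for 4-way grid movement."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def euclidean_distance(state, goal):
    """Straight-line distance between the two positions."""
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)
```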

Heuristics

Best First / Greedy Search
- Expand the node that seems closest...
- What can go wrong?
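Greedy search is the same priority-queue loop as UCS, but keyed on the heuristic estimate rather than the path cost; a minimal sketch with the illustrative helpers above:

```python
def greedy_search(problem, heuristic, goal):
    """Expand the node whose state seems closest to the goal (per the heuristic)."""
    fringe = PriorityQueue()
    start = problem.get_start_state()
    fringe.push((start, []), heuristic(start, goal))
    while not fringe.is_empty():
        state, plan = fringe.pop()
        if problem.is_goal(state):
            return plan
        for action, next_state, _cost in problem.get_successors(state):
            fringe.push((next_state, plan + [action]), heuristic(next_state, goal))
    return None
```

Since the fringe ignores cost entirely, this sketch can commit to a path that merely looks close and, without cycle checking, can revisit states; those are two of the things that can go wrong.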