
MIDTERM REVIEW

Intelligent Agents
Percept: the agent's perceptual inputs at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
The agent function maps from percept histories to actions: f : P* → A (abstract).
The agent program runs on the physical architecture to produce f (implementation).

Example: Vacuum-Cleaner World
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
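The vacuum world's agent function can be written down as a short reflex agent program. This is a minimal sketch; the percept encoding as a (location, status) pair and the function name are illustrative, not from the slides.

```python
# Simple reflex agent for the two-square vacuum world.
# Percept: (location, status), e.g., ("A", "Dirty").

def reflex_vacuum_agent(percept):
    """Map a single percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"      # square A is clean, move to B
    else:
        return "Left"       # square B is clean, move to A

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
```

Because the action depends only on the current percept, no percept history is needed for this environment.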

Task Environment
PEAS: Performance measure, Environment, Actuators, Sensors. Consider the task of designing an automated taxi:
Performance measure: safety, destination, profits, legality, comfort…
Environment: US streets/freeways, traffic, pedestrians, weather…
Actuators: steering, accelerator, brake, horn, speaker/display…
Sensors: camera, sonar, GPS, odometer, engine sensors…

Environment Types
Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time. A card game with all cards visible vs. poker (a poker agent needs internal memory for hidden information).
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. Chess vs. a game with dice (uncertain, unpredictable outcomes).
Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. Chess and taxi driving are sequential.

Environment Types
Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.) Taxi driving (dynamic) vs. chess when played with a clock (semidynamic) vs. crossword puzzles (static).
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions. Chess vs. taxi driving (continuous-valued).
Single agent (vs. multiagent): an agent operating by itself in an environment. Crossword puzzle vs. chess.

                Solitaire   Chess with a clock   Internet shopping   Taxi
Observable?     Yes         Yes                  No                  No
Deterministic?  Yes         Yes                  No                  No
Episodic?       No          No                   No                  No
Static?         Yes         Semi                 Semi                No
Discrete?       Yes         Yes                  Yes                 No
Single-agent?   Yes         No                   Yes                 No

Problem Formulation
A problem is defined by five components:
Initial state: e.g., "at Arad"
Actions: Actions(s) → {a1, a2, a3, …}, e.g., {Go(Sibiu), Go(Timisoara), Go(Zerind)}
Transition model: Result(s, a) → s′, e.g., Result(In(Arad), Go(Timisoara)) = In(Timisoara)
Goal test: GoalTest(s) → true/false, e.g., "at Bucharest"
Path cost: (additive) sum of the costs of the individual steps, e.g., number of miles traveled, number of minutes to get to the destination
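The five components map naturally onto a small class. The sketch below uses an excerpt of the Romania road map with the usual textbook step costs in km; the class and method names are illustrative, not a fixed API.

```python
# Road-map excerpt: ROADS[city] maps each neighboring city to the step cost in km.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    """Initial state, actions, transition model, goal test, path cost."""
    def __init__(self, initial, goal):
        self.initial = initial            # e.g., "Arad"
        self.goal = goal                  # e.g., "Bucharest"
    def actions(self, s):                 # Actions(s) -> applicable actions
        return sorted(ROADS[s])           # an action = drive to a neighboring city
    def result(self, s, a):               # Result(s, a) -> s'
        return a
    def goal_test(self, s):               # GoalTest(s) -> True/False
        return s == self.goal
    def step_cost(self, s, a):            # path cost = sum of step costs
        return ROADS[s][a]

problem = RouteProblem("Arad", "Bucharest")
print(problem.actions("Arad"))            # ['Sibiu', 'Timisoara', 'Zerind']
print(problem.result("Arad", "Sibiu"))    # Sibiu
```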

Example: the 8-puzzle
states? A state description specifies the locations of the eight tiles and the blank.
initial state? any state
actions? movements of the blank space: Left, Right, Up, Down
transition model? Result(s, a) → s′
goal test? equals the given goal state
path cost? 1 per move

Tree search vs. graph search
Tree search may revisit repeated states and follow redundant paths. Graph search keeps an explored set: it remembers every expanded node.

Uninformed Search Strategies
Uninformed search strategies use only the information available in the problem definition:
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
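Breadth-first graph search illustrates both the uninformed strategies and the explored set from the previous slide. This is a minimal sketch on an illustrative adjacency-list excerpt of the Romania map; it returns the path with the fewest steps (not the fewest km).

```python
from collections import deque

# Adjacency list for part of the Romania map (illustrative excerpt).
GRAPH = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def breadth_first_search(start, goal):
    """Graph-search BFS: FIFO frontier of paths plus an explored set."""
    if start == goal:
        return [start]
    frontier = deque([[start]])      # FIFO queue of partial paths
    explored = {start}               # remember every generated/expanded state
    while frontier:
        path = frontier.popleft()
        for child in GRAPH[path[-1]]:
            if child not in explored:
                if child == goal:    # BFS can apply the goal test at generation
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None

print(breadth_first_search("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that BFS finds the 3-step route via Fagaras even though the 4-step route via Pitesti is shorter in km; uniform-cost search would find the cheaper one.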

Informed Search Strategies
Informed search uses problem-specific knowledge beyond the definition of the problem itself.
Best-first search idea: use an evaluation function f(n) for each node as an estimate of "desirability", and expand the most desirable unexpanded node.
Special cases: greedy best-first search, A* search.

Romania with step costs in km

Best-first search
Greedy best-first search: evaluation function f(n) = h(n), the heuristic estimate of the cost from n to the goal.
A* search: evaluation function f(n) = g(n) + h(n), where
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
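A* can be sketched compactly on the same Romania excerpt. ROADS holds step costs in km and H holds the textbook straight-line distances to Bucharest as the heuristic h(n); the function name and data layout are illustrative.

```python
import heapq

ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
# h(n): straight-line distance to Bucharest.
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

def astar(start, goal):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nbr, cost in ROADS[state].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, None

path, cost = astar("Arad", "Bucharest")
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

With an admissible h, A* returns the optimal 418 km route via Pitesti; greedy best-first (f = h only) would instead commit to the 450 km route via Fagaras.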

Local Search Hill-Climbing Search Variants Simulated Annealing Local Beam Search

Adversarial Search Optimal decisions in games (minimax) α-β pruning
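Minimax with α-β pruning can be sketched on a small hand-made game tree (the tree and leaf utilities below are the classic three-branch textbook example, used here for illustration).

```python
# MAX root "A" with three MIN children; each MIN node's children are leaf utilities.
TREE = {
    "A": ["B", "C", "D"],
    "B": [3, 12, 8],
    "C": [2, 4, 6],
    "D": [14, 5, 2],
}

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax value of node, pruning branches that cannot affect the result."""
    if isinstance(node, (int, float)):       # leaf: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in TREE[node]:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff
                break
        return value
    else:
        value = float("inf")
        for child in TREE[node]:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # alpha cutoff
                break
        return value

print(alphabeta("A", True))   # 3
```

Here MIN node C is cut off after its first leaf (2 ≤ α = 3), so its remaining leaves are never evaluated; the minimax value is unchanged at 3.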

Rule-Based Expert Systems How to represent rules and facts Inference Engine

Two approaches Forward chaining Backward chaining

Forward Chaining Exercise 1 Use forward chaining to prove the following:

Backward chaining Exercise 1 Use backward chaining to prove the following:

Conflict resolution
Conflict resolution provides a specific method for choosing which rule to fire when several match:
Highest priority first
Most specific rule first
Most recent first
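A forward-chaining inference engine fits in a few lines. This is a minimal sketch with an illustrative rule base (the rules themselves are not from the slides): each rule is a (premises, conclusion) pair, and the engine fires every rule whose premises are all in working memory until no new facts appear.

```python
# Illustrative rule base: (set of premises, conclusion).
RULES = [
    ({"A", "B"}, "C"),
    ({"C"}, "D"),
    ({"D", "B"}, "E"),
]

def forward_chain(facts):
    """Data-driven inference: fire rules until working memory stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)    # fire the rule, assert its conclusion
                changed = True
    return facts

print(sorted(forward_chain({"A", "B"})))   # ['A', 'B', 'C', 'D', 'E']
```

Backward chaining runs the same rules goal-driven: to prove E it would recursively seek rules concluding E and try to establish their premises. A real engine would also apply a conflict-resolution strategy when several rules match at once; this sketch simply fires them in listed order.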

Uncertainty Probability Theory Bayesian Rule

Applying Bayes' rule
A doctor knows that the disease meningitis causes the patient to have a stiff neck 70% of the time. The prior probability that a patient has meningitis is 1/50,000, and the prior probability that any patient has a stiff neck is 1%.
P(s|m) = 0.7, P(m) = 1/50000, P(s) = 0.01. P(m|s) = ?
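The answer follows directly from Bayes' rule, P(m|s) = P(s|m) P(m) / P(s):

```python
# Meningitis example: plug the slide's numbers into Bayes' rule.
p_s_given_m = 0.7          # P(stiff neck | meningitis)
p_m = 1 / 50000            # prior P(meningitis)
p_s = 0.01                 # prior P(stiff neck)

p_m_given_s = p_s_given_m * p_m / p_s
print(round(p_m_given_s, 6))   # 0.0014
```

So even given a stiff neck, meningitis remains very unlikely (0.14%), because the prior P(m) is so small.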

Bayesian reasoning
Example: cancer and a diagnostic test.
P(C) = 0.01, P(¬C) = 0.99
P(+|C) = 0.9, P(−|C) = 0.1
P(+|¬C) = 0.2, P(−|¬C) = 0.8
P(C|+) = ?
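Here the evidence term P(+) is not given, so it comes from the law of total probability, P(+) = P(+|C)P(C) + P(+|¬C)P(¬C), before applying Bayes' rule:

```python
# Cancer-test example: Bayes' rule with the denominator expanded
# by the law of total probability.
p_c, p_not_c = 0.01, 0.99
p_pos_given_c, p_pos_given_not_c = 0.9, 0.2

p_pos = p_pos_given_c * p_c + p_pos_given_not_c * p_not_c   # P(+) = 0.207
p_c_given_pos = p_pos_given_c * p_c / p_pos                 # P(C|+)
print(round(p_c_given_pos, 4))   # 0.0435
```

Despite the 90% sensitive test, a positive result only raises the probability of cancer to about 4.3%, because false positives (0.2 × 0.99) dominate the 1% prior.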