MIDTERM REVIEW
Intelligent Agents
Percept: the agent’s perceptual inputs at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
The agent function maps from percept histories to actions: f : P* → A (abstract).
The agent program runs on the physical architecture to produce f (implementation).
Example: Vacuum-Cleaner World
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
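A minimal sketch of an agent program for this world, assuming the standard two-square layout with locations A and B:

```python
# Simple reflex agent for the two-square vacuum world.
# The percept format [location, status] follows the slide.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # -> Right
```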
Task Environment
PEAS: Performance measure, Environment, Actuators, Sensors
Consider the task of designing an automated taxi:
Performance measure: safety, destination, profits, legality, comfort, …
Environment: US streets/freeways, traffic, pedestrians, weather, …
Actuators: steering, accelerator, brake, horn, speaker/display, …
Sensors: camera, sonar, GPS, odometer, engine sensors, …
Environment Types
Fully observable (vs. partially observable): the agent’s sensors give it access to the complete state of the environment at each point in time. Card games vs. poker (which requires internal memory).
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. Chess vs. games with dice (uncertain, unpredictable).
Episodic (vs. sequential): the agent’s experience is divided into atomic “episodes” (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. Chess and taxi driving are both sequential.
Environment Types
Static (vs. dynamic): the environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent’s performance score does.) Taxi driving (dynamic) vs. chess played with a clock (semidynamic) vs. crossword puzzles (static).
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions. Chess vs. taxi driving (continuous).
Single agent (vs. multiagent): an agent operating by itself in an environment. Crossword puzzles vs. chess.
Environment type | Solitaire | Chess with a clock | Internet shopping | Taxi
Observable?      | Yes       | Yes                | No                | No
Deterministic?   | Yes       | Yes                | No                | No
Episodic?        | No        | No                 | No                | No
Static?          | Yes       | Semi               | Semi              | No
Discrete?        | Yes       | Yes                | Yes               | No
Single-agent?    | Yes       | No                 | Yes               | No
Problem Formulation
A problem is defined by five components:
Initial state, e.g., “at Arad”
Actions(s) → {a1, a2, a3, …}, e.g., {Go(Sibiu), Go(Timisoara), Go(Zerind)}
Transition model: Result(s, a) → s′, e.g., Result(In(Arad), Go(Timisoara)) = In(Timisoara)
Goal test(s) → T/F, e.g., “at Bucharest”
Path cost(s0 → s1 → … → sn) → n (additive): the sum of the costs of the individual steps, e.g., number of miles traveled or number of minutes to reach the destination
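A sketch of these five components in code, using a small fragment of the Romania map; the class and method names are illustrative, not a fixed API:

```python
# Fragment of the Romania road map: city -> {neighbor: step cost in km}.
ROMANIA = {
    'Arad':           {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu':          {'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Bucharest': 211},
    'Rimnicu Vilcea': {'Pitesti': 97},
    'Pitesti':        {'Bucharest': 101},
    'Timisoara':      {}, 'Zerind': {}, 'Bucharest': {},
}

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def actions(self, s):          # Actions(s) -> {a1, a2, ...}
        return list(self.graph[s])

    def result(self, s, a):        # Result(s, a) -> s'; action a = "go to city a"
        return a

    def goal_test(self, s):        # GoalTest(s) -> T/F
        return s == self.goal

    def step_cost(self, s, a):     # additive path cost: sum of step costs
        return self.graph[s][a]

problem = RouteProblem('Arad', 'Bucharest', ROMANIA)
```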
Example: The 8-Puzzle
states? A state description specifies the location of the eight tiles and the blank.
initial state? any state
actions? movements of the blank space: Left, Right, Up, Down
transition model? Result(s, a) → s′
goal test? = goal state (given)
path cost? 1 per move
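A sketch of the actions and transition model, representing a state as a 9-tuple with 0 for the blank (this encoding is an illustrative choice):

```python
# Moving the blank Left/Right shifts its index by 1; Up/Down by one row (3).
MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}

def actions(state):
    i = state.index(0)                    # position of the blank
    acts = []
    if i % 3 > 0: acts.append('Left')
    if i % 3 < 2: acts.append('Right')
    if i >= 3:    acts.append('Up')
    if i <= 5:    acts.append('Down')
    return acts

def result(state, action):                # Result(s, a) -> s'
    # Assumes `action` is legal for this state (see actions above).
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]               # swap blank with the neighboring tile
    return tuple(s)
```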
Tree Search vs. Graph Search
Tree search may expand repeated states and follow redundant paths.
Graph search keeps an explored set: it remembers every expanded node.
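A sketch of the difference in code: this is tree search plus an explored set, assuming the RouteProblem-style interface sketched earlier:

```python
from collections import deque

def graph_search(problem):
    frontier = deque([(problem.initial, [])])   # FIFO queue -> breadth-first
    explored = set()                            # what makes this *graph* search
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        if state in explored:
            continue                            # skip repeated states
        explored.add(state)
        for a in problem.actions(state):
            frontier.append((problem.result(state, a), path + [a]))
    return None
```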
Uninformed Search Strategies
Uninformed search strategies use only the information available in the problem definition:
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
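As a representative example, here is a sketch of depth-limited search and the iterative deepening loop built on top of it (same assumed problem interface as above):

```python
def depth_limited(problem, state, limit):
    # Depth-first search that gives up below `limit`; the cutoff/failure
    # distinction is omitted here for brevity.
    if problem.goal_test(state):
        return []
    if limit == 0:
        return None
    for a in problem.actions(state):
        sub = depth_limited(problem, problem.result(state, a), limit - 1)
        if sub is not None:
            return [a] + sub
    return None

def iterative_deepening(problem, max_depth=50):
    # Repeat depth-limited search with increasing limits.
    for limit in range(max_depth + 1):
        path = depth_limited(problem, problem.initial, limit)
        if path is not None:
            return path
    return None
```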
Informed Search Strategies
Informed search uses problem-specific knowledge beyond the definition of the problem itself.
Best-first search idea: use an evaluation function f(n) for each node as an estimate of “desirability”, and expand the most desirable unexpanded node.
Special cases: greedy best-first search and A* search.
Romania with step costs in km
Best-First Search
Greedy best-first search: evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal.
A* search: evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n and h(n) = estimated cost from n to the goal, so f(n) = estimated total cost of the path through n to the goal.
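A sketch of A* on a priority queue ordered by f(n) = g(n) + h(n); greedy best-first search is the same loop with f(n) = h(n) alone. The heuristic h is passed in (e.g., straight-line distance to Bucharest), and the problem interface is the one assumed earlier:

```python
import heapq

def astar(problem, h):
    # Each entry: (f, g, state, path); the heap pops the smallest f first.
    frontier = [(h(problem.initial), 0, problem.initial, [])]
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            g2 = g + problem.step_cost(state, a)     # g(n): cost so far
            heapq.heappush(frontier, (g2 + h(s2), g2, s2, path + [a]))
    return None
```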
Local Search
Hill-climbing search and its variants
Simulated annealing
Local beam search
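A sketch of basic hill climbing; the `neighbors` and `value` callbacks are assumed problem-specific. Simulated annealing differs by sometimes accepting downhill moves, with a probability that shrinks as a temperature parameter cools:

```python
def hill_climbing(state, neighbors, value):
    # Move to the best neighbor until none improves: a local maximum.
    while True:
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state        # local maximum (or plateau)
        state = best
```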
Adversarial Search
Optimal decisions in games (minimax)
α-β pruning
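A sketch of minimax with α-β pruning, assuming a `game` object that exposes terminal_test, utility, actions, and result:

```python
def alphabeta(game, state, alpha=float('-inf'), beta=float('inf'),
              maximizing=True):
    if game.terminal_test(state):
        return game.utility(state)
    if maximizing:
        v = float('-inf')
        for a in game.actions(state):
            v = max(v, alphabeta(game, game.result(state, a), alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break            # β cutoff: MIN will never allow this branch
        return v
    else:
        v = float('inf')
        for a in game.actions(state):
            v = min(v, alphabeta(game, game.result(state, a), alpha, beta, True))
            beta = min(beta, v)
            if alpha >= beta:
                break            # α cutoff: MAX will never allow this branch
        return v
```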
Rule-Based Expert Systems
How to represent rules and facts
Inference engine
Two Approaches
Forward chaining
Backward chaining
Forward Chaining: Exercise 1
Use forward chaining to prove the following:
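The exercise statement itself is not reproduced in this transcript, but the procedure can be sketched as follows; the rule base here is a made-up example, not the exercise’s:

```python
# Forward chaining over Horn-clause rules: repeatedly fire any rule whose
# premises are all known facts, adding its conclusion, until the goal
# appears or nothing new can be derived.
RULES = [({'A', 'B'}, 'C'),     # A & B -> C
         ({'C'}, 'D')]          # C -> D

def forward_chain(rules, facts, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)       # fire the rule
                changed = True
    return goal in facts

print(forward_chain(RULES, {'A', 'B'}, 'D'))   # -> True
```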
Backward Chaining: Exercise 1
Use backward chaining to prove the following:
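Backward chaining on the same made-up rule base works goal-to-facts: to prove a goal, find a rule that concludes it and recursively prove each premise (cycle handling is omitted in this sketch):

```python
def backward_chain(rules, facts, goal):
    if goal in facts:
        return True                          # goal is a known fact
    return any(conclusion == goal and
               all(backward_chain(rules, facts, p) for p in premises)
               for premises, conclusion in rules)

print(backward_chain(RULES, {'A', 'B'}, 'D'))  # -> True
```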
Conflict Resolution
Conflict resolution provides a specific method for choosing which rule to fire when more than one rule matches the current facts:
Highest priority first
Most specific rule first
Most recent first
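A sketch of how the three strategies pick one rule out of the conflict set; the priority and timestamp fields here are illustrative assumptions:

```python
def select_rule(conflict_set, strategy):
    # conflict_set: the rules whose premises are currently all satisfied.
    if strategy == 'highest_priority':
        return max(conflict_set, key=lambda r: r['priority'])
    if strategy == 'most_specific':
        return max(conflict_set, key=lambda r: len(r['premises']))
    if strategy == 'most_recent':
        return max(conflict_set, key=lambda r: r['timestamp'])
    raise ValueError(strategy)
```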
Uncertainty
Probability theory
Bayes’ rule
Applying Bayes’ Rule
A doctor knows that the disease meningitis causes the patient to have a stiff neck 70% of the time. The probability that a patient has meningitis is 1/50,000. The probability that any patient has a stiff neck is 1%.
P(s|m) = 0.7
P(m) = 1/50,000
P(s) = 0.01
P(m|s) = ?
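Working the numbers through Bayes’ rule, P(m|s) = P(s|m) · P(m) / P(s):

```python
p_s_given_m = 0.7
p_m = 1 / 50_000
p_s = 0.01
p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)   # 0.0014: a stiff neck is still very weak evidence
```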
Bayesian Reasoning
Example: cancer and a test for it
P(C) = 0.01      P(¬C) = 0.99
P(+|C) = 0.9     P(−|C) = 0.1
P(+|¬C) = 0.2    P(−|¬C) = 0.8
P(C|+) = ?
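Here P(+) is not given directly, so it is expanded by total probability before applying Bayes’ rule:

```python
p_c, p_not_c = 0.01, 0.99
p_pos_given_c, p_pos_given_not_c = 0.9, 0.2
# P(+) = P(+|C)P(C) + P(+|~C)P(~C)
p_pos = p_pos_given_c * p_c + p_pos_given_not_c * p_not_c   # = 0.207
p_c_given_pos = p_pos_given_c * p_c / p_pos
print(round(p_c_given_pos, 4))   # 0.0435: most positives are false positives
```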