
ARTIFICIAL INTELLIGENCE


1 ARTIFICIAL INTELLIGENCE
CS 512 ARTIFICIAL INTELLIGENCE LECTURE 02

2 Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
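As a minimal sketch, this percept-to-action mapping can be written as a small Python interface; the names Agent, act, percept, and execute below are illustrative, not part of the lecture:

    # Illustrative agent abstraction: sensors deliver percepts, the agent
    # chooses an action, and actuators carry it out in the environment.
    class Agent:
        def act(self, percept):
            """Map the current percept (sensor data) to an action."""
            raise NotImplementedError

    def agent_loop(agent, environment, steps):
        # The classic perceive-act cycle; environment.percept() and
        # environment.execute() are assumed hooks, not a real API.
        for _ in range(steps):
            percept = environment.percept()
            action = agent.act(percept)
            environment.execute(action)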

3 Agents

4 Specifying the task environment
Problem specification: Performance measure, Environment, Actuators, Sensors (PEAS).
Example: autonomous taxi
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, LIDAR, speedometer, GPS, odometer, engine sensors, keyboard

5 Another PEAS example: Spam filter
Performance measure: minimizing false positives and false negatives
Environment: a user's account, email server
Actuators: mark as spam, delete, etc.
Sensors: incoming messages, other information about the user's account
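Both examples fit the same four-part structure, so a PEAS specification can be captured as a simple record; a sketch in Python (the PEAS class name is ours, not from the slides):

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance_measure: list
        environment: list
        actuators: list
        sensors: list

    taxi = PEAS(
        performance_measure=["safe", "fast", "legal", "comfortable trip",
                             "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "LIDAR", "speedometer", "GPS", "odometer",
                 "engine sensors", "keyboard"],
    )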

6 Environment types Fully observable / Partially observable
If an agent's sensors give it access to the complete state of the environment needed to choose an action, the environment is fully observable; otherwise it is partially observable.

7 Environment types Deterministic / Stochastic
An environment is deterministic if the next state of the environment is completely determined by the current state and the action of the agent; in a stochastic environment, an action can have multiple unpredictable outcomes.

8 Environment types Episodic / Sequential
In an episodic environment, the agent's experience is divided into atomic episodes; each episode consists of the agent perceiving and then performing a single action, independently of previous episodes. In a sequential environment, the agent engages in a series of connected episodes, and the current decision can affect future decisions.

9 Environment types Static / Dynamic
A static environment does not change while the agent is thinking; a dynamic one can. The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does, e.g., playing chess with a clock.

10 Environment types Discrete / Continuous
If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.

11 Environment types Single agent / Multi-agent
If the environment contains other intelligent agents whose behavior the agent needs to take into account, it is a multi-agent environment; otherwise it is single-agent.

12 Environment types Known /Unknown
If the rules of the environment (transitions and rewards) are known to the agent, it is a known environment. If they are not, the agent will have to learn how the environment works in order to make good decisions.
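The seven dimensions above can be summarized per task as a small record; a sketch, with the chess-with-a-clock example classified under our reading of the slides:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentProperties:
        fully_observable: bool
        deterministic: bool
        episodic: bool
        dynamics: str        # "static", "semi-dynamic", or "dynamic"
        discrete: bool
        single_agent: bool
        known: bool

    # Chess with a clock: the board is fully visible, moves have deterministic
    # effects, play is sequential, the running clock makes it semi-dynamic,
    # and the opponent makes it multi-agent.
    chess_with_clock = EnvironmentProperties(
        fully_observable=True, deterministic=True, episodic=False,
        dynamics="semi-dynamic", discrete=True, single_agent=False, known=True)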

13 Agent Types
Five basic types of agent. Simple reflex agents: these have no memory of past world states or percepts, so actions depend solely on the current percept. For example, if a Mars lander found a rock in a specific place that it needed to collect, it would collect it; as a simple reflex agent, if it then found the same rock in a different place, it would still pick it up, because it does not take into account that it already picked one up. Example rule: if the tail-light of the car in front is red, then brake.
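A simple reflex agent is just a lookup from the current percept to an action; a minimal sketch of the brake-light rule (the percept field name is illustrative):

    # Condition-action rules over the current percept only: no memory,
    # so the same percept always produces the same action.
    def simple_reflex_driver(percept):
        if percept.get("tail_light_in_front") == "red":
            return "brake"
        return "drive"

    simple_reflex_driver({"tail_light_in_front": "red"})   # -> "brake"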

14 Agent Types Five basic types of Agent Model based reflex agents
Agents have internal state, which is used to keep track of past states of the world, giving them the ability to represent change in the world. The model combines:
a description of the current world state;
knowledge of how the world evolves (e.g., an overtaking car gets closer from behind);
knowledge of how the agent's actions affect the world (e.g., turning the wheel clockwise takes you right).
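A sketch of the same idea in code, assuming an update_state model and condition-action rules supplied by the designer (both names are ours):

    class ModelBasedReflexAgent:
        # Keeps an internal description of the current world state, updated
        # with a model of how the world evolves and how actions affect it.
        def __init__(self, update_state, rules):
            self.state = None                 # internal world-state description
            self.last_action = None
            self.update_state = update_state  # (state, last_action, percept) -> state
            self.rules = rules                # state -> action

        def act(self, percept):
            self.state = self.update_state(self.state, self.last_action, percept)
            self.last_action = self.rules(self.state)
            return self.last_action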

15 Agent Types Five basic types of Agent Goal based agents
A reflex agent brakes when it sees brake lights. A goal-based agent reasons: brake light -> the car in front is stopping -> I should stop -> I should apply the brake. In addition to state information, goal-based agents have goal information that describes desirable situations to be achieved. Knowing its state and environment, the taxi can go left, right, or straight; having a goal (a destination to get to), it uses knowledge about the goal to guide its actions, e.g., via search and planning.
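As a sketch, a goal-based agent uses the transition model to look ahead and picks an action whose predicted result satisfies the goal; in general this lookahead becomes search or planning (actions and result are assumed to come from the problem model):

    # One-step lookahead toward a goal (illustrative).
    def goal_based_agent(state, goal, actions, result):
        for action in actions(state):
            if result(state, action) == goal:
                return action
        return None   # no single action reaches the goal: search/plan deeper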

16 Agent Types Five basic types of Agent Utility based agents
When there are multiple possible alternatives, how do we decide which one is best? Many action sequences get the taxi to its destination, but they differ in how fast and how safe they are. A utility function maps a state onto a real number that describes the associated degree of happiness.
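In code the change from a goal to a utility is small: instead of testing whether a predicted result is the goal, the agent scores every predicted result and takes the maximum; a sketch under the same assumed problem model:

    # utility(state) -> real number; the agent picks the action whose
    # predicted result it is happiest with.
    def utility_based_agent(state, actions, result, utility):
        return max(actions(state), key=lambda a: utility(result(state, a)))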

17 Agent Types Five basic types of Agent Learning agents
As we expand our environments we get a larger and larger number of tasks, and eventually a very large number of actions to pre-define. Another way of creating an agent is to have it learn new actions as it goes about its business.

18 Agent Types
In the learning-agent diagram: the critic evaluates the current world state, the learning element changes the action rules, and the problem generator suggests explorations.

19 Learning Agents (Taxi driver)
Performance element: how the taxi currently drives, e.g., it makes a quick left turn across three lanes.
Critic: observes the shocking language from the passenger and other drivers and reports that the action was bad.
Learning element: tries to modify the performance element for the future.
Problem generator: suggests experiments, such as trying out the brakes on different road conditions.
Criticism is not always easy to read: shocking language, a smaller tip, fewer passengers.

20 Learning Agents (Taxi driver)
In the diagram: the critic concludes "quick turn is not safe"; the learning element installs the rule "no quick turn"; the performance element takes percepts and selects actions; the problem generator proposes trying out the brakes on different road surfaces and conditions.
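A sketch tying the four components together in one loop (all names illustrative; a real learning element updates rules far less crudely):

    class LearningAgent:
        def __init__(self, performance_element, critic, learning_element,
                     problem_generator):
            self.performance_element = performance_element  # takes percepts, selects actions
            self.critic = critic                            # judges how well the agent is doing
            self.learning_element = learning_element        # modifies the performance element
            self.problem_generator = problem_generator      # suggests exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)    # e.g., "quick turn is not safe"
            self.learning_element(self.performance_element, feedback)
            # Occasionally try an experiment instead of the usual best action.
            experiment = self.problem_generator()
            return experiment or self.performance_element.select(percept)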

21 Solving problems by searching
We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments. The agent must find a sequence of actions that reaches the goal. The performance measure is defined by reaching the goal and by how "expensive" the path to the goal is.

22 Search problem components
Initial state: the state the agent starts in.
Actions: the actions available to the agent in each state.
Transition model: what state results from performing a given action in a given state?
Goal state: the state (or states) the agent is trying to reach.
Path cost: assume that it is a sum of nonnegative step costs.
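These five components map directly onto a problem interface; a skeleton (the method names are a common convention, not fixed by the slides):

    class SearchProblem:
        def initial_state(self): ...
        def actions(self, state): ...            # actions applicable in a state
        def result(self, state, action): ...     # transition model
        def is_goal(self, state): ...
        def step_cost(self, state, action): ...  # nonnegative; path cost = sum of these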

23 Example: Romania
On vacation in Romania; currently in Arad, and the flight leaves tomorrow from Bucharest.

24 Example: Romania
Initial state: Arad
Actions: go from one city to another
Transition model: if you go from city A to city B, you end up in city B
Goal state: Bucharest
Path cost: total distance traveled
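A fragment of this problem as data, using distances from the standard Romania map in Russell and Norvig (only a few roads shown; the rest are elided):

    # Road map fragment: ROADS[city] maps each neighboring city to its distance in km.
    ROADS = {
        "Arad":    {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu":   {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
        # ... remaining cities omitted
    }

    class RomaniaProblem:
        initial, goal = "Arad", "Bucharest"
        def actions(self, city): return list(ROADS[city])         # go to a neighbor
        def result(self, city, neighbor): return neighbor         # you end up in city B
        def step_cost(self, city, neighbor): return ROADS[city][neighbor]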

25 State space The initial state, actions, and transition model define the state space of the problem. State space : The set of all states reachable from initial state by any sequence of actions. Can be represented as a directed graph where the nodes are states and links between nodes are actions.

26 Example: Vacuum world
States: agent location and dirt locations. How many possible states? With two squares, 2 agent locations x 4 dirt configurations = 8 states.
Actions: Left, Right, Suck.
(The vacuum-world state space graph is shown on the slide.)
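A sketch of one possible state encoding, which also checks the count of 8 states for the two-square world:

    from itertools import chain, combinations

    LOCATIONS = ["A", "B"]

    def all_states():
        # A state is (agent location, set of dirty squares): 2 x 2^2 = 8 states.
        dirt_sets = chain.from_iterable(
            combinations(LOCATIONS, r) for r in range(len(LOCATIONS) + 1))
        return [(loc, frozenset(d)) for d in dirt_sets for loc in LOCATIONS]

    def result(state, action):
        loc, dirt = state
        if action == "Suck":  return (loc, dirt - {loc})
        if action == "Left":  return ("A", dirt)
        if action == "Right": return ("B", dirt)

    assert len(all_states()) == 8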

27 Example: River Crossing
A farmer wants to get his cabbage, sheep, and wolf across a river. He has a boat that holds only two: himself and at most one item. Left unsupervised, the wolf bites the sheep and the sheep eats the cabbage. How should a computer solve this?

28 Example: River Crossing
State space S: all valid configurations
Initial states = {(CSDF, _)} ⊆ S
Goal states G = {(_, CSDF)} ⊆ S
Cost(s, s') = 1 for all transitions.
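Since every transition costs 1, breadth-first search finds a shortest solution; a sketch, encoding a state as the set of items still on the starting bank (keeping the slide's letters, with D standing for the wolf):

    from collections import deque

    ITEMS = frozenset("CSDF")   # cabbage, sheep, wolf (D on the slide), farmer

    def unsafe(bank):
        # A bank is unsafe if wolf+sheep or sheep+cabbage sit there without the farmer.
        return "F" not in bank and ({"D", "S"} <= bank or {"S", "C"} <= bank)

    def valid(state):
        return not unsafe(state) and not unsafe(ITEMS - state)

    def neighbors(state):
        # The farmer crosses alone or with one item from his current bank.
        here = state if "F" in state else ITEMS - state
        for cargo in [set()] + [{x} for x in here - {"F"}]:
            moved = state - {"F"} - cargo if "F" in state else state | {"F"} | cargo
            if valid(moved):
                yield frozenset(moved)

    def bfs(start=ITEMS, goal=frozenset()):
        frontier, parent = deque([start]), {start: None}
        while frontier:
            s = frontier.popleft()
            if s == goal:                  # reconstruct the path of bank states
                path = []
                while s is not None:
                    path.append(s); s = parent[s]
                return path[::-1]
            for t in neighbors(s):
                if t not in parent:
                    parent[t] = s; frontier.append(t)

    print(bfs())   # eight states, i.e., seven crossings from CSDF to the empty bank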

29 Example: River Crossing
State space S: the full graph of valid configurations (shown as a diagram on the slide).

