Artificial Intelligence and Intelligent Agents

Artificial Intelligence and Intelligent Agents Lecture #3

Intelligence Intelligence has been defined in many different ways, including as one's capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving. It can be more generally described as the ability to perceive information and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

Artificial Intelligence Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Capabilities currently classified as AI include understanding human speech, competing at a high level in strategic games (such as chess and Go), driving cars autonomously, and interpreting complex data. AI research is divided into subfields that focus on specific problems, specific approaches, the use of a particular tool, or particular applications.

Weak AI Weak AI (also known as narrow AI) is non-sentient artificial intelligence that is focused on one narrow task. Weak AI is defined in contrast to either strong AI (a machine with consciousness, sentience, and mind) or artificial general intelligence (a machine with the ability to apply intelligence to any problem, rather than just one specific problem). All currently existing systems considered artificial intelligence of any sort are weak AI at most. Example: Siri.

Strong AI Strong AI is a term used to describe a certain mindset of artificial intelligence development. Strong AI's goal is to develop artificial intelligence to the point where the machine's intellectual capability is functionally equal to a human's. This approach is presented as a solution to the problems faced by symbolic attempts to create human intelligence in computers.

Strong AI Instead of trying to give the computer adult-like knowledge from the outset, the computer would only have to be given the ability to interact with the environment and the ability to learn from those interactions. As time passes, it would gain common sense and language on its own. This paradigm seeks to combine the mind and the body, whereas the common trend in symbolic programming (e.g., CYC) has been to disregard the body to the detriment of the computer's intellect.

Neat AI and Scruffy AI Neat and scruffy are labels for two different styles of artificial intelligence (AI) research. Neats consider that solutions should be elegant, clear, and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems that such neat requirements usually mandate.

Neat AI and Scruffy AI Much of the success in AI has come from combining neat and scruffy approaches. For example, many cognitive models matching human psychological data have been built in Soar and ACT-R. Both of these systems have formal representations and execution systems, but the rules put into the systems to create the models are generated ad hoc.

Soar - Cognitive architecture The main goal of the Soar project is to handle the full range of capabilities of an intelligent agent, from highly routine tasks to extremely difficult, open-ended problems. According to the view underlying Soar, this requires the ability to create representations and to use appropriate forms of knowledge (such as procedural, declarative, and episodic). Soar therefore addresses a collection of mechanisms of the mind. Also underlying the Soar architecture is the view that a symbolic system is essential for general intelligence (see the earlier slide on neats versus scruffies).

Soar- Cognitive architecture This is known as the physical symbol system hypothesis. The views of cognition underlying Soar are tied to the psychological theory expressed in Allen Newell's book, Unified Theories of Cognition. While symbol processing remains the core mechanism in the architecture, recent versions of the theory incorporate non-symbolic representations and processes, including reinforcement learning, imagery processing, and emotion modeling. Soar's capabilities have always included a mechanism for creating new representations, by a process known as "chunking". Ultimately, Soar's goal is to achieve general intelligence, though this is acknowledged to be an ambitious and possibly very long-term goal.

Agent An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions.

Agents interact with environments through sensors and effectors.
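To make the sensor/effector vocabulary concrete, here is a minimal Python sketch (the class and method names are assumptions for illustration, not part of the lecture): an agent is anything that maps percepts from its sensors to actions for its effectors, whether those sensors are eyes, cameras, or bit strings.

# Illustrative only: a bare-bones agent abstraction.
class Agent:
    def program(self, percept):
        """Given the latest percept from the sensors, return an action for the effectors."""
        raise NotImplementedError

# Hypothetical example: percepts are temperature readings, actions are heater commands.
class ThermostatAgent(Agent):
    def program(self, percept):
        return "heat_on" if percept < 20.0 else "heat_off"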

How should agents act? Rationally? A rational agent is one that does the right thing. What is rational at any given time depends on four things: the performance measure that defines the degree of success; everything that the agent has perceived so far (we call this complete perceptual history the percept sequence); what the agent knows about the environment; and the actions that the agent can perform.

Ideal rational agent: For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
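As a sketch of this definition (the names are assumptions, and expected_performance stands in for whatever model of the environment the designer supplies), an ideal rational agent can be pictured as choosing the action with the highest expected performance measure given the percept sequence and its built-in knowledge:

# Sketch only: expected_performance is an assumed, designer-supplied estimate.
def rational_action(actions, percept_sequence, knowledge, expected_performance):
    # Pick the action expected to maximize the performance measure,
    # based on the evidence of the percept sequence and built-in knowledge.
    return max(actions, key=lambda a: expected_performance(a, percept_sequence, knowledge))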

The ideal mapping from percept sequences to actions Once we realize that an agent's behavior depends only on its percept sequence to date, then we can describe any particular agent by making a table of the action it takes in response to each possible percept sequence. Such a list is called a mapping from percept sequences to actions. We can, in principle, find out which mapping correctly describes an agent by trying out all possible percept sequences and recording which actions the agent does in response. And if mappings describe agents, then ideal mappings describe ideal agents. Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent.
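One literal, if impractical, way to realize such a mapping is a lookup table keyed by the percept sequence to date. The toy table below is an assumption for illustration (a two-location vacuum-style world), not something specified in the slides:

# Toy lookup-table agent: the mapping from percept sequences to actions written out explicitly.
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "Left",
}

percept_sequence = []

def table_driven_agent(percept):
    percept_sequence.append(percept)
    # Fall back to a no-op when the sequence is not in the table.
    return TABLE.get(tuple(percept_sequence), "NoOp")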

Autonomy An agent's behavior can be based on both its own experience and the built-in knowledge used in constructing the agent for the particular environment in which it operates. A system is autonomous to the extent that its behavior is determined by its own experience. It would be reasonable to provide an artificial intelligent agent with some initial knowledge as well as an ability to learn. A truly autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.

STRUCTURE OF INTELLIGENT AGENTS So far we have described agents by their behavior: the action that is performed after any given sequence of percepts. The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture. The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input.

STRUCTURE OF INTELLIGENT AGENTS It might also include software that provides a degree of insulation between the raw computer and the agent program, so that we can program at a higher level. In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated. The relationship among agents, architectures, and programs can be summed up as follows: agent = architecture + program
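A minimal sketch of "agent = architecture + program" (all names here are assumptions for illustration): the architecture reads the sensors, runs the agent program on each percept, and feeds the chosen action to the effectors.

# Sketch: the architecture mediates between sensors/effectors and the agent program.
def run(sensors, effectors, agent_program, steps=100):
    for _ in range(steps):
        percept = sensors()              # architecture makes percepts available
        action = agent_program(percept)  # runs the agent program
        effectors(action)                # feeds the action choice to the effectors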

Software agents Software agents (or software robots or softbots) exist in rich, unlimited domains. Imagine a softbot designed to fly a flight simulator for a 747. The simulator is a very detailed, complex environment, and the software agent must choose from a wide variety of actions in real time. Or imagine a softbot designed to scan online news sources and show the interesting items to its customers. To do well, it will need some natural language processing abilities, it will need to learn what each customer is interested in, and it will need to dynamically change its plans when, for example, the connection for one news source crashes or a new one comes online.

Software agents Some environments blur the distinction between "real" and "artificial." In the ALIVE environment (Maes et al., 1994), software agents are given as percepts a digitized camera image of a room where a human walks about. The agent processes the camera image and chooses an action. The environment also displays the camera image on a large display screen that the human can watch, and superimposes on the image a computer graphics rendering of the software agent. One such image is a cartoon dog, which has been programmed to move toward the human (unless he points to send the dog away) and to shake hands or jump up eagerly when the human makes certain gestures.

Agent programs