Artificial Intelligence and Intelligent Agents
Lecture #3
Intelligence
Intelligence has been defined in many different ways, including as one's capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving. It can be more generally described as the ability to perceive information and to retain it as knowledge to be applied toward adaptive behavior within an environment or context.
Artificial Intelligence
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. The term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Capabilities currently classified as AI include understanding human speech, competing at a high level in strategic games (such as chess and Go), driving cars autonomously, and interpreting complex data. AI research is divided into subfields that focus on specific problems, specific approaches, the use of a particular tool, or particular applications.
Weak AI
Weak AI (also known as narrow AI) is non-sentient artificial intelligence that is focused on one narrow task. Weak AI is defined in contrast to either strong AI (a machine with consciousness, sentience, and mind) or artificial general intelligence (a machine with the ability to apply intelligence to any problem, rather than just one specific problem). All currently existing systems considered artificial intelligence of any sort are weak AI at most. Example: Siri.
Strong AI
Strong AI is a term used to describe a certain mindset of artificial intelligence development. Strong AI's goal is to develop artificial intelligence to the point where the machine's intellectual capability is functionally equal to a human's. This approach presents a solution to the problems of symbolic attempts to create human intelligence in computers.
Strong AI
Instead of trying to give the computer adult-like knowledge from the outset, the computer would only have to be given the ability to interact with the environment and the ability to learn from those interactions. As time passed, it would gain common sense and language on its own. This paradigm seeks to combine the mind and the body, whereas the common trend in symbolic programming (e.g., CYC) has been to disregard the body to the detriment of the computer's intellect.
Neat AI and Scruffy AI
Neat and scruffy are labels for two different types of artificial intelligence (AI) research. Neats consider that solutions should be elegant, clear, and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems that such neat requirements usually mandate.
Neat AI and Scruffy AI
Much success in AI came from combining neat and scruffy approaches. For example, there are many cognitive models matching human psychological data built in Soar and ACT-R. Both of these systems have formal representations and execution systems, but the rules put into the systems to create the models are generated ad hoc.
Soar: Cognitive Architecture
The main goal of the Soar project is to handle the full range of capabilities of an intelligent agent, from highly routine tasks to extremely difficult, open-ended problems. For that to happen, according to the view underlying Soar, the architecture needs to be able to create representations and use appropriate forms of knowledge (such as procedural, declarative, and episodic). Soar must therefore embody a collection of the mechanisms of the mind. Also underlying the Soar architecture is the view that a symbolic system is essential for general intelligence (see the brief comment on neats versus scruffies above).
Soar: Cognitive Architecture
This is known as the physical symbol system hypothesis. The views of cognition underlying Soar are tied to the psychological theory expressed in Allen Newell's book, Unified Theories of Cognition. While symbol processing remains the core mechanism in the architecture, recent versions of the theory incorporate non-symbolic representations and processes, including reinforcement learning, imagery processing, and emotion modeling. Soar's capabilities have always included a mechanism for creating new representations, by a process known as "chunking". Ultimately, Soar's goal is to achieve general intelligence, though this is acknowledged to be an ambitious and possibly very long-term goal.
Agent
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions.
Agents interact with environments through sensors and effectors.
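This percept-to-action interface can be made concrete. The sketch below is a minimal Python rendering of it, using an abstract base class for the generic agent and, as an illustration, a simple reflex agent for the classic two-location vacuum world. All class and method names here are hypothetical choices for this sketch, not part of any standard library.

from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps percepts (from sensors) to actions (for effectors)."""

    @abstractmethod
    def program(self, percept):
        """Return the action chosen in response to the latest percept."""

class ReflexVacuumAgent(Agent):
    """Illustration: a reflex agent for a two-location vacuum world."""

    def program(self, percept):
        location, status = percept          # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

For example, ReflexVacuumAgent().program(("A", "Dirty")) returns "Suck".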
How Should Agents Act? Rationally?
A rational agent is one that does the right thing. What is rational at any given time depends on four things:
1. The performance measure that defines the degree of success.
2. Everything that the agent has perceived so far. We will call this complete perceptual history the percept sequence.
3. What the agent knows about the environment.
4. The actions that the agent can perform.
Ideal rational agent: For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
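As a rough sketch of this definition in Python: given the percept sequence so far, the available actions, and some model of expected performance (a hypothetical function standing in for the agent's built-in knowledge applied to the evidence), the ideal rational agent simply picks the highest-scoring action.

def rational_action(percept_sequence, actions, expected_performance):
    """Pick the action expected to maximize the performance measure.
    expected_performance(percept_sequence, action) is a stand-in for
    the agent's built-in knowledge applied to the evidence so far."""
    return max(actions, key=lambda a: expected_performance(percept_sequence, a))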
The ideal mapping from percept sequences to actions
Once we realize that an agent's behavior depends only on its percept sequence to date, then we can describe any particular agent by making a table of the action it takes in response to each possible percept sequence. Such a list is called a mapping from percept sequences to actions. We can, in principle, find out which mapping correctly describes an agent by trying out all possible percept sequences and recording which actions the agent does in response. And if mappings describe agents, then ideal mappings describe ideal agents. Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent.
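The table idea can be sketched directly. The helper below (hypothetical names, Python) closes over the growing percept sequence and looks the whole sequence up in a designer-supplied table. The catch, and the reason the ideal mapping exists only "in principle", is that even for tiny worlds the set of possible percept sequences grows exponentially with the agent's lifetime.

def make_table_driven_agent(table):
    """table maps a tuple of all percepts seen so far to an action."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))   # None when the sequence is not listed

    return program

# A fragment of such a table for the vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right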
Autonomy
An agent's behavior can be based on both its own experience and the built-in knowledge used in constructing the agent for the particular environment in which it operates. A system is autonomous to the extent that its behavior is determined by its own experience. It would be reasonable to provide an artificial intelligent agent with some initial knowledge as well as an ability to learn. A truly autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.
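One way to picture this balance of built-in knowledge and experience is the sketch below (all names illustrative, not a standard API): designer-supplied rules seed the agent's behavior, and its own experience overrides them over time.

class AutonomousAgent:
    """Built-in knowledge seeds behavior; experience refines it."""

    def __init__(self, builtin_rules):
        self.rules = dict(builtin_rules)        # initial, designer-supplied knowledge

    def program(self, percept):
        return self.rules.get(percept, "NoOp")  # fall back when nothing is known yet

    def learn(self, percept, better_action):
        self.rules[percept] = better_action     # own experience overrides the defaults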
STRUCTURE OF INTELLIGENT AGENTS
So far we have described agents by their behavior: the action that is performed after any given sequence of percepts. The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture. The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input.
STRUCTURE OF INTELLIGENT AGENTS
It might also include software that provides a degree of insulation between the raw computer and the agent program, so that we can program at a higher level. In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated. The relationship among agents, architectures, and programs can be summed up as follows: agent = architecture + program
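A minimal version of this relationship in Python might look as follows; environment.percept() and environment.execute() are assumed interfaces standing in for the sensors and effectors, not a real API.

def run(environment, agent_program, steps=100):
    """A bare-bones 'architecture': make percepts available to the
    program, run it, and feed its action choices to the effectors."""
    for _ in range(steps):
        percept = environment.percept()     # read the sensors
        action = agent_program(percept)     # run the agent program
        environment.execute(action)         # drive the effectors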
Software agents
Software agents (or software robots or softbots) exist in rich, unlimited domains. Imagine a softbot designed to fly a flight simulator for a 747. The simulator is a very detailed, complex environment, and the software agent must choose from a wide variety of actions in real time. Or imagine a softbot designed to scan online news sources and show the interesting items to its customers. To do well, it will need some natural language processing abilities, it will need to learn what each customer is interested in, and it will need to dynamically change its plans when, for example, the connection for one news source crashes or a new one comes online.
Software agents
Some environments blur the distinction between "real" and "artificial." In the ALIVE environment (Maes et al., 1994), software agents are given as percepts a digitized camera image of a room where a human walks about. The agent processes the camera image and chooses an action. The environment also displays the camera image on a large display screen that the human can watch, and superimposes on the image a computer-graphics rendering of the software agent. One such image is a cartoon dog, which has been programmed to move toward the human (unless the human points to send the dog away) and to shake hands or jump up eagerly when the human makes certain gestures.
Agent programs