THE OSCAR PROJECT Rational Cognition in OSCAR John L. Pollock Department of Philosophy University of Arizona Tucson, Arizona 85721

Rational Cognition Humans have a remarkable ability to solve complex problems just by thinking about them.

Humans are not the only creatures capable of problem solving.

The OSCAR Project The original goal of AI was that of building an artificial rational agent. My conviction is that one reason progress in AI has been slower than anticipated is that many of the problems that must be solved are philosophical in nature rather than straightforward engineering problems. Specifically, one constraint on an artificial agent is that it draw conclusions and make decisions that we regard as rational. To accomplish this, we need a general theory of rationality. Considerable insight can be gleaned from approaching rationality from “the design stance”. The goal of the OSCAR project is the construction of a general theory of rationality (both epistemic and practical) and its implementation in an AI system. These lectures introduce the student to the general approach used in the OSCAR project.

Part One: Evaluating Agent Architectures Stuart Russell: “rational agents are those that do the right thing”. –The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. –There are two approaches to the latter problem, depending upon the kind of agent we want to build. Anthropomorphic Agents — those that can help human beings rather directly in their intellectual endeavors. –These endeavors consist of decision making and data processing. –An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Goal-Oriented Agents — those that can carry out certain narrowly-defined tasks in the world. –Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.

Evaluating Goal-Oriented Agents If the design goal of a goal-oriented agent is sufficiently simple, it may be possible to construct a metric that measures how well an agent achieves it. –Then the natural way of evaluating an agent architecture is in terms of the expected value of that metric. –An ideally rational goal-oriented agent would be one whose design maximizes that expected value. –The recent work on bounded optimality (Russell and Subramanian; Horvitz; Zilberstein and Russell, etc.) derives from this approach to evaluating agent architectures. This approach is applicable only when it is possible to construct a metric of success. If the design goal is sufficiently complex, that will be at least difficult, and perhaps impossible.
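To make the expected-value criterion concrete, here is a minimal sketch (in Python) of this style of evaluation. The toy task, the RandomAgent, and the success metric are all invented for illustration; the point is only that an architecture's score is a Monte Carlo estimate of the expected value of a success metric over task instances.

```python
import random

class ToyEnvironment:
    """Hypothetical goal-oriented task: guess a hidden digit."""
    def __init__(self):
        self.target = random.randrange(10)

    def step(self, action):
        # The success metric for one episode: 1 if the goal is
        # achieved, 0 otherwise.
        return 1 if action == self.target else 0

class RandomAgent:
    """A deliberately simple architecture to be evaluated."""
    def act(self):
        return random.randrange(10)

def expected_success(agent, make_env, n_trials=10_000):
    """Monte Carlo estimate of the expected value of the metric."""
    total = sum(make_env().step(agent.act()) for _ in range(n_trials))
    return total / n_trials

print(expected_success(RandomAgent(), ToyEnvironment))  # approx. 0.10
```

On this view, one architecture is better than another just in case it yields a higher expected value of the metric; the slide's point is that for complex design goals no such metric may be constructible in the first place.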

Evaluating Anthropomorphic Agents Here it is the individual decisions and conclusions of the agent that we want to be rational. In principle, we could regard an anthropomorphic agent as a special case of a goal-oriented agent, where now the goal is to make rational decisions and draw rational conclusions, but it is doubtful that we can produce a metric that measures the degree to which such an agent is successful in achieving these goals. Even if we could construct such a metric, it would not provide an analysis of rationality for such an agent, because the metric itself must presuppose prior standards of rationality governing the individual cognitive acts of the agent being evaluated.

Evaluating Anthropomorphic Agents In AI it is often supposed that the standards of rationality that apply to individual cognitive acts are straightforward and unproblematic: –Bayesian probability theory provides the standards of rationality for beliefs; –classical decision theory provides the standards of rationality for practical decisions. It may come as a surprise, then, that most philosophers reject Bayesian epistemology, and I believe there are compelling reasons for rejecting classical decision theory as well.

Bayesian Epistemology Bayesian epistemology asserts that the degree to which a rational agent is justified in believing something can be identified with a subjective probability. Belief updating is governed by conditionalization on new inputs. –There is an immense literature on this. Some of the objections to it are summarized in Pollock and Cruz, Contemporary Theories of Knowledge, 2nd edition (Rowman and Littlefield, 1999). Perhaps the simplest objection to Bayesian epistemology is that it implies that an agent is always rational in believing any truth of logic, because any such truth has probability 1. –However, this conflicts with common sense. Consider a complex tautology like [P ↔ (Q & ~P)] → ~Q. If one of my logic students picks this out of the air and believes it for no reason, we do not regard that as rational. –He should only believe it if he has good reason to believe it. In other words, rational belief requires reasons, and that conflicts with Bayesian epistemology. [further discussion in Contemporary Theories of Knowledge, 2nd edition]
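The objection can be made vivid with a small sketch. The code below (Python; the four-world model over the truth values of P and Q is my own illustrative setup) implements conditionalization and shows that the slide's tautology receives probability 1 under any prior, and that no update can ever lower it:

```python
from itertools import product

def tautology(p, q):
    """[P <-> (Q & ~P)] -> ~Q; on booleans, a <= b encodes 'a implies b'."""
    return (p == (q and not p)) <= (not q)

worlds = list(product([True, False], repeat=2))  # truth values for (P, Q)

def prob(event, dist):
    """Probability of an event under a distribution over the worlds."""
    return sum(dist[w] for w in worlds if event(*w))

def conditionalize(dist, evidence):
    """Bayesian updating: renormalize the distribution on the evidence."""
    z = prob(evidence, dist)
    return {w: (dist[w] / z if evidence(*w) else 0.0) for w in worlds}

prior = {w: 0.25 for w in worlds}                  # any prior would do
print(prob(tautology, prior))                      # 1.0
posterior = conditionalize(prior, lambda p, q: q)  # learn that Q is true
print(prob(tautology, posterior))                  # still 1.0
```

Since the tautology sits at probability 1 from the start, the Bayesian machinery has no way to mark believing it "for no reason" as irrational, which is exactly the slide's complaint.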

Classical Decision Theory Classical decision theory has us choose acts one at a time on the basis of their expected values. I will argue instead that it is courses of action, or plans, that must be evaluated decision-theoretically, and that individual acts become rational by being prescribed by rationally adopted plans. (See Pollock, Cognitive Carpentry, chapter 5, MIT Press, 1995.) Consider the following, where pushing the top four buttons in sequence will produce 10 utiles, and pushing button B will produce 5 utiles. One ought to push the top four buttons. [Figure: button A begins a sequence of four buttons leading to 10 utiles; button B leads directly to 5 utiles.]

Classical Decision Theory But decision theory has us choose buttons one at a time. –The expected utility of pushing button B is 5 utiles. –The expected utility of pushing button A is 10 times the probability that one will push the remaining buttons in sequence. –If one has not yet solved the decision problem, the latter probability is effectively 0. –So decision theory tells us to push button B rather than button A, which is the wrong answer. Instead we should adopt the plan of pushing the top four buttons, and push button A because that is prescribed by the plan. [further discussion in “Plans and Decisions”]
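The contrast between the two ways of evaluating the button problem can be put in a few lines. This is just a worked version of the slide's numbers, not an implementation of any particular decision procedure, and the names of the buttons after A are illustrative:

```python
# Act-by-act evaluation, as classical decision theory prescribes:
p_complete_sequence = 0.0            # before the problem is solved, effectively 0
eu_push_A = 10 * p_complete_sequence # 0.0
eu_push_B = 5                        # 5.0
print("act-by-act:", "push A" if eu_push_A > eu_push_B else "push B")
# -> "push B", the wrong answer

# Plan-based evaluation: compare whole courses of action instead.
plan_values = {
    ("A", "second", "third", "fourth"): 10,  # push the top four buttons
    ("B",): 5,
}
best_plan = max(plan_values, key=plan_values.get)
print("plan-based: push", best_plan[0])  # -> "push A", prescribed by the plan
```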

Classical Decision Theory Furthermore, I will argue below that we cannot just take classical decision theory intact and apply it to plans. A plan is not automatically superior to a competitor just because it has a higher expected value.

Anthropomorphic Agents and Procedural Rationality The design of an anthropomorphic agent requires a general theory of rational cognition. The agent’s cognition must be rational by human standards. Cognition is a process, so this generates an essentially procedural concept of rationality. –Many AI researchers have followed Herbert Simon in rejecting such a procedural account, endorsing instead a satisficing account based on goal-satisfaction, but that is not applicable to anthropomorphic agents.

Procedural Rationality We do not necessarily want an anthropomorphic agent to model human cognition exactly. –We want it to draw rational conclusions and make rational decisions, but it need not do so in exactly the same way humans do it. How can we make sense of this? Stuart Russell (following Herbert Simon) suggests that the appropriate concept of rationality should only apply to the ultimate results of cognition, and not the course of cognition.

Procedural Rationality A conclusion or decision is warranted (relative to a system of cognition) iff it is endorsed “in the limit”. –i.e., there is some stage of cognition at which it becomes endorsed and beyond which the endorsement is never retracted. We might require an agent architecture to have the same theory of warrant as human rational cognition. –This is to evaluate its behavior in the limit. –An agent that drew conclusions and made decisions at random for the first ten million years, and then started over again reasoning just like human beings, would have the same theory of warrant, but it would not be a good agent design. –This is a problem for any assessment of agents in terms of the results of cognition in the limit.
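The limit notion of warrant is easy to state precisely. Here is a minimal sketch (Python; the trace of endorsement sets is an invented stand-in for the stages of cognition) that checks warrant over a finite trace:

```python
def warranted(conclusion, stages):
    """A conclusion is warranted iff at some stage it is endorsed and
    the endorsement is never retracted at any later stage (a finite-
    trace proxy for endorsement 'in the limit')."""
    return any(all(conclusion in later for later in stages[i:])
               for i, stage in enumerate(stages)
               if conclusion in stage)

# Each set is what the agent endorses at one stage of cognition.
trace = [{"P"}, {"P", "Q"}, {"Q"}, {"Q", "R"}]
print(warranted("Q", trace))  # True: endorsed from stage 2 on, never retracted
print(warranted("P", trace))  # False: endorsed early, retracted at stage 3
```

As the slide notes, two systems can agree on what is warranted over every trace and still differ wildly in how quickly and sensibly they get there, which is why warrant alone cannot evaluate an architecture.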

Procedural Rationality It looks like the best we can do is require that the agent’s reasoning never strays very far from the course of human reasoning. –If humans will draw a conclusion within a certain number of steps, the agent will do so within a “comparable” number of steps, and if a human will retract the conclusion within a certain number of further steps, the agent will do so within a “comparable” number of further steps. This is admittedly vague. –We might require that the worst-case difference be polynomial in the number of steps, or something like that. However, this proposal does not handle the case of the agent that draws conclusions randomly for the first ten million years. I am not going to endorse a solution to this problem. I just want to call attention to it, and urge that whatever the solution is, it seems reasonable to think that the kind of architecture I am about to describe satisfies the requisite constraints.

Goal-Oriented Agents Simple goal-oriented agents might be designed without any attention to how human beings achieve the same cognitive goals. I will argue that for sophisticated goal-oriented agents that aim at achieving complex goals in complicated environments, it will often be necessary to build in many of the same capabilities as are required for anthropomorphic agents. –I must postpone arguing for this until I have said more about what is required for anthropomorphic agents.