1 OSCAR: An Architecture for Generally Intelligent Agents John L. Pollock Philosophy and Cognitive Science University of Arizona

2 Sparse Knowledge OSCAR is a fully implemented architecture for cognitive agents, based on the author’s work in philosophy concerning epistemology and the theory of practical reasoning. The basic observation that motivates the OSCAR architecture is that agents of human-level intelligence operating in an environment of real-world complexity (GIAs — “generally intelligent agents”) must be able to form beliefs and make decisions against a background of pervasive ignorance.

3 Defeasible Reasoning In light of our pervasive ignorance, we cannot get around in the world just forming beliefs that follow deductively from what we already know together with new sensor input. GIAs come equipped (by evolution or design) with inference schemes that are reliable in the circumstances in which the agent operates. That is, if the agent reasons in that way, its conclusions will tend to be true, but are not guaranteed to be true. Such reasoning must be defeasible — the agent must be prepared to withdraw conclusions in the face of contrary information. OSCAR is based on a general-purpose defeasible reasoner.
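The withdrawal behavior described above can be illustrated with a minimal sketch (this is not OSCAR's actual implementation; the class, rule, and proposition names are invented for illustration). The example is Pollock's classic one: "this looks red" defeasibly supports "this is red" unless we learn the object is illuminated by red light.

```python
# Minimal sketch of defeasible inference: a conclusion licensed by a
# defeasible rule is withdrawn when a defeater for it is believed.
# (Illustrative only; OSCAR's reasoner is far more general.)

class DefeasibleRule:
    def __init__(self, premise, conclusion, defeaters):
        self.premise = premise        # proposition required to fire the rule
        self.conclusion = conclusion  # proposition defeasibly inferred
        self.defeaters = defeaters    # propositions that block the inference

def conclusions(beliefs, rules):
    """Return the conclusions licensed by the current beliefs."""
    out = set(beliefs)
    for r in rules:
        if r.premise in out and not any(d in beliefs for d in r.defeaters):
            out.add(r.conclusion)
    return out

rule = DefeasibleRule("looks-red", "is-red",
                      defeaters={"red-light-illumination"})

# Without contrary information, "is-red" is inferred.
print(conclusions({"looks-red"}, [rule]))
# Learning about the deceptive lighting withdraws "is-red".
print(conclusions({"looks-red", "red-light-illumination"}, [rule]))
```

The point of the sketch is only the shape of the behavior: adding a belief can remove a conclusion, which is exactly what a deductive consequence relation can never do.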

4 Defeasible Reasoning is Δ₂ A GIA must be capable of employing a rich system of representations — at least a full first-order language. In a first-order language, the set of defeasible consequences of a set of assumptions is not recursively enumerable (Israel 1981 and Reiter 1981). Given some reasonable assumptions, it is Δ₂.
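For reference, the Δ₂ claim can be unpacked via the standard limit lemma: a set is Δ₂ just in case membership is computable in the limit, i.e. decidable by a guessing procedure that may change its answer finitely often. Schematically:

```latex
% Shoenfield's limit lemma: S is \Delta_2 iff its characteristic
% function is the limit of a computable guessing function g.
S \in \Delta_2
\;\iff\;
\exists\, g \text{ computable such that }
\forall x:\ \chi_S(x) \;=\; \lim_{n \to \infty} g(x, n)
```

This matches the picture developed in the following slides: no terminating procedure decides the defeasible consequences, but a reasoner that keeps revising its provisional verdicts can converge on them in the limit.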

5 Double Defeasibility Standard nonmonotonic logics capture only simple defeasibility. But argument construction is non-terminating. –Defeat statuses can only be computed provisionally, on the basis of the arguments constructed so far. –The cognizer must be prepared to change its mind about defeat statuses if it finds new relevant arguments. –In other words, the defeat status computation must itself be defeasible. –We can put this by saying that the reasoning is doubly defeasible. To the best of my knowledge, OSCAR is the only implemented defeasible reasoner that accommodates double defeasibility.
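Double defeasibility can be sketched as recomputing defeat statuses each time a new argument turns up. In the toy run below (names and data structures are invented; this is not OSCAR's status computation), argument A is provisionally undefeated, loses that status when a defeating argument B is found, and regains it when C defeats B.

```python
# Sketch of provisional defeat-status computation: an argument is
# undefeated iff no undefeated argument defeats it. Recomputed from
# scratch whenever the argument set grows. (Illustrative only.)

def defeat_statuses(arguments, defeats):
    """`defeats` maps an argument to the set of arguments it defeats.
    Iterate status flips to a fixed point (fine for acyclic defeat)."""
    status = {a: True for a in arguments}   # provisionally undefeated
    changed = True
    while changed:
        changed = False
        for a in arguments:
            attacked = any(status[b] and a in defeats.get(b, set())
                           for b in arguments)
            if status[a] == attacked:       # status must be "not attacked"
                status[a] = not attacked
                changed = True
    return status

# Stage 1: only A exists, so A is undefeated.
print(defeat_statuses({"A"}, {}))
# Stage 2: a defeater B is found; A's status is provisionally revised.
print(defeat_statuses({"A", "B"}, {"B": {"A"}}))
# Stage 3: C defeats B, reinstating A.
print(defeat_statuses({"A", "B", "C"}, {"B": {"A"}, "C": {"B"}}))
```

Each stage's verdict is correct relative to the arguments found so far, and each is overturned by further search — which is the "doubly defeasible" phenomenon the slide describes.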

6 Problem Solving is Non-Terminating These observations have the consequence that a GIA cannot be viewed as a classical problem solver. Because reasoning is non-terminating, a GIA can only solve problems defeasibly. –When the agent has to act, it acts on the basis of the solutions it has found to date, but if it has more time, there is always the possibility that better solutions will emerge or that difficulties will be found for previously discovered solutions. –Problems can be set aside when defeasible solutions are found, but they can never be forgotten.

7 Practical Cognition The preceding observations are about epistemic cognition. But they have important implications for rational decision making, and in particular, for planning. There has been impressive work on high-performance planners, but most of it is inapplicable to building GIAs. –High-performance planners assume perfect knowledge of the agent’s environment, and do so in an absolutely essential way that cannot be relaxed. –This flies in the face of our initial observation of pervasive ignorance. –So GIAs cannot be constructed on the basis of such planners.

8 Decision-Theoretic Planners Existing decision-theoretic planners assume that the cognitive agent has at its disposal a complete probability distribution. This assumption is equally preposterous for GIAs. –300 simple propositions generate 2^300 logically independent probabilities. –2^300 ≈ 10^90. –The best current estimates put the number of elementary particles in the universe at roughly 10^80. –This problem cannot be avoided by assuming statistical independence and using Bayesian nets.
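The arithmetic behind the slide is easy to check:

```python
import math

# 2^300 has 91 decimal digits, so it is on the order of 10^90 --
# vastly more probabilities than particles in the universe.
n = 2 ** 300
print(len(str(n)))           # 91 digits
print(300 * math.log10(2))   # exponent ~ 90.3
```

So even one stored probability per elementary particle would fall short by some ten orders of magnitude.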

9 “Computing” Plans Existing planners “compute plans”. –Their input is a planning problem which includes all of the information needed to find a plan, and then they run a program that computes a plan and terminates. A GIA cannot be expected to come to a planning problem already knowing everything that is relevant to solving the problem. –In the course of trying to construct a plan, an agent will typically encounter things it would like to know but does not know. The defeasible search for knowledge is non-terminating, so the planner cannot first solve the epistemic problems and then do the planning. Planning and epistemic cognition must be interleaved.
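One way to picture the interleaving is a planner that suspends whenever it hits a question it cannot yet answer, lets epistemic cognition supply a (possibly defeasible) answer, and then resumes. The coroutine sketch below is purely illustrative; the domain, queries, and function names are all invented.

```python
# Sketch of interleaved planning and epistemic cognition: the planner
# yields epistemic queries and is resumed with answers. (Toy example.)

def planner(goal):
    """Coroutine-style planner: yields queries, receives answers."""
    door_locked = yield "is the door locked?"      # suspend on an unknown
    if door_locked:
        have_key = yield "do we have the key?"
        plan = ["unlock door", "open door", goal] if have_key else ["find key"]
    else:
        plan = ["open door", goal]
    return plan

def run(goal, answer):
    """Drive the planner, answering each query with `answer(query)`."""
    gen = planner(goal)
    query = next(gen)
    try:
        while True:
            query = gen.send(answer(query))        # resume with an answer
    except StopIteration as done:
        return done.value

beliefs = {"is the door locked?": True, "do we have the key?": False}
print(run("enter room", lambda q: beliefs.get(q, False)))
```

In a GIA the `answer` step would itself be non-terminating defeasible reasoning, so the answers (and therefore the plans) remain revisable; the sketch only shows the control-flow shape.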

10 Defeasible Planning It is an open question what form of planning should be employed by a GIA, but in the absence of the perfect knowledge assumption, the only familiar kind of planner that can be employed in a GIA is something like a classical goal regression planner. I do not want to claim that this is exactly the way planning should work in a GIA, but perhaps we can at least assume that the planner is a refinement planner that (1) constructs a basic plan, (2) looks for problems (“threats”), and (3) tries to fix the plan.

11 Defeasible Planning The epistemic reasoning involved in searching for threats will not be terminating. It follows that the set of ⟨planning-problem, solution⟩ pairs cannot be recursively enumerable, and planning cannot be performed by a terminating computational process. The result is that, just like other defeasible reasoning, planning cannot be done by “computing” the plans to be adopted. The computation could never terminate.

12 Defeasible Planning A GIA must be prepared to adopt plans provisionally in the absence of knowledge of threats (that is, assume defeasibly that there are no threats), and then withdraw the plans and try to repair them when defeaters are found. So the logical structure of planning is indistinguishable from general defeasible reasoning. My suggestion is that a GIA should reason epistemically about plans rather than computing them with some computational black box.
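The refinement cycle described in slides 10–12 (construct a basic plan, search for threats, repair, adopt provisionally) can be sketched as an anytime loop. The toy "door" domain, the threat, and the repair step below are stand-ins invented for illustration; in a real GIA the threat search is open-ended defeasible reasoning rather than a terminating function.

```python
# Sketch of provisional plan adoption: assume defeasibly that there are
# no threats, withdraw and repair the plan when a threat is found, and
# act on the best plan found so far when time runs out. (Toy example.)

def refine(plan, find_threat, repair, budget=10):
    """Anytime refinement loop. Returns (plan, threat_free), where
    threat_free means only that no threat is *currently* known."""
    for _ in range(budget):
        threat = find_threat(plan)    # epistemic search for defeaters
        if threat is None:
            return plan, True         # adopted defeasibly
        plan = repair(plan, threat)   # withdraw and try to fix
    return plan, False                # out of time: best-so-far

# Toy domain: the fetch plan is threatened unless the door is wedged.
def find_threat(plan):
    return "door may close" if "wedge door" not in plan else None

def repair(plan, threat):
    return plan + ["wedge door"]      # add a protective step

print(refine(["open door", "fetch book"], find_threat, repair))
```

Note that even the `True` result is provisional: it records the absence of *known* threats, which is exactly the defeasible assumption the slide describes.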

13 What Has Been Done The general architecture of OSCAR, including the defeasible reasoner, has been implemented in LISP, and OSCAR and the OSCAR Manual can be downloaded from the project website. To build a cognitive system, the architecture must be supplemented with inference schemes. Defeasible inference schemes for perceptual reasoning, temporal projection, causal reasoning, and classical planning have been implemented. Work on a defeasible decision-theoretic planner is underway.