Intelligent Behavior in Humans and Machines

Pat Langley
Computational Learning Laboratory
Center for the Study of Language and Information
Stanford University, Stanford, California USA

Thanks to Herbert Simon, Allen Newell, John Anderson, David Nicholas, John Laird, Randy Jones, and many others for discussions that led to the ideas presented in this talk.

Basic Claims

Early AI was closely linked to the study of human cognition. This alliance produced many ideas that have been crucial to the field's long-term development.

Over the past 20 years, that connection has largely been broken, which has hurt our ability to pursue two of AI's original goals:

- to understand the nature of the human mind
- to achieve artifacts that exhibit human-level intelligence

Re-establishing the connection to psychology would help achieve these challenging objectives.

Outline of the Talk

- Review of early AI accomplishments that benefited from connections to cognitive psychology
- Examples of AI's current disconnection from psychology and some reasons behind this unfortunate development
- Ways that AI can benefit from renewed links to psychology
- Research on cognitive architectures as a promising avenue
- Steps we can take to encourage research along these lines

Early Links Between AI and Psychology

As AI emerged in the 1950s, one central insight was that computers might reproduce the complex cognition of humans. Some took human intelligence as an inspiration without trying to model the details. Others, like Herb Simon and Allen Newell, viewed themselves as psychologists aiming to explain human thought.

This paradigm was pursued vigorously at Carnegie Tech, and it was respected elsewhere. The approach was well represented in the early edited volume Computers and Thought.

Early Research on Knowledge Representation

Much initial work on representation dealt with the structure and organization of human knowledge:

- Hovland and Hunt's (1960) CLS
- Feigenbaum's (1963) EPAM
- Quillian's (1968) semantic networks
- Schank and Abelson's (1977) scripts
- Newell's (1973) production systems

Not all research was motivated by concerns with psychology, but it had a strong impact on the field.

Early Research on Problem Solving

Studies of human problem solving also influenced early AI research:

- Newell, Shaw, and Simon's (1958) Logic Theorist
- Newell, Shaw, and Simon's (1961) General Problem Solver
- de Groot's (1965) discovery of progressive deepening
- VanLehn's (1980) analysis of impasse-driven errors

Psychological studies led to key insights about both state-space and goal-directed heuristic search.

Initial Paper on the Logic Theorist

Early Research on Knowledge-Based Reasoning

The 1980s saw many developments in knowledge-based reasoning that incorporated ideas from psychology:

- expert systems (e.g., Waterman, 1986)
- qualitative physics (e.g., Kuipers, 1984; Forbus, 1984)
- model-based reasoning (e.g., Gentner & Stevens, 1983)
- analogical reasoning (e.g., Gentner & Forbus, 1991)

Research on natural language also borrowed many ideas from studies of structural linguistics.

Early Research on Learning and Discovery

Many AI systems also served as models of human learning and discovery processes:

- categorization (Hovland & Hunt, 1960; Feigenbaum, 1963; Fisher, 1987)
- problem solving (Anzai & Simon, 1979; Anderson, 1981; Minton et al., 1989; Jones & VanLehn, 1994)
- natural language (Reeker, 1976; Anderson, 1977; Berwick, 1979; Langley, 1983)
- scientific discovery (Lenat, 1977; Langley, 1979)

This work reflected the diverse forms of knowledge supported by human learning and discovery.

The Unbalanced State of Modern AI

Unfortunately, AI has moved away from modeling human cognition and become unfamiliar with results from psychology.

Despite the historical benefits, many AI researchers now believe psychology has little to offer the field. Similarly, few psychologists believe that results from AI are relevant to modeling human behavior.

This shift has taken place across many research areas, and it has occurred for several reasons.

Current Emphases in AI Research

Knowledge representation
- focus on restricted logics that guarantee efficient processing
- less flexibility and power than observed in human reasoning

Problem solving and planning
- partial-order and, more recently, disjunctive planners
- bear little resemblance to problem solving in humans

Natural language processing
- statistical methods with few links to psychology or linguistics
- focus on tasks like information retrieval and extraction

Machine learning
- statistical techniques that learn far more slowly than humans
- almost exclusive focus on classification and reactive control

Technological Reasons for the Shift

One reason revolves around faster computer processors and larger memories, which have made possible new methods for:

- playing games by carrying out far more search than humans
- finding complicated schedules that trade off many factors
- retrieving relevant items from large document repositories
- inducing complex predictive models from large data sets

These are genuine scientific advances, but AI might fare even better by incorporating insights from human behavior.

Formalist Trends in Computer Science

Another factor involves AI's typical home in departments of computer science:

- which often grew out of mathematics departments
- where analytical tractability is a primary concern
- where guaranteed optimality trumps heuristic satisficing
- even when this restricts work to narrow problem classes

Many AI faculty in such organizations view connections to psychology with intellectual suspicion.

Commercial Success of AI

Another factor has been AI's commercial success, which has:

- led many academics to study narrowly defined tasks
- produced a bias toward near-term applications
- caused an explosion of work on niche AI

Moreover, component algorithms are much easier to evaluate experimentally, especially given available repositories.

Such focused efforts are appropriate for corporate AI labs, but academic researchers should aim for higher goals.

Benefits: Understanding Human Cognition

One reason for renewed interchange between the two fields is to understand the nature of human cognition:

- because this would have important societal applications in education, interface design, and other areas;
- because human intelligence comprises an important set of phenomena that demand scientific explanation.

This remains an open and challenging problem, and AI systems remain the best way to tackle it.

Benefits: Source of Challenging Tasks

Another reason is that observations of human abilities serve as an important source of challenges, such as:

- understanding language at a deeper level than current systems
- interleaving planning with execution in pursuit of many goals
- learning complex knowledge structures from few experiences
- carrying out creative activities in art and science

Most work in AI sets its sights too low by focusing on tasks that hardly involve intelligence. Psychological studies reveal the impressive abilities of human cognition and pose new problems for AI research.

Benefits: Constraints on Intelligent Artifacts

To develop intelligent systems, we must constrain their design, and findings about human behavior can suggest:

- how the system can represent and organize knowledge;
- how the system can use that knowledge in performance;
- how the system can acquire knowledge from experience.

Some of the most interesting AI research uses psychological ideas as design heuristics, including abilities we do not need (e.g., to carry out rapid and extensive search).

Humans remain our only example of general intelligent systems, and insights about their operation deserve serious consideration.

AI and Cognitive Systems

In 1973, Allen Newell argued "You can't play twenty questions with nature and win." Instead, he proposed that we:

- move beyond isolated phenomena and capabilities to develop complete models of intelligent behavior;
- develop cognitive systems that make strong theoretical claims about the nature of the mind;
- view cognitive psychology and artificial intelligence as close allies with distinct but related goals.

Newell claimed that a successful framework should provide a unified theory of intelligent behavior. He associated these aims with the idea of a cognitive architecture.

Assumptions of Cognitive Architectures

Most cognitive architectures incorporate a variety of assumptions from psychological theories:

1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Performance and learning compose elements in memory

These claims are shared by a variety of architectures, including ACT-R, Soar, Prodigy, and ICARUS.
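To make assumptions 1 through 4 concrete, here is a minimal sketch of a recognize-act loop in Python. The rule format, the conflict-resolution policy, and the blocks-world elements are illustrative assumptions, not the mechanisms of ACT-R, Soar, Prodigy, or ICARUS.

```python
# Toy illustration of the shared architectural assumptions: a distinct
# short-term memory, modular symbolic rules in long-term memory, pattern
# matching for retrieval, and repeated retrieval/selection/action cycles.

LONG_TERM_MEMORY = [
    {"name": "stack",   "if": {"clear A", "clear B"}, "add": {"on A B"},  "del": {"clear B"}},
    {"name": "unstack", "if": {"on A B"},             "add": {"clear B"}, "del": {"on A B"}},
]

short_term_memory = {"clear A", "clear B"}   # active elements, separate from the long-term store


def matching_rules(stm, ltm):
    """Retrieve long-term rules whose conditions match active short-term elements."""
    return [rule for rule in ltm if rule["if"] <= stm]


def select(candidates):
    """Conflict resolution: prefer the rule with the most specific conditions."""
    return max(candidates, key=lambda rule: len(rule["if"]), default=None)


def run(cycles=3):
    """Cognition as repeated retrieval/selection/action cycles."""
    global short_term_memory
    for step in range(cycles):
        chosen = select(matching_rules(short_term_memory, LONG_TERM_MEMORY))
        if chosen is None:
            print(f"cycle {step}: impasse, no rule matches")
            break
        print(f"cycle {step}: firing {chosen['name']}")
        short_term_memory = (short_term_memory | chosen["add"]) - chosen["del"]


run()
```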

Ideas about Representation

Cognitive psychology makes important representational claims:

- each element in a short-term memory is an active version of some structure in long-term memory;
- many mental structures are relational in nature, in that they describe connections or interactions among objects;
- concepts and skills encode different aspects of knowledge that are stored as distinct cognitive structures;
- long-term memories have hierarchical organizations that define complex structures in terms of simpler ones.

Many architectures adopt these assumptions about memory.
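The sketch below illustrates these representational claims with hypothetical data structures: relational concepts and skills stored separately, each defined hierarchically in terms of simpler structures, with short-term memory holding active instances of them. The predicate and skill names are invented for illustration and are not drawn from any published architecture.

```python
# Relational, hierarchical concepts and skills kept as distinct long-term structures.
from dataclasses import dataclass, field


@dataclass
class Concept:
    """A relational concept defined over typed arguments and simpler relations."""
    name: str
    args: tuple[str, ...]
    relations: list[tuple[str, ...]] = field(default_factory=list)


@dataclass
class Skill:
    """A skill stored separately from concepts, with a hierarchical decomposition."""
    name: str
    args: tuple[str, ...]
    requires: list[tuple[str, ...]]                                  # concepts that must hold
    subskills: list[tuple[str, ...]] = field(default_factory=list)  # simpler skills it invokes


# Concept hierarchy: "clear" is defined in terms of the simpler relation "on".
ON = Concept("on", ("?x", "?y"))
CLEAR = Concept("clear", ("?x",), relations=[("not", "on", "?y", "?x")])

# Skill hierarchy: "stack" decomposes into simpler pick-up and put-down skills.
PICK_UP = Skill("pick-up", ("?x",), requires=[("clear", "?x")])
PUT_DOWN = Skill("put-down", ("?x", "?y"), requires=[("clear", "?y")])
STACK = Skill("stack", ("?x", "?y"),
              requires=[("clear", "?x"), ("clear", "?y")],
              subskills=[("pick-up", "?x"), ("put-down", "?x", "?y")])

# Short-term memory holds active instances of these long-term structures.
short_term_beliefs = [("on", "A", "table"), ("clear", "A"), ("clear", "B")]
print(STACK.subskills)
```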

Architectural Commitment to Processes

In addition, a cognitive architecture makes commitments about:

- performance processes for:
  - retrieval, matching, and selection
  - inference and problem solving
  - perception and motor control
- learning processes that:
  - generate new long-term knowledge structures
  - refine and modulate existing structures

In most cognitive architectures, performance and learning are tightly intertwined, again reflecting influence from psychology.

Ideas about Performance

Cognitive psychology makes clear claims about performance:

- humans often resort to problem solving and search to solve novel, unfamiliar problems;
- problem solving depends on mechanisms for retrieval and matching, which occur rapidly and unconsciously;
- people use heuristics to find satisfactory solutions, rather than algorithms to find optimal ones;
- problem solving in novices requires more cognitive resources than experts' use of automatized skills.

Many architectures embody these ideas about performance.
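The contrast between heuristic satisficing and optimality-guaranteeing algorithms can be shown with a small sketch: a greedy best-first search that accepts the first solution under an aspiration level instead of proving optimality. The toy graph, heuristic values, and threshold are assumptions made up for the example.

```python
# Heuristic, satisficing search: stop at the first "good enough" solution.
import heapq


def satisficing_search(start, goal, graph, heuristic, good_enough):
    """Greedy best-first search that accepts the first path whose cost <= good_enough."""
    frontier = [(heuristic[start], start, [start], 0)]
    visited = set()
    while frontier:
        _, node, path, cost = heapq.heappop(frontier)
        if node == goal:
            if cost <= good_enough:
                return path, cost        # satisfactory, not necessarily optimal
            continue                     # too costly; keep searching other paths
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in graph[node]:
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt], cost + step_cost))
    return None, None


# Toy problem space: the heuristic is misleading, so the satisficer returns the
# acceptable path S-A-G (cost 7) rather than the optimal S-B-G (cost 6).
GRAPH = {"S": [("A", 1), ("B", 5)], "A": [("G", 6)], "B": [("G", 1)], "G": []}
H = {"S": 5, "A": 1, "B": 4, "G": 0}
print(satisficing_search("S", "G", GRAPH, H, good_enough=7))
```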

Ideas about Learning

Cognitive psychology has also developed ideas about learning:

- efforts to overcome impasses during problem solving can lead to new cognitive structures;
- learning can transform backward-chaining heuristic search into forward-chaining behavior;
- learning is incremental and interleaved with performance;
- structural learning involves monotonic addition of symbolic elements to long-term memory;
- transfer to new tasks depends on the amount of structure shared with previously mastered tasks.

Architectures often incorporate these ideas into their operation.
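A minimal sketch of the first, third, and fourth claims appears below: when no stored rule handles a situation, the system falls back on search, then caches the result as a new symbolic rule so that later encounters are handled directly. This is a toy illustration in the spirit of chunking or composition, not the specific learning mechanism of Soar, ACT-R, or ICARUS.

```python
# Impasse-driven, monotonic structural learning interleaved with performance.

long_term_rules = {}          # situation -> action, grown monotonically


def slow_problem_solving(situation):
    """Stand-in for deliberate, resource-hungry search over a problem space."""
    return f"plan-for-{situation}"      # assume search always finds some plan


def solve(situation):
    if situation in long_term_rules:                 # automatized skill: fast retrieval
        return long_term_rules[situation], "retrieved"
    action = slow_problem_solving(situation)         # impasse: resort to search
    long_term_rules[situation] = action              # cache a new symbolic structure
    return action, "learned"


for trial in ["stack-A-on-B", "stack-A-on-B", "clear-C"]:
    action, how = solve(trial)
    print(f"{trial}: {action} ({how})")
# The first trial of each task is solved by search; repeats are retrieved directly.
```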

Architectures as Programming Languages

Cognitive architectures come with a programming language that:

- includes a syntax linked to its representational assumptions
- inputs long-term knowledge and initial short-term elements
- provides an interpreter that runs the specified program
- incorporates tracing facilities to inspect system behavior

Such programming languages ease construction and debugging of knowledge-based systems. Thus, ideas from psychology can support efficient development of software for intelligent systems.
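The sketch below illustrates each of these four ingredients with an invented miniature rule syntax, a loader for long-term knowledge and initial short-term elements, an interpreter, and a trace flag. The syntax and names are assumptions made for illustration and do not correspond to any real architecture's language.

```python
# A toy "architecture as programming language": syntax, loader, interpreter, trace.

PROGRAM = """
rule greet:  if (sees person)    then (say hello)
rule answer: if (hears question) then (say reply)
"""

INITIAL_STM = {"(sees person)"}


def load(program_text):
    """Parse the toy syntax into symbolic condition-action rules (long-term knowledge)."""
    rules = []
    for line in program_text.strip().splitlines():
        name, rest = line.split(":", 1)
        condition, action = rest.split("then")
        rules.append({"name": name.split()[1],
                      "if": condition.replace("if", "").strip(),
                      "then": action.strip()})
    return rules


def run(rules, stm, cycles=2, trace=True):
    """Interpreter: match, select the first applicable rule, act; optionally trace."""
    for step in range(cycles):
        applicable = [r for r in rules if r["if"] in stm]
        if not applicable:
            if trace:
                print(f"cycle {step}: no rule matches, halting")
            return
        rule = applicable[0]
        stm.add(rule["then"])
        if trace:
            print(f"cycle {step}: {rule['name']} fired, added {rule['then']}")


run(load(PROGRAM), set(INITIAL_STM))
```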

Responses: Broader AI Education

Most current AI courses ignore the field's history; we need a broader curriculum that covers its connections to:

- cognitive psychology
- structural linguistics
- logical reasoning
- philosophy of mind

These areas are more important to AI's original agenda than are ones from mainstream computer science. For one example, see a course I have offered for the past three years.

Responses: Funding Initiatives

In recent years, DARPA and NSF have taken promising steps in this direction, with clear effects on the community. However, we need more funding programs along these lines.

We also need funding to support additional AI research that:

- makes contact with ideas from computational psychology
- addresses the same range of tasks that humans can handle
- develops integrated cognitive systems that move beyond component algorithms

Responses: Publication Venues

We also need places to present work in this paradigm, such as:

- AAAI's new track for integrated intelligent systems
- this year's Spring Symposium on AI meets Cognitive Science
- the special issue of AI Magazine on human-level intelligence

We need more outlets of this sort, but recent events have been moving the field in the right direction.

Closing Remarks

In summary, AI's original vision was to understand the basis of intelligent behavior in humans and machines. Many early systems doubled as models of human cognition, while others made effective use of ideas from psychology.

Recent years have seen far less research in this tradition, with AI becoming a set of narrow, specialized subfields. Re-establishing contact with ideas from psychology, including work on cognitive architectures, can remedy this situation.

The next 50 years must see AI return to its psychological roots if it hopes to achieve human-level intelligence.

Closing Dedication

I would like to dedicate this talk to two of AI's founding fathers:

Allen Newell (1927 – 1992)
Herbert Simon (1916 – 2001)

Both contributed to the field in many ways: posing new problems, inventing methods, writing key papers, and training students. They were both interdisciplinary researchers who contributed not only to AI but to other disciplines, including psychology.

Allen Newell and Herb Simon were excellent role models whom we should all aim to emulate.