The Symbol Grounding Problem

The Symbol Grounding Problem Presenter: Ankur Garg - Good morning, everyone. I'll be talking about "The Symbol Grounding Problem," a paper originally written to explain cognitive theory, that is, the theory of the internal functioning of the brain that accounts for our observed behavior. Since we are supposedly living in the current age of AI, which aims to develop systems that mimic our brains, I felt this paper was quite an interesting read on how far we are from actual AI.

Motivation: The Chinese Room[1] - The motivation for this paper stems from a major thought experiment proposed by John Searle in the early 80s, known as the Chinese Room experiment; first, let's see a glimpse of what it was. - As you saw in the video, this experiment countered the claims of GOFAI (good old-fashioned, symbolic AI), which was popular at the time. I'll come back to this experiment later and show its relevance for today's AI systems. [1] Searle, John. "The Chinese Room." [2] Video Source: https://www.youtube.com/watch?v=D0MD4sRHj1M

Motivation: Chinese/Chinese Dictionary-Go-Round[1] Endless looping between Chinese symbols No understanding of the symbols 马 → 骘 → 骑[2] - The author gives another example, termed the Chinese/Chinese Dictionary-Go-Round. Suppose you want to learn Chinese but you only have access to a Chinese-Chinese dictionary. You are likely to oscillate endlessly between different symbols without ever learning what any symbol actually means in the real world. Together, the two examples show that our learning of language, which is itself a set of symbols, cannot be based on symbols alone. [1] Harnad, Stevan. "The symbol grounding problem." Physica D: Nonlinear Phenomena 42.1-3 (1990): 335-346. [2] Chinese synonyms fetched from Google Translate
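To make the dictionary-go-round concrete, here is a toy Python sketch of my own (not from the paper); the three-entry synonym cycle is invented purely for illustration. Every lookup returns only another symbol, so the search for meaning never bottoms out.

```python
# Toy illustration: a learner armed only with a Chinese-Chinese dictionary
# can only hop from one symbol to the next; every "definition" is itself
# just more ungrounded symbols.
toy_dictionary = {"马": "骑", "骑": "骘", "骘": "马"}  # made-up synonym cycle

def look_up(symbol, steps=9):
    """Follow dictionary entries; the trail never reaches anything non-symbolic."""
    trail = [symbol]
    for _ in range(steps):
        symbol = toy_dictionary[symbol]  # another symbol, never a meaning
        trail.append(symbol)
    return trail

print(" → ".join(look_up("马")))  # 马 → 骑 → 骘 → 马 → ... forever
```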

Symbol Grounding Problem "How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?"[1] Formally, the symbol grounding problem is defined through this statement in the paper. Basically, the symbols should explicitly convey the intended meaning instead of relying on the reader to interpret its meaning. "connecting to the world" [1] Harnad, Stevan. "The symbol grounding problem." Physica D: Nonlinear Phenomena 42.1-3 (1990): 335-346.

Why is it important? 马 → 骘 → 骑 Connects symbols to the world The obvious benefit of solving this problem is that the Chinese symbols we saw in the earlier slide can then be grounded in actual horses, since they are all synonyms for "horse". [1] Chinese synonyms fetched from Google Translate [2] Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Nokota_Horses_cropped.jpg

Why is it important? Explain cognitive theory How do we discriminate and identify? Showing cognition is more than just symbol manipulation The larger goal is to explain a cognitive theory of the human brain: what happens inside it that produces behaviors like discriminating between two objects and identifying the category an object belongs to. Once we see the symbol system formalism, we will see that it is completely independent of the physical things its symbols refer to; the author attempts to bridge this gap through the symbol grounding problem.

Contributions Defining the symbol grounding problem A bottom-up solution that grounds symbols in non-symbolic representations Defining the role of connectionism in symbol grounding These are the three main contributions of the work.

Background Modeling the mind Symbol systems Connectionism Now let's take a step back and clarify some important terms mentioned in the previous slides. I'll focus on symbol systems and connectionism. Is there any other term you would like me to disambiguate before moving on?

Symbol Systems A formal system satisfying 8 properties, including: explicit rules, semantic interpretability, compositeness A programming language is a symbol system
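To show what "manipulating symbol tokens purely by their shape, according to explicit rules" looks like in practice, here is a minimal Python sketch of my own; the rule table and token names are invented for illustration and are not from the paper.

```python
# Minimal sketch of a formal symbol system: tokens are rewritten purely by
# their shape, following explicit rules. Any "meaning" of the tokens is
# supplied by us, the interpreters, not by the system itself.
rules = {
    ("HORSE", "STRIPES"): "ZEBRA",  # the system sees only arbitrary shapes
    ("BIRD", "LARGE"): "OSTRICH",
}

def rewrite(tokens):
    """Apply the first rule whose left-hand side matches an adjacent pair."""
    for i in range(len(tokens) - 1):
        key = (tokens[i], tokens[i + 1])
        if key in rules:
            return tokens[:i] + [rules[key]] + tokens[i + 2:]
    return tokens

print(rewrite(["HORSE", "STRIPES"]))  # ['ZEBRA'] -- yet nothing here "knows" what a zebra is
```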

Symbol Systems Independent of physical realizations The most successful approach to building AI systems at the time Better at formal, language-like tasks

Connectionist Systems "neural networks", "parallel distributed processing" Restricted to explaining observable behavior and causal interactions - cognitive aspect "Brainlike" not necessary criteria

Connectionist Systems Better at sensory, motor, and learning tasks However, some higher-level tasks seem symbolic: logical reasoning, mathematics, chess-playing

Candidate Solution

Connecting to the World: Hybrid Approach Symbol systems provide symbolic representations built from elementary symbols; connectionist systems provide the iconic and categorical representations that ground those elementary symbols

Iconic Representation Projections of objects on sensory surfaces Helps in discriminating between two objects [1] Zebra Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Zebra_3945_Nevit.svg [2] Horse Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Nokota_Horses_cropped.jpg
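As a rough illustration (my own, not from the paper), an iconic representation can be pictured as the raw sensory projection itself, here a small numeric array standing in for a retinal image; discrimination then only needs a same/different judgement between two such projections.

```python
import numpy as np

# Sketch: an "iconic representation" as a raw sensory projection (a tiny
# fake image). Discrimination needs only a similarity judgement between two
# projections; it does not require knowing what either object is.
def discriminate(projection_a, projection_b, threshold=0.1):
    """Return True if the two sensory projections look different."""
    distance = np.mean(np.abs(projection_a - projection_b))
    return bool(distance > threshold)

horse_view = np.random.rand(16, 16)  # stand-in for a retinal projection
zebra_view = np.random.rand(16, 16)
print(discriminate(horse_view, zebra_view))  # True: the projections differ
print(discriminate(horse_view, horse_view))  # False: identical projections
```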

Categorical Representations "Invariant features" of sensory projection Helps in identifying a member of a category Example Invariant Features Different Horses - add some images of invariant features [1] Horses Image Credit (Creative Commons license): https://en.wikipedia.org/wiki/File:Brockhaus_and_Efron_Encyclopedic_Dictionary_b35_043-0.jpg#filelinks [2] Features Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Horse_anatomy_head.jpg

Symbolic Representations Compose a grounded set of elementary symbols Builds semantic interpretation Allows identification of unseen objects ("zebra" = "horse" & "stripes") [1] Zebra Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Zebra_3945_Nevit.svg [2] Stripes Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Stripes_scaling_test_image_scaledup400,rot45,cropped.png [3] Horse Image Credit (Creative Commons license): https://commons.wikimedia.org/wiki/File:Nokota_Horses_cropped.jpg
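The paper's own example is "zebra" = "horse" & "stripes"; here is a toy Python sketch of my own showing how a composed symbol inherits the grounding of its parts. The simple set-based detectors stand in for the grounded categorical representations of the previous slides.

```python
# Sketch: once "horse" and "striped" are grounded in detectors, a new symbol
# can be defined purely by composition -- and it inherits their grounding,
# letting the system identify a zebra it has never seen before.
def is_horse(features):
    return {"four_legs", "mane", "hooves"} <= features

def is_striped(features):
    return "stripes" in features

def is_zebra(features):
    # "zebra" = "horse" & "stripes"
    return is_horse(features) and is_striped(features)

print(is_zebra({"four_legs", "mane", "hooves", "stripes"}))  # True, with no zebra ever observed
```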

Role of Connectionism Aid in establishing relations between symbols and icons Learn the "invariant features" for identification
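How could a network learn which features are invariant? As a rough sketch (my own; the paper does not commit to a specific architecture or training rule), a single-layer perceptron trained on labelled toy feature vectors picks out the features that reliably predict the category.

```python
import numpy as np

# Sketch of the connectionist role: learn, from labelled sensory data, which
# features are invariant for a category. A single-layer perceptron on toy
# binary feature vectors stands in for the network; real sensory projections
# would be far richer.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8)).astype(float)  # 8 binary "features"
y = (X[:, 0] * X[:, 3]).astype(float)                # category depends only on features 0 and 3

w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(50):                                  # simple perceptron-style updates
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

print(np.round(w, 2))  # the largest weights should land on the invariant features (0 and 3)
```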

Conclusion Bottom-up grounding is the only viable route A complete, purely symbolic system is impossible In the proposed system, symbol manipulation is constrained by both: the shape of the symbol itself, and the shape of the iconic/categorical representations grounding the symbol The scheme is in the spirit of behaviorism

Critique

Chinese Room Axioms and Conclusions[1] (A1) Programs are formal (syntactic). (A2) Minds have mental contents (semantics). (A3) Syntax by itself is neither constitutive of nor sufficient for semantics. (C1) Therefore, programs are neither constitutive of nor sufficient for minds. (A4) Brains cause minds. [1] Source: https://www.iep.utm.edu/chineser/ [2] Searle, John. "The Chinese Room."

Importance of the Problem Relevant to multiple disciplines: Cognitive Science, Philosophy, Computer Science, AI, Robotics Does AI understand symbols as we do? The problem still persists[1]; most approaches follow a brute-force grounding solution Consequences of actions: even though it was intended only as a cognitive modeling problem, it can be read from multiple perspectives [1] Lewis, Mike, et al. "Deal or no deal? End-to-end learning for negotiation dialogues." arXiv preprint arXiv:1706.05125 (2017).

Limitations of Solution Missing details on how the symbolic representations are generated A user study would be needed to verify the proposed cognitive theory

Research Directions 4,426 citations![2] Cognitive theory: psycholinguistics Artificial intelligence: question answering, interpretability of connectionist models Robotics: teaching language, self-driving cars - These are some of the research directions the paper has opened. [1] Image Credit: https://www.semanticscholar.org/paper/The-Symbol-Grounding-Problem-Harnad/06f8d5c7a15b44907ccb4c5397ab14e40ea0a478?navId=citing-papers [2] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=the+symbol+grounding+problem&btnG=&oq=the

References Harnad, Stevan. "The symbol grounding problem." Physica D: Nonlinear Phenomena 42.1-3 (1990): 335-346. Cangelosi, Angelo. "Solutions and open challenges for the symbol grounding problem." International Journal of Signs and Semiotic Systems (IJSSS) 1.1 (2011): 49-54. "The Chinese Room Experiment," The Hunt for AI, BBC (video). "The Chinese Room Argument," Internet Encyclopedia of Philosophy.

Discussion Clarifying questions about the content Is the Chinese Room experiment still relevant for today's AI?