The Neural Basis of Thought and Language, Week 11: Metaphor and Automatic Reasoning

Presentation transcript:

The Neural Basis of Thought and Language Week 11 Metaphor and Automatic Reasoning

Schedule
Assignment 7 extension, due Tuesday in class
Last week
– Aspect and Tense
– Event Structure Metaphor
This week
– Inference, KARMA: Knowledge-based Action Representations for Metaphor and Aspect
– The binding problem, automatic inference, and SHRUTI
Next week
– Grammar

Last few lectures
What topics do you want to go over in the last few lectures?

Questions
1. How are the source and target domains represented in KARMA?
2. How does the source domain information enter KARMA? How should it?
3. What does SHRUTI buy us?
4. How are bindings propagated in a structured connectionist framework?

KARMA
– DBN to represent target domain knowledge
– Metaphor maps to link the target and source domains
– X-schemas to represent source domain knowledge
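A hedged, top-level sketch (not KARMA's actual code) of how these three components fit together; the class and field names are purely illustrative, and the sketches after the later slides flesh out each piece.

```python
# Illustrative only: the three KARMA components named above, tied together.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class KarmaSketch:
    target_dbn: Any                 # DBN over target-domain variables (see the DBN sketch below)
    metaphor_maps: Dict[str, str]   # source-domain entities/events -> target-domain counterparts
    x_schemas: Any                  # executing schemas for source (embodied) domain knowledge
```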

DBNs
– Explicit causal relations + full joint table → Bayes nets
– Sequence of full joint states over time → HMM
– HMM + BN → DBNs
– DBNs are a generalization of HMMs that capture the sparse causal relationships of the full joint distribution
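A minimal sketch of the idea, not KARMA's implementation: a DBN factors the state at each time slice into several variables, each with a small (sparse) set of parents in the previous slice, instead of the single monolithic hidden state an HMM carries. The variable names, parent sets, and probabilities below are invented for illustration.

```python
# Toy 2-slice DBN: each variable depends on a few previous-slice variables.
PARENTS = {
    "Policy":  [],                      # chosen exogenously each slice (illustrative)
    "Economy": ["Economy", "Policy"],   # sparse dependence on the previous slice
    "Outcome": ["Economy"],
}

# CPTs: variable -> {tuple of previous-slice parent values -> {value: probability}}
CPTS = {
    "Policy": {(): {"liberalize": 0.5, "protect": 0.5}},
    "Economy": {
        ("growth", "liberalize"):    {"growth": 0.8, "recession": 0.2},
        ("growth", "protect"):       {"growth": 0.5, "recession": 0.5},
        ("recession", "liberalize"): {"growth": 0.4, "recession": 0.6},
        ("recession", "protect"):    {"growth": 0.2, "recession": 0.8},
    },
    "Outcome": {
        ("growth",):    {"success": 0.7, "failure": 0.3},
        ("recession",): {"success": 0.2, "failure": 0.8},
    },
}

def slice_transition_prob(prev, curr):
    """P(current slice | previous slice) as a product of local CPTs."""
    p = 1.0
    for var, parents in PARENTS.items():
        key = tuple(prev[pa] for pa in parents)
        p *= CPTS[var][key][curr[var]]
    return p

prev = {"Policy": "liberalize", "Economy": "recession", "Outcome": "failure"}
curr = {"Policy": "protect", "Economy": "growth", "Outcome": "success"}
print(slice_transition_prob(prev, curr))  # 0.5 * 0.4 * 0.2 = 0.04
```

The point of the factorization is that each local CPT is small, even though the full joint over all variables across time would be huge.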

Dynamic Bayes Nets

Metaphor Maps
– map entities and objects between the embodied and abstract domains
– invariantly map the aspect of the embodied-domain event onto the target domain by setting the evidence for the status variable based on controller state (Event Structure Metaphor)
– project x-schema parameters onto the target domain

Where does the domain knowledge come from?
Both domains are structured by frames.
Frames have:
– a list of roles (participants, frame elements)
– relations between roles
– scenario structure
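A hedged sketch of this frame structure as a data type; the field layout is illustrative, and the example uses the communication-frame roles that appear on a later slide (the relation and scenario stages shown are guesses for illustration only).

```python
# Illustrative frame structure: roles, relations between roles, scenario stages.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    name: str
    roles: List[str]                        # participants / frame elements
    relations: List[Tuple[str, str, str]]   # (role, relation, role)
    scenario: List[str]                     # ordered stages of the scenario

communication = Frame(
    name="Communication",
    roles=["speaker", "addressee", "action", "outcome", "degree-of-understanding"],
    relations=[("speaker", "addresses", "addressee")],   # hypothetical relation
    scenario=["speak", "hear", "understand"],            # hypothetical stages
)
```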

DBN for the target domain
Two time slices, T0 and T1, over the variables:
– Economic State: [recession, nogrowth, lowgrowth, higrowth]
– Policy: [Liberalization, Protectionism]
– Goal: [free trade, protection]
– Outcome: [success, failure]
– Difficulty: [present, absent]
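The same structure written out as variable/value-domain pairs recovered from the slide's figure; only the structure is shown, since the CPTs and the inter-slice arcs are not recoverable from the transcript.

```python
# Target-domain DBN variables and their value domains (each appears in both slices T0 and T1).
TARGET_DBN_VARIABLES = {
    "Economic State": ["recession", "nogrowth", "lowgrowth", "higrowth"],
    "Policy":         ["Liberalization", "Protectionism"],
    "Goal":           ["free trade", "protection"],
    "Outcome":        ["success", "failure"],
    "Difficulty":     ["present", "absent"],
}
```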

Let’s try a different domain
– I didn’t quite catch what he was saying.
– His slides are packed with information.
– He sent the audience a clear message.
9/11 Commission Public Hearing, Monday, March 31, 2003:
“When we can get a good flow of information from the streets of our cities across to, whether it is an investigating magistrate in France or an intelligence operative in the Middle East, and begin to assemble that kind of information and analyze it and repackage it and send it back out to users, whether it's a policeman on the beat or a judge in Italy or a Special Forces Team in Afghanistan, then we will be getting close to the kind of capability we need to deal with this kind of problem. That's going to take a couple, a few years.”

Figure:
– Target domain belief nets at T-1 and T (communication frame): speaker, addressee, action, outcome, degree of understanding
– Metaphor map (conduit metaphor): ideas are objects, words are containers, senders are speakers, receivers are addressees, send is talk, receive is hear
– Source domain f-structs (transfer), X-schema representation: sender, receiver, means, force, rate; transfer, send, receive, pack
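A hedged sketch of the conduit-metaphor map from this figure as a simple lookup from source-domain (object transfer) concepts to target-domain (communication) concepts; the dictionary form and the `project` helper are illustrative, not KARMA's data format.

```python
# Conduit metaphor map: source-domain concept -> target-domain counterpart.
CONDUIT_METAPHOR_MAP = {
    "objects":    "ideas",        # "ideas are objects"
    "containers": "words",        # "words are containers"
    "senders":    "speakers",
    "receivers":  "addressees",
    "send":       "talk",
    "receive":    "hear",
}

def project(source_bindings):
    """Project source-domain role fillers into the target domain."""
    return {role: CONDUIT_METAPHOR_MAP.get(filler, filler)
            for role, filler in source_bindings.items()}

print(project({"agent": "senders", "action": "send"}))
# {'agent': 'speakers', 'action': 'talk'}
```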

Questions
1. How are the source and target domains represented in KARMA?
2. How does the source domain information enter KARMA? How should it?
3. What does SHRUTI buy us?
4. How are bindings propagated in a structured connectionist framework?

How do the source domain f-structs get parameterized?
In the KARMA system, they are hand-coded (see the sketch after this slide).
In general, you need analysis of sentences:
– syntax
– semantics
Syntax captures:
– constraints on word order
– constituency (units of words)
– grammatical relations (e.g. subject, object)
– subcategorization & dependency (e.g. transitive, intransitive, subject-verb agreement)
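Since KARMA's source-domain f-structs are hand-coded, here is a hedged sketch of what one might look like, using the transfer-schema roles from the earlier figure (sender, receiver, means, force, rate); the specific fillers and parameter values are invented for illustration.

```python
# Hand-coded source-domain f-struct for a transfer event (illustrative values only).
transfer_fstruct = {
    "schema":   "transfer",
    "sender":   "sender-1",     # hypothetical source-domain entity
    "receiver": "receiver-1",
    "means":    "send",
    "force":    "high",         # hypothetical x-schema parameter
    "rate":     "fast",         # hypothetical x-schema parameter
}
```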

Questions
1. How are the source and target domains represented in KARMA?
2. How does the source domain information enter KARMA? How should it?
3. What does SHRUTI buy us?
4. How are bindings propagated in a structured connectionist framework?

SHRUTI
A connectionist model of reflexive processing.
Reflexive reasoning:
– automatic, extremely fast (~300 ms), ubiquitous
– computation of coherent explanations and predictions
– gradual learning of causal structure
– episodic memory
– understanding language
Reflective reasoning:
– conscious deliberation, slow
– overt consideration of alternatives
– external props (pencil + paper)
– solving logic puzzles
– doing cryptarithmetic
– planning a vacation

SHRUTI: synchronous activity without using a global clock
– An episode of reflexive processing is a transient propagation of rhythmic activity.
– An “entity” is a phase in the above rhythmic activity.
– Bindings are synchronous firings of role and entity cells.
– Rules are interconnection patterns mediated by coincidence-detector circuits that allow selective propagation of activity.
– Long-term memories are coincidence and coincidence-failure detector circuits.
– An affirmative answer / explanation corresponds to reverberatory activity around closed loops.

Focal cluster
– provides a locus of coordination, control and decision making
– enforces sequencing and concurrency
– initiates information-seeking actions
– initiates evaluation of conditions
– initiates conditional actions
– links to other schemas and knowledge structures

Questions
1. How are the source and target domains represented in KARMA?
2. How does the source domain information enter KARMA? How should it?
3. What does SHRUTI buy us?
4. How are bindings propagated in a structured connectionist framework?

Dynamic binding example
Asserting get(father, cup):
– father fires in phase with the agent role
– cup fires in phase with the patient role
(Figure: the get predicate cluster with +, -, ? collectors and agt/pat role nodes; the cup type cluster with +e, +v, ?e, ?v nodes; the my-father entity cluster.)
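A toy sketch of what "fires in phase with" means computationally, not SHRUTI's actual circuitry: each active entity occupies a distinct phase of the rhythmic cycle, and a role is bound to whichever entity fires in the same phase. The phase assignments and the helper name `bound_entity` are assumptions for illustration.

```python
# Temporal-synchrony binding for the assertion get(father, cup).
entity_phase = {"father": 0, "cup": 1}             # phase slot each entity fires in
role_phase   = {"get.agent": 0, "get.patient": 1}  # phase slot each role fires in

def bound_entity(role):
    """Return the entity whose firing phase coincides with the role's phase."""
    for entity, phase in entity_phase.items():
        if phase == role_phase[role]:
            return entity
    return None

print(bound_entity("get.agent"))    # father
print(bound_entity("get.patient"))  # cup
```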

Active Schemas in SHRUTI
– active schemas require control and coordination, dynamic role binding, and parameter setting
– schemas are interconnected networks of focal clusters
– bindings are encoded and propagated using temporal synchrony
– scalar parameters are encoded using rate encoding