Computing with Abstract Neurons


1 Computing with Abstract Neurons
McCulloch-Pitts neurons were initially used to model:
pattern classification: size = small AND shape = round AND color = green AND location = on_tree => unripe
linking classified patterns to behavior: size = large OR motion = approaching => move_away; size = small AND direction = above => move_above
McCulloch-Pitts neurons can compute logical functions: AND, NOT, OR.
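A minimal sketch of how such a threshold unit computes these functions (the weight and threshold values below are conventional choices, not given on the slide):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of binary
    inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND fires only when both inputs are active.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
# OR fires when either input is active.
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
# NOT uses a single inhibitory (negative) weight.
NOT = lambda a: mp_neuron([a], [-1], 0)

# The slide's classification rule as a conjunction of feature detectors:
# size=small AND shape=round AND color=green AND location=on_tree => unripe
def unripe(size, shape, color, location):
    features = [size == "small", shape == "round",
                color == "green", location == "on_tree"]
    return mp_neuron([int(f) for f in features], [1, 1, 1, 1], 4)
```

Disjunctive rules like size = large OR motion = approaching use the same unit with the threshold lowered to 1.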

2 Distributed vs Localist Rep’n
[Table: localist vs. distributed activation patterns over units for John, Paul, George, Ringo]
What are the drawbacks of each representation?

3 Distributed vs Localist Rep’n
[Table: localist vs. distributed activation patterns over units for John, Paul, George, Ringo]
Distributed: what happens if you want to represent a group? How many persons can you represent with n bits? 2^n.
Localist: what happens if one neuron dies? How many persons can you represent with n bits? n.
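The capacity contrast can be sketched directly; the encodings below are illustrative (one-hot for localist, a binary code for distributed), and the function names are made up:

```python
def localist(index, n):
    """One-hot code: n units can represent only n individuals."""
    return [1 if i == index else 0 for i in range(n)]

def distributed(index, n):
    """Binary code: the same n units can represent 2**n individuals."""
    return [(index >> i) & 1 for i in range(n)]

# With 4 units: 4 individuals localist, 16 distributed.
john = localist(0, 4)        # [1, 0, 0, 0]
person_5 = distributed(5, 4) # [1, 0, 1, 0]
```

The slide's trade-off falls out of this: losing one unit deletes exactly one localist individual but corrupts every distributed pattern that uses that bit, while localist codes make groups easy (activate several units at once).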

4 Sparse Distributed Representation

5 Natural Language Understanding
Natural Language Processing (NLP) is the overall category: search, machine translation, sentiment analysis, etc.
Natural Language Understanding (NLU) ~ action without human intervention: Google Search vs. Google Car.
Current mainstream approaches:
Templates: Siri, Cortana, Google, Alexa (next slide)
Machine learning
Natural language generation adds more complications.
Habitability problem
FCG – Luc Steels
Language Communication with Autonomous Systems (LCAS): focus on action; the constrained domain of an autonomous system yields tractability.

6 Amazon Alexa Skills ~ Templates
developer.amazon.com/alexa-skills-kit
GetHoroscope: what is the horoscope for {Sign}
GetHoroscope: what will the horoscope for {Sign} be on {Date}
GetHoroscope: get me my horoscope
MatchSign: do {FirstSign} and {SecondSign} get along
MatchSign: what is the relationship between {FirstSign} and {SecondSign}
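A toy illustration of how such slot-filling templates can be matched against an utterance (a sketch only; the Alexa service resolves intents server-side, and the helper names here are invented):

```python
import re

# Intent name paired with a sample-utterance template, as on the slide.
TEMPLATES = [
    ("GetHoroscope", "what is the horoscope for {Sign}"),
    ("MatchSign", "do {FirstSign} and {SecondSign} get along"),
]

def match(utterance):
    """Return (intent, slot bindings) for the first matching template."""
    for intent, template in TEMPLATES:
        # Turn each {Slot} into a named capture group.
        pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>\\w+)", template)
        m = re.fullmatch(pattern, utterance)
        if m:
            return intent, m.groupdict()
    return None, {}
```

This rigidity is one face of the habitability problem mentioned above: anything phrased outside the templates simply fails to match.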

7 Embodiment
Of all of these fields, the learning of languages would be the most impressive, since it is the most human of these activities. This field, however, seems to depend rather too much on the sense organs and locomotion to be feasible. — Alan Turing, Intelligent Machinery (1948)

8 Actionability in Integrated Cognitive Science
1. All living things act; acting is what living things do.
2. Natural selection constrains the fitness (utility) of these actions.
3. Actionability is an agent's assessment of the expected utility of an external or internal action.
4. Volition is the key concept; agents perform volitional as well as automatic actions.
5. This defines, but does not claim to solve, actionability as an integrating issue for Cognitive Science. Learning can improve actionability estimates.
6. No answers are suggested for hard mind-body problems like subjective agency.
7. Actionability calculation often involves simulation of action and its consequences.
Feldman JA (2016) Actionability and Simulation: No Representation without Communication. Front. Psychol. 7:1204. doi: 10.3389/fpsyg.2016.01204

9 Introduction: NTL
NTL's main tenets:
direct neural realization
continuity of thought and language
(both of which entail a commitment to parallel processing and spreading activation)
importance of language communities: conventional beliefs, grammars
simulation semantics: language understanding involves some of the brain circuitry involved in perception, motion, and emotion
best-fit process underlying learning, understanding, and production of language

10 Basic Questions Addressed
How could our brain, a mass of chemical cells, produce language and thought?
How much can we know about our own experience?
How do we learn new concepts?
Does our language determine how we think?
Is language innate?
How do children learn grammar?
Why make computational brain models of thought?
Will our robots understand us?
How did language evolve?
What is the nature of subjective experience?

11 Simulation-based language understanding
“Harry walked to the cafe.”
[Diagram: Utterance → Analysis Process (drawing on Constructions and General Knowledge) → Simulation Specification (Schema: walk; Trajector: Harry; Goal: cafe) → Simulation (Belief State; Cafe)]

12 Ideas from Cognitive Linguistics
Embodied Semantics (Lakoff, Johnson, Sweetser, Talmy)
Radial categories (Rosch 1973, 1978; Lakoff 1985): mother = birth / adoptive / surrogate / genetic, …
Profiling (Langacker 1989, 1991; cf. Fillmore XX): hypotenuse, buy/sell (Commercial Event frame)
Metaphor and metonymy (Lakoff & Johnson 1980, …): ARGUMENT IS WAR, MORE IS UP; “The ham sandwich wants his check.”
Mental spaces (Fauconnier 1994): “The girl with blue eyes in the painting really has green eyes.”
Conceptual blending (Fauconnier & Turner 2002, inter alia): workaholic, information highway, fake guns; “Does the name Pavlov ring a bell?” (from a talk on ‘dognition’!)

13 Image schemas
Trajector / Landmark (asymmetric): The bike is near the house. / ?The house is near the bike.
Boundary / Bounded Region: a bounded region has a closed boundary.
Topological Relations: Separation, Contact, Overlap, Inclusion, Surround.
Orientation: Vertical (up/down); Horizontal (left/right, front/back); Absolute (E, S, W, N).
[Diagram: TR and LM; a boundary enclosing a bounded region]
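The topological relations listed above can be illustrated in one dimension by classifying pairs of closed intervals (a toy sketch; real image schemas are multidimensional and much richer):

```python
def relation(a, b):
    """Classify closed intervals a = (a0, a1) and b = (b0, b1)."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0 or b1 < a0:
        return "separation"   # no shared points
    if a1 == b0 or b1 == a0:
        return "contact"      # touch at a single point
    if b0 <= a0 and a1 <= b1:
        return "inclusion"    # a inside b
    if a0 <= b0 and b1 <= a1:
        return "surround"     # a surrounds b
    return "overlap"          # partial overlap
```

Note the asymmetry built in: inclusion and surround are the same configuration viewed from the trajector's side or the landmark's side.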

14 Schema Formalism
SCHEMA <name>
  SUBCASE OF <schema>
  EVOKES <schema> AS <local name>
  ROLES
    <self role name>: <role restriction>
    <self role name> <-> <role name>
  CONSTRAINTS
    <role name> <- <value>
    <role name> <-> <role name>
    <setting name> :: <role name> <-> <role name>
    <setting name> :: <predicate> | <predicate>

15 A Simple Example
SCHEMA hypotenuse
  SUBCASE OF line-segment
  EVOKES right-triangle AS rt
  ROLES
    (inherited from line-segment)
  CONSTRAINTS
    SELF <-> rt.long-side

16 Source-Path-Goal
SCHEMA spg
  ROLES
    source: Place
    path: Directed Curve
    goal: Place
    trajector: Entity

17 Translational Motion
SCHEMA translational motion
  SUBCASE OF motion
  EVOKES spg AS s
  ROLES
    mover <-> s.trajector
    source <-> s.source
    goal <-> s.goal
  CONSTRAINTS
    before :: mover.location <-> source
    after :: mover.location <-> goal
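One way to make the role bindings concrete is a hypothetical Python encoding of the spg and translational-motion schemas (the real formalism uses unification-based binding, not attribute assignment; the `bind` and `location` helpers are illustrative inventions):

```python
from dataclasses import dataclass, field

@dataclass
class SPG:
    """Source-Path-Goal schema roles."""
    source: str = None     # Place
    path: str = None       # Directed Curve
    goal: str = None       # Place
    trajector: str = None  # Entity

@dataclass
class TranslationalMotion:
    s: SPG = field(default_factory=SPG)  # EVOKES spg AS s
    mover: str = None

    def bind(self, mover, source, goal):
        # ROLES: mover <-> s.trajector, source <-> s.source, goal <-> s.goal
        self.mover = self.s.trajector = mover
        self.s.source, self.s.goal = source, goal

    def location(self, phase):
        # CONSTRAINTS: before:: mover.location <-> source
        #              after::  mover.location <-> goal
        return self.s.source if phase == "before" else self.s.goal

walk = TranslationalMotion()
walk.bind("Harry", "home", "cafe")
```

Binding once through `s.trajector` is the point of the `<->` notation: the mover of the motion schema and the trajector of the evoked spg are the same entity, not two copies.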

18 Simulation-based language understanding
“Harry walked to the cafe.”
[Diagram: Utterance → Analysis Process (drawing on Constructions and General Knowledge) → Simulation Specification (Schema: walk; Trajector: Harry; Goal: cafe) → Simulation (Belief State; Cafe)]

19 Simulation specification
It should be clear that the simulation specification includes exactly the schematic content of the different elements of the sentence, bound appropriately. As noted earlier, the two representations differ with respect to which image schemas are involved – as reflected by the additional CONTAINER schema in Figure 5b – and in the precise bindings of aspects of the cafe to the SPG schema. Like the image schema representations, the simulation specifications can be viewed as a summary of the much more complex structures that are active when an event is simulated or imagined. Activating these structures – that is, “running” the simulation – can thus provide the much richer basis for inference necessary for accounting for many linguistic phenomena.
The analysis process produces a simulation specification that:
includes image-schematic, motor control, and conceptual structures
provides parameters for a mental simulation

Simulation Semantics
BASIC ASSUMPTION: THE SAME REPRESENTATION SERVES PLANNING AND SIMULATIVE INFERENCE.
Evidence for common mechanisms for recognition and action (mirror neurons) in the F5 area (Rizzolatti et al. 1996; Gallese 1996; Buccino 2002) and from motor imagery (Jeannerod 1996).
IMPLEMENTATION: x-schemas affect each other by enabling, disabling, or modifying execution trajectories. Whenever the CONTROLLER schema makes a transition it may set, get, or modify state, leading to the triggering or modification of other x-schemas. State is completely distributed (a graph marking) over the network.
RESULT: INTERPRETATION IS IMAGINATIVE SIMULATION!

21 Active representations
Many inferences about actions derive from what we know about executing them.
A representation based on stochastic Petri nets captures the dynamic, parameterized nature of actions.
Used for acting, recognition, planning, and language.
Walking: bound to a specific walker, with a direction or goal; consumes resources (e.g., energy); may have a termination condition (e.g., walker at goal); an ongoing, iterative action.
[Diagram: walk x-schema with walker=Harry, goal=home, an energy resource, and a walker-at-goal condition]
Certain words are closely associated with biological phenomena. Action/event representation has in common with motor control the need to refer to process states and transitions, resource consumption/production, and parameters. Part of learning/understanding words like “push” or “walk” clearly involves grounded knowledge about how to perform the action, as well as quite complex, concrete inferences based on execution. Note that these representations might be parameterized: “shove”, “walk slowly”, “walk home”. This model of action is accurate cross-linguistically, even if some specific conditions on word meaning vary from language to language. This may seem complex, but in fact very early on children seem to have no problem performing and understanding words like this – and more complicated ones too!
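The walk example can be caricatured with a tiny Petri-net interpreter (an illustrative sketch, not the stochastic nets of the actual model; all place and transition names are invented):

```python
class PetriNet:
    """Places hold tokens (the marking); a transition is enabled when every
    input place holds a token, and firing consumes inputs and produces outputs."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not all(self.marking.get(p, 0) > 0 for p in inputs):
            return False              # not enabled
        for p in inputs:
            self.marking[p] -= 1      # consume a resource (e.g., energy)
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# "walk" as an ongoing, iterative step that consumes energy,
# with a termination transition (walker arrives at goal).
walk = PetriNet({"at_start": 1, "energy": 3})
walk.add_transition("step", ["at_start", "energy"], ["at_start"])
walk.add_transition("arrive", ["at_start", "energy"], ["at_goal"])
```

In the real model the net would additionally be parameterized (walker=Harry, goal=home) and bound into a specific simulation run, but resource consumption and termination conditions already fall out of the marking.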

22 Learning Verb Meanings David Bailey
A model of children learning their first verbs. Assumes parent labels child’s actions. Child knows parameters of action, associates with word Program learns well enough to: 1) Label novel actions correctly 2) Obey commands using new words (simulation) System works across languages Mechanisms are neurally plausible.

23 System Overview

24 Learning Two Senses of PUSH
Model merging based on Bayesian MDL

25 Event Structure in Language Srini Narayanan
Fine-grained, rich notion of contingency relationships. Phenomena: aspect, tense, force-dynamics, uncertainty, modals, counterfactuals.
Event Structure Metaphor. Phenomena: abstract actions are conceptualized as motion and manipulation; schematic inferences are preserved.
Aspect: the ways languages describe the structure of events, using a variety of lexical and grammatical devices.

26 Task: Interpret simple discourse fragments/ blurbs
France fell into recession. Pulled out by Germany.
US Economy on the verge of falling back into recession after moving forward on an anemic recovery.
Indian Government stumbling in implementing Liberalization plan.
Moving forward on all fronts, we are going to be ongoing and relentless as we tighten the net of justice.
The Government is taking bold new steps. We are loosening the stranglehold on business, slashing tariffs and removing obstacles to international trade.

27 Event Structure Metaphor
States are Locations
Changes are Movements
Causes are Forces
Causation is Forced Movement
Actions are Self-propelled Movements
Purposes are Destinations
Means are Paths
Difficulties are Impediments to Motion
External Events are Large, Moving Objects
Long-term, Purposeful Activities are Journeys

28

29 Results
The model was implemented and tested on discourse fragments from a database of 50 newspaper stories in international economics from standard sources such as the WSJ, NYT, and the Economist. Results show that motion terms are often the most effective way to convey the following types of information about abstract plans and actions:
Information about uncertain events and dynamic changes in goals and resources (sluggish, fall, off-track, no steam).
Information about evaluations of policies and economic actors, and communicative intent (stranglehold, bleed).
Communicating complex, context-sensitive, and dynamic economic scenarios (stumble, slide, slippery slope).
Communicating complex event structure and aspectual information (on the verge of, sidestep, giant leap, small steps, ready, set out, back on track).
ALL THESE BINDINGS RESULT FROM REFLEX, AUTOMATIC INFERENCES PROVIDED BY X-SCHEMA BASED INFERENCES.

30

