The Binding Problem
– Massively Parallel Brain
– Unitary Conscious Experience
– Many Variations and Proposals
– Our focus: The Variable Binding Problem

Problem
The binding problem:
– In vision: you do not exchange the colors of the shapes below
– In behavior: the grasp motion depends on the object to be grasped
– In inference: Human(x) -> Mortal(x) requires binding a value to the variable x
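
The inference case is worth making concrete. Below is a minimal, purely illustrative sketch (the predicate and entity names are assumptions, not from the slides) of the binding step that any neural implementation must somehow perform:

```python
# Hypothetical illustration of the binding step in rule application.
# The rule Human(x) -> Mortal(x) licenses a conclusion only once the
# variable x is bound to a concrete entity.

def apply_rule(rule, fact):
    """Return the conclusion licensed by binding the rule's variable."""
    antecedent, consequent = rule        # e.g. ('Human', 'Mortal')
    predicate, entity = fact             # e.g. ('Human', 'Socrates')
    if predicate != antecedent:
        return None
    binding = {'x': entity}              # the variable-binding step
    return (consequent, binding['x'])    # Mortal(Socrates)

print(apply_rule(('Human', 'Mortal'), ('Human', 'Socrates')))
# -> ('Mortal', 'Socrates')
```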

Automatic Inference
Inference is needed for many tasks:
– Reference resolution
– General language understanding
– Planning
Humans do this quickly and without conscious thought:
– Automatically
– With no real intuition of how we do it

Other Solutions in Inference
– Brute-force enumeration: does not scale to the depth of human knowledge
– Signature propagation (direct reference): difficult to pass enough information to directly reference each object; unifying two bindings (e.g., in reference resolution) is difficult
– Temporal synchrony (the SHRUTI approach): little biological evidence

SHRUTI
– SHRUTI performs inference via connections between simple computational nodes
– Nodes are small groups of neurons
– Nodes firing in synchrony reference the same object
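
As a rough sketch of the synchrony idea (a toy abstraction, not Shastri's actual network; the phase values in milliseconds are invented): each entity owns a firing phase within a cycle, and a role node references whichever entity fires in its phase.

```python
# Toy abstraction of temporal-synchrony binding: each entity owns a
# firing phase within a gamma cycle; a role is bound to an entity by
# firing in that entity's phase. Phase offsets (ms) are illustrative.

entity_phase = {'John': 0, 'Hallway': 6}      # phase offsets within a cycle
role_phase = {'fall-pat': 0, 'fall-loc': 6}   # each role fires in some phase

def filler(role):
    """A role references whichever entity fires in the same phase."""
    for entity, phase in entity_phase.items():
        if phase == role_phase[role]:
            return entity

print(filler('fall-pat'), filler('fall-loc'))   # John Hallway
```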

A Neurally Plausible Model of Reasoning
Lokendra Shastri
International Computer Science Institute
Berkeley, CA 94704

Five Levels of the Neural Theory of Language
– Cognition and Language
– Computation
– Structured Connectionism (the level of abstraction at which SHRUTI sits)
– Computational Neurobiology
– Biology

“John fell in the hallway. Tom had cleaned it. He got hurt.”
– it → the hallway: Tom had cleaned the hallway.
– The hallway floor was wet.
– John slipped and fell on the wet floor.
– He → John: John got hurt as a result of the fall.
Such inferences establish referential and causal coherence.

Reflexive Reasoning
– Ubiquitous
– Automatic, effortless
– Extremely fast: almost a reflex response of our cognitive apparatus

Reflexive Reasoning
Not all reasoning is reflexive. Contrast with reflective reasoning, which:
– is deliberate
– involves explicit consideration of alternatives
– may require props (paper and pencil)
– e.g., solving logic puzzles or differential equations

How fast is reflexive reasoning?
– We understand language at a rate of several hundred words per minute
– The reflexive inferences required for establishing referential and causal coherence are drawn within a few hundred milliseconds

How can a system of slow and simple neuron-like elements encode a large body of semantic and episodic knowledge, and yet perform a wide range of inferences within a few hundred milliseconds?

Characterization of reflexive reasoning
What can and cannot be inferred via reflexive processes?

Shruti
– Lokendra Shastri
– V. Ajjanagadde (Penn, ex-graduate student)
– Carter Wendelken (UCB, ex-graduate student)
– D. Mani (Penn, ex-graduate student)
– D. J. Grannes (UCB, ex-graduate student)
– Jerry Hobbs, USC/ISI (abductive reasoning)
– Marvin Cohen, CTI (metacognition; belief and utility)
– Bryan Thompson, CTI (metacognition; belief and utility)

Reflexive Reasoning: Representational and Processing Issues
– Activation-based (dynamic) representation of events and situations (relational instances)

Dynamic representation of relational instances
“John gave Mary a book”
– giver: John
– recipient: Mary
– given-object: a-book
[Diagram: the give focal-cluster with role nodes giver, recipient, and given-object bound to John, Mary, and a-book]
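
Written symbolically, just for orientation (in Shruti these bindings are transient synchronous activity, not a stored structure like this), the dynamic bindings amount to:

```python
# The dynamic bindings for "John gave Mary a book" as role-filler pairs.
# In Shruti this is a pattern of activity, not a persistent record.
give_instance = {
    'giver': 'John',
    'recipient': 'Mary',
    'given-object': 'a-book',
}
```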

Reflexive Reasoning
Requires compatible neural mechanisms for:
– Expressing dynamic bindings
– Systematically propagating dynamic bindings
– Computing coherent explanations and predictions
  – evidence combination
  – instantiation and unification of entities
All of the above must happen rapidly.

Learning
– One-shot learning of events and situations (episodic memory)
– Gradual/incremental learning of concepts, relations, schemas, and causal structures

Relation focal-cluster
[Diagram: the FALL focal-cluster, with collector nodes (+, -), an enabler node (?), and role nodes fall-pat and fall-loc]

Entity, category and relation focal-clusters
[Diagram: focal-clusters for an entity, a category, and the FALL relation (+, -, ?, fall-pat, fall-loc)]
Functional nodes in a focal-cluster [collector (+/-), enabler (?), and role nodes] may be situated in different brain regions.

Focal-cluster of a relational schema
[Diagram: the FALL focal-cluster (+, -, ?, fall-pat, fall-loc) linked to:
– focal-clusters of motor schemas associated with fall
– focal-clusters of lexical knowledge associated with fall
– focal-clusters of perceptual schemas and sensory representations associated with fall
– focal-clusters of other relational schemas causally related to fall
– episodic memories of fall events]

Focal-clusters
Nodes in the fall focal-cluster become active when:
– perceiving a fall event
– remembering a fall event
– understanding a sentence about a fall event
– experiencing a fall event
A focal-cluster is like a “supra-mirror” cluster.

Focal-cluster of an entity
[Diagram: the John focal-cluster (+, ?) linked to:
– focal-clusters of motor schemas associated with John
– focal-clusters of lexical knowledge associated with John
– focal-clusters of perceptual schemas and sensory representations associated with John
– focal-clusters of other entities and categories semantically related to John
– episodic memories where John is one of the role-fillers]

“John fell in the hallway”
[Diagram: the Fall focal-cluster (+, -, ?, fall-pat, fall-loc) together with the John and Hallway entity clusters (+, ?)]

“John fell in the hallway”
[Diagram: as above, with the activation trace of +:Fall, +:John, +:Hallway and the role nodes fall-pat and fall-loc; John fires in phase with fall-pat, Hallway in phase with fall-loc]

Encoding “slip => fall” in Shruti
[Diagram: the SLIP focal-cluster (+, -, ?, slip-pat, slip-loc) connected through a mediator (+, ?, r1, r2) to the FALL focal-cluster (+, -, ?, fall-pat, fall-loc)]
Such rules are learned gradually via observations, by being told, etc.

“John slipped in the hallway” → “John fell in the hallway”
[Diagram: activation propagates from the Slip cluster through the mediator (r1, r2) to the Fall cluster; the trace shows +:slip, +:John (slip-pat), and +:Hallway (slip-loc), followed by +:Fall with the same phases on fall-pat and fall-loc]
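
A schematic way to see what the mediator does (an assumed simplification; the real circuit also gates propagation on enabler and collector activity): the mediator's role links copy each antecedent binding to the corresponding consequent role.

```python
# Simplified view of binding propagation through the slip => fall
# mediator: each antecedent role's binding is forwarded, via the
# mediator links r1 and r2, to the corresponding consequent role.

ROLE_MAP = {'slip-pat': 'fall-pat', 'slip-loc': 'fall-loc'}

def propagate(slip_bindings):
    """slip(x, y) => fall(x, y): forward the dynamic bindings."""
    return {ROLE_MAP[r]: filler for r, filler in slip_bindings.items()}

print(propagate({'slip-pat': 'John', 'slip-loc': 'Hallway'}))
# -> {'fall-pat': 'John', 'fall-loc': 'Hallway'}
```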

A Metaphor for Reasoning
– An episode of reflexive reasoning is a transient propagation of rhythmic activity
– Each entity involved in this reasoning episode is a phase in this rhythmic activity
– Bindings are synchronous firings of cell clusters
– Rules are interconnections between cell-clusters that support propagation of synchronous activity

Focal-clusters with intra-cluster links
[Diagram: the John (+, ?), FALL (+, -, ?, fall-pat, fall-loc), and Person (+e, +v, ?v, ?e) focal-clusters with their intra-cluster links]
Shruti always seeks explanations.

Linking focal-clusters of types and entities
[Diagram: entity clusters John and Hallway (+, ?) linked to type clusters Person, Man, Agent, and Location (+e, +v, ?v, ?e)]

Focal-clusters and context-sensitive priors (T-facts)
[Diagram: context-sensitive priors, realized in cortical circuits, linking the John, Person, and FALL focal-clusters to entities and types]

Focal-clusters and episodic memories (E-facts)
[Diagram: episodic memories, realized in hippocampal circuits, linked to the John, Person, and FALL focal-clusters, with connections to and from role-fillers]

Explaining away in Shruti
[Diagram: SLIP and TRIP each connected through their own mediator to FALL, as competing explanations of a fall]

Other features of Shruti
– Mutual inhibition between collectors of incompatible entities
– Merging of phases (unification)
– Instantiation of new entities
– Structured priming

Unification in Shruti: merging of phases
The activity in the focal-clusters of two entity or relational instances will synchronize if there is evidence that the two instances are the same.
– R1: Is there an entity A of type T filling role r in situation P? (Did a man fall in the hallway?)
– R2: Entity B of type T is filling role r in situation P. (John fell in the hallway.)
In such a situation, the firing of A and B will synchronize. Consequently, A and B will unify, and so will the relational instances involving A and B.
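
In phase terms (again a toy rendering with invented phase values), unification is just relabeling: once A and B are driven into the same phase, every binding that referenced either of them now references a single entity.

```python
# Toy rendering of unification as phase merging: 'a-man' (posited by a
# question) and 'John' (asserted) are driven into the same phase, so all
# bindings involving either one collapse onto a single entity.

phases = {'a-man': 2, 'John': 5}    # distinct phases: distinct entities

def unify(phases, keep, merge):
    """Drive `merge` into the phase of `keep`."""
    out = dict(phases)
    out[merge] = out[keep]
    return out

print(unify(phases, 'John', 'a-man'))   # {'a-man': 5, 'John': 5}
```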

Entity instantiation in Shruti
If Shruti encodes the rule-like knowledge
∀x:Agent ∃y:Location fall(x, y) => hurt(x),
then, in response to the dynamic instantiation of hurt(John), it automatically posits the existence of a location where John fell.

Encoding “fall => hurt” in Shruti
[Diagram: the FALL cluster (+, -, ?, fall-pat, fall-loc) connected through a mediator (+, ?, r1, r2) to the HURT cluster (+, -, ?, hurt-pat); the mediator enforces type (semantic) restrictions and performs role-filler instantiation]

The activation trace of +:slip and +:trip
“John fell in the hallway. Tom had cleaned it. He got hurt.”
[Plot: activation traces over sentence segments s1, s2, and s3]

A Metaphor for Reasoning
– An episode of reflexive reasoning is a transient propagation of rhythmic activity
– Each entity involved in this reasoning episode is a phase in this rhythmic activity
– Bindings are synchronous firings of cell clusters
– Rules are interconnections between cell-clusters that support context-sensitive propagation of activity
– Unification corresponds to merging of phases
– A stable inference (explanation/answer) corresponds to reverberatory activity around closed loops

Support for Shruti
– Neurophysiological evidence: transient synchronization of cell firing might encode dynamic bindings
– Makes plausible predictions about working-memory limitations
– Speed of inference satisfies the performance requirements of language understanding
– Representational assumptions are compatible with a biologically realistic model of episodic memory

Neurophysiological evidence for synchrony
– Synchronous activity is found in the anesthetized cat as well as in the anesthetized and awake monkey
– Spatially distributed cells exhibit synchronous activity if they represent information about the same object
– Synchronous activity occurs in the gamma band (25–60 Hz), i.e., a maximum period of about 40 msec
– The frequency drifts by 5–10 Hz, but synchronization stays stable for hundreds of msec
– In humans, EEG and MEG signals exhibit power-spectrum shifts consistent with the synchronization of cell ensembles: orienting or investigatory behavior; delayed-match-to-sample tasks; visuo-spatial working-memory tasks

Predictions: constraints on reflexive inference
– Gamma-band activity (25–60 Hz) underlies dynamic bindings (maximum period ~40 msec)
– Allowable jitter in synchronous firing: about 3 msec lead/lag, i.e., a window of roughly 6 msec per phase
– Hence only a small number of distinct conceptual entities can participate in an episode of reasoning: about 7 +/- 2 (40 divided by 6)
– As the number of entities increases beyond five, their activity starts overlapping, leading to cross-talk
Note: this is not a limit on the number of co-active bindings!

Predictions: constraints on reflexive reasoning
1. A large number of relational instances (facts) can be co-active, and numerous rules can fire in parallel, but
2. only a small number of distinct entities can serve as role-fillers in this activity,
3. only a small number of instances of the same predicate can be co-active at the same time, and
4. the depth of inference is bounded: systematic reasoning via binding propagation degrades to a mere spreading of activation beyond a certain depth.
Points 2 and 3 specify limits on Shruti's working memory.

Massively Parallel Inference
– If gamma-band activity underlies the propagation of bindings, each binding-propagation step takes ca. 25 msec
– Inferring “John may be hurt” and “John may have slipped” from “John fell” would take only ca. 200 msec
– The time required to perform an inference is independent of the size of the causal model
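
The arithmetic behind the claim, under the slide's assumptions (one propagation step per gamma cycle; the eight-step count is an illustrative reading of the slip/fall/hurt chains, not a figure from the slides):

```python
# Back-of-envelope timing: at ~25 ms per binding-propagation step,
# a handful of steps of forward and backward chaining stays within
# the few-hundred-millisecond budget of reflexive reasoning.
STEP_MS = 25          # ca. one gamma cycle at 40 Hz
steps = 8             # assumed step count for the fall -> hurt/slip example
print(steps * STEP_MS, "ms")   # 200 ms, independent of causal-model size
```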

Probabilistic interpretation of link weights
P(C|E) = P(E|C) P(C) / P(E)
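
For instance (the probability values below are invented purely to show the reading of the weights), a backward link from FALL to SLIP can carry P(slip | fall) computed via Bayes' rule:

```python
# Bayes' rule applied to the slip/fall example; the probabilities are
# made-up values, only the form of the computation matters.
p_fall_given_slip = 0.8   # P(E|C)
p_slip = 0.05             # P(C)
p_fall = 0.10             # P(E)

p_slip_given_fall = p_fall_given_slip * p_slip / p_fall   # P(C|E)
print(round(p_slip_given_fall, 2))   # 0.4
```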

Encoding X-schema

Proposed Alternative Solution
Indirect references:
– Pass short signatures, “fluents”, functionally similar to SHRUTI's time slices
– A central “binder” maps fluents to objects (in SHRUTI, this role is played by the objects that fired in a given time slice)
– Connections need to be more complicated than in SHRUTI: fluents of at least 3 bits are passed along them, but temporal synchrony is not required
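
A minimal sketch of the fluent scheme (the bit patterns and object names are invented): fluents are short signatures passed along connections, and only the central binder knows which object each signature stands for.

```python
# Sketch of indirect reference via fluents: a fluent is a short bit
# pattern (the slides say at least 3 bits); a central binder maps each
# pattern to an object, playing the role that a time slice plays in
# SHRUTI.
binder = {0b001: 'John', 0b010: 'Hallway'}   # fluent pattern -> object

def resolve(fluent):
    """The binder gives a fluent pattern its meaning."""
    return binder[fluent]

print(resolve(0b001), resolve(0b010))   # John Hallway
```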

Components of the System
– Object references: fluents, binder
– Short-term storage: predicate state
– Long-term storage: facts, mediators, which predicates exist
– Inference: mediators
– Types: ontology

Fluents
– Roles are just patterns of activation (3–4 bits)

Binder
– What does the pattern mean? The binder gives fluent patterns meaning

Predicates
– Represent short-term beliefs about the world
– The basic unit of inference
[Diagram: a predicate cluster with a negative collector, a positive collector, a question node, and role nodes that hold fluents]
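
As a data-structure sketch (the field names follow the slide's diagram; the float/bool representation is an assumption):

```python
# A predicate cluster per the slide: positive and negative collectors,
# a question node, and role nodes that hold fluent patterns.
from dataclasses import dataclass, field

@dataclass
class PredicateCluster:
    name: str
    positive: float = 0.0     # positive collector: evidence for
    negative: float = 0.0     # negative collector: evidence against
    question: bool = False    # question node: is this instance queried?
    roles: dict = field(default_factory=dict)   # role name -> fluent

fall = PredicateCluster('Fall', roles={'fall-pat': 0b001, 'fall-loc': 0b010})
print(fall)
```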

Facts
– Support or refute belief in a specific set of bindings of a given predicate

Inference
Connections between predicates form evidential links:
– Big(x) & CanBite(x) => Scary(x)
– Poisonous(x) & CanBite(x) => Scary(x)
– The strength of the connections and the shape of the neuron response curve determine exactly what “evidence” means
Direct connections won't work:
– Consider Big(f1) & Poisonous(f1): neither rule applies, yet direct connections would sum the evidence
– We want to “OR” over a number of “AND”s

Solution: Mediators
– Multiple antecedents
– Role consistency
[Diagram: a mediator cluster between the antecedent predicates and the consequent predicate]
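
A sketch of the OR-over-ANDs structure the mediators provide (the beliefs and rules below are toy data): each mediator ANDs its antecedents for a consistent fluent, and the consequent ORs over mediators, so Big(f1) & Poisonous(f1) alone does not make f1 scary.

```python
# Each mediator ANDs its antecedent predicates (checked against the same
# fluent, i.e., role consistency); the consequent ORs over mediators.
beliefs = {('Big', 0b001): True,
           ('CanBite', 0b001): True,
           ('Poisonous', 0b001): False}

MEDIATORS = [('Big', 'CanBite'),        # Big(x) & CanBite(x) => Scary(x)
             ('Poisonous', 'CanBite')]  # Poisonous(x) & CanBite(x) => Scary(x)

def scary(fluent):
    return any(all(beliefs.get((p, fluent), False) for p in rule)
               for rule in MEDIATORS)

print(scary(0b001))   # True: only via the Big & CanBite mediator
```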

Mediators (continued)

Multiple Assertions
As described so far, the system cannot simultaneously represent Big(f1) and Big(f2).
Solution:
– Multiple instances of each predicate
– This requires more complex connections: signals must pass only between clusters with matching fluents, and questions must requisition an appropriate number of clusters

Multiple Assertions (detail)
– Connections between predicates and their evidence mediators are easy: 1-to-1
– Evidential connections between mediators and their evidence predicates are easy: just connect the + and - nodes, gated on matching fluents
– Questions going between mediators and evidence predicates are hard: add a selection network to deal with one question at a time

Limitations
– The size of the network is linear in the size of the knowledge base
– Short-term knowledge is limited by the number of fluents
– The depth of inference is limited in time
– The number of simultaneous identical assertions is limited
– Inference works entirely correctly only with ground instances (e.g., “Fido” and not “dog”)

Questions

Representing belief and utility in Shruti
– Associate utilities with states of affairs (relational instances)
– Encode utility facts: context-sensitive memories of utilities associated with certain events or event-types
– Propagate utility along causal structures
– Encode actions and their consequences

Encoding “Fall => Hurt”
[Diagram: the Fall cluster (+, $, -, ?, with roles patient and location) connected through a mediator (+, $, ?, r1, r2) to the Hurt cluster (+, $, -, ?)]

Focal-clusters augmented to encode belief and utility
[Diagram: the Attack cluster with nodes +, p, n, -, ?, roles target and location, and a UF node linked to and from roles and role-fillers]
UF: utility fact; either a specific reward fact (R-fact) or a generic value fact (V-fact)

Behavior of augmented Shruti
Shruti reflexively:
– makes observations
– seeks explanations
– makes predictions
– instantiates goals
– seeks plans that enhance expected future utility: it identifies actions that are likely to lead to desirable situations and to prevent undesirable ones

Shruti suggests how different sorts of knowledge may be encoded within neurally plausible networks:
– Entities, types, and their relationships (John is a Man)
– Relational schemas/frames corresponding to action and event types (falling, giving, …)
– Causal relations between relational schemas (if you fall, you can get hurt)
– Taxon/semantic facts (children often fall)
– Episodic facts (John fell in the hallway on Monday)
– Utility facts (it is bad to be hurt)

Current status of learning in Shruti
Episodic facts: a biologically grounded model of “one-shot” episodic memory formation
– Shastri, 1997. Proceedings of CogSci 1997.
– Shastri, 2001. Neurocomputing.
– Shastri, 2002. Trends in Cognitive Sciences.
– Shastri, in revision. Behavioral and Brain Sciences (available as a technical report).

…current status of learning in Shruti
Work in progress:
– Causal rules
– Categories
– Relational schemas
Shastri and Wendelken, 2003. Neurocomputing.

Questions