The Matrix Theory of Objects and the Automatic Generation of Intelligent Agents
Sergio Pissanetzky, 2009

What is an object, anyway?

Our minds make objects all the time. We think, communicate, even dream in terms of objects. An object is anything that has information (attributes), can use it to do something (behavior), and can share it with other objects (interaction).

Think of a TV screen for a second. It has dots of light; nothing moves there. But we see objects: faces, buildings, cars, things that move. They are not on the screen. We make them; our minds make them. This is possibly the most widely observed experimental phenomenon in nature, yet there is no mathematical theory to explain it.

An object is a mathematical abstraction, like "set" or "group": anything that satisfies the definition will also satisfy the theory. Objects are made of other objects. They organize information by giving it a structure, and they reduce complexity. A system is made of objects. Objects are the fundamental structural element of information, and there can be no AI without a mastery of objects.

But ... where do they come from?

It is easier to explain this if you think of a computer program. The program works and does something, so it has a behavior. But it also has a structure, and it can have many different structures and still work the same. So I can change the structure without changing the behavior. That is called a refactoring transformation. Any system can be refactored, not just a program. A minimal illustration follows.
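As a small illustration (mine, not from the talk), here are two Python functions with different structures and identical behavior; going from one to the other is a refactoring transformation.

```python
# Two versions of the same computation: different structure,
# identical behavior. Transforming one into the other is a
# refactoring transformation.
def total_v1(qty, unit_price, tax_rate):
    return qty * unit_price + qty * unit_price * tax_rate

def total_v2(qty, unit_price, tax_rate):
    subtotal = qty * unit_price      # intermediate variable extracted
    return subtotal * (1 + tax_rate)

assert total_v1(3, 8.0, 0.25) == total_v2(3, 8.0, 0.25)  # same behavior
```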

The Matrix Theory of Objects establishes that:

● Any finite system can be perfectly represented by a matrix of services in canonical form.
● Under a deterministic, dissipative and chaotic dynamics, the structure of the system evolves toward its stable attractors.
● The attractors are partitioned into submatrices that remain invariant under the dynamics.
● The attractors represent the structure of the system, and also the equivalent logical circuit for the system.
● The submatrices represent the objects present in the system, and their relationships represent the system's ontology.

In other words: any system can be described as a canonical matrix. The matrix is refactored by a chaotic dynamic process, which causes the structure to evolve and converge to its stable attractors. Many stable submatrices form in the attractors; they move around and interact with each other but do not change. These submatrices are the objects. The canonical matrix also represents the equivalent logical circuit of the system, as well as the system's ontology.
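To make the first claim concrete, here is a minimal sketch (my own, with hypothetical services and variables) of a matrix of services: rows are services, columns are variables, with 'A' marking a variable the service assigns and 'C' one it consumes. Canonical form means every variable is assigned before any service consumes it.

```python
# A toy matrix of services: 'A' = the service assigns (initializes)
# the variable, 'C' = the service consumes (uses) it.
matrix = {
    "S1": {"x": "A"},
    "S2": {"x": "C", "y": "A"},
    "S3": {"y": "C", "z": "A"},
}

def is_canonical(order, matrix):
    """True if, in this row order, every variable is assigned
    before any service consumes it."""
    assigned = set()
    for service in order:
        row = matrix[service]
        if any(mark == "C" and var not in assigned
               for var, mark in row.items()):
            return False
        assigned.update(var for var, mark in row.items() if mark == "A")
    return True

print(is_canonical(["S1", "S2", "S3"], matrix))  # True: canonical order
print(is_canonical(["S2", "S1", "S3"], matrix))  # False: x consumed first
```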

The Matrix Model

Here is how it all works. I start with any system and represent it as a canonical matrix. The Scope Constriction Algorithm (SCA) performs the chaotic dynamics: it finds the attractors, with the objects in them, but it cannot discern the objects themselves. The Object Recognition Algorithm (ORA) partitions the matrix and recognizes the objects. The objects are subsequently substituted back into the canonical matrix, resulting in great simplification. The process continues: larger objects are formed out of the smaller ones, while additional information is acquired by the system. There could be a team of developers writing more code, or a teacher imparting instruction. The logical circuit equivalent to the system also develops in the matrix, and the structure of the system is obtained in terms of objects.

We use this same algorithm in our brains. We explain everything in terms of objects; we think and communicate in terms of objects. This explains why.
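The overall loop can be summarized in pseudocode. This is only a control-flow sketch of my reading of the slide; the function bodies are trivial placeholders, not the published SCA/ORA algorithms.

```python
def sca(matrix):
    """Placeholder: evolve the matrix to a minimum-profile attractor."""
    return matrix

def ora(matrix):
    """Placeholder: partition the attractor into invariant submatrices."""
    return []

def fold(matrix, objects):
    """Placeholder: substitute recognized objects back into the matrix."""
    return matrix

def matrix_model(matrix, acquire=lambda m: m, rounds=3):
    for _ in range(rounds):
        matrix = sca(matrix)             # chaotic dynamics -> attractor
        objects = ora(matrix)            # recognize the objects in it
        if not objects:                  # nothing new condensed
            break
        matrix = fold(matrix, objects)   # great simplification
        matrix = acquire(matrix)         # new code, or a teacher's input
    return matrix
```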

The Wumpus World

[Figure: a grid with the Wumpus (W), the Gold (G) and a Pit (P) in row 3, and the Agent (A) in cell [1,1].]

Enough of that; let's see an example: the Hunt the Wumpus computer game. We are going to design an intelligent agent that can hunt the Wumpus. The process involves learning and reasoning in the canonical model. The grid contains the Wumpus, the Gold, several Pits, and the Agent.

Rules: the agent is initially in cell [1,1], and there is no pit in [1,1]. The agent perceives a breeze in any cell next to a pit. The agent has to kill the Wumpus and take the gold without falling into a pit.

The agent's knowledge base is described by a Learning Matrix. The agent's ability to learn grows as it learns, because the agent makes objects as it learns and subsequently learns in terms of the objects. That way the agent develops intelligence.
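A minimal model of the grid and the breeze rule, to fix notation for the next slides. The grid size and pit placement here are illustrative, not taken from the talk.

```python
SIZE = 4
pits = {(3, 3)}          # hypothetical pit location

def neighbors(x, y):
    """Cells orthogonally adjacent to (x, y), inside the grid."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in steps
            if 1 <= x + dx <= SIZE and 1 <= y + dy <= SIZE]

def breezy(x, y):
    """The agent perceives a breeze in any cell next to a pit."""
    return any(cell in pits for cell in neighbors(x, y))

print(breezy(1, 1))  # False: the percept ~B11 used below
print(breezy(3, 2))  # True: adjacent to the pit at (3, 3)
```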

● Collect knowledge into the Learning Matrix.
● Organize the knowledge into canonical form.
● Minimize the profile to generate objects.

[Figure: the MMC Learning Matrix, with rows for services S1 through S8 and columns for the rule, the percept, the CNF terms, and the conclusions: ¬P11, ¬B11, B11 ⇔ (P12 ∨ P21), ¬P12, ¬P21.]

This is the MMC Learning Matrix, which represents the agent's knowledge base. Initially, the game puts the agent in [1,1] and guarantees ¬P11 (there is no pit in [1,1]). The agent has to decide where to go, using its percepts and the rules of the game. Percept: the agent senses no breeze in [1,1], that is, ¬B11. A situation arises where B11 is present in two different sentences, so Resolution must be used to resolve them. Sentences must be in Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals. The first rule and the percept are already in CNF; the conversion must be applied to the second rule, B11 ⇔ (P12 ∨ P21). Services S4, S5 and S6 do the conversion, using the following inferences: biconditional elimination, implication elimination, De Morgan's laws, and distributivity. The effect of the CNF conversion is to generate terms containing B11 that can be resolved with the percept ¬B11. Two such terms are obtained; resolving them shows there are no pits in [1,2] or [2,1]. Services S7 and S8 do the resolution. The agent can move to either location.

But this is unrealistic. Knowledge does not come organized the way I presented it.
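The derivation on this slide can be checked mechanically. Below is a small sketch of the resolution step (roughly what services S7 and S8 do here): the three CNF clauses produced by converting B11 ⇔ (P12 ∨ P21), resolved against the percept ¬B11. The clause encoding is mine; a leading '~' marks negation.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 with c2 on one literal."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

# CNF of B11 <=> (P12 v P21), after biconditional elimination,
# implication elimination, De Morgan and distributivity:
rule_cnf = [frozenset({"~B11", "P12", "P21"}),   # B11 => P12 v P21
            frozenset({"~P12", "B11"}),          # P12 => B11
            frozenset({"~P21", "B11"})]          # P21 => B11
percept = frozenset({"~B11"})                    # no breeze in [1,1]

for clause in rule_cnf:
    for r in resolvents(clause, percept):
        print(set(r))   # {'~P12'} and {'~P21'}: no pit in [1,2] or [2,1]
```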

Information obtained by the Learning Matrix may be incoherent and disorganized.

[Figure: the same Learning Matrix with its rows and columns scrambled.]

Imagine the process that feeds data to the knowledge base. It might be a parallel process, such as a set of data-driven neural cliques that create data on their own schedule. Data arrives at the learning matrix, itself a bigger neural clique, in any order, and is stored as it arrives. The result might be the matrix depicted here: not very well organized. It is the same matrix, only permuted beyond recognition.

The Sorter organizes the information into canonical form.

[Figure: the Learning Matrix after sorting; Profile = 40.]

Next, the Sorter sorts the learning matrix. The matrix is now canonical and much better organized. Yet there are still no clearly visible patterns, and no objects. That is because the Theory of Objects defines objects as the stable attractors of the profile function. The profile is the area between the diagonal and the envelope of the elements (in the figure, the vertical profile is red, the horizontal profile is blue, and the total profile is 40). To find the objects, the total profile of the learning matrix must be minimized. The profile is minimized by applying symmetric permutations to the matrix. The permutations preserve canonicity and cause services to condense into objects.
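To illustrate what "minimizing the profile by symmetric permutations" means, here is a toy version under my reading of the profile (not the real SCA): the profile measures how far nonzeros spread from the diagonal, and a greedy search over symmetric row-and-column swaps shrinks it, condensing related rows into blocks.

```python
def profile(m):
    """Row plus column profile: for each row (column), the distance
    from the diagonal back to the first nonzero entry."""
    n = len(m)
    row = sum(i - next(j for j in range(i + 1) if m[i][j] or j == i)
              for i in range(n))
    col = sum(j - next(i for i in range(j + 1) if m[i][j] or i == j)
              for j in range(n))
    return row + col

def sym_swap(m, a, b):
    """Symmetric permutation: exchange rows a, b and columns a, b."""
    m = [r[:] for r in m]
    m[a], m[b] = m[b], m[a]
    for r in m:
        r[a], r[b] = r[b], r[a]
    return m

def minimize_profile(m, rounds=10):
    """Greedy descent: repeatedly apply the best symmetric swap
    (a toy stand-in for SCA's dissipative dynamics)."""
    n = len(m)
    for _ in range(rounds):
        best, improved = m, False
        for a in range(n):
            for b in range(a + 1, n):
                cand = sym_swap(m, a, b)
                if profile(cand) < profile(best):
                    best, improved = cand, True
        if not improved:
            break
        m = best
    return m

# Two interleaved 2x2 blocks; one symmetric swap condenses them.
m = [[1, 0, 0, 1],
     [0, 1, 1, 0],
     [0, 1, 1, 0],
     [1, 0, 0, 1]]
print(profile(m), "->", profile(minimize_profile(m)))  # 8 -> 4
```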

SCA has automatically refactored the matrix and has generated the following attractor.

[Figure: the canonical matrix attractor; Profile = 23.]

This is the canonical matrix attractor with minimum profile; the profile is now 23. Why am I saying this is an attractor? Because it has a minimum profile, and it has objects that are stable under the dynamics. But the objects are still difficult to discern. A statistical sample of configurations of the attractor is necessary: the theory establishes that the same objects will be present in all configurations. It is this condition, and this condition alone, that determines the existence of objects.

Based on a statistical sample of attractors, ORA has automatically generated two objects.

[Figure: the attractor partitioned into two objects, X and Y; Profile = 23.]

Agent X encapsulates services S4, S5 and S6, which do the conversion to CNF. The object has one constructor that takes a single input, and three methods that initialize one variable each. Recall that these, in turn, encapsulate several types of inference. Agent Y encapsulates service S2, which receives the percept ¬B11, and services S7 and S8, which do Resolution. The object has one constructor and two methods, each taking one CNF statement and initializing one conclusion. Resolution is sound and complete; Resolution encapsulates reasoning.

Both objects can sense the environment (receive information) and act on it (output information). They can also reason. Therefore, the two objects are abstract intelligent agents.
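The invariance condition can be made concrete. In the sketch below (the sampled orderings are hypothetical), a group of services counts as an object only if it occupies consecutive positions in every sampled minimum-profile configuration.

```python
# Hypothetical minimum-profile configurations of services S1..S8.
samples = [
    ["S1", "S3", "S4", "S5", "S6", "S2", "S7", "S8"],
    ["S1", "S4", "S5", "S6", "S3", "S2", "S8", "S7"],
    ["S3", "S1", "S4", "S6", "S5", "S2", "S7", "S8"],
]

def contiguous_in_all(group, samples):
    """True if the group occupies consecutive positions in every sample."""
    for order in samples:
        pos = sorted(order.index(s) for s in group)
        if pos[-1] - pos[0] != len(group) - 1:
            return False
    return True

print(contiguous_in_all({"S4", "S5", "S6"}, samples))  # True: object X
print(contiguous_in_all({"S2", "S7", "S8"}, samples))  # True: object Y
print(contiguous_in_all({"S3", "S4"}, samples))        # False: not invariant
```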

The ontology of the system generated by SCA/ORA

[Figure: an inheritance diagram with the inference rules (biconditional elimination, implication elimination, De Morgan, distributivity, resolution) at the root, and agents X (CNF) and Y (reason) below.]

SCA has also automatically generated the complete ontology for the new intelligent agents. Agent X inherits the inference rules; it is the one that converts sentences to conjunctive normal form. Agent Y inherits the resolution algorithm; it is the one that reasons. This is way beyond Propositional Logic.

The equivalent logical circuit of the system generated by SCA/ORA

[Figure: the circuit read off the canonical matrix, with buses for data, ¬P11, B11 ⇔ (P12 ∨ P21), ¬B11, ¬P12 and ¬P21, and gates for services S1 through S8.]

To each "C" in the canonical matrix there corresponds a connection bus. To each "A" in the canonical matrix there corresponds an AND gate. The circuit can be built, and it will simulate the system when operated. It is interesting to note that the objects are not easily visualized in the circuit; maybe that is why circuit designers have missed them. Parallelism, instead, is easily visualized: S1, S2 and S3 each initiate a thread. Now imagine doing this with the brain, or parts of the brain! A good starting point would be the input devices: eyes and ears.
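A sketch of how such a circuit might be read off the matrix, under my interpretation of the C/A correspondence above; the row contents are hypothetical, not the slide's actual matrix.

```python
# Hypothetical rows of a canonical matrix: 'C' = consumed variable
# (a tap on that variable's connection bus), 'A' = assigned variable
# (the output of an AND gate over the consumed variables).
matrix = {
    "S5": {"clause1": "C", "clause2": "C", "B11_terms": "A"},
    "S7": {"B11_terms": "C", "notB11": "C", "notP12": "A"},
}

def to_circuit(matrix):
    """One AND gate per assignment; its inputs are the consumed buses."""
    gates = []
    for service, row in matrix.items():
        inputs = sorted(v for v, mark in row.items() if mark == "C")
        for output in (v for v, mark in row.items() if mark == "A"):
            gates.append({"gate": service, "type": "AND",
                          "inputs": inputs, "output": output})
    return gates

for gate in to_circuit(matrix):
    print(gate)
```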