Sergio Pissanetzky, 2009


1 The Matrix Theory of Objects and the Automatic Generation of Intelligent Agents
Sergio Pissanetzky, 2009.

2 What is an object, anyway?
Think of a TV screen for a second. It has dots of light. Nothing moves there. But we see objects: faces, buildings, cars, things that move. They are not on the screen; our minds make them. We think, communicate, even dream in terms of objects. This is possibly the most widely observed experimental phenomenon in nature, yet there is no mathematical theory to explain it. So what is an object, anyway? An object is a mathematical abstraction, like "set" or "group": anything that satisfies the definition will also satisfy the theory. An object is anything that has information (attributes), can use it to do something (behavior), and can share it with other objects (interaction). Objects are made of other objects. They organize information by giving it a structure, and they reduce complexity. A system is made of objects: objects are the fundamental structural element of information. There can be no AI without a mastery of objects.

3 But ... where do they come from?
It is easier to explain this if you think of a computer program. The program works and does something, so it has a behavior. But it also has a structure, and it can have many different structures and still work the same. So I can change the structure without changing the behavior. That is called a refactoring transformation. Any system can be refactored, not just a program.
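The point that structure can change while behavior stays fixed can be shown with a minimal sketch (a hypothetical example, not taken from the presentation):

```python
# Two structurally different versions of the same computation.
# A refactoring transformation changes the structure, not the behavior.

def total_price_v1(items):
    # Monolithic structure: one loop does everything.
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total

def line_total(price, qty):
    # Refactored structure: the same behavior, split into smaller services.
    return price * qty

def total_price_v2(items):
    return sum(line_total(p, q) for p, q in items)

cart = [(2.50, 4), (1.25, 2)]
# Both structures exhibit identical behavior.
assert total_price_v1(cart) == total_price_v2(cart) == 12.5
```

Either version can replace the other anywhere in a larger program without changing what the program does; only the internal organization differs.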

4 The Matrix Theory of Objects establishes that:
Any finite system can be perfectly represented by a matrix of services in canonical form. Under a deterministic, dissipative and chaotic dynamics, the structure of the system evolves towards its stable attractors. The attractors are partitioned into submatrices that remain invariant under the dynamics. The attractors represent the structure of the system. The attractors also represent the equivalent logical circuit for the system. The submatrices represent the objects present in the system, and their relationships represent the system’s ontology. The matrix theory says that any system can be described as a canonical matrix. The matrix is refactored by a chaotic dynamic process. The dynamics causes the structure to evolve and converge to its stable attractors. Many stable submatrices form in the attractors. They move around and interact with each other but do not change. These submatrices are the objects. The canonical matrix also represents the equivalent logical circuit of the system, as well as the system’s ontology.

5 The Matrix Model: any system → canonical matrix → SCA → ORA → objects, ontology, circuit
Here is how it all works. I start with any system and represent it as a canonical matrix. The Scope Constriction Algorithm (SCA) runs the chaotic dynamics: it finds the attractors, with the objects in them, but it cannot discern the objects themselves. The Object Recognition Algorithm (ORA) partitions the matrix and recognizes the objects. The objects are then substituted back into the canonical matrix, resulting in great simplification. The process continues: larger objects are formed out of the smaller ones, while the system acquires additional information. There could be a team of developers writing more code, or a teacher imparting instruction. The logical circuit equivalent to the system also develops in the matrix, and the structure of the system is obtained in terms of objects. We use this same algorithm in our brains: we explain everything in terms of objects, we think and communicate in terms of objects. This explains why.

6 The Wumpus World
Enough of that. Let's see an example: the Hunt the Wumpus computer game. We are going to design an intelligent agent that can hunt the Wumpus. The process involves learning and reasoning in the canonical model. The grid contains the Wumpus (W), the Gold (G), several Pits (P), and the Agent (A). Rules: the agent starts in cell [1, 1], and there is no pit in [1, 1]. The agent perceives a breeze in any cell next to a pit. The agent has to kill the Wumpus and take the gold without falling into a pit. The agent's knowledge base is described by a Learning Matrix. The agent's ability to learn grows as it learns, because the agent makes objects as it learns and subsequently learns in terms of those objects. That way the agent develops intelligence.
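The breeze rule can be sketched directly (the grid layout and pit positions here are illustrative, not the slide's):

```python
# Sketch of the breeze percept rule: the agent senses a breeze
# in any cell orthogonally adjacent to a pit.

def adjacent(a, b):
    # Two cells are adjacent if they differ by 1 in exactly one coordinate.
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def breezy(cell, pits):
    return any(adjacent(cell, p) for p in pits)

pits = {(3, 1), (2, 3)}          # illustrative pit positions
assert not breezy((1, 1), pits)  # no breeze in [1,1]: no pit nearby
assert breezy((2, 1), pits)      # next to the pit in [3,1]
```

Sensing no breeze in a cell therefore rules out pits in all of its neighbors, which is exactly the inference the Learning Matrix performs on the next slide.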

7 ● Collect knowledge into the Learning Matrix.
● Organize the knowledge into canonical form. ● Minimize the profile to generate objects. [Figure: the Learning Matrix, with rows for services S1 through S8 and columns labeled rule, percept, CNF, conclusions, over the data ¬P11, ¬B11, B11, P12 ∨ P21, ¬P12, ¬P21.] This is the MMC Learning Matrix, which represents the agent's knowledge base. Initially, the game puts the agent in [1,1] and guarantees ¬P11 (there is no pit in [1,1]). The agent has to decide where to go, using its percepts and the rules of the game. Percept: the agent senses no breeze in [1,1]: ¬B11. A situation arises where B11 is present in two different sentences, so Resolution must be used. Sentences must be in Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals. The first rule and the percept are already in CNF; the conversion is applied to the second rule. Services S4, S5 and S6 do the conversion, using the following inferences: biconditional elimination, implication elimination, De Morgan, and distributivity. The effect of the CNF conversion is to generate terms with B11 that can be resolved with the percept ¬B11. Two such terms are obtained. Resolving them gives: no pits in [1,2] or [2,1]. Services S7 and S8 do the resolution. The agent can move to either location. But this is unrealistic: knowledge does not come organized the way I presented it.
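The derivation on this slide can be sketched as follows. The CNF clauses of the rule B11 ⇔ (P12 ∨ P21) are written out by hand below (they are what biconditional elimination, implication elimination, De Morgan and distributivity would produce); only the resolution step is computed:

```python
# Sketch of the slide's derivation: the rule B11 <=> (P12 v P21),
# in CNF, resolved against the percept -B11.
# Literals are strings; '-' marks negation; clauses are frozensets.

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolve(c1, c2):
    # All resolvents of two clauses: cancel one complementary pair.
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

# CNF of B11 <=> (P12 v P21), after biconditional elimination,
# implication elimination, De Morgan and distributivity:
rule_cnf = [
    frozenset({'-B11', 'P12', 'P21'}),  # B11 => (P12 v P21)
    frozenset({'-P12', 'B11'}),         # P12 => B11
    frozenset({'-P21', 'B11'}),         # P21 => B11
]
percept = frozenset({'-B11'})           # no breeze sensed in [1,1]

conclusions = set()
for clause in rule_cnf:
    conclusions |= resolve(percept, clause)

# Resolution yields the unit clauses -P12 and -P21:
# no pit in [1,2] and no pit in [2,1].
assert frozenset({'-P12'}) in conclusions
assert frozenset({'-P21'}) in conclusions
```

The two clauses containing B11 are exactly the "two such terms" the transcript mentions; resolving each against the percept produces the two safe moves.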

8 Information obtained by the Learning Matrix may be incoherent, disorganized.
[Figure: the same Learning Matrix with rows and columns permuted at random.] Imagine the process that feeds data to the knowledge base. It might be a parallel process, such as a set of data-driven neural cliques that create data on their own schedule. Data arrives at the learning matrix, itself a bigger neural clique, in any order, and is stored as it arrives. The result might be the matrix depicted here: not very well organized. It is the same matrix, only permuted beyond recognition.

9 The Sorter organizes the information into a canonical form
[Figure: the Learning Matrix after sorting, in canonical form; vertical profile in red, horizontal profile in blue. Profile = 40.] Next, the sorter sorts the learning matrix. The matrix is now canonical and much better organized. Yet there are still no clearly visible patterns, and no objects. That's because the Theory of Objects defines objects as the stable attractors of the profile function. The profile is the area between the diagonal and the envelope of the elements; the vertical profile is shown in red, the horizontal profile in blue. The total profile is 40. To find the objects, the total profile of the learning matrix must be minimized. The profile is minimized by applying symmetric permutations to the matrix. The permutations preserve canonicity and cause services to condense into objects.
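The profile just described can be computed directly. A minimal sketch, assuming one common convention (distance from the diagonal back to the first nonzero in each row and column; the talk's exact convention may differ, and the matrix below is illustrative, not the slide's):

```python
# Sketch of a total-profile computation for a square 0/1 matrix.
# Horizontal profile: for each row, how far left of the diagonal
# its first nonzero lies. Vertical profile: same per column.

def total_profile(m):
    n = len(m)
    horiz = vert = 0
    for i in range(n):
        row_nz = [j for j in range(n) if m[i][j]]
        col_nz = [k for k in range(n) if m[k][i]]
        if row_nz and min(row_nz) < i:
            horiz += i - min(row_nz)
        if col_nz and min(col_nz) < i:
            vert += i - min(col_nz)
    return horiz + vert

# A small illustrative matrix (not the one on the slide):
m = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
]
assert total_profile(m) == 6
```

Applying the same symmetric permutation to rows and columns changes this number while preserving which services relate to which data, which is why profile minimization by symmetric permutations can reorganize the matrix without altering the system it represents.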

10 SCA has automatically refactored the matrix
and has generated the following attractor. [Figure: the canonical matrix attractor, with minimum profile. Profile = 23.] This is the canonical matrix attractor with minimum profile; the profile is now 23. Why am I saying this is an attractor? Because it has a minimum profile, and it has objects that are stable under the dynamics. But the objects are still hard to recognize. A statistical sample of configurations of the attractor is necessary. The theory establishes that the same objects will be present in all configurations. It is this condition, and this condition alone, that determines the existence of objects.

11 The two objects are intelligent agents
Based on a statistical sample of attractors, ORA has automatically generated two objects, X and Y. [Figure: the attractor matrix with the two objects X and Y outlined. Profile = 23.] Agent X encapsulates services S4, S5 and S6, which do the conversion to CNF. The object has one constructor that takes a single input, and 3 methods that initialize 1 variable each. Recall that these services, in turn, encapsulate several types of inference. Agent Y encapsulates service S2, which receives the percept ¬B11, and services S7 and S8, which do Resolution. The object has one constructor and 2 methods that take one CNF statement each and initialize one conclusion. Resolution is sound and complete; Resolution encapsulates reasoning. Both objects can sense the environment (receive information) and act on it (output information). They can also reason. Therefore, the two objects are abstract intelligent agents.

12 The ontology of the system generated by SCA/ORA
[Figure: ontology diagram; the inference rules (biconditional elimination, implication elimination, De Morgan, distributivity, resolution) are inherited by agents X (CNF) and Y (reason).] SCA has also automatically generated the complete ontology for the new intelligent agents. Agent X inherits the inference rules; it is the one that converts sentences to conjunctive normal form. Agent Y inherits the resolution algorithm; it is the one that reasons. This is way beyond Propositional Logic.

13 The equivalent logical circuit of the system generated by SCA/ORA
[Figure: the equivalent logical circuit, with buses for the data ¬P11, B11, P12 ∨ P21, ¬B11, ¬P12, ¬P21 and gates for services S1 through S8.] To each "C" in the canonical matrix there corresponds a connection bus. To each "A" in the canonical matrix there corresponds an AND gate. The circuit can be built, and will simulate the system when operated. It is interesting to note that the objects are not easily visualized in the circuit; maybe that is why circuit designers have missed them. Instead, parallelism is easily visualized: S1, S2 and S3 each initiate a thread. Now imagine doing this with the brain, or parts of the brain! A good starting point would be the input devices: eyes and ears.
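Simulating such a circuit can be sketched as follows. The C/A convention assumed here (an "A" entry is an input the service's AND gate waits on, a "C" entry is the bus it drives) is an illustration and not necessarily the talk's exact semantics, and the intermediate data c1, c2 are hypothetical:

```python
# Sketch: run the AND-gate circuit read off a tiny service matrix.
# Assumed convention (illustrative): per service row, 'A' entries
# are AND-gate inputs, 'C' entries are the buses the service drives.

services = {
    'S2': {'A': [],              'C': ['-B11']},  # percept source
    'S7': {'A': ['-B11', 'c1'],  'C': ['-P12']},  # resolution step
    'S8': {'A': ['-B11', 'c2'],  'C': ['-P21']},  # resolution step
}

present = {'c1', 'c2'}   # hypothetical data already on their buses
changed = True
while changed:           # propagate until no AND gate can fire
    changed = False
    for name, row in services.items():
        if all(d in present for d in row['A']):
            new = set(row['C']) - present
            if new:
                present |= new
                changed = True

# Services with no inputs fire immediately (the threads the
# transcript mentions); downstream gates fire as buses fill.
assert '-P12' in present and '-P21' in present
```

The fixed-point loop is the "operate the circuit" step: data propagates along buses until no gate has all its inputs satisfied for the first time, at which point the circuit has simulated the system.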


Download ppt "Sergio Pissanetzky 2009 Title"

Similar presentations


Ads by Google