INTELLIGENCE WITHOUT REPRESENTATION
Cognitive Science Interdisciplinary Program, 이광주
Contents
Introduction
Evolution of Intelligence
Abstraction as a Dangerous Weapon
Incremental Intelligence
Who Has the Representation?
The Methodology in Practice
What This Is Not
Limits to Growth
Introduction
Traditional AI: have we chosen the right subpieces? The right interfaces?
My approach: incrementally build up intelligence, having a complete system at each step
At each step, we let the system loose in the real world to find the next point to upgrade
A Different Approach to AI
Conclusion: at simple levels of intelligence, explicit representations and models of the world get in the way; using the world as its own model is better!
Hypothesis: representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems
The Evolution of Intelligence
Intelligence was once very simple, having only:
The ability to move around in a dynamic environment
The ability to sense the surroundings
Mobility, acute vision, and the ability to carry out survival-related tasks in a dynamic environment are the necessary basis for the development of intelligence
Abstraction as a Dangerous Weapon (1)
In AI, abstraction is used as a mechanism for self-delusion: to factor out all aspects of perception and motor skills
"Good representation is the key to AI"
By representing only the pertinent facts explicitly, the semantics of a world (which on the surface was quite complex) were reduced to a simple closed system
Example: a chair and a photograph. How can we capture all the relevant concepts? And which ones?
Abstraction as a Dangerous Weapon (2)
The input to most AI programs is a restricted set of simple assertions deduced from the real data by humans
This abstraction, performed by humans, is the essence of intelligence and the hard part of the problems being solved
There is no clean division between perception (abstraction) and reasoning in the real world
Abstraction as a Dangerous Weapon (3)
The abstraction reduces the input data so that the program experiences the same perceptual world as humans (Merkwelt, von Uexküll 1921)
A robot has its own Merkwelt
The Merkwelt we provide may not be what we actually use internally
Incremental Intelligence
"I wish to build completely autonomous mobile agents that co-exist in the world with humans, and are seen by those humans as intelligent beings in their own right."
Consider the problem of building Creatures as an engineering problem
Engineering scheme:
Decompose a complex system into parts
Build the parts
Interface them into a complete system
이광주: questions set aside for now: how humans work, applications, philosophical implications
The Requirements for Creatures
A Creature must cope appropriately and in a timely fashion with changes in its dynamic environment
A Creature should be robust with respect to its environment
A Creature should be able to adapt to its surroundings and capitalize on fortuitous circumstances
A Creature should do something in the world; it should have some purpose in being
Decomposition by Function
Traditionally, an intelligent system has been built around a central system:
Perceptual modules deliver a symbolic description of the world
Action modules take a symbolic description of desired actions and make sure they happen in the world
Vision workers are not immune to the earlier criticisms of AI workers: most vision research is presented as a transformation from one image representation to another registered image
Decomposition by Function (2)
Troubles:
A long chain of modules is needed to connect perception to action
To test any one of them, they all must first be built
Until realistic modules are built, we can hardly predict exactly what modules will be needed or what interfaces they will require
Decomposition by Activity
The system is divided into activity-producing subsystems
Each activity (behavior-producing system) individually connects sensing to action: a layer, a skill
Each activity must decide for itself when to act; it is not a subroutine to be invoked at the call of some other layer
Advantages:
Gives an incremental path from very simple systems to a complex autonomous intelligent system
At each step we need only build a small piece, and then interface it to the already-existing intelligence
Decomposition by Activity (2)
Experiment:
First, build a simple complete autonomous system and test it in the real world
A mobile robot that avoids hitting things
Two independent channels connecting sensing to action
No single place where "perception" delivers a representation of the world
Next, build an incremental layer of intelligence
It operates in parallel to the first system
It tries to visit distant visible places
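The two-layer experiment above can be sketched in a few lines. This is a minimal illustration, not Brooks' actual robot code: all names (`avoid`, `wander`, `step`) and the sonar thresholds are invented for the example. Each layer reads the raw sonar readings directly and proposes an action; there is no shared world model between them, and the lower layer's reflex takes precedence when it fires.

```python
# Hypothetical sketch of two behavior layers, each wiring sensing
# directly to action. Names and thresholds are invented for illustration.

def avoid(sonar):
    """Layer 0: turn away when any obstacle is close."""
    if min(sonar) < 0.5:          # obstacle within 0.5 m of some sensor
        return "turn"             # reflex: steer away
    return None                   # no opinion; let higher layers act

def wander(sonar):
    """Layer 1: head toward open space when some direction is clear."""
    return "forward" if max(sonar) > 2.0 else "turn"

def step(sonar):
    # Layer 0 dominates: its command, when present, wins.
    return avoid(sonar) or wander(sonar)

print(step([0.3, 1.0, 4.0]))  # obstacle near -> 'turn'
print(step([1.0, 3.0, 4.0]))  # open space  -> 'forward'
```

Note that neither layer builds or consults a map: the sonar readings themselves are the only "model" of the world either layer uses.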
Who Has the Representation?
The fact that there is no central representation helps the Creature meet its goals:
Low-level simple activities run quickly and often
Multiple parallel activities with no central representation give gradual degradation
Each layer has its implicit purpose (or goal), apparent by observation, and uses the world as its own model
There is no need for an explicit representation of goals that some central process selects from
By not trying to have an analogous model of the world, centrally located in the system, we are less likely to have built in a dependence on that model being completely accurate. Rather, individual layers extract only those aspects of the world which they find relevant: projections of a representation into a simple subspace
There is no complex representation to maintain and reason about
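The idea that each layer extracts only the aspects of the world it finds relevant can be sketched as each layer projecting the same raw sensor data into its own simple subspace. This is an illustrative example only; the layer names and sensor fields are invented, not taken from Brooks' robots.

```python
# Hypothetical sketch: each layer projects the raw sensor data into
# its own simple subspace instead of consulting one central world model.

raw = {"sonar": [0.4, 2.0, 3.5], "odometry": (1.0, 2.0), "battery": 0.8}

def avoid_view(sensors):
    # The avoid layer cares only about the nearest sonar return.
    return min(sensors["sonar"])

def explore_view(sensors):
    # The explore layer cares only about which direction is most open.
    return sensors["sonar"].index(max(sensors["sonar"]))

print(avoid_view(raw))    # distance to nearest obstacle
print(explore_view(raw))  # index of the most open direction
```

Neither projection is "the" representation of the world; each exists only inside its own layer, so no central model has to be kept accurate.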
No Representation vs. No Central Representation
There is no representation in the standard AI sense:
No variables that need instantiation
No rules that need to be selected
No choices to be made
To a large extent, the state of the world determines the action of the Creature
"The complexity of the behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment" (Simon, 1969)
The Methodology in Practice
Maxims:
Test the Creature in the real world
Each layer must be fully debugged before the next is added
Subsumption architecture:
Decomposition into layers
Incremental composition through debugging
Architecture
Each layer is composed of a fixed-topology network of simple finite state machines
Each finite state machine:
Has a handful of states
Has one or two internal registers
Has one or two internal timers
Has access to simple computational machinery
Runs asynchronously, sending and receiving fixed-length messages
There is no central locus of control; the system is data-driven
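One such finite state machine can be sketched as below. This is a simplified, hypothetical illustration of the ingredients listed above (a handful of states, one register, one timer, fixed-length messages), not Brooks' actual machine description language; the state names and refractory behavior are invented.

```python
# Hypothetical sketch of one subsumption-style finite state machine:
# two states, one internal register, one internal timer. It consumes
# one message per tick and sometimes emits one. Names are invented.

class StateMachine:
    def __init__(self):
        self.state = "idle"       # one of a handful of named states
        self.register = None      # one internal register
        self.timer = 0            # one internal timer (ticks remaining)

    def receive(self, msg):
        """Consume one message for this tick; maybe emit one."""
        if self.timer > 0:        # timer running: ignore input this tick
            self.timer -= 1
            return None
        if self.state == "idle" and msg is not None:
            self.register = msg   # latch the input into the register
            self.state = "firing"
            return None
        if self.state == "firing":
            self.state = "idle"
            self.timer = 2        # refractory period: ignore 2 ticks
            return self.register  # emit the latched value
        return None

m = StateMachine()
outputs = [m.receive(x) for x in [7.0, None, 1.0, 2.0, 9.0]]
print(outputs)
```

A layer is then a fixed wiring of many such machines, each ticking independently; since each machine reacts only to the messages on its own wires, there is no central locus of control.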
Architecture (2)
Interfaces between layers:
Suppression: a higher layer replaces the signal on a lower machine's input wire
Inhibition: a higher layer blocks the signal on a lower machine's output wire
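The two interface mechanisms can be sketched as simple wire operations. This is a schematic illustration with invented names; in the subsumption architecture these nodes also act only for a fixed time window after the higher layer's signal arrives, which is omitted here for brevity.

```python
# Hypothetical sketch of the two layer-interface mechanisms.
# Suppression replaces the signal on a lower machine's INPUT wire;
# inhibition blocks the signal on its OUTPUT wire. Names are invented.

def suppress(lower_input, higher_signal):
    """The higher layer's signal, when present, replaces the lower input."""
    return higher_signal if higher_signal is not None else lower_input

def inhibit(lower_output, inhibiting):
    """While the inhibiting signal is active, the output is discarded."""
    return None if inhibiting else lower_output

print(suppress("sonar-heading", "goal-heading"))  # higher layer wins
print(suppress("sonar-heading", None))            # lower input passes
print(inhibit("motor-cmd", inhibiting=True))      # output blocked
```

Because the higher layer intervenes only on individual wires, the lower layer keeps running unchanged underneath it, which is what makes the incremental, layer-at-a-time composition possible.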
What This Is Not
It isn't connectionism
It isn't neural networks
It isn't production rules
It isn't a blackboard
It isn't German philosophy
Limits to Growth
How many layers can be built before the interactions between them become too complex?
How complex can the behaviors be without the aid of central representations?
Are learning and other such high-level functions possible?