The Emergence of Interactive Meaning Processes in Autonomous Systems. Argyris Arnellos, Thomas Spyrou, John Darzentas. University of the Aegean, Dept. of Product & Systems Design Engineering, Syros, Greece.

General Problem. Theoretical frameworks of cognition are differentiated by the way they handle the notions of intentionality, meaning, representation, and information. One could ask: how is meaning generated and manipulated in natural, and consequently in artificial, cognitive systems?

MAIN APPROACHES to COGNITION and to the DESIGN of ARTIFICIAL AGENTS. Cognitivism and Computational Artificial Agents: all intentional content is a kind of information which is externally transmitted by a merely causal flow; meaning is externally ascribed. → An objection: Searle’s Chinese Room Argument. → An answer (Harnad, 1990): symbol grounding is an empirical issue. Note: a complete cognitivist grounding theory should consider both external and internal representational content, as well as their transduction system and its interactive nature. The main candidate for the transduction is a connectionist network.

Connectionism. Connectionist systems as syntactically adaptive systems (not all of them). Such machines receive (contingent) feedback from their outputs, which then directs the adjustment of their decision function (i.e. a new percept-action mapping). – No semantics, as the system cannot decide on its own which aspects of the world must be encoded ("feature primitives") such that the machine can find a successful classification rule. – Adding a "training" origin to correspondence works no better than adding lawfulness. – Connectionist architectures cannot account for internal content.
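To make the objection concrete, here is a minimal Python sketch (our illustration, not anything from the slides) of such a syntactically adaptive system: a perceptron-style unit whose percept-action mapping is adjusted by contingent external feedback. The feature encoding is fixed in advance by the designer, which is exactly why, on the argument above, no semantics arises inside the system.

```python
import random

class SyntacticallyAdaptiveSystem:
    """Toy adaptive classifier: external feedback reshapes its decision function."""

    def __init__(self, n_features, learning_rate=0.1):
        self.w = [random.uniform(-0.5, 0.5) for _ in range(n_features)]
        self.b = 0.0
        self.lr = learning_rate

    def act(self, percept):
        # 'percept' is a vector of designer-chosen "feature primitives";
        # the system never chooses which aspects of the world to encode.
        s = sum(w * x for w, x in zip(self.w, percept)) + self.b
        return 1 if s > 0 else 0

    def feedback(self, percept, action, correct_action):
        # Contingent feedback from the output adjusts the decision function,
        # i.e. installs a new percept-action mapping.
        error = correct_action - action
        self.w = [w + self.lr * error * x for w, x in zip(self.w, percept)]
        self.b += self.lr * error
```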

Dynamic Systems and Cognition. Time-dependence: natural cognition happens in real time, hence dynamics is better suited to model it than the a-temporal computational approach. Embodiment: cognition is embedded in a nervous system, in a body, and in an environment, whereas computationalism typically abstracts this embeddedness away and can incorporate it only in an ad hoc manner. Emergence: dynamics can explain the emergence and stability of cognition through self-organisation, whereas cognitivism ignores the problem of cognitive emergence.

Dynamic Systems and Cognition. No information processing (no symbols, no representations). The dynamics of the cognitive substrate (matter) are taken to be the only thing responsible for its self-organization. The system's ability for classification depends on the richness of its attractors, which are used to represent events in its environment. The system's threshold for evolving meaning cannot transcend the complexity of its attractor landscape.
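As a minimal sketch of the attractor-based picture (our own toy example, not the authors' model), consider a one-dimensional gradient system with the double-well potential V(x) = (x² − 1)²/4: every stimulus settles into one of the two point attractors at x = ±1, so the system can "classify" events into exactly as many classes as its attractor landscape provides.

```python
def settle(x, dt=0.01, steps=10_000):
    """Relax the state under dx/dt = -dV/dx for V(x) = (x**2 - 1)**2 / 4."""
    for _ in range(steps):
        x += dt * (x - x**3)   # -dV/dx; point attractors at x = -1 and x = +1
    return x

def classify(stimulus):
    # The attractor reached by the perturbed state stands in for the event.
    return "A" if settle(stimulus) < 0 else "B"

print(classify(-0.3), classify(0.7))   # -> A B
```

With only two attractors there are only two possible classifications, which illustrates the sense in which the system's meaning-evolving threshold cannot exceed the complexity of its attractor landscape.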

Defining Agency. Strong notion: an agent is a system which exhibits: interactivity, the ability to perceive and act upon its environment by taking the initiative; intentionality, the ability to effect goal-oriented interaction by attributing purposes, beliefs and desires to its actions; autonomy, the ability to operate intentionally and interactively based only on its own resources. [Collier, 1999] suggests that there is: no function without autonomy; no intentionality without function; no meaning without intentionality. The circle closes by considering meaning as a prerequisite for the maintenance of the system's autonomy during its interactions.
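The strong notion of agency can be read as an architectural requirement. The following Python interface is a minimal sketch of that reading (the method names are our own hypothetical choices, not the authors'): perception and action for interactivity, goal-relative action selection for intentionality, and self-maintenance from the system's own resources for autonomy.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Sketch of the 'strong notion' of agency as three obligations."""

    @abstractmethod
    def perceive(self, environment):
        """Interactivity: take the initiative in sampling the environment."""

    @abstractmethod
    def select_action(self, percept, goals):
        """Intentionality: choose an action relative to the agent's own goals."""

    @abstractmethod
    def self_maintain(self):
        """Autonomy: keep the agent's organisation going using only its own resources."""
```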

The Need for a New Kind of Representations. “As tasks become more complex the use of internal states that carry information about the environment becomes less and less avoidable.” (Kirsh, 1991) “…even in the very simple cases mentioned above we find that individual units act as very simple representations in mediating interactions between the robot and its world.” (Brooks, 1997) What kind of representations do we need? No representations per se, but a different type of representation.

The Need for a New Kind of Representations. Representations that can only be understood in the context of activity. For an adaptive system the primary problem is to produce action appropriate to the context, not to referentially individuate a signal source (as in cognitivism). The content should be accessible to the system itself.

Functionality and Representations. A behaviour is ‘really’ contributing to the system's functionality if and only if it is mediated by representations, and an information-carrier is a representation only if it plays an appropriate role in the system's functionality towards its self-maintenance. Where can this kind of representation be found, and what type will it be?

Code Duality in protein structures/sequences, based on (Hoffmeyer & Emmeche, 1991). Analog Information Space (AIS): protein functional conformations. Digital Information Space (DIS): amino-acid sequences. [Slide diagram: mappings between the AIS and the DIS, i.e. between structure and sequence.]
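A toy Python sketch of code duality (our illustration only; the numeric scale and the "conformation" measure are placeholders, not biochemistry): the same information is carried digitally as a discrete, copyable sequence and analogically as a continuous property of the folded structure derived from it.

```python
# Toy hydrophobicity-like scale; the numbers are illustrative placeholders.
SCALE = {"A": 1.8, "K": -3.9, "D": -3.5, "L": 3.8, "G": -0.4}

def digital_record(sequence):
    # DIS: the discrete, re-writable amino-acid sequence.
    return list(sequence)

def analog_conformation(sequence):
    # AIS: a single continuous parameter standing in for the functional conformation.
    return sum(SCALE.get(aa, 0.0) for aa in sequence) / len(sequence)

seq = "AKDLG"
print(digital_record(seq))        # ['A', 'K', 'D', 'L', 'G']
print(analog_conformation(seq))   # ≈ -0.44
```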

Levels of Interactive Representations (Bickhard, 1998). Interactivism and Function: function is a forward-looking concept, as it tries to explain a process's future value to the system. Recursively Self-Maintenant Systems: the system has alternative ways of self-maintenance available and can switch from one alternative to another in case of failure. The conditions under which the serving of a function succeeds constitute the dynamic presuppositions of those functional processes. A minimal ontological representative system (S) has to include a subsystem, a differentiator (Dif), engaging in interaction with its environment (Env).

Levels of Interactive Representations (Bickhard, 1998). The internal course of that interaction will depend both on the organization of the subsystem and on the interactive properties of the environment. Each final state classifies together all of the environments that would yield that particular final state if interacted with. Each possible final state (FS) will serve as a differentiation of its class of environments. [Slide diagram: environments Env1–Env3, the system's differentiator (Dif) with final states FS1/FS2, Goal System 1 and Goal System 2, and procedures Pi, Pj.]
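A minimal Python sketch of the differentiator idea (our assumption about one way to realise it; the probe, threshold and state names are hypothetical): the interaction ends in one of a few final states, and each final state implicitly groups together all environments that would have produced it, without saying anything further about them.

```python
class Environment:
    def __init__(self, reflectance):
        self.reflectance = reflectance

    def probe(self):
        return self.reflectance

class Differentiator:
    def interact(self, env):
        # The internal course of the interaction depends both on this subsystem's
        # organisation (the rule below) and on the environment's interactive properties.
        return "FS1" if env.probe() > 0.5 else "FS2"

dif = Differentiator()
# Environment(0.9) and Environment(0.7) both end in FS1, Environment(0.2) in FS2:
# each final state differentiates a class of environments, with no content about
# what that class is.
print([dif.interact(Environment(r)) for r in (0.9, 0.7, 0.2)])   # ['FS1', 'FS1', 'FS2']
```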

Emergent Levels of Interactive Representations. Level 3: Implicit definitions of environmental categories. The final states reached can be considered as a digitalisation of the analog-analog interactions taking place in the interior of the system due to its contact with the environment. Minimal information: there is no information concerning anything about that environment beyond the fact that it was just encountered and that it is not the same as those environments differentiated by any of the other possible final states. There is no representational content involved; the system has no information about the classes of environments that it implicitly defines.

Emergent Levels of Interactive Representations. Level 4: Functional Interactive Predication. The system strives to maintain its self-maintenance and, through its interactions, builds a new level of organisation (a new representational level) where the implicit environmental differentiations of Level 3 are re-organised as a quantitative variety of functional predications about the environment. From Level 3 → Level 4 – minimal representation: the whole system at this moment (FI) interprets the signs of Level 3 as Dynamic Interpretants at Level 4. The differentiator's final states (FS) (Representamens) indicate which further procedures might be appropriate, and the goal-system selects from among them. Analog-driven emergence, where new predicates are formed.
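A minimal sketch of Level-4 predication in Python (the indication table, goals and procedure names are our hypothetical choices, not the authors'): the final state reached by the differentiator indicates which further interaction procedures might be appropriate, and a goal system selects among them; it is this indication of further interactive potential that carries the fallible predication about the environment.

```python
# Hypothetical indications: which further procedures each final state points to.
INDICATIONS = {
    "FS1": ["approach", "ingest"],
    "FS2": ["withdraw"],
}

# Hypothetical goal preferences.
PREFERENCE = {"feed": "ingest", "stay_safe": "withdraw", "explore": "approach"}

def goal_system(final_state, goal):
    indicated = INDICATIONS.get(final_state, [])
    wanted = PREFERENCE.get(goal)
    # Select the indicated procedure that serves the current goal, if any.
    if wanted in indicated:
        return wanted
    return indicated[0] if indicated else "idle"

print(goal_system("FS1", "feed"))   # -> ingest
print(goal_system("FS2", "feed"))   # -> withdraw (the indicated potentialities need not serve the goal)
```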

Levels of Interactive Representations. [Slide diagram: the differentiator schema again (Env1–Env3, Dif, final states FS1/FS2, Goal Systems 1 and 2, procedures Pi, Pj), now annotated with the semiotic roles System's Habit, Representamen, and Dynamic Interpretant.]

Levels of Interactive Representations. Level 5: Implicit definitions of environmental properties. Interaction with the environment continues, and the AIS (analog information spaces) of Level 4 interact locally on various time scales in order to reduce uncertainty about the environment. A (in a way) more compressed digital record emerges, and we have a transition from implicit definitions of environmental categories → implicit definitions of environmental interactive properties. From Level 4 → Level 5: emergence of functional relations among the system's organisations that involve such implicit definitions: implicitness and presupposition are observed, which can account for unbounded representationality.

Levels of Interactive Representations. Level 6: Emergence of organisations of interactive potentialities. Level 5's representations are implicitly selected by the system's differentiating interactions in a “statistical manner” → formation of aggregates of properties that are presently available. – These aggregates are ongoingly updated → construction of new indications and changing of old ones → formation of apperceptive procedures. From Explicit Situation Images → Implicit Situation Images. The implicit definitions of environmental interactive properties (Level 5) engage in various apperceptive procedures driven by Level 4 and form organisations of interactive potentialities.
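A minimal Python sketch of the Level-6 picture (the property names and the update rule are our hypothetical choices): implicitly defined interactive properties encountered in ongoing differentiations are aggregated into a situation image, and each new differentiation apperceptively updates which interactions are currently indicated.

```python
class SituationImage:
    def __init__(self):
        self.potentialities = set()   # interactions currently indicated as available

    def apperceive(self, detected):
        # Hypothetical apperceptive update: properties detected as present add
        # indications, properties found absent retract them.
        for prop, present in detected.items():
            if present:
                self.potentialities.add(prop)
            else:
                self.potentialities.discard(prop)
        return self.potentialities

image = SituationImage()
image.apperceive({"graspable": True, "edible": True})
print(image.apperceive({"edible": False}))   # -> {'graspable'}
```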

Levels of Interactive Representations. Level 7: Emergence of objects and Constructive Memory. The organisations of indications of interactive potentialities are used in the system's interactions and, in some cases, they tend to remain constant (invariant) as patterns. → The quantitative variety of the organisations of interactive potentialities of Level 6 is re-organised as certain types of organisations (based on their temporal coherence). → Such types of organisations of interactive potentialities constitute objects for the system itself. Memory: the system is able to expand its situation image without explicit bound → the system represents such invariances in its situation image. Constructive Memory: the system is able to test past apperceptive processes in the present and in differing directions.
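A minimal Python sketch of object formation and constructive memory (the coherence threshold and data structures are our assumptions): a pattern of interactive potentialities that recurs invariantly across successive situation images is re-organised as an "object" for the system, and kept so that past apperceptive constructions can be re-applied in new directions.

```python
from collections import Counter

class ConstructiveMemory:
    def __init__(self, coherence_threshold=3):
        self.counts = Counter()
        self.threshold = coherence_threshold
        self.objects = set()

    def observe(self, potentialities):
        pattern = frozenset(potentialities)
        self.counts[pattern] += 1
        if self.counts[pattern] >= self.threshold:   # temporal coherence reached
            self.objects.add(pattern)                # the invariant pattern becomes an object
        return self.objects

memory = ConstructiveMemory()
for _ in range(3):
    objects = memory.observe({"graspable", "edible"})
print(objects)   # one object: the invariant {'graspable', 'edible'} pattern
```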

Levels of Interactive Representations. Level 7: Emergence of objects and Constructive Memory. At this phase, icons and indexes can emerge in the system, but not symbols, for which genuine social communication is needed. Symbols will need the lower levels. Notes: a useful framework for ALife and AI experiments (since interactive representations need only simple control systems). It seems that, initially, two non-semiotic levels should exist?

Full Semiotic and Representational Capacity. [Slide diagram: the cognitive system coupled to its environment, combining representational structure, the system's history, pragmatics, testing of anticipations, morphodynamics, action and measurement, memory-based analogy making, rule-based syntactic complexity, signs with their objects and interpretants (IO, II, FI, DI, DO), and abduction, deduction and induction.]