Understanding Consciousness with Model Abstractions
Firmo Freire firmo@inf.puc-rio.br
Paris, July 9, 2007
LIP6, 09/07/2007

Agenda
– Introduction
– Function of Consciousness
– Control Systems Fundamentals
– Internal Models
– The Simulator
– Experiment and Results
– Related Work
– Future Work
– Conclusions
– References
Introduction
This presentation is about:
– A software platform for research into Artificial Consciousness;
– The conceptual foundation for this research.
What Is Consciousness?
It is perhaps too early to try to define consciousness. A better course is to understand its added value to behavior, and then to define consciousness as a consequence of this understanding.
Consciousness must have a physical (direct or indirect) influence on the environment; otherwise it would not be detectable by evolution and selected as a trait for survival.
A Function of Consciousness
Consciousness allows for flexibility of action and behavior.
How can the brain make successful limb and body movements? Environmental conditions are constantly changing and must be adapted to.
Control Systems Concepts (1/6)
Open-Loop Control (Feed-Forward Control Systems)
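The slide's block diagram is not reproduced here. As a rough illustration, open-loop (feed-forward) control can be sketched as follows; the gains, names, and numbers are hypothetical, not from the talk:

```python
# Minimal open-loop (feed-forward) control sketch: the controller computes
# its command from the setpoint and an assumed plant model only -- it never
# observes the plant's actual output, so model errors go uncorrected.

PLANT_GAIN = 2.0    # the gain the controller *assumes* the plant has
ACTUAL_GAIN = 1.8   # the real plant differs from the model

def feed_forward_controller(setpoint: float) -> float:
    """Invert the assumed plant model to pick a command."""
    return setpoint / PLANT_GAIN

def plant(command: float) -> float:
    """The real plant's response to a command."""
    return ACTUAL_GAIN * command

output = plant(feed_forward_controller(10.0))
print(output)   # 9.0 -- the 10% model mismatch shows up directly as error
```

With no feedback path, the steady-state error is exactly the model mismatch; the next slide's closed-loop scheme removes it.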
Control Systems Concepts (2/6)
Closed-Loop Control (Feedback Control Systems)
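Again the diagram is lost; a minimal closed-loop sketch (a proportional controller, with hypothetical gains) shows how feedback corrects the same model mismatch that defeats the open-loop controller above:

```python
# Minimal closed-loop (feedback) control sketch: a proportional controller
# repeatedly corrects its command using the measured error, converging to
# the setpoint despite an imperfectly known plant gain.

ACTUAL_GAIN = 1.8   # true plant gain, unknown to the controller
KP = 0.4            # proportional gain (hypothetical tuning)

def step(command: float, setpoint: float):
    output = ACTUAL_GAIN * command       # plant response
    error = setpoint - output            # measured feedback error
    return command + KP * error, output  # corrected command, current output

command, output = 0.0, 0.0
for _ in range(50):
    command, output = step(command, 10.0)
print(round(output, 3))   # converges to the setpoint, 10.0
```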
Control Systems Concepts (3/6)
System Identification:
– Black-box,
– Gray-box, and
– White-box models
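The slide only names the three identification regimes. As an illustrative sketch of the black-box end of that spectrum (the data and model form are hypothetical), one can fit a plant gain from input/output samples alone, with no knowledge of the plant's internals:

```python
# Black-box system identification sketch: estimate an unknown static gain
# y = k * u from observed input/output pairs by least squares, using no
# prior knowledge of the plant's structure. Data values are made up.

inputs  = [1.0, 2.0, 3.0, 4.0]
outputs = [2.1, 3.9, 6.0, 8.0]   # produced by an unknown plant

# Least-squares estimate for y = k * u:  k = sum(u*y) / sum(u*u)
k_hat = sum(u * y for u, y in zip(inputs, outputs)) / sum(u * u for u in inputs)
print(round(k_hat, 3))   # close to 2, the gain implicit in the data
```

A gray-box fit would constrain the model's structure from physics and estimate only its free parameters; a white-box model would be derived entirely from first principles.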
Control Systems Concepts (4/6)
Model Predictive Control
Control Systems Concepts (5/6)
Model Predictive Control Dynamics
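The two MPC slides' figures are not reproduced. The receding-horizon idea they depict can be sketched as follows; the plant model, candidate set, horizon, and setpoint are all hypothetical:

```python
# Model predictive control sketch: at every step, simulate candidate command
# sequences through an internal model of the plant, pick the sequence whose
# predicted trajectory best tracks the setpoint, and apply only its first
# command (receding horizon).

from itertools import product

HORIZON = 3
CANDIDATES = [-1.0, 0.0, 1.0]   # coarse command alternatives

def model(state: float, command: float) -> float:
    """Internal model of the plant (assumed perfect here)."""
    return 0.9 * state + command

def predicted_cost(state: float, commands, setpoint: float) -> float:
    """Sum of squared tracking errors along the predicted trajectory."""
    cost = 0.0
    for c in commands:
        state = model(state, c)
        cost += (setpoint - state) ** 2
    return cost

def mpc_step(state: float, setpoint: float) -> float:
    best = min(product(CANDIDATES, repeat=HORIZON),
               key=lambda seq: predicted_cost(state, seq, setpoint))
    return best[0]   # apply only the first command, then re-plan

state = 0.0
for _ in range(20):
    state = model(state, mpc_step(state, 5.0))
print(round(state, 2))   # hovers near the setpoint of 5
```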
Control Systems Concepts (6/6)
Forward and Inverse Models
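The distinction the slide draws can be made concrete with a toy one-joint "limb" whose velocity equals the motor command; the dynamics and time step are hypothetical:

```python
# Forward vs. inverse model sketch. The forward model maps a command to a
# predicted sensory outcome; the inverse model maps a desired outcome to
# the command that achieves it. Toy dynamics: velocity = command, dt = 0.1.

DT = 0.1

def forward_model(position: float, command: float) -> float:
    """Forward model: predict the next sensory state from the command."""
    return position + DT * command

def inverse_model(position: float, target: float) -> float:
    """Inverse model: compute the command that reaches the target."""
    return (target - position) / DT

cmd = inverse_model(0.0, 0.5)        # which command reaches 0.5?
predicted = forward_model(0.0, cmd)  # forward model verifies the plan
print(cmd, predicted)                # ≈ 5.0 and ≈ 0.5
```

Chaining the two this way, so the agent can check a planned action's outcome before moving, is exactly the role internal models play in the following slides.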
Internal Models (1/3)
Delay in Closed-Loop Systems
Internal Models (2/3)
Plant Model in the Control Loop
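The figures for these two slides are lost. One classical way a plant model in the loop copes with feedback delay (in the spirit of the Smith predictor; the gains and delay length here are hypothetical) is to let the controller act on the model's up-to-date prediction instead of the stale, delayed measurement:

```python
# Sketch of delay compensation with an internal plant model: the controller
# closes its loop around the model's undelayed prediction; the delayed
# sensory measurement remains available for monitoring the model's accuracy.

from collections import deque

GAIN, KP, DELAY = 1.0, 0.5, 5

def plant(state: float, command: float) -> float:
    return state + GAIN * command

state = 0.0         # real plant state
model_state = 0.0   # internal model state, available without delay
pipeline = deque([0.0] * DELAY)   # sensory delay line

for _ in range(40):
    command = KP * (10.0 - model_state)        # control on the prediction
    state = plant(state, command)              # real plant evolves
    model_state = plant(model_state, command)  # model mirrors the plant
    pipeline.append(state)
    delayed_measurement = pipeline.popleft()   # what the senses report now

print(round(model_state, 3), round(state, 3))  # both settle at 10.0
```

Closing the loop directly on `delayed_measurement` with the same gain would oscillate; the model-based loop does not, which is the benefit the slides attribute to putting the plant model inside the loop.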
Internal Models (3/3)
Benefits:
– Feedback control
– Anomaly detection
– Anticipation
Comparison with other techniques:
– Flexibility
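Of the listed benefits, anomaly detection is the simplest to sketch: compare the internal model's prediction with the actual sensed value, and flag a large prediction error. The model, data, and threshold below are hypothetical:

```python
# Anomaly-detection sketch: a large discrepancy between the internal model's
# prediction and the sensed value signals that the environment no longer
# matches the model.

THRESHOLD = 0.5   # maximum tolerated prediction error (hypothetical)

def predict(speed: float) -> float:
    """Trivial internal model: speed is expected to stay constant."""
    return speed

def detect_anomalies(speeds):
    anomalies = []
    for t in range(1, len(speeds)):
        error = abs(speeds[t] - predict(speeds[t - 1]))
        if error > THRESHOLD:
            anomalies.append(t)   # model violated at this time step
    return anomalies

# A car cruising at ~10 m/s suddenly brakes at t = 4:
print(detect_anomalies([10.0, 10.1, 9.9, 10.0, 6.0, 5.9]))   # [4]
```

The same prediction error also serves anticipation: a model that predicts well can be run ahead of the senses, which is what the simulator's agents exploit below.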
The Simulator (1/4)
Simulator Structure
The Simulator (2/4)
Environment Structure
The Simulator (3/4)
Agent Cognitive Structure
The Simulator (4/4)
Internal Model General States for Skills
Experiments and Results (1/5)
Test of Stop Model
Experiments and Results (2/5)
Test of Car-Following Model
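The experiment's plots are not reproduced here. A car-following controller of the general kind being tested can be sketched as follows; the control law, gains, and initial conditions are hypothetical, not the talk's actual model:

```python
# Car-following sketch: the follower sets its speed to the leader's speed
# plus a proportional correction on the gap error, so it settles at the
# desired following distance while matching the leader's speed.

DESIRED_GAP, KP, DT = 10.0, 0.3, 0.5   # metres, gain, seconds (hypothetical)

def follower_speed(gap: float, leader_speed: float) -> float:
    """Speed command: leader's speed plus a correction on the gap error."""
    return max(0.0, leader_speed + KP * (gap - DESIRED_GAP))

gap, leader_speed, follower = 30.0, 8.0, 0.0
for _ in range(100):
    follower = follower_speed(gap, leader_speed)
    gap += (leader_speed - follower) * DT   # gap shrinks while catching up
print(round(gap, 2), round(follower, 2))    # settles at 10.0 m, 8.0 m/s
```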
Experiments and Results (3/5)
Car Following Zoom
Experiments and Results (4/5)
Car Following Dynamics
Experiments and Results (5/5)
Choosing Between Conflicting Alternatives
The next experiment will include a higher-level Internal Model that deals with potentially conflicting situations. For example, the agent is near its destination but has a slow-moving car in front of it. If the agent overtakes the leading car, it runs the risk of overshooting its destination. This is an example in which a frustrating (negative) experience must be endured in order to achieve a greater good.
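One simple way such conflicts can be arbitrated, offered here only as an illustrative sketch (the options, risks, and utilities are hypothetical stand-ins, not the planned experiment's design), is to compare expected utilities:

```python
# Conflict-arbitration sketch: score each alternative by expected utility
# (benefit minus risk-weighted cost) and pick the best. Here "stay behind"
# wins: the frustration of following a slow leader is endured because
# overtaking risks overshooting the destination.

def expected_utility(time_saved: float, risk: float, cost: float) -> float:
    return time_saved - risk * cost

options = {
    "overtake":    expected_utility(time_saved=5.0, risk=0.6, cost=12.0),
    "stay_behind": expected_utility(time_saved=0.0, risk=0.0, cost=12.0),
}
choice = max(options, key=options.get)
print(choice, options[choice])   # stay_behind 0.0
```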
Related Work
Owen Holland (Holland and Goodman 2003), in the paper "Robots With Internal Models".
Future Work
– Simulator as a Framework (MAS)
– 3D Graphical Interface
– Model Refining
– Model Implementation Technology
– Time Considerations: Real Time and Synchronisms
– Learning Features and Mechanisms
Conclusions
– A promising approach to the study of cognitive processes in general and Artificial Consciousness in particular.
– A simulator architecture that can grow and expand to a multiprocessing environment, thus affording greatly enhanced computing power.
– If the various functions attributed to consciousness can be unequivocally implemented, then either:
a) these functions do not need consciousness to steer behavior, or
b) the machine exhibits some level of consciousness within the domain of the simulated environment.
References
Churchland, P.S. (2002), "Brain-Wise: Studies in Neurophilosophy", The MIT Press, pages 76-90.
Damasio, A. (1994), "Descartes' Error: Emotion, Reason, and the Human Brain". New York: Grosset/Putnam.
Damasio, A. (1999), "The Feeling of What Happens". New York: Harcourt Brace.
Grush, R. (1997), "The Architecture of Representation", in Philosophical Psychology 10:5-23.
Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., and Mazziotta, J.C. (2005), "Grasping the Intentions of Others with One's Own Mirror Neuron System", in PLoS Biology (www.plosbiology.org).
References (Cont.)
Gaschler, K. (2006), "One Person, One Neuron?", Scientific American Mind (February/March), pp. 77-82.
Pouget, A., and T.J. Sejnowski (1997), "Spatial Transformations in the Parietal Cortex Using Basis Functions", Journal of Cognitive Neuroscience 9(2):222-237.
Rizzolatti, G., Fogassi, L., and Gallese, V. (2006), "Mirrors in the Mind", Scientific American (November), pp. 54-61.
Sloman, A. (2004), "GC5: The Architecture of Brain and Mind", in Grand Challenges in Computing – Research, edited by Tony Hoare and Robin Milner, BCS, pp. 21-24.
Wolpert, D.M., Z. Ghahramani, and M.I. Jordan (1995), "An Internal Model for Sensorimotor Integration", in Science 269:1880-1882.