ROBOTICS Dr. Tom Froese

“Why does the burnt kitten avoid the fire?” Ashby (1960)

The Ultrastable System (Homeostat)
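Ashby's ultrastability can be caricatured in a few lines. The sketch below is an illustrative toy, not Ashby's electromechanical device: a feedback loop whose gain stands in for a step-mechanism parameter, reset at random whenever the essential variable leaves its viable bounds. Adaptation emerges by random reconfiguration rather than by design.

```python
import random

random.seed(1)  # any seed works; fixed here for repeatability

BOUND = 3.0  # viability bound on the essential variable

def trial(gain, steps=50):
    """Run the feedback loop x <- gain * x and report whether the
    essential variable x stayed viable and settled toward equilibrium."""
    x = 1.0
    for _ in range(steps):
        x = gain * x
        if abs(x) > BOUND:
            return False  # essential variable escaped its bounds
    return abs(x) < 1.0   # settled only if the loop is contracting

gain = random.uniform(-2.0, 2.0)  # the step-mechanism parameter
resets = 0
while not trial(gain):
    gain = random.uniform(-2.0, 2.0)  # random step-change, then try again
    resets += 1

print(f"settled with gain {gain:+.2f} after {resets} random resets")
```

Ashby's point, echoed in the burnt-kitten question above, is that nothing in the system "knows" which parameters are right; viable behavior is whatever survives the random search.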

Adaptation to inverting goggles Erismann (1930s), Kohler (1950s and 60s). They demonstrated the plasticity of perceptual systems by perturbing them: while subjects wore goggles that inverted the visual field, perception would adapt part by part over time, until the perceived world returned to its "normal" state.

The rise of computer science

"Sense-think-act" cycle LabVIEW Robotics (2014)
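The cycle can be sketched in a few lines. Everything below (the toy one-dimensional world, the function names) is invented for illustration; it is not from LabVIEW or any robot framework.

```python
# Illustrative sense-think-act loop: a toy robot on a number line that
# senses the world, deliberates over an internal model, then acts.

def sense(world):
    """Sensing: read the world into an internal model."""
    return {"offset": world["goal"] - world["robot"]}

def think(model):
    """Thinking: deliberate over the model and choose an action."""
    if model["offset"] > 0:
        return +1
    if model["offset"] < 0:
        return -1
    return 0

def act(world, action):
    """Acting: apply the chosen action to the world."""
    world["robot"] += action
    return world

world = {"robot": 0, "goal": 5}
while world["robot"] != world["goal"]:
    world = act(world, think(sense(world)))
print(world["robot"])  # -> 5
```

The point of the slides that follow is that this serial pipeline becomes brittle once sensing is noisy and the world changes while the robot is still "thinking".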

Challenges for symbolic AI Robustness: lack of noise and fault tolerance; lack of generalizability. If a situation arises that has not been predefined, a traditional symbol-processing model breaks down. Integrated learning: learning mechanisms are ad hoc and imposed on top of non-learning systems. Real-time performance. Sequential processing: programs are sequential and work on a step-by-step basis. Pfeifer (1996)

The Frame Problem Pfeifer (1996)

The Frame Problem The robot R1 has been told that its battery is in a room with a bomb, and that it must move the battery out of the room before the bomb goes off. Both the battery and the bomb are on a wagon. R1 knows that pulling the wagon out of the room will remove the battery from the room. It does so, and once it is outside, the bomb goes off. Poor R1 had not realized that pulling the wagon would bring the bomb out along with the battery. Dennett (1984)

The Frame Problem The designers realized that the robot would have to be made to recognize not just the intended implications of its acts, but also their side effects, by deducing these implications from the descriptions it uses in formulating its plans. They called their next model the robot deducer, R1D1 for short, and ran the same experiment. R1D1 started considering the implications of pulling the wagon out of the room. It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls when the bomb went off.

The Frame Problem The problem was obvious: the robot must be taught the difference between relevant and irrelevant implications. R2D1, the robot-relevant-deducer, was tested next. The designers saw R2D1 sitting outside the room containing the ticking bomb. "Do something!" they yelled at it. "I am," it retorted. "I am busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and..." the bomb went off.

The Symbol Grounding Problem “once we remove the human interpreter from the loop, as in the case of autonomous agents, we have to take into account that the system needs to interact with the environment on its own. Thus, if there are symbols in the system, their meaning must be grounded in the system's own experience in the interaction with the real world. Symbol systems in which symbols only refer to other symbols are not grounded because the connection to the outside world is missing. The symbols only have meaning to a designer or a user, not to the system itself.” Pfeifer (1996); see also Harnad (1990)

Searle’s (1980) “Chinese room” argument

Braitenberg’s (1984) “Vehicles”
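One of Braitenberg's vehicles can be simulated in a few lines. The sketch below is an illustrative toy version of vehicle 2b ("aggression"): two light sensors cross-wired to two motors, so the sensor on the brighter side drives the opposite wheel harder and the vehicle turns toward the light and charges it. All constants and the light field are invented for illustration.

```python
import math

def light(x, y):
    """Illustrative light field: brightest at the origin, fading with distance."""
    return 1.0 / (1.0 + x * x + y * y)

def step(x, y, heading, dt=0.1, base=0.05, gain=1.0, track=0.2):
    """One Euler update of a differential-drive vehicle whose wheels are
    each driven by the light sensor on the *opposite* side (vehicle 2b)."""
    # Two light sensors mounted at the front, angled left and right
    sl = light(x + 0.2 * math.cos(heading + 0.5),
               y + 0.2 * math.sin(heading + 0.5))
    sr = light(x + 0.2 * math.cos(heading - 0.5),
               y + 0.2 * math.sin(heading - 0.5))
    left_wheel = base + gain * sr          # crossed, excitatory connections
    right_wheel = base + gain * sl
    v = 0.5 * (left_wheel + right_wheel)   # forward speed
    w = (right_wheel - left_wheel) / track # turning rate
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + w * dt)

x, y, h = 3.0, 2.0, math.pi  # start away from the light at the origin
closest = math.hypot(x, y)
for _ in range(2000):
    x, y, h = step(x, y, h)
    closest = min(closest, math.hypot(x, y))
print(f"closest approach to the light: {closest:.2f}")
```

No internal model or plan is involved: the light-seeking "behavior" lives entirely in the wiring plus the environment, which is exactly Braitenberg's point.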

Brooks' (1991) "Creatures": Herbert, Genghis, Allen. "The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough." Brooks (1990)

Brooks' (1991) "Creatures"

Behavior-based robotics LabVIEW Robotics (2014) Brooks’ “subsumption architecture”
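The architecture's key move, layering, can be sketched minimally. The code below is an illustrative caricature, not Brooks's original implementation: each behavior is an independent module, and a higher layer subsumes (overrides) the layers below only when it has something to say.

```python
def wander(sensors):
    """Lowest layer: always proposes driving forward."""
    return "forward"

def avoid(sensors):
    """Higher layer: fires only when an obstacle is close."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None  # no opinion -> defer to the layers below

# Layers from highest to lowest priority; avoid subsumes wander
LAYERS = [avoid, wander]

def act(sensors):
    """Emit the command of the highest layer that has an opinion."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
    return "stop"

print(act({"obstacle_distance": 2.0}))  # -> forward
print(act({"obstacle_distance": 0.2}))  # -> turn_left
```

There is no central model or planner here: the lower layers keep working if the higher ones are removed, which is how Brooks's robots were built up incrementally.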

Brooks’ creatures and creatures-no-more Cog Baxter Roomba PackBot

Pfeifer’s (1996b) “Fungus Eaters”

Pfeifer’s (1996b) design principles

Humanoid robot walking

Asimo takes a nasty fall down the stairs

Passive dynamic walking Collins et al. (2001) built a passive-dynamic walking robot based on the ideas of McGeer, constructed from metal rods, springs, and weights. The robot could walk down an inclined plank without power, sensors, or a control system. Given a small push, it could also walk efficiently on a flat surface. McGeer had previously noticed that adding knees made passive walking more stable for bipedal machines.
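The physics can be illustrated with the rimless wheel, the standard textbook caricature of passive dynamic walking (not Collins's actual robot): a wheel with spokes but no rim rolls down a slope, gaining energy from gravity during each swing and losing some at each spoke impact. The return map below is the standard one for this model; the parameter values are invented for illustration.

```python
import math

g, l = 9.81, 1.0          # gravity and spoke (leg) length
alpha = math.radians(15)  # half the angle between adjacent spokes
gamma = math.radians(4)   # slope angle

def stride(omega):
    """One stride of the rimless wheel: the stance spoke swings like an
    inverted pendulum (gaining energy from the descent), then the next
    spoke hits the ground in an inelastic impact (losing energy)."""
    # Energy gained while the hub falls by 2*l*sin(alpha)*sin(gamma)
    omega_sq = omega**2 + (4 * g / l) * math.sin(alpha) * math.sin(gamma)
    # Angular momentum about the new contact point is conserved at impact
    return math.sqrt(omega_sq) * math.cos(2 * alpha)

omega = 0.5  # initial angular speed, rad/s
for _ in range(50):
    omega = stride(omega)
print(round(omega, 3))  # -> 1.458, the steady rolling speed
```

At the stable fixed point the energy gained from the slope exactly balances the impact losses, so the wheel settles into a steady gait with no control at all; knees and three-dimensional balance are what Collins et al. added on top of this principle.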

Passive dynamic walking Collins et al. (2001)

New cognitive science (4E)

References
Ashby, W. R. (1960). Design for a Brain: The Origin of Adaptive Behaviour (2nd ed.). London: Chapman & Hall.
Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.
Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3).
Collins, S. H., Wisse, M., & Ruina, A. (2001). A three-dimensional passive-dynamic walking robot with two legs and knees. International Journal of Robotics Research, 20(7).
Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, Machines and Evolution: Philosophical Studies. Cambridge: Cambridge University Press.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42.

References
Pfeifer, R. (1996). Symbols, patterns, and behaviour: Towards a new understanding of intelligence. In Proceedings of the 10th Annual Conference of the Japanese Society for Artificial Intelligence (pp. 1-15). Tokyo: JSAI.
Pfeifer, R. (1996b). Building "fungus eaters": Design principles of autonomous agents. In P. Maes, M. J. Matarić, J.-A. Meyer, J. Pollack & S. W. Wilson (Eds.), From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 3-12). Cambridge, MA: MIT Press.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3).

Homework Please read the whole article if possible: Di Paolo, E. A. (2015). El enactivismo y la naturalización de la mente. In D. Pérez Chico & M. G. Bedia (Eds.), Nueva Ciencia Cognitiva: Hacia una Teoría Integral de la Mente (in press). Zaragoza: PUZ Optional: van Gelder, T. & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In: R. F. Port & T. van Gelder (eds.), Mind as Motion: Explorations in the Dynamics of Cognition (pp. 1-43). Cambridge, MA: MIT Press