AI History, Philosophical Foundations Part 2

Some highlights from the early history of AI
Gödel's theorem: 1930
Turing machines: 1936
McCulloch and Pitts neurons: 1943 (introduced the field of "neural networks")
Von Neumann self-reproducing automaton: 1940s

Dartmouth workshop: 1956 (2006 is the 50th anniversary)
1950s:
– Lisp (McCarthy)
– Logic Theorist (Newell and Simon)
– General Problem Solver (Newell and Simon)
– Learning checkers player (Samuel)
– Geometry theorem prover (Gelernter)
– Perceptrons (Rosenblatt)
Minsky and Papert Perceptrons book: 1969

"In from three to eight years, we'll have a machine with the general intelligence of an average human being.“ Marvin Minsky to Life magazine, 1970

Means-Ends Analysis ("General Problem Solver")
1. Compare the current state with the goal state; find a difference between them.
2. Find in memory an operator that experience has taught reduces differences of this kind.
3. Apply the operator to change the state.
4. If state = goal state, return "success"; else go to 1.
(See transparencies)
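The loop above is easy to sketch in code. Below is a minimal, hypothetical Python rendering of means-ends analysis: the set-based states, the operator format, and the toy "drive to work" operator are illustrative assumptions, not Newell and Simon's original GPS representation.

```python
# Minimal sketch of means-ends analysis (hypothetical representation).

def difference(state, goal):
    """Return the set of goal conditions not yet satisfied."""
    return goal - state

def means_ends_analysis(state, goal, operators, max_steps=100):
    """Repeatedly pick an operator that reduces the current difference
    and apply it, until the goal state is reached."""
    for _ in range(max_steps):
        diff = difference(state, goal)
        if not diff:
            return "success", state
        # Operators whose effects reduce the difference and whose
        # preconditions hold in the current state.
        applicable = [op for op in operators
                      if op["adds"] & diff and op["needs"] <= state]
        if not applicable:
            return "failure", state
        op = applicable[0]
        state = (state - op["deletes"]) | op["adds"]   # apply the operator
    return "failure", state

# Toy example: get from "at home" to "at work".
operators = [
    {"name": "drive", "needs": {"at home", "have car"},
     "adds": {"at work"}, "deletes": {"at home"}},
]
print(means_ends_analysis({"at home", "have car"}, {"at work"}, operators))
```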

Newell and Simon: Physical Symbol System hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."

Quote from Searle: “I thought the whole idea of strong AI is that we don’t need to know how the brain operates to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardware: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology.”

Blocks World (used by SHRDLU)

The dialog that was used as a SHRDLU demo:
Person: PICK UP A BIG RED BLOCK.
Computer: OK. (does it)
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK. (does it)
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
Person: WHAT IS THE PYRAMID SUPPORTED BY?
Computer: THE BOX.

Person: HOW MANY BLOCKS ARE NOT IN THE BOX?
Computer: FOUR OF THEM.
Person: IS AT LEAST ONE OF THEM NARROWER THAN THE ONE WHICH I TOLD YOU TO PICK UP?
Computer: YES, THE RED CUBE.
Person: IS IT SUPPORTED?
Computer: YES, BY THE TABLE.
Person: CAN THE TABLE PICK UP BLOCKS?
Computer: NO.
Person: CAN A PYRAMID BE SUPPORTED BY A BLOCK?
Computer: YES.
Person: CAN A PYRAMID SUPPORT A PYRAMID?
Computer: I DON'T KNOW.
Person: STACK UP TWO PYRAMIDS.
Computer: I CAN'T.

Person: WILL YOU PLEASE STACK UP BOTH OF THE RED BLOCKS AND EITHER A GREEN CUBE OR A PYRAMID?
Computer: OK. (does it)
Person: WHICH CUBE IS SITTING ON THE TABLE?
Computer: THE LARGE GREEN ONE WHICH SUPPORTS THE RED PYRAMID.
Person: IS THERE A LARGE BLOCK BEHIND A PYRAMID?
Computer: YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE, AND THE BLUE ONE.
Person: PUT A SMALL ONE ONTO THE GREEN CUBE WHICH SUPPORTS A PYRAMID.
Computer: OK. (does it)

Blocks world (Winston)
Input was a line drawing of a blocks-world structure.
The program produced a semantic-net representation of the structural description of the object.
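As a rough illustration of what such a structural description might look like, here is a tiny semantic net for a blocks-world arch encoded as a Python dictionary; the node names and relation labels are hypothetical, not Winston's actual representation.

```python
# Hypothetical semantic net for a blocks-world arch:
# nodes plus labeled (subject, relation, object) edges.
arch = {
    "nodes": ["arch", "post-1", "post-2", "lintel"],
    "edges": [
        ("arch", "has-part", "post-1"),
        ("arch", "has-part", "post-2"),
        ("arch", "has-part", "lintel"),
        ("lintel", "supported-by", "post-1"),
        ("lintel", "supported-by", "post-2"),
        ("post-1", "does-not-touch", "post-2"),
    ],
}

# Query the net: what supports the lintel?
supports = [obj for (subj, rel, obj) in arch["edges"]
            if subj == "lintel" and rel == "supported-by"]
print(supports)  # ['post-1', 'post-2']
```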

Biological neuron vs. artificial neuron (figure)

A two-layer neural network
Input layer (activations represent the feature vector for one training example)
Hidden layer ("internal representation")
Output layer (activation represents the classification)
Weighted connections between the layers
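A minimal sketch of the forward pass through such a two-layer network, in Python with NumPy; the layer sizes, random weights, and sigmoid activation below are illustrative assumptions rather than anything specified on the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 4, 3, 2
W_hidden = rng.normal(size=(n_hidden, n_inputs))    # input -> hidden weights
W_output = rng.normal(size=(n_outputs, n_hidden))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate one training example through the weighted connections."""
    hidden = sigmoid(W_hidden @ x)       # hidden-layer activations ("internal representation")
    output = sigmoid(W_output @ hidden)  # output-layer activations, read as the classification
    return output

x = np.array([0.2, 0.7, 0.1, 0.9])  # feature vector for one training example
print(forward(x))                   # output activations
```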

ALVINN (Pomerleau, 1993)
ALVINN learns to drive an autonomous vehicle at normal speeds on public highways (!)
Input: 30 x 32 grid of pixel intensities from a camera

Each output unit corresponds to a particular steering direction; the most highly activated unit gives the direction to steer.
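Reading off the steering decision amounts to taking the most activated output unit, as in this small hypothetical sketch; the direction labels and activation values are made up for illustration.

```python
import numpy as np

# One output unit per steering direction; the most activated unit wins.
steering_directions = ["hard left", "left", "straight", "right", "hard right"]
output_activations = np.array([0.05, 0.10, 0.72, 0.40, 0.08])  # example network output

best = int(np.argmax(output_activations))
print("steer:", steering_directions[best])  # steer: straight
```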