Open only for Humans; Droids and Robots should go for CSE 462 next door ;-)
General Information
Instructor: Subbarao Kambhampati (Rao)
–Office hours: after class, T/Th 3-4pm
TA: Lei Tang
–Office hours: TBD (will be twice weekly)
–Additional help (on request/one-on-one) by J. Benton, G. Wolfe, Will Cushing, Sungwook Yoon
Send mail to tutors-cse471-f06@enws209.eas.asu.edu
Course Homepage: http://rakaposhi.eas.asu.edu/cse471
Grading etc. (subject to minor changes)
–Projects/Homeworks/Participation (~55%)
Projects: approximately 4; the first project is already up, due in 2 weeks
Expected background: competence in Lisp programming. Why Lisp? (Because!)
Homeworks: assigned piecemeal (socket system)
Participation: attendance and attentiveness in class are mandatory; do ask questions
–Midterm & final (~45%)
Grade Anxiety
–All letter grades will be awarded: A+, A, B+, B, B-, C+, C, D, etc.
–No pre-set grade thresholds
–CSE471 and CSE598 students will have the same assignments/tests etc.; during letter-grade assignment, however, each group will be compared only to its own group
–The class is almost evenly split between CSE471 and CSE598 (grad) students
Honor Code
Unless explicitly stated otherwise, all assignments are:
–Strictly individual effort
–You are forbidden from trawling the web for answers/code etc.
Any infraction will be dealt with in the severest terms allowed.
Life with a homepage..
–I will not be giving any handouts; all class-related material will be accessible from the web page
–Homeworks may be specified incrementally (one problem at a time)
–The slides used in lecture will be available on the class page
I reserve the right to modify slides right up to class time
When printing slides, avoid printing the hidden slides
1946: ENIAC heralds the dawn of Computing
1950: Turing asks the question…
"I propose to consider the question: 'Can machines think?'"
--Alan Turing, 1950
1956: A new field is born
"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire."
--Dartmouth AI Project Proposal; J. McCarthy et al.; Aug. 31, 1955
1996: EQP proves that Robbins algebras are all Boolean
"[An Argonne lab program] has come up with a major mathematical proof that would have been called creative if a human had thought of it."
--New York Times, December 1996
1997: HAL 9000 becomes operational in fictional Urbana, Illinois
"…by now, every intelligent person knew that H-A-L is derived from Heuristic ALgorithmic"
--Dr. Chandra, 2010: Odyssey Two
1997: Deep Blue ends human supremacy in chess
"I could feel human-level intelligence across the room."
--Garry Kasparov, World Chess Champion (human)
vs. "In a few years, even a single victory in a long series of games would be the triumph of human genius."
1999: Remote Agent takes Deep Space 1 on a galactic ride
For two days in May 1999, an AI program called Remote Agent autonomously ran Deep Space 1 (some 60,000,000 miles from Earth).
2002: Computers start passing Advanced Placement tests
…a project funded by (Microsoft co-founder) Paul Allen attempts to design a "Digital Aristotle". Its first results involve programs that can pass the High School Advanced Placement exam in Chemistry…
2005: Cars drive themselves
Stanley and three other cars drive themselves over a 132-mile mountain road.
2005: Robots play soccer (without headbutting!)
2005 Robot Soccer: Humanoid league
2006: AI Celebrates its Golden Jubilee…
2006: AI Celebrates its Golden Jubilee… …and invites you along.
Welcome to CSE471/598, 50th Anniversary Edition (tons of bonus material)!
Course Overview
What is AI –Intelligent Agents
Search (Problem-Solving Agents)
–Single-agent search [Project 1]: Markov Decision Processes; Constraint Satisfaction Problems
–Adversarial (multi-agent) search
Logical Reasoning [Project 2]
Reasoning with uncertainty
Planning [Project 3]
Learning [Project 4]
Mechanical flight became possible only when people decided to stop emulating birds… But what if we are writing intelligent agents that interact with humans? (The COG project)
What AI can do is as important as what it can't yet do… (the CAPTCHA project)
An arms race to defeat CAPTCHAs… (using unwitting masses)
–Start opening an email account at Yahoo
–Clip the CAPTCHA test
–Show it to a human trying to get into another site (usually a site with pretty pictures of persons of the apposite* sex)
–Transfer their answer to Yahoo
*Note: apposite, not opposite. This course is nothing if not open-minded.
Do we want a machine that beats humans in chess, or a machine that thinks like humans while beating humans in chess? Deep Blue supposedly DOESN'T think like humans…
It can be argued that all the faculties needed to pass the Turing Test are also needed to act rationally and improve the success ratio…
–Playing an (entertaining) game of soccer
–Solving NYT crossword puzzles at close to expert level
–Navigating in deep space
–Learning patterns in databases (data mining…)
–Supporting supply-chain management decisions at Fortune 500 companies
–Bringing "Semantics" to the web
Architectures for Intelligent Agents
Wherein we discuss why we need representation, reasoning, and learning
8/24 (Grigory Perelman; Pluto in mourning..)
TA office hours etc.
TA: Lei Tang
Office hours:
–Regular: M/W 3:30-4:30, BYENG 214
–(Exception tomorrow, Friday: 10-12, BY561AC)
Survey Sheet Summary
Reasons for taking the course:
–Most said "sounded interesting" or "will be of use in our research" (examples include AI & biology; AI & games)
Topics suggested:
–Neural networks (perennial favorite)
–Machine learning
–Multi-agent (systems, learning)
–Perception/Robotics
Recitation session:
–19 said they would be very interested; 21 said they would have some interest (out of a total of 50)
Other questions?
PEAS (Performance, Environment, Actuators, Sensors)
Course Overview
What is AI –Intelligent Agents
Search (Problem-Solving Agents)
–Single-agent search [Project 1]: Markov Decision Processes; Constraint Satisfaction Problems
–Adversarial (multi-agent) search
Logical Reasoning [Project 2]
Reasoning with uncertainty
Planning [Project 3]
Learning [Project 4]
We will explain the role of these topics in the context of designing intelligent agents.
A Unified Brand-name-Free Introduction to Planning (Subbarao Kambhampati)
An agent in an environment: What action next? The $$$$$$ question.
(Example: a comparison-shopping agent)
–Percepts: partial contents of sources as found by Get
–Actions: Get, Post, Buy, ..
–Goals: cheapest price on specific goods
–Environment: Internet (congestion, traffic, multiple sources)
"history" = {s0, s1, s2, … sn, …}
Performance = f(history)
Expected performance = E[f(history)]
Rational != intentionally avoiding sensing and prior knowledge
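The definitions above can be sketched in code. This is a minimal illustration, not from the lecture: it assumes each state in a history is summarized by a numeric reward, that f simply sums them, and it estimates E[f(history)] by averaging over sampled histories.

```python
import random

def performance(history):
    # Assumption: f(history) is the sum of per-state rewards.
    return sum(history)

def expected_performance(sample_history, n=1000):
    # Monte Carlo estimate of E[f(history)] over n sampled histories.
    return sum(performance(sample_history()) for _ in range(n)) / n

# Toy environment: a 10-step history where each state yields reward 0 or 1.
def sample_history():
    return [random.choice([0, 1]) for _ in range(10)]
```

With the toy environment above, the expected performance is 5 (ten steps with mean reward 0.5 each), and the Monte Carlo estimate hovers around that value.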
The agent-environment loop: perception in, action out. "What action next?" remains the $$$$$$ question.
Dimensions of the problem:
–Goals: static vs. dynamic; full vs. partial satisfaction
–Environment: observable vs. partially observable
–Perception: perfect vs. imperfect
–Actions: deterministic vs. stochastic; instantaneous vs. durative
Environment properties:
–Accessible: the agent can "sense" its environment. Best: fully accessible; worst: inaccessible; typical: partially accessible
–Deterministic: actions have predictable effects. Best: deterministic; worst: non-deterministic; typical: stochastic
–Static: the world evolves only because of the agent's actions. Best: static; worst: dynamic; typical: quasi-static
–Episodic: the performance of the agent is determined episodically. Best: episodic; worst: non-episodic
–Discrete: the environment evolves through a discrete set of states. Best: discrete; worst: continuous; typical: hybrid
–#Agents: number of agents in the environment; are they competing or cooperating?
Booo hooo
Additional ideas/points covered impromptu:
–Complexity of behavior is a product of both the agent and the environment (Simon's ant, in The Sciences of the Artificial)
–The importance of modeling the other agents in the environment: one reason why our brains are so large, evolutionarily speaking, may be that we needed them to outwit not other animals but our own enemies
–The cost of deliberation and modeling: an agent that minutely models the intentions of the other agents in the environment will not necessarily always win
–Bias in learning: often the evidence is consistent with very many hypotheses; a small agent, to survive, has to use strong biases in learning (the "gavagai" example and the whole-object hypothesis)
Review
(Model-based reflex agents)
This one already assumes that the "sensors → features" mapping has been done! Even basic survival needs state information.
Representation & Reasoning (aka model-based reflex agents)
Explicit models of the environment:
–Black-box models (child function)
–Logical models
–Probabilistic models
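A model-based reflex agent can be sketched as follows. The skeleton is illustrative (the class and parameter names are not from the lecture): the agent maintains internal state, folds each new percept into it using a model of how the world evolves, then applies condition-action rules to the updated state.

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules, default="noop"):
        self.state = None
        self.update_state = update_state  # model: (state, percept) -> new state
        self.rules = rules                # list of (condition, action) pairs
        self.default = default

    def act(self, percept):
        # Fold the new percept into the internal world model,
        # then fire the first rule whose condition matches.
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return self.default

# Toy example: a thermostat whose "model" is just the latest reading.
thermostat = ModelBasedReflexAgent(
    update_state=lambda state, percept: percept,
    rules=[(lambda s: s < 18, "heat"), (lambda s: s > 24, "cool")],
)
```

In a richer agent the update function would be one of the explicit models above: a black-box child function, a logical theory, or a probabilistic model.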
It is not always obvious what action to do now given a set of goals. You woke up in the morning; you want to attend a class. What should your action be?
–Search: find a path from the current state to the goal state; execute the first op
–Planning: does the same for logical (non-black-box) state models
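The "search, then execute the first op" idea can be sketched with breadth-first search over a black-box successor function. The function names here are assumptions for illustration, not the course's project API.

```python
from collections import deque

def first_action(start, goal, successors):
    # BFS from start to goal; return only the first operator on a
    # shortest action sequence (the op the agent should execute now).
    frontier = deque([(start, [])])  # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions[0] if actions else None
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # goal unreachable

# Toy state space: integers, with "inc" and "dec" operators.
def successors(n):
    return [("inc", n + 1), ("dec", n - 1)]
```

For example, from state 0 with goal 3 the agent should execute "inc" now, replanning (or following the cached path) afterwards.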
–Decision-Theoretic Planning
–Sequential Decision Problems
"…certain inalienable rights: life, liberty, and the pursuit of…" Money? Daytime TV? Happiness (utility)?
Discounting
The decision-theoretic agent often needs to assess the utility of sequences of states (also called behaviors).
–One technical problem: how do we keep the utility of an infinite sequence finite?
–A closely related real problem: how do we combine the utility of a future state with that of a current state? (How does $15 tomorrow compare with $5000 when you retire?)
–Both are handled by choosing a discount factor r (0 < r < 1) and multiplying the utility of the nth state by r^n:
r^0 U(s_0) + r^1 U(s_1) + … + r^n U(s_n) + …
This is guaranteed to converge, since the power series converges for 0 < r < 1.
–r is set by each agent based on how it thinks future rewards stack up against current ones. An agent that expects to live longer may use a larger r than one that expects to live shorter…
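The discounted sum is a one-liner; this sketch just evaluates a finite prefix of the series (the function name is illustrative).

```python
def discounted_utility(utilities, r):
    # Sum of r**n * U(s_n) over the given prefix of the state sequence,
    # with discount factor 0 < r < 1.
    assert 0 < r < 1, "discounting requires 0 < r < 1"
    return sum((r ** n) * u for n, u in enumerate(utilities))
```

With a constant utility U per state, the infinite sum converges to U / (1 - r): for U = 1 and r = 0.5 a long prefix approaches 2, which is how the infinite sequence stays finite.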
How the course topics stack up…
Representation mechanisms: logic (propositional; first-order); probabilistic logic
Learning: the models
Search: blind, informed
Planning
Inference: logical resolution; Bayesian inference
Learning
Dimensions:
–What can be learned? Any of the boxes representing the agent's knowledge: action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models
–What feedback is available? Supervised, unsupervised, "reinforcement" learning; the credit-assignment problem
–What prior knowledge is available? "Tabula rasa" (the agent's head is a blank slate) or pre-existing knowledge