Artificial Intelligence. ICS 61, February 2015. Dan Frost, UC Irvine, frost@uci.edu
Defining Artificial Intelligence. A computer performing tasks that are normally thought to require human intelligence. Getting a computer to do in real life what computers do in the movies. In games: NPCs that seem to be avatars controlled by humans.
Approaches to A.I. This model, from Russell and Norvig, classifies systems along two dimensions: thinking vs. acting, and human vs. rational.
Systems that think like humans – “cognitive science”; studied at the neuron level, the neuroanatomical level, and the mind level.
Systems that act like humans – understanding language, game AI and controlling NPCs, controlling the body; the Turing Test.
Systems that think rationally – Aristotle, syllogisms, logic, the “laws of thought”.
Systems that act rationally – the business approach; results oriented.
Tour of A.I. applications. Natural language processing – translation, summarization, IR, “smart search”. Game playing – chess, poker, Jeopardy! Of interest to businesses – machine learning, scheduling. Artificial Neural Networks.
Natural Language Processing. Uniquely human. Commercially valuable. A traditional “big” AI research area. The upper-left approach (thinking like humans).
Parse trees
The Vauquois Triangle – levels of machine translation, from surface text (“Les chats mangent du poisson.” / “Cats eat fish.”) up through syntactic structure (S, NP, VP, V) to semantic representations such as eat(cat-species, fish-species, typically) and prob(eat(cat, fish), 0.9).
Parsing challenges. Ambiguous sentences: “I saw a man with my telescope.” “Red tape holds up new bridge.” “Kids make nutritious snacks.”
On the evidence that what we will and won’t say and what we will and won’t accept can be characterized by rules, it has been argued that, in some sense, we “know” the rules of our language.
Statistical Approach to NLP. The “Google” way – use lots of data and lots of computing power. Use large corpora of translated texts (e.g. from the UN).
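As a toy illustration of the statistical idea (not Google’s actual pipeline), the sketch below counts how often words co-occur across a tiny, invented sentence-aligned English/French corpus; frequently co-occurring words become candidate translations. The corpus, variable names, and tie-breaking are all assumptions for illustration.

```python
# Toy statistical translation sketch: count co-occurrences of English and
# French words across aligned sentence pairs. The corpus here is invented
# and far too small to be meaningful; real systems use millions of pairs.
from collections import Counter, defaultdict

parallel_corpus = [
    ("cats eat fish", "les chats mangent du poisson"),
    ("dogs eat meat", "les chiens mangent de la viande"),
    ("cats like fish", "les chats aiment le poisson"),
]

cooccur = defaultdict(Counter)
for english, french in parallel_corpus:
    for e_word in english.split():
        for f_word in french.split():
            cooccur[e_word][f_word] += 1

# French words most often seen alongside "eat" (candidate translations).
print(cooccur["eat"].most_common(3))
```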
AI for playing games. Adversarial. Controlled environment with robust interactions. Tic tac toe, chess – complete knowledge. Poker – incomplete knowledge, probabilities. Jeopardy! – NLP, databases, culture. Video games – the Turing Test revisited.
Tic tac toe and minimax
Chess and minimax. Minimax game trees are too big: 10-40 branches at each level, 10-40 moves to checkmate. Choose promising branches heuristically. Evaluate mid-game board positions. Use libraries of openings. Specialized end-game algorithms. Deep Blue beat Garry Kasparov in 1997.
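For the tic tac toe case, where the full tree is small enough to search, a minimal self-contained minimax might look like the sketch below. The board encoding and function names are my own; a chess program would add the depth limits, heuristic evaluation, opening books, and pruning mentioned above.

```python
# Minimal minimax for tic tac toe: the board is a 9-character string of
# 'X', 'O', or ' ', and 'X' is the maximizing player. Complete search,
# no depth limit or heuristic, because the tree is tiny.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, to_move):
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0                                   # draw
    other = 'O' if to_move == 'X' else 'X'
    scores = [minimax(board[:i] + to_move + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if to_move == 'X' else min(scores)

def best_move(board, to_move='X'):
    other = 'O' if to_move == 'X' else 'X'
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    value = lambda i: minimax(board[:i] + to_move + board[i + 1:], other)
    return max(moves, key=value) if to_move == 'X' else min(moves, key=value)

print(best_move("X O  O X "))   # best square (0-8) for X in this mid-game position
```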
Poker AIs Bluffing – theory of mind Betting, raising, calling – making decisions based on expected utility (probability of results and payoffs) Decision making using Monte Carlo method
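A hedged sketch of the expected-utility idea: estimate the probability of winning by Monte Carlo sampling, then compare the expected value of calling with folding. The simulation is a stand-in (an assumed, fixed win rate) rather than real poker rules, and the pot/cost numbers are invented.

```python
# Monte Carlo estimate of whether calling a bet has positive expected value.
# simulate_hand() is a toy stand-in: a real poker AI would deal the unseen
# cards from the remaining deck and run a hand evaluator at showdown.
import random

def simulate_hand():
    return random.random() < 0.42          # assumed (hypothetical) win probability

def expected_value_of_call(pot, cost_to_call, trials=10_000):
    wins = sum(simulate_hand() for _ in range(trials))
    p_win = wins / trials
    # Simplified EV: win the pot with probability p_win, lose the call otherwise.
    return p_win * pot - (1 - p_win) * cost_to_call

ev = expected_value_of_call(pot=100, cost_to_call=20)
print(f"EV of calling: {ev:.1f} -> {'call' if ev > 0 else 'fold'}")
```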
Jeopardy! and IBM’s Watson
How Watson works. Picks out keywords in the clue. Searches Wikipedia, dictionaries, news articles, and literary works – 200 million pages, all in memory. Runs multiple algorithms simultaneously, looking for related phrases. Determines the best response and its confidence level.
AI in video games – Madden
AI in video games – Halo
AI in video games – Façade
AI in video games. NPCs (non-player characters) can have goals, plans, and emotions. NPCs use pathfinding. NPCs respond to sounds, lights, and signals. NPCs coordinate with each other; squad tactics. Some natural language processing.
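As an illustration of the pathfinding point, here is a minimal sketch of grid pathfinding using breadth-first search; production games typically use A* over a navigation mesh, and the level layout below is invented.

```python
# Breadth-first search pathfinding on a small grid; '#' marks blocked cells.
from collections import deque

def find_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                                # goal unreachable

level = ["....",
         ".##.",
         "...."]
print(find_path(level, (0, 0), (2, 3)))        # list of (row, col) steps
```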
Commercial applications of AI. Machine learning – Mitchell: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” Learning often means finding/creating categories. Scheduling – often offline, with online updates.
Machine Learning. Induction: learn from observations. Learn a function f from a set of input-output pairs. How best to represent a function internally? (Inputs x1, x2, x3, x4; output f(x1, x2, x3, x4).)
Some more classified data to learn from – should we play golf today?
Decision Trees
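As a hedged illustration, a decision tree learned from the classic “play golf” data can be read as nested conditionals. The tree below is the one usually quoted for that textbook dataset (outlook, humidity, windy); the humidity threshold of 75 comes from the numeric version of the data and is an assumption here.

```python
# The classic "play golf" decision tree written as code: each if-branch is an
# internal node testing one attribute, each return is a leaf.
def play_golf(outlook, humidity, windy):
    if outlook == "sunny":
        return humidity <= 75       # sunny: play only if humidity is low enough
    if outlook == "overcast":
        return True                 # overcast: always play
    if outlook == "rainy":
        return not windy            # rainy: play only if it is not windy
    raise ValueError("unknown outlook: " + str(outlook))

print(play_golf("sunny", humidity=70, windy=False))   # True
print(play_golf("rainy", humidity=80, windy=True))    # False
```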
Scheduling / timetabling
Scheduling / timetabling. Courses, nurses, airplanes, factories. Multiple constraints and a complex optimization function. Offline – create the schedule in advance. Online – revise the schedule as conditions change. Local search often works well: start with an arbitrary schedule; make small (local) modifications, choosing the best; repeat, or stop if no local modification is better.
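A minimal sketch of that local-search loop on a toy timetabling problem follows; the courses, slots, conflict pairs, and cost function are all invented for illustration.

```python
# Local search for a toy timetable: assign each course a time slot so that
# conflicting courses (those sharing students) avoid the same slot.
import random

COURSES = ["ics61", "math2b", "writing39", "physics7"]
SLOTS = [9, 11, 13, 15]
CONFLICTS = {("ics61", "math2b"), ("math2b", "physics7")}   # pairs sharing students

def cost(schedule):
    # Number of conflicting pairs scheduled at the same time (lower is better).
    return sum(1 for a, b in CONFLICTS if schedule[a] == schedule[b])

def local_search(max_steps=1000):
    schedule = {c: random.choice(SLOTS) for c in COURSES}    # arbitrary start
    for _ in range(max_steps):
        # Neighbors: move any one course to any slot; keep the best neighbor.
        neighbors = [dict(schedule, **{c: s}) for c in COURSES for s in SLOTS]
        best = min(neighbors, key=cost)
        if cost(best) >= cost(schedule):
            break                        # no local modification is better; stop
        schedule = best
    return schedule

result = local_search()
print(result, "conflicts:", cost(result))
```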
Local search
Recap – Approaches to A.I. Thinking like humans: “cognitive science”; the neuron, neuroanatomical, and mind levels. Acting like humans: understanding language, game AI and controlling NPCs, controlling the body, the Turing Test. Thinking rationally: Aristotle, syllogisms, logic, the “laws of thought”. Acting rationally: the business approach, results oriented.
Recap – many A.I. applications. Natural language processing – translation, summarization, IR, “smart search”. Game playing – chess, poker, Jeopardy!, video games – and playing in games (NPCs). Machine learning, scheduling.
(Artificial) Neural Networks. Biological inspiration. Synthetic networks: non-von Neumann. Machine learning. Perceptrons (some math). Perceptron learning. Varieties of artificial neural networks.
Brain – Neurons. 10 billion neurons (in humans). Each one has an electro-chemical state.
Brain – Network of Neurons. Each neuron has on average 7,000 synaptic connections with other neurons. A neuron “fires” to communicate with neighbors.
Modeling the Neural Network
von Neumann Architecture. Separation of processor and memory. One instruction executed at a time.
Animal Neural Architecture. von Neumann: separate processor and memory; sequential instructions. Birds and bees (and us): each neuron has state and processing; massively parallel, massively interconnected.
The Perceptron. A simple computational model of a single neuron (Frank Rosenblatt, 1957). f(x) = 1 if w · x − b > 0, and 0 otherwise. The entries in w and x are usually real-valued (not limited to 0 and 1).
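A minimal sketch of that function in code, with invented example weights and inputs:

```python
# Perceptron: output 1 if the weighted sum of the inputs exceeds the bias b.
def perceptron(x, w, b):
    activation = sum(wi * xi for wi, xi in zip(w, x)) - b   # w . x - b
    return 1 if activation > 0 else 0

# 0.8*1.0 + (-0.2)*0.5 - 0.3 = 0.4 > 0, so the output is 1.
print(perceptron(x=[1.0, 0.5], w=[0.8, -0.2], b=0.3))
```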
The Perceptron
Perceptrons can be combined to make a network
How to “program” a Perceptron? Programming a Perceptron means determining the values in w. That’s worse than C or Fortran! Back to induction: ideally, we can find w from a set of classified inputs.
Perceptron Learning Rule. Training data (output is 1 if avg(x1, x2) > x3, and 0 otherwise):
  x1    x2    x3   output
  12     9     6        1
  -2     8    15        0
   3  -0.5     4        0
Valid weights: w1 = 0.5, w2 = 0.5, w3 = -1.0, b = 0.
Perceptron function: 1 if 0.5 x1 + 0.5 x2 − x3 − 0 > 0, and 0 otherwise.
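A minimal sketch of the learning rule itself: whenever the perceptron misclassifies a training example, nudge the weights toward the correct answer. The training data are the three rows above; the learning rate and epoch count are arbitrary choices.

```python
# Perceptron learning rule: adjust each weight by (target - prediction) * input.
def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - b > 0 else 0

def train(examples, lr=0.05, epochs=100):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            error = target - predict(x, w, b)              # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b -= lr * error
    return w, b

data = [([12, 9, 6], 1), ([-2, 8, 15], 0), ([3, -0.5, 4], 0)]
w, b = train(data)
print(w, b, [predict(x, w, b) for x, _ in data])   # predictions should match 1, 0, 0
```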
Varieties of Artificial Neural Networks. Neurons that are not Perceptrons. Multiple neurons, often organized in layers.
Feed-forward network
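A minimal sketch of a forward pass through a tiny feed-forward network of perceptron-like units (two inputs, two hidden units, one output). The weights are hand-picked to compute XOR, a function a single perceptron cannot represent; real networks learn their weights, usually with smooth activations and backpropagation.

```python
# Two-layer feed-forward network of threshold units, wired by hand to compute XOR.
def unit(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - b > 0 else 0

def forward(x):
    hidden = [unit(x, [1.0, 1.0], 0.5),     # fires if x1 + x2 > 0.5 (OR-like)
              unit(x, [1.0, 1.0], 1.5)]     # fires if x1 + x2 > 1.5 (AND-like)
    return unit(hidden, [1.0, -1.0], 0.5)   # OR and not AND: XOR

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, forward(x))                    # 0, 1, 1, 0
```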
Recurrent Neural Networks
Hopfield Network
On Learning the Past Tense of English Verbs. Rumelhart and McClelland, 1980s.
Neural Networks. Alluring: biological inspiration; degrade gracefully; handle noisy inputs well; good for classification; model human learning (to some extent); don’t need to be programmed. Limited: hard to understand, impossible to debug; not appropriate for symbolic information processing.
All clear?