CO3301 - Games Development 2 Week 22 Machine Learning Gareth Bellaby

Machine Learning & Games There are some games which "learn" during play, e.g. Creatures, Petz, Black & White. These are generally older games. Quite a few claims have been made in publicity, but with little reality behind them, so I stopped doing anything about machine learning.

Machine Learning & Games However, I've seen various recent instances of ML being used: not "in game", but to train a game, e.g. Kinect Sports. Learning a function from examples of its inputs and outputs is called inductive learning. Functions can be represented by logical sentences, polynomials, belief networks, neural networks, etc.

Learning It's possible to build a computer which learns. Two common methods: Genetic Algorithms and Neural Nets.

Genetic Algorithms Inspiration from genetics, evolution and "survival of the fittest". GA programs don't solve problems by reasoning logically about them. Instead, populations of competing candidate solutions are created. Poor candidates die out, better candidates survive and reproduce by constructing new solutions out of components of their parents, i.e. out of their "genetic material".
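As a concrete sketch of that loop, here is a minimal genetic algorithm in Python. It is purely illustrative: the bit-string genome, the "count the 1-bits" fitness function and all the parameters are assumptions, not anything from a real game.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 30, 50, 0.02

def fitness(genome):
    return sum(genome)  # hypothetical measure: fitter candidates have more 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Poor candidates die out: keep only the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Survivors reproduce, building children out of their "genetic material".
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))  # approaches GENOME_LEN over time
```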

Neural Nets Use a structure based (supposedly) on the brain's neurons. It is a net because many interconnected "neurons" are used. Knowledge is held as a pattern of activation, rather than as rules.

Learning Neural nets were one of the first instances of machine learning. A neural net is not set up according to some understanding of a problem; rather, it learns (or is taught) how to solve the problem itself.

Learning In essence a neural net works by summing multiple inputs according to weightings (which can be adjusted), with output being triggered at a threshold. A basic neural net uses a rule of the form: "Change the weight of the connection between a unit A and a unit B in proportion to the product of their simultaneous activation".

Learning Starting from 0 weights, expose the network to a series of learning trials. Learning is from experience: the trials cause changes in the weights, and thus changes in the patterns of activation. The system cycles through learning trials until it settles down to a rest state.
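A minimal sketch of that procedure in Python, assuming bipolar (+1/-1) activations and the Hebb rule quoted above; the AND-style training trials are an illustrative assumption:

```python
def hebb_train(trials, n_inputs, rate=1.0):
    weights = [0.0] * n_inputs           # start from 0 weights
    for inputs, target in trials:        # one pass of learning trials
        for i in range(n_inputs):
            # Hebb rule: change each weight in proportion to the product
            # of the two units' simultaneous activations.
            weights[i] += rate * inputs[i] * target
    return weights

# Hypothetical trials: bipolar (+1/-1) encoding of logical AND,
# with a constant +1 bias as the last input.
trials = [((+1, +1, +1), +1),
          ((+1, -1, +1), -1),
          ((-1, +1, +1), -1),
          ((-1, -1, +1), -1)]
print(hebb_train(trials, 3))   # -> [2.0, 2.0, -2.0]
```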

Artificial Neuron [Diagram: input signals x1 … xn arrive along connections weighted w1 … wn; the neuron sums them and passes the result through a threshold function to produce the output.] net = Σᵢ xᵢwᵢ, output = f(net). Input signals: xᵢ. Weights: wᵢ. Activation level: Σᵢ xᵢwᵢ. Threshold function: f.
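A sketch of this neuron in Python; the step function and the sample values are assumptions for illustration:

```python
def f(net, threshold=0.0):
    return 1 if net >= threshold else -1        # simple step/threshold function

def neuron(inputs, weights, threshold=0.0):
    net = sum(x * w for x, w in zip(inputs, weights))   # net = sum of x_i * w_i
    return f(net, threshold)

print(neuron([1, 0, 1], [0.5, -0.3, 0.8]))      # net = 1.3, so the neuron fires: 1
```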

Neural Nets Historically, neural nets come in two forms: the basic 2-layer net, which has some important limitations, and the multi-layered net.

2-layer net Advantages: There is a guaranteed learning rule (the Hebb learning rule), i.e. change the weight of the connection between two units in proportion to the product of their simultaneous activation. Disadvantages: There is a strong correlation between input and output units, which leads to the XOR problem: summing the inputs takes the node over its threshold and so it wrongly fires.

McCulloch-Pitts AND neuron [Diagram: inputs x and y, each with weight +1, and a bias input fixed at +1 with weight -2, feed a unit computing x + y - 2.] Three inputs: x, y and the bias, which has a constant value of +1. Weights are +1, +1 and -2, respectively. Threshold: if the sum x + y - 2 is less than 0, return -1; otherwise return 1.

McCulloch-Pitts AND neuron

x   y   x + y - 2   output
1   1       0          1
1   0      -1         -1
0   1      -1         -1
0   0      -2         -1
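The same neuron transcribed into Python (a direct reading of the slide, with the bias folded in as a constant +1 input):

```python
def mp_and(x, y):
    net = (+1) * x + (+1) * y + (-2) * 1   # weights +1, +1, -2; bias input is always +1
    return -1 if net < 0 else 1            # threshold: negative sums return -1

for x in (0, 1):
    for y in (0, 1):
        print(x, y, mp_and(x, y))          # only x=1, y=1 gives 1
```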

Multi-layered net Multi-layered nets overcome the learning limitations of 2-layer nets. The extra layer prevents the net from being simply correlational. A disadvantage is that there is no guaranteed learning rule.

Solution to XOR problem [Diagram: inputs x and y each connect directly to the output unit with weight +1, and to a hidden unit with weight +1; the hidden unit (activation threshold 1.5) connects to the output unit (activation threshold 0.5) with weight -2.]
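The same network transcribed into Python, assuming step activations (a unit outputs 1 when its summed input reaches its threshold, 0 otherwise):

```python
def step(net, threshold):
    return 1 if net >= threshold else 0

def xor_net(x, y):
    hidden = step(x + y, 1.5)              # hidden unit fires only when both inputs are 1
    return step(x + y - 2 * hidden, 0.5)   # the -2 weight lets the hidden unit veto the 1,1 case

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_net(x, y))         # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```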

Connectionism One approach towards knowledge is that taken by symbolic logic (e.g. Turing). This approach contains the suggestion that the architecture or framework which underlies computation, and thought, is irrelevant. Connectionism suggests that architecture does matter and, specifically, that it is useful to use an architecture which is brain-like, i.e. massively connected parallel machines. Note: neural nets can be implemented on single-processor, binary machines.

Knowledge Representation Connectionism also represents knowledge in a different fashion from (for instance) semantic nets or production rules. Knowledge is represented as a pattern of activation. This pattern bears no resemblance to the knowledge being represented. The pattern is distributed across the system as a whole. Learning consists of forming some type of internal representation. Neural nets are something like brains: they are parallel and distributed.

The Symbol Grounding Problem

Symbols Computers are machines that manipulate symbols. There is a philosophical tradition which suggests that symbol manipulation captures all thinking and understanding. It includes such writers as Turing, Russell, and the early Wittgenstein. The "symbolic" model of the mind: the mind is a symbol system and cognition is symbol manipulation.

Turing A Turing Machine (TM) can compute anything which is computable. It uses symbolic logic. A Universal TM can perform as any other TM. If it is possible to work out how cognitive, intelligent processes work, the UTM can be programmed to perform them.

Searle Searle argues machines can never possess 'intentionality'. Most AI programs consist of sets of rules for manipulating symbols: the symbols are arbitrary and never about objects or events. Intentionality is aboutness: "The property of the mind by which it is directed at, about, or 'of' objects and events in the world. Aboutness - in the manner of beliefs, fears, desires, etc", Eliasmith, Dictionary of Philosophy of Mind.

Penrose Penrose: cognitive processes are not computable. He argues that human thought cannot be simulated by any computation, using Gödel's incompleteness theorem.

Symbol Grounding Problem The problem of providing systems with some external, fixed reference. One (simplistic) way to think of this is to distinguish between knowledge and data. Data can be understood, manipulated, used to create new data, etc., but all of these processes can occur even though the data is untrue. The word 'knowledge' can be used for those things we 'know' to be true.

Symbol Grounding Problem Some types of intelligent behavior can be reproduced using sets of formal rules to manipulate symbols (e.g. chess), but this is not obviously true of many of the things we do.   Could a computer be given the equipment to interact with the world in the way that we do, and to learn from it?

Harnad "How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?“ Harnad, S. ,(1990), "The Symbol Grounding Problem."

Harnad Harnad himself proposes connectionism as a possible solution. Neural nets are used to ground symbols (i.e. provide the symbols with a foundation). "Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for." Harnad, S. (1990), "The Symbol Grounding Problem".