Slide 1
Neural Networks
AI – Week 21
Sub-symbolic AI One: Neural Networks
Lee McCluskey, room 3/10
Email: lee@hud.ac.uk
http://scom.hud.ac.uk/scomtlm/cha2555/
Slide 2
Aoccdrnig to rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Neural Networks
Slide 3
Up to now: Symbolic AI
- Knowledge representation is explicit and composite – features of the representation (e.g. objects, relations, ...) map to features of the world.
- Processes are often based on heuristic search, matching, logical reasoning and constraint handling.
- Good for simulating "high-level cognitive" tasks such as reasoning, planning, problem solving, high-level learning, and language and text processing.
[Figure: blocks-world diagram – blocks A and B in "The World" mapped to the representation OnTop(A,B).]
Slide 4
Up to now: Symbolic AI
Benefits:
- AI systems / knowledge bases can be engineered and maintained as in software engineering.
- Behaviour can be predicted and explained, e.g. using logical reasoning.
Problems:
- Reasoning tends to be "brittle" – easily broken by incorrect or approximate data.
- Not so good for simulating low-level (reactive) animal behaviour, where the inputs are noisy or incomplete.
Slide 5
Neural Networks
Neural Networks (NNs) are networks of neurons, for example as found in real (i.e. biological) brains.
Artificial Neurons are crude approximations of the neurons found in brains. They may be physical devices, or purely mathematical constructs.
Artificial Neural Networks (ANNs) are networks of Artificial Neurons, and hence constitute crude approximations to parts of real brains. An ANN is, in effect, a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task.
Benefits:
- Massive parallelism makes them very efficient.
- They can learn and generalise from training data, so there is no need for knowledge engineering or a complex understanding of the problem.
- They are fault tolerant – equivalent to the "graceful degradation" found in biological systems – and noise tolerant, so they can cope with noisy, inaccurate inputs.
Slide 6
Learning in Neural Networks
There are many forms of neural network. Most operate by passing neural 'activations' (processed firing states) through a network of connected neurons.
One of the most powerful features of neural networks is their ability to learn and generalise from a set of training data: they adapt the strengths/weights of the connections between neurons so that the final output activations are correct (e.g. like catching a ball, or learning to balance).
We will consider:
1. Supervised learning (i.e. learning with a teacher)
2. Reinforcement learning (i.e. learning with limited feedback)
Slide 7
BRAINS VS COMPUTERS
1. There are approximately 10 billion neurons in the human cortex, compared with tens of thousands of processors in the most powerful parallel computers.
2. Each biological neuron is connected to several thousand other neurons, similar to the connectivity in powerful parallel computers.
3. The lack of processing units can be compensated for by speed. Typical operating speeds of biological neurons are measured in milliseconds (10⁻³ s), while a silicon chip can operate in nanoseconds (10⁻⁹ s).
4. The human brain is extremely energy efficient, using approximately 10⁻¹⁶ joules per operation per second, whereas the best computers today use around 10⁻⁶ joules per operation per second.
5. Brains have been evolving for tens of millions of years; computers have been evolving for tens of decades.
"My Brain is a Learning Neural Network" – Terminator 2
Slide 8
Very Very Simple Model of an Artificial Neuron (McCulloch and Pitts, 1943)
- A set of synapses (i.e. connections) brings in activations (inputs) from other neurons.
- A processing unit sums the weighted inputs (Σᵢ wᵢxᵢ) and then applies a transfer function, using a "threshold value" to decide whether the neuron "fires". If the sum does not reach the threshold, the output is 0.
- An output line transmits the result to other neurons (the output can be binary or continuous).
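A minimal sketch of this model in Python (the function name and example values are illustrative, not from the slides):

```python
# McCulloch-Pitts neuron sketch: sum the weighted inputs, fire (output 1)
# if the sum reaches the threshold, otherwise output 0.
def mcculloch_pitts(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With equal weights and threshold 2, two inputs behave like Boolean AND:
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # fires: prints 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # does not fire: prints 0
```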
Slide 10
NNs: we don't have to design them – they can learn their weights
Consider the simple neuron model:
1. Supply a set of values for the inputs (x₁ ... xₙ).
2. An output is produced and compared with the known target (correct/desired) output (like a "class" in learning from examples).
3. If the output generated by the network does not match the target output, the weights are adjusted.
4. The process is repeated from step 1 until the correct output is generated.
This is like supervised learning / learning from examples; a sketch of the loop appears below.
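A minimal sketch of this loop in Python, using the classic perceptron learning rule (the slides do not name a specific update rule, so the rule, the function name and the learning rate here are assumptions):

```python
# Perceptron-style training sketch: examples is a list of (inputs, target)
# pairs, where target is 0 or 1.
def train(examples, n_inputs, rate=0.1, epochs=100):
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(epochs):
        all_correct = True
        for inputs, target in examples:             # step 1: supply inputs
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total > threshold else 0  # step 2: compare with target
            error = target - output
            if error != 0:                          # step 3: adjust the weights
                all_correct = False
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
                # the threshold is adjusted too, equivalent to learning a bias
                threshold -= rate * error
        if all_correct:                             # step 4: repeat until correct
            break
    return weights, threshold
```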
Slide 11
Real Example: Pattern Recognition
Pixel grid input, dimension n = 5 × 8 = 40; a single output node indicates one of two classes. What's missing here?
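Purely as an illustration (the grid and its contents below are made up, not from the slide), the 5 × 8 pixel grid becomes a 40-element input vector:

```python
# 8 rows x 5 columns of binary pixels, flattened to n = 40 inputs.
grid = [[0] * 5 for _ in range(8)]
grid[0][2] = 1                      # switch a couple of pixels on
grid[1][1] = 1
inputs = [pixel for row in grid for pixel in row]
assert len(inputs) == 40            # one input line per pixel
# A single output node (0 or 1) then labels the grid as one of two classes.
```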
Slide 12
Simple Example: Boolean Functions
Learn a simple Boolean function from its truth table.
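For instance, the hypothetical train() sketch above can learn Boolean AND (AND is an assumed example; the slide does not say which function it used). AND is linearly separable, so a single neuron suffices:

```python
# Truth table for AND as (inputs, target) pairs.
AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train(AND, n_inputs=2)

# The trained neuron now reproduces the truth table.
for inputs, target in AND:
    total = sum(x * w for x, w in zip(inputs, weights))
    print(inputs, 1 if total > threshold else 0)  # matches each target
```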
Slide 13
Example viewed as a Decision Problem
[Figure: the training examples plotted on x₁ and x₂ axes, with a separating line (decision boundary) between the two classes.]
Slide 14
A one-layer neuron is not very powerful…!
Slide 15
XOR – Linearly Non-separable
[Figure: the four XOR inputs plotted on x₁ and x₂ axes – the classes cannot be separated by a single decision boundary (a straight line).]
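Running the same hypothetical train() sketch on XOR makes this concrete: no weight/threshold setting of a single neuron classifies all four cases, so at least one example is always misclassified, however many epochs are allowed.

```python
# Truth table for XOR as (inputs, target) pairs.
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
weights, threshold = train(XOR, n_inputs=2)

# Count how many of the four cases the trained neuron gets wrong.
wrong = 0
for inputs, target in XOR:
    total = sum(x * w for x, w in zip(inputs, weights))
    output = 1 if total > threshold else 0
    wrong += (output != target)
print(wrong)  # always at least 1: XOR is not linearly separable
```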
Slide 16
Perceptrons
To determine whether the j-th output node should fire, we calculate the value Σᵢ wᵢⱼxᵢ − θⱼ (the weighted sum of the inputs minus that node's threshold θⱼ). If this value exceeds 0 the neuron will fire, otherwise it will not fire.
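A minimal sketch of that firing rule in Python (the weight-matrix layout w[i][j] and the names are assumptions, not from the slides):

```python
# Fire the j-th output node if sum_i(w[i][j] * x[i]) - theta[j] > 0,
# where x is the input vector, w the weight matrix, theta the thresholds.
def fires(x, w, theta, j):
    value = sum(x[i] * w[i][j] for i in range(len(x))) - theta[j]
    return value > 0
```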
Slide 17
Conclusions
- The McCulloch-Pitts / perceptron neuron models are crude approximations of real neurons that perform a simple summation-and-threshold function on activation levels.
- NNs are particularly good at classification problems, where the weights are learned.
- Powerful NNs can be created using multiple layers – next term.
- Next week: Reinforcement Learning.