Neural Networks AI – Week 21 Sub-symbolic AI One: Neural Networks Lee McCluskey, room 3/10

Neural Networks

Aoccdrnig to rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

Up to now: Symbolic AI
Knowledge representation is explicit and composite – features of the representation (e.g. objects, relations, ...) map to features of the world.
Processes are often based on heuristic search, matching, logical reasoning and constraint handling.
Good for simulating "high-level cognitive" tasks such as reasoning, planning, problem solving, high-level learning, and language and text processing.
[Diagram: block A stacked on block B in The World maps to OnTop(A,B) in The Representation.]

Up to now: Symbolic AI
Benefits:
- AI/knowledge bases can be engineered and maintained as in software engineering.
- Behaviour can be predicted and explained, e.g. using logical reasoning.
Problems:
- Reasoning tends to be "brittle" – easily broken by incorrect or approximate data.
- Not so good for simulating low-level (reactive) animal behaviour, where the inputs are noisy or incomplete.

Neural Networks
Neural Networks (NNs) are networks of neurons, for example as found in real (i.e. biological) brains.
Artificial Neurons are crude approximations of the neurons found in brains. They may be physical devices, or purely mathematical constructs.
Artificial Neural Networks (ANNs) are networks of Artificial Neurons, and hence constitute crude approximations to parts of real brains. An ANN can be viewed as a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task.
BENEFITS:
- Massive parallelism makes them very efficient.
- They can learn and generalise from training data – so there is no need for knowledge engineering or a complex understanding of the problem.
- They are fault tolerant – equivalent to the "graceful degradation" found in biological systems – and noise tolerant, so they can cope with noisy, inaccurate inputs.

Learning in Neural Networks
There are many forms of neural networks. Most operate by passing neural 'activations' – processed firing states – through a network of connected neurons.
One of the most powerful features of neural networks is their ability to learn and generalise from a set of training data. They adapt the strengths/weights of the connections between neurons so that the final output activations are correct (e.g. like catching a ball, or learning to balance).
We will consider:
1. Supervised learning (i.e. learning with a teacher)
2. Reinforcement learning (i.e. learning with limited feedback)

BRAINS VS COMPUTERS
1. There are approximately 10 billion neurons in the human cortex, compared with tens of thousands of processors in the most powerful parallel computers.
2. Each biological neuron is connected to several thousand other neurons, similar to the connectivity in powerful parallel computers.
3. Lack of processing units can be compensated by speed. The typical operating speed of biological neurons is measured in milliseconds (10^-3 s), while a silicon chip can operate in nanoseconds (10^-9 s).
4. The human brain is extremely energy efficient, using many orders of magnitude less energy per operation than the best computers today.
5. Brains have been evolving for tens of millions of years; computers have been evolving for tens of decades.
"My Brain is a Learning Neural Network" – Terminator 2

Very Very Simple Model of an Artificial Neuron (McCulloch and Pitts, 1943)
A set of synapses (i.e. connections) brings in activations (inputs) from other neurons.
A processing unit sums the inputs multiplied by their weights, then applies a transfer function with a "threshold value" to see if the neuron "fires". If the weighted sum does not reach the threshold, the output is 0.
An output line transmits the result to other neurons (the output can be binary or continuous).
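The summation-and-threshold behaviour described above can be sketched in a few lines of Python (the function name and the AND example are illustrative, not from the slides):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style neuron: fire (output 1) if the
    weighted sum of the inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2 the neuron computes logical AND:
print(mcp_neuron([1, 1], [1, 1], 2))  # -> 1
print(mcp_neuron([1, 0], [1, 1], 2))  # -> 0
print(mcp_neuron([0, 0], [1, 1], 2))  # -> 0
```

Note that the "knowledge" here lives entirely in the weights and the threshold, not in any explicit symbolic rule.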

NNs: we don’t have to design them – they can learn their weights
Consider the simple neuron model:
1. Supply a set of values for the inputs (x1 ... xn).
2. An output is achieved and compared with the known target (correct/desired) output (like a "class" in learning from examples).
3. If the output generated by the network does not match the target output, the weights are adjusted.
4. The process is repeated from step 1 until the correct output is generated.
This is like supervised learning / learning from examples.
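The four steps above can be sketched as a training loop. This is a minimal illustration training a single neuron on the AND function; the learning rate, epoch count and use of a bias term (a learned threshold) are assumptions for the sketch, not details from the slides:

```python
def train(examples, lr=0.1, epochs=100):
    """Perceptron-style supervised learning: adjust weights
    whenever the output does not match the target."""
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias, acting as a learned threshold
    for _ in range(epochs):
        for x, target in examples:          # step 1: supply inputs
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0  # step 2
            err = target - out              # compare with target
            w[0] += lr * err * x[0]         # step 3: adjust weights
            w[1] += lr * err * x[1]
            b    += lr * err
    return w, b                             # step 4: loop until correct

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
outputs = [1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in AND]
print(outputs)  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, this loop is guaranteed to converge (the perceptron convergence theorem); the next slides show a function for which it cannot.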

Real Example: Pattern Recognition
Pixel grid dimension: n = 5 x 8 = 40. One output node indicates two classes.
What’s missing here?

Simple Example: Learning Boolean Functions

Example viewed as a Decision Problem
[Plot: inputs x1 and x2 on the two axes, with a separating line (decision boundary) dividing the classes.]

A one-layer neuron is not very powerful …!

XOR – Linearly Non-separable
[Plot: the four XOR inputs on the x1, x2 axes. The classes cannot be separated by a single decision boundary.]
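This non-separability can be seen empirically: run the same kind of perceptron-style update loop on XOR and, no matter how many epochs we allow, it never classifies all four cases correctly. The training details (learning rate, epoch count, bias term) are assumptions for the sketch:

```python
def predict(w, b, x):
    """Single-layer neuron: fire if the weighted sum exceeds 0."""
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b, lr = [0.0, 0.0], 0.0, 0.1
best = 0  # best number of correct cases seen at any point
for _ in range(1000):  # far more epochs than AND needed
    for x, target in XOR:
        err = target - predict(w, b, x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
    best = max(best, sum(predict(w, b, x) == t for x, t in XOR))
print("all four cases correct?", best == 4)  # -> all four cases correct? False
```

No single linear boundary can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other, so the weights simply cycle without ever solving the problem; a multi-layer network (next term) is needed.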

Perceptrons
To determine whether the j-th output node should fire, we calculate the value
    sum_i ( w_ij * x_i )
i.e. the sum over all inputs x_i of each input multiplied by its connection weight w_ij. If this value exceeds 0, the neuron fires; otherwise it does not fire.

Neural Networks: Conclusions
The McCulloch-Pitts / perceptron neuron models are crude approximations to real neurons that perform a simple summation-and-threshold function on activation levels.
NNs are particularly good at classification problems, where the weights are learned.
More powerful NNs can be created using multiple layers – next term.
Next week – Reinforcement Learning.