Fall 2004 Perceptron CS478 - Machine Learning

Neural Networks
Sub-symbolic approach:
- Does not use symbols to denote objects
- Views intelligence/learning as arising from the collective behavior of a large number of simple, interacting components
- Motivated by biological plausibility (i.e., brain-like)

Natural Neuron

Artificial Neuron
Captures the essence of the natural neuron

NN Characteristics
(Artificial) neural networks are sets of (highly) interconnected artificial neurons (i.e., simple computational units).
Characteristics:
- Massive parallelism
- Distributed knowledge representation (i.e., implicit in patterns of interactions)
- Graceful degradation (i.e., no single "grandmother cell" whose loss erases a concept)
- Less susceptible to brittleness
- Noise tolerant
- Opaque (i.e., a black box)

Network Topology
The pattern of interconnections between neurons is the primary source of inductive bias.
Characteristics:
- Interconnectivity (fully connected, mesh, etc.)
- Number of layers
- Number of neurons per layer
- Fixed vs. dynamic

NN Learning
Learning is effected by one or a combination of several mechanisms:
- Weight updates
- Changes in activation functions
- Changes in overall topology

Perceptrons (1958)
The simplest class of neural networks:
- Single layer, i.e., only one layer of weighted connections between inputs and outputs
- Binary inputs
- Boolean activation levels
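For concreteness (this formula is implied by the slides rather than stated on them), a single output node computes a thresholded weighted sum of its inputs, using the same names F (output), w (weights), x (inputs), and t (threshold) that appear in the exercises below:

F = 1 if w1*x1 + w2*x2 + ... + wn*xn > t, and F = 0 otherwise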

Learning for Perceptrons
Algorithm devised by Rosenblatt in 1958. Given an example (i.e., a labeled input pattern):
- Compute the output
- Check the output against the target
- Adapt the weights when they differ

Learn-Perceptron
d <- some constant (the learning rate)
Initialize weights (typically at random)
For each new training example with target output T:
  For each node (parallel for loop):
    Compute activation F
    If F = T, then done
    Else if F = 0 and T = 1, then increment weights on active lines by d
    Else decrement weights on active lines by d
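The pseudocode above maps directly onto a short program. Below is a minimal sketch in Python (not from the original slides; the names learn_perceptron and activation are mine), assuming binary 0/1 inputs, a fixed threshold t, and a single output node:

    # Minimal sketch of Learn-Perceptron (illustrative; names are hypothetical).
    # Assumes binary inputs, a fixed threshold t, and learning rate d, as on the slides.

    def activation(weights, x, t):
        # F = 1 if the weighted sum of the inputs exceeds the threshold t, else 0
        return 1 if sum(w * xi for w, xi in zip(weights, x)) > t else 0

    def learn_perceptron(examples, d=1.0, t=0.9, epochs=10):
        # examples: list of (input_vector, target) pairs with 0/1 entries
        weights = [0.0] * len(examples[0][0])   # slides initialize weights to 0
        for _ in range(epochs):                 # several passes may be needed
            for x, target in examples:
                f = activation(weights, x, t)
                if f == target:
                    continue                    # output matches target: nothing to do
                # otherwise move the weights on active lines (x_i = 1) by d
                delta = d if (f == 0 and target == 1) else -d
                weights = [w + delta * xi for w, xi in zip(weights, x)]
        return weights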

Intuition
Learn-Perceptron slowly (at a rate governed by d) moves the weights toward values that produce the target output. Several iterations over the training set may be required.

Example
Consider a 3-input, 1-output perceptron. Let d = 1 and t = 0.9. Initialize the weights to 0. Build the perceptron (i.e., learn the weights) for the following training set:
0 0 1 -> 0
1 1 1 -> 1
1 0 1 -> 1
0 1 1 -> 0
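Running the earlier sketch on this training set (again illustrative, reusing the hypothetical learn_perceptron above) is a quick way to check a hand-worked answer:

    examples = [([0, 0, 1], 0),
                ([1, 1, 1], 1),
                ([1, 0, 1], 1),
                ([0, 1, 1], 0)]
    print(learn_perceptron(examples, d=1.0, t=0.9))   # -> [1.0, 0.0, 0.0]

With zero initialization and this example order, the weights settle at [1, 0, 0] after two passes: the output is simply the first input, which is consistent with all four examples.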

More Functions
Repeat the previous exercise with the following training set:
0 0 -> 0
1 0 -> 1
1 1 -> 0
0 1 -> 1
What do you observe? Why?
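(For reference: this second training set is the XOR of the two inputs, which is not linearly separable, so no single-layer perceptron can fit it. Running the sketch above, again purely illustrative, makes the observation concrete: the weights cycle and the error never reaches zero.)

    xor_examples = [([0, 0], 0),
                    ([1, 0], 1),
                    ([1, 1], 0),
                    ([0, 1], 1)]
    w = learn_perceptron(xor_examples, d=1.0, t=0.9, epochs=100)
    errors = sum(activation(w, x, 0.9) != target for x, target in xor_examples)
    print(w, errors)   # weights return to [0.0, 0.0] each pass; 2 examples stay misclassified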