Perceptron by Python
Wen-Yen Hsu, Hsiao-Lung Chan
Dept. of Electrical Engineering, Chang Gung University, Taiwan
chanhl@mail.cgu.edu.tw

Outline
- Perceptron.py
- AdalineGD.py
- AdalineSGD.py
- Iris.csv

Perceptron
The Perceptron API (application programming interface) is implemented as a class; the decision-region plot is implemented as a standalone function.

class Perceptron(object):
    def __init__(self, eta=0.01, n_iter=1, random_state=1):
    def fit(self, X, y):
    def net_input(self, X):
    def predict(self, X):

def plot_decision_regions(X, y, classifier, resolution=0.01):
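The slides name plot_decision_regions but never show its body. A minimal sketch, assuming NumPy and matplotlib and any classifier with a predict method (such as the Perceptron class above):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def plot_decision_regions(X, y, classifier, resolution=0.01):
    """Plot the decision surface of a trained two-feature classifier."""
    markers = ('s', 'x')
    colors = ('red', 'blue')
    cmap = ListedColormap(colors[:len(np.unique(y))])

    # Predict over a grid that spans both feature ranges
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)

    # Color the regions, then overlay the training samples
    plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(X[y == cl, 0], X[y == cl, 1],
                    c=colors[idx], marker=markers[idx], label=cl)
    plt.legend(loc='upper left')
    plt.show()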

Training method for Perceptron
Implement fit

def fit(self, X, y):
    rgen = np.random.RandomState(self.random_state)
    self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    self.errors_ = []
    for _ in range(self.n_iter):
        errors = 0
        for xi, target in zip(X, y):
            update = self.eta * (target - self.predict(xi))
            self.w_[1:] += update * xi
            self.w_[0] += update
            errors += int(update != 0.0)
        self.errors_.append(errors)
    return self
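fit relies on self.eta, self.n_iter, and self.random_state being stored by the constructor, whose body is not shown on the slides. A minimal sketch of that __init__, using the parameter names from the API slide above:

def __init__(self, eta=0.01, n_iter=1, random_state=1):
    # Store hyperparameters used by fit()
    self.eta = eta                    # learning rate
    self.n_iter = n_iter              # number of passes (epochs) over the training set
    self.random_state = random_state  # seed for weight initialization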

net_input and predict
Implement net_input and predict

def net_input(self, X):
    """Calculate net input"""
    return np.dot(X, self.w_[1:]) + self.w_[0]

def predict(self, X):
    """Return class label after unit step"""
    return np.where(self.net_input(X) >= 0.0, 1, -1)
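In equation form, net_input computes the weighted sum of the inputs plus the bias weight w_0, and predict thresholds it at zero:

    z = \mathbf{w}^{\top} \mathbf{x} + w_0, \qquad
    \hat{y} = \begin{cases} +1, & z \ge 0 \\ -1, & \text{otherwise} \end{cases}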

Iris.csv
[Table of Iris samples; the columns used here are sepal_length, petal_length, and species.]

Load Iris data
Read the data with pandas, then organize it

import pandas as pd
import numpy as np

df = pd.read_csv('iris.csv', header=None)
y = df.iloc[1:101, 4].values
y = np.where(y == 'setosa', -1, 1)
X = df.iloc[1:101, [0, 2]].values
X = X.astype(float)
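A quick way to check that the two classes are linearly separable in these two features is to plot them. A sketch assuming matplotlib and the standard Iris row ordering (first 50 samples setosa, next 50 versicolor):

import matplotlib.pyplot as plt

plt.scatter(X[:50, 0], X[:50, 1], color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1], color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc='upper left')
plt.show()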

Training Phase
Initialize the object and train it. To test the effect of the number of training iterations, vary n_iter (the number of epochs).

max_iter = 7
ppn = Perceptron(eta=0.1, n_iter=max_iter)
ppn.fit(X, y)

for i in range(1, 10, 1):
    ppn = Perceptron(eta=0.01, n_iter=i)
    ppn.fit(X, y)
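Because fit records the number of misclassifications per epoch in errors_, convergence can be inspected by plotting that list (a sketch, assuming matplotlib):

import matplotlib.pyplot as plt

plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of updates')
plt.show()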

GD Adaline (Gradient Descent)
AdalineGD API

class AdalineGD(object):
    def __init__(self, eta=0.01, n_iter=5, random_state=1):
    def fit(self, X, y):
    def net_input(self, X):
    def activation(self, X):
    def predict(self, X):

Training method for GD Adaline
Implement fit and the activation function

def fit(self, X, y):
    rgen = np.random.RandomState(self.random_state)
    self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    self.cost_ = []
    for i in range(self.n_iter):
        net_input = self.net_input(X)
        output = self.activation(net_input)
        errors = (y - output)
        self.w_[1:] += self.eta * X.T.dot(errors)
        self.w_[0] += self.eta * errors.sum()
        cost = (errors**2).sum() / 2.0
        self.cost_.append(cost)
    return self

def activation(self, X):
    """Compute linear activation"""
    return X
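The weight update in fit is the batch gradient-descent step for the sum-of-squared-errors cost, applied to the whole training set at once, with linear activation \phi(z) = z:

    J(\mathbf{w}) = \frac{1}{2} \sum_{i} \bigl( y^{(i)} - \phi(z^{(i)}) \bigr)^{2}, \qquad
    \Delta w_j = \eta \sum_{i} \bigl( y^{(i)} - \phi(z^{(i)}) \bigr) x_j^{(i)}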

Choose learning rate
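The effect of the learning rate can be examined by training two AdalineGD models with different eta values and comparing their cost curves. A sketch, assuming matplotlib; the eta values here are illustrative:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))

# Learning rate that is too large for unstandardized features: the cost grows
ada1 = AdalineGD(eta=0.01, n_iter=10).fit(X, y)
ax[0].plot(range(1, len(ada1.cost_) + 1), np.log10(ada1.cost_), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('AdalineGD, eta=0.01')

# Much smaller learning rate: the cost decreases, but slowly
ada2 = AdalineGD(eta=0.0001, n_iter=10).fit(X, y)
ax[1].plot(range(1, len(ada2.cost_) + 1), ada2.cost_, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('AdalineGD, eta=0.0001')
plt.show()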

Standardization
Standardize features

X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()
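After standardization each feature has zero mean and unit variance, which lets gradient descent converge with a larger learning rate. A sketch of retraining on X_std (the eta and n_iter values are illustrative):

import matplotlib.pyplot as plt

ada = AdalineGD(eta=0.01, n_iter=15)
ada.fit(X_std, y)   # converges on the standardized features

plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()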

SGD Adaline (Stochastic Gradient Descent)
AdalineSGD API

class AdalineSGD(object):
    def __init__(self, eta=0.01, n_iter=10, shuffle=True, random_state=None):
    def fit(self, X, y):
    def partial_fit(self, X, y):
    def _shuffle(self, X, y):
    def _initialize_weights(self, m):
    def _update_weights(self, xi, target):
    def net_input(self, X):
    def activation(self, X):
    def predict(self, X):

Training method for SGD Adaline
Implement fit

def fit(self, X, y):
    self._initialize_weights(X.shape[1])
    self.cost_ = []
    for i in range(self.n_iter):
        if self.shuffle:
            X, y = self._shuffle(X, y)
        cost = []
        for xi, target in zip(X, y):
            cost.append(self._update_weights(xi, target))
        avg_cost = sum(cost) / len(y)
        self.cost_.append(avg_cost)
    return self

shuffle and update_weights
Implement _shuffle and _update_weights

def _shuffle(self, X, y):
    """Shuffle training data"""
    r = self.rgen.permutation(len(y))
    return X[r], y[r]

def _update_weights(self, xi, target):
    """Apply Adaline learning rule to update the weights"""
    output = self.activation(self.net_input(xi))
    error = (target - output)
    self.w_[1:] += self.eta * xi.dot(error)
    self.w_[0] += self.eta * error
    cost = 0.5 * error**2
    return cost
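fit and _shuffle also rely on _initialize_weights and on self.rgen, neither of which is shown on the slides. A minimal sketch of _initialize_weights that would make them work; the w_initialized flag is an assumption, useful if partial_fit needs to know whether weights already exist:

def _initialize_weights(self, m):
    """Initialize weights to small random numbers and create the RNG."""
    self.rgen = np.random.RandomState(self.random_state)
    self.w_ = self.rgen.normal(loc=0.0, scale=0.01, size=1 + m)
    self.w_initialized = True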