Deep Learning.


neuron

neural network

axon, soma, dendrites

The soma adds dendrite activity together and passes it along the axon.

More dendrite activity makes more axon activity.

Synapse: the connection between the axon of one neuron and the dendrites of another.

Axons can connect to dendrites strongly, weakly, or somewhere in between.

Connections range in strength: strong (1.0), medium (.6), weak (.2). No connection is a 0.

Lots of axons connect with the dendrites of one neuron. Each has its own connection strength.

Example connection strengths: .5, .8, .3, .2, .9
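The idea of many connections, each with its own strength, can be sketched in a few lines of code. This is a minimal illustration: the soma's summing becomes a weighted sum, and the input activities here are made up for the example.

```python
# One artificial neuron: each incoming connection has its own strength
# (weight); the "soma" sums input activity scaled by those strengths.

def neuron_activity(inputs, weights):
    """Weighted sum of dendrite activity."""
    return sum(x * w for x, w in zip(inputs, weights))

# The example connection strengths above, with illustrative inputs
# saying which upstream axons are active:
weights = [.5, .8, .3, .2, .9]
inputs = [1, 0, 1, 0, 1]
activity = neuron_activity(inputs, weights)  # about .5 + .3 + .9 = 1.7
```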

Example: inputs A, B, C, D, E feed a neuron that responds to the combined pattern ACE.

Higher-level patterns combine lower-level ones: from the inputs eye, sun, ball, and glasses, a network can build eyeball, sunglasses, and eyeglasses.

The guy at the shawarma place: sometimes he works mornings (am), sometimes evenings (pm).

Initialize: start with random weights, e.g. .3, .8, .9, .9, .6, .2, .1, .4.

Gather data: observations of which shifts (am or pm) he actually works.

Activate each of the neurons: each output's activity is the average of its incoming weights, e.g. (.3 + .1)/2 = .2 for one neuron and (.8 + .4)/2 = .6 for the other.
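The activation step above can be sketched as code. Note the averaging rule is read off the slide's arithmetic ((.3 + .1)/2 = .2 and (.8 + .4)/2 = .6), not stated explicitly.

```python
# Activation rule implied by the slide: an output neuron's activity is
# the average of its incoming weights for the observed inputs.

def activate(incoming_weights):
    return sum(incoming_weights) / len(incoming_weights)

first = activate([.3, .1])   # (.3 + .1) / 2 = .2
second = activate([.8, .4])  # (.8 + .4) / 2 = .6
```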

Error: calculate how bad the best-fit neuron is. If it's a perfect fit, there is no need to update the weights. If the best fit is bad, the weights need a lot of work. Here the best activation is .6, so error = 1 - .6 = .4.

Gradient descent: for each weight, adjust it up and down a bit and see how the error changes. Plotting error against a weight shows which direction of adjustment reduces the error.
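"Adjust each weight up and down a bit and see how the error changes" is a finite-difference estimate of the gradient. A sketch, with a made-up error function standing in for the network:

```python
# Numerical gradient: nudge each weight up and down, watch the error.

def numerical_gradient(error_fn, weights, step=1e-4):
    grads = []
    for i in range(len(weights)):
        up = list(weights); up[i] += step
        down = list(weights); down[i] -= step
        grads.append((error_fn(up) - error_fn(down)) / (2 * step))
    return grads

# Illustrative error: squared distance of the averaged weights from 1.0.
def toy_error(w):
    return (1.0 - sum(w) / len(w)) ** 2

grads = numerical_gradient(toy_error, [.3, .1])
# Both gradients come out negative: raising either weight lowers the error.
```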

Backpropagation: adjust all the weights using adjustment = error * gradient * delta. One pass nudges each weight in turn: .8 -> .9, .9 -> .8, .2 -> .1, .4 -> .5. With the updated weights, the best-fit activation becomes (.9 + .5)/2 = .7.

Backpropagation: the error is now a little lower, error = 1 - .7 = .3.
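The update rule on the slides, adjustment = error * gradient * delta, can be sketched as below. Treating gradient as the error slope for each weight and delta as the input activity on that connection is an interpretation; the slide does not define the terms.

```python
# Apply the slide's rule, adjustment = error * gradient * delta,
# to every weight at once.

def adjust_weights(weights, error, gradients, deltas):
    return [w + error * g * d
            for w, g, d in zip(weights, gradients, deltas)]

# Illustrative numbers: with error .4 and a gradient and delta of .5 each,
# a weight of .8 moves up by .4 * .5 * .5 = .1, to .9.
new_weights = adjust_weights([.8], .4, [.5], [.5])
```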

Iterate: repeat this process until the weights stop changing. Over many passes, the weights that capture the pattern climb toward 1 and the others fall toward 0, until the network settles (here, at weights of 1).
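The whole loop (activate, measure the error, adjust, repeat until the weights stop changing) can be sketched on a toy problem. This is a one-weight stand-in, not the slides' two-input network:

```python
# Iterate until the adjustments become negligible.

def train(weight, target=1.0, rate=0.5, tol=1e-6, max_steps=10_000):
    for _ in range(max_steps):
        error = target - weight       # how far the output is from ideal
        adjustment = rate * error     # move a fraction of the error
        if abs(adjustment) < tol:
            break                     # the weight has stopped changing
        weight += adjustment
    return weight

final = train(0.3)  # climbs toward 1.0, like the weights on the slides
```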

Each node represents a pattern: a combination of the neurons in the previous layer (inputs feed the first layer).

Deep network: if a network has more than three layers (inputs, first layer, second layer, third layer), it's deep. Some have 12 or more.

Deep network: patterns get more complex as the number of layers goes up. From letters (a, b, c, d, e) to fragments (ant, ted, st, ern, qu) to words (answer, antique, altered, astern) to phrases (evasive answer, antique finish, altered states).
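The layering above can be sketched as repeated weighted combination: each layer's nodes combine the previous layer's outputs, so deeper nodes stand for more complex patterns. The weights here are illustrative.

```python
# One layer: each output node is a weighted sum over all inputs from below.

def layer(inputs, weight_rows):
    return [sum(x * w for x, w in zip(inputs, row)) for row in weight_rows]

inputs = [1.0, 0.0, 1.0]                      # lowest-level features
first = layer(inputs, [[.5, .2, .1],          # two first-layer patterns
                       [.3, .9, .4]])
second = layer(first, [[1.0, -1.0]])          # a pattern of patterns
```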

Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng

Understanding Neural Networks Through Deep Visualization. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson

Recommending Music on Spotify with Deep Learning. Sander Dieleman

Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller

Robot Learning Manipulation Action Plans by "Watching" Unconstrained Videos from the World Wide Web. Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos

Types of neural networks: Convolutional Neural Networks, Recurrent Neural Networks, Deep Belief Networks, Restricted Boltzmann Machines, Deep Reinforcement Learning, Deep Q Learning, Hierarchical Temporal Memory, Stacked Denoising Autoencoders

Bottom line: deep learning is good at learning patterns.