A machine learning perspective on neural networks and learning tools

Presentation transcript:

A machine learning perspective on neural networks and learning tools
Tom Schaul
4th FACETS CodeJam Workshop

Overview
PyBrain: training artificial neural networks for classification, (sequence) prediction and control
- Neural networks
  - Modular structure
  - Available architectures
- Training
  - Supervised learning
  - Optimization
  - Reinforcement learning (RL)

Disclaimer
Only version 0.3, so you may encounter:
- inconsistencies
- bugs
- undocumented “features”
But growing:
- 10+ contributors
- 100+ followers (github, mailing list)
- 1000+ downloads

(Our) Neural Networks
- No spikes
- Continuous activations
- Discrete time steps

Network Structure: Modules
[Diagram: a Module with an input buffer and an output buffer, the corresponding input-error and output-error buffers, its Parameters, and the Derivatives computed for them.]

Network Structure: Connections
[Diagram: a FullConnection feeds one Module's output buffer into the next Module's input buffer; during training, errors are passed back along the same connection from input-error to output-error buffers.]
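
In code, this maps onto explicit module and connection objects. A minimal sketch, assuming the PyBrain 0.3-era API (the layer sizes and names here are arbitrary):

    # Build a feed-forward network by hand from modules and connections.
    from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection

    net = FeedForwardNetwork()

    # Modules: each carries input/output buffers (and matching error buffers).
    inLayer = LinearLayer(2, name='in')
    hiddenLayer = SigmoidLayer(3, name='hidden')
    outLayer = LinearLayer(1, name='out')
    net.addInputModule(inLayer)
    net.addModule(hiddenLayer)
    net.addOutputModule(outLayer)

    # Connections: feed one module's output into another module's input.
    net.addConnection(FullConnection(inLayer, hiddenLayer))
    net.addConnection(FullConnection(hiddenLayer, outLayer))

    net.sortModules()            # finalize the topology
    print(net.activate([1, 2]))  # one forward pass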

Network Structure: Graphs, Recurrency, Nesting
[Diagram: modules and connections form a directed graph; connections can be recurrent (spanning time steps), and a whole network can itself be used as a Module nested inside a larger network.]
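
Recurrency follows the same pattern; a sketch assuming the same 0.3-era API, where a recurrent connection is applied with a one-time-step delay:

    # A recurrent network: the hidden layer feeds back into itself across time steps.
    from pybrain.structure import RecurrentNetwork, LinearLayer, SigmoidLayer, FullConnection

    net = RecurrentNetwork()
    net.addInputModule(LinearLayer(2, name='in'))
    net.addModule(SigmoidLayer(3, name='hidden'))
    net.addOutputModule(LinearLayer(1, name='out'))
    net.addConnection(FullConnection(net['in'], net['hidden']))
    net.addConnection(FullConnection(net['hidden'], net['out']))
    net.addRecurrentConnection(FullConnection(net['hidden'], net['hidden']))
    net.sortModules()

    net.activate([1, 0])  # time step 1
    net.activate([1, 0])  # time step 2: output now depends on step 1 as well
    net.reset()           # clear the recurrent history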

Network Components: Modules
Module types:
- layers of neurons
- additive or multiplicative
- sigmoidal squashing functions
- stochastic outputs
- gate units
- memory cells (e.g. LSTM cells)
- …

Network Components: Connections
- Fully connected or sparse
- Time-recurrent
- Weight-sharing
- May contain parameters
- …

Network Architectures
- Feed-forward networks, including
  - Deep Belief Nets
  - Restricted Boltzmann Machines (RBM)
- Recurrent networks, including
  - Reservoirs (Echo State networks)
  - Bidirectional networks
  - Long Short-Term Memory (LSTM) architectures
  - Multi-Dimensional Recurrent Networks (MDRNN)
- Custom-designed topologies
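
Standard architectures do not have to be wired up by hand. A hedged sketch using the buildNetwork shortcut (the layer sizes are made up; hiddenclass and recurrent are keyword options of the 0.3-era API):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.structure import TanhLayer, LSTMLayer

    # Feed-forward net: 10 inputs, 20 tanh hidden units, 2 outputs.
    ffn = buildNetwork(10, 20, 2, hiddenclass=TanhLayer)

    # Recurrent LSTM network for sequence tasks.
    lstm = buildNetwork(10, 20, 2, hiddenclass=LSTMLayer, recurrent=True)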

Overview
- Neural networks
  - Modular structure
  - Available architectures
- Training
  - Supervised learning
  - Optimization
  - Reinforcement learning (RL)

Training: Supervised Learning
[Diagram: the Module's output is compared to a target; the resulting output error is backpropagated through the Module, filling in the Derivatives, and a gradient update is applied to the Parameters.]
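
A minimal supervised-learning sketch on XOR, assuming the 0.3-era dataset and trainer API (learning rate and momentum values are arbitrary):

    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer
    from pybrain.tools.shortcuts import buildNetwork

    # Dataset with 2-dimensional inputs and 1-dimensional targets (XOR).
    ds = SupervisedDataSet(2, 1)
    for inp, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
        ds.addSample(inp, (target,))

    net = buildNetwork(2, 4, 1)
    trainer = BackpropTrainer(net, ds, learningrate=0.1, momentum=0.9)
    for epoch in range(1000):
        trainer.train()          # one backprop epoch over the dataset; returns the error
    print(net.activate((1, 0)))  # should now be close to 1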

Training: Black-box Optimization
- Fitness function based on e.g. MSE, accuracy, rewards
- Multiple fitness values: multi-objective optimization
[Diagram: a BlackBoxOptimizer treats the network as a black box, repeatedly setting its Parameters and reading back a fitness value in order to propose the next parameter update.]

Optimization Algorithms
- (Stochastic) Hill-climbing
- Particle Swarm Optimization (PSO)
- (Natural) Evolution Strategies (ES)
- Covariance Matrix Adaptation (CMA)
- Genetic Algorithms (GA)
- Co-evolution
- Multi-Objective Optimization (NSGA-II)
- …
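
Any of these can be pointed at a network's parameter vector. A hedged sketch with CMAES (the toy fitness function, the evaluation budget, and the assumption that the optimizer maximizes by default are all illustrative, not authoritative):

    from pybrain.optimization import CMAES
    from pybrain.tools.shortcuts import buildNetwork

    net = buildNetwork(2, 3, 1)

    def fitness(params):
        # Toy objective: reward weight vectors that make the net output 1.0 for input (1, 1).
        net._setParameters(params)
        return -abs(1.0 - net.activate((1, 1))[0])  # higher is better

    optimizer = CMAES(fitness, net.params.copy())   # start from the current weights
    optimizer.maxEvaluations = 200                  # stopping criterion
    best_params, best_fitness = optimizer.learn()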

Training: Reinforcement Learning
[Diagram: an Experiment couples an Agent with a Task; the Task wraps the Environment, converting environment states into observations, forwarding the Agent's actions, and computing rewards.]

RL: Agents, Learners, Exploration
[Diagram: a LearningAgent receives observations and rewards and emits actions; internally it combines a Module (the controller), an Explorer that perturbs its actions, a DataSet that stores the interaction history, and a Learner that updates the Module from that data.]

RL: Learning Algorithms and Exploration
- Value-based RL
  - Q-Learning, SARSA
  - Fitted-Q Iteration
- Policy Gradient RL
  - REINFORCE
  - Natural Actor-Critic
- Exploration methods
  - Epsilon-Greedy
  - Boltzmann
  - State-Dependent Exploration

RL: Environments and Tasks
- 2D Mazes (MDP / POMDP)
- Pole balancing
- 3D environments (ODE, FlexCube)
- Board games (e.g. Atari-Go, Pente)
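
Putting the RL pieces together, a hedged sketch in the spirit of the classic PyBrain maze example (the maze layout and interaction counts are made up for illustration):

    from scipy import array
    from pybrain.rl.environments.mazes import Maze, MDPMazeTask
    from pybrain.rl.learners.valuebased import ActionValueTable
    from pybrain.rl.learners import Q
    from pybrain.rl.agents import LearningAgent
    from pybrain.rl.experiments import Experiment

    # 9x9 grid: 1 = wall, 0 = free; the goal is placed at (7, 7).
    structure = array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
                       [1, 0, 0, 1, 0, 0, 0, 0, 1],
                       [1, 0, 0, 1, 0, 0, 1, 0, 1],
                       [1, 0, 0, 1, 0, 0, 1, 0, 1],
                       [1, 0, 0, 1, 0, 1, 1, 0, 1],
                       [1, 0, 0, 0, 0, 0, 1, 0, 1],
                       [1, 1, 1, 1, 1, 1, 1, 0, 1],
                       [1, 0, 0, 0, 0, 0, 0, 0, 1],
                       [1, 1, 1, 1, 1, 1, 1, 1, 1]])
    environment = Maze(structure, (7, 7))
    task = MDPMazeTask(environment)        # Environment wrapped as a Task

    controller = ActionValueTable(81, 4)   # 81 states (9x9), 4 actions
    controller.initialize(1.0)
    learner = Q()                          # tabular Q-learning (epsilon-greedy exploration by default)
    agent = LearningAgent(controller, learner)

    experiment = Experiment(task, agent)
    for _ in range(100):
        experiment.doInteractions(100)     # collect experience into the agent's dataset
        agent.learn()                      # update the action values
        agent.reset()                      # clear the dataset for the next batch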

Also in PyBrain
- Unsupervised learning and preprocessing
- Support Vector Machines (through LIBSVM)
- Tools
  - Plotting / Visualization
  - netCDF support
  - XML read/write support
  - arac: fast C version
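
For instance, the XML tooling can round-trip a trained network to disk. A hedged sketch (the NetworkWriter/NetworkReader module paths are quoted from memory of the 0.3-era docs and should be treated as assumptions):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.tools.customxml.networkwriter import NetworkWriter
    from pybrain.tools.customxml.networkreader import NetworkReader

    net = buildNetwork(2, 3, 1)
    NetworkWriter.writeToFile(net, 'net.xml')     # serialize structure and parameters
    restored = NetworkReader.readFrom('net.xml')  # rebuild an equivalent network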

References
- Source download, documentation: www.pybrain.org
- Mailing list (200+ members): groups.google.com/group/pybrain
- Feature requests: github.com/pybrain/pybrain/issues
- Citation: T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F. Sehnke, T. Rückstieß and J. Schmidhuber. PyBrain. Journal of Machine Learning Research, 2010.

Acknowledgements
Justin Bayer, Martin Felder, Thomas Rückstiess, Frank Sehnke, Daan Wierstra, and many more who contributed code, suggestions, bug fixes …
… and you for your attention!