Meta-controlled Boltzmann Machine toward Accelerating the Computation

Tran Duc Minh (*), Junzo Watada (**)
(*) Institute of Information Technology, Viet Nam Academy of Science & Technology
(**) Graduate School of Information, Production and Systems, Waseda University, Japan

Tokyo, 02/2004

CONTENT
- Introduction
- The portfolio selection problem
- Inner behaviors of the Meta-controlled Boltzmann machine
- Some hints on accelerating the Meta-controlled Boltzmann machine
- Conclusion

Introduction
- H. Markowitz proposed a method to allocate an amount of funds across plural stocks for investment.
- The model of the Meta-controlled Boltzmann Machine.
- The ability of the Meta-controlled Boltzmann Machine to solve the quadratic programming problem.

The portfolio selection problem

Maximize    $\sum_{i=1}^{n} \rho_i x_i$
Minimize    $\sum_{i=1}^{n} \sum_{j=1}^{n} \sigma_{ij} x_i x_j$
Subject to  $\sum_{i=1}^{n} x_i = 1$ and $\sum_{i=1}^{n} m_i = S$, with $m_i \in \{0, 1\}$, $i = 1, \dots, n$,

where $\sigma_{ij}$ denotes the covariance between stocks i and j, $\rho_i$ is the expected return rate of stock i, $x_i$ is the investing rate in stock i, n denotes the total number of stocks, S denotes the number of stocks selected, and finally, $m_i$ denotes the selection variable for investing stocks.
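To make the formulation concrete, here is a small NumPy helper that scores a candidate solution against the model above; the function name and its feasibility check are illustrative additions, not part of the original slides:

```python
import numpy as np

def portfolio_objectives(x, m, sigma, rho, S):
    """Return (expected return, risk, feasible) for an allocation x and
    selection vector m under the portfolio model above. Illustrative only."""
    ret = rho @ x            # expected return: sum_i rho_i x_i (maximize)
    risk = x @ sigma @ x     # risk: sum_ij sigma_ij x_i x_j (minimize)
    feasible = (np.isclose(x.sum(), 1.0)       # investing rates sum to 1
                and np.isin(m, (0, 1)).all()   # each m_i is 0 or 1
                and int(m.sum()) == S)         # exactly S stocks selected
    return ret, risk, feasible
```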

The portfolio selection problem (cont.)

Convert the objective function into the energy functions of the two components, the Meta-controlling layer (a Hopfield network) and the lower layer (a Boltzmann machine):

Meta-controlling layer: [energy function not preserved in the transcript]
Lower layer: [energy function not preserved in the transcript]

where $K_u$ and $K_l$ are the weights of the expected return rate for each layer, and $s_i$ is the output value of the i-th unit of the Meta-controlling layer.
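The energy formulas themselves were figures and did not survive the transcript. A plausible reconstruction, assuming the usual penalty-method mapping of the objectives and constraints onto Hopfield/Boltzmann energies (the penalty coefficients $A$, $B$ and the exact scaling are assumptions):

```latex
E_{\mathrm{meta}}  = \sum_{i=1}^{n}\sum_{j=1}^{n} \sigma_{ij}\, s_i s_j
                   - K_u \sum_{i=1}^{n} \rho_i\, s_i
                   + A \Bigl(\sum_{i=1}^{n} s_i - S\Bigr)^{2}

E_{\mathrm{lower}} = \sum_{i=1}^{n}\sum_{j=1}^{n} \sigma_{ij}\, x_i x_j
                   - K_l \sum_{i=1}^{n} \rho_i\, x_i
                   + B \Bigl(\sum_{i=1}^{n} x_i - 1\Bigr)^{2}
```

Under this mapping, lower energy corresponds to lower risk and higher weighted return, which is what both layers minimize.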

Algorithm of the Meta-controlled Boltzmann machine
Step 1. Set each parameter to its initial value.
Step 2. Input the values of K_u and K_l.
Step 3. Execute the Meta-controlling layer.
Step 4. If the output value of a unit in the Meta-controlling layer is 1, add some amount of value to the corresponding unit in the lower layer. Execute the lower layer.
Step 5. After executing the lower layer a constant number of times, decrease the temperature.
Step 6. If the output value is sufficiently large, add a certain amount of value to the corresponding unit in the Meta-controlling layer.
Step 7. Iterate Steps 3 to 6 until the temperature reaches the restructuring temperature.
Step 8. Restructure the lower layer using the selected units of the Meta-controlling layer.
Step 9. Execute the lower layer until reaching the termination condition.
(A runnable sketch of this loop follows the step list.)
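As a concrete illustration, here is a minimal NumPy sketch of the loop above. The energy form, the encouragement rule (a fixed bias bonus), the 0.8 activity threshold, and all parameter values are assumptions filled in for the sketch; the slides do not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def lower_energy(x, sigma, rho, K_l):
    # Assumed lower-layer (Boltzmann machine) energy: risk minus weighted return.
    return x @ sigma @ x - K_l * (rho @ x)

def meta_controlled_bm(sigma, rho, K_u=1.0, K_l=1.0, bonus=0.5,
                       T0=10.0, T_restruct=1.0, T_min=0.05,
                       alpha=0.9, inner_steps=100):
    n = len(rho)
    s = rng.integers(0, 2, n).astype(float)   # Meta layer states in {0, 1}
    x = rng.random(n)                         # lower layer outputs in [0, 1]
    meta_bias = np.zeros(n)                   # feedback from lower layer (Step 6)
    T = T0
    while T > T_restruct:                     # Step 7: iterate Steps 3-6
        for i in range(n):                    # Step 3: deterministic Hopfield sweep
            h = -2.0 * (sigma[i] @ s) + K_u * rho[i] + meta_bias[i]
            s[i] = 1.0 if h > 0 else 0.0
        lower_bias = bonus * s                # Step 4: encourage selected units
        for _ in range(inner_steps):          # Steps 4-5: lower layer at temperature T
            i = rng.integers(n)
            trial = x.copy()
            trial[i] = rng.random()
            dE = (lower_energy(trial, sigma, rho, K_l) - lower_bias @ trial) \
               - (lower_energy(x, sigma, rho, K_l) - lower_bias @ x)
            if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
                x = trial
        meta_bias = bonus * (x > 0.8)         # Step 6: feed strong units back up
        T *= alpha                            # Step 5: cool down
    keep = s > 0                              # Step 8: restructure the lower layer
    x = np.where(keep, x, 0.0)
    while T > T_min:                          # Step 9: run the reduced net to the end
        for _ in range(inner_steps):
            i = rng.integers(n)
            if not keep[i]:
                continue
            trial = x.copy()
            trial[i] = rng.random()
            dE = lower_energy(trial, sigma, rho, K_l) - lower_energy(x, sigma, rho, K_l)
            if dE < 0 or rng.random() < np.exp(-dE / T):
                x = trial
        T *= alpha
    return x, s

# Synthetic example: 20 stocks with a random positive semidefinite covariance.
A = rng.random((20, 20))
x, s = meta_controlled_bm(A @ A.T / 20, rng.random(20))
```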

Inner behaviors of the Meta-controlled Boltzmann machine
- Sometimes the Hopfield layer may converge to a local minimum, but the disturb values let it get over it.
- The changes in the Meta layer's energy function are very small, while the changes in the lower layer's energy function are quite large.
- The number of cycles needed to execute the Meta layer is much smaller than the number of cycles for the lower layer.
- Similar to simulated annealing, the network will "try to go downhill most of the time instead of always going downhill" (see the acceptance rule sketched below).
- The time to converge is much shorter than for a conventional Boltzmann machine.
- All the neurons that are "encouraged" are selected before the system goes to the final computation.
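The downhill-most-of-the-time behavior is the standard Metropolis acceptance rule of simulated annealing; a minimal sketch:

```python
import numpy as np

def accept(dE, T, rng=np.random.default_rng()):
    # Always accept moves that lower the energy; accept uphill moves with
    # probability exp(-dE / T), which shrinks as the temperature T falls.
    return dE < 0 or rng.random() < np.exp(-dE / T)
```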

Chart of behaviors of the Meta-controlled Boltzmann Machine (disturb back value = 80%). [chart not preserved in the transcript]

Chart of behaviors of the Meta-controlled Boltzmann Machine (disturb back value = 1%). [chart not preserved in the transcript]

Comparison of computing time between a conventional Boltzmann machine and a Meta-controlled Boltzmann Machine (1286 units). [results chart not preserved in the transcript]

Some hints on accelerating the Meta-controlled Boltzmann machine
- Use only a single Boltzmann Machine layer: modify the original Boltzmann Machine algorithm by removing the discouraged units before going into the final computation.
- Modify the Meta layer by replacing deterministic neurons with stochastic neurons (a sketch follows below).
- Borrow ideas from other kinds of neural networks: use multi-layer structures and heuristic modifications of the learning procedure.
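For the stochastic-neuron hint, one way a stochastic Meta-layer unit could work is to fire with sigmoid probability instead of the hard threshold used in the loop sketch above; a minimal sketch, assuming the same local field h (the names and scaling are illustrative):

```python
import numpy as np

def stochastic_meta_update(i, s, sigma, rho, K_u, T, rng):
    # Local field of Meta unit i under the assumed energy; instead of the
    # deterministic rule s[i] = 1 if h > 0, fire with sigmoid probability,
    # so the Meta layer itself can escape local minima at high temperature.
    h = -2.0 * (sigma[i] @ s) + K_u * rho[i]
    p = 1.0 / (1.0 + np.exp(-h / T))
    return 1.0 if rng.random() < p else 0.0
```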

CONCLUSION
- Training algorithms for neural network models grow out of many simple but efficient ideas.
- The trend in accelerating algorithms focuses mainly on heuristic modification and numeric optimization techniques, i.e., on faster convergence while preserving the correctness of the algorithms.
- The Meta-controlled Boltzmann Machine can be used to solve quadratic programming problems.
- Future work: try the model on other quadratic programming problems; modify the original algorithm toward accelerating computation.

THANK YOU!