
B.Macukow 1 Lecture 12 Neural Networks

B.Macukow 2 Neural Networks for Matrix Algebra Problems

B.Macukow 3 Feedforward neural networks can solve, in real time, a large variety of important matrix algebra problems, such as: matrix inversion, matrix multiplication, LU decomposition, and the eigenvalue problem.

B.Macukow 4 These algorithms, based on massively parallel data transformation, achieve high speed (on the order of microseconds) in practice, i.e. real-time operation. For a given problem we define an error (energy) function and a suitable multilayer network, and during the learning phase we find the minimum of the error function.

B.Macukow 5 Matrix inversion Let A be a nonsingular square matrix. Task: to find the neural network calculating the matrix B = A⁻¹. The matrix B fulfills the relation BA = I.

B.Macukow 6 Multiplying both sides by an arbitrary non-zero vector x = [x1, x2, ..., xn]^T we get BAx − x = 0 (1) The energy (error) function can be defined by E(B) = (1/2) ||BAx − x||^2 (2)

B.Macukow 7 Solving equation (1) can be replaced by minimization of the function (2). The vector x plays a double role: it is the learning signal (network input signal) and the desired output (target) signal, i.e. it is an autoassociative network.

B.Macukow 8 A simplified block diagram

B.Macukow 9 u = Ax, y = Bu, or y = Bu = BAx = Ix = x. It means that the output vector signal y must be equal to the input vector signal x, i.e. the network should learn the identity map y = x. The fundamental question for the training phase: what kind of input signals x should be applied in order to obtain the desired solution?
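This identity-map property can be checked directly. A minimal NumPy sketch, with a small hypothetical test matrix A and with B obtained from np.linalg.inv rather than by learning:

```python
import numpy as np

# If B = A^-1 exactly, the cascade u = Ax, y = Bu reproduces the input:
# y = B(Ax) = Ix = x. The matrix A below is a hypothetical example.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.linalg.inv(A)          # the target of the learning procedure

x = np.array([1.0, -1.0])     # arbitrary non-zero input vector
u = A @ x                     # first layer: fixed weights A
y = B @ u                     # second layer: weights B
print(np.allclose(y, x))      # True: the network realizes the identity map
```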

B.Macukow 10 One of the simplest sets of input patterns can be chosen as: x(1) = [1,0,0,...,0]^T, x(2) = [0,1,0,...,0]^T, ..., x(n) = [0,0,0,...,1]^T. Better convergence speed can be obtained by changing the input patterns randomly on each time step, drawing from the set x(1) = [1,−1,...,−1]^T, x(2) = [−1,1,−1,...,−1]^T, ..., x(n) = [−1,−1,...,1]^T. In this two-layer network the first layer has fixed connection weights a_ij, while in the second layer the weights are unknown and are described by the unknown matrix B = A⁻¹.

B.Macukow 11 The network architecture

B.Macukow 12 In order to minimize the local error function E for a single pattern, E = (1/2) Σ_i (y_i − x_i)^2, where y_i is the actual output signal and x_i is the desired output signal, we can apply a standard steepest-descent approach.
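A minimal NumPy sketch of this learning procedure; the matrix A, the learning rate, and the iteration count are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Steepest-descent learning of B ~ A^-1 in the two-layer network:
# the first layer has fixed weights A, the second trainable weights B.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # hypothetical nonsingular matrix
n = A.shape[0]
B = np.zeros((n, n))            # trainable weights, should converge to A^-1

eta = 0.02                      # assumed learning rate
for _ in range(5000):
    x = rng.choice([-1.0, 1.0], size=n)  # random +/-1 pattern (slide 10)
    u = A @ x                   # first-layer output
    y = B @ u                   # second-layer (actual) output
    e = x - y                   # desired output is the input itself
    B += eta * np.outer(e, u)   # gradient step on E = 0.5 * sum((y_i - x_i)^2)

print(np.allclose(B @ A, np.eye(n), atol=1e-6))
```

After training, B @ A is numerically close to the identity, i.e. B approximates A⁻¹.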

B.Macukow 13 Matrix multiplication If the matrix C is equal to the product of the matrices A and B, it fulfills the equation C = AB.

B.Macukow 14 To construct a proper neural network able to solve the problem, it is necessary to define the error (energy) function whose minimization leads to the desired solution. Multiplying both sides by an arbitrary non-zero vector x = [x1, x2, ..., xn]^T we get ABx − Cx = 0

B.Macukow 15 On the basis of this equation we can define the error (energy) function E(C) = (1/2) ||ABx − Cx||^2

B.Macukow 16 A simplified block diagram for matrix multiplication. In reality it is a one-layer network, even though the diagram shows three layers.

B.Macukow 17 Only one of these three layers, the one responsible for the matrix C, is the subject of the learning procedure, realizing the equation y = Cx. After the learning process the network has to fulfill the equation C = AB. In the diagram there are two additional layers with constant weights (the elements of the matrices A and B, respectively).

B.Macukow 18 These layers are used to compute the vector d, according to d = Au = ABx, where u = Bx. Again we can apply a standard steepest-descent algorithm. The adaptation rule has the form c_ij(p+1) = c_ij(p) + η (d_i^(p) − y_i^(p)) x_j^(p), where p is the number of the learning pattern.
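A NumPy sketch of this adaptation rule; the matrices A and B, the learning rate, and the pattern count are illustrative assumptions:

```python
import numpy as np

# Learning C ~ AB: two fixed layers compute d = A(Bx) = ABx, while the
# single trainable layer computes y = Cx; C is adapted by steepest descent.
rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # fixed weights (hypothetical)
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # fixed weights (hypothetical)
n = A.shape[0]
C = np.zeros((n, n))            # trainable weights, should converge to AB

eta = 0.1                       # assumed learning rate
for p in range(2000):
    x = rng.choice([-1.0, 1.0], size=n)  # random +/-1 learning pattern
    u = B @ x                   # first fixed layer
    d = A @ u                   # desired (target) output d = ABx
    y = C @ x                   # actual output of the trainable layer
    C += eta * np.outer(d - y, x)  # c_ij += eta * (d_i - y_i) * x_j

print(np.allclose(C, A @ B))
```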

B.Macukow 19 LU decomposition The standard LU decomposition factors a square matrix A into a lower-triangular matrix L and an upper-triangular matrix U such that A = LU. Generally the LU decomposition is not unique. However, if we require the lower-triangular matrix L to have unit diagonal elements, the factorization is unique.

B.Macukow 20 Multiplying both sides by an arbitrary non-zero vector x = [x1, x2, ..., xn]^T and after some further transformation we get the energy function E(L, U) = (1/2) ||LUx − Ax||^2

B.Macukow 21 The two-layer linear network is more complicated than the networks for matrix inversion or multiplication. Here both layers are the subject of the learning procedure. The connection weights of the first layer are described by the matrix U, and those of the second layer by the matrix L.

B.Macukow 22 A simplified block diagram

B.Macukow 23 The first layer performs a simple linear transformation z = Ux, where x is a given input vector. The second layer performs the transformation y = Lz = LUx. The parallel layer, with weights defined by the elements of the matrix A, is used to calculate the desired (target) output d = Ax.

B.Macukow 24 The weights l_ii are fixed and equal to unity, and the appropriate elements of the matrices L and U (above and below the diagonal, respectively) are fixed at zero. To minimize the error function we apply a simplified back-propagation algorithm.

B.Macukow 25 We get Δl_ij = η e_i^(p) z_j^(p) for i > j, and Δu_ij = η (Σ_k e_k^(p) l_ki) x_j^(p) for i ≤ j

B.Macukow 26 where e_i^(p) = d_i^(p) − y_i^(p) is the actual error of the i-th output element for the p-th pattern x^(p), and z_i^(p) is the actual output of the i-th element of the first layer for the same p-th pattern x^(p).
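Putting the whole LU procedure together, a NumPy sketch; the matrix A, the initialization, the learning rate, and the iteration count are illustrative assumptions. The triangular structure is enforced with masks, and the unit diagonal of L is kept fixed:

```python
import numpy as np

# Back-propagation learning of A = LU with the two-layer linear network:
# first layer z = Ux (trainable upper-triangular U),
# second layer y = Lz (trainable strictly lower part of L, unit diagonal).
rng = np.random.default_rng(2)
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])              # hypothetical matrix with a unique LU
n = A.shape[0]
L = np.eye(n)                           # l_ii fixed at 1, l_ij (i < j) fixed at 0
U = np.eye(n) + 0.01 * np.triu(rng.standard_normal((n, n)))
mask_L = np.tril(np.ones((n, n)), k=-1) # trainable entries of L (i > j)
mask_U = np.triu(np.ones((n, n)))       # trainable entries of U (i <= j)

eta = 0.01                              # assumed learning rate
for p in range(60000):
    x = rng.choice([-1.0, 1.0], size=n) # random +/-1 pattern
    z = U @ x                           # first-layer output
    y = L @ z                           # second-layer (actual) output
    e = (A @ x) - y                     # error vs. desired output d = Ax
    dL = eta * np.outer(e, z) * mask_L        # dl_ij = eta*e_i*z_j,       i > j
    dU = eta * np.outer(L.T @ e, x) * mask_U  # du_ij = eta*(L^T e)_i*x_j, i <= j
    L += dL
    U += dU

print(np.allclose(L @ U, A, atol=1e-3))
```

Both updates are computed from the same pre-update L, matching simultaneous steepest descent on E(L, U).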