
NEURAL-NETWORK ANALYSIS AND ITS APPLICATIONS. DATA FILTERS. Saint-Petersburg State University, JASS 2006

About me
Name: Alexey Minin
Place of study: Saint-Petersburg State University
Current semester: 7th semester
Fields of interest: neural nets, data filters for optics (holography), computational physics, econophysics.

Contents:
What is a Neural Net and its applications
Neural Net analysis
Self-organizing Kohonen maps
Data filters
Obtained results

What is a Neural Net and its applications
Recognition of images
Processing of noisy signals
Completion of images
Associative search
Classification
Scheduling
Optimization
Forecasting
Diagnostics
Prediction of risks

What is a Neural Net and its applications. Example: recognition of images.

What is a Neural Net and its applications

Neural Net analysis: paradigms of neurocomputing
Connectionism
Locality and parallelism of computations
Training based on data (instead of programming)
Universality of training algorithms

Neural Net analysis: what is a neuron?
A typical formal neuron performs the most elementary operation: it weighs the values of its inputs with locally stored weights and applies a nonlinear transformation to their sum. In other words, a neuron performs a nonlinear operation on a linear combination of its inputs.
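As an illustration only (the presentation itself gives no code), here is a minimal Python sketch of such a formal neuron; the logistic sigmoid nonlinearity and the example numbers are assumptions.

```python
import numpy as np

def formal_neuron(x, w, b=0.0):
    """One formal neuron: a nonlinear function of a weighted sum of inputs.

    x : input vector, w : locally stored weights, b : bias.
    The logistic sigmoid is used as the nonlinearity (an assumption;
    any monotone squashing function such as tanh works the same way)."""
    s = np.dot(w, x) + b              # linear combination of the inputs
    return 1.0 / (1.0 + np.exp(-s))   # nonlinear transformation of the sum

# Example: a 3-input neuron
print(formal_neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.3])))
```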

Neural Net analysis: connectionism
Formal neurons are arranged in layers and linked by global connections.

Neural Net analysis: locality and parallelism of computations
Locality of information processing: each neuron reacts only to the information coming from the neurons connected to it, without appealing to any global plan of computation.
Parallelism of computations: the neurons are able to operate in parallel.

Comparison of ANN & BNN

                     Brain              PC (IBM)
Propagation speed    Vprop = 100 m/s    Vprop = 3*10^8 m/s
Clock rate           ~100 Hz            ~10^9 Hz
Number of elements   N = … neurons      N = 10^9

The degree of parallelism is ~10^14: as if that many 100 Hz processors were connected and working at the same time.

Neural Net analysis: training based on data (instead of programming)
There is no global plan: information is distributed over the network, with the corresponding adaptation of the neurons.
The algorithm is not set in advance; it is generated by the data.
Each neuron locally changes its own selected parameters, the synaptic weights.
The network is trained on a small share of all possible situations (the training patterns), after which it is able to operate over a much wider range of patterns: this is the ability to generalize.

Neural Net analysis: universality of training algorithms
The single principle of learning is to find the minimum of the empirical error, where W is the set of synaptic weights and E(W) is the error function. The task is to find the global minimum of E(W); stochastic optimization is a way to avoid getting stuck in a local minimum.
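A minimal sketch of this idea, assuming a toy quadratic error function and an added noise term for the stochastic part; the learning rate and noise schedule are illustrative choices, not values from the presentation.

```python
import numpy as np

def train(error_grad, w0, lr=0.05, noise=0.1, steps=2000, rng=None):
    """Minimize an empirical error E(W) by noisy (stochastic) gradient descent.

    error_grad : function returning dE/dW at the current weights,
    noise      : scale of a decaying random perturbation that helps the
                 search escape shallow local minima."""
    rng = rng or np.random.default_rng(0)
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        g = error_grad(w)
        w -= lr * g + noise / (1 + t) * rng.standard_normal(w.shape)
    return w

# Toy example: E(W) = ||W - 1||^2 has its global minimum at W = (1, 1)
print(train(lambda w: 2 * (w - 1.0), w0=[5.0, -3.0]))
```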

Neural Net analysis: basic neural nets
Perceptron
Hopfield network
Kohonen maps
Probabilistic neural nets
General regression neural nets
Polynomial nets

Neural Net analysis: the architecture of NNs
Layer-by-layer (feedforward, without feedback)
Recurrent, with feedback (Elman-Jordan)
These are the prototypes of any neural architecture.

Neural Net analysis: classification of NNs by type of training
Supervised (with a teacher) vs. unsupervised (without a teacher).
In the unsupervised case the network is asked to find the hidden regularities in the data on its own. Redundancy in the data allows compression of the information, so the network can be taught to find the most compact representation of the data, i.e. an optimal coding of the given kind of input information.

Methodology of self-organizing maps
Self-organizing Kohonen maps are a type of neural network trained without a teacher. The network forms its outputs on its own, adapting to the signals arriving at its input. The only "teacher" is the data itself, i.e. the information contained in it: the regularities that distinguish the input data from random noise. The maps combine two kinds of information compression:
Lowering the dimensionality of the data with minimal loss of information
Reducing the variety of the data by selecting a finite set of prototypes and assigning each data point to one of them

Methodology of self-organizing maps: schematic representation of a self-organizing network
The neurons of the output layer are ordered and correspond to the cells of a two-dimensional map, which can be colored according to the similarity of attributes.

Hebb training rule (Hebb, 1949)
The change of a weight at the presentation of the i-th example is proportional to the neuron's input and output; in vector representation Δw = η y x (component-wise Δw_j = η x_j y).
If training is formulated as an optimization problem, a neuron trained by Hebb's rule tends to increase the amplitude of its output, i.e. it maximizes ⟨y²⟩, where the averaging is taken over the training sample.
Hebbian training in the form described above is not useful in practice, since it leads to unlimited growth of the weight amplitudes. NB: in this case the error function has no minimum.
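A tiny numeric sketch of this effect on synthetic data (the data, learning rate, and linear neuron are all assumptions): the plain Hebb update makes the weight amplitude blow up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))   # hypothetical training sample
w = 0.1 * rng.standard_normal(3)     # small initial weights
eta = 0.01

for x in X:                          # plain Hebb rule: dw = eta * y * x
    y = w @ x                        # linear neuron output
    w += eta * y * x

print(np.linalg.norm(w))             # the weight amplitude has grown enormously
```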

Oja training rule
A term is added that counteracts the unlimited growth of the weights; in vector representation Δw = η y (x − y w).
Oja's rule maximizes the sensitivity of the neuron's output at a limited weight amplitude. It is easy to verify this by setting the average weight change to zero and multiplying the right-hand side of the equality by w: in equilibrium ⟨y²⟩(1 − |w|²) = 0, so the weights of the trained neuron lie on the hypersphere |w| = 1.
Under Oja training, the weight vector of the neuron settles on the unit hypersphere, in the direction that maximizes the projection of the input vectors.
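For contrast, a sketch of Oja's rule on synthetic anisotropic data (all numbers are illustrative): the extra term keeps the weight vector on the unit hypersphere, and the vector aligns with the direction of maximal input variance.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3)) @ np.diag([2.0, 1.0, 0.5])  # anisotropic inputs
w = 0.1 * rng.standard_normal(3)
eta = 0.005

for x in X:                           # Oja's rule: dw = eta * y * (x - y * w)
    y = w @ x
    w += eta * y * (x - y * w)        # the -y^2 * w term bounds the weights

print(np.linalg.norm(w))              # ~1: w settles on the unit hypersphere
print(w)                              # points along the first principal direction
```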

Competition of neurons: the winner takes all
Basic algorithm. During training of a competitive layer the norm of the weight vectors is kept constant. The winner is the neuron giving the greatest response to the given input stimulus, i* = argmax_i (w_i · x). Only the winner is trained: Δw_{i*} = η (x − w_{i*}).

The winner takes all: the Kohonen modification
One variant of the basic competitive-layer training rule consists in training not only the winner neuron but also its "neighbors", though at a lower rate. This approach, "pulling up" the neurons nearest to the winner, is used in topographic Kohonen maps. The modified Kohonen training rule is Δw_i = η Λ(i, i*) (x − w_i), where the neighborhood function Λ(i, i*) equals one for the winner neuron with index i* and gradually decreases with distance from the winner. Kohonen training resembles stretching an elastic grid of prototypes over the data of the training sample.
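A minimal self-organizing-map sketch of this modified Kohonen rule; the grid size, learning-rate and neighborhood schedules are illustrative assumptions, not the settings used in the presentation.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, eta=0.5, sigma=3.0, seed=0):
    """Minimal Kohonen SOM: winner-take-all plus a Gaussian neighborhood
    Lambda(i, i*) that shrinks with distance on the map grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # grid coordinates of every neuron and random initial prototypes
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.standard_normal((rows * cols, data.shape[1]))
    for epoch in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(w - x, axis=1))    # best-matching unit
            d2 = np.sum((coords - coords[winner]) ** 2, axis=1)  # squared grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))                   # neighborhood function
            w += eta * h[:, None] * (x - w)                      # modified Kohonen rule
        eta *= 0.9                                               # anneal the learning rate
        sigma *= 0.9                                             # shrink the neighborhood
    return w.reshape(rows, cols, -1)

# Example: stretch a 10x10 grid of prototypes over three-dimensional data
som = train_som(np.random.default_rng(1).random((500, 3)))
print(som.shape)   # (10, 10, 3): each map cell holds a prototype vector
```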

Two-dimensional topographic map of a set of three-dimensional data
Each point in three-dimensional space falls into the grid cell whose coordinates are those of the nearest neuron of the two-dimensional map.

A convenient tool for visualizing data is the coloring of topographic maps, similar to how it is done on ordinary geographic maps. Each attribute of the data generates its own coloring of the map cells, by the average value of that attribute over the data points that fall into a given cell. This gives a visualization of the topographic map induced by the i-th component of the input data. Collecting together the maps of all attributes of interest, we obtain a topographic atlas giving an integrated representation of the structure of the multivariate data.

Methodology of self-organizing maps: classified SOM for the NASDAQ100 index for the period from 10-Nov-1997 to 27-Aug-2001

Change over time of the log-price of the shares of JP Morgan Chase (top plot) and American Express (bottom plot) for the period from 10-Jan-1994 to 27-Oct-1997.
Change over time of the log-price of the shares of JP Morgan Chase (top plot) and Citigroup (bottom plot) for the period from 10-Nov-1997 to 27-Aug-2001.

How to choose a variant? This is a forecast of the sea level (the Caspian Sea).

DATA FILTERS
Custom filters (e.g. Fourier filter)
Adaptive filters (e.g. Kalman filter)
Empirical mode decomposition
Hölder exponent

Adaptive filters must not change the phase
Keep in mind that we are going to make forecasts; that is why we need filters that do not change the phase of the signal.
(Block diagram of a direct-form IIR filter: the delayed inputs x(n), x(n-1), ..., x(n-nb) are weighted by the feedforward coefficients b(2), b(3), ..., b(nb+1), and the delayed outputs y(n-1), y(n-2), ... are fed back through the coefficients -a(2), -a(3), ..., -a(na+1); Z^-1 denotes a unit delay.)
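The presentation does not say which zero-phase filter was used; one common way to get zero phase distortion is forward-backward filtering, sketched here with SciPy (the filter order, cutoff, and test signal are assumptions).

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical noisy price-like series standing in for the real data
t = np.linspace(0.0, 10.0, 1000)
x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

# Low-pass IIR filter; filtfilt runs it forward and then backward, so the
# phase shifts cancel and the filtered signal has zero phase distortion.
b, a = butter(N=4, Wn=0.05)     # 4th-order Butterworth, cutoff as a fraction of Nyquist
smoothed = filtfilt(b, a, x)
print(smoothed[:5])
```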

Adaptive filters
We preserved all the maxima and there is no phase distortion. (Siemens price, adjusted close, scaled.)

Adaptive filters
Let us try to predict the next value from the zero-phase filtered data, given the historical price. I used a perceptron with 3 hidden layers, a logistic activation function, the rotation training algorithm, and about 20 minutes of training.
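A rough stand-in for that setup, only to show the lagged-input forecasting arrangement: the presenter used NeuroShell, while here scikit-learn's MLPRegressor with an L-BFGS solver replaces the rotation algorithm, and the series is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=10):
    """Build (inputs, target) pairs: predict x[t] from the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

# 'filtered' stands for the zero-phase filtered price series (synthetic here)
filtered = np.sin(np.linspace(0, 20, 600)) + 0.05 * np.random.default_rng(0).standard_normal(600)
X, y = make_lagged(filtered)

# Perceptron with 3 hidden layers and a logistic activation, echoing the slide
model = MLPRegressor(hidden_layer_sizes=(20, 20, 20), activation='logistic',
                     solver='lbfgs', max_iter=2000, random_state=0).fit(X[:-50], y[:-50])
print(model.predict(X[-50:])[:5])   # one-step-ahead forecasts on the held-out tail
```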

Adaptive filters: the Kalman filter
(Block diagram of the Kalman filter: gain K(n), summation nodes, a unit delay Z^-1, and the model coefficients a and c.)
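A minimal scalar Kalman-filter sketch for a state-space model with coefficients a and c; the noise variances and the test signal are illustrative assumptions, not values from the presentation.

```python
import numpy as np

def kalman_1d(z, a=1.0, c=1.0, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model
        x(n) = a * x(n-1) + process noise (variance q)
        z(n) = c * x(n)   + measurement noise (variance r)."""
    x, p, out = x0, p0, []
    for zn in z:
        # predict
        x = a * x
        p = a * p * a + q
        # update with the Kalman gain K(n)
        k = p * c / (c * p * c + r)
        x = x + k * (zn - c * x)
        p = (1.0 - k * c) * p
        out.append(x)
    return np.array(out)

# Example: filter a noisy constant signal
rng = np.random.default_rng(0)
print(kalman_1d(1.0 + 0.1 * rng.standard_normal(50))[-5:])
```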

Adaptive filters
Let us use the Kalman filter as an error estimator for the forecast of the zero-phase filtered data.

Empirical Mode Decomposition What is it? We can heuristically define a (local) high-frequency part {d(t), t− ≤ t ≤ t+}, or local detail, which corresponds to the oscillation terminating at the two minima and passing through the maximum which necessarily exists in between them. For the picture to be complete, one still has to identify the corresponding (local) low-frequency part m(t), or local trend, so that we have x(t) = m(t) + d(t) for t− ≤ t ≤ t+.

What is it? Empirical Mode Decomposition

Algorithm
Given a signal x(t), the effective algorithm of EMD can be summarized as follows:
1. identify all extrema of x(t)
2. interpolate between the minima (resp. maxima), ending up with an envelope emin(t) (resp. emax(t))
3. compute the mean m(t) = (emin(t) + emax(t)) / 2
4. extract the detail d(t) = x(t) - m(t)
5. iterate on the residual m(t)
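A compact sketch of one such iteration in Python; the cubic-spline envelopes and the two-tone test signal are assumptions, and a full EMD would additionally sift the detail until it becomes an intrinsic mode function.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def emd_step(x, t):
    """One EMD iteration: split x(t) into local detail d(t) and local trend m(t),
    following the five steps listed above."""
    maxima = argrelextrema(x, np.greater)[0]          # 1. identify all extrema
    minima = argrelextrema(x, np.less)[0]
    e_max = CubicSpline(t[maxima], x[maxima])(t)      # 2. upper envelope
    e_min = CubicSpline(t[minima], x[minima])(t)      #    lower envelope
    m = (e_min + e_max) / 2.0                         # 3. mean of the envelopes
    d = x - m                                         # 4. detail
    return d, m                                       # 5. iterate on m(t)

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
detail, trend = emd_step(x, t)
print(detail[:3], trend[:3])
```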

Empirical Mode Decomposition: let us do it for the Siemens index.

Empirical Mode Decomposition: let us do it for the Siemens index. We preserved all strong maxima and there is no phase distortion.

Empirical Mode Decomposition: let us make a forecast for the Siemens index. There was no delay in the forecast at all!

Hölder exponent
The main idea is as follows. Hölder's condition states that
|f(t) - f(t0)| <= C |t - t0|^h(t0),
so this formula is a kind of bridge between "bad" (irregular) functions and "good" (smooth) functions. Looking at the formula more closely, we notice that we can catch the moments in time when the function "knows" that it is about to change its behavior from one regime to another. This means that today we can make a forecast of tomorrow's behavior, although one should mention that we do not know the sign of the coming change, only that the behavior is going to change.
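A very rough illustration of estimating a local Hölder exponent by a log-log fit of local oscillations; this is a simplified stand-in for the wavelet-based estimators in tools such as FracLab, not the method of the presentation, and the test signal is synthetic.

```python
import numpy as np

def local_holder(x, i, scales=(2, 4, 8, 16, 32)):
    """Rough local Hölder-exponent estimate at sample i: regress the log of the
    local oscillation (max - min) against the log of the window half-width."""
    osc, sc = [], []
    for s in scales:
        lo, hi = max(0, i - s), min(len(x), i + s + 1)
        osc.append(np.ptp(x[lo:hi]) + 1e-12)   # oscillation in the window
        sc.append(s)
    slope, _ = np.polyfit(np.log(sc), np.log(osc), 1)
    return slope                                # estimated h at sample i

# Example: h is low (irregular behavior) at the kink, close to 1 where smooth
t = np.linspace(-1, 1, 2001)
signal = np.abs(t) ** 0.3                       # Hölder exponent 0.3 at t = 0
print(local_holder(signal, 1000), local_holder(signal, 1500))
```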

Results

Thank you! Any questions? Suggestions? Ideas?
Software I am using: 1) MatLab 2) NeuroShell 3) FracLab 4) Statistica 5) Builder C++