Neural Network Time Series Forecasting of Finite-Element Mesh Adaptation
Akram Bitar and Larry Manevitz, Department of Computer Science, University of Haifa
Dan Givoli, Faculty of Aerospace Engineering, Technion - Israel Institute of Technology

Content
- Introduction to the Finite Element Method
- Time-Dependent Partial Differential Equations
- The Finite-Element Mesh Adaptation Problem
- Introduction to Neural Networks
- Time Series Prediction with Neural Networks
- Our Method for Solving the Mesh Adaptation Problem

Finite Element Method (FEM)
What is it?
- One of the most effective numerical techniques for solving problems arising in mathematical physics and engineering
- A widely used numerical technique for solving partial differential equations (PDEs)

Finite Element Method (FEM)
How does it work?
- Divides the PDE's domain into a finite number of elements (the FEM mesh)
- Finds a simple approximation on each element that is:
  - consistent with the initial and boundary conditions
  - consistent with neighboring elements
- The solution is then found by linear algebra techniques
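To make the assemble-and-solve pattern concrete, here is a minimal sketch for a one-dimensional Poisson problem -u''(x) = f(x) with linear elements and homogeneous boundary conditions. This is my own illustration of the general technique, not the solver used in this work.

```python
# A minimal 1D FEM sketch (an illustration, not the authors' solver):
# solve -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, linear elements.
import numpy as np

def fem_1d_poisson(f, n_elements=20):
    n = n_elements + 1                       # number of nodes
    x = np.linspace(0.0, 1.0, n)             # node coordinates
    h = x[1] - x[0]                          # uniform element size
    K = np.zeros((n, n))                     # global stiffness matrix
    b = np.zeros(n)                          # global load vector
    for e in range(n_elements):              # assemble element contributions
        i, j = e, e + 1                      # the two nodes of element e
        # element stiffness for linear basis functions: (1/h)*[[1,-1],[-1,1]]
        K[i, i] += 1.0 / h; K[j, j] += 1.0 / h
        K[i, j] -= 1.0 / h; K[j, i] -= 1.0 / h
        # midpoint quadrature: each node receives f(midpoint) * h / 2
        mid = 0.5 * (x[i] + x[j])
        b[i] += f(mid) * h / 2.0
        b[j] += f(mid) * h / 2.0
    u = np.zeros(n)                          # boundary values already zero
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])  # interior unknowns
    return x, u

# usage: -u'' = pi^2 sin(pi x) has the exact solution u = sin(pi x)
x, u = fem_1d_poisson(lambda s: np.pi**2 * np.sin(np.pi * s))
```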

Time-Dependent Partial Differential Equations
- Hyperbolic: wave equations
- Parabolic: heat equations
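The slide's equations are not reproduced in the transcript; for reference, the canonical one-dimensional forms of these two classes are:

```latex
% hyperbolic example: the wave equation (wave speed c)
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}
% parabolic example: the heat equation (diffusivity \alpha)
\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}
```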

FEM and Time-Dependent PDEs
- A time-dependent PDE is solved repeatedly at successive fixed times, using the previous solution as the starting condition for the next step
- The “areas of interest” propagate through the FEM mesh
- To achieve a good approximation, the mesh should be dynamic and vary with time

FEM and Time-Dependent PDEs
- For time-dependent PDEs, critical regions should be subject to local mesh refinement
- The critical regions are those whose local gradient shows the largest changes
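As an illustration of such an indicator (a sketch under my own assumptions; the `fraction` cutoff is a hypothetical parameter), one could flag the elements whose local gradient changed the most between consecutive steps:

```python
# Hypothetical gradient-change indicator: flag the elements whose local
# gradient magnitude changed the most between two consecutive time steps.
import numpy as np

def critical_elements(grad_now, grad_prev, fraction=0.1):
    """Return the indices of the top `fraction` of elements by change
    in local gradient between the previous and current time steps."""
    change = np.abs(np.asarray(grad_now) - np.asarray(grad_prev))
    k = max(1, int(fraction * len(change)))
    return np.argsort(change)[-k:]           # k elements with largest change
```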

Mesh Adaptation Problem
- In current usage, indicators (e.g. gradients) from the solution at the current time are used to identify where the mesh should be refined at the next time
- The defect of this method is that one is always operating one step behind (behind the “area of interest”)

Mesh Adaptation Problem
(Figure: the mesh is refined where the action was at the previous step, so “we miss the action”.)

Our Method
- Predict the “area of interest” at the next time stage and refine the mesh accordingly
- Time series prediction via neural network methodology is used to predict the “area of interest”
- The neural network receives as input the gradient values at recent times and predicts the gradient values at the next time stage
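A sketch of how such a predict-then-refine loop could be organized follows; all names and signatures are my own placeholders, not the authors' code. The key difference from the standard approach is that refinement happens before the solve, driven by the network's forecast.

```python
# Sketch of a predict-then-refine loop (placeholder names throughout):
# the mesh is refined where the network PREDICTS the gradient will be
# large, rather than where it was large at the previous step.
def adaptive_time_stepping(mesh, u0, predict_gradients, refine, solve_step,
                           n_steps):
    """predict_gradients(history) -> predicted per-element gradients;
    refine(mesh, predicted)       -> locally refined mesh;
    solve_step(mesh, u)           -> (new solution, per-element gradients)."""
    u, history = u0, []
    for _ in range(n_steps):
        if history:                               # need at least one past step
            predicted = predict_gradients(history)  # NN time-series forecast
            mesh = refine(mesh, predicted)          # refine ahead of the action
        u, grads = solve_step(mesh, u)              # FEM solve on current mesh
        history.append(grads)
    return mesh, u
```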

Neural Networks (NN)
What is it?
- A biologically inspired model that tries to simulate the human nervous system
- Consists of elements (neurons) and connections between them (weights)
- Can be trained to perform complex functions (e.g. classification) by adjusting the values of the weights

Neural Networks (NN)
How does it work?
- Each input signal is multiplied by its weight; the weighted inputs are summed and then processed by the neuron
- The NN weights are updated through a training scheme (e.g. the back-propagation algorithm)
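As a minimal illustration of this computation (the logistic sigmoid activation is an assumption here, not stated on the slide):

```python
# One artificial neuron: inputs times weights, summed with a bias, then
# passed through an activation function (logistic sigmoid assumed).
import numpy as np

def neuron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
```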

Feed-Forward Networks
Train the net over an input set until convergence occurs:
- Step 1: Initialize the weights
- Step 2: Feed the input signal forward (from the input layer, through the hidden layers, to the output layer)
- Step 3: Compute the error signal (the difference between the NN output and the desired output)
- Step 4: Feed the error signal backward and update the weights (in order to minimize the error)
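A minimal numpy sketch of these four steps for a single hidden layer (squared-error loss, sigmoid activations, no bias terms; these choices are assumptions of mine, not the authors' code):

```python
# Minimal numpy sketch of the four training steps for one hidden layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, n_hidden=6, lr=0.5, epochs=200):
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))   # Step 1: initialize weights
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                       # Step 2: feed forward
        y = sigmoid(h @ W2)
        err = Y - y                               # Step 3: error signal
        delta2 = err * y * (1.0 - y)              # Step 4: back-propagate and
        delta1 = (delta2 @ W2.T) * h * (1.0 - h)  #   update the weights
        W2 += lr * h.T @ delta2
        W1 += lr * X.T @ delta1
    return W1, W2
```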

Time Series Prediction Using NN
What is a time series?
- A series of data where past values may influence future values (the future value is a nonlinear function of its past m values)
- A neural network can be used as a nonlinear model that is trained to map past values of a time series to future values
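In practice this means turning the series into (past m values, next value) training pairs; a small sketch:

```python
# Turn a series into (past m values -> next value) training pairs.
import numpy as np

def make_windows(series, m):
    series = np.asarray(series)
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]
    return X, y       # X[i] holds m past values; y[i] is the value after them

# usage on a toy series
X, y = make_windows(np.sin(np.linspace(0.0, 20.0, 200)), m=4)
```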

Applying NNs to Time-Dependent PDEs

Neural Network Architecture
- Two networks: one for boundary elements and the other for interior elements
- Network input: eight input units (six for the boundary-element network), holding the gradients of the element and its neighbors at the current and previous times
- Hidden layers: one hidden layer with six units
- Network output: one output unit, giving the prediction of the gradient value at the next time stage
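For concreteness, the interior-element network could be written as follows. This is a hypothetical Keras sketch; the transcript does not specify the authors' implementation, and the optimizer, loss, and activations are assumptions.

```python
# Sketch of the 8-6-1 interior-element network from the slide: 8 inputs
# (element + neighbor gradients at the current and previous times), one
# hidden layer of 6 units, 1 output (predicted next-time gradient).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),               # 6 for the boundary net
    tf.keras.layers.Dense(6, activation="sigmoid"),  # one hidden layer, 6 units
    tf.keras.layers.Dense(1),                        # next-time gradient value
])
model.compile(optimizer="sgd", loss="mse")
# model.fit(X_train, y_train, epochs=200)  # ~600 examples, <=200 epochs (slides)
```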

Training Phase
Training set:
- We calculate the solution on the initial non-dynamic mesh over the whole given time span
- We chose random examples (about 600) and trained the net over this set to predict the gradient
Training performance:
- In all the experiments we have done so far, the network training took at most 200 epochs to converge to an extremely small error

One-Dimensional Wave Equation
(Slide shows the PDE and its analytic solution.)

Two-Dimensional Wave Equation
(Slide shows the PDE and its analytic solution.)

Results at Time = 0.4
(Figure: comparison of the neural network predictor, the “standard” gradient indicator, the analytic solution, and the FEM solution.)

Summary
- We have shown that time series prediction via neural networks can accurately predict the gradient values
- By applying the NN predictor, we obtained a substantial numerical improvement over the current methods