Application of Genetic Algorithms and Neural Networks to the Solution of Inverse Heat Conduction Problems
A Tutorial
Keith A. Woodbury, Mechanical Engineering Department
4th Int. Conf. Inv. Probs. Eng., May 28, 2002

Paper, presentation, and programs are not on the CD; they are available from www.me.ua.edu/inverse

Overview
Genetic Algorithms
What are they? How do they work?
Application to simple parameter estimation
Application to the boundary inverse heat conduction problem

Overview
Neural Networks
What are they? How do they work?
Application to simple parameter estimation
Discussion of the boundary inverse heat conduction problem

MATLAB®
Integrated environment for computation and visualization of results
Simple programming language
Optimized algorithms
Add-in toolbox for Genetic Algorithms

Genetic Algorithms: What are they?
GAs perform a random search of a defined N-dimensional solution space
GAs mimic the processes in nature that led to the evolution of higher organisms: natural selection (“survival of the fittest”), reproduction, crossover, mutation
GAs do not require any gradient information and therefore may be suitable for nonlinear problems

Genetic Algorithms: How do they work?
A population of genes is evaluated using a specified fitness measure
The best members of the population are selected for reproduction to form the next generation, so the new population is related to the old one in a particular way
Random mutations occur to introduce new characteristics into the new generation

Genetic Algorithms
Rely heavily on random processes: a random number generator will be called thousands of times during a simulation
Searches are inherently computationally intensive
Usually will find the global max/min within the specified search domain

Genetic Algorithms: Basic scheme
(1) Initialize population
(2) Evaluate fitness of each member
(3) Reproduce with fittest members
(4) Introduce random mutations in the new generation
Continue (2)-(3)-(4) until the prespecified number of generations is complete
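As a minimal MATLAB sketch of this loop (init_pop, fitness, reproduce, and mutate are hypothetical helper functions standing in for the operators described on the following slides):

% Minimal GA driver (sketch; helper functions are assumed, not from the talk)
Npop = 100; Ngen = 100; Nbest = 10;        % population, generations, parents
low = -5; high = 5; mut_chance = 0.7;      % search domain, mutation threshold
pop = init_pop(Npop, low, high);           % (1) random initial population
for gen = 1:Ngen
    f = fitness(pop);                      % (2) evaluate each member
    [fsort, idx] = sort(f);                % smallest misfit = fittest
    parents = pop(idx(1:Nbest), :);        % select the Nbest members
    pop = reproduce(parents, Npop);        % (3) form the next generation
    pop = mutate(pop, mut_chance, low, high);  % (4) random mutations
end
f = fitness(pop);                          % rank the final generation
[fbest, ibest] = min(f);
best = pop(ibest, :);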

Role of Forward Solver
Provides evaluations of the candidates in the population
Similar to its role in a conventional inverse problem

Elitism
Keep the best members of a generation to ensure that their characteristics continue to influence subsequent generations

Encoding
Population stored as coded “genes”
Binary encoding: represents data as strings of binary numbers; useful for certain GA operations (e.g., crossover)
Real number encoding: represents data as arrays of real numbers; useful for engineering problems

Binary Encoding – Crossover Reproduction [figure]
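The crossover figure is not in the transcript; a sketch of the standard single-point scheme it presumably illustrates:

% Single-point crossover of two parent bit strings (sketch)
parentA = [1 0 1 1 0 1 0 0];                   % illustrative parents
parentB = [0 1 0 0 1 1 1 0];
nbits = length(parentA);
cut = randi(nbits - 1);                        % random crossover point
child1 = [parentA(1:cut), parentB(cut+1:end)]; % swap tails beyond the cut
child2 = [parentB(1:cut), parentA(cut+1:end)];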

Binary Encoding: Mutation
Generate a random number for each “chromosome” (bit); if the random number is greater than a “mutation threshold” selected before the simulation, then flip the bit
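A sketch of this bit-flip rule (the threshold value is illustrative):

% Bit-flip mutation: flip a bit when its random draw exceeds the threshold
mut_threshold = 0.99;                 % assumed value for illustration
gene = [1 0 1 1 0 1 0 0];
r = rand(size(gene));                 % one random number per bit
flip = r > mut_threshold;             % bits whose draw passes the threshold
gene(flip) = 1 - gene(flip);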

Real Number Encoding
Genes stored as arrays of real numbers
Parents selected by sorting the population best to worst and taking the top Nbest for random reproduction

Real Number Encoding: Reproduction
Weighted average of the parent arrays: Ci = w*Ai + (1-w)*Bi, where w is a random number, 0 ≤ w ≤ 1
If the ordering within the arrays is relevant, use a crossover-like scheme on the children
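A sketch of this weighted-average reproduction for two real-valued parents (the parent arrays are illustrative):

% Weighted-average reproduction of two real-number genes
A = [0.2 1.4 -0.7];                   % parent arrays (illustrative values)
B = [0.5 0.9 -0.2];
w = rand;                             % random weight, 0 <= w <= 1
C = w*A + (1 - w)*B;                  % child lies between the parents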

Real Number Encoding: Mutation
If the mutation threshold is passed, replace the entire array with a randomly generated one
Introduces large changes into the population

Real Number Encoding: Creep
If a “creep threshold” is passed, scale the member of the population with Ci = (1 + w)*Ci, where w is a random number in the range 0 ≤ w ≤ wmax
Both the creep threshold and wmax must be specified before the simulation begins
Introduces small-scale changes into the population
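A sketch of the creep operator, reading “threshold is passed” the same way as the mutation rule above (random draw exceeds the threshold); that reading is an assumption:

% Creep: small multiplicative rescaling of one population member
creep_chance = 0.9;                   % creep threshold (set before the run)
wmax = 0.5;                           % maximum creep weight (set before the run)
C = [0.2 1.4 -0.7];                   % one population member (illustrative)
if rand > creep_chance                % assumed sense of "threshold is passed"
    w = wmax * rand;                  % random w in 0 <= w <= wmax
    C = (1 + w) * C;                  % small-scale change
end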

Simple GA Example
Given two or more points that define a line, determine the “best” value of the intercept b and the slope m
Use a least-squares criterion to measure fitness: S(b, m) = Σ [yi – (b + m·xi)]²

Make up some data
>> b = 1; m = 2;
>> xvals = [1 2 3 4 5];
>> yvals = b*ones(1,5) + m*xvals
yvals =
     3     5     7     9    11
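A least-squares fitness function for this example might look like the following sketch (the function name and interface are illustrative, not from the talk):

% Least-squares fitness of one (b, m) candidate against the data
% (save as line_fitness.m)
function S = line_fitness(candidate, xvals, yvals)
b = candidate(1);
m = candidate(2);
r = yvals - (b + m*xvals);            % residual at each data point
S = sum(r.^2);                        % smaller S = fitter member
end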

Parameters
Npop – number of members in the population
(low, high) – real-number pair specifying the domain of the search space
Nbest – number of the best members to use for reproduction at each new generation

Parameters
Ngen – total number of generations to produce
Mut_chance – mutation threshold
Creep_chance – creep threshold
Creep_amount – parameter wmax

Parameters
Npop = 100
(low, high) = (-5, 5)
Nbest = 10
Ngen = 100

SimpleGA – Results (exact data) [figure]

SimpleGA – Convergence History [figure]

SimpleGA – Results (1% noise) [figure]

SimpleGA – Results (10% noise) [figure]

SimpleGA – 10% noise [figure]

Heat function estimation
Each member of the population is an array of Nunknown values representing the piecewise-constant heat flux components
A discrete Duhamel's summation is used to compute the response of the 1-D domain
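A sketch of the discrete Duhamel's summation, assuming a precomputed unit-step temperature response phi from the forward solver (the numerical values below are purely illustrative):

% Discrete Duhamel's summation for a piecewise-constant flux history
q = [0.5 1.0 0.5];                    % flux components (illustrative)
phi = [0.21 0.35 0.46];               % unit-step response at the sensor (illustrative)
M = length(q);
dphi = [phi(1), diff(phi)];           % incremental step response, phi(0) = 0
T = zeros(1, M);
for i = 1:M
    for j = 1:i
        T(i) = T(i) + q(j) * dphi(i - j + 1);  % superpose the flux steps
    end
end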

Make up some data
Use Duhamel's summation with Δt = 0.001
Assume the classic triangular heat flux

“Data” [figure]

Two data sets
“Easy” problem – large Δt: choose every third point from the generated set
Harder problem – small Δt: use all the data from the generated set

GA program modifications
Let Ngen, mut_chance, creep_chance, and creep_amount be vectors; this facilitates a dynamic strategy
Example:
Ngen = [100 200]
mut_chance = [0.7 0.5]
means let mut_chance = 0.7 for 100 generations and then let mut_chance = 0.5 until 200 generations
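A sketch of looking up the value in effect for the current generation under this vector convention:

% Staged-parameter lookup: which mut_chance applies at generation 'gen'?
Ngen = [100 200];
mut_chance = [0.7 0.5];
gen = 150;                            % current generation
stage = find(gen <= Ngen, 1);         % first stage whose limit is not yet reached
mc = mut_chance(stage);               % 0.7 through gen 100, then 0.5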

GA program modifications
After completion of each pass of the Ngen array, redefine (low, high) based on the (min, max) of the best member of the population
Nelite = 5

“Easy” Problem, Δt = 0.18
First try:
Npop = 100
(low, high) = (-1, 1)
Nbest = 10
Ngen = 100

“Easy” Problem: Npop = 100, Nbest = 10, Ngen = 100 [results figures]

“Easy” Problem – another try
Use a variable parameter strategy:
Nbest = 20
Ngen = [200 350 500 650 750]
mut_chance = [0.9 0.7 0.5 0.3 0.1]
creep_chance = [0.9 0.9 0.9 0.9 0.9]
creep_amount = [0.7 0.5 0.3 0.1 0.05]

“Easy” Problem – another try [results figures]

“Hard” Problem: small time step data, Δt = 0.06
Use the same parameters as before:
Nbest = 20
Ngen = [200 350 500 650 750]
mut_chance = [0.9 0.7 0.5 0.3 0.1]
creep_chance = [0.9 0.9 0.9 0.9 0.9]
creep_amount = [0.7 0.5 0.3 0.1 0.05]

“Hard” Problem [results figures]

“Hard” Problem: What's wrong?
The ill-posedness of the problem becomes apparent as Δt becomes small
Solution: add a Tikhonov regularizing term to the objective function
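The talk gives only the regularization parameter (α1 below); a sketch of the regularized objective, assuming the first-order Tikhonov form that the α1 notation suggests:

% Tikhonov-regularized objective: data misfit plus first-order flux penalty
alpha1 = 1e-3;
q = [0.4 0.9 1.0 0.6 0.2];            % candidate flux components (illustrative)
T_meas = [0.08 0.25 0.44 0.52 0.47];  % measured temperatures (illustrative)
T_calc = [0.10 0.24 0.46 0.50 0.45];  % Duhamel forward solution for q (illustrative)
r = T_meas - T_calc;
S = sum(r.^2) + alpha1 * sum(diff(q).^2);  % fitness to be minimized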

“Hard” Problem with α1 = 1.e-3 [results figures]

Genetic Algorithms – Conclusions
GAs are a random search procedure
The domain of the solution must be known
GAs are computationally intensive
GAs can be applied to ill-posed problems but cannot bypass the ill-posedness of the problem
Selection of solution parameters for GAs is important for a successful simulation

Neural Networks: What are they?
Collection of neurons interconnected with weights
Intended to mimic the massively parallel operations of the human brain
Act as interpolative functions for a given set of facts

Neural Networks: How do they work?
The network is trained with a set of known facts that cover the solution space
During training, the weights in the network are adjusted until the correct answer is given for all the facts in the training set
After training, the weights are fixed and the network answers questions not in the training data; these “answers” are consistent with the training data

Neural Networks
The neural network “learns” the relationship between the inputs and outputs by adjustment of the weights in the network
When confronted with facts not in the training set, the weights and activation functions act to compute a result consistent with the training data

Neural Networks
Concurrent NNs accept all inputs at once
Recurrent or dynamic NNs accept input sequentially and may have one or more outputs fed back to the input
We consider only concurrent NNs

Role of Forward Solver
Provide a large number of (input, output) data sets for training

Neural Networks [figure: input layer, hidden layer, output layer]

Neurons [figure: inputs weighted by w1 … wn, summation Σ, activation function]
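In equation form (the standard neuron model implied by the figure labels): a = f( Σi wi·xi ), where the xi are the inputs, the wi the weights, and f the activation function.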

MATLAB® Toolbox
Facilitates easy construction, training, and use of NNs
Concurrent and recurrent networks
Linear, tansig, and logsig activation functions
Variety of training algorithms: backpropagation, descent methods (CG, steepest descent, …)
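A sketch of building and training a small network with the classic toolbox interface (newff/train/sim); the exact calls used in the talk are not shown, so the settings below are assumptions:

% Build, train, and query a 6-12-2 feedforward network (sketch)
P = rand(6, 20);                      % training inputs: 20 columns of 6 y-values
T = rand(2, 20);                      % training targets: [b; m] per column
net = newff(minmax(P), [12 2], {'tansig' 'purelin'});
net.performFcn = 'sse';               % sum-squared-error performance measure
net.trainParam.goal = 1e-8;           % train until SSE < 1e-8
net = train(net, P, T);               % adjust weights to fit the training facts
answers = sim(net, P);                % fixed-weight network answers questions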

Simple Parameter Estimation
Given a number of points on a line, determine the slope (m) and intercept (b) of the line
Fix the number of points at 6
Restrict the domain to 0 ≤ x ≤ 1
Restrict the range of b and m to [0, 1]

Simple Example
First approach: let the six values of x be fixed at 0, 0.2, 0.4, 0.6, 0.8, and 1.0
Inputs to the network will be the six values of y corresponding to these x values
Outputs of the network will be the slope m and intercept b

Simple Example
Training data: 20 columns of 6 rows of y data
Columns 1 through 8:
     0    0.2500    0.5000    0.7500    1.0000    1.0000    0.7500    0.5000
0.2000    0.4000    0.6000    0.8000    1.0000    1.0000    0.8000    0.6000
0.4000    0.5500    0.7000    0.8500    1.0000    1.0000    0.8500    0.7000
0.6000    0.7000    0.8000    0.9000    1.0000    1.0000    0.9000    0.8000
0.8000    0.8500    0.9000    0.9500    1.0000    1.0000    0.9500    0.9000
1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000
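These columns are consistent with sampling y = b + m·x at the six fixed x values; a sketch of generating such training pairs, with the first three (b, m) pairs read off the data above:

% Generate training columns y = b + m*x at the six fixed x values
x = [0 0.2 0.4 0.6 0.8 1.0]';
bm = [0 1; 0.25 0.75; 0.5 0.5];       % (b, m) pairs inferred from columns 1-3
P = zeros(6, size(bm, 1));
for k = 1:size(bm, 1)
    P(:, k) = bm(k, 1) + bm(k, 2)*x;  % six y-values for this line
end
T = bm';                              % targets: [b; m] for each column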

Simple Example: Network1
6 inputs – linear activation function
12 neurons in hidden layer – tansig activation function
2 outputs – linear activation function
Trained until SSE < 10^-8

Test Input Data
input_data1 =
0.9000    0.1000    0.9000    0.3000    0.5000    0.3000    0.7000    0.7000
1.0400    0.2800    0.9200    0.4000    0.5600    0.4400    0.7600    0.8800
1.1800    0.4600    0.9400    0.5000    0.6200    0.5800    0.8200    1.0600
1.3200    0.6400    0.9600    0.6000    0.6800    0.7200    0.8800    1.2400
1.4600    0.8200    0.9800    0.7000    0.7400    0.8600    0.9400    1.4200
1.6000    1.0000    1.0000    0.8000    0.8000    1.0000    1.0000    1.6000
output_data1 =
0.9000    0.1000    0.9000    0.3000    0.5000    0.3000    0.7000    0.7000
0.7000    0.9000    0.1000    0.5000    0.3000    0.7000    0.3000    0.9000

Simple Example: Network1 – Test data1 results
     b         m       bNN       mNN
0.9000    0.7000    0.8973    0.7236
0.1000    0.9000    0.0997    0.9006
0.9000    0.1000    0.9002    0.0998
0.3000    0.5000    0.3296    0.5356
0.5000    0.3000    0.5503    0.3231
0.3000    0.7000    0.3001    0.6999
0.7000    0.3000    0.7000    0.3001
0.7000    0.9000    0.7269    0.8882

Network2
Increase the number of neurons in the hidden layer to 24
Train until SSE < 10^-15
Test data1 results:
     b         m       bNN       mNN
0.9000    0.7000    0.8978    0.7102
0.1000    0.9000    0.0915    0.9314
0.9000    0.1000    0.9017    0.0920
0.3000    0.5000    0.2948    0.5799
0.5000    0.3000    0.4929    0.3575
0.3000    0.7000    0.3017    0.6934
0.7000    0.3000    0.6993    0.3032
0.7000    0.9000    0.7008    0.8976

Network3
Add a second hidden layer with 24 neurons
Train until SSE < 10^-18
Test data1 results:
     b         m       bNN       mNN
0.9000    0.7000    0.8882    0.7018
0.1000    0.9000    0.1171    0.9067
0.9000    0.1000    0.8918    0.1002
0.3000    0.5000    0.3080    0.5084
0.5000    0.3000    0.5033    0.3183
0.3000    0.7000    0.2937    0.6999
0.7000    0.3000    0.7039    0.2996
0.7000    0.9000    0.7026    0.9180

Network Design
First add more neurons in each layer
Add more hidden layers if necessary

Simple Example – Second Approach
Add the 6 values of x to the input (total of 12 inputs)
Network4: 24 neurons in hidden layer, trained until SSE < 10^-12

Network4: Test data1 results
     b         m       bNN       mNN
0.9000    0.7000    0.8972    0.7171
0.1000    0.9000    0.1016    0.9081
0.9000    0.1000    0.9002    0.0979
0.3000    0.5000    0.2940    0.5059
0.5000    0.3000    0.5115    0.3013
0.3000    0.7000    0.2996    0.6980
0.7000    0.3000    0.7000    0.3009
0.7000    0.9000    0.7108    0.8803

Network4
Try with x values not in the training data (x = 0.1, 0.3, 0.45, 0.55, 0.7, 0.9):
     b         m       bNN       mNN
0.9000    0.7000    0.9148    0.3622
0.1000    0.9000    0.1676    0.4901
0.9000    0.1000    0.9169   -0.1488
0.3000    0.5000    0.3834    0.1632
0.5000    0.3000    0.5851    0.0109
0.3000    0.7000    0.3705    0.3338
0.7000    0.3000    0.7426    0.0139
0.7000    0.9000    0.7284    0.5163

NNs in Inverse Heat Conduction
Two possibilities: whole domain and sequential
Concurrent networks offer the best possibility of solving the whole-domain problem (Krejsa, et al. 1999)

NNs in Inverse Heat Conduction: Training data
Must cover the domain and range of possible inputs/outputs
Use the forward solver to supply solutions to many “standard” problems (linear, constant, and triangular heat flux inputs)

NNs in Inverse Heat Conduction: Ill-posedness?
One possibility: train the network with noisy data

NNs in Inverse Heat Conduction
Krejsa, et al. (1999) concluded that radial basis function and cascade correlation networks offer a better possibility for solution of the whole-domain problem than standard backpropagation networks

Neural Networks – Conclusions
NNs offer the possibility of solving parameter estimation and inverse problems
Proper design of the network and training set is essential for successful application

References
M. Raudensky, K. A. Woodbury, J. Kral, and T. Brezina, “Genetic Algorithm in Solution of Inverse Heat Conduction Problems,” Numerical Heat Transfer, Part B: Fundamentals, Vol. 28, No. 3, Oct.-Nov. 1995, pp. 293-306.
J. Krejsa, K. A. Woodbury, J. D. Ratliff, and M. Raudensky, “Assessment of Strategies and Potential for Neural Networks in the IHCP,” Inverse Problems in Engineering, Vol. 7, No. 3, pp. 197-213 (1999).