Application of Genetic Algorithms and Neural Networks to the Solution of Inverse Heat Conduction Problems
A Tutorial
Keith A. Woodbury, Mechanical Engineering Department
Paper/Presentation/Programs
Not on the conference CD; available from the author.
Overview: Genetic Algorithms
What are they? How do they work?
Application to simple parameter estimation
Application to the boundary inverse heat conduction problem
Overview: Neural Networks
What are they? How do they work?
Application to simple parameter estimation
Discussion of the boundary inverse heat conduction problem
MATLAB®
Integrated environment for computation and visualization of results
Simple programming language
Optimized algorithms
Add-in toolbox for genetic algorithms
Genetic Algorithms: What are they?
GAs perform a random search of a defined N-dimensional solution space.
GAs mimic the processes in nature that led to the evolution of higher organisms: natural selection ("survival of the fittest"), reproduction, crossover, and mutation.
GAs do not require any gradient information and may therefore be suitable for nonlinear problems.
Genetic Algorithms: How do they work?
A population of "genes" is evaluated using a specified fitness measure.
The best members of the population are selected for reproduction to form the next generation; the new population is related to the old one in a prescribed way.
Random mutations introduce new characteristics into the new generation.
Genetic Algorithms
GAs rely heavily on random processes; the random number generator will be called thousands of times during a simulation.
Searches are inherently computationally intensive.
A GA will usually find the global maximum/minimum within the specified search domain.
Genetic Algorithms: Basic scheme
(1) Initialize the population.
(2) Evaluate the fitness of each member.
(3) Reproduce using the fittest members.
(4) Introduce random mutations into the new generation.
Repeat (2)-(3)-(4) until the prespecified number of generations is complete.
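A minimal MATLAB sketch of this loop, assuming real-number encoding; fitness, reproduce, and mutate are hypothetical placeholder functions, not the tutorial's actual programs:

pop = low + (high - low)*rand(Npop, Nvars);   % (1) random initial population
for gen = 1:Ngen
    f = fitness(pop);                         % (2) evaluate each member (smaller = better)
    [~, idx] = sort(f);                       % sort best first
    parents = pop(idx(1:Nbest), :);           % natural selection
    pop = reproduce(parents, Npop);           % (3) form the next generation
    pop = mutate(pop, mut_chance, low, high); % (4) random mutations
end
f = fitness(pop);
[~, ibest] = min(f);
best = pop(ibest, :);                         % best member found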
Role of the Forward Solver
Provides fitness evaluations of the candidates in the population.
Similar to its role in a conventional inverse problem.
Elitism
Keep the best members of a generation to ensure that their characteristics continue to influence subsequent generations.
Encoding
The population is stored as coded "genes".
Binary encoding: represents data as strings of binary digits; useful for certain GA operations (e.g., crossover).
Real-number encoding: represents data as arrays of real numbers; useful for engineering problems.
Binary Encoding – Crossover Reproduction
[Figure omitted from the transcript.]
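Since the slide's figure is not reproduced, here is a sketch of the standard single-point crossover it illustrates (function and variable names are illustrative):

function [c1, c2] = crossover(p1, p2)
    % p1, p2: parent bit strings (logical row vectors of equal length)
    n  = numel(p1);
    k  = randi(n - 1);              % random cut point between 1 and n-1
    c1 = [p1(1:k), p2(k+1:end)];    % child 1: head of p1, tail of p2
    c2 = [p2(1:k), p1(k+1:end)];    % child 2: head of p2, tail of p1
end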
Binary Encoding: Mutation
Generate a random number for each "chromosome" (bit); if the random number is greater than a "mutation threshold" selected before the simulation, flip the bit (see the sketch below).
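A two-line MATLAB sketch of this rule, assuming genes is a logical matrix with one member per row:

mask = rand(size(genes)) > mut_chance;   % one random number per bit
genes(mask) = ~genes(mask);              % flip the bits that pass the threshold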
Real-Number Encoding
Genes are stored as arrays of real numbers.
Parents are selected by sorting the population from best to worst and taking the top Nbest members for random reproduction.
Real-Number Encoding: Reproduction
Weighted average of the parent arrays: Ci = w*Ai + (1 - w)*Bi, where w is a random number, 0 ≤ w ≤ 1.
If the ordering of the array elements is significant, use a crossover-like scheme on the children.
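In MATLAB the weighted average is one line (a sketch; A and B are two parents drawn at random from the Nbest):

w = rand;                 % 0 <= w <= 1, drawn once per child
C = w*A + (1 - w)*B;      % element-wise convex combination of the parents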
Real-Number Encoding: Mutation
If the mutation threshold is passed, replace the entire array with a randomly generated one.
This introduces large changes into the population.
Real-Number Encoding: Creep
If a "creep threshold" is passed, scale the member of the population by Ci = (1 + w)*Ci, where w is a random number in the range 0 ≤ w ≤ wmax.
Both the creep threshold and wmax must be specified before the simulation begins.
This introduces small-scale changes into the population.
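A sketch of the creep operator, using the same threshold convention as the mutation slide:

if rand > creep_chance        % creep threshold test
    w = creep_amount*rand;    % 0 <= w <= wmax
    C = (1 + w)*C;            % scale the member's whole array slightly
end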
Simple GA Example
Given two or more points that define a line, determine the "best" values of the intercept b and the slope m.
Use a least-squares criterion to measure fitness:
S = Σ [yi - (b + m*xi)]²
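Evaluated in MATLAB for one candidate pair (b, m), this is a one-liner (a sketch, using the variable names from the next slide):

S = sum((yvals - (b + m*xvals)).^2);   % smaller S = fitter member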
Make up some data
>> b = 1; m = 2;
>> xvals = [ ];                  % the five x values are omitted in the transcript
>> yvals = b*ones(1,5) + m*xvals
yvals =                          % the resulting five y values (omitted)
Parameters
Npop – number of members in the population
(low, high) – real-number pair specifying the domain of the search space
Nbest – number of the best members used for reproduction at each new generation
Parameters (continued)
Ngen – total number of generations to produce
mut_chance – mutation threshold
creep_chance – creep threshold
creep_amount – the parameter wmax
Parameters
Npop = 100
(low, high) = (-5, 5)
Nbest = 10
Ngen = 100
SimpleGA – Results (exact data)
[Figure omitted from the transcript.]
SimpleGA – Convergence History
[Figure omitted from the transcript.]
SimpleGA – Results (1% noise)
[Figure omitted from the transcript.]
SimpleGA – Results (10% noise)
[Figure omitted from the transcript.]
SimpleGA – 10% noise
[Figure omitted from the transcript.]
Heat Function Estimation
Each member of the population is an array of Nunknown values representing the piecewise-constant heat flux components.
A discrete Duhamel's summation is used to compute the temperature response of the 1-D domain (see the sketch below).
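A sketch of the discrete Duhamel's summation (assumptions: phi is the unit-step temperature response sampled at t = 0, Δt, 2Δt, …, with phi(1) = 0, and q holds the piecewise-constant flux components):

function T = duhamel(q, phi)
    % T(i) = sum over j of q(j)*(phi(i-j+2) - phi(i-j+1))
    n    = numel(q);
    dphi = diff(phi(1:n+1));              % step-response increments
    T    = zeros(1, n);
    for i = 1:n
        T(i) = q(1:i) * dphi(i:-1:1).';   % discrete convolution
    end
end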
Make up some data
Use Duhamel's summation with Δt = 0.001.
Assume the classic triangular heat flux.
"Data"
[Figure of the generated data omitted from the transcript.]
Two data sets
"Easy" problem – large Δt: choose every third point from the generated set.
Harder problem – small Δt: use all the data from the generated set.
GA program modifications
Let Ngen, mut_chance, creep_chance, and creep_amount be vectors; this facilitates a dynamic strategy.
Example: Ngen = [100 200] with mut_chance = [0.7 0.5] means mut_chance = 0.7 for the first 100 generations and then mut_chance = 0.5 until 200 generations.
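A sketch of how such a schedule can be looked up inside the generation loop (variable names are illustrative):

k  = find(gen <= Ngen, 1);    % first schedule segment containing this generation
mc = mut_chance(k);           % e.g. 0.7 for gen <= 100, then 0.5 up to 200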
GA Program Modifications
After completion of each pass through the Ngen array, redefine (low, high) based on the (min, max) of the best member of the population.
Nelite = 5
"Easy" Problem: Δt = 0.18
First try:
Npop = 100
(low, high) = (-1, 1)
Nbest = 10
Ngen = 100
"Easy" Problem: Npop = 100, Nbest = 10, Ngen = 100
[Three result figures omitted from the transcript.]
"Easy" Problem – another try
Use the variable-parameter strategy:
Nbest = 20
Ngen = [ ], mut_chance = [ ], creep_chance = [ ], creep_amount = [ ] (vector values omitted in the transcript)
"Easy" Problem – another try
[Three result figures omitted from the transcript.]
"Hard" Problem: small-time-step data, Δt = 0.06
Use the same parameters as before:
Nbest = 20
Ngen = [ ], mut_chance = [ ], creep_chance = [ ], creep_amount = [ ] (vector values omitted in the transcript)
"Hard" Problem
[Three result figures omitted from the transcript.]
"Hard" Problem: What's wrong?
The ill-posedness of the problem becomes apparent as Δt becomes small.
Solution: add a Tikhonov regularizing term to the objective function (see the sketch below).
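Assuming the α₁ on the following slides denotes a first-order Tikhonov term (a penalty on differences of adjacent flux components), the regularized fitness is, in sketch form:

S = sum((T - Y).^2) + alpha1*sum(diff(q).^2);   % data misfit + smoothness penalty

Here T is the computed temperature history, Y the measured data, and q the vector of heat flux components.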
"Hard" Problem with α₁ = 1.e-3
[Three result figures omitted from the transcript.]
Genetic Algorithms – Conclusions
GAs are a random search procedure; the domain of the solution must be known.
GAs are computationally intensive.
GAs can be applied to ill-posed problems but cannot bypass the ill-posedness of the problem.
Selection of the solution parameters is important for a successful GA simulation.
Neural Networks: What are they?
A collection of neurons interconnected with weights.
Intended to mimic the massively parallel operation of the human brain.
NNs act as interpolative functions for a given set of facts.
Neural Networks: How do they work?
The network is trained with a set of known facts that cover the solution space.
During training, the weights in the network are adjusted until the correct answer is given for every fact in the training set.
After training, the weights are fixed and the network answers questions not in the training data; these "answers" are consistent with the training data.
Neural Networks
The neural network "learns" the relationship between the inputs and outputs by adjustment of the weights in the network.
When confronted with facts not in the training set, the weights and activation functions act to compute a result consistent with the training data.
Neural Networks
Concurrent NNs accept all inputs at once.
Recurrent (dynamic) NNs accept inputs sequentially and may have one or more outputs fed back to the input.
Here we consider only concurrent NNs.
Role of the Forward Solver
Provides a large number of (input, output) data sets for training.
Neural Networks
[Diagram: input layer, hidden layer, and output layer.]
Neurons
[Diagram: inputs weighted by w1, w2, …, wn feed a summation Σ followed by an activation function.]
MATLAB® Toolbox
Facilitates easy construction, training, and use of NNs.
Concurrent and recurrent networks.
Linear, tansig, and logsig activation functions.
A variety of training algorithms: backpropagation, descent methods (conjugate gradient, steepest descent, …).
Simple Parameter Estimation
Given a number of points on a line, determine the slope (m) and intercept (b) of the line.
Fix the number of points at 6.
Restrict the domain to 0 ≤ x ≤ 1.
Restrict the range of b and m to [0, 1].
Simple Example
First approach: let the six values of x be fixed at 0, 0.2, 0.4, 0.6, 0.8, and 1.0.
The inputs to the network are the six values of y corresponding to these x values.
The outputs of the network are the slope m and intercept b.
Simple Example: Training data
20 columns of 6 rows of y data.
[Data listing omitted from the transcript.]
Simple Example: Network1 (see the sketch below)
6 inputs – linear activation function.
12 neurons in the hidden layer – tansig activation function.
2 outputs – linear activation function.
Trained until SSE < 10^-8.
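A sketch of Network1 using the legacy Neural Network Toolbox calls newff/train/sim; here P is the 6x20 matrix of training inputs (y values) and T the 2x20 matrix of targets [b; m] (variable names are illustrative):

net = newff(minmax(P), [12 2], {'tansig', 'purelin'});  % 12 tansig neurons, 2 linear outputs
net.performFcn = 'sse';         % measure performance as sum-squared error
net.trainParam.goal = 1e-8;     % train until SSE < 1e-8
net = train(net, P, T);
bm  = sim(net, y_test);         % estimated [b; m] for a new 6x1 input y_test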
Test Input Data
[Listing of input_data1 and output_data1 omitted from the transcript.]
Simple Example: Network1 results on test data1
[Table comparing exact b and m with the network estimates bNN and mNN; values omitted from the transcript.]
Network2
Increase the number of neurons in the hidden layer to 24.
Train until SSE < 10^-15.
[Test data1 results (b, m vs. bNN, mNN) omitted from the transcript.]
Network3
Add a second hidden layer with 24 neurons.
Train until SSE < 10^-18.
[Test data1 results (b, m vs. bNN, mNN) omitted from the transcript.]
Network Design
First add more neurons to each layer.
Add more hidden layers if necessary.
Simple Example – Second Approach
Add the 6 values of x to the input (for a total of 12 inputs).
Network4: 24 neurons in the hidden layer, trained until SSE < 10^-12.
Network4
[Test data1 results (b, m vs. bNN, mNN) omitted from the transcript.]
Network4
Try x values not in the training data (x = 0.1, 0.3, 0.45, 0.55, 0.7, 0.9).
[Results (b, m vs. bNN, mNN) omitted from the transcript.]
NNs in Inverse Heat Conduction
Two possibilities: whole-domain and sequential estimation.
Concurrent networks offer the best possibility of solving the whole-domain problem (Krejsa et al., 1999).
NNs in Inverse Heat Conduction: Training data
The training data must cover the domain and range of possible inputs/outputs.
Use the forward solver to supply solutions to many "standard" problems (linear, constant, and triangular heat flux inputs).
NNs in Inverse Heat Conduction: Ill-posedness?
One possibility: train the network with noisy data.
NNs in Inverse Heat Conduction
Krejsa et al. (1999) concluded that radial basis function and cascade-correlation networks offer a better possibility for solving the whole-domain problem than standard backpropagation networks.
Neural Networks – Conclusions
NNs offer the possibility of solving parameter estimation and inverse problems.
Proper design of the network and the training set is essential for successful application.
References
M. Raudensky, K. A. Woodbury, J. Kral, and T. Brezina, "Genetic Algorithm in Solution of Inverse Heat Conduction Problems," Numerical Heat Transfer, Part B: Fundamentals, Vol. 28, No. 3, Oct.-Nov. 1995.
J. Krejsa, K. A. Woodbury, J. D. Ratliff, and M. Raudensky, "Assessment of Strategies and Potential for Neural Networks in the IHCP," Inverse Problems in Engineering, Vol. 7, No. 3, 1999.