Surrogate model based design optimization


1 Surrogate model based design optimization
Aerospace design is synonymous with the use of long-running, computationally intensive simulations, which are employed in the search for optimal designs in the presence of multiple, competing objectives and constraints. The difficulty of this search is often exacerbated by numerical 'noise' and inaccuracies in simulation data, and by the frailties of complex simulations, that is, their tendency to fail to return a result. Surrogate-based optimization methods can be employed to solve, mitigate, or circumvent the problems associated with such searches. Alex Forrester, Rolls-Royce UTC for Computational Engineering. Bern, 22 November 2010

2 Coming up before the break:
- Surrogate model based optimization – the basic idea
- Kriging – an intuitive perspective
- Alternatives to Kriging
- Optimization using surrogates
- Constraints
- Missing data
- Parallel function evaluations
- Problems with Kriging error based methods

3 Surrogate model based optimization
The workflow: PRELIMINARY EXPERIMENTS → SAMPLING PLAN → OBSERVATIONS → CONSTRUCT SURROGATE(S) (design sensitivities available? multi-fidelity data?) → SEARCH INFILL CRITERION, i.e. optimization using the surrogate(s) (constraints present? noise in data? multiple design objectives?) → ADD NEW DESIGN(S) → back to OBSERVATIONS.
The surrogate is used to expedite the search for the global optimum; global accuracy of the surrogate is not a priority.

4 Kriging (with a little help from Donald Jones)

5 Intuition is Important!
People are reluctant to use a tool they can't understand. Recall how basic probability was motivated by various games of chance involving dice, balls, and cards? In the same way, we can make kriging intuitive. Therefore, we will now describe the Kriging Game.

6 Game Equipment: 16 function cards (A1, A2,…, D4)
[Figure: a 4 × 4 grid of function cards, rows A–D and columns 1–4, each card showing a different function over x = 1, …, 4]

7 Rules of the Kriging Game
1. The Dealer shuffles the cards and draws one at random, without showing it.
2. The Player asks for the value of the function at one of x=1, x=2, x=3, or x=4.
3. Based on the answer, the Player must guess the values of the function at all of x=1, x=2, x=3, and x=4.
4. The Dealer reveals the card. The Player's score is the sum of squared differences between the guesses and the actual values (lower is better).
5. The Player and Dealer switch roles and repeat. After 100 rounds, the person with the lowest total score wins.
What's the best strategy?

8 Example: Ask value at x=2 and answer is y=1
[Figure: the 4 × 4 grid of cards, rows A–D and columns 1–4]

9 The value at x=2 rules out all but 4 functions: C1, A2, A3, B3
At any x other than x=2, we aren't sure what the value of the function is. But we know the possible values. What guess will minimize our squared error?

10 Yes, it’s the mean — But why?

11 The best predictor is the mean
Our best predictor is the mean of the functions that match the sampled values. Using the range or standard deviations of the values, we could also give a confidence interval for our prediction.
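A minimal sketch of this logic in Python. The card values here are made up for illustration (the real values are on the card figures), but the four card names match the slide:

```python
import numpy as np

# Hypothetical values for the cards; the real values are on the
# card figures in the slides.
cards = {
    "C1": [2, 1, 2, 3],
    "A2": [3, 1, 1, 2],
    "A3": [1, 1, 3, 1],
    "B3": [2, 1, 2, 1],
}

# Keep only the functions that match the sampled value y(2) = 1,
# i.e. condition on the data.
consistent = np.array([v for v in cards.values() if v[1] == 1])

# The mean of the matching functions minimizes the expected sum of
# squared errors; the spread gives a confidence interval.
best_guess = consistent.mean(axis=0)
spread = consistent.std(axis=0)
print("best guess:", best_guess)
print("spread:", spread)
```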

12 Why could we predict with a confidence interval?
- We had a set of possible functions and a probability distribution over them — in this case, all equally likely.
- Given the data on the sampled points, we could subset out those functions that match, that is, we could "condition on the sampled data".
- To do this for more than a finite set of functions, we need a way to describe a "probability distribution over an infinite set of possible functions" — a stochastic process.
- Each element of this infinite set of functions would be a "random function".
- But how do we describe and/or generate a random function?

13 How about a purely random function?
Here we have x values 0, 0.01, 0.02, …, 0.99, 1.00. At each of these we have generated a random number. Clearly this is not the kind of function we want.

14 What’s wrong with a purely random function?
No continuity! The values y(x) and y(x+d), for small d, can be very different. Root cause: the values at these points are independent. To fix this, we must assume the values are correlated, with C(d) = Corr( y(x+d), y(x) ) → 1 as d → 0, where the correlation is over all possible random functions. OK. Great. I need a correlation function C(d) with C(0) = 1. But how do I use such a correlation function to generate a continuous random function?

15 Making a random function
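The slide itself is a figure. The sketch below shows one standard way to make such a random function (my assumption, consistent with the Gaussian correlation used later in the talk): build a correlation matrix from C(d) = exp(-θd²) and pass independent normal draws through its Cholesky factor.

```python
import numpy as np

theta = 10.0                       # inverse width of the correlation
x = np.linspace(0.0, 1.0, 101)     # same grid as the slide: 0, 0.01, ..., 1.00

d = x[:, None] - x[None, :]        # pairwise distances
C = np.exp(-theta * d**2)          # correlation matrix; C(0) = 1 on the diagonal

# The Cholesky factor turns independent normals into correlated ones;
# a small diagonal "jitter" keeps the factorization numerically stable.
L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))
y = L @ np.random.randn(len(x))    # one continuous-looking random function
```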

16 The correlation function

17 We are ready!
Assuming we have estimates of the correlation parameters (more on this later), we have a way of generating a set of functions — the equivalent of the cards in the Kriging Game. Using statistical methods involving "conditional probability," we can condition on the data to get an (infinite) set of random functions that agree with the data.
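A sketch of that conditioning step, using the standard Gaussian-process formulas for a zero-mean, unit-variance process (the sample locations and values here are made up):

```python
import numpy as np

def corr(a, b, theta=10.0):
    """Gaussian correlation between two sets of 1-D points."""
    return np.exp(-theta * (a[:, None] - b[None, :])**2)

X = np.array([0.1, 0.5, 0.9])      # sample locations (made up)
y = np.array([0.3, -0.6, 0.8])     # observed values (made up)
xs = np.linspace(0, 1, 101)        # where we want the conditioned functions

K = corr(X, X) + 1e-10 * np.eye(len(X))
k = corr(xs, X)

mean = k @ np.linalg.solve(K, y)                    # conditional mean
cov = corr(xs, xs) - k @ np.linalg.solve(K, k.T)    # conditional covariance

# Draw conditioned random functions: every path passes through the data.
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(xs)))
paths = mean[:, None] + L @ np.random.randn(len(xs), 5)
```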

18 Random Functions Conditioned on Sampled Points

19 Random Functions Conditioned on Sampled Points

20 The Predictor and Confidence Intervals

21 What it looks like in practice:
Sample the function to be predicted at a set of points, i.e. run your experiments/simulations.

22 20 Gaussian “bumps” with appropriate widths (chosen to maximize likelihood of data) centred around sample points

23 Multiply by weightings (again chosen to maximize likelihood of data)

24 Add together, with mean term, to predict function
[Figure: the kriging prediction overlaid on the true function]
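Slides 22–24 describe the predictor as weighted Gaussian bumps added to a mean term. A minimal sketch of that construction, with θ and μ fixed at illustrative values rather than the maximum-likelihood estimates the slides call for:

```python
import numpy as np

theta = 25.0                               # bump width (illustrative, not MLE)
X = np.linspace(0, 1, 8)                   # sample points (made up)
y = np.sin(6 * X) + 0.5 * X                # observed responses (made up)

# Bump i evaluated at sample point j; solving for w makes the
# predictor interpolate the data exactly.
Psi = np.exp(-theta * (X[:, None] - X[None, :])**2)
mu = y.mean()                              # crude stand-in for the MLE mean term
w = np.linalg.solve(Psi, y - mu)           # weightings of the bumps

def predict(x):
    psi = np.exp(-theta * (x - X)**2)      # the bumps centred on the samples
    return mu + psi @ w                    # add together, with mean term

print(predict(0.37))
```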

25 Alternatives to Kriging

26 Moving least squares
✓ Quick
✓ Nice regularization parameter
✗ No useful confidence intervals
✗ How to choose the polynomial & decay function?

27

28 Support vector regression
✓ Quick predictions in large design spaces
✓ Good noise filtering
✓ Lovely maths!
✗ Slow training (extra quadratic programming problem)
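For a feel of the method, a minimal scikit-learn sketch (my choice of library, not the slide's own implementation). The epsilon parameter sets the noise-insensitive tube, and fitting solves the quadratic programming problem mentioned above:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (50, 1))                            # made-up design points
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(50)   # noisy responses

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)  # training: the slow QP step
y_hat = model.predict(X)                                  # predictions are quick
```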

29

30 Multiple surrogates
Surrogate built using a "committee machine" (also called "ensembles")
✓ Hope to choose the best model from a committee, or to combine a number of methods
✗ Often not mathematically rigorous, and it is difficult to get confidence intervals
Blind Kriging is, perhaps, a good compromise: the mean function ν is selected by some data-analytic procedure.

31 Blind Kriging (mean function selected using Bayesian forward selection)

32 RMSE ~50% better than ordinary Kriging in this example

33

34

35

36 Optimization Using Surrogates

37 Polynomial regression based search (as Devil’s advocate)

38 Gaussian process prediction based optimization

39

40

41 Gaussian process prediction based optimization (as Devil’s advocate)

42 But we have error estimates with Gaussian processes

43 Error estimates used to construct improvement criteria
- Probability of improvement
- Expected improvement

44 Probability of improvement
The probability that there will be any improvement at all. Can be extended to constrained and multi-objective problems.

45 Expected improvement
A useful metric that balances prediction & uncertainty. Can be extended to constrained and multi-objective problems.
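Both criteria follow from the kriging prediction ŷ(x) and error estimate s(x). A sketch using the usual closed forms for minimization (my transcription, consistent with the slides' definitions; assumes s > 0):

```python
import numpy as np
from scipy.stats import norm

def improvement_criteria(y_hat, s, y_best):
    """Minimization convention: improvement means y < y_best."""
    z = (y_best - y_hat) / s
    prob_improvement = norm.cdf(z)                 # probability of any improvement
    expected_improvement = (y_best - y_hat) * norm.cdf(z) + s * norm.pdf(z)
    return prob_improvement, expected_improvement
```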

46 Constrained EI

47 Probability of constraint satisfaction is just like the probability of improvement
[Figure: the constraint function, the prediction of the constraint function, the constraint limit, and the resulting probability of satisfaction]

48 Constrained expected improvement
Simply multiply the expected improvement by the probability of constraint satisfaction:
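As a sketch (the function names are mine), with the probability of satisfaction computed exactly like the probability of improvement on the previous slide:

```python
from scipy.stats import norm

def constrained_ei(ei, g_hat, s_g, g_limit):
    """Weight EI by the probability that the constraint g <= g_limit holds,
    given the constraint prediction g_hat and its error estimate s_g."""
    prob_feasible = norm.cdf((g_limit - g_hat) / s_g)
    return ei * prob_feasible
```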

49 A 2D example

50

51

52 Missing Data

53 What if design evaluations fail?
If no infill point is added, the surrogate model is unchanged and the optimization stalls. We need to add some information or perturb the model:
- add a random point?
- impute a value based on the prediction at the failed point, so EI goes to zero there?
- use a penalized imputation (prediction + error estimate)? (see the sketch below)
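A sketch of the two imputation options, assuming a hypothetical surrogate interface with predict(x) and error(x) methods (these names are mine, not from the slides):

```python
def impute(surrogate, x_failed, penalized=True):
    """Value to assign to a failed design evaluation."""
    y_hat = surrogate.predict(x_failed)    # plain imputation: EI vanishes here
    if penalized:
        # Penalized imputation: prediction + error estimate also discourages
        # the search from returning to the failed neighbourhood.
        return y_hat + surrogate.error(x_failed)
    return y_hat
```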

54 Aerofoil design problem
- 2 shape functions (f1, f2) altered
- Potential flow solver (VGK) has a ~35% failure rate
- 20-point optimal Latin hypercube
- max{E[I(x)]} updates until within one drag count of the optimum

55 Results

56 A typical penalized imputation based optimization

57 Four variable problem
f1, f2, f3, f4 varied; 82% failure rate

58 A typical four variable penalized imputation based optimization
Legend as for the two variable problem. Red crosses indicate imputed update points, regions of infeasible geometries are shown in dark blue, and blank regions represent flow solver failure.

59 Parallel Function Evaluations

60 Simple parallelization of maximizing EI
1. Find the maximum of EI.
2. Assume the function value here is not so good and impute a penalized value (we use prediction + predicted error).
3. Rebuild and re-search EI.
4. Repeat 1–3 for the number of processors before evaluating the infill points.
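A sketch of that loop, again assuming a hypothetical surrogate interface (fit/predict/error) and a maximize_ei helper that searches the EI surface; none of these names come from the slides:

```python
def parallel_infill(surrogate, X, y, n_processors, maximize_ei):
    """Return a batch of infill points for parallel evaluation."""
    X, y = list(X), list(y)
    batch = []
    for _ in range(n_processors):
        x_new = maximize_ei(surrogate)                # 1. find the EI maximum
        # 2. impute a penalized value: prediction + predicted error
        y_lie = surrogate.predict(x_new) + surrogate.error(x_new)
        batch.append(x_new)
        X.append(x_new)
        y.append(y_lie)
        surrogate.fit(X, y)                           # 3. rebuild, then re-search
    return batch                                      # 4. evaluate these in parallel
```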

61 Problems With EI et al.

62 Two-stage approaches rely on parameter estimation
1. Choose correlation parameters by maximizing the likelihood.
2. Then maximize expected improvement.
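Stage one in sketch form: the concentrated ln-likelihood for a one-parameter Gaussian correlation, minimized over θ (a standard kriging formulation, not the slide's own code; the data here are made up):

```python
import numpy as np
from scipy.optimize import minimize_scalar

X = np.linspace(0, 1, 8)            # sample locations (made up)
y = np.sin(6 * X)                   # observed values (made up)

def neg_ln_likelihood(theta):
    """Negative concentrated ln-likelihood, with mu and sigma^2 profiled out."""
    n = len(y)
    Psi = np.exp(-theta * (X[:, None] - X[None, :])**2) + 1e-8 * np.eye(n)
    ones = np.ones(n)
    mu = ones @ np.linalg.solve(Psi, y) / (ones @ np.linalg.solve(Psi, ones))
    r = y - mu
    sigma2 = r @ np.linalg.solve(Psi, r) / n
    _, logdet = np.linalg.slogdet(Psi)
    return 0.5 * (n * np.log(sigma2) + logdet)

theta_hat = minimize_scalar(neg_ln_likelihood, bounds=(0.01, 100),
                            method="bounded").x
```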

63 What if parameters are estimated poorly?
- Error estimates are wrong (usually underestimates).
- The search may dwell in local basins of attraction.

64 Different parameter values have a big effect on expected improvement

65 Tea Break!

