Example II: Linear truss structure
Optimization goal: minimize the mass of the structure
Cross-section areas of the trusses as design variables
Maximum stress in each element as inequality constraints
Maximum displacement at the loading points as inequality constraints
Gradient-based and ARSM optimization perform much better if the constraint equations are formulated separately for each element instead of using the total max_stress and max_disp as constraints.
Note: Parametric optimizers like optiSLang run a black-box CAE process, so the gradients are determined with single-sided or central differences.
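The two remarks above can be sketched with a generic optimizer: the toy model below minimizes truss mass with one stress constraint per element and approximates gradients by finite differences. All numbers, the 10-element "solver", and the use of SciPy's SLSQP (as a stand-in for NLPQL) are illustrative assumptions, not the original model.

```python
# Minimal sketch (not the original truss model): mass minimization with
# per-element stress constraints; gradients come from finite differences,
# as a parametric optimizer would use on a black-box solver.
import numpy as np
from scipy.optimize import minimize

RHO, LENGTHS = 1.0, np.ones(10)          # toy material/geometry data
FORCES = np.linspace(1.0, 10.0, 10)      # hypothetical member forces
SIGMA_MAX, A_MIN, A_MAX = 25.0, 0.1, 20.0

def mass(a):                              # objective: total structural mass
    return RHO * np.dot(LENGTHS, a)

def stresses(a):                          # toy "solver": stress per element
    return FORCES / a

# One inequality constraint per element (sigma_max - sigma_i >= 0) instead
# of a single aggregated max-stress constraint: the aggregate is non-smooth
# and degrades gradient-based (NLPQL) and ARSM performance.
constraints = [
    {"type": "ineq", "fun": (lambda a, i=i: SIGMA_MAX - stresses(a)[i])}
    for i in range(10)
]

a0 = np.full(10, 10.0)                    # start design
res = minimize(mass, a0, method="SLSQP",  # SLSQP as stand-in for NLPQL
               bounds=[(A_MIN, A_MAX)] * 10,
               constraints=constraints,
               options={"eps": 1e-4})     # small finite-difference interval
print(res.x, res.fun)
```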
Example II: Sensitivity analysis
The MOP indicates only a1, a3, and a8 as important variables for the maximum stress and displacements, but all inputs are important for the objective function.
Example II: Sensitivity analysis
[MOP filter plot: inputs a1–a10 vs. responses mass, disp2, disp4, stress1–stress6, stress8–stress10, max_disp, max_stress]
For the single stress values used in the constraint equations, each input variable occurs at least twice as an important parameter. A reduction of the number of inputs therefore does not seem possible.
Example II: Gradient-based optimization
Best design with valid constraints: mass = 1595 (19% of the initial mass). The areas of elements 2, 5, 6, and 10 are set to the minimum; the stresses in the remaining elements reach the maximum value. 153 solver calls (+100 from the DOE).
Example II: Adaptive response surface
Best design with valid constraints: mass = 1613 (19% of the initial mass). The areas of elements 2, 6, and … are set to the minimum; 5 and 10 are close to the minimum. 360 solver calls.
Example II: EA (global search)
Best design with valid constraints: mass = 2087 (25% of the initial mass). 392 solver calls.
Example II: EA (local search)
Best design with valid constraints: mass = 2049 (24% of the initial mass). 216 solver calls (+392 from the global search).
Example II: Overview optimization results
Method   Settings                            Mass   Solver calls   Constraints violated
Initial  -                                   8393
DOE      LHS                                 3285   100            75%
NLPQL    diff. interval 0.01%, single-sided  1595   153 (+100)     42%
ARSM     defaults (local)                    1613   360            80%
EA       global defaults                     2087   392            56%
EA       local                               2049   216 (+392)     79%
PSO      global                              2411   400            36%
GA       global                              2538   381            25%
SDI      local                               1899                  70%
NLPQL with a small differentiation interval, started from the best DOE design, is the most efficient method. The local ARSM yields a similar parameter set. EA/GA/PSO with default settings come close to the global optimum; the GA with adaptive mutation shows the minimum constraint violation.
When to use which optimization algorithms
Gradient-based algorithms: the most efficient method if the gradients are accurate enough. Consider their restrictions: local optima, continuous variables only, and sensitivity to noise.
Response surface method: an attractive method for a small set of continuous variables (<15). Adaptive RSM with default settings is the method of choice.
Biological algorithms: GA/EA/PSO copy mechanisms of nature to improve individuals. The method of choice if gradient-based methods or ARSM fail. Very robust against numerical noise, non-linearities, a large number of variables, etc.
Limitations and recommendations for using the different strategies are given above. With its gradient and response surface algorithms, optiSLang provides the state of the art. The ARSM and the genetic/evolutionary algorithms are among the best commercially available algorithms worldwide. They have been continuously developed over the last five years and have proven their functionality and robustness several times; they are usually the main reason for a customer's decision to use optiSLang.
Sensitivity Analysis and Optimization
1) Start with a sensitivity study using LHS sampling: scan the whole design space.
2) Identify the important parameters and responses using CoP/MOP in optiSLang: understand the problem and reduce it.
3) Run an ARSM, gradient-based, or biology-inspired optimization algorithm to search for the optima.
4) Goal: a user-friendly procedure that provides as much automatism as possible.
For identification tasks, the search for a parameter set of the numerical model that gives the best fit between test and simulation (sometimes also called model update) is one of the most difficult optimization problems. We therefore recommend always performing a sensitivity analysis first, to ensure:
- that the test lies inside the variation space of the simulation (defined by the lower and upper bounds of the varying parameters = identification space), and
- that a set of input variables sensitive to the update criteria can be found.
Once a set of sensitive parameters to tune against sensitive update criteria is available, the optimization part, started from the best design of the sensitivity analysis, is often simple (a minimal sketch follows below).
In the example above, 7 different test conditions are identified at the same time. Initially, 6 input parameters are varied between physically meaningful lower and upper bounds. First, it could be proven that the tests lie within the variation band of the identification space. Second, the sensitivity of these parameters was checked against different response values (integrals and peak values of acceleration, displacement, and pressure curves), and a subset of 3 input variables (gas temperature, bag permeability, and efficiency of the airbag opening) against three response values (acceleration integral and peak, pressure integral) is used for the identification.
See also: J. Will: The Calibration of Measurement and Simulation as Optimization Problem, Proceedings NAFEMS Seminar "Virtual Testing – Simulationsverfahren als integrierter Baustein einer effizienten Produktentwicklung", April 2006, Wiesbaden, Germany.
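The sketch below casts such an identification task as a least-squares fit between measured and simulated responses. The one-dimensional "simulation" and the synthetic test data are stand-ins; in practice the residuals come from the black-box CAE process and the update criteria identified in the sensitivity study.

```python
# Sketch of an identification run (model update): find parameters p so that
# the simulated response best matches the measured one.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 50)
measured = 2.0 * np.exp(-1.5 * t)         # hypothetical test data

def simulate(p):                           # placeholder for the CAE model
    amplitude, decay = p
    return amplitude * np.exp(-decay * t)

def residuals(p):                          # update criterion: simulation - test
    return simulate(p) - measured

# The bounds define the identification space; the prior sensitivity study
# should confirm the test lies inside it and which parameters matter.
res = least_squares(residuals, x0=[1.0, 1.0],
                    bounds=([0.1, 0.1], [10.0, 10.0]))
print(res.x)   # ~ [2.0, 1.5]
```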
Optimization of a Large Ship Vessel
Optimization of the total weight for two load cases with constraints (stresses); discrete variables. Self-regulating evolutionary strategy: population of 4, uniform crossover for reproduction, active search for dominant genes with different mutation rates. Solver: ANSYS. Design evaluations: 3000. Design improvement: > 10%.
The software architecture and the evolutionary algorithms are ready to search for design improvements in very large design spaces. To our knowledge, this example is the world's largest published industrial optimization problem. After a moderate number of solver calls (3000), a significant design improvement was shown. The example is also a very early one, from 2000/2001. This kind of problem probably still needs some adjustment of the algorithm, so using the programmer's mode of optiSLang or consultancy from Dynardo helps to solve such problems successfully.
The objective function was the reduction of the total weight. Due to the size of the finite element model and the small number of planned solver calls (3000, compared to the number of variables), a special mix of GA and EA with a self-regulation algorithm was chosen. The mutation was performed on grouped variables (with respect to sensitivity criteria), and some optimizer parameters were governed by a self-regulating algorithm. After approximately 400 generations, the self-regulation had slowed down the optimization process, so the self-regulation process was varied and restarted at this point. On the best-design history plot of the generations (Fig. 11), the jump in design improvement after this intersection can be observed clearly. After 3000 solver calls, we stopped with a very successful reduction of the total weight. The whole analysis was run on an SGI workstation with 20 processors; the machine was permanently loaded with other network users, and we had to recover from some system and solver crashes within the total analysis time of about three weeks. A short project description is given in a very early optiSLang paper: J. Will, C. Bucher, J. Riedel: Multidisciplinary non-linear optimization with Optimizing Structural Language – optiSLang, Proceedings 19th CAD-FEM Users' Meeting 2001, Berlin, Germany.
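The toy sketch below illustrates the ingredients named above (small population, uniform crossover for reproduction, a self-regulated mutation rate on discrete variables). The fitness function, the discrete value set, and the adaptation heuristic are stand-ins, not the ship model or the exact optiSLang algorithm.

```python
# Toy self-regulating evolutionary strategy on discrete design variables.
import random

N_VARS, POP_SIZE, GENERATIONS = 30, 4, 200
CHOICES = [1, 2, 3, 4, 5]                  # discrete design values

def fitness(x):                            # stand-in for weight + penalties
    return sum(x)                          # minimize "weight"

def uniform_crossover(a, b):               # each gene drawn from either parent
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate):
    return [random.choice(CHOICES) if random.random() < rate else g for g in x]

pop = [[random.choice(CHOICES) for _ in range(N_VARS)] for _ in range(POP_SIZE)]
rate = 0.2
for gen in range(GENERATIONS):
    pop.sort(key=fitness)                  # best designs first
    parents = pop[:2]
    children = [mutate(uniform_crossover(*parents), rate)
                for _ in range(POP_SIZE - 2)]
    improved = min(map(fitness, children)) < fitness(parents[0])
    # One simple self-regulation heuristic: widen the mutation when the
    # search stagnates, narrow it when it improves.
    rate = max(0.01, rate * 0.9) if improved else min(0.5, rate * 1.1)
    pop = parents + children
print(fitness(min(pop, key=fitness)))
```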
Optimization of passive safety
Optimization of the passive safety performance (US_NCAP & EURO_NCAP) using the Adaptive Response Surface Method; 3 and 11 continuous variables; weighted objective function. Solver: MADYMO. Design evaluations: 75. Design improvement: 10%.
Continuation from the previous slide: First, if the input parameters scatter, we want to know how large the scatter of the output parameters is; in other words, we want to quantify the uncertainty of the output parameters. Second, if the output parameters are uncertain, we want to know how large the probability is that an output parameter by chance does not meet some design criterion, i.e. how reliable the component is. Third, if the output parameters are uncertain, we want to know which parameters on the input side contribute the most to the uncertainty of the output parameter. Knowing this, we are able to tackle the "evil" drivers efficiently and improve the reliability of the component.
Genetic Optimization of Spot Welds
134 binary variables, torsion loading, stress constraints. Weak elitism to reach fast design improvement. Fatigue-related stress evaluation in all spot welds. Solver: ANSYS (using an automatic spot-weld meshing procedure). Design evaluations: 200. Design improvement: 47%.
Optimization of spot welds is a binary problem, and genetic algorithms are the method of choice. See also: [02/1] Will, J., Bucher, C., Riedel, J., Akgün, T.: "Stochastik und Optimierung: Anwendungen genetischer und stochastischer Verfahren zur multidisziplinären Optimierung in der Fahrzeugentwicklung", VDI-Berichte Nr. 1701, 2002.
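A compact sketch of the binary setup: each spot weld is one bit (keep/remove). The fitness function is a stand-in for the FE-based stress evaluation, and the elitism rule shown is one common reading of weak elitism (the best parent survives only if it beats the best offspring), not necessarily the exact scheme used in the project.

```python
# Toy binary GA with weak elitism for a spot-weld-like layout problem.
import random

N_WELDS = 134

def fitness(bits):                         # stand-in: fewer welds is better,
    penalty = max(0, 40 - sum(bits)) * 10  # hypothetical stress feasibility
    return sum(bits) + penalty             # (at least ~40 welds needed)

def offspring(pop):
    a, b = random.sample(pop, 2)
    child = [random.choice(g) for g in zip(a, b)]            # crossover
    return [1 - g if random.random() < 1 / N_WELDS else g    # bit-flip
            for g in child]                                  # mutation

pop = [[random.randint(0, 1) for _ in range(N_WELDS)] for _ in range(8)]
for gen in range(50):
    best_parent = min(pop, key=fitness)
    children = [offspring(pop) for _ in range(len(pop))]
    if fitness(best_parent) < fitness(min(children, key=fitness)):
        children[-1] = best_parent         # weak elitism: keep elite only
    pop = children                         # if it beats all offspring
print(fitness(min(pop, key=fitness)))
```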
Optimization of an Oil Pan
The intention is to optimize beads to increase the first eigenfrequency of an oil pan by more than 40%. Topology optimization indicated a possible improvement of > 40%, but the test failed. A sensitivity study and parametric optimization using a parametric CAD design + ANSYS Workbench + optiSLang could solve the task.
Shown: initial design, beads design after topology optimization, beads design after parameter optimization. Design parameters: 50. Design evaluations: 500. CAE: ANSYS Workbench. CAD: Pro/ENGINEER.
[Veiz, A.; Will, J.: Parametric optimization of an oil pan; Proceedings Weimarer Optimierungs- und Stochastiktage 5.0, 2008]
Multi Criteria Optimization Strategies
Several optimization criteria are formulated in terms of the input variables x.
Strategy A: Only the most important objective function is used as the optimization goal; the other objectives become constraints.
Strategy B: Weighted sum of the single objectives.
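A minimal sketch of both strategies on two stand-in objective functions f1 and f2 (all function definitions, weights, and bounds here are illustrative assumptions):

```python
# Strategy A (constraint scalarization) vs. Strategy B (weighted sum).
import numpy as np
from scipy.optimize import minimize

def f1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def f2(x): return x[0] ** 2 + (x[1] - 1.0) ** 2

# Strategy A: optimize the most important objective, bound the other one.
res_a = minimize(f1, x0=[0.5, 0.5],
                 constraints=[{"type": "ineq",
                               "fun": lambda x: 0.8 - f2(x)}])  # f2 <= 0.8

# Strategy B: minimize a weighted sum of the single objectives.
w1, w2 = 0.7, 0.3
res_b = minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0=[0.5, 0.5])
print(res_a.x, res_b.x)
```

Both strategies reduce the problem to a single-objective run; varying the bound in A or the weights in B trades the objectives off against each other.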
Example: damped oscillator
Objective 1: minimize the maximum amplitude after 5 s. Objective 2: minimize the eigenfrequency. A DOE scan with 100 LHS samples gives a good overview of the problem; the weighted-objectives approach requires about 1000 solver calls.
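As a worked sketch, both objectives can be written down from the analytic envelope of a damped oscillator, x(t) = x0·exp(-D·ω0·t)·cos(ωd·t); the parameter names and values below are illustrative assumptions, not the original example's data.

```python
# The two objectives of the damped-oscillator example, evaluated directly
# from the analytic solution instead of a transient solver run.
import numpy as np

X0, T_EVAL = 1.0, 5.0                      # initial amplitude, time of interest

def objectives(m, k, D):
    w0 = np.sqrt(k / m)                    # undamped eigenfrequency
    amp_5s = X0 * np.exp(-D * w0 * T_EVAL) # amplitude envelope after 5 s
    return amp_5s, w0                      # both objectives to be minimized

print(objectives(m=1.0, k=20.0, D=0.05))
```

The conflict is visible in the formulas: a larger ω0 damps the amplitude faster (good for objective 1) but increases the eigenfrequency (bad for objective 2).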
Multi Criteria Optimization Strategies
Strategy C: Pareto optimization. If the objectives are in conflict, a Pareto frontier exists; only in that case do we recommend multi-criteria (Pareto) optimization. If the objectives are not in conflict, the Pareto frontier converges to a very small area but needs many more solver runs than a weighted objective function. The multi-criteria algorithm in optiSLang is one of the three best algorithms worldwide (the other two are available in mF).
Multi Criteria Optimization Strategies
Design space vs. objective space: only for conflicting objectives does a Pareto frontier exist; for positively correlated objective functions only one optimum exists.
Multi Criteria Optimization Strategies
[Figures: objective space with conflicting objectives vs. correlated objectives]
Multi Criteria Optimization Strategies
Pareto dominance: solution a dominates solution c, since a is better in both objectives. Solution a is indifferent to solution b, since each solution is better than the respective other in one objective.
Multi Criteria Optimization Strategies
Pareto optimality: a solution is called Pareto-optimal if there is no decision vector that would improve one objective without causing a degradation in at least one other objective. A solution a is called Pareto-optimal in relation to a set of solutions A if it is not dominated by any other solution in A.
Requirements for ideal multi-objective optimization: find a set of solutions close to the Pareto-optimal solutions (convergence), and find solutions diverse enough to represent the whole Pareto front (diversity).
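A small self-contained sketch of the dominance test and the resulting nondominated set, following the two definitions above, for objective vectors that are all to be minimized; the numeric values are illustrative.

```python
# Pareto dominance and the nondominated (Pareto-optimal) subset of a set.
def dominates(a, b):
    """a dominates b: a is no worse in all objectives and better in one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(solutions):
    """Keep every solution that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

a, b, c = (1.0, 4.0), (3.0, 2.0), (2.0, 5.0)
print(dominates(a, c))                   # True: a is better in both objectives
print(dominates(a, b), dominates(b, a))  # False, False: a and b indifferent
print(pareto_front([a, b, c]))           # [a, b]: c is dominated by a
```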
Multi Criteria Optimization Strategies
Pareto optimization using evolutionary algorithms: only in the case of conflicting objectives does a Pareto frontier exist and Pareto optimization is recommended (the optiSLang post-processing supports 2 or 3 conflicting objectives). The effort to resolve the Pareto frontier is higher than to optimize one weighted objective function. The picture shows the problem with multi-criteria optimization: searching for one optimal point naturally needs less effort than searching for a set of optimal points. If the user can afford only a certain number of solver calls, spending 200 solver calls on a single-criterion optimization (GA) yields a design far in front of the Pareto frontier obtained with the same number of solver calls. The decision to use a multi-criteria algorithm should therefore be based on the knowledge that the objectives are in conflict.
Example: damped oscillator
Pareto optimization with an EA gives a good Pareto frontier with 123 solver calls.
Example II: Linear truss structure
Anthill plot from the ARSM vs. the Pareto front. For more complex problems, the performance of the Pareto optimization can be improved if a good start population is available; this can be taken from selected designs of a previous DOE or a single-objective optimization.
Optimization Algorithms
[Overview flowchart: Start → gradient-based algorithms; response surface method (RSM) with global and local adaptive RSM; biological algorithms (genetic algorithms, evolutionary strategies & particle swarm optimization); Pareto optimization]