Nonlinear balanced model residualization via neural networks Juergen Hahn.

1 Nonlinear balanced model residualization via neural networks Juergen Hahn

2 Overview Model reduction for linear systems Nonlinear model reduction Residualization using neural nets Choosing a training set Extension to systems that are not controllable/observable Conclusions

3 Introduction Modern control algorithms require models Models are often nonlinear Computation time is an important factor Reasons for model reduction: –decrease model size –gain insight into observable dynamics

4 Model reduction for linear systems Find a balancing transformation T such that T W_c T^T = T^-T W_o T^-1 = diag(σ_1, …, σ_n), where W_c and W_o are the controllability and observability gramians and σ_i are the Hankel singular values
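A balancing transformation can be sketched with the square-root method; the system matrices below are illustrative, not from the presentation.

```python
# A minimal sketch of computing a balancing transformation for a stable
# linear system via the square-root method; matrices are made up.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balancing_transformation(A, B, C):
    """Return (T, hsv) with T Wc T^T = T^-T Wo T^-1 = diag(hsv)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    Lc = cholesky(Wc, lower=True)                  # Wc = Lc Lc^T
    Lo = cholesky(Wo, lower=True)                  # Wo = Lo Lo^T
    U, hsv, Vt = svd(Lo.T @ Lc)                    # hsv: Hankel singular values
    T = np.diag(hsv ** -0.5) @ U.T @ Lo.T          # balancing transformation
    return T, hsv

# small illustrative system (not from the slides)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
T, hsv = balancing_transformation(A, B, C)
```

In the balanced coordinates z = Tx, each state is exactly as controllable as it is observable, so the Hankel singular values rank the states for reduction.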

5 Model reduction for linear systems Retaining the states corresponding to the largest Hankel singular values is an optimal model reduction procedure This can be done in two ways: –Truncation –Residualization

6 Model reduction for linear systems Divide the system up: x1' = A11 x1 + A12 x2 + B1 u, x2' = A21 x1 + A22 x2 + B2 u, y = C1 x1 + C2 x2 + D u Truncated system: x1' = A11 x1 + B1 u, y = C1 x1 + D u

7 Model reduction for linear systems Set the derivative x2' equal to zero and solve for x2 Residualized system: x1' = (A11 − A12 A22^-1 A21) x1 + (B1 − A12 A22^-1 B2) u, y = (C1 − C2 A22^-1 A21) x1 + (D − C2 A22^-1 B2) u
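The two options can be contrasted on a partitioned linear system; the matrices below are illustrative. A useful check is that residualization preserves the steady-state (DC) gain exactly, while truncation in general does not.

```python
# Sketch contrasting truncation and residualization on a partitioned
# linear system; the matrices are made up, not from the presentation.
import numpy as np

def truncate(A, B, C, D, k):
    """Keep the first k states; drop the rest."""
    return A[:k, :k], B[:k], C[:, :k], D

def residualize(A, B, C, D, k):
    """Set x2' = 0, solve x2 = -A22^-1 (A21 x1 + B2 u), substitute back."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    B1, B2 = B[:k], B[k:]
    C1, C2 = C[:, :k], C[:, k:]
    A22i = np.linalg.inv(A22)
    return (A11 - A12 @ A22i @ A21, B1 - A12 @ A22i @ B2,
            C1 - C2 @ A22i @ A21, D - C2 @ A22i @ B2)

A = np.array([[-1.0, 0.2, 0.1],
              [0.1, -3.0, 0.2],
              [0.0, 0.1, -5.0]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.5, 0.2]])
D = np.array([[0.0]])

Ar, Br, Cr, Dr = residualize(A, B, C, D, k=1)
```

This is why residualization is the better approximation at low frequencies: the fast states are kept at their quasi-steady-state values instead of being discarded.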

8 Nonlinear model reduction Linear gramians are insufficient for nonlinear models Nonlinear gramians cannot be calculated in general Use empirical gramians, which extend linear gramians to arbitrary systems Valid for nonlinear models of the form x' = f(x, u), y = h(x)

9 Nonlinear model reduction Empirical controllability gramian –Φ^ilm(t) = (x^ilm(t) − x0^ilm)(x^ilm(t) − x0^ilm)^T –x^ilm(t) is the state of the nonlinear system corresponding to the impulsive input u(t) = c_m T_l e_i δ(t)

10 Nonlinear model reduction Empirical observability gramian –Ψ^lm_ij(t) = (y^ilm(t) − y0^ilm)^T (y^jlm(t) − y0^jlm) –y^ilm(t) is the output of the nonlinear system corresponding to the initial condition x(0) = c_m T_l e_i.
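The controllability gramian estimate can be sketched as a simulation average. The sketch below fixes the excitation directions T_l to the identity for brevity, realizes each impulse as a state jump (valid for input-affine dynamics), and tests on a linear system, where the empirical gramian reduces to the exact Lyapunov-equation gramian.

```python
# Sketch of estimating the empirical controllability gramian by simulation.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_lyapunov

def empirical_wc(f, B, x0, sizes=(0.25, 0.5, 1.0), tf=20.0, npts=4001):
    """Average the time integral of Phi(t) = (x(t)-x0)(x(t)-x0)^T over
    impulses u = c_m e_i delta(t), each realized as the state jump
    x(0) = x0 + c_m B[:, i]."""
    n, p = B.shape
    ts = np.linspace(0.0, tf, npts)
    dt = ts[1] - ts[0]
    Wc = np.zeros((n, n))
    for i in range(p):
        for c in sizes:
            sol = solve_ivp(f, (0.0, tf), x0 + c * B[:, i],
                            t_eval=ts, rtol=1e-10, atol=1e-12)
            dx = sol.y - x0[:, None]
            Wc += (dx @ dx.T) * dt / c**2   # rectangle-rule time integral
    return Wc / (p * len(sizes))

# linear test system: the empirical gramian should match the exact one
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
W_emp = empirical_wc(lambda t, x: A @ x, B, np.zeros(2))
```

For a genuinely nonlinear f the same loop applies unchanged; the impulse sizes c_m then probe the system over the operating range of interest.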

11 Nonlinear model reduction Reduction procedure: –Obtain empirical gramians via simulation –Balance empirical gramians –Transform system via projection and reduce it using truncation or residualization

12 Nonlinear model reduction Truncation method: set the unimportant transformed states z2 to zero Residualization method: set z2' = 0 and solve the resulting algebraic equations 0 = f2(z1, z2, u) for z2

13 Nonlinear model reduction Nonlinear balanced truncation and residualization are excellent model approximations Residualization is the better approximation, but solution of a DAE system is required Not all simulation environments (like MATLAB) provide a robust DAE solver

14 Residualization using neural nets Use a neural network to approximate the algebraic equations –the residualized equations contribute less to the system behavior and are therefore ideal for approximation

15 Residualization using neural nets Choice of neural net: –feedforward networks are sufficient to approximate algebraic equations –use remaining states and manipulated variables as inputs for the network –the outputs will be the variables that the algebraic equations were solved for in the original case

16 Residualization using neural nets

17 Residualization using neural nets Choice of neural net: –parameters that can be adapted for specific problems: structure and number of nodes of the hidden layer(s); transfer functions within each node –choice of an appropriate training set: set size; dynamics reflected in the training set (how is the training set collected?)

18 Choosing a training set Consider the following neural net: –feedforward neural net –one hidden layer –5 nodes in the hidden layer –transfer functions (TF) in hidden layer are hyperbolic tangents
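The configuration above can be sketched in plain NumPy. The target function below is a made-up algebraic map standing in for the residualized equations, and full-batch gradient descent is an assumption; the presentation does not specify the training algorithm.

```python
# Sketch of the slide's network: one hidden layer, 5 tanh nodes, linear output.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(scale=0.5, size=(5, 2)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(1, 5)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1.T + b1)    # hidden layer (hyperbolic tangent TFs)
    return H @ W2.T + b2, H       # linear output layer

# made-up algebraic target standing in for the residualized equations
X = rng.uniform(-1.0, 1.0, size=(200, 2))   # inputs: retained state, input u
Y = 0.5 * X[:, :1] + 0.3 * X[:, 1:] ** 2

def mse():
    return float(np.mean((forward(X)[0] - Y) ** 2))

loss_before = mse()
lr = 0.1
for _ in range(3000):                    # full-batch gradient descent
    Yhat, H = forward(X)
    E = 2.0 * (Yhat - Y) / len(X)        # dLoss/dYhat
    dH = (E @ W2) * (1.0 - H ** 2)       # backprop through tanh
    W2 -= lr * E.T @ H;  b2 -= lr * E.sum(axis=0)
    W1 -= lr * dH.T @ X; b1 -= lr * dH.sum(axis=0)
loss_after = mse()
```

The network inputs are the retained states and manipulated variables, and the outputs are the variables the algebraic equations were solved for, matching slide 15.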

19 Choosing a training set Approach I: –evaluate the set of algebraic equations on each point of a uniform grid spanned by the inputs to the neural net
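Approach I can be sketched as follows; the algebraic equation g is a made-up one-equation example (not from the presentation) solved at each point of a uniform grid over the network inputs.

```python
# Sketch of Approach I: tabulate an assumed algebraic equation
# 0 = g(x2; x1, u) on a uniform grid over the neural-net inputs x1 and u.
import numpy as np
from itertools import product
from scipy.optimize import fsolve

def g(x2, x1, u):
    # hypothetical residualized algebraic equation (illustrative only)
    return -2.0 * x2 + x1 ** 2 + 0.5 * u

grid_x1 = np.linspace(-1.0, 1.0, 11)   # retained state
grid_u = np.linspace(0.0, 1.0, 11)     # manipulated variable

inputs, targets = [], []
for x1, u in product(grid_x1, grid_u):
    x2 = fsolve(g, x0=0.0, args=(x1, u))[0]   # solve at this grid point
    inputs.append((x1, u))
    targets.append(x2)
```

The grid is simple to generate but, as the example below shows, it need not reflect the state combinations the system actually visits.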

20 Example 2 reactors in series one reaction: A-->B volume, concentration and energy balance for each reactor (6 states) 2 inputs (heat transferred from first reactor, valve position at the outlet of 2nd reactor) 2 outputs (volume and temperature in the 2nd reactor)

21 Example An 8% change in the valve position results in the following change

22 Choosing a training set Approximation of the reduced model can be improved by a more ‘real life’ training set Approach II: –evaluate the set of algebraic equations along system trajectories

23 Choosing a training set Approach II: –simulate system trajectories for all possible combinations of inputs –start at steady state and move the inputs –let the system reach a new steady state –switch back to the original inputs and let the system return to the original steady state –solve the algebraic equations along these trajectories
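The steps above can be sketched on a made-up one-state reduced model (the same illustrative algebraic equation as before, which here happens to have a closed-form solution): step the input up, let the system settle, switch back, and record the algebraic state along both trajectory segments.

```python
# Sketch of Approach II on an assumed reduced model x1' = -x1 + u with the
# residualized algebraic state x2 solved from 0 = -2*x2 + x1**2 + 0.5*u.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, u):
    return [-x[0] + u]            # assumed retained dynamics

samples = []
u_nominal, u_step = 0.0, 1.0
x = [0.0]                         # start at the nominal steady state
for u in (u_step, u_nominal):     # move the input, then switch back
    sol = solve_ivp(rhs, (0.0, 8.0), x, args=(u,),
                    t_eval=np.linspace(0.0, 8.0, 81))
    for x1 in sol.y[0]:
        x2 = (x1 ** 2 + 0.5 * u) / 2.0   # algebraic state along trajectory
        samples.append((x1, u, x2))
    x = [sol.y[0, -1]]            # continue from the end of the last segment
```

Unlike the uniform grid, these samples lie exactly on the state-input combinations the reduced system visits, which is what makes the trained network more accurate in closed-loop use.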

24 Choosing a training set Approach II: Example

25 Choosing a training set Approach III: –similar to Approach II, but does not solve the algebraic equations along the trajectories; instead it uses the data from the states themselves –this approximates the real system and not the residualized system –even less computation is required for training than for the 2nd approach

26 Comparing the approaches Same example as before, but analyze the error between approximation and original model

27 Comparing the approaches The models based upon the 2nd and 3rd training sets work much better than the one based upon the first Basing the training set on the real trajectories instead of on the algebraic equations along these trajectories slightly improves the approximation The results obtained with the 3rd approach are comparable to balanced residualization

28 Extension to systems that are not controllable/observable Most states in large models are neither observable nor controllable This could lead to cases where the neural net would have many more outputs than inputs, which is undesirable Modify approach for these types of models

29 Extension to systems that are not controllable/observable Reduce only the part of the model that is observable and controllable via neural net (or maybe even only the most important subset of this)

30 Extension to systems that are not controllable/observable Example: Distillation Column –binary mixture with constant volatility –30 trays + reboiler + condenser (32 states) –reflux ratio can be changed (1 input) –distillate concentration is the only measurement Analysis has shown that the first five states contribute by far the most to the input-output behavior

31 Extension to systems that are not controllable/observable Reduce system such that –3 states are retained –the neural network is based upon these three states and the one manipulated variable and has two outputs –the rest of the model is truncated

32 Extension to systems that are not controllable/observable A 10% change in the reflux ratio results in the following response

33 Conclusions Nonlinear model residualization was analyzed Replacing the residualized part with a feedforward neural network enables solution by an ODE solver The choice of training set is important The method has been extended to systems that are not completely observable/controllable

