1 Self-optimizing control: Simple implementation of optimal operation. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Benelux control meeting, March 2008. PART 1

2 Outline
– Control structure design: hierarchical time-scale decomposition; procedure for control structure design
– Implementation of optimal operation: centralized or hierarchical?
– What should we control?
– Control of active constraints; back-off
– Control of unconstrained variables: looking for "magic" self-optimizing variables; the maximum gain rule; examples

3 Idealized view of control ("Ph.D. control")

4 Practice: Tennessee Eastman challenge problem (Downs, 1991) ("PID control")

5 How do we design a control system for a complex plant? How do we deal with complexity? Where do we start? What should we control, and why? Etc.

6 Alan Foss ("Critique of chemical process control theory", AIChE Journal, 1973): "The central issue to be resolved... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it." Carl Nett (1989): "Minimize control system complexity subject to the achievement of accuracy specifications in the face of uncertainty."

7 Control structure design: not the tuning and behavior of each control loop, but rather the control philosophy of the overall plant, with emphasis on the structural decisions:
– Selection of controlled variables ("outputs")
– Selection of manipulated variables ("inputs")
– Selection of (extra) measurements
– Selection of control configuration (structure of the overall controller that interconnects the controlled, manipulated and measured variables)
– Selection of controller type (LQG, H-infinity, PID, decoupler, MPC, etc.)
S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, Vol. 28, 219-234 (2004)

8 That is: control structure design includes all the decisions we need to make to get from "PID control" to "Ph.D. control".

9 Process control: "plantwide control" = "control structure design for a complete chemical plant". Large systems; each plant is usually different, so modeling is expensive; slow processes, so computation time is no problem; structural issues are important (what to control? extra measurements, pairing of loops). Previous work on plantwide control:
– Page Buckley (1964): chapter on "Overall process control" (still industrial practice)
– Greg Shinskey (1967): process control systems
– Alan Foss (1973): control system structure
– Bill Luyben et al. (1975-): case studies; "snowball effect"
– George Stephanopoulos and Manfred Morari (1980): synthesis of control structures for chemical processes
– Ruel Shinnar (1981-): "dominant variables"
– Jim Downs (1991): Tennessee Eastman challenge problem
– Larsson and Skogestad (2000): review of plantwide control

10 Control structure selection issues are identified as important also in other industries. Professor Gary Balas (Minnesota) at ECC'03, about flight control at Boeing: "The most important control issue has always been to select the right controlled variables --- no systematic tools used!"

11 Main objectives of a control system: 1. Stabilization. 2. Implementation of acceptable (near-optimal) operation. Are these objectives conflicting? Usually not, because of the different time scales: stabilization acts on the fast time scale. Stabilization doesn't "use up" any degrees of freedom, since the reference value (setpoint) remains available for the layer above; but it does "use up" part of the time window (frequency range).

12 Dealing with complexity. Main simplification: hierarchical decomposition of process control. [Diagram: RTO sets c_s = y_1s for MPC; MPC sets y_2s for PID; PID acts on u (valves).]
– RTO layer: objective min J (economics); MV = y_1s
– Supervisory (MPC) layer: follow the path; CV = y_1, MV = y_2s
– Regulatory (PID) layer: stabilize and avoid drift; CV = y_2, MV = u
The controlled variables (CVs) interconnect the layers.

13 Example: bicycle riding (hierarchical decomposition). Note: design starts from the bottom.
– Regulatory control: first we need to learn to stabilize the bicycle. CV = y_2 = tilt of the bike; MV = body position.
– Supervisory control: then we need to follow the road. CV = y_1 = distance from the right-hand side; MV = y_2s. Usually a constant-setpoint policy is OK, e.g. y_1s = 0.5 m.
– Optimization: which road should you follow? Temporary (discrete) changes in y_1s.

14 Regulatory control (seconds). Purpose: "stabilize" the plant by controlling selected "secondary" variables (y_2) such that the plant does not drift too far away from its desired operation. Use simple single-loop PI(D) controllers. Note: the regulatory layer should be independent of changes in the overall objectives.

15 Supervisory control (minutes). Purpose: keep the primary controlled variables (c = y_1) at their desired values, using as degrees of freedom the setpoints y_2s for the regulatory layer. Process industry: before, many different "advanced" controllers, including feedforward, decouplers, overrides, cascades, selectors, Smith predictors, etc.; the trend is model predictive control (MPC) used as a unifying tool. Structural issue: what primary variables c = y_1 should we control?

16 Local optimization (hour). Purpose: minimize the cost function J, identify the active constraints, and recompute optimal setpoints y_1s for the controlled variables. Status: done manually by clever operators and engineers. Trend: real-time optimization (RTO) based on a detailed nonlinear steady-state model. Issues: optimization is not reliable; a nonlinear steady-state model is needed; modelling is time-consuming and expensive.

17 Stepwise procedure for control structure design.
I. TOP-DOWN:
– Step 1. Degrees of freedom
– Step 2. Operational objectives (economics)
– Step 3. Optimal operation (optimal path)
– Step 4. Implementation of optimal operation (stay on the optimal path): what to control? (primary CVs c = y_1)
II. BOTTOM-UP (structure the control system):
– Step 5. Regulatory control layer (PID), "stabilization": what more to control? (secondary CVs y_2)
– Step 6. Supervisory control layer (MPC): decentralization?
– Step 7. Optimization layer (RTO): can we do without it?
S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, Vol. 28, 219-234 (2004)

18 Step 4. Implementation of optimal operation. Outline: centralized or hierarchical? What should we control (c)? 1. Control active constraints (possible back-off). 2. Remaining unconstrained DOFs: self-optimizing control (maximum gain rule, examples).

19 Implementation of optimal operation. Consider the static optimization problem: min_u J(u, x, d), subject to the model equations f(u, x, d) = 0 and the operational constraints g(u, x, d) ≤ 0, giving u_opt(d*). Problem: usually we cannot use an open-loop policy (u_opt constant, or on a given trajectory) because the disturbances d change. How should we adjust the degrees of freedom (u)?
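The dependence of u_opt on d can be illustrated with a toy version of this problem; the cost function, constraint, and numbers below are invented for illustration, not taken from the talk:

```python
from scipy.optimize import minimize

# Toy static optimization: J(u, d) = (u - d)^2 + u, subject to u - 1 <= 0.
# The unconstrained optimum is u = d - 0.5; the constraint becomes
# active for large disturbances d.
def u_opt(d):
    res = minimize(lambda u: (u[0] - d) ** 2 + u[0],
                   x0=[0.0],
                   method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda u: 1.0 - u[0]}])
    return res.x[0]

u_nominal = u_opt(0.0)    # unconstrained optimum, about -0.5
u_disturbed = u_opt(3.0)  # constraint active, about 1.0
```

Because u_opt moves with d (and the active set can even change), a fixed open-loop input is suboptimal; the rest of the talk is about which feedback variables c recover near-optimality.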

20 Implementation of optimal operation (cannot keep u_opt constant). "Obvious" solution: centralized optimizing control, i.e. estimate d from measurements and recompute u_opt(d). Problem: too complicated (requires a detailed model and a description of the uncertainty).

21 In practice: hierarchical decomposition with separate layers. What should we control?

22 Self-optimizing control: when constant setpoints are OK.

23 What c's should we control? The optimal solution is usually at constraints; that is, most of the degrees of freedom are used to satisfy the "active constraints", g(u, d) = 0.
1. Control active constraints! c_s = value of the active constraint. Implementation of active constraints is usually "obvious", but may need "back-off" (safety limit).
2. What more should we control? For the remaining unconstrained degrees of freedom u (sometimes none are left, e.g. in an LP problem): find associated "self-optimizing" variables c.

24 Example: runner. Step 1. One degree of freedom (u): power. Step 2. Cost to be minimized: J = T. Constraints: u ≤ u_max; follow the track; fitness (body model). Step 3. Optimal operation: minimize J with respect to u(t). A very complex problem (model); however, from experience we can guess the solution.

25 Optimal operation of the runner (u = power; minimize J = T). Two main cases (modes):
1. Short race (100 m, 200 m): "run as fast as you can". Optimal operation is constrained by maximum power. Control: active constraint control, operate at max power ("obvious").
2. Long race (800 m to marathon): "must be smarter". Optimal operation is unconstrained. Control: operate at the optimal trade-off between "fast and slow" (not obvious what to control).

26 What should we control? Sprinter. Optimal operation of the sprinter (100 m), J = T. One input: "power/speed". Active constraint control: maximum speed ("no thinking required").

27 What should we control? Marathon. Optimal operation of the marathon runner, J = T. No active constraints. Any "self-optimizing" variable c (to control at a constant setpoint)?
– c_1 = distance to the leader of the race
– c_2 = speed
– c_3 = heart rate
– c_4 = level of lactate in the muscles

28 The difficult unconstrained variables. Rule: control variables that give "self-optimizing control". Self-optimizing control: constant setpoints c_s give "near-optimal operation" (= acceptable loss L for the expected disturbances d and implementation errors n). Acceptable loss ⇒ self-optimizing control. Old idea, Morari et al. (1980): "We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions." S. Skogestad, "Plantwide control: The search for the self-optimizing control structure", J. Proc. Control, 10, 487-507 (2000).

29 Further examples of self-optimizing control: marathon runner, central bank, cake baking, business systems (KPIs), investment portfolio, biology, chemical process plants (optimal blending of gasoline). Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control). S. Skogestad, "Near-optimal operation by self-optimizing control: From process control to marathon running and business systems", Computers and Chemical Engineering, 29 (1), 127-137 (2004).

30 More on the examples:
– Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%).
– Cake baking: J = nice taste, u = heat input, c = temperature (200 °C).
– Business: J = profit, c = a key performance indicator (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending. Optimal values obtained by "benchmarking".
– Investment (portfolio management): J = profit, c = fraction of investment in shares (50%).
– Biological systems: "self-optimizing" controlled variables c have been found by natural selection. We need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.

31 Application: process control

32 Step 1: DOFs for the HDA process. [Flowsheet with degrees of freedom numbered 1-14.] Assume given column pressures. Feed: 1, 2; heat exchangers: 3, 4, 6; splitter: 5, 7; compressor: 8; distillation: 2 for each column. Conclusion: 14 steady-state DOFs (u_0).

33 Optimal objective for process control. Step 2. Typical cost function (economics): J = cost of feed + cost of energy - value of products. Step 3. Optimal operation (pseudo steady-state assumption): min_{u_ss} J(u_ss, x, d), subject to the model equations f(u_ss, x, d) = 0 and the operational constraints g(u_ss, x, d) ≤ 0.

34 Step 4. Optimal operation for process control: minimize J = cost of feed + cost of energy - value of products. Two main cases (modes), depending on market conditions:
1. Given feed: the amount of products is then usually indirectly given and J = cost of energy ("maximize energy efficiency"). Optimal operation is then usually unconstrained. Control: operate at the optimal trade-off (not obvious how to do this and what to control).
2. Feed free: products are usually much more valuable than the feed, and energy costs are small ("maximize production"). Optimal operation is then usually constrained. Control: active constraint control, operate at the bottleneck ("obvious").

35 Optimal operation of a distillation column. Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V. Cost to be minimized (economics): J = -P, where P = p_D D + p_B B - p_F F - p_V V (value of products - cost of feed - cost of energy for heating and cooling). Constraints:
– Purity of D: for example x_D,impurity ≤ max
– Purity of B: for example x_B,impurity ≤ max
– Flow constraints: min ≤ D, B, L, etc. ≤ max
– Column capacity (flooding): V ≤ V_max, etc.
– Pressure: 1) p given; 2) p free: p_min ≤ p ≤ p_max
– Feed: 1) F given; 2) F free: F ≤ F_max
Optimal operation: minimize J with respect to the steady-state DOFs.

36 What should we control? Distillation with given feed: 2 degrees of freedom (energy input V and product split D/B). Example: the valuable product 1 is methanol with max 0.5% water; the cheap product 2 (byproduct) is water with max 0.1% methanol. What 2 variables should we control?
1. The purity spec is always active for the valuable product: avoid product "give-away" ("sell water as methanol"); this also saves energy. Conclusion: control the purity of valuable product 1!
2. Normally one unconstrained degree of freedom remains: the trade-off between energy and product recovery. Self-optimizing variable? Composition of cheap product 2?

37 What should we control (c)? More details. 1. Control active constraints (need back-off). 2. Remaining unconstrained DOFs (if any): control "self-optimizing" variables. How do we find them? The maximum gain rule.

38 1. The "easy" active constraints. [Figure: cost J versus the constrained variable c; the minimum J_opt is at c_opt = c_min.] It is "obvious" that we want to keep (control) c at c_opt.

39 1. The "easy" active constraints. [Figure: J versus the constrained variable c, with J_opt at c_opt.] "Obvious": we want to keep (control) c at c_opt = c_constraint. 1) If c = u = manipulated input (MV): implementation is trivial, keep u at its constraint (u_min or u_max). 2) If c = y = output variable (CV): use u to control y at its constraint, BUT a back-off (safety margin) must be introduced.

40 Back-off for output constraints (c_s = c_min + backoff; the backoff gives a loss in cost J).
a) If the constraint on c can be violated dynamically (only the average matters): required backoff = "bias" (steady-state measurement error for c).
b) If the constraint on c cannot be violated dynamically ("hard constraint"): required backoff = "bias" + maximum dynamic control error.
Rule for control of hard output constraints: "squeeze and shift"! Reduce the variance ("squeeze", which improved control can achieve) and "shift" the setpoint c_s toward the constraint to reduce the backoff.
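A minimal numeric sketch of the back-off rule for a hard minimum constraint; the bias, standard deviation, and the "3 sigma" estimate of the maximum dynamic control error are all invented for illustration:

```python
# Setpoint back-off from a hard minimum constraint:
# backoff = steady-state bias + max dynamic control error, here taken
# as k standard deviations of the controlled variable.
def setpoint(c_min, bias, sigma, k=3.0):
    return c_min + bias + k * sigma

poor_control = setpoint(c_min=0.950, bias=0.005, sigma=0.010)   # 0.985
after_squeeze = setpoint(c_min=0.950, bias=0.005, sigma=0.002)  # 0.961
# "Squeeze" (better control reduces sigma) allows the "shift" of the
# setpoint closer to the constraint, reducing the economic loss.
```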

41 41 « SQUEEZE AND SHIFT » © Richalet SHIFT SQUEEZE Hard active constraints

42 2. The difficult unconstrained variables. [Figure: cost J versus the selected (unconstrained) controlled variable c, with minimum J_opt at c_opt.] How do we find them (the CVs)? Intuition: "dominant variables" (Shinnar). Rijnsdorp (1990): "... requires good process insight and control structure know-how." Is there any systematic procedure?

43 Optimal operation, unconstrained degrees of freedom. [Figure: cost J versus the controlled variable c.] Two problems. 1. The optimum moves because of disturbances d: c_opt(d), giving a loss.

44 Optimal operation, unconstrained degrees of freedom. [Figure: J versus c.] Two problems: 1. The optimum moves because of disturbances d: c_opt(d). 2. Implementation error: c = c_opt + n, giving a loss.

45 Effect of the implementation error on cost ("problem 2"), unconstrained degrees of freedom. [Figure: a flat optimum (good) versus a sharp optimum (bad).]

46 Candidate controlled variables (unconstrained degrees of freedom). We are looking for some "magic" variables c to control. What properties do they have?
– Intuition 1: they should have a small optimal variation Δc_opt(d), since we are going to keep them constant!
– Intuition 2: they should have a small "implementation error" n.
– Intuition 3: they should be sensitive to the inputs u (the remaining unconstrained degrees of freedom); that is, the gain G_0 from u to c should be large. A large gain gives a flat optimum in c.
Everything can be combined into the "maximum gain rule": maximize the scaled gain G = G_0 / span(c), where span(c) combines the optimal variation and the implementation error.
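The scaled-gain comparison can be sketched for a single input as follows; all gains, optimal variations, and noise levels are made up for illustration:

```python
# Scaled gain G = |G0| / span(c), with span(c) = sum_i |dc_opt(d_i)| + |n|.
def scaled_gain(G0, dc_opt, n):
    span = sum(abs(x) for x in dc_opt) + abs(n)
    return abs(G0) / span

# Candidate A: large raw gain but large optimal variation and noise.
gA = scaled_gain(G0=10.0, dc_opt=[2.0, 1.0], n=1.0)  # 10 / 4 = 2.5
# Candidate B: smaller raw gain, but its optimum barely moves.
gB = scaled_gain(G0=4.0, dc_opt=[0.1, 0.1], n=0.2)   # 4 / 0.4 = 10.0
# The maximum gain rule prefers candidate B despite its smaller raw gain.
```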

47 Justification for why a large gain is good (unconstrained degrees of freedom). [Block diagram: the optimizer sets c_s = c_opt; the controller adjusts u to keep the measurement c_m = c + n at c_s; the plant maps u and the disturbance d to c.] We want the slope (= gain G_0 from u to c) to be large, which corresponds to a flat optimum in c, and we want the noise n to be small.

48 Maximum gain (MSV) rule (unconstrained degrees of freedom). Maximum gain rule (Skogestad and Postlethwaite, 1996): control variables c with a large scaled gain σ_min(G), where σ_min(G) is the minimum singular value (MSV) of the appropriately scaled steady-state gain matrix G from u to c. σ_min(G) is called the "Morari resiliency index" (MRI) by Luyben.
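A sketch of the MSV computation with NumPy; the two scaled gain matrices below are invented to contrast a well-conditioned CV set with a nearly singular one:

```python
import numpy as np

# Minimum singular value of a scaled steady-state gain matrix G (u -> c).
def msv(G):
    return np.linalg.svd(G, compute_uv=False)[-1]

G_a = np.array([[2.0, 0.1], [0.1, 2.0]])    # well-conditioned CV set
G_b = np.array([[2.0, 1.99], [1.99, 2.0]])  # nearly singular CV set
# msv(G_a) = 1.9 while msv(G_b) = 0.01: the rule prefers set a, since a
# tiny minimum singular value means some input direction barely moves c.
```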

49 Proof of the "maximum gain rule" (local analysis of the cost J around u_opt, unconstrained degrees of freedom). Detailed proof: I.J. Halvorsen et al., "Optimal selection of controlled variables", Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).

50 Maximum gain rule for a scalar system (unconstrained degrees of freedom). J_uu is the Hessian for the effect of the u's on the cost. For a scalar system, J_uu does not matter: it is common to all candidate variables and does not affect the ranking.
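For the scalar case, the local worst-case loss (in the spirit of Halvorsen et al., 2003) can be written L = (J_uu/2)(span(c)/|G_0|)^2; since J_uu multiplies every candidate equally, it drops out of the ranking. A hedged sketch with invented numbers:

```python
# Local worst-case loss for one unconstrained DOF (a sketch of the
# scalar case, not the full multivariable result):
#   L = 0.5 * Juu * (span(c) / |G0|)**2
def worst_case_loss(Juu, G0, span):
    return 0.5 * Juu * (span / abs(G0)) ** 2

loss_A = worst_case_loss(Juu=2.0, G0=10.0, span=4.0)  # 0.16
loss_B = worst_case_loss(Juu=2.0, G0=4.0, span=0.4)   # 0.01
# Ranking by loss agrees with ranking by the scaled gain |G0|/span,
# independently of Juu.
```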

51 Maximum gain rule in words: select controlled variables c for which the gain G_0 (= the "controllable range") is large compared to the span of c (= the sum of the optimal variation and the control error). (Unconstrained degrees of freedom.)

52 Toy example

53 Toy example: single measurements. Constant input: c = y_4 = u. We want loss < 0.1; otherwise, consider variable combinations.

54 Use of the maximum gain rule (unconstrained degrees of freedom). We want to find good candidate controlled variables c. Simplest: control individual "measurements" y (which also include the inputs u).
A. Maximum gain rule, with linearized models: all measurements y = G^y u; selected controlled variables (n_c = n_u): c = G_0 u = H G^y u. We want σ_min(G) large, with G = S_1 H G^y J_uu^{-1/2}. There are many candidate combinations; efficient branch-and-bound methods exist for maximizing σ_min(G) (Cao and Kariwala, 2008).
B. If no single-measurement combination is acceptable: consider variable combinations c = H y, where H is a "full" combination matrix.
C. Final choice between good candidates: nonlinear "brute force" evaluation of loss and feasibility, plus controllability considerations.
Y. Cao, V. Kariwala, "Bidirectional branch and bound for controlled variable selection. Part I. Principles and minimum singular value criterion", Comp. Chem. Engng., in press (2008).
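An exhaustive-search stand-in for the subset selection in step A (the toy measurement gain matrix is invented; a real implementation would first apply the S_1 and J_uu^{-1/2} scalings, and the Cao-Kariwala branch-and-bound avoids enumerating all subsets):

```python
import itertools
import numpy as np

# Pick the nu single measurements whose rows of Gy give the gain
# submatrix with the largest minimum singular value.
def best_subset(Gy, nu):
    best_rows, best_sv = None, -1.0
    for rows in itertools.combinations(range(Gy.shape[0]), nu):
        sv = np.linalg.svd(Gy[list(rows), :], compute_uv=False)[-1]
        if sv > best_sv:
            best_rows, best_sv = rows, sv
    return best_rows, best_sv

Gy = np.array([[1.0, 0.0],
               [0.9, 0.89],
               [0.0, 1.0],
               [0.5, 0.5]])
rows, sv = best_subset(Gy, nu=2)  # picks the two "orthogonal" rows, 0 and 2
```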

55 C. "Brute-force" procedure for selecting (primary) controlled variables.
– Step 1: Determine the DOFs for optimization.
– Step 2: Define optimal operation J (cost and constraints).
– Step 3: Identify the important disturbances.
– Step 4: Optimize (nominally and with disturbances).
– Step 5: Identify candidate controlled variables (use active constraint control).
– Step 6: Evaluate the loss with constant setpoints for the alternative controlled variables; check in particular for feasibility.
– Step 7: Evaluate and select (including a controllability analysis).
Case studies: Tennessee Eastman, propane-propylene splitter, recycle process, heat-integrated distillation. S. Skogestad, "Plantwide control: The search for the self-optimizing control structure", J. Proc. Control, 10, 487-507 (2000).

56 Example: recycle plant (Luyben, Yu, etc.). [Flowsheet with five manipulated variables.] Given feedrate F_0 and column pressure: dynamic DOFs N_m = 5; column levels N_0y = 2; steady-state DOFs N_0 = 5 - 2 = 3.

57 Recycle plant: optimal operation. [Figure.] One remaining unconstrained degree of freedom.

58 Control of the recycle plant: conventional structure ("two-point": x_D). [Flowsheet with level (LC) and composition (XC) loops.] Control the active constraints (M_r = max and x_B = 0.015) plus x_D.

59 A. Maximum gain rule: steady-state gain. [Table of scaled gains for the candidate controlled variables.] Luyben rule: not promising economically. Conventional (x_D): looks good.

60 How did we find the gains in the table? (IMPORTANT!)
1. Find the nominal optimum.
2. Find the (unscaled) gain G_0 from the input to the candidate outputs: Δc = G_0 Δu. In this case there is only a single unconstrained input (DOF), chosen as u = L. Obtain the gain G_0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints stay constant (the bottom composition is fixed in this case).
3. Find the span for each candidate variable. For each disturbance d_i, make a typical change and reoptimize to obtain the optimal ranges Δc_opt(d_i). For each candidate output, obtain (estimate) the control error (noise) n. The expected variation for c is then span(c) = Σ_i |Δc_opt(d_i)| + |n|.
4. Obtain the scaled gain, G = |G_0| / span(c).
5. Note: the absolute value (the vector 1-norm) is used here to "sum up" and get the overall span. Alternatively, the 2-norm could be used, which can be viewed as putting less emphasis on the worst case. As an example, assume that the only contribution to the span is the implementation/measurement error, and that the variable we are controlling (c) is the average of 5 measurements of the same y, i.e. c = Σ y_i / 5, where each y_i has a measurement error of 1, i.e. n_yi = 1. With the absolute value (1-norm), the contribution to the span from the implementation (measurement) error is span = Σ |n_yi| / 5 = 5·1/5 = 1, whereas with the 2-norm, span = sqrt(5·(1/5)^2) = 0.447. The latter is more reasonable, since we expect the overall measurement error to be reduced when taking the average of many measurements. In any case, the choice of norm is an engineering decision, so there is not really one that is "right" and one that is "wrong". We often use the 2-norm for mathematical convenience, but there are also physical justifications (as just given!).
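The norm comparison in point 5 can be checked directly; this just reproduces the slide's 5-measurement example:

```python
import math

# c is the average of 5 measurements of the same y; each measurement
# error n_yi = 1 contributes n_yi / 5 to c.
contribs = [1.0 / 5] * 5

span_1norm = sum(abs(x) for x in contribs)             # 5 * 0.2 = 1.0
span_2norm = math.sqrt(sum(x ** 2 for x in contribs))  # sqrt(5)/5 ~ 0.447
# The 2-norm reflects that averaging independent errors reduces the
# effective noise; which norm to use is an engineering decision.
```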

61 C. Check: "brute force" loss evaluation for a disturbance in F_0. [Table: loss with nominally optimal setpoints for M_r, x_B and c, comparing the Luyben rule with the conventional structure.]

62 C. Check: "brute force" loss evaluation for the implementation error. [Table: loss with nominally optimal setpoints for M_r, x_B and c, including the Luyben rule.]

63 B. Optimal measurement combination. One unconstrained variable (#c = 1) and one (important) disturbance F_0 (#d = 1). An "optimal" combination requires 2 "measurements" (#y = #u + #d = 2), for example c = h_1 L + h_2 F. BUT: not much is gained compared to control of a single variable (e.g. L/F or x_D).

64 Conclusion: control of the recycle plant (assumption: minimize energy V). Active constraint M_r = M_r,max; active constraint x_B = x_B,min; L/F constant (self-optimizing): easier than "two-point" control.

65 Summary, unconstrained degrees of freedom: we are looking for "magic" variables to keep at constant setpoints. How can we find them systematically?
A. Start with the maximum gain (minimum singular value) rule.
B. If there are no good single candidates: consider linear combinations c = H y (matrix H).
C. Finally: "brute force" evaluation of the most promising alternatives. Evaluate the loss when the candidate variables c are kept constant; in particular, there may be a problem with feasibility. Also: consider controllability.

66 Conclusion. Self-optimizing control (c = c_s) is when acceptable operation can be achieved using constant setpoints (c_s) for the controlled variables c, without the need to re-optimize when disturbances occur. Control the right variable! This is important irrespective of what control strategy you use (decentralized control, LQG, MPC, H-infinity, ...). Look for "self-optimizing" variables with a large scaled gain. Tomorrow (part 2): effective implementation of optimal operation using off-line computations; dynamic and other generalizations; links to optimal control and MPC.

