1 Self-optimizing control: applications to the process industry, biology and marathon running. Sigurd Skogestad, Department of Chemical Engineering, NTNU, Trondheim. HC, 31 January 2012
2 Self-optimizing control From key performance indicators to control of biological systems Sigurd Skogestad Department of Chemical Engineering Norwegian University of Science and Technology (NTNU) Trondheim PSE 2003, Kunming, 05-10 Jan. 2004
3 Outline: Optimal operation. Implementation of optimal operation: self-optimizing control. What should we control? Applications – marathon runner – KPIs – biology – ... Optimal measurement combination. Optimal blending example. Focus: not optimization (optimal decision making), but rather how to implement the decision in an uncertain world.
4 Optimal operation of systems Theory: –Model of overall system –Estimate present state –Optimize all degrees of freedom Problems: –Model not available and optimization complex –Not robust (difficult to handle uncertainty) Practice –Hierarchical system –Each level: Follow order (”setpoints”) given from level above –Goal: Self-optimizing
5 Process operation: hierarchical structure (RTO / MPC / PID layers)
6 Engineering systems. Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers – large-scale chemical plant (refinery) – commercial aircraft. 1000s of loops. Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward. Same in biological systems.
7 What should we control? y_1 = c? (economics); y_2 = ? (stabilization)
8 Self-optimizing Control. Self-optimizing control is when acceptable operation can be achieved using constant setpoints (c_s) for the controlled variables c (without re-optimizing when disturbances occur): c = c_s.
9 Optimal operation (economics). Define a scalar cost function J(u_0, d), where u_0 are the degrees of freedom and d the disturbances. Optimal operation for given d: minimize J(u_0, d) over u_0, subject to f(u_0, d) = 0 and g(u_0, d) ≤ 0.
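As a toy numerical sketch of this problem formulation (made-up scalar cost and constraint, not from the slides):

```python
# Toy example of "optimal operation for given d" (made-up numbers):
# minimize J(u, d) = (u - d)^2 subject to g(u) = u - u_max <= 0.
# The unconstrained optimum is u = d, so the constraint becomes
# active whenever the disturbance d exceeds u_max.

def J(u, d):
    return (u - d) ** 2

def u_opt(d, u_max=1.0):
    return min(d, u_max)   # analytic solution: clip at the constraint

print(u_opt(0.5), J(u_opt(0.5), 0.5))  # constraint inactive: u = 0.5, J = 0
print(u_opt(2.0), J(u_opt(2.0), 2.0))  # constraint active:   u = 1.0, J = 1
```

For d > u_max the optimum sits on the constraint, which is the "active constraint" situation discussed on the next slide.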
10 Implementation of optimal operation. Idea: replace optimization by setpoint control. The optimal solution is usually at constraints; that is, most of the degrees of freedom u_0 are used to satisfy "active constraints", g(u_0, d) = 0. CONTROL ACTIVE CONSTRAINTS! Implementation of active constraints is usually simple. WHAT MORE SHOULD WE CONTROL? Find variables c for the remaining unconstrained degrees of freedom u.
11 Unconstrained variables: the cost J, plotted against the selected controlled variable c (remaining unconstrained), has its minimum J_opt at c_opt.
12 Implementation of unconstrained variables is not trivial: how do we deal with uncertainty? 1. Disturbances d. 2. Implementation error n. Nominal optimization gives the setpoint c_s = c_opt(d*); the implemented value is c = c_s + n.
13 Problem no. 1: Disturbance d. When d ≠ d*, holding c constant at the nominal optimum c_opt(d*) gives a loss (J above J_opt). ⇒ Want c_opt independent of d.
14 Problem no. 2: Implementation error n. With c = c_s + n and c_s = c_opt(d*), the error n moves operation away from the optimum. ⇒ Want n small and a "flat" optimum.
15 Feedback implementation: the optimizer provides the setpoint c_s, and a controller adjusts u so that the measured value c_m = c + n is kept at c_s. ⇒ Want c sensitive to u ("large gain").
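A minimal numerical illustration of this "large gain" requirement (toy cost, made-up numbers):

```python
# Toy illustration (not from the slides): cost J(u) = u^2 with optimum
# u = 0, and a CV c = gain * u held at its optimal setpoint c_s = 0.
# An implementation error n on c gives the realized input u = n / gain,
# so the loss (n / gain)^2 shrinks as the gain from u to c grows.

def loss(gain, n=0.1):
    u = n / gain        # input implied by holding c + n = c_s = 0
    return u ** 2       # loss = J(u) - J_opt

print(loss(10.0))  # high-gain CV: loss about 1e-4
print(loss(0.1))   # low-gain CV:  loss 1.0
```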
16 Which variable c to control? Define optimal operation: minimize the cost function J. For each candidate variable c: with constant setpoints c_s, compute the loss L for the expected disturbances d and implementation errors n. Select the variable c with the smallest loss.
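The loss evaluation above can be sketched numerically on a toy problem (all numbers made up, not from the slides):

```python
# Toy loss screening: J(u, d) = (u - d)^2, so u_opt(d) = d and J_opt = 0.
# Candidate CVs, each held at the setpoint optimal for the nominal
# disturbance d* = 0:
#   c1 = u      (keep the input itself constant)
#   c2 = u - d  (a measurement whose optimal value, 0, is independent of d)

def loss_c1(d, d_nom=0.0):
    u_fb = d_nom            # holding c1 = u fixes u at u_opt(d*)
    return (u_fb - d) ** 2

def loss_c2(d):
    u_fb = d                # holding c2 = u - d at 0 forces u to track d
    return (u_fb - d) ** 2  # always zero: a self-optimizing CV

worst_c1 = max(loss_c1(d) for d in (-1.0, 0.0, 1.0))
worst_c2 = max(loss_c2(d) for d in (-1.0, 0.0, 1.0))
print(worst_c1, worst_c2)   # select the candidate with the smallest loss
```

Here c2 would be selected: its optimal value is independent of d, exactly the requirement on the next slides.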
17 Constant setpoint policy: loss for disturbances ("problem 1"). Acceptable loss ⇒ self-optimizing control.
18 Good candidate controlled variables c (for self-optimizing control). Requirements: The optimal value of c should be insensitive to disturbances (avoids problem 1). c should be easy to measure and control, and the value of c should be sensitive to changes in the degrees of freedom; equivalently, J as a function of c should be flat (avoids problem 2). For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Singular value rule (Skogestad and Postlethwaite, 1996): look for variables that maximize the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c.
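The singular value rule can be sketched with NumPy; the gain matrices below are made-up examples, not from any plant model:

```python
import numpy as np

# Singular value rule sketch: among candidate CV sets, prefer the one
# whose (appropriately scaled) steady-state gain matrix G from u to c
# has the largest minimum singular value.

G_a = np.array([[10.0, 0.1],
                [ 0.2, 8.0]])   # both CVs sensitive and nearly independent
G_b = np.array([[10.0, 9.9],
                [10.0, 9.8]])   # CVs nearly dependent: poor choice

sv_min = {name: np.linalg.svd(G, compute_uv=False).min()
          for name, G in (("a", G_a), ("b", G_b))}
print(sv_min)                   # pick the set with the largest value
```

Candidate set "a" wins: its smallest gain direction is still large, so implementation errors in c translate into small input errors.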
19 Examples of self-optimizing control: marathon runner, central bank, cake baking, business systems (KPIs), investment portfolio, biology, chemical process plants (optimal blending of gasoline). Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control).
20 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)?
21 Self-optimizing Control – Marathon. Optimal operation of marathon runner, J = T. Any self-optimizing variable c (to control at constant setpoint)? c_1 = distance to leader of race, c_2 = speed, c_3 = heart rate, c_4 = level of lactate in muscles.
22 Further examples. Central bank: J = welfare, c = inflation rate (2.5%). Cake baking: J = nice taste, c = temperature (200 °C). Business: J = profit, c = "key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending; optimal values obtained by "benchmarking". Investment (portfolio management): J = profit, c = fraction of investment in shares (50%). Biological systems: "self-optimizing" controlled variables c have been found by natural selection; need to do "reverse engineering": find the controlled variables used in nature, and from this identify what overall objective J the biological system has been attempting to optimize.
23 Looking for "magic" variables to keep at constant setpoints: how can we find them? Consider the available measurements y and evaluate the loss when they are kept constant ("brute force"). More generally: find the optimal linear combination c = Hy (matrix H).
24 Optimal measurement combination (Alstad). Basis: want the optimal value of c to be independent of disturbances ⇒ Δc_opt = 0 · Δd. Find the optimal solution as a function of d: u_opt(d), y_opt(d). Linearize this relationship: Δy_opt = F Δd, where F is the sensitivity matrix. Want HF = 0; achieving this for all values of d is always possible if the number of (independent) measurements is at least the number of inputs plus disturbances.
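A minimal sketch of this nullspace idea, assuming a made-up sensitivity matrix F:

```python
import numpy as np

# Nullspace-method sketch: choose H with H F = 0, where F is the
# optimal sensitivity matrix (Delta y_opt = F Delta d).  Then c = H y
# is, to first order, insensitive to the disturbances d.
# Here ny = 3 measurements and nd = 1 disturbance (made-up numbers).

F = np.array([[1.0],
              [2.0],
              [3.0]])

# Rows of H span the left null space of F; take it from the SVD:
# the left singular vectors beyond rank(F) are orthogonal to F.
U, s, Vt = np.linalg.svd(F)
H = U[:, 1:].T                  # (ny - nd) x ny = 2 x 3

print(np.allclose(H @ F, 0.0))  # True: c = H y is invariant to d
```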
25 Dealing with complexity. Main simplification: hierarchical decomposition of process control. RTO: min J (economics); MV = y_1s. MPC: follow path (+ look after other variables); CV = y_1 (+ u), MV = y_2s. PID: stabilize + avoid drift; CV = y_2, MV = u (valves). The controlled variables (CVs) interconnect the layers.
26 Outline: Control structure design (plantwide control). A procedure for control structure design. I Top Down. Step 1: Define operational objective (cost) and constraints. Step 2: Identify degrees of freedom and optimize for disturbances. Step 3: What to control? (primary CVs) (self-optimizing control). Step 4: Where to set the production rate? (Inventory control.) II Bottom Up. Step 5: Regulatory control: what more to control (secondary CVs)? Step 6: Supervisory control. Step 7: Real-time optimization. Case studies.
27 Main message 1. Control for economics (Top-down steady-state arguments) –Primary controlled variables c = y 1 : Control active constraints For remaining unconstrained degrees of freedom: Look for “self-optimizing” variables 2. Control for stabilization (Bottom-up; regulatory PID control) –Secondary controlled variables y 2 (“inner cascade loops”) Control variables which otherwise may “drift” Both cases: Control “sensitive” variables (with a large gain)!
28 Process control: “Plantwide control” = “Control structure design for complete chemical plant” Large systems Each plant usually different – modeling expensive Slow processes – no problem with computation time Structural issues important –What to control? Extra measurements, Pairing of loops Previous work on plantwide control: Page Buckley (1964) - Chapter on “Overall process control” (still industrial practice) Greg Shinskey (1967) – process control systems Alan Foss (1973) - control system structure Bill Luyben et al. (1975- ) – case studies ; “snowball effect” George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes Ruel Shinnar (1981- ) - “dominant variables” Jim Downs (1991) - Tennessee Eastman challenge problem Larsson and Skogestad (2000): Review of plantwide control
29 Control structure selection issues are identified as important also in other industries. Professor Gary Balas (Minnesota) at ECC’03 about flight control at Boeing: The most important control issue has always been to select the right controlled variables --- no systematic tools used!
30 Main objectives of the control system: 1. Stabilization. 2. Implementation of acceptable (near-optimal) operation. ARE THESE OBJECTIVES CONFLICTING? Usually NOT: they act on different time scales (stabilization on the fast time scale), and stabilization doesn't "use up" any degrees of freedom, since the reference value (setpoint) remains available for the layer above. But it does "use up" part of the time window (frequency range).
31 Dealing with complexity. Main simplification: hierarchical decomposition of process control. RTO: min J (economics); MV = y_1s. MPC: follow path (+ look after other variables); CV = y_1 (+ u), MV = y_2s. PID: stabilize + avoid drift; CV = y_2, MV = u (valves). The controlled variables (CVs) interconnect the layers.
32 Example: Bicycle riding (hierarchical decomposition; note that design starts from the bottom). Regulatory control: first need to learn to stabilize the bicycle; CV = y_2 = tilt of bike, MV = body position. Supervisory control: then need to follow the road; CV = y_1 = distance from right-hand side, MV = y_2s. Usually a constant setpoint policy is OK, e.g. y_1s = 0.5 m. Optimization: which road should you follow? Temporary (discrete) changes in y_1s.
33 Regulatory control (seconds). Purpose: "stabilize" the plant by controlling selected "secondary" variables (y_2) such that the plant does not drift too far away from its desired operation. Use simple single-loop PI(D) controllers. Note: the regulatory layer should be independent of changes in the overall objectives.
34 Supervisory control (minutes). Purpose: keep the primary controlled variables (c = y_1) at their desired values, using as degrees of freedom the setpoints y_2s for the regulatory layer. Process industry – before: many different "advanced" controllers, including feedforward, decouplers, overrides, cascades, selectors, Smith predictors, etc. Trend: model predictive control (MPC) used as a unifying tool. Structural issue: what primary variables c = y_1 should we control?
35 Local optimization (hour) Purpose: Minimize cost function J and: –Identify active constraints –Recompute optimal setpoints y 1s for the controlled variables Status: Done manually by clever operators and engineers Trend: Real-time optimization (RTO) based on detailed nonlinear steady-state model Issues: –Optimization not reliable. –Need nonlinear steady-state model –Modelling is time-consuming and expensive
36 Summary: The three layers. Optimization layer (RTO; steady-state nonlinear model): identifies active constraints and computes optimal setpoints for the primary controlled variables (y_1). Supervisory control (MPC; linear model with constraints): follows the setpoints for y_1 (usually constant) by adjusting the setpoints for the secondary variables (MV = y_2s), and looks after other variables (e.g., avoids saturation for MVs used in the regulatory layer). Regulatory control (PID): stabilizes the plant and avoids drift, in addition to following the setpoints for y_2; MV = valves (u). Problem definition and overall control objectives (y_1, y_2) start from the top; design starts from the bottom. A good example is bicycle riding. Regulatory control: first you need to learn how to stabilize the bicycle (y_2). Supervisory control: then you need to follow the road; usually a constant setpoint policy is OK, for example, stay y_1s = 0.5 m from the right-hand side of the road (in this case the "magic" self-optimizing variable is y_1 = distance to right-hand side of road). Optimization: which road (route) should you follow?
37 Control structure design procedure. I Top Down. Step 1: Define operational objectives (optimal operation): cost function J (to be minimized) and operational constraints. Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances; identify regions of active constraints. Step 3: Select primary controlled variables c = y_1 (CVs). Step 4: Where to set the production rate? (Inventory control.) II Bottom Up. Step 5: Regulatory / stabilizing control (PID layer): what more to control (y_2; local CVs)? Pairing of inputs and outputs. Step 6: Supervisory control (MPC layer). Step 7: Real-time optimization (do we need it?). Understanding and using this procedure is the most important part of this course!
38 Step 3. What should we control (c = Hy)? (Primary controlled variables y_1 = c.) 1. CONTROL ACTIVE CONSTRAINTS, c = c_constraint. 2. REMAINING UNCONSTRAINED, c = ?
39 Step 1. Define optimal operation (economics). What are we going to use our degrees of freedom u (MVs) for? Define a scalar cost function J(u, x, d), where u are the degrees of freedom (usually steady-state), d the disturbances and x the states (internal variables). Typical cost function: J = cost feed + cost energy − value of products. Optimize operation with respect to u for given d (usually steady-state): minimize J(u, x, d) over u, subject to the model equations f(u, x, d) = 0 and the operational constraints g(u, x, d) ≤ 0.
40 Optimal operation of a distillation column. Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V. Cost to be minimized (economics): J = −P, where P = p_D D + p_B B − p_F F − p_V V (value of products minus cost of feed and cost of energy for heating and cooling). Constraints: purity D, for example x_D,impurity ≤ max; purity B, for example x_B,impurity ≤ max; flow constraints, min ≤ D, B, L, etc. ≤ max; column capacity (flooding): V ≤ V_max, etc.; pressure: 1) p given, 2) p free: p_min ≤ p ≤ p_max; feed: 1) F given, 2) F free: F ≤ F_max. Optimal operation: minimize J with respect to the steady-state DOFs.
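The profit expression above can be evaluated directly; the prices and flows below are made-up illustration values, not data from the slides:

```python
# Illustrative evaluation of J = -P for the distillation example, with
# P = p_D * D + p_B * B - p_F * F - p_V * V.  All prices (p_D, p_B,
# p_F, p_V) and flows are hypothetical numbers.

def cost_J(D, B, F, V, pD=2.0, pB=1.5, pF=1.0, pV=0.1):
    P = pD * D + pB * B - pF * F - pV * V   # profit: products - feed - energy
    return -P                                # J = -P is minimized

print(cost_J(D=50.0, B=50.0, F=100.0, V=120.0))  # more negative J = higher profit
```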
41 Optimal operation: two main cases (modes) depending on market conditions. 1. Given feed: the amount of products is then usually indirectly given and J = cost energy. Optimal operation is then usually unconstrained: "maximize efficiency (energy)". Control: operate at the optimal trade-off (not obvious what to control to achieve this). 2. Feed free: products are usually much more valuable than the feed, and energy costs are small. Optimal operation is then usually constrained: "maximize production". Control: operate at the bottleneck ("obvious what to control").
42 Solution I ("obvious"): Optimal feedforward. Problem: UNREALISTIC! 1. Lack of measurements of d. 2. Sensitive to model error.
43 Solution II ("obvious"): Optimizing control. Estimate d from the measurements y and recompute u_opt(d). Problem: COMPLICATED! Requires a detailed model and a description of the uncertainty.
44 Solution III (in practice): FEEDBACK with hierarchical decomposition. When a disturbance d occurs, the degrees of freedom (u) are updated indirectly to keep the CVs at their setpoints. The CVs link the optimization and control layers.
45 How does self-optimizing control (solution III) work? When disturbances d occur, the controlled variable c deviates from its setpoint c_s. The feedback controller changes the degree of freedom u to u_FB(d) to keep c at c_s. Near-optimal operation / acceptable loss (self-optimizing control) is achieved if u_FB(d) ≈ u_opt(d), or more generally J(u_FB(d)) ≈ J(u_opt(d)). Of course, the variation of u_FB(d) is different for different CVs c. We need to look for variables for which the loss = J(u_FB(d)) − J(u_opt(d)) is small.
46 Remarks on "self-optimizing control". 1. Old idea (Morari et al., 1980): "We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions." 2. "Self-optimizing control" = acceptable steady-state behavior (loss) with constant CVs. This is similar to "self-regulation" = acceptable dynamic behavior with constant MVs. 3. The ideal self-optimizing variable c is the gradient (c = ∂J/∂u = J_u): keep the gradient at zero for all disturbances (c = J_u = 0). Problem: no measurement of the gradient.
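Remark 3 can be sketched numerically: if J_u were measurable, simple integral feedback on c = J_u would track the optimum for any disturbance (toy cost, not from the slides):

```python
# Toy cost J(u, d) = (u - d)^2 + u^2, so the gradient is J_u = 4u - 2d
# and the optimum is u_opt(d) = d / 2.  Driving c = J_u to zero by
# feedback recovers the optimum without re-optimizing.

def grad_J(u, d):
    return 4.0 * u - 2.0 * d

def drive_gradient_to_zero(d, u=0.0, gain=0.1, steps=200):
    for _ in range(steps):
        u -= gain * grad_J(u, d)   # integral action on c = J_u
    return u

print(drive_gradient_to_zero(1.0))  # converges to u_opt = d / 2 = 0.5
```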
47 Step 3. What should we control (c)? Simple examples.
48 Optimal operation – Runner. Optimal operation of a runner: cost to be minimized J = T; one degree of freedom (u = power). What should we control?
49 Self-optimizing control: Sprinter (100 m). 1. Optimal operation of a sprinter, J = T. Active constraint control: maximum speed ("no thinking required").
50 Self-optimizing control: Marathon (42 km). 2. Optimal operation of a marathon runner, J = T.
51 Solution 1 for the marathon: optimizing control. Even getting a reasonable model requires more than 10 PhDs, and the model has to be fitted to each individual. Clearly impractical!
52 Solution 2 for the marathon: feedback (self-optimizing control). What should we control?
53 Self-optimizing control: Marathon (42 km). Optimal operation of a marathon runner, J = T. Any self-optimizing variable c (to control at constant setpoint)? c_1 = distance to leader of race, c_2 = speed, c_3 = heart rate, c_4 = level of lactate in muscles.
54 Conclusion for the marathon runner: select one measurement, c = heart rate. Simple and robust implementation. Disturbances are indirectly handled by keeping a constant heart rate. May have infrequent adjustment of the setpoint (heart rate).
55 Example: Cake baking. Objective: nice-tasting cake with good texture. Degrees of freedom: u_1 = heat input, u_2 = final time. Disturbances: d_1 = oven specifications, d_2 = oven door opening, d_3 = ambient temperature, d_4 = initial temperature. Measurements: y_1 = oven temperature, y_2 = cake temperature, y_3 = cake color.
56 Further examples of self-optimizing control: marathon runner, central bank, cake baking, business systems (KPIs), investment portfolio, biology, chemical process plants (optimal blending of gasoline). Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control).
57 More on further examples. Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%). Cake baking: J = nice taste, u = heat input, c = temperature (200 °C). Business: J = profit, c = "key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending; optimal values obtained by "benchmarking". Investment (portfolio management): J = profit, c = fraction of investment in shares (50%). Biological systems: "self-optimizing" controlled variables c have been found by natural selection; need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.