1 Feedback: The simple and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. University of Alberta, Edmonton, 29 April 2004.
2 Outline About myself I. Why feedback (and not feedforward) ? II. Self-optimizing feedback control: What should we control? III. Stabilizing feedback control: Anti-slug control Conclusion
3 Trondheim, Norway
4 [Map of Scandinavia showing Trondheim and Oslo in Norway, with Sweden, Denmark, Germany, the UK, the North Sea and the Arctic circle.]
5 NTNU, Trondheim
6 Sigurd Skogestad. Born in : Siv.ing. degree (MS) in Chemical Engineering from NTNU (NTH). : Process modeling group at the Norsk Hydro Research Center in Porsgrunn. : Ph.D. student in Chemical Engineering at Caltech, Pasadena, USA; thesis on "Robust distillation control", supervisor Manfred Morari. : Professor in Chemical Engineering at NTNU. Since 1994: Head of the process systems engineering center in Trondheim (PROST). Since 1999: Head of the Department of Chemical Engineering. 1996: book "Multivariable Feedback Control" (Wiley). 2000, 2003: book "Prosessteknikk" (Norwegian). Group of about 10 Ph.D. students in the process control area.
7 Research: Develop simple yet rigorous methods to solve problems of engineering significance. Use of feedback as a tool to 1. reduce uncertainty (including robust control), 2. change the system dynamics (including stabilization; anti-slug control), and 3. generally make the system more well-behaved (including self-optimizing control). Other topics: limitations on performance in linear systems ("controllability"), control structure design and plantwide control, interactions between process design and control, distillation column design, control and dynamics, and natural gas processes.
8 Outline About myself I. Why feedback (and not feedforward) ? II. Self-optimizing feedback control: What should we control? III. Stabilizing feedback control: Anti-slug control Conclusion
9 Example: plant (uncontrolled system). [Block diagram: input u enters through G, disturbance d through G_d, summed to the output y; open-loop step response with gain k = 10.]
10 [Block diagram: u through G, d through G_d, output y.]
11 Model-based control = feedforward (FF) control. "Perfect" feedforward control: u = -G^{-1} G_d d. Our case: G = G_d, so use u = -d.
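The following is a minimal simulation sketch (not from the talk) of the feedforward idea, assuming for illustration a first-order plant with G = G_d = k/(tau*s + 1); with a perfect model the law u = -d cancels the disturbance exactly.

```python
import numpy as np
from scipy.signal import lti, lsim

# Illustrative first-order plant; the gain and time constant are assumptions.
k, tau = 10.0, 25.0
G  = lti([k], [tau, 1])      # effect of input u on y
Gd = lti([k], [tau, 1])      # effect of disturbance d on y (same as G here)

t = np.linspace(0, 100, 1001)
d = np.ones_like(t)          # unit step disturbance

# "Perfect" feedforward: u = -G^{-1} Gd d, which reduces to u = -d when G = Gd.
u_ff = -d

_, y_d, _ = lsim(Gd, d, t)   # open-loop response to the disturbance
_, y_u, _ = lsim(G, u_ff, t) # response to the feedforward action
y = y_d + y_u                # ideally zero when the model is perfect

print("max |y| open loop   :", np.abs(y_d).max())
print("max |y| feedforward :", np.abs(y).max())
```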
12 Feedforward control: nominal case (perfect model). [Simulation plot.]
13 Feedforward: sensitive to gain error. [Simulation plot.]
14 Feedforward: sensitive to time constant error. [Simulation plot.]
15 Feedforward: moderately sensitive to delay (in G or G_d). [Simulation plot.]
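To make the model-error sensitivity concrete, here is the same sketch with an assumed 50% gain error: the feedforward law is still u = -d from the nominal model, but the real plant gain differs, so the cancellation is only partial. The numbers are illustrative, not from the talk.

```python
import numpy as np
from scipy.signal import lti, lsim

tau = 25.0
# Model used for design: G = Gd with gain 10, so the feedforward law is u = -d.
# Real plant: the gain of G is 50% higher than assumed (illustrative numbers).
G_real = lti([15.0], [tau, 1])
Gd     = lti([10.0], [tau, 1])

t = np.linspace(0, 100, 1001)
d = np.ones_like(t)
u_ff = -d                     # designed for the nominal model

_, y_d, _ = lsim(Gd, d, t)
_, y_u, _ = lsim(G_real, u_ff, t)
y = y_d + y_u                 # no longer cancels: steady state 10 - 15 = -5

print("offset with feedforward at t = 100:", y[-1])
```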
16 Measurement-based correction = feedback (FB) control. [Block diagram: controller C compares the setpoint y_s with the measured output y, the error e drives the input u to the plant G; the disturbance d enters through G_d.]
17 Feedback PI control: nominal case. Feedback generates the inverse! [Plots of the input u and the resulting output y.]
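A minimal discrete-time PI feedback sketch for the same kind of plant (the tuning below is an assumption, not the one used in the talk); the point is that the controller rejects the disturbance using only the measured output, with no disturbance model at all.

```python
import numpy as np

# Illustrative first-order plant, tau*dy/dt = -y + k*(u + d), with G = Gd.
k, tau = 10.0, 25.0
dt, t_end = 0.05, 100.0

# PI controller; the tuning constants are assumptions for this sketch.
Kc, tauI = 0.4, 25.0
y, integral, ys = 0.0, 0.0, 0.0        # output, integrator state, setpoint

for _ in range(int(t_end / dt)):
    d = 1.0                            # step disturbance
    e = ys - y                         # control error from the measurement
    integral += e * dt
    u = Kc * (e + integral / tauI)     # PI law
    y += dt * (-y + k * (u + d)) / tau # Euler integration of the plant

print("output after %.0f s: %.4f" % (t_end, y))  # driven back toward the setpoint
```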
18 Feedback PI control: insensitive to gain error. [Simulation plot.]
19 Feedback: insensitive to time constant error. [Simulation plot.]
20 Feedback control: sensitive to time delay. [Simulation plot.]
21 Comment: A time delay error in the disturbance model (G_d) has no effect (!) with feedback (except a time shift). With feedforward it has a similar effect to a time delay error in G.
22 Conclusion: Why feedback (and not feedforward control)? Simple: high-gain feedback! It counteracts unmeasured disturbances, reduces the effect of changes/uncertainty (robustness), changes the system dynamics (including stabilization), and linearizes the behavior. No explicit model is required. MAIN PROBLEM: potential instability (may occur suddenly) with time delay / RHP-zero.
23 Outline About myself Why feedback (and not feedforward) ? II. Self-optimizing feedback control: What should we control? Stabilizing feedback control: Anti-slug control Conclusion
24 Optimal operation (economics). Define a scalar cost function J(u_0, d), where u_0 are the degrees of freedom and d the disturbances. Optimal operation for a given d: min over u_0 of J(u_0, x, d), subject to the model f(u_0, x, d) = 0 and the operating constraints g(u_0, x, d) <= 0.
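A sketch of how this nominal optimization could be set up numerically; the cost and the constraint functions below are made-up placeholders, only to show the structure min over u_0 of J subject to f = 0 and g <= 0.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

d = 1.0                                     # nominal disturbance d_0

def J(u0):                                  # placeholder scalar cost J(u0, d)
    return (u0[0] - d)**2 + 0.1 * u0[1]**2

# Placeholder constraints: one model equality f = 0 and one inequality g <= 0.
f = NonlinearConstraint(lambda u0: u0[0] + u0[1] - 2.0, 0.0, 0.0)   # f(u0, d) = 0
g = NonlinearConstraint(lambda u0: u0[1] - 3.0, -np.inf, 0.0)       # g(u0, d) <= 0

res = minimize(J, x0=[0.0, 0.0], constraints=[f, g])
print("u_opt =", res.x, " J_opt =", res.fun)
```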
25 Implementation of optimal operation. The optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy the "active constraints", g(u_0, d) = 0. CONTROL ACTIVE CONSTRAINTS! Implementation of the active constraints is usually simple. WHAT MORE SHOULD WE CONTROL? We here concentrate on the remaining unconstrained degrees of freedom u.
26 Optimal operation. [Plot: cost J versus the remaining unconstrained independent variable u, with minimum J_opt at u = u_opt.]
27 Implementation: how do we deal with uncertainty? 1. Disturbances d. 2. Implementation error n. The nominal optimization gives u_s = u_opt(d_0); what is actually implemented is u = u_s + n. [Plot: cost J versus u, with J_opt(d).]
28 Problem no. 1: disturbance d. [Plot: cost J versus u for d_0 and for d different from d_0; keeping u constant at u_opt(d_0) gives a loss relative to J_opt when d changes.]
29 Problem no. 2: implementation error n. [Plot: cost J versus u around u_s = u_opt(d_0); the implemented u = u_s + n gives a loss due to the implementation error.]
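The two problems can be made concrete with a toy quadratic cost (an assumption for illustration): keep u at the nominal optimum u_s = u_opt(d_0), add an implementation error n, and compare with the re-optimized cost.

```python
def J(u, d):                       # toy cost whose optimum moves with the disturbance
    return (u - d)**2 + 1.0

def u_opt(d):                      # analytic optimum of the toy cost
    return d

d0, d, n = 1.0, 1.5, 0.2           # nominal disturbance, actual disturbance, impl. error
u_s = u_opt(d0)                    # setpoint from the nominal optimization
u = u_s + n                        # what is actually implemented

loss = J(u, d) - J(u_opt(d), d)    # Loss = J(u, d) - J_opt(d)
print("loss from disturbance change + implementation error:", loss)  # (1.2 - 1.5)^2 = 0.09
```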
30 "Obvious" solution: optimizing control = "feedforward": estimate d and compute a new u_opt(d). Problem: complicated and sensitive to uncertainty.
31 Alternative: Feedback implementation Issue: What should we control?
32 Engineering systems. Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers, e.g. a commercial aircraft or a large-scale chemical plant (refinery) with 1000's of loops. Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward. The same holds in biological systems.
33 Process control hierarchy. [Diagram: RTO on top, MPC in the middle, PID at the bottom; which variables y_1 = c should be controlled for the economics?]
34 Self-optimizing control. Self-optimizing control is when an acceptable loss can be achieved using constant setpoints (c_s) for the controlled variables c (without re-optimizing when disturbances occur). Define the loss: L = J(u, d) - J_opt(d).
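One simple way to screen a candidate c is brute force (a sketch with a made-up cost, not from the talk): for each disturbance, back-calculate the u that keeps c at its nominal setpoint and compare the resulting cost with the re-optimized cost. The ratio candidate below deliberately mirrors the L/F example later in the talk.

```python
def J(u, d):                            # toy cost with optimum at u = d (illustrative)
    return (u - d)**2

d0 = 1.0
u_opt0 = d0                             # nominal optimum

# Candidate controlled variables, expressed as "the u implied by holding c constant".
candidates = {
    "c = u   (constant input)": lambda d: u_opt0,   # keep u at its nominal value
    "c = u/d (constant ratio)": lambda d: 1.0 * d,  # setpoint (u/d)_s = 1 gives u = d
}

disturbances = [0.8, 1.0, 1.2, 1.5]
for name, u_of_d in candidates.items():
    worst = max(J(u_of_d(d), d) - J(d, d) for d in disturbances)
    print(f"{name:26s} worst-case loss = {worst:.3f}")
```

With this toy cost the constant-ratio policy is self-optimizing (zero loss), while the constant-input policy is not.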
35 Constant setpoint policy: Effect of disturbances (“problem 1”)
36 Effect of implementation error ("problem 2"). [Plots comparing a BAD and a Good choice of controlled variable.]
37 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)?
38 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)? c 1 = distance to leader of race c 2 = speed c 3 = heart rate c 4 = level of lactate in muscles
39 Further examples. Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%). Cake baking: J = nice taste, u = heat input, c = temperature (200 C). Business: J = profit, c = "key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending; optimal values obtained by "benchmarking". Investment (portfolio management): J = profit, c = fraction of investment in shares (50%). Biological systems: "self-optimizing" controlled variables c have been found by natural selection; we need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.
40 Good candidate controlled variables c (for self-optimizing control). Requirements: 1. The optimal value of c should be insensitive to disturbances (avoids problem 1). 2. c should be easy to measure and control (avoids problem 2). 3. The value of c should be sensitive to changes in the degrees of freedom (equivalently, J as a function of c should be flat). 4. For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Singular value rule (Skogestad and Postlethwaite, 1996): look for variables that maximize the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c.
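A sketch of the singular value rule in code, with made-up scaled gain matrices (not the matrices from any of the case studies): compute the minimum singular value for each candidate set of controlled variables and prefer the largest.

```python
import numpy as np

# Scaled steady-state gain matrices from u to two alternative candidate sets c
# (the numbers are illustrative placeholders).
G_candidate_A = np.array([[0.9, 0.1],
                          [0.2, 1.1]])
G_candidate_B = np.array([[1.0, 0.95],
                          [1.0, 1.05]])   # nearly dependent rows: a poor choice

for name, G in [("A", G_candidate_A), ("B", G_candidate_B)]:
    sigma_min = np.linalg.svd(G, compute_uv=False).min()
    print(f"candidate set {name}: minimum singular value = {sigma_min:.3f}")
```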
41 Self-optimizing control: recycle process. J = V (minimize energy). N_m = 5; 3 economic (steady-state) DOFs given the feedrate F_0 and the column pressure. Constraints: M_r < M_r,max and x_B > x_B,min = 0.98. (DOF = degree of freedom.)
42 Recycle process: control the active constraints M_r = M_r,max and x_B = x_B,min. One unconstrained DOF left for optimization (remaining DOF: L): what more should we control?
43 Recycle process: loss with constant setpoint c_s. Large loss with c = F (Luyben rule); negligible loss with c = L/F or c = temperature.
44 Recycle process: proposed control structure for the case J = V (minimize energy). Control the active constraints M_r = M_r,max and x_B = x_B,min. Self-optimizing loop: adjust L such that L/F is constant.
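The self-optimizing loop can be realized as plain ratio control; a minimal sketch (the setpoint value is an assumption, and the actual study may implement the loop differently): measure F, multiply by the constant ratio setpoint, and use the result as the setpoint for L.

```python
def reflux_setpoint(F_measured, LF_ratio_setpoint):
    """Ratio control: keep L/F constant by setting L_s = (L/F)_s * F."""
    return LF_ratio_setpoint * F_measured

LF_s = 2.5                      # illustrative constant ratio setpoint (assumption)
for F in (100.0, 120.0):        # e.g. the feedrate increases by 20%
    print(f"F = {F:6.1f}  ->  L setpoint = {reflux_setpoint(F, LF_s):6.1f}")
```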
45 Outline About myself Why feedback (and not feedforward) ? Self-optimizing feedback control: What should we control? III. Stabilizing feedback control: Anti-slug control Conclusion
46 Application of stabilizing feedback control: anti-slug control. Two-phase pipe flow (liquid and vapor) with slug (liquid) buildup.
47 Slug cycle (stable limit cycle) Experiments performed by the Multiphase Laboratory, NTNU
49 Experimental mini-loop
50 Experimental mini-loop: valve opening (z) = 100%. [Schematic with pressures p_1, p_2 and valve opening z.]
51 Experimental mini-loop: valve opening (z) = 25%. [Schematic with pressures p_1, p_2 and valve opening z.]
52 Experimental mini-loop: valve opening (z) = 15%. [Schematic with pressures p_1, p_2 and valve opening z.]
53 Experimental mini-loop: bifurcation diagram. [Diagram versus valve opening z (%), with "No slug" and "Slugging" regions marked.]
54 Avoid slugging? Design changes Feedforward control? Feedback control?
55 Avoid slugging: 1. Close the valve (but this increases the pressure). No slugging when the valve is closed. (Design change.) [Bifurcation diagram versus valve opening z %.]
56 Avoid slugging: 2. Other design changes to avoid slugging. (Design change.) [Schematic with p_1, p_2 and valve z.]
57 Minimize the effect of slugging: 3. Build a large slug-catcher. Most common strategy in practice. (Design change.) [Schematic with p_1, p_2 and valve z.]
58 Avoid slugging: 4. Feedback control? Predicted smooth flow: desirable but open-loop unstable. Comparison with a simple 3-state model. [Bifurcation diagram versus valve opening z %.]
59 Avoid slugging: 4. "Active" feedback control. Simple PI controller. [Diagram: pressure transmitter (PT) measures p_1; pressure controller (PC) compares it with a reference and adjusts the valve opening z.]
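A heavily simplified sketch of the stabilizing idea (this is NOT the 3-state slug model from the talk): an illustrative open-loop-unstable first-order model of the pressure is stabilized by a PI controller acting on the valve opening.

```python
# Illustrative open-loop-UNSTABLE first-order stand-in for the pressure dynamics:
#   dp/dt = a*p + b*z,  a > 0, so the smooth-flow regime is unstable without feedback.
a, b = 0.1, 1.0
dt, t_end = 0.01, 100.0

Kc, tauI = 1.0, 10.0               # PI tuning chosen for illustration only
p, integral, p_ref = 0.5, 0.0, 0.0 # initial pressure deviation, integrator, reference

for _ in range(int(t_end / dt)):
    e = p_ref - p
    integral += e * dt
    z = Kc * (e + integral / tauI) # valve-opening correction from the PI controller
    p += dt * (a * p + b * z)      # Euler integration of the stand-in model

print("pressure deviation after %.0f s: %.5f" % (t_end, p))  # driven toward zero
```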
60 Anti-slug control: mini-loop experiments. [Traces of p_1 [bar] and z [%] with the controller OFF and ON.]
61 Anti-slug control: full-scale offshore experiments at the Hod-Vallhall field (Havre, 1999).
62 Analysis: poles and zeros from the valve opening z to candidate measurements y: P_1 [bar], DP [bar], ρ_T [kg/m³], F_Q [m³/s], F_W [kg/s]. [Table of poles and zeros at two operating points; poles at ±0.0067i and ±0.0092i.] Topside measurements: Ooops... RHP-zeros or zeros close to the origin.
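The kind of check behind this table can be sketched for a single candidate measurement (the transfer function below is a placeholder, not the slug model): compute the zeros from valve opening to the measurement and flag any in the right half plane.

```python
import numpy as np

# Placeholder SISO transfer function from valve opening z to a topside measurement;
# the coefficients are illustrative and chosen to contain one RHP zero.
num = [-1.0, 2.0]            # numerator  -s + 2   -> zero at s = +2 (RHP)
den = [1.0, 3.0, 2.0]        # denominator (s + 1)(s + 2)

zeros, poles = np.roots(num), np.roots(den)
print("zeros:", zeros, " poles:", poles)
if np.any(np.real(zeros) > 0):
    print("RHP zero present -> fundamental limitation for this measurement")
```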
63 Stabilization with topside measurements: avoid RHP-zeros by using 2 measurements. Model-based control (LQG) with 2 topside measurements: DP and density ρ_T.
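A compact sketch of the LQG idea with two measurements, using an illustrative unstable second-order state-space model (not the slug model, and not the tuning from the experiments): one Riccati equation gives the state-feedback gain, a second one gives the Kalman filter gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative unstable 2-state model with 2 measurements; all numbers are placeholders.
A = np.array([[0.1, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)                               # two measured outputs

Q, R = np.eye(2), np.array([[1.0]])         # LQR weights (assumptions)
W, V = 0.1 * np.eye(2), 0.01 * np.eye(2)    # process / measurement noise covariances

# State-feedback gain (LQR): u = -K x_hat
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman filter gain: d(x_hat)/dt = A x_hat + B u + L (y - C x_hat)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

print("LQR gain K:\n", K)
print("Kalman gain L:\n", L)
```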
64 Summary, anti-slug control: Stabilization of the smooth flow regime = $$$$! Stabilization using the downhole pressure is simple; stabilization using topside measurements is possible. Control can make a difference! Thanks to Espen Storkaas, Heidi Sivertsen and Ingvald Bårdsen.
65 Conclusions: Feedback is an extremely powerful tool. Complex systems can be controlled by hierarchies (cascades) of single-input single-output (SISO) control loops. Control the right variables (primary outputs) to achieve "self-optimizing control". Feedback can make new things possible (anti-slug control).