Feedback: The simple and best solution

Presentation transcript:

1 Feedback: The simple and best solution
Applications to self-optimizing control and stabilization of new operating regimes
Sigurd Skogestad
Department of Chemical Engineering
Norwegian University of Science and Technology (NTNU), Trondheim
September 2005

2 Abstract
Feedback: The simple and best solution. Sigurd Skogestad, NTNU, Trondheim, Norway. Most chemical engineers are (indirectly) trained to be "feedforward thinkers" and immediately think of "model inversion" when it comes to doing control. Thus, they prefer to rely on models instead of data, although simple feedback solutions are in many cases much simpler and certainly more robust. The seminar starts with a simple comparison of feedback and feedforward control and their sensitivity to uncertainty. Then two nice applications of feedback are considered: 1. Implementation of optimal operation by "self-optimizing control". The idea is to turn optimization into a setpoint control problem, and the trick is to find the right variable to control. Applications include process control, pizza baking, marathon running, biology and the central bank of a country. 2. Stabilization of desired operating regimes. Here feedback control can lead to completely new and simple solutions. One example would be stabilization of laminar flow at conditions where we normally have turbulent flow. In the seminar a nice application to anti-slug control in multiphase pipeline flow is discussed.

3 Outline
About myself
I. Why feedback (and not feedforward)?
II. Self-optimizing feedback control: What should we control?
III. Stabilizing feedback control: Anti-slug control
Conclusion

4 Trondheim, Norway

5 [Map of Northern Europe: Trondheim and Oslo in NORWAY, with SWEDEN, DENMARK, GERMANY, the UK, the North Sea and the Arctic circle marked]

6 NTNU, Trondheim

7 Sigurd Skogestad
Born in 1955
1978: Siv.ing. degree (MS) in Chemical Engineering from NTNU (NTH)
Process modeling group at the Norsk Hydro Research Center in Porsgrunn
Ph.D. student in Chemical Engineering at Caltech, Pasadena, USA. Thesis on "Robust distillation control". Supervisor: Manfred Morari
Professor in Chemical Engineering at NTNU
Since 1994: Head of process systems engineering center in Trondheim (PROST)
Since 1999: Head of Department of Chemical Engineering
1996: Book "Multivariable Feedback Control" (Wiley)
2000, 2003: Book "Prosessteknikk" (Norwegian)
Group of about 10 Ph.D. students in the process control area

8 Research: Develop simple yet rigorous methods to solve problems of engineering significance.
Use of feedback as a tool to: reduce uncertainty (including robust control), change the system dynamics (including stabilization; anti-slug control), and generally make the system more well-behaved (including self-optimizing control).
Limitations on performance in linear systems ("controllability")
Control structure design and plantwide control
Interactions between process design and control
Distillation column design, control and dynamics
Natural gas processes

9 Outline
About myself
I. Why feedback (and not feedforward)?
II. Self-optimizing feedback control: What should we control?
III. Stabilizing feedback control: Anti-slug control
Conclusion

10 Example 1: Plant (uncontrolled system)
[Block diagram and open-loop response: disturbance d enters through Gd, input u through G, output y; gain k = 10, time scale 25]

11 [Block diagram: input u through G and disturbance d through Gd add to give output y]

12 Model-based control = Feedforward (FF) control
"Perfect" feedforward control: u = -G^{-1} Gd d
Our case: G = Gd → Use u = -d
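As a sketch of why this requires a good model, the simulation below applies the static feedforward u = -(kd/k_model)·d to an assumed first-order plant with G = Gd, gain 10 and time constant 25 (matching Example 1; the first-order structure and the 25% gain error are illustrative assumptions):

```python
def simulate_ff(k_plant, k_model, tau=25.0, k_d=10.0, t_end=200.0, dt=0.05):
    """Feedforward control u = -(k_d / k_model) * d for a unit step disturbance,
    on the plant y' = (-y + k_plant*u + k_d*d) / tau (Euler integration).
    A perfect model (k_model == k_plant) cancels the disturbance exactly;
    a gain error leaves a residual offset."""
    n = int(t_end / dt)
    y = 0.0
    d = 1.0                      # unit step disturbance
    u = -(k_d / k_model) * d     # static feedforward: u = -G^{-1} Gd d
    for _ in range(n):
        y += dt * (-y + k_plant * u + k_d * d) / tau
    return y                     # output after t_end (near steady state)

y_nominal = simulate_ff(k_plant=10.0, k_model=10.0)   # perfect model
y_gain_err = simulate_ff(k_plant=10.0, k_model=12.5)  # 25% gain error
print(y_nominal, y_gain_err)  # nominal stays at 0; gain error leaves offset near 2
```

With the perfect model the cancellation is exact at every step, while a 25% model-gain error lets a fifth of the disturbance through at steady state.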

13 Feedforward control: Nominal (perfect model)
[Block diagram and simulated response]

14 Feedforward: sensitive to gain error
[Block diagram and simulated response]

15 Feedforward: sensitive to time constant error
[Block diagram and simulated response]

16 Feedforward: Moderately sensitive to delay (in G or Gd)
[Simulated response]

17 Measurement-based correction = Feedback (FB) control
[Block diagram: controller C compares output y with setpoint ys; the error e drives input u to plant G, with disturbance d entering through Gd]

18 Feedback generates inverse!
Feedback PI-control: Nominal case
[Simulated input u and resulting output y]
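A minimal sketch of this point: the PI controller below uses no model inversion at all, yet integral action drives y back to the setpoint even when the plant gain is 50% off. The first-order plant matches Example 1; the PI tunings (Kc, tau_I) are illustrative assumptions, not the values used on the slides:

```python
def simulate_pi(k_plant, tau=25.0, k_d=10.0, Kc=0.5, tau_I=25.0,
                t_end=400.0, dt=0.05, ys=0.0):
    """PI feedback u = Kc*(e + integral(e)/tau_I), e = ys - y, on the plant
    y' = (-y + k_plant*u + k_d*d)/tau, for a unit step disturbance d."""
    n = int(t_end / dt)
    y, integ, d = 0.0, 0.0, 1.0
    for _ in range(n):
        e = ys - y
        integ += e * dt
        u = Kc * (e + integ / tau_I)
        y += dt * (-y + k_plant * u + k_d * d) / tau
    return y

# The same controller, untouched, handles a 50% plant-gain error:
# integral action removes the offset in both cases.
y_nom = simulate_pi(k_plant=10.0)
y_err = simulate_pi(k_plant=15.0)
print(y_nom, y_err)  # both return to the setpoint 0
```

In effect the high-gain loop "generates the inverse" of the plant from the measured error, which is why no explicit model is needed.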

19 Feedback PI control: insensitive to gain error
[Block diagram and simulated response]

20 Feedback: insensitive to time constant error
[Block diagram and simulated response]

21 Feedback control: sensitive to time delay
[Block diagram and simulated response]

22 Comment
Time delay error in disturbance model (Gd): No effect (!) with feedback (except a time shift).
Feedforward: Similar effect as a time delay error in G.

23 Conclusion: Why feedback? (and not feedforward control)
Simple: High gain feedback!
Counteract unmeasured disturbances
Reduce effect of changes / uncertainty (robustness)
Change system dynamics (including stabilization)
Linearize the behavior
No explicit model required
MAIN PROBLEM: Potential instability (may occur suddenly) with time delay / RHP-zero

24 Outline
About myself
I. Why feedback (and not feedforward)?
II. Self-optimizing feedback control: What should we control?
III. Stabilizing feedback control: Anti-slug control
Conclusion

25 Optimal operation (economics)
Define scalar cost function J(u0, d)
u0: degrees of freedom
d: disturbances
Optimal operation for given d:
min_{u0} J(u0, x, d)
subject to: f(u0, x, d) = 0, g(u0, x, d) ≤ 0

26 "Obvious" solution: Optimizing control = "Feedforward"
Estimate d and compute new uopt(d)
Problem: Complicated and sensitive to uncertainty

27 Engineering systems
Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers
Commercial aircraft
Large-scale chemical plant (refinery): 1000's of loops
Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward
Same in biological systems

28 In Practice: Feedback implementation
Issue: What should we control?

29 Further layers: Process control hierarchy
[Hierarchy diagram: RTO (economics) → MPC → PID; which variables y1 = c should the upper layer give setpoints for?]

30 Implementation of optimal operation
The optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy "active constraints", g(u0, d) = 0
CONTROL ACTIVE CONSTRAINTS! Implementation of active constraints is usually simple.
WHAT MORE SHOULD WE CONTROL? Here we concentrate on the remaining unconstrained degrees of freedom u.

31 Optimal operation
[Plot: cost J versus the remaining unconstrained independent variable u, with minimum Jopt at u = uopt]

32 Implementation: How do we deal with uncertainty?
1. Disturbances d
2. Implementation error n
us = uopt(d0) (nominal optimization)
u = us + n
Cost J ≥ Jopt(d)
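A toy numeric illustration of the two problems; the quadratic cost below is hypothetical, chosen only so that uopt(d) = d and Jopt(d) = 1 for every d:

```python
def J(u, d):
    """Toy cost: quadratic with optimum at u = d (hypothetical example)."""
    return (u - d) ** 2 + 1.0   # J_opt(d) = 1.0 for every d

d0 = 2.0                 # nominal disturbance used in the optimization
us = d0                  # u_s = u_opt(d0)

# Problem 1: the disturbance changes, but u is kept at u_s
d = 3.0
loss_disturbance = J(us, d) - J(d, d)            # = (d0 - d)^2 = 1.0

# Problem 2: implementation error n on top of the setpoint
n = 0.5
loss_implementation = J(us + n, d0) - J(d0, d0)  # = n^2 = 0.25

print(loss_disturbance, loss_implementation)
```

Both losses are measured against the re-optimized cost Jopt(d), which is what the next slides plot.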

33 Problem no. 1: Disturbance d
[Plot: cost curves J(u, d0) and J(u, d) for d ≠ d0; keeping u constant at uopt(d0) gives a loss when d changes]

34 Problem no. 2: Implementation error n
[Plot: cost J(u, d0); implementing u = us + n instead of us = uopt(d0) gives a loss due to the implementation error]

35 Effect of implementation error ("problem 2")
[Plots: a flat optimum is good (small loss for a given error); a sharp optimum is bad]

36 Self-optimizing Control
Define loss: L = J(u, d) - Jopt(d)
Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need for re-optimizing when disturbances occur): c = cs
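A minimal numeric sketch of the definition, using the same hypothetical quadratic cost with uopt(d) = d: keeping a well-chosen c at a constant setpoint gives zero loss, while keeping a poor c constant gives a loss that grows with the disturbance:

```python
def J(u, d):
    """Toy cost with optimum at u = d (hypothetical illustration)."""
    return (u - d) ** 2

def loss_with_constant_c(solve_u, d, d0=0.0):
    """Keep the controlled variable at its nominally optimal setpoint,
    back out the resulting u for the actual disturbance d, report the loss."""
    u = solve_u(d, d0)
    return J(u, d) - J(d, d)   # J_opt(d) = 0 here

# Candidate 1: c = u (constant input, i.e. open loop): u = u_opt(d0) = d0
loss_u = loss_with_constant_c(lambda d, d0: d0, d=1.5)
# Candidate 2: c = u - d (the cost gradient up to a factor): c = 0 gives u = d
loss_grad = loss_with_constant_c(lambda d, d0: d, d=1.5)
print(loss_u, loss_grad)  # 2.25 and 0.0: c = u - d is self-optimizing
```

The second candidate is self-optimizing because its optimal value (zero) does not move when d moves, which is exactly requirement 1 on the candidate-variable slide.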

37 Self-optimizing Control – Marathon
Optimal operation of Marathon runner, J=T Any self-optimizing variable c (to control at constant setpoint)?

38 Self-optimizing Control – Marathon
Optimal operation of Marathon runner, J = T
Any self-optimizing variable c (to control at constant setpoint)?
c1 = distance to leader of race
c2 = speed
c3 = heart rate
c4 = level of lactate in muscles

39 Self-optimizing Control – Marathon
Optimal operation of Marathon runner, J = T
Any self-optimizing variable c (to control at constant setpoint)?
c1 = distance to leader of race (Problem: Feasibility for d)
c2 = speed (Problem: Feasibility for d)
c3 = heart rate (Problem: Implementation error n)
c4 = level of lactate in muscles (Problem: Implementation error n)

40 Self-optimizing Control – Sprinter
Optimal operation of Sprinter (100 m), J = T
Active constraint control: Maximum speed ("no thinking required")

41 Further examples
Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%)
Cake baking. J = nice taste, u = heat input. c = temperature (200 °C)
Business. J = profit. c = "Key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending. Optimal values obtained by "benchmarking"
Investment (portfolio management). J = profit. c = fraction of investment in shares (50%)
Biological systems: "Self-optimizing" controlled variables c have been found by natural selection. Need to do "reverse engineering": find the controlled variables used in nature; from this, possibly identify what overall objective J the biological system has been attempting to optimize

42 Mathematical: Local analysis
[Plot: cost J versus u around uopt, illustrating a quadratic expansion of the loss near the optimum]
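The local analysis behind this plot is the standard second-order expansion of the loss used in the self-optimizing control literature (a sketch; J_uu denotes the Hessian of the cost with respect to the unconstrained degrees of freedom):

```latex
% Second-order expansion of the loss around the optimum for a given d:
L \;=\; J(u, d) - J_{\mathrm{opt}}(d)
  \;\approx\; \tfrac{1}{2}\,(u - u_{\mathrm{opt}})^{\top} J_{uu}\,(u - u_{\mathrm{opt}})
```

A flat optimum (small J_uu) thus gives a small loss for a given input error, which is what the candidate-variable requirements on the next slide formalize.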

43 Good candidate controlled variables c (for self-optimizing control)
Requirements for c: The optimal value of c should be insensitive to disturbances (avoid problem 1) c should be easy to measure and control (rest: avoid problem 2) The value of c should be sensitive to changes in the degrees of freedom (Equivalently, J as a function of c should be flat) 4. For cases with more than one unconstrained degrees of freedom, the selected controlled variables should be independent. Maximum gain rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the scaled gain (Gs) (minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c)
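The rule can be sketched numerically. The 2x2 gain matrices and the unit spans below are hypothetical, chosen only to show the computation; in practice the spans come from the expected variation of each candidate c and the allowed range of each input:

```python
import numpy as np

def scaled_gain_rule(G, span_c, span_u):
    """Maximum gain rule (Skogestad and Postlethwaite, 1996): scale each
    candidate output by its expected variation (span) and each input by its
    allowed range, then prefer the candidate set c with the LARGEST minimum
    singular value of the scaled steady-state gain matrix Gs."""
    Gs = np.diag(1.0 / np.asarray(span_c, float)) \
         @ np.asarray(G, float) \
         @ np.diag(np.asarray(span_u, float))
    return np.linalg.svd(Gs, compute_uv=False)[-1]   # minimum singular value

# Hypothetical example: two candidate variable sets for the same two inputs
G_a = [[10.0, 1.0], [2.0, 5.0]]   # well-conditioned candidate set
G_b = [[10.0, 9.9], [2.0, 1.9]]   # nearly singular candidate set
sa = scaled_gain_rule(G_a, span_c=[1.0, 1.0], span_u=[1.0, 1.0])
sb = scaled_gain_rule(G_b, span_c=[1.0, 1.0], span_u=[1.0, 1.0])
print(sa, sb)  # the first set has the larger minimum singular value: preferred
```

A nearly singular candidate set means the variables move together, violating requirement 4 above, and the rule exposes this as a tiny minimum singular value.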

44 Self-optimizing control: Recycle process J = V (minimize energy)
5 4 1 Given feedrate F0 and column pressure: 2 Nm = 5 3 economic (steady-state) DOFs 3 Constraints: Mr < Mrmax, xB > xBmin = 0.98 DOF = degree of freedom

45 Recycle process: Control active constraints
Mr = Mrmax Remaining DOF:L Active constraint xB = xBmin One unconstrained DOF left for optimization: What more should we control?

46 Maximum gain rule: Steady-state gain
Conventional: Looks good
Luyben rule: Not promising economically

47 Recycle process: Loss with constant setpoint, cs
Large loss with c = F (Luyben rule)
Negligible loss with c = L/F or c = temperature

48 Recycle process: Proposed control structure for case with J = V (minimize energy)
Active constraint: Mr = Mrmax
Active constraint: xB = xBmin
Self-optimizing loop: Adjust L such that L/F is constant

49 Outline
About myself
I. Why feedback (and not feedforward)?
II. Self-optimizing feedback control: What should we control?
III. Stabilizing feedback control: Anti-slug control
Conclusion

50 Application of stabilizing feedback control: Anti-slug control
Two-phase pipe flow (liquid and vapor)
Slug (liquid) buildup

51 Slug cycle (stable limit cycle)
Experiments performed by the Multiphase Laboratory, NTNU


53 Experimental mini-loop

54 Experimental mini-loop Valve opening (z) = 100%

55 Experimental mini-loop Valve opening (z) = 25%

56 Experimental mini-loop Valve opening (z) = 15%

57 Experimental mini-loop: Bifurcation diagram
[Bifurcation diagram: pressure versus valve opening z %; no slug at small openings, slugging at larger openings]

58 Avoid slugging?
Design changes
Feedforward control?
Feedback control?

59 Avoid slugging: 1. Close valve (but increases pressure)
Design change
[Bifurcation diagram: no slugging when the valve is closed, i.e. at small valve opening z %]

60 Avoid slugging: 2. Other design changes to avoid slugging
[Pipeline diagram with pressures p1, p2 and valve opening z]

61 Minimize effect of slugging: 3. Build large slug-catcher
Design change; most common strategy in practice
[Pipeline diagram with pressures p1, p2 and valve opening z]

62 Avoid slugging: 4. Feedback control?
Comparison with simple 3-state model
[Bifurcation diagram versus valve opening z %]
Predicted smooth flow: Desirable but open-loop unstable

63 Avoid slugging: 4. "Active" feedback control
Simple PI-controller
[Diagram: pressure transmitter PT measures inlet pressure p1; pressure controller PC compares it to the reference and adjusts valve opening z]
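A minimal sketch of such a PI pressure controller (discrete, with valve limits and anti-windup). The sign convention assumes opening the valve lowers p1, and the tuning values and setpoint are illustrative, not those used in the experiments:

```python
class PIController:
    """Discrete PI controller with output limits and simple anti-windup,
    as in the anti-slug loop: measure inlet pressure p1, drive valve z [%].
    Tuning values in the usage sketch below are illustrative only."""
    def __init__(self, Kc, tau_I, dt, u_min=0.0, u_max=100.0):
        self.Kc, self.tau_I, self.dt = Kc, tau_I, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        u = self.Kc * (e + self.integral / self.tau_I)
        if self.u_min < u < self.u_max:      # anti-windup: only integrate
            self.integral += e * self.dt     # while the valve is unsaturated
        return min(max(u, self.u_min), self.u_max)

# Usage sketch: hold p1 at 2.0 bar by adjusting valve opening z [%].
# Kc is negative because opening the valve lowers the pressure.
pc = PIController(Kc=-40.0, tau_I=30.0, dt=1.0)
z = pc.step(setpoint=2.0, measurement=2.1)  # pressure too high -> open valve
```

Such a loop is simple precisely because it only reacts to the measured pressure; no slug model is inverted.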

64 Anti slug control: Mini-loop experiments
[Time series of inlet pressure p1 [bar] and valve opening z [%], with periods of controller ON and controller OFF]

65 Anti slug control: Full-scale offshore experiments at Hod-Vallhall field (Havre,1999)

66 Analysis: Poles and zeros
Operating points:
z = 0.175: P1 = 70.05 bar, DP = 1.94 bar; poles: -6.11, 0.0008 ± 0.0067i
z = 0.25: P1 = 69 bar, DP = 0.96 bar; poles: -6.21, 0.0027 ± 0.0092i
Zeros for candidate topside measurements y (P1 [bar], DP [bar], ρT [kg/m3], FQ [m3/s], FW [kg/s]):
z = 0.175: 3.2473, 0.0142, 0.0048
z = 0.25: 3.4828, 0.0131
Topside measurements: Ooops.... RHP-zeros or zeros close to origin

67 Stabilization with topside measurements: Avoid RHP-zeros by using 2 measurements
Model-based control (LQG) with 2 top measurements: DP and density ρT

68 Summary: anti slug control
Stabilization of smooth flow regime = $$$$!
Stabilization using downhole pressure is simple
Stabilization using topside measurements is possible
Control can make a difference!
Thanks to: Espen Storkaas + Heidi Sivertsen and Ingvald Bårdsen

69 Conclusions
Feedback is an extremely powerful tool
Complex systems can be controlled by hierarchies (cascades) of single-input-single-output (SISO) control loops
Control the right variables (primary outputs) to achieve "self-optimizing control"
Feedback can make new things possible (anti-slug)

