
1 1 Control structure design for complete chemical plants (a systematic procedure to plantwide control) Sigurd Skogestad Department of Chemical Engineering Norwegian University of Science and Technology (NTNU) Trondheim, Norway Based on: Plenary presentation at ESCAPE’12, May 2002 Updated/expanded April 2004 for 2.5 h tutorial in Vancouver, Canada Further updated: August 2004

2 2 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

3 3 Idealized view of control (“Ph.D. control”)

4 4 Practice: Tennessee Eastman challenge problem (Downs, 1991) (“PID control”)

5 5 Idealized view II: Optimizing control

6 6 Practice II: Hierarchical decomposition with separate layers What should we control?

7 7 Alan Foss (“Critique of chemical process control theory”, AIChE Journal, 1973): The central issue to be resolved... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it. Carl Nett (1989): Minimize control system complexity subject to the achievement of accuracy specifications in the face of uncertainty.

8 8 Control structure design Not the tuning and behavior of each control loop, But rather the control philosophy of the overall plant with emphasis on the structural decisions: –Selection of controlled variables (“outputs”) –Selection of manipulated variables (“inputs”) –Selection of (extra) measurements –Selection of control configuration (structure of overall controller that interconnects the controlled, manipulated and measured variables) –Selection of controller type (LQG, H-infinity, PID, decoupler, MPC etc.). That is: Control structure design includes all the decisions we need to make to get from “PID control” to “Ph.D. control”

9 9 Process control: “Plantwide control” = “Control structure design for complete chemical plant” Large systems Each plant usually different – modeling expensive Slow processes – no problem with computation time Structural issues important –What to control? –Extra measurements –Pairing of loops

10 10 Previous work on plantwide control Page Buckley (1964) - Chapter on “Overall process control” (still industrial practice) Greg Shinskey (1967) – process control systems Alan Foss (1973) - control system structure Bill Luyben et al. (1975- ) – case studies ; “snowball effect” George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes Ruel Shinnar (1981- ) - “dominant variables” Jim Downs (1991) - Tennessee Eastman challenge problem Larsson and Skogestad (2000): Review of plantwide control

11 11 Control structure selection issues are identified as important also in other industries. Professor Gary Balas (Minnesota) at ECC’03 about flight control at Boeing: The most important control issue has always been to select the right controlled variables --- no systematic tools used!

12 12 Main simplification: Hierarchical structure Need to define objectives and identify main issues for each layer PID RTO MPC

13 13 Regulatory control (seconds) Purpose: “Stabilize” the plant by controlling selected “secondary” variables (y 2 ) such that the plant does not drift too far away from its desired operation Use simple single-loop PI(D) controllers Status: Many loops poorly tuned –Most common setting: K c = 1, τ I = 1 min (default) –Even wrong sign of gain K c …

14 14 Regulatory control……... Trend: Can do better! Carefully go through plant and retune important loops using standardized tuning procedure Many tuning rules exist, including the Skogestad (SIMC) rules: –K c = (1/k) (τ 1 / [τ c + θ]), τ I = min(τ 1, 4[τ c + θ]), Typical: τ c = θ –“Probably the best simple PID tuning rules in the world” © Carlsberg Outstanding structural issue: What loops to close, that is, which variables (y 2 ) to control?
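The SIMC PI rules on this slide are simple enough to compute directly. A minimal sketch; the model values (k = 2, τ1 = 10, θ = 1) are illustrative, not from the slides:

```python
def simc_pi(k, tau1, theta, tau_c=None):
    """SIMC PI tunings for a first-order-plus-delay model
    g(s) = k exp(-theta*s) / (tau1*s + 1).
    Default tau_c = theta (the 'typical' choice on the slide)."""
    if tau_c is None:
        tau_c = theta
    Kc = (1.0 / k) * tau1 / (tau_c + theta)   # Kc = (1/k) (tau1 / [tau_c + theta])
    tauI = min(tau1, 4.0 * (tau_c + theta))   # tauI = min(tau1, 4[tau_c + theta])
    return Kc, tauI

# Illustrative model: k = 2, tau1 = 10, theta = 1 (so tau_c = 1)
Kc, tauI = simc_pi(k=2.0, tau1=10.0, theta=1.0)
print(Kc, tauI)  # 2.5 8.0
```

Note how the min(·) in τI caps the integral time at 4(τc + θ) for slow processes, which is what gives SIMC good disturbance rejection.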

15 15 Supervisory control (minutes) Purpose: Keep primary controlled variables (c=y 1 ) at desired values, using as degrees of freedom the setpoints y 2s for the regulatory layer. Status: Many different “advanced” controllers, including feedforward, decouplers, overrides, cascades, selectors, Smith Predictors, etc. Issues: –Which variables to control may change due to change of “active constraints” –Interactions and “pairing”

16 16 Supervisory control…... Trend: Model predictive control (MPC) used as unifying tool. –Linear multivariable models with input constraints –Tuning (modelling) is time-consuming and expensive Issue: When use MPC and when use simpler single-loop decentralized controllers ? –MPC is preferred if active constraints (“bottleneck”) change. –Avoids logic for reconfiguration of loops Outstanding structural issue: –What primary variables c=y 1 to control?

17 17 Local optimization (hour) Purpose: Minimize cost function J and: –Identify active constraints –Recompute optimal setpoints y 1s for the controlled variables Status: Done manually by clever operators and engineers Trend: Real-time optimization (RTO) based on detailed nonlinear steady-state model Issues: –Optimization not reliable. –Need nonlinear steady-state model –Modelling is time-consuming and expensive

18 18 Objectives of layers: MVs and CVs. RTO: min J; MV = y 1s. Supervisory (MPC): CV = y 1 (setpoint c s = y 1s ); MV = y 2s. Regulatory (PID): CV = y 2 ; MV = u (valves).

19 19 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

20 20 Stepwise procedure plantwide control I. TOP-DOWN Step 1. DEGREES OF FREEDOM Step 2. OPERATIONAL OBJECTIVES Step 3. WHAT TO CONTROL? (primary CV’s c=y 1 ) Step 4. PRODUCTION RATE II. BOTTOM-UP (structure control system): Step 5. REGULATORY CONTROL LAYER (PID) “Stabilization” What more to control? (secondary CV’s y 2 ) Step 6. SUPERVISORY CONTROL LAYER (MPC) Decentralization Step 7. OPTIMIZATION LAYER (RTO) Can we do without it?

21 21 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

22 22 Step 1. Degrees of freedom (DOFs) u m – dynamic (control) degrees of freedom = valves u 0 – steady-state degrees of freedom N m : no. of dynamic (control) DOFs (valves) N u0 = N m − N 0 : no. of steady-state DOFs –N 0 = N 0y + N 0m : no. of variables with no steady-state effect N 0m : no. of purely dynamic control DOFs N 0y : no. of controlled variables (liquid levels) with no steady-state effect The cost J normally depends only on the steady-state DOFs

23 23 Distillation column with given feed: N m = 5, N 0y = 2, N u0 = 5 − 2 = 3 (2 with given pressure)

24 24 Heat-integrated distillation process

25 25 Heat exchanger with bypasses

26 26 Typical number of steady-state degrees of freedom (u 0 ) for some process units each external feedstream: 1 (feedrate) splitter: n-1 (split fractions) where n is the number of exit streams mixer: 0 compressor, turbine, pump: 1 (work) adiabatic flash tank: 1 (0 with fixed pressure) liquid phase reactor: 1 (volume) gas phase reactor: 1 (0 with fixed pressure) heat exchanger: 1 (duty or net area) distillation column excluding heat exchangers: 1 (0 with fixed pressure) + number of sidestreams
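The table above lends itself to a simple tally of steady-state DOFs for a flowsheet. A sketch; the flowsheet (feed, liquid reactor, column at fixed pressure, recycle pump) is a hypothetical example, not one of the slides' case studies:

```python
# Steady-state DOF contribution per unit type (from the table above)
SS_DOF = {
    "external feed": 1,          # feedrate
    "splitter(n=2)": 1,          # n - 1 split fractions
    "mixer": 0,
    "pump": 1,                   # work
    "heat exchanger": 1,         # duty or net area
    "liquid phase reactor": 1,   # volume
    "column (fixed p)": 0,       # 0 with fixed pressure, no sidestreams
}

# Hypothetical reactor-recycle flowsheet
flowsheet = ["external feed", "liquid phase reactor", "column (fixed p)", "pump"]
n_u0 = sum(SS_DOF[u] for u in flowsheet)
print(n_u0)  # 3 steady-state degrees of freedom
```

The point of the tally is only bookkeeping: it tells you how many steady-state setpoints the optimization layer will have to specify.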

27 27 Check that there are enough manipulated variables (DOFs) - both dynamically and at steady-state (step 2) Otherwise: Need to add equipment –extra heat exchanger –bypass –surge tank

28 28 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

29 29 Optimal operation (economics) What are we going to use our degrees of freedom for? Define scalar cost function J(u 0,x,d) –u 0 : degrees of freedom –d: disturbances –x: states (internal variables) Typical cost function: Optimal operation for given d: min u0 J(u 0,x,d) subject to: Model equations: f(u 0,x,d) = 0 Operational constraints: g(u 0,x,d) < 0 J = cost feed + cost energy – value products
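The optimization problem min u0 J(u0, x, d) subject to constraints can be written down directly. A toy sketch; the cost expression and all numbers are invented for illustration (a real application would solve the full model f(u0, x, d) = 0 with an NLP solver), but the structure "feed cost + energy cost − product value, minimized over a bounded u0" mirrors the slide:

```python
import numpy as np

def J(u0, d):
    """Toy stand-in for J = cost feed + cost energy - value products."""
    cost_feed = 1.0
    cost_energy = 0.5 * u0 ** 2
    value_products = d * np.sqrt(u0)
    return cost_feed + cost_energy - value_products

# Operational constraint g: 0 <= u0 <= 10; minimize over a fine grid (sketch only)
d_nominal = 2.0
u_grid = np.linspace(0.0, 10.0, 100001)
u0_opt = u_grid[np.argmin(J(u_grid, d_nominal))]
# stationary point: u0 - d/(2*sqrt(u0)) = 0  ->  u0 = 1 for d = 2
```

For this toy cost the optimum moves with the disturbance d, which is exactly why the next slides ask how to implement the optimal policy without re-solving the problem online.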

30 30 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

31 31 Step 3. What should we control (c)? Outline Implementation of optimal operation Self-optimizing control Uncertainty (d and n) Example: Marathon runner Methods for finding the “magic” self-optimizing variables: A. Large gain: Minimum singular value rule B. “Brute force” loss evaluation C. Optimal combination of measurements Example: Recycle process Summary

32 32 Implementation of optimal operation Optimal operation for given d * : min u0 J(u 0,x,d) subject to: Model equations: f(u 0,x,d) = 0 Operational constraints: g(u 0,x,d) < 0 → u 0opt (d * ) Problem: Cannot keep u 0opt constant because disturbances d change

33 33 Problem: Too complicated (requires detailed model and description of uncertainty) Implementation of optimal operation (Cannot keep u 0opt constant) ”Obvious” solution: Optimizing control Estimate d from measurements and recompute u 0opt (d)

34 34 Implementation of optimal operation (Cannot keep u 0opt constant) Simpler solution: Look for another variable c which is better to keep constant and control c at constant setpoint c s = c opt (d*) Note: u 0 will indirectly change when d changes

35 35 What c’s should we control? Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u 0,d) = 0 CONTROL ACTIVE CONSTRAINTS! –c s = value of active constraint –Implementation of active constraints is usually simple. WHAT MORE SHOULD WE CONTROL? –Find variables c for remaining unconstrained degrees of freedom u.

36 36 What should we control? (primary controlled variables y 1 =c) Intuition: “Dominant variables” (Shinnar) Systematic: Minimize cost J(u 0,d * ) w.r.t. DOFs u 0. 1.Control active constraints (constant setpoint is optimal) 2.Remaining unconstrained DOFs: Control “self-optimizing” variables c for which constant setpoints c s = c opt (d * ) give small (economic) loss Loss = J - J opt (d) when disturbances d ≠ d * occur c = ? (economics) y 2 = ? (stabilization)

37 37 Self-optimizing Control Self-optimizing control is when acceptable operation can be achieved using constant setpoints (c s ) for the controlled variables c (without the need to re-optimize when disturbances occur). c = c s

38 38 The difficult unconstrained variables (Figure: cost J as a function of the selected unconstrained controlled variable c, with minimum J opt at c = c opt )

39 39 Implementation of unconstrained variables is not trivial: How do we deal with uncertainty? 1. Disturbances d (c opt (d) changes) 2. Implementation error n (actual c ≠ c opt ) c s = c opt (d * ) from the nominal optimization; implemented value: c = c s + n

40 40 Problem no. 1: Disturbance d (Figure: cost J versus c for d = d * and d ≠ d *, showing the loss with a constant value for c) ⇒ Want c opt independent of d

41 41 Example: Tennessee Eastman plant (Figure: cost J versus c = purge rate) The nominal optimum setpoint is infeasible with disturbance 2: the curve bends backwards. Conclusion: Do not use purge rate as a controlled variable

42 42 Problem no. 2: Implementation error n (Figure: cost J versus c with c s = c opt (d * ), showing the loss due to implementation error when c = c s + n) ⇒ Want n small and a ”flat” optimum

43 43 Effect of implementation error on cost (“problem 2”) (Figure: a flat optimum is good; a sharp optimum is bad)

44 44 Example of a sharp optimum. High-purity distillation: c = temperature at top of column. Water (L) – acetic acid (H), max 100 ppm acetic acid. 100 °C: 100% water; 100.01 °C: 100 ppm; 99.99 °C: infeasible

45 45 Summary unconstrained variables: Which variable c to control? Self-optimizing control: Constant setpoints c s give ”near-optimal operation” (= acceptable loss L for expected disturbances d and implementation errors n). Acceptable loss ⇒ self-optimizing control

46 46 Examples self-optimizing control Marathon runner Central bank Cake baking Business systems (KPIs) Investment portfolio Biology Chemical process plants: Optimal blending of gasoline Define optimal operation (J) and look for a ”magic” variable (c) which when kept constant gives acceptable loss (self-optimizing control)

47 47 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)?

48 48 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)? c 1 = distance to leader of race c 2 = speed c 3 = heart rate c 4 = level of lactate in muscles

49 49 Self-optimizing Control – Marathon Optimal operation of Marathon runner, J=T –Any self-optimizing variable c (to control at constant setpoint)? c 1 = distance to leader of race (Problem: Feasibility for d) c 2 = speed (Problem: Feasibility for d) c 3 = heart rate (Problem: Impl. Error n) c 4 = level of lactate in muscles (Problem: Large impl.error n)

50 50 Self-optimizing Control – Sprinter Optimal operation of Sprinter (100 m), J=T –Active constraint control: Maximum speed (”no thinking required”)

51 51 Further examples Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%) Cake baking. J = nice taste, u = heat input. c = temperature (200 C) Business, J = profit. c = ”key performance indicator” (KPI), e.g. –Response time to order –Energy consumption per kg or unit –Number of employees –Research spending Optimal values obtained by ”benchmarking” Investment (portfolio management). J = profit. c = fraction of investment in shares (50%) Biological systems: –”Self-optimizing” controlled variables c have been found by natural selection –Need to do ”reverse engineering”: Find the controlled variables used in nature From this possibly identify what overall objective J the biological system has been attempting to optimize

52 52 Unconstrained degrees of freedom: Looking for “magic” variables to keep at constant setpoints. What properties do they have? Shinnar (1981): Control “dominant variables” (undefined..) Skogestad and Postlethwaite (1996): The optimal value of c should be insensitive to disturbances (avoid problem 1, d) c should be easy to measure and control accurately (avoid problem 2, n) The value of c should be sensitive to changes in the steady-state degrees of freedom (equivalently, J as a function of c should be flat) For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Summarized by the minimum singular value rule

53 53 Unconstrained degrees of freedom: Looking for “magic” variables to keep at constant setpoints. How can we find them systematically? A. Minimum singular value rule: B. “Brute force”: Consider available measurements y, and evaluate loss when they are kept constant: C. More general: Find optimal linear combination (matrix H):

54 54 Unconstrained degrees of freedom: A. Minimum singular value rule (Figure: the optimizer provides setpoint c s ; a controller adjusts u to keep the measured c m = c + n at c s ; second panel shows J versus c around c s = c opt ) Want the slope (= gain G from u to c) as large as possible

55 55 Unconstrained degrees of freedom: A. Minimum singular value rule Minimum singular value rule (Skogestad and Postlethwaite, 1996): Look for variables c that maximize the minimum singular value σ(G) of the appropriately scaled steady-state gain matrix G from u to c (u: unconstrained degrees of freedom). Scaling is important: –Scale c such that their expected variation is similar (divide by optimal variation + noise) –Scale inputs u such that they have similar effect on cost J (J uu unitary) σ(G) is called the Morari Resiliency Index (MRI) by Luyben Detailed proof: I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, “Optimal selection of controlled variables”, Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).

56 56 Minimum singular value rule in words Select controlled variables c for which their controllable range is large compared to their sum of optimal variation and control error controllable range = range c may reach by varying the inputs optimal variation: due to disturbance control error = implementation error n
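In numbers, the rule amounts to scaling the steady-state gain matrix and comparing minimum singular values across candidate CV sets. A sketch; the two 2×2 gain matrices and the spans are invented for illustration:

```python
import numpy as np

def scaled_min_sv(G, span):
    """Minimum singular value of G after scaling each candidate c by its
    span = (optimal variation + implementation error)."""
    Gs = np.diag(1.0 / np.asarray(span)) @ G
    return np.linalg.svd(Gs, compute_uv=False).min()

# Illustrative gains from the 2 unconstrained inputs u to two candidate CV sets
G_A = np.array([[10.0, 0.0],
                [0.0,  0.1]])   # one nearly uncontrollable direction
G_B = np.array([[5.0, 4.0],
                [4.0, 5.0]])    # singular values 9 and 1
span = [1.0, 1.0]               # assume equal expected variation

# Rule: prefer the candidate set with the larger minimum singular value
sv_A, sv_B = scaled_min_sv(G_A, span), scaled_min_sv(G_B, span)
# sv_A ≈ 0.1, sv_B ≈ 1.0 -> choose set B
```

Set A has a large average gain but a weak direction, which is exactly what the minimum (rather than maximum) singular value penalizes.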

57 57 B. “Brute-force” procedure for selecting (primary) controlled variables (Skogestad, 2000) Step 3.1 Determine DOFs for optimization Step 3.2 Definition of optimal operation J (cost and constraints) Step 3.3 Identification of important disturbances Step 3.4 Optimization (nominally and with disturbances) Step 3.5 Identification of candidate controlled variables (use active constraint control) Step 3.6 Evaluation of loss with constant setpoints for alternative controlled variables Step 3.7 Evaluation and selection (including controllability analysis) Case studies: Tennessee Eastman, propane-propylene splitter, recycle process, heat-integrated distillation

58 58 B. Brute-force procedure Define optimal operation: Minimize cost function J Each candidate variable c: With constant setpoints c s compute loss L for expected disturbances d and implementation errors n Select variable c with smallest loss Acceptable loss ⇒ self-optimizing control
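The brute-force loop itself is tiny once a model is available. A sketch on an invented quadratic toy problem J(u, d) = (u − d)², where u opt (d) = d and J opt = 0; the two candidate variables are also invented:

```python
import numpy as np

# Toy problem: J(u, d) = (u - d)**2, so u_opt(d) = d and J_opt(d) = 0
d_star = 0.0
disturbances = np.linspace(-1.0, 1.0, 21)

# Candidate CVs, each linear in (u, d): c = a*u + b*d (coefficients invented)
candidates = {"c = u": (1.0, 0.0),
              "c = u - 0.9 d": (1.0, -0.9)}

loss = {}
for name, (a, b) in candidates.items():
    c_s = a * d_star + b * d_star          # setpoint at the nominal optimum
    u = (c_s - b * disturbances) / a       # u implied by holding c = c_s
    loss[name] = np.max((u - disturbances) ** 2)   # worst-case J - J_opt

# "c = u" leaves u at 0 -> worst loss 1.0
# "c = u - 0.9 d" moves u to 0.9 d -> worst loss (0.1)**2 = 0.01
```

Holding the disturbance-tracking combination constant makes the input follow the optimum indirectly, which is the whole idea of self-optimizing control.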

59 59 Unconstrained degrees of freedom: C. Optimal measurement combination (Alstad, 2002) Basis: Want the optimal value of c independent of disturbances ⇒ Δc opt = 0 for all Δd Find the optimal solution as a function of d: u opt (d), y opt (d) Linearize this relationship: Δy opt = F Δd, where F is the sensitivity matrix Want: HF = 0, to achieve Δc opt = H Δy opt = 0 for all values of Δd Always possible if the number of measurements is at least #u + #d Optimal when we disregard implementation error (n)

60 60 Alstad-method continued To handle implementation error: Use “sensitive” measurements, with information about all independent variables (u and d)
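The HF = 0 condition can be checked numerically: H is any basis for the left null space of F. A sketch with an invented sensitivity matrix (#u = 1, #d = 1, so #y = #u + #d = 2 measurements suffice, matching the counting rule above):

```python
import numpy as np

F = np.array([[2.0],
              [1.0]])   # invented sensitivity dy_opt/dd (2 measurements, 1 disturbance)

# Left null space of F via SVD: columns of U beyond rank(F) satisfy u^T F = 0
U, s, Vt = np.linalg.svd(F)
rank = int(np.sum(s > 1e-12))
H = U[:, rank:].T        # single row here, proportional to [1, -2] / sqrt(5)

# c = H y is then insensitive to d (to first order): H F = 0
residual = H @ F
```

With more measurements than #u + #d there is freedom left over, which the "continued" slide suggests spending on sensitive measurements to also fight implementation error.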

61 61 Toy Example

62 62 Toy Example

63 63 Toy Example

64 64 EXAMPLE: Recycle plant (Luyben, Yu, etc.) Given feedrate F 0 and column pressure: Dynamic DOFs: N m = 5 Column levels: N 0y = 2 Steady-state DOFs: N u0 = 5 − 2 = 3

65 65 Recycle plant: Optimal operation mTmT 1 remaining unconstrained degree of freedom

66 66 Control of recycle plant: Conventional structure (“two-point”: x D ) Control active constraints (M r = max and x B = 0.015) + x D

67 67 Luyben rule Luyben rule (to avoid snowballing): “Fix a stream in the recycle loop” (F or D)

68 68 Luyben rule: D constant Luyben rule (to avoid snowballing): “Fix a stream in the recycle loop” (F or D)

69 69 A. Singular value rule: Steady-state gain Luyben rule: Not promising economically Conventional: Looks good

70 70 B. “Brute force” loss evaluation: Disturbance in F 0 Loss with nominally optimal setpoints for M r, x B and c (Figure: losses for the Luyben rule vs. the conventional structure)

71 71 B. “Brute force” loss evaluation: Implementation error Loss with nominally optimal setpoints for M r, x B and c (Figure: losses for the Luyben rule)

72 72 C. Optimal measurement combination 1 unconstrained variable (#c = 1) 1 (important) disturbance: F 0 (#d = 1) “Optimal” combination requires 2 “measurements” (#y = #u + #d = 2) –For example, c = h 1 L + h 2 F BUT: Not much to be gained compared to control of single variable (e.g. L/F or x D )

73 73 Conclusion: Control of recycle plant Active constraint M r = M rmax Active constraint x B = x Bmin L/F constant: Easier than “two-point” control Assumption: Minimize energy (V) Self-optimizing

74 74 Recycle systems: Do not recommend Luyben’s rule of fixing a flow in each recycle loop (even to avoid “snowballing”)

75 75 Summary ”self-optimizing” control Operation of most real systems: Constant setpoint policy (c = c s ) –Central bank –Business systems: KPIs –Biological systems –Chemical processes Goal: Find controlled variables c such that the constant setpoint policy gives acceptable operation in spite of uncertainty ⇒ Self-optimizing control Method A: Maximize σ(G) Method B: Evaluate loss L = J - J opt Method C: Optimal linear measurement combination: Δc = H Δy where HF = 0 More examples later

76 76 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

77 77 Step 4. Where set production rate? Very important! Determines structure of remaining inventory (level) control system Set production rate at (dynamic) bottleneck Link between Top-down and Bottom-up parts

78 78 Production rate set at inlet : Inventory control in direction of flow

79 79 Production rate set at outlet: Inventory control opposite flow

80 80 Production rate set inside process

81 81 Definition of bottleneck A unit (or more precisely, an extensive variable E within this unit) is a bottleneck (with respect to the flow F) if –With the flow F as a degree of freedom, the variable E is optimally at its maximum constraint (i.e., E = E max at the optimum) –The flow F is increased by increasing this constraint (i.e., dF/dE max > 0 at the optimum). A variable E is a dynamic (control) bottleneck if in addition –The optimal value of E is unconstrained when F is fixed at a sufficiently low value. Otherwise E is a steady-state (design) bottleneck.

82 82 Reactor-recycle process: Given feedrate with production rate set at inlet

83 83 Reactor-recycle process: Reconfiguration required when the bottleneck is reached (max. vapor rate in column)

84 84 Reactor-recycle process: Given feedrate with production rate set at bottleneck (column) F 0s

85 85 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

86 86 II. Bottom-up Determine secondary controlled variables and structure (configuration) of control system (pairing) A good control configuration is insensitive to parameter changes Step 5. REGULATORY CONTROL LAYER 5.1 Stabilization (including level control) 5.2 Local disturbance rejection (inner cascades) What more to control? (secondary variables) Step 6. SUPERVISORY CONTROL LAYER Decentralized or multivariable control (MPC)? Pairing? Step 7. OPTIMIZATION LAYER (RTO)

87 87 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? –Bonus: A little about tuning and the “half rule” Step 6: Supervisory control Step 7: Real-time optimization Case studies

88 88 Step 5. Regulatory control layer Purpose: “Stabilize” the plant using local SISO PID controllers Enable manual operation (by operators) Main structural issues: What more should we control? (secondary cv’s, y 2 ) Pairing with manipulated variables (mv’s u 2 ) y 1 = c y 2 = ?

89 89 Example: Distillation Primary controlled variables: y 1 = c = product compositions (top, bottom) BUT: Delay in composition measurement + unreliable Regulatory control: For “stabilization” need control of: –Liquid level condenser (M D ) –Liquid level reboiler (M B ) –Pressure (p) –Holdup of light component in column (temperature profile) No steady-state effect

90 90 (Figure: distillation column control diagram with composition control (XC) cascaded to temperature (TC) and flow (FC) control; setpoints y s, T s, L s )

91 91 Cascade control: Degrees of freedom unchanged No degrees of freedom are lost by control of secondary (local) variables, since the setpoints y 2s replace the inputs u 2 as new degrees of freedom (Figure: block diagram with controller K and plant G; original DOF u 2, new DOF y 2s )

92 92 Objectives regulatory control layer Take care of “fast” control Simple decentralized (local) PID controllers that can be tuned on-line Allow for “slow” control in layer above (supervisory control) Make control problem easy as seen from layer above Stabilization (mathematical sense) Local disturbance rejection Local linearization (avoid “drift” due to disturbances) “stabilization” (practical sense) Implications for selection of y 2 : 1.Control of y 2 “stabilizes the plant” 2.y 2 is easy to control (favorable dynamics)

93 93 1. “Control of y 2 stabilizes the plant” “Mathematical stabilization” (e.g. reactor): Unstable mode is “quickly” detected (state observability) in the measurement (y 2 ) and is easily affected (state controllability) by the input (u 2 ). Tool for selecting input/output: Pole vectors –y 2 : Want large element in output pole vector: Instability easily detected relative to noise –u 2 : Want large element in input pole vector: Small input usage required for stabilization “Extended stabilization” (avoid “drift” due to disturbance sensitivity): Intuitive: y 2 located close to important disturbance Or rather: Controllable range for y 2 is large compared to sum of optimal variation and control error More exact tool: Partial control analysis
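Pole vectors are easy to compute from the state-space matrices: for an unstable eigenvalue with right eigenvector t and left eigenvector q, the output pole vector is Ct and the input pole vector is Bᵀq. A sketch with an invented two-state model having one unstable pole:

```python
import numpy as np

# Invented model: one unstable mode (pole at +1) and one stable mode (pole at -2)
A = np.array([[1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],
              [0.2, 1.0]])

# Right eigenvector t and left eigenvector q of the unstable pole
wA, T = np.linalg.eig(A)
t = T[:, int(np.argmax(wA.real))]
wAT, Q = np.linalg.eig(A.T)
q = Q[:, int(np.argmax(wAT.real))]

yp = np.abs(C @ t)    # output pole vector: large element -> mode easy to detect
up = np.abs(B.T @ q)  # input pole vector: large element -> small input usage
# here both point to the first output (y2 candidate) and first input (u2)
```

The largest elements of yp and up pick the measurement y2 in which the unstable mode is most visible and the input u2 that moves it with least effort, as the bullet points above describe.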

94 94 Recall rule for selecting primary controlled variables c: controlled variables c for which their controllable range is large compared to their sum of optimal variation and control error. Restated for secondary controlled variables y 2 : control variables y 2 for which their controllable range is large compared to their sum of optimal variation and control error controllable range = range y 2 may reach by varying the inputs (want large) optimal variation: due to disturbances; control error = implementation error n (want small)

95 95 Partial control analysis Primary controlled variable y 1 = c (supervisory control layer) Local control of y 2 using u 2 (regulatory control layer) Setpoint y 2s : new DOF for supervisory control

96 96 2. “y 2 is easy to control” Main rule: y 2 is easy to measure and located close to manipulated variable u 2 Statics: Want large gain (from u 2 to y 2 ) Dynamics: Want small effective delay (from u 2 to y 2 )

97 97 Aside: Effective delay and tunings PI-tunings from “SIMC rule” Use half rule to obtain first-order model –Effective delay θ = “True” delay + inverse response time constant + half of second time constant + all smaller time constants –Time constant τ 1 = original time constant + half of second time constant
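The half rule reads directly as code: half of the second-largest time constant goes to τ1, the other half (plus delay, inverse response and all smaller time constants) goes to the effective delay θ. A sketch; the time constants 8, 2, 0.5 and delay 1 are illustrative numbers:

```python
def half_rule(taus, theta0=0.0, inv_resp=0.0):
    """Reduce a model with time constants `taus`, delay `theta0` and
    inverse-response time constant `inv_resp` to first-order-plus-delay.
    tau1 = largest time constant + half of the second largest;
    theta = delay + inverse response + other half + all smaller constants."""
    taus = sorted(taus, reverse=True)
    second_half = taus[1] / 2 if len(taus) > 1 else 0.0
    tau1 = taus[0] + second_half
    theta = theta0 + inv_resp + second_half + sum(taus[2:])
    return tau1, theta

# Example: time constants 8, 2 and 0.5, delay 1
tau1, theta = half_rule([8.0, 2.0, 0.5], theta0=1.0)
# tau1 = 8 + 1 = 9 ; theta = 1 + 1 + 0.5 = 2.5
```

The resulting (τ1, θ) pair feeds straight into the SIMC PI formulas from the regulatory-control slides.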

98 98 Example cascade control (PI) (Figure: block diagram with disturbance d = 6, inner loop K 2, G 2 around y 2, and outer loop K 1, G 1 for y 1 )

99 99 Example cascade control (PI) (Figure: responses with and without the cascade; disturbance d = 6)

100 100 Example cascade control Inner fast (secondary) loop: –P or PI-control –Local disturbance rejection –Much smaller effective delay (0.2 s) Outer slower primary loop: –Reduced effective delay (2 s instead of 6 s) Time scale separation –Inner loop can be modelled as gain = 1 with effective delay 2 × 0.2 s = 0.4 s Very effective for control of large-scale systems

101 101 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

102 102 Step 6. Supervisory control layer Purpose: Keep primary controlled outputs c=y 1 at optimal setpoints c s Degrees of freedom: Setpoints y 2s in regulatory control layer Main structural issue: Decentralized or multivariable?

103 103 Decentralized control (single-loop controllers) Use for: Noninteracting process and no change in active constraints +Tuning may be done on-line +No or minimal model requirements +Easy to fix and change -Need to determine pairing -Performance loss compared to multivariable control -Complicated logic required for reconfiguration when active constraints move

104 104 Multivariable control (with explicit constraint handling = MPC) Use for: Interacting process and changes in active constraints +Easy handling of feedforward control +Easy handling of changing constraints (no need for logic, smooth transition) -Requires multivariable dynamic model -Tuning may be difficult -Less transparent -“Everything goes down at the same time”

105 105 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Case studies

106 106 Step 7. Optimization layer (RTO) Purpose: Identify active constraints and compute optimal setpoints (to be implemented by the supervisory control layer) Main structural issue: Do we need RTO? (or is the process self-optimizing)

107 107 Outline About Trondheim and myself Control structure design (plantwide control) A procedure for control structure design I Top Down Step 1: Degrees of freedom Step 2: Operational objectives (optimal operation) Step 3: What to control? (self-optimizing control) Step 4: Where set production rate? II Bottom Up Step 5: Regulatory control: What more to control? Step 6: Supervisory control Step 7: Real-time optimization Conclusion / References Case studies

108 108 Conclusion Procedure plantwide control: I. Top-down analysis to identify degrees of freedom and primary controlled variables (look for self-optimizing variables) II. Bottom-up analysis to determine secondary controlled variables and structure of control system (pairing).

109 109 References Halvorsen, I.J., Skogestad, S., Morud, J.C. and Alstad, V. (2003), “Optimal selection of controlled variables”, Ind. Eng. Chem. Res., 42, 3273-3284. Larsson, T. and S. Skogestad (2000), “Plantwide control: A review and a new design procedure”, Modeling, Identification and Control, 21, 209-240. Larsson, T., K. Hestetun, E. Hovland and S. Skogestad (2001), “Self-optimizing control of a large-scale plant: The Tennessee Eastman process”, Ind. Eng. Chem. Res., 40, 4889-4901. Larsson, T., M.S. Govatsmark, S. Skogestad and C.C. Yu (2003), “Control of reactor, separator and recycle process”, Ind. Eng. Chem. Res., 42, 1225-1234. Skogestad, S. and Postlethwaite, I. (1996), Multivariable Feedback Control, Wiley. Skogestad, S. (2000), “Plantwide control: The search for the self-optimizing control structure”, J. Proc. Control, 10, 487-507. Skogestad, S. (2003), “Simple analytic rules for model reduction and PID controller tuning”, J. Proc. Control, 13, 291-309. Skogestad, S. (2004), “Control structure design for complete chemical plants”, Computers and Chemical Engineering, 28, 219-234. (Special issue from ESCAPE’12 Symposium, The Hague, May 2002.) See the home page of S. Skogestad: http://www.chembio.ntnu.no/users/skoge/

