1
PLANTWIDE CONTROL How to design the control system for a complete plant in a systematic manner
Sigurd Skogestad Department of Chemical Engineering Norwegian University of Science and Technology (NTNU), Trondheim, Norway Singapore / Petronas / Petrobras, March 2010, India 2010, Brazil July 2011
2
Main simplification: Hierarchical decomposition
Dealing with complexity. Main simplification: hierarchical decomposition. The controlled variables (CVs) interconnect the layers:
- RTO: Min J (economics); MV = y1s (setpoints cs = y1s)
- MPC (supervisory control): Follow path (+ look after other variables); CV = y1 (+ u); MV = y2s
- PID (regulatory control): Stabilize + avoid drift; CV = y2; MV = u (valves)
3
Summary and references
The following paper summarizes the procedure: S. Skogestad, ``Control structure design for complete chemical plants'', Computers and Chemical Engineering, 28 (1-2), (2004). There are many approaches to plantwide control as discussed in the following review paper: T. Larsson and S. Skogestad, ``Plantwide control: A review and a new design procedure'' Modeling, Identification and Control, 21, (2000).
4
S. Skogestad, ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, (2000).
S. Skogestad, ``Self-optimizing control: the missing link between steady-state optimization and control'', Comp. Chem. Engng., 24, (2000).
I.J. Halvorsen, M. Serra and S. Skogestad, ``Evaluation of self-optimising control structures for an integrated Petlyuk distillation column'', Hung. J. of Ind. Chem., 28, (2000).
T. Larsson, K. Hestetun, E. Hovland and S. Skogestad, ``Self-optimizing control of a large-scale plant: The Tennessee Eastman process'', Ind. Eng. Chem. Res., 40 (22), (2001).
K.L. Wu, C.C. Yu, W.L. Luyben and S. Skogestad, ``Reactor/separator processes with recycles-2. Design for composition control'', Comp. Chem. Engng., 27 (3), (2003).
T. Larsson, M.S. Govatsmark, S. Skogestad and C.C. Yu, ``Control structure selection for reactor, separator and recycle processes'', Ind. Eng. Chem. Res., 42 (6), (2003).
A. Faanes and S. Skogestad, ``Buffer tank design for acceptable control performance'', Ind. Eng. Chem. Res., 42 (10), (2003).
I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), (2003).
A. Faanes and S. Skogestad, ``pH-neutralization: integrated process and control design'', Computers and Chemical Engineering, 28 (8), (2004).
S. Skogestad, ``Near-optimal operation by self-optimizing control: From process control to marathon running and business systems'', Computers and Chemical Engineering, 29 (1), (2004).
E.S. Hori, S. Skogestad and V. Alstad, ``Perfect steady-state indirect control'', Ind. Eng. Chem. Res., 44 (4), (2005).
M.S. Govatsmark and S. Skogestad, ``Selection of controlled variables and robust setpoints'', Ind. Eng. Chem. Res., 44 (7), (2005).
V. Alstad and S. Skogestad, ``Null space method for selecting optimal measurement combinations as controlled variables'', Ind. Eng. Chem. Res., 46 (3), (2007).
S. Skogestad, ``The dos and don'ts of distillation column control'', Chemical Engineering Research and Design (Trans IChemE, Part A), 85 (A1), (2007).
E.S. Hori and S. Skogestad, ``Selection of control structure and temperature location for two-product distillation columns'', Chemical Engineering Research and Design (Trans IChemE, Part A), 85 (A3), (2007).
A.C.B. Araujo, M. Govatsmark and S. Skogestad, ``Application of plantwide control to the HDA process. I Steady-state and self-optimizing control'', Control Engineering Practice, 15, (2007).
A.C.B. Araujo, E.S. Hori and S. Skogestad, ``Application of plantwide control to the HDA process. Part II Regulatory control'', Ind. Eng. Chem. Res., 46 (15), (2007).
V. Kariwala, S. Skogestad and J.F. Forbes, ``Reply to `Further Theoretical Results on Relative Gain Array for Norm-Bounded Uncertain Systems' '', Ind. Eng. Chem. Res., 46 (24), 8290 (2007).
V. Lersbamrungsuk, T. Srinophakun, S. Narasimhan and S. Skogestad, ``Control structure design for optimal operation of heat exchanger networks'', AIChE J., 54 (1), (2008). DOI /aic.11366
T. Lid and S. Skogestad, ``Scaled steady state models for effective on-line applications'', Computers and Chemical Engineering, 32, (2008).
T. Lid and S. Skogestad, ``Data reconciliation and optimal operation of a catalytic naphtha reformer'', Journal of Process Control, 18, (2008).
E.M.B. Aske, S. Strand and S. Skogestad, ``Coordinator MPC for maximizing plant throughput'', Computers and Chemical Engineering, 32, (2008).
A. Araujo and S. Skogestad, ``Control structure design for the ammonia synthesis process'', Computers and Chemical Engineering, 32 (12), (2008).
E.S. Hori and S. Skogestad, ``Selection of controlled variables: Maximum gain rule and combination of measurements'', Ind. Eng. Chem. Res., 47 (23), (2008).
V. Alstad, S. Skogestad and E.S. Hori, ``Optimal measurement combinations as controlled variables'', Journal of Process Control, 19, (2009).
E.M.B. Aske and S. Skogestad, ``Consistent inventory control'', Ind. Eng. Chem. Res., 48 (44), (2009).
5
Outline Control structure design (plantwide control)
A procedure for control structure design
I Top Down
Step 1: Define operational objective (cost) and constraints
Step 2: Identify degrees of freedom and optimize for disturbances
Step 3: What to control? (primary CVs) (self-optimizing control)
Step 4: Where to set the production rate? (Inventory control)
II Bottom Up
Step 5: Regulatory control: What more to control (secondary CVs)?
Step 6: Supervisory control
Step 7: Real-time optimization
Case studies
6
Main message
1. Control for economics (top-down, steady-state arguments). Primary controlled variables c = y1: Control active constraints. For the remaining unconstrained degrees of freedom: look for "self-optimizing" variables.
2. Control for stabilization (bottom-up; regulatory PID control). Secondary controlled variables y2 ("inner cascade loops"): Control variables which otherwise may "drift".
Both cases: Control "sensitive" variables (with a large gain)!
7
Idealized view of control (“Ph.D. control”)
8
Practice: Tennessee Eastman challenge problem (Downs, 1991) (“PID control”)
9
How do we design a control system for a complete chemical plant?
Where do we start? What should we control? and why? etc.
10
Alan Foss ("Critique of chemical process control theory", AIChE Journal, 1973):
The central issue to be resolved ... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it. Carl Nett (1989): Minimize control system complexity subject to the achievement of accuracy specifications in the face of uncertainty.
11
Control structure design
Not the tuning and behavior of each control loop, but rather the control philosophy of the overall plant, with emphasis on the structural decisions:
Selection of controlled variables ("outputs")
Selection of manipulated variables ("inputs")
Selection of (extra) measurements
Selection of control configuration (structure of the overall controller that interconnects the controlled, manipulated and measured variables)
Selection of controller type (LQG, H-infinity, PID, decoupler, MPC etc.)
That is: control structure design includes all the decisions we need to make to get from "PID control" to "Ph.D. control".
12
Each plant usually different – modeling expensive
Process control: "Plantwide control" = "Control structure design for a complete chemical plant"
Large systems. Each plant usually different – modeling expensive. Slow processes – no problem with computation time. Structural issues important: What to control? Extra measurements, pairing of loops.
Previous work on plantwide control:
Page Buckley (1964) – chapter on "Overall process control" (still industrial practice)
Greg Shinskey (1967) – process control systems
Alan Foss (1973) – control system structure
Bill Luyben et al. ( ) – case studies; "snowball effect"
George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes
Ruel Shinnar ( ) – "dominant variables"
Jim Downs (1991) – Tennessee Eastman challenge problem
Larsson and Skogestad (2000) – review of plantwide control
13
Control structure selection issues are identified as important also in other industries.
Professor Gary Balas (Minnesota) at ECC’03 about flight control at Boeing: The most important control issue has always been to select the right controlled variables --- no systematic tools used!
14
Main objectives of the control system
1. Stabilization
2. Implementation of acceptable (near-optimal) operation
ARE THESE OBJECTIVES CONFLICTING? Usually NOT:
- Different time scales: stabilization acts on the fast time scale
- Stabilization doesn't "use up" any degrees of freedom: the reference value (setpoint) remains available for the layer above
- But it does "use up" part of the time window (frequency range)
15
Main simplification: Hierarchical decomposition
Dealing with complexity. Main simplification: hierarchical decomposition. The controlled variables (CVs) interconnect the layers:
- RTO: Min J (economics); MV = y1s (setpoints cs = y1s)
- MPC (supervisory control): Follow path (+ look after other variables); CV = y1 (+ u); MV = y2s
- PID (regulatory control): Stabilize + avoid drift; CV = y2; MV = u (valves)
16
Example: Bicycle riding
Note: design starts from the bottom.
Regulatory control: First need to learn to stabilize the bicycle. CV = y2 = tilt of bike; MV = body position.
Supervisory control: Then need to follow the road. CV = y1 = distance from the right-hand side; MV = y2s. Usually a constant setpoint policy is OK, e.g. y1s = 0.5 m.
Optimization: Which road should you follow? Temporary (discrete) changes in y1s.
17
Summary: The three layers
Optimization layer (RTO; steady-state nonlinear model): Identifies active constraints and computes optimal setpoints for the primary controlled variables (y1).
Supervisory control (MPC; linear model with constraints): Follows the setpoints for y1 (usually constant) by adjusting the setpoints for the secondary variables (MV = y2s). Looks after other variables (e.g., avoids saturation of MVs used in the regulatory layer).
Regulatory control (PID): Stabilizes the plant and avoids drift, in addition to following the setpoints for y2. MV = valves (u).
Problem definition and overall control objectives (y1, y2) start from the top. Design starts from the bottom. A good example is bicycle riding:
Regulatory control: First you need to learn how to stabilize the bicycle (y2).
Supervisory control: Then you need to follow the road. Usually a constant setpoint policy is OK, for example, stay y1s = 0.5 m from the right-hand side of the road (in this case the "magic" self-optimizing variable is y1 = distance to the right-hand side of the road).
Optimization: Which road (route) should you follow?
18
Control structure design procedure
I Top Down
Step 1: Define operational objectives (optimal operation): cost function J (to be minimized) and operational constraints
Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances; identify regions of active constraints
Step 3: Select primary controlled variables c = y1 (CVs)
Step 4: Where to set the production rate? (Inventory control)
II Bottom Up
Step 5: Regulatory / stabilizing control (PID layer): What more to control (y2; local CVs)? Pairing of inputs and outputs
Step 6: Supervisory control (MPC layer)
Step 7: Real-time optimization (Do we need it?)
Understanding and using this procedure is the most important part of this course!!!!
19
Step 1. Define optimal operation (economics)
What are we going to use our degrees of freedom u (MVs) for?
Define a scalar cost function J(u, x, d):
u: degrees of freedom (usually steady-state)
d: disturbances
x: states (internal variables)
Typical cost function: J = cost feed + cost energy – value products
Optimize operation with respect to u for given d (usually at steady state):
min_u J(u, x, d)
subject to: model equations f(u, x, d) = 0 and operational constraints g(u, x, d) < 0
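To make the formulation concrete, here is a minimal numerical sketch of Step 1 in Python; the toy model, prices, disturbance value and bounds are all illustrative assumptions, not taken from the slides.

```python
# Minimal sketch of Step 1 (illustrative only): minimize
#   J(u, d) = cost feed + cost energy - value products
# over u, subject to an operational constraint g(u) <= 0.
import numpy as np
from scipy.optimize import minimize

d = 1.0  # assumed disturbance, e.g. feed quality

def J(u, d):
    feed, energy = u                       # assumed degrees of freedom
    product = 0.9 * feed * (1 - np.exp(-energy / (feed + 1e-6))) * d  # toy model
    return 1.0 * feed + 0.2 * energy - 2.0 * product  # made-up prices

def g(u):
    # scipy convention: inequality constraints are fun(u) >= 0,
    # so "energy <= 10" (assumed capacity limit) becomes 10 - energy >= 0
    return 10.0 - u[1]

res = minimize(J, x0=[1.0, 1.0], args=(d,), bounds=[(0, 5), (0, None)],
               constraints=[{"type": "ineq", "fun": g}])
print("u_opt(d) =", res.x, "  J_opt(d) =", res.fun)
```

Repeating such an optimization over a grid of disturbances d is exactly the (time-consuming) exercise referred to in Step 2: for each d, record which constraints are active.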
20
Optimal operation distillation column
Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V.
Cost to be minimized (economics): J = –P, where P = pD·D + pB·B – pF·F – pV·V
(value products – cost feed – cost energy (heating + cooling))
Constraints:
Purity D: for example xD,impurity ≤ max
Purity B: for example xB,impurity ≤ max
Flow constraints: min ≤ D, B, L etc. ≤ max
Column capacity (flooding): V ≤ Vmax, etc.
Pressure: 1) p given, 2) p free: pmin ≤ p ≤ pmax
Feed: 1) F given, 2) F free: F ≤ Fmax
Optimal operation: Minimize J with respect to the steady-state DOFs.
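As a small illustration of the cost function above (all prices and flows below are made-up numbers, not data from the slides):

```python
# J = -P with P = pD*D + pB*B - pF*F - pV*V  (value products - cost feed - cost energy)
pD, pB, pF, pV = 20.0, 2.0, 10.0, 0.1   # assumed prices
F, D, B = 1.0, 0.5, 0.5                 # feed and product flows (D + B = F at steady state)
V = 3.0                                 # boilup, a proxy for energy use

P = pD * D + pB * B - pF * F - pV * V
print("P =", P, "  J =", -P)            # J is minimized, i.e. profit P is maximized
```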
21
Optimal operation: minimize J = cost feed + cost energy – value products. Two main cases (modes) depending on market conditions:
Mode 1. Given feed: The amount of products is then usually indirectly given and J = cost energy. Optimal operation is then usually unconstrained: "maximize efficiency (energy)". Control: Operate at the optimal trade-off (not obvious what to control to achieve this).
Mode 2. Feed free: Products are usually much more valuable than the feed, and energy costs are small. Optimal operation is then usually constrained: "maximize production". Control: Operate at the bottleneck ("obvious what to control").
22
Comments optimal operation
Do not forget to include the feed rate as a degree of freedom!!
For an LNG plant it may be optimal to run at maximum compressor power or maximum compressor speed, and adjust the feed rate of LNG.
For a paper machine it may be optimal to run at maximum drying and adjust the feed rate of paper (speed of the paper machine) to meet the spec!
Control at the bottleneck: see later, "Where to set the production rate".
23
Step 2: (a) Identify degrees of freedom and (b) optimize for expected disturbances
Optimization: Identify regions of active constraints. Time consuming!
(Figure: map of active-constraint regions.) In a region with 3 unconstrained degrees of freedom: find 3 CVs. In a region with 3 active constraints: control the 3 active constraints.
24
Active constraint regions for two distillation columns in series
25
Step 2a: Degrees of freedom (DOFs) for operation
NOT as simple as one may think! To find all operational (dynamic) degrees of freedom: Count valves! (Nvalves) “Valves” also includes adjustable compressor power, etc. Anything we can manipulate! BUT: not all these have a (steady-state) effect on the economics
26
(Figure: the control hierarchy. The optimizer (RTO) sends setpoints CV1s to the supervisory controller (MPC); the MPC sends setpoints CV2s to the regulatory controller (PID); the PID layer moves the physical inputs (valves) of the process. CV1 and CV2 are formed from the measurements y through the combination matrices H1 and H2; d = disturbances, ny = measurement noise, TPM = throughput manipulator; some valves may be optimally constant or unused.)
Degrees of freedom for optimization (usually steady-state DOFs): MVopt = CV1s
Degrees of freedom for supervisory control: MV1 = CV2s + unused valves
Physical degrees of freedom for stabilizing control: MV2 = valves (dynamic process inputs)
27
Steady-state degrees of freedom (DOFs)
IMPORTANT! THIS DETERMINES THE NUMBER OF VARIABLES TO CONTROL!
No. of primary CVs = no. of steady-state DOFs
Methods to obtain the number of steady-state degrees of freedom (Nss):
1. Equation-counting: Nss = no. of variables – no. of equations/specifications. Very difficult in practice.
2. Valve-counting (easier!): Nss = Nvalves – N0ss – Nspecs, where N0ss = variables with no steady-state effect.
3. Potential number for some units (useful for checking!)
Correct answer: We will eventually find it when we perform the optimization.
CV = controlled variable (c)
28
Steady-state degrees of freedom (Nss): 2. Valve-counting
Nvalves = no. of dynamic (control) DOFs (valves)
Nss = Nvalves – N0ss – Nspecs : no. of steady-state DOFs
N0ss = N0y + N0,valves : no. of variables with no steady-state effect
N0,valves : no. of purely dynamic control DOFs
N0y : no. of controlled variables (liquid levels) with no steady-state effect
Nspecs : no. of equality specifications (e.g., given pressure)
29
Typical Distillation column
(Figure: typical distillation column with 6 valves, numbered 1–6.)
Nvalves = 6, N0y = 2, NDOF,SS = 6 – 2 = 4 (including feed and pressure as DOFs)
N0y : no. of controlled variables (liquid levels) with no steady-state effect
With given feed and pressure: NEED TO IDENTIFY 2 more CVs – typically top and bottom composition.
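The valve-counting rule from the previous slide is simple bookkeeping; a minimal sketch applied to this column (numbers as on the slide):

```python
def n_steady_state_dofs(n_valves, n0_levels=0, n0_dynamic_valves=0, n_specs=0):
    """Nss = Nvalves - N0ss - Nspecs, with N0ss = N0y + N0,valves."""
    return n_valves - (n0_levels + n0_dynamic_valves) - n_specs

# Typical column: 6 valves, 2 liquid levels with no steady-state effect
print(n_steady_state_dofs(6, n0_levels=2))             # 4 (feed and pressure counted as DOFs)
print(n_steady_state_dofs(6, n0_levels=2, n_specs=2))  # 2 (feed and pressure given)
```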
30
Heat-integrated distillation process
31
Heat-integrated distillation process
32
Heat exchanger with bypasses
33
Heat exchanger with bypasses
34
Steady-state degrees of freedom (Nss): 3. Potential number for some process units
each external feedstream: 1 (feed rate)
splitter: n–1 (split fractions), where n is the number of exit streams
mixer: 0
compressor, turbine, pump: 1 (work/speed)
adiabatic flash tank: 0*
liquid phase reactor: 1 (holdup reactant)
gas phase reactor: 0*
heat exchanger: 1 (bypass or flow)
column (e.g. distillation) excluding heat exchangers: 0* + no. of sidestreams
pressure*: add 1 DOF at each extra place you set the pressure (using an extra valve, compressor or pump), e.g. in an adiabatic flash tank, gas phase reactor or absorption column
*Pressure is normally assumed to be given by the surrounding process and is then not a degree of freedom.
The real number may be less, for example, if there is no bypass valve.
Ref: Araujo, Govatsmark and Skogestad (2007). Extension to closed cycles: Jensen and Skogestad (2009).
35
Heat exchanger with bypasses
36
Distillation column (with feed and pressure as DOFs)
(Figure: distillation column with feed, two heat exchangers and a split.)
"Potential number": Nss = 0 (distillation) + 1 (feed) + 2·1 (heat exchangers) + 1 (split) = 4
With given feed and pressure: N'ss = 4 – 2 = 2
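The same count can be written with the "potential number" table from the earlier slide; a minimal sketch reproducing the sum above:

```python
# Potential steady-state DOFs per unit (subset of the table; pressures set by the surroundings)
POTENTIAL_DOFS = {
    "external feed": 1,
    "splitter (2 exits)": 1,          # n - 1 split fractions
    "heat exchanger": 1,              # bypass or flow
    "distillation column (excl. heat exchangers)": 0,
}

column = ["distillation column (excl. heat exchangers)", "external feed",
          "heat exchanger", "heat exchanger", "splitter (2 exits)"]
print(sum(POTENTIAL_DOFS[u] for u in column))   # 0 + 1 + 1 + 1 + 1 = 4
```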
37
Heat-integrated distillation process
38
HDA process
(Flowsheet units: mixer, FEHE, furnace, PFR, quench, separator, compressor, cooler, stabilizer, benzene column, toluene column. Stream labels: toluene and H2 + CH4 feeds, purge (H2 + CH4), CH4, diphenyl.)
39
HDA process: steady-state degrees of freedom
(Flowsheet with the steady-state DOFs numbered: feeds 1–2; heat exchangers 3, 4, 6; splitters 5, 7; compressor 8; distillation columns 2 each, 9–14.)
Conclusion: 14 steady-state DOFs (assuming given column pressures).
40
Otherwise: Need to add equipment
Check that there are enough manipulated variables (DOFs) – both dynamically and at steady state (Step 2). Otherwise: need to add equipment, e.g. an extra heat exchanger, a bypass, or a surge tank.
41
QUIZ Degrees of freedom (Dynamic, steady-state)?
(Flowsheet: gas phase process (e.g. ammonia, methanol) with feed (51% A, 49% B), heating, cooling, flash, liquid product (C) and purge (mostly A, some B, trace C).)
Degrees of freedom (dynamic, steady-state)? Expected active constraints? Proposed control structure?
42
Step 3: Implementation of optimal operation
Optimal operation for given d*:
min_u J(u, x, d)
subject to: model equations f(u, x, d) = 0 and operational constraints g(u, x, d) < 0
→ uopt(d*)
Problem: Usually cannot keep uopt constant because the disturbances d change.
How should we adjust the degrees of freedom (u)? What should we control?
43
Solution I (“obvious”): Optimal feedforward
Problem: UNREALISTIC! Lack of measurements of d Sensitive to model error
44
Solution II (”obvious”): Optimizing control
Estimate d from the measurements y and recompute uopt(d).
Problem: COMPLICATED! Requires a detailed model and a description of the uncertainty.
45
Solution III (in practice): FEEDBACK with hierarchical decomposition
CVs link the optimization and control layers. When a disturbance d occurs, the degrees of freedom (u) are updated indirectly to keep the CVs at their setpoints.
46
“Self-Optimizing Control” = Solution III with constant setpoints
Self-optimizing control: Constant setpoints give acceptable loss.
Issue: What should we control?
47
self-optimizing control
Formal definition
(Block diagram: controller and process; d = disturbance, u(d) = input, c = f(y) = controlled variable, cs = setpoint, e = error, n = measurement noise, cm = measured CV.)
Acceptable loss ⇒ self-optimizing control.
Self-optimizing control is said to occur when we can achieve an acceptable loss (in comparison with truly optimal operation) with constant setpoint values for the controlled variables, without the need to reoptimize when disturbances occur.
Reference: S. Skogestad, "Plantwide control: The search for the self-optimizing control structure", Journal of Process Control, 10, (2000).
48
How does self-optimizing control (solution III) work?
When disturbances d occur, the controlled variable c deviates from its setpoint cs.
The feedback controller changes the degree of freedom u to uFB(d) to keep c at cs.
Near-optimal operation / acceptable loss (self-optimizing control) is achieved if uFB(d) ≈ uopt(d), or more generally, J(uFB(d)) ≈ J(uopt(d)).
Of course, the variation of uFB(d) is different for different CVs c. We need to look for variables for which J(uFB(d)) ≈ J(uopt(d)), i.e. for which Loss = J(uFB(d)) – J(uopt(d)) is small.
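A toy numerical illustration of this loss comparison (the quadratic cost, the candidate CV and all numbers are assumptions for illustration, not the author's example):

```python
# Toy cost J(u, d) = (u - d)^2 + 0.1*u^2, nominal disturbance d* = 0.
import numpy as np

def J(u, d):
    return (u - d)**2 + 0.1 * u**2

def u_opt(d):
    return d / 1.1                      # from dJ/du = 2(u - d) + 0.2u = 0

disturbances = np.linspace(-1.0, 1.0, 9)

# Policy 1: keep u constant at its nominal optimum u_opt(0) = 0 (open loop)
loss_const_u = [J(0.0, d) - J(u_opt(d), d) for d in disturbances]

# Policy 2: keep the candidate CV c = u - d at its nominal setpoint 0,
#           so feedback gives u_FB(d) = d
loss_const_c = [J(d, d) - J(u_opt(d), d) for d in disturbances]

print("worst-case loss, constant u:", max(loss_const_u))   # ~0.909
print("worst-case loss, constant c:", max(loss_const_c))   # ~0.009 -> c is (near) self-optimizing
```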
49
Remarks “self-optimizing control”
1. Old idea (Morari et al., 1980): "We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions."
2. "Self-optimizing control" = acceptable steady-state behavior (loss) with constant CVs. This is analogous to "self-regulation" = acceptable dynamic behavior with constant MVs.
3. The ideal self-optimizing variable c is the gradient, c = ∂J/∂u = Ju. Keep the gradient at zero for all disturbances (c = Ju = 0). Problem: no measurement of the gradient.
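The reason the gradient is the ideal CV can be stated in one line (the standard unconstrained-optimality argument, added here as a note):

```latex
u_{\mathrm{opt}}(d)\ \text{satisfies}\qquad
J_u\bigl(u_{\mathrm{opt}}(d),\,d\bigr)
  \;=\;\left.\frac{\partial J}{\partial u}\right|_{\,u_{\mathrm{opt}}(d),\,d}
  \;=\;0
  \qquad\text{for every disturbance } d .
```

So controlling c = Ju at the constant setpoint cs = 0 reproduces u = uopt(d) for all disturbances and gives zero loss; the practical difficulty, as noted above, is that Ju is not measured.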
50
Step 3. What should we control (c)? Simple examples
51
Optimal operation of runner
Cost to be minimized: J = T (time). One degree of freedom (u = power). What should we control?
52
Self-optimizing control: Sprinter (100m)
1. Optimal operation of a sprinter, J = T. Active constraint control: maximum speed ("no thinking required").
53
Self-optimizing control: Marathon (40 km)
2. Optimal operation of a marathon runner, J = T.
54
Solution 1 Marathon: Optimizing control
Even getting a reasonable model requires > 10 PhDs … and the model has to be fitted to each individual…. Clearly impractical!
55
Solution 2 Marathon – Feedback (Self-optimizing control)
What should we control?
56
Self-optimizing control: Marathon (40 km)
Optimal operation of a marathon runner, J = T. Any self-optimizing variable c (to control at constant setpoint)?
c1 = distance to leader of race
c2 = speed
c3 = heart rate
c4 = level of lactate in muscles
57
Conclusion Marathon runner
Select one measurement: c = heart rate.
Simple and robust implementation. Disturbances are indirectly handled by keeping a constant heart rate. May have infrequent adjustment of the setpoint (heart rate).
58
Example: Cake Baking
Objective: Nice tasting cake with good texture
Disturbances: d1 = oven specifications, d2 = oven door opening, d3 = ambient temperature, d4 = initial temperature
Measurements: y1 = oven temperature, y2 = cake temperature, y3 = cake color
Degrees of freedom: u1 = heat input, u2 = final time
59
Further examples self-optimizing control
Marathon runner, central bank, cake baking, business systems (KPIs), investment portfolio, biology, chemical process plants (e.g. optimal blending of gasoline).
Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control).
60
More on further examples
Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%).
Cake baking: J = nice taste, u = heat input, c = temperature (200 °C).
Business: J = profit, c = "key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending. Optimal values obtained by "benchmarking".
Investment (portfolio management): J = profit, c = fraction of investment in shares (50%).
Biological systems: "Self-optimizing" controlled variables c have been found by natural selection. Need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.
61
Summary Step 3. What should we control (c)?
c = H y
y – available measurements (including the u's)
H – selection or combination matrix
What should we control? Equivalently: what should H be?
Control active constraints! For the unconstrained degrees of freedom: control self-optimizing variables!
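A minimal sketch of what the matrix H does (the measurement values and weights below are made-up; in the null-space method of Alstad and Skogestad (2007), cited in the reference list, H is chosen so that H·F = 0, where F is the optimal sensitivity of the measurements to the disturbances):

```python
import numpy as np

y = np.array([350.0, 0.98, 120.0, 1.5])        # available measurements (assumed values)

H_select  = np.array([[0.0, 1.0, 0.0, 0.0]])   # pure selection: c = y2
H_combine = np.array([[0.0, 0.8, 0.0, -0.2]])  # linear combination of y2 and y4 (assumed weights)

print("selected CV:   ", (H_select  @ y)[0])
print("combination CV:", (H_combine @ y)[0])
```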
62
1. CONTROL ACTIVE CONSTRAINTS!
63
Example: Optimal operation distillation
Cost to be minimized (economics): J = –P, where P = pD·D + pB·B – pF·F – pV·V
(value products – cost feed – cost energy (heating + cooling))
Constraints:
Purity D: for example xD,impurity ≤ max
Purity B: for example xB,impurity ≤ max
Flow constraints: 0 ≤ D, B, L etc. ≤ max
Column capacity (flooding): V ≤ Vmax, etc.
64
Expected active constraints distillation
(Figure: valuable product = methanol + max. 5% water; cheap product (byproduct) = water + max. 2% impurity.)
1. Valuable product: Purity spec always active (if we get paid for the impurity). Reason: the amount of valuable product (D or B) should always be maximized. Avoid product "give-away" ("sell water as methanol"). Also saves energy. Control implication for the valuable product: control purity at spec.
2. "Cheap" product: May want to over-purify! Trade-off: Yes – increased recovery of valuable product (less loss); No – costs energy. May give an unconstrained optimum.
65
Active constraint regions distillation
No pay for impurity (pD = x·pD0): can have no active constraints.
(Figure: active-constraint regions in the (F, pV) plane, with the number of remaining unconstrained DOFs in parentheses: no constraints – overpurify D and B (2); xDmax (1); xDmax, xBmax (0); xDmax, xBmax, Vmax (–1); Vmax (1); infeasible region; pV = 0 marked on the axis. The region below the indicated boundary does not exist if pD is constant, i.e. the expensive-product purity constraint xDmax is then always active.)
66
1. CONTROL ACTIVE CONSTRAINTS!
Active input constraints: just set at MAX or MIN.
Active output constraints: need back-off.
(Figure: cost J vs. c near the constraint c ≥ cconstraint, showing the back-off from cconstraint and the associated loss J – Jopt.)
If the constraint can be violated dynamically (only the average matters): required back-off = "bias" (steady-state measurement error for c).
If the constraint cannot be violated dynamically ("hard constraint"): required back-off = "bias" + maximum dynamic control error.
Want tight control of hard output constraints to reduce the back-off. "Squeeze and shift."
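The back-off bookkeeping as a minimal sketch (the constraint value and error magnitudes are assumed numbers):

```python
c_constraint = 100.0        # e.g. a maximum temperature (assumed)
bias = 1.0                  # steady-state measurement error for c (assumed)
max_dyn_error = 3.0         # maximum dynamic control error (assumed)

backoff_average = bias                   # constraint may be violated dynamically (only average matters)
backoff_hard = bias + max_dyn_error      # hard constraint: never violate

print("setpoint, average constraint:", c_constraint - backoff_average)  # 99.0
print("setpoint, hard constraint:   ", c_constraint - backoff_hard)     # 96.0
# Tighter control (smaller max_dyn_error) shrinks the hard-constraint back-off: "squeeze and shift".
```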
67
Example. Optimal operation = max. throughput
Example: Optimal operation = maximum throughput. Want tight bottleneck control to reduce the back-off!
(Figure: flow vs. time at the bottleneck; back-off = lost production.)
Rule for control of hard output constraints: "Squeeze and shift"! Reduce the variance ("squeeze") and "shift" the setpoint cs closer to the constraint to reduce the back-off.
68
Hard Constraints: «SQUEEZE AND SHIFT»
© Richalet
69
SUMMARY ACTIVE CONSTRAINTS
cconstraint = value of the active constraint.
Implementation of active constraints is usually "obvious", but may need "back-off" (safety limit) for hard output constraints:
cs = cconstraint – back-off
Want tight control of hard output constraints to reduce the back-off. "Squeeze and shift."
70
QUIZ (again) Degrees of freedom (Dynamic, steady-state)?
(Flowsheet as before: gas phase process (e.g. ammonia, methanol) with feed (51% A, 49% B), heating, cooling, flash, liquid product (C) and purge (mostly A, some B, trace C).)
Degrees of freedom (dynamic, steady-state)? Expected active constraints? (1. Feed given; 2. Feed free.) Proposed control structure?
71
2. UNCONSTRAINED VARIABLES: WHAT MORE SHOULD WE CONTROL? WHAT ARE GOOD "SELF-OPTIMIZING" VARIABLES?
Intuition: "dominant variables" (Shinnar).
Is there any systematic procedure?
A. Sensitive variables: "maximum gain rule" (gain = minimum singular value)
B. "Brute force" loss evaluation
C. Optimal linear combination of measurements, c = Hy
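A minimal sketch of option A, the maximum (scaled) gain rule; the candidate CVs, gains and spans are made-up numbers, and the scaling shown (dividing the steady-state gain by the allowed span of each CV) is only one common variant:

```python
import numpy as np

# Candidate CVs with assumed steady-state gains G (from u to c) and allowed spans of c
candidates = {
    "tray temperature": {"G": np.array([[8.0]]), "span": 2.0},
    "reflux ratio L/D": {"G": np.array([[0.3]]), "span": 0.5},
}

for name, data in candidates.items():
    G_scaled = data["G"] / data["span"]
    sigma_min = np.linalg.svd(G_scaled, compute_uv=False).min()   # minimum singular value
    print(f"{name}: scaled gain = {sigma_min:.2f}")

# Prefer the candidate with the largest scaled gain (here: the tray temperature).
```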