1
Outline Skogestad procedure for control structure design I Top Down
Step S1: Define operational objective (cost) and constraints Step S2: Identify degrees of freedom and optimize operation for disturbances Step S3: Implementation of optimal operation What to control ? (primary CV’s). Active constraints Self-optimizing variables (for unconstrained degrees of freedom) Step S4: Where set the production rate? (Inventory control) II Bottom Up Step S5: Regulatory control: What more to control (secondary CV’s) ? Step S6: Supervisory control Step S7: Real-time optimization
2
Implementation of optimal operation
Optimal operation: min_u J(u,d) subject to constraints. At the optimum, some constraints are usually active (c_active) while the remaining degrees of freedom (u) are unconstrained.
Implementation: For small deviations from the optimal point, the loss is linear in the active constraints, but only quadratic in the unconstrained degrees of freedom. This is why we have rule 1: control the active constraints.
Proof: Rewrite the loss in terms of the independent variables, which are selected as the active constraints (c_active) + selected unconstrained* variables (c). For small deviations from the optimal point**:
Loss = J(c,d) – J(c_opt(d),d) = λ [c_active – c_active,opt(d)] + ½ (c – c_opt(d))' Jcc (c – c_opt(d))
c_active,opt(d) = c_min or c_max, and is usually independent of d. Here λ is the Lagrange multiplier. Because of the linearity, for small deviations from the optimal point the loss is largest for the active constraints.
Note: [c_active – c_opt] = |c_active – c_opt| when the constraint is satisfied; when the constraint is not satisfied, operation is infeasible (loss effectively infinite).
Exception to rule 1: If the Lagrange multiplier is small for some active constraint, it may be better to select some other unconstrained CV, but one then needs to back off from the optimal point to guarantee feasibility.
*It does not matter what the unconstrained c's are as long as they form an independent set that spans the whole space.
**There is also a second-order term in [c_active – c_opt], but it is small if the active constraints are tightly controlled. There could also be coupling terms from c_active to c; we usually handle these by treating c_active as a disturbance which affects c_opt(d). This is reasonable because we usually first select the constrained variables (c_active) as CVs, and only afterwards consider the choice of unconstrained CVs (c).
3
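The linear-versus-quadratic loss behavior above can be checked on a small made-up problem (not from the deck): minimize J(x, y) = x² + y subject to y ≥ 1, so y = 1 is the active constraint (Lagrange multiplier λ = 1) and x is the unconstrained degree of freedom. A Python sketch (the deck's own scripts are MATLAB):

```python
# Made-up toy problem: minimize J(x, y) = x**2 + y  subject to  y >= 1.
# Optimum: x = 0 (unconstrained), y = 1 (active constraint, lambda = 1).

def J(x, y):
    return x**2 + y

J_opt = J(0.0, 1.0)
delta = 0.01

# Deviation in the active constraint: loss is LINEAR in delta.
loss_active = J(0.0, 1.0 + delta) - J_opt        # = lambda * delta = 0.01

# Deviation in the unconstrained direction: loss is QUADRATIC in delta.
loss_unconstrained = J(delta, 1.0) - J_opt       # = delta**2 = 1e-4
```

Backing off the active constraint by 0.01 costs 0.01, while the same deviation in the unconstrained direction costs only 0.0001 — the 100:1 ratio behind rule 1.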
Step S3: Implementation of optimal operation
Optimal operation for given d*: min_u J(u,x,d) subject to: model equations f(u,x,d) = 0 and operational constraints g(u,x,d) ≤ 0 → u_opt(d*). Problem: Usually we cannot keep u_opt constant because disturbances d change. How should we adjust the degrees of freedom (u)? What should we control?
4
Solution I (“obvious”): Optimal feedforward
Problem: UNREALISTIC! Lack of measurements of d Sensitive to model error
5
Solution II (”obvious”): Optimizing control
Estimate d from measurements y and recompute uopt(d). Problem: COMPLICATED! Requires a detailed model and a description of the uncertainty.
6
Solution III (in practice): FEEDBACK with hierarchical decomposition
What should we select as CVs? When a disturbance d occurs, the degrees of freedom (u) are updated indirectly to keep the CVs at their setpoints.
7
“Self-Optimizing Control” = Solution III with constant setpoints
Self-optimizing control: constant setpoints give acceptable loss. Issue: what should we control?
8
Definition of self-optimizing control
Acceptable loss ⇒ self-optimizing control. “Self-optimizing control is when we achieve acceptable loss (in comparison with truly optimal operation) with constant setpoint values for the controlled variables (without the need to reoptimize when disturbances occur).” Reference: S. Skogestad, “Plantwide control: The search for the self-optimizing control structure”, Journal of Process Control, 10 (2000).
9
Remarks “self-optimizing control”
1. Old idea (Morari et al., 1980): “We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions.” 2. “Self-optimizing control” = acceptable steady-state behavior with constant CVs. “Self-regulation” = acceptable dynamic behavior with constant MVs. 3. The choice of a good c (CV) is always important, even with an RTO layer included. 4. For unconstrained DOFs: the ideal self-optimizing variable is the gradient, Ju = ∂J/∂u. Keep the gradient at zero for all disturbances (c = Ju = 0). Problem: no measurement of the gradient.
10
Step S3…. What should we control (c)? Simple examples
(What is c? What is H?) c = Hy. H is a non-square matrix: usually a selection matrix of 0’s and some 1’s (measurement selection), but it can also be a full matrix (measurement combinations).
11
Optimal operation of runner
Optimal operation - Runner. Cost to be minimized: J = T (time). One degree of freedom (u = power). What should we control?
12
Self-optimizing control: Sprinter (100m)
1. Optimal operation of sprinter (100 m), J = T. Active constraint control: maximum speed (”no thinking required”).
13
Self-optimizing control: Marathon (40 km)
2. Optimal operation of marathon runner, J = T.
14
Solution 1 Marathon: Optimizing control
Even getting a reasonable model requires > 10 PhDs … and the model has to be fitted to each individual…. Clearly impractical!
15
Solution 2 Marathon – Feedback (Self-optimizing control)
What should we control?
16
Self-optimizing control: Marathon (40 km)
Optimal operation of marathon runner, J = T. Any self-optimizing variable c (to control at constant setpoint)? c1 = distance to leader of race, c2 = speed, c3 = heart rate, c4 = level of lactate in muscles.
17
Conclusion Marathon runner
Select one measurement: c = heart rate. Simple and robust implementation. Disturbances are indirectly handled by keeping a constant heart rate. May have infrequent adjustment of the setpoint (heart rate).
18
Further examples self-optimizing control
Marathon runner, central bank, cake baking, business systems (KPIs), investment portfolio, biology, chemical process plants (optimal blending of gasoline). Define optimal operation (J) and look for a ”magic” variable (c) which, when kept constant, gives acceptable loss (self-optimizing control).
19
More on further examples
Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%). Cake baking: J = nice taste, u = heat input, c = temperature (200 °C). Business: J = profit, c = key performance indicator (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending; optimal values obtained by ”benchmarking”. Investment (portfolio management): J = profit, c = fraction of investment in shares (50%). Biological systems: ”self-optimizing” controlled variables c have been found by natural selection. Need to do ”reverse engineering”: find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.
20
Step S3. Implementation of optimal operation. Summary so far.
Main issue: What should we control (c)? Equivalently: What should H be? c = H y c – controlled variables (CVs) y – available measurements (including u’s) H – selection or combination matrix Control active constraints! * Unconstrained variables: Control self-optimizing variables! * Of course, the active constraints may be viewed as the obvious self-optimizing variables
21
Sigurd’s rules for CV selection
Always control active constraints! (almost always) A purity constraint on the expensive product is always active (no overpurification): (a) avoid product giveaway (e.g., selling water as expensive product); (b) save energy (it costs energy to overpurify). Unconstrained optimum: NEVER try to control a variable that reaches a max or min at the optimum. In particular, never try to control the cost J directly. Assume we want to minimize J (e.g., J = V = energy) and we make the poor choice of selecting CV = V = J. Then setting J < Jmin gives infeasible operation (cannot meet constraints), and setting J > Jmin forces us to be nonoptimal (which may require strange operation; see Exercise 3 on evaporators).
22
Outline Skogestad procedure for control structure design I Top Down
Step S1: Define operational objective (cost) and constraints Step S2: Identify degrees of freedom and optimize operation for disturbances Step S3: Implementation of optimal operation What to control ? (primary CV’s). Active constraints Self-optimizing variables (for unconstrained degrees of freedom) Step S4: Where set the production rate? (Inventory control) II Bottom Up Step S5: Regulatory control: What more to control (secondary CV’s) ? Step S6: Supervisory control Step S7: Real-time optimization
23
1. CONTROL ACTIVE CONSTRAINTS!
Step S3, in more detail… Active input constraints: just set at MAX or MIN. Example: max. heat input (boilup) in distillation. Active output constraints: need back-off. The back-off is the “safety margin” from the active constraint and is defined as the difference between the constraint value and the chosen setpoint: Backoff = |Constraint – Setpoint|.
24
Backoff = |Constraint – Setpoint|
Active output constraints need back-off. (Figure: cost J vs. c ≥ cconstraint, showing the loss relative to Jopt.) If the constraint can be violated dynamically (only the average matters): required back-off = “bias” (steady-state measurement error for c). If the constraint cannot be violated dynamically (“hard constraint”): required back-off = “bias” + maximum dynamic control error. Want tight control of hard output constraints to reduce the back-off: the “squeeze and shift” rule. The back-off is the “safety margin” from the active constraint and is defined as the difference between the constraint value and the chosen setpoint: Backoff = |Constraint – Setpoint|.
25
Hard Constraints: «SQUEEZE AND SHIFT»
© Richalet
26
Example. Optimal operation = max. throughput
Example: optimal operation = max. throughput. Want tight bottleneck control to reduce the back-off! (Figure: flow vs. time; the back-off is lost production.) Rule for control of hard output constraints: “squeeze and shift”! Reduce the variance (“squeeze”) and shift the setpoint cs toward the constraint to reduce the back-off.
27
Example back-off Speed limit = 80 km/h (hard constraint)
Measurement error (bias): 8 km/h Control error (variation due to poor control): 5 km/h Total implementation error = 13 km/h (= Backoff) Setpoint [km/h] = 80 – 13 = 67 Can reduce backoff with better control (“squeeze and shift”) Speed limit = 80 km/h (average over 10 km distance) (soft constraint) Total implementation error = 8 km/h (= Backoff) Setpoint [km/h] = 80 – 8 = 72 Do not need to include the control error because it averages out
28
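The back-off arithmetic above is simple enough to write out; a Python sketch using the slide's numbers:

```python
# Speed-limit example from the slide: constraint = 80 km/h.
speed_limit = 80.0
bias = 8.0            # steady-state measurement error (km/h)
control_error = 5.0   # dynamic variation due to poor control (km/h)

# Hard constraint: back off for bias AND dynamic control error.
backoff_hard = bias + control_error          # 13 km/h
setpoint_hard = speed_limit - backoff_hard   # 67 km/h

# Soft (average) constraint: the control error averages out.
backoff_soft = bias                          # 8 km/h
setpoint_soft = speed_limit - backoff_soft   # 72 km/h
```

Better control (“squeeze and shift”) shrinks `control_error`, which moves the hard-constraint setpoint closer to 80 km/h.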
Example back-off. xB = purity product > 95% (min.)
CV = active constraint. Example back-off: xB = product purity ≥ 95% (min.).
D1 directly to customer (hard constraint): measurement error (bias) 1%; control error (variation due to poor control) 2%. Backoff = 1% + 2% = 3%. Setpoint xBs = 95% + 3% = 98% (to be safe). Can reduce the backoff with better control (“squeeze and shift”).
D1 to a large mixing tank (soft constraint): Backoff = 1%. Setpoint xBs = 95% + 1% = 96% (to be safe). Do not need to include the control error because it averages out in the tank.
29
SUMMARY ACTIVE CONSTRAINTS
cconstraint = value of active constraint. Implementation of active constraints is usually “obvious”, but may need “back-off” (safety margin) for hard output constraints: cs = cconstraint – backoff (on the feasible side). Want tight control of hard output constraints to reduce the back-off: “squeeze and shift”.
30
Outline Skogestad procedure for control structure design I Top Down
Step S1: Define operational objective (cost) and constraints Step S2: Identify degrees of freedom and optimize operation for disturbances Step S3: Implementation of optimal operation What to control ? (primary CV’s). Active constraints Self-optimizing variables (for unconstrained degrees of freedom) Step S4: Where set the production rate? (Inventory control) II Bottom Up Step S5: Regulatory control: What more to control (secondary CV’s) ? Step S6: Supervisory control Step S7: Real-time optimization
31
Step S3. Self-optimizing control
Unconstrained variables. What should we control? c = Hy. OBVIOUS: control active constraints, c = cconstraint. NOT OBVIOUS: remaining unconstrained, c = Hy? Ideal: make the initial response (from the control layer) consistent with the overall economic objective. “Self-optimizing”: acceptable loss with cs = constant.
32
Ideal “self-optimizing” variable
Unconstrained degrees of freedom. The ideal self-optimizing variable c is the gradient, c = ∂J/∂u = Ju. Keep the gradient at zero for all disturbances (c = Ju = 0). Problem: usually no measurement of the gradient, that is, we cannot write Ju = Hy. (Figure: cost J vs. u, with Ju = 0 at the optimum.)
33
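Although the gradient is rarely measurable, it is easy to see numerically why Ju is the ideal CV. A Python sketch with a made-up cost J(u, d) = (u − d)² (not from the slides), for which u_opt(d) = d, estimating Ju by a central finite difference:

```python
# Made-up toy cost: J(u, d) = (u - d)**2, so u_opt(d) = d.
def J(u, d):
    return (u - d)**2

def Ju(u, d, h=1e-6):
    # Central finite-difference estimate of the gradient dJ/du.
    return (J(u + h, d) - J(u - h, d)) / (2.0 * h)

# Ju = 0 exactly at u = u_opt(d) = d, for every disturbance d:
# driving the gradient to zero tracks the optimum automatically.
```

Keeping c = Ju at setpoint 0 therefore gives zero loss for any d; the practical problem is only that Ju is not directly measurable.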
Unconstrained variables
(Block diagram: disturbance and measurement noise enter the plant; H is the selection/combination matrix giving the controlled variable; the steady-state control error is shown.) Ideal: c = Ju. In practice: c = Hy.
34
WHAT ARE GOOD “SELF-OPTIMIZING” VARIABLES?
Unconstrained variables WHAT ARE GOOD “SELF-OPTIMIZING” VARIABLES? Intuition: “Dominant variables” (Shinnar) Is there any systematic procedure? A. Sensitive variables: “Max. gain rule” (Gain= Minimum singular value) B. “Brute force” loss evaluation C. Optimal linear combination of measurements, c = Hy
35
«Brute force» approach: What to control?
Define optimal operation: Minimize cost function J Each candidate variable c: With constant setpoints cs compute loss L for expected disturbances d and implementation errors n Select variable c with smallest loss
36
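A minimal sketch of this “brute force” procedure, using a made-up scalar cost J(u, d) = (u − d)² (so u_opt(d) = d and the optimal cost is zero) and two hypothetical constant-setpoint policies; everything here is illustrative, not from the slides:

```python
# Made-up toy cost: J(u, d) = (u - d)**2, u_opt(d) = d, J_opt = 0.
def J(u, d):
    return (u - d)**2

disturbances = [-1.0, 1.0]   # expected disturbance set

# Candidate constant-setpoint policies, written as the input u(d) that
# results from holding each CV at its nominal setpoint:
#   c1 = u         -> constant input (no feedback): u = 0
#   c2 = u - 0.8*d -> feedback from an imperfect measurement of d: u = 0.8*d
def u_from_c1(d): return 0.0
def u_from_c2(d): return 0.8 * d

def worst_case_loss(u_policy):
    # Loss = J(actual) - J(optimal), maximized over the disturbance set.
    return max(J(u_policy(d), d) - J(d, d) for d in disturbances)

loss_c1 = worst_case_loss(u_from_c1)   # (0 - d)^2      = 1.0
loss_c2 = worst_case_loss(u_from_c2)   # (0.8d - d)^2   = 0.04
# Select the candidate with the smallest loss: here c2.
```

Implementation errors n would be handled the same way, by adding them to each candidate's setpoint before evaluating the loss.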
Constant setpoint policy: self-optimizing control
Loss for disturbances. Acceptable loss ⇒ self-optimizing control.
37
Good candidate controlled variables c (for self-optimizing control)
The optimal value of c should be insensitive to disturbances c should be easy to measure and control The value of c should be sensitive to changes in the degrees of freedom Proof: Follows
38
Unconstrained optimum
(Figure: cost J as a function of the controlled variable c, with minimum Jopt at copt.)
39
Unconstrained optimum
(Figure: cost J vs. c for different disturbances d and implementation error n.) Two problems: 1. The optimum moves because of disturbances d: copt(d). 2. Implementation error: c = copt + n.
40
Candidate controlled variables c for self-optimizing control
Unconstrained optimum. Intuitive requirements: 1. The optimal value of c should be insensitive to disturbances (avoids problem 1). 2. The optimum should be flat (avoids problem 2, implementation error). Equivalently, the value of c should be sensitive to the degrees of freedom u: “want large gain” |G|, or more generally, maximize the minimum singular value σ(G). (Figure: flat optimum = good; sharp optimum = bad.)
41
Control sensitive variables
(Block diagram: optimizer sets cs = copt; a controller adjusts u to keep cm = cs; the plant output c is measured as cm = c + n, with disturbance d and cost J.) The input error caused by the implementation error n is Δu = G⁻¹ n. Want c sensitive to u (large gain G = dc/du) to get a small variation in u when c varies by n.
42
Maximum Gain Rule in words
Unconstrained variables. Select CVs that maximize the minimum singular value of the scaled gain, σ(Gs). In words, select controlled variables c for which the gain G (= “controllable range”) is large compared to the span of c (= sum of optimal variation and control error).
43
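A sketch of the rule with invented numbers (the candidate names and values are hypothetical, not from the recycle-plant table): scaled gain = |G0| / span(c), where span(c) sums the optimal variation over the disturbances and the expected control error.

```python
# Hypothetical candidates: (unscaled gain G0, optimal variations
# |delta c_opt(d_i)| for each disturbance, control error / noise n).
candidates = {
    "cA": (10.0, [0.5, 0.3], 0.2),    # span = 1.0 -> scaled gain 10
    "cB": (2.0,  [0.05, 0.05], 0.3),  # span = 0.4 -> scaled gain 5
}

scaled_gain = {}
for name, (G0, dcopt, n) in candidates.items():
    span = sum(abs(x) for x in dcopt) + abs(n)
    scaled_gain[name] = abs(G0) / span

best = max(scaled_gain, key=scaled_gain.get)   # "cA" by this rule
```

Note that cB has the smaller optimal variation, but its large noise relative to its gain still makes it the worse candidate here.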
EXAMPLE: Recycle plant (Luyben, Yu, etc.)
(Flowsheet: feed of A to reactor, column, recycle of unreacted A (+ some B), product (98.5% B); streams/valves numbered 1–5.) Assume constant reactor temperature. Given feedrate F0 and column pressure: dynamic DOFs Nm = 5; column levels N0y = 2; steady-state DOFs N0 = 5 – 2 = 3.
44
Recycle plant: Optimal operation
1 remaining unconstrained degree of freedom; CV = ?
45
Control of recycle plant: Conventional structure (“Two-point”: CV=xD)
(Flowsheet with level controllers (LC) and two composition controllers (XC) on xD and xB.) Control active constraints (Mr = max and xB = 0.015) + xD.
46
Luyben rule Luyben rule (to avoid snowballing):
“Fix a stream in the recycle loop” (CV=F or D)
47
Luyben rule: CV=D (constant)
(Flowsheet with level controllers (LC), one composition controller (XC), and the distillate flow D held constant.)
48
A. Maximum gain rule: Steady-state gain
Conventional: Looks good Luyben rule: Not good economically
49
How did we find the gains in the Table?
Find the nominal optimum. Find the (unscaled) gain G0 from the input to the candidate outputs: c = G0 u. In this case there is only a single unconstrained input (DOF), chosen as u = L. Obtain the gain G0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints are constant (bottom composition fixed in this case). Find the span for each candidate variable: for each disturbance di make a typical change and reoptimize to obtain the optimal ranges Δcopt(di); for each candidate output obtain (estimate) the control error (noise) n. The expected variation for c is then: span(c) = Σi |Δcopt(di)| + |n|. Obtain the scaled gain, G = |G0| / span(c).
Note: The absolute value (the vector 1-norm) is used here to "sum up" and get the overall span. Alternatively, the 2-norm could be used, which could be viewed as putting less emphasis on the worst case. As an example, assume that the only contribution to the span is the implementation/measurement error, and that the variable we are controlling (c) is the average of 5 measurements of the same y, i.e. c = Σ yi/5, and that each yi has a measurement error of 1, i.e. nyi = 1. Then with the absolute value (1-norm), the contribution to the span from the implementation (measurement) error is span = Σ |nyi|/5 = 5·1/5 = 1, whereas with the 2-norm, span = sqrt(5·(1/5)²) = 1/√5 ≈ 0.45. The latter is more reasonable since we expect that the overall measurement error is reduced when taking the average of many measurements. In any case, the choice of norm is an engineering decision, so there is not really one that is "right" and one that is "wrong". We often use the 2-norm for mathematical convenience, but there are also physical justifications (as just given!). IMPORTANT!
50
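The 1-norm versus 2-norm span comparison in the note can be verified directly; a Python sketch of the 5-measurement averaging example:

```python
import math

# c = average of 5 measurements of the same y: c = sum(y_i)/5,
# each y_i with measurement error n_i = 1 (example from the slide).
h = [1.0 / 5.0] * 5   # combination weights
n = [1.0] * 5         # individual measurement errors

# 1-norm span: contributions simply summed.
span_1norm = sum(abs(hi * ni) for hi, ni in zip(h, n))               # = 1.0

# 2-norm span: errors combined root-sum-square.
span_2norm = math.sqrt(sum((hi * ni) ** 2 for hi, ni in zip(h, n)))  # ~0.447
```

The 2-norm span 1/√5 ≈ 0.447 reflects that averaging independent errors reduces the overall error, matching the argument in the note.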
B. “Brute force” loss evaluation: Disturbance in F0
Luyben rule: Conventional Loss with nominally optimal setpoints for Mr, xB and c
51
B. “Brute force” loss evaluation: Implementation error
Luyben rule: Loss with nominally optimal setpoints for Mr, xB and c
52
Conclusion: Control of recycle plant
Active constraint Mr = Mrmax Self-optimizing L/F constant: Easier than “two-point” control Assumption: Minimize energy (V) Active constraint xB = xBmin
53
Luyben’s rule to avoid snowballing
Luyben law no. 1 (“Plantwide process control”, 1998, pp. 57): “A stream somewhere in all recycle loops must be flow controlled. This is to prevent the snowball effect” Luyben rule may be OK dynamically (short time scale), BUT economically (steady-state): Recycle should increase with throughput Modified Luyben’s law 1 (by Sigurd): “Avoid having all streams in a recycle system on inventory control” This is to prevent the snowball effect Good economic control may then require that the stream which is not on inventory control is chosen as the TPM (throughput manipulator).
54
Toy Example Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), (2003).
55
Toy Example: Single measurements
Constant input: c = y4 = u = 0 ± 1 (noise). Want loss < 0.1: consider variable combinations (to be continued…).
56
Summary: Procedure selection controlled variables
Define economics and operational constraints Identify degrees of freedom and important disturbances Optimize for various disturbances Identify active constraints regions (off-line calculations) For each active constraint region do step 5-6: 5. Identify “self-optimizing” controlled variables for remaining degrees of freedom 6. Identify switching policies between regions
57
Comments. Analyzing a given CV choice, c=Hy
Evaluation of candidates can be time-consuming using general non-linear (“brute force”) formulation Pre-screening using local methods. Final verification for few promising alternatives by evaluating actual loss Local method: Maximum gain rule is not exact* but gives insight Alternative: “Exact local method” (loss method) uses same information, but somewhat more complicated Lower loss: try measurement combinations as CVs (next) *The maximum gain rule assumes that the worst-case setpoint errors Δci,opt(d) for each CV can appear together. In general, Δci,opt(d) are correlated.
58
Optimal measurement combination
CV = measurement combination. Candidate measurements (y): include also the inputs u. (Block diagram: disturbance and measurement noise enter; the combination matrix H maps the measurements to the controlled variable; the control error is shown.)
59
Nullspace method No measurement noise (ny=0)
CV=Measurement combination Nullspace method
60
Proof nullspace method
No measurement noise (ny = 0). Basis: want the optimal value of c to be independent of disturbances. Find the optimal solution as a function of d: uopt(d), yopt(d). Linearize this relationship: Δyopt = F Δd, where F is the optimal sensitivity matrix. Want: Δcopt = H Δyopt = HF Δd = 0. To achieve this for all values of d: select H such that HF = 0. Always possible if the number of (independent) measurements ny is at least nu + nd. Optimal when we disregard the implementation error (n). Amazingly simple! V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), (2007).
61
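One way to find an H with HF = 0 is from the left nullspace of F via the SVD. A Python/NumPy sketch with made-up dimensions (ny = 3 measurements, nd = 1 disturbance, nu = 1 input, so ny ≥ nu + nd holds); the deck's toy-example script does the same with MATLAB's svd:

```python
import numpy as np

# Made-up optimal sensitivity F = dy_opt/dd (3 measurements, 1 disturbance).
F = np.array([[0.5], [1.0], [-0.2]])

# The rows of H must lie in the left nullspace of F. From the SVD of F,
# the columns of U beyond rank(F) span that nullspace.
U, s, Vt = np.linalg.svd(F)
H = U[:, 1:].T    # 2 x 3 basis for the left nullspace
H1 = H[:1, :]     # pick nu = 1 row: the CV is c = H1 @ y

# By construction, the optimal value of c is disturbance-independent:
# H1 @ F = 0, i.e. delta c_opt = H1 @ F @ delta d = 0.
```

Any nonsingular recombination of these rows works equally well; the nullspace only fixes H up to that freedom.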
Example. Nullspace Method for Marathon runner
u = power, d = slope [degrees]; y1 = hr [beats/min], y2 = v [m/s]. F = dyopt/dd = [0.25 –0.2]'. H = [h1 h2]. HF = 0 → h1 f1 + h2 f2 = 0.25 h1 – 0.2 h2 = 0. Choose h1 = 1 → h2 = 0.25/0.2 = 1.25. Conclusion: c = hr + 1.25 v. Control c = constant → hr increases when v decreases (OK uphill!)
62
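A quick numeric check of the marathon combination (pure arithmetic from the slide):

```python
# From the slide: F = dy_opt/dd = [f1, f2]' with f1 = 0.25, f2 = -0.2.
f1, f2 = 0.25, -0.2
h1 = 1.0
h2 = -h1 * f1 / f2       # = 1.25, as on the slide

HF = h1 * f1 + h2 * f2   # = 0: the optimal c is disturbance-independent

def c(hr, v):
    # c = hr + 1.25*v; keeping c constant, hr must rise when v drops.
    return h1 * hr + h2 * v
```

For example, if v drops by 1 m/s, hr must rise by 1.25 beats/min to hold c constant — exactly the uphill behavior described on the slide.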
Nullspace method (HF=0) gives Ju=0
Proof. Appendix B in: Jäschke and Skogestad, ”NCO tracking and self-optimizing control in the context of real-time optimization”, Journal of Process Control, (2011).
63
Toy Example
64
% TOY EXAMPLE FROM ALSTAD et al.
nu=1, nd=1
Gy = [ ]'
Gyd = [ ]'
Juu = 2
F = [ ]'
Wd = eye(1)
Wn = eye(4)
Y = [F*Wd Wn]

% Single measurements
H1 = [ ]; H2 = [ ]; H3 = [ ]; H4 = [ ];
% M = sqrtm(Juu)*inv(H*Gy)*H*Y
H = H1; M = sqrtm(Juu)*inv(H*Gy)*H*Y; L1 = 0.5*svd(M)^2 % = 100 (WORST)
H = H2; M = sqrtm(Juu)*inv(H*Gy)*H*Y; L2 = 0.5*svd(M)^2 % = 1.0025
H = H3; M = sqrtm(Juu)*inv(H*Gy)*H*Y; L3 = 0.5*svd(M)^2 % = 0.26 (BEST)
H = H4; M = sqrtm(Juu)*inv(H*Gy)*H*Y; L4 = 0.5*svd(M)^2 % = 2

% Nullspace combination of y2 and y3
ny = 2; [u,s,v] = svd([20 5]'); Hnull = u(:,nd+1:ny)' % = [ ]
H = [0 Hnull 0]; M = sqrtm(Juu)*inv(H*Gy)*H*Y; Lnull = 0.5*svd(M)^2 % = 0.0425

% Nullspace, alternative explicit expression (here using all measurements):
Ha = [Juu Jud]*pinv([Gy Gyd]); Ha = Ha/norm(Ha) % = [ ]
H = Ha; M = sqrtm(Juu)*inv(H*Gy)*H*Y; Lnull2 = 0.5*svd(M)^2 % = 0.0425

% Exact local method (loss method)
He = (inv(Y*Y')*Gy)'; He = He/norm(He) % = [ ]
H = He; M = sqrtm(Juu)*inv(H*Gy)*H*Y; Le = 0.5*svd(M)^2 % = 0.0405
65
”Exact local method” (with measurement noise) Problem definition
CV = measurement combination. (Block diagram: the measurements y pass through the combination/selection matrix H to give cm.)
66
”Exact local method” (with measurement noise)
(Block diagram: setpoint cs = constant; feedback controller K adjusts u; H combines the measured y into cm; disturbance d and cost J are shown.) Loss with c = Hym = 0 due to (i) disturbances d and (ii) measurement noise ny. Ref: Halvorsen et al., I&ECR, 2003; Kariwala et al., I&ECR, 2008.
67
Relationship to max. gain rule and nullspace method
(Same feedback structure: cs = constant, controller K, combination matrix H giving cm.) “= 0” in the nullspace method (no noise). “Minimize” in the maximum gain rule (maximize σ(S1 G Juu–1/2), with G = HGy). “Scaling” S1–1 for the max. gain rule.
68
Have extra degrees of freedom
Non-convex optimization problem (Halvorsen et al., 2003). We have extra degrees of freedom: c = Hy gives the same loss as c = DHy for any non-singular matrix D. 1st improvement (Alstad et al., 2009): convex optimization problem with a global solution; with a full H (not a selection) Juu is not needed, and Q can be used as degrees of freedom for a faster solution. 2nd improvement (Yelchuru et al., 2010): analytical solution when YYT has full rank (with measurement noise): HT = (YYT)–1 Gy.
69
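The analytical solution above (HT = (YYT)–1 Gy, optimal up to any nonsingular D) is one line of linear algebra. A NumPy sketch with invented matrices (the deck's toy-example script computes the same He in MATLAB):

```python
import numpy as np

# Made-up data: ny = 3 measurements, nu = 1 input, nd = 1 disturbance.
Gy = np.array([[1.0], [2.0], [1.5]])    # measurement gains from u
F  = np.array([[0.5], [1.0], [-0.2]])   # optimal sensitivity dy_opt/dd
Wd = np.eye(1)                          # disturbance magnitude
Wn = 0.1 * np.eye(3)                    # measurement-noise magnitudes

Y = np.hstack([F @ Wd, Wn])             # 3 x 4; Y @ Y.T has full rank here

# Analytical exact-local-method solution: H^T = (Y Y^T)^{-1} G^y.
H = np.linalg.solve(Y @ Y.T, Gy).T      # 1 x 3
H = H / np.linalg.norm(H)               # normalization is a free choice (D)
```

Normalizing H changes nothing in the loss, since any nonsingular D gives an equivalent CV.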
Toy example...
70
Example: CO2 refrigeration cycle
pH J = Ws (work supplied) DOF = u (valve opening, z) Main disturbances: d1 = TH d2 = TCs (setpoint) d3 = UAloss What should we control?
71
CO2 refrigeration cycle
Step 1. One (remaining) degree of freedom (u = z). Step 2. Objective function: J = Ws (compressor work). Step 3. Optimize operation for disturbances (d1 = TH, d2 = TCs, d3 = UA): the optimum is always unconstrained. Step 4. Implementation of optimal operation. No good single measurements (all give large losses): ph, Th, z, … Nullspace method: need to combine nu + nd = 1 + 3 = 4 measurements to have zero disturbance loss. Simpler: try combining two measurements. Exact local method: c = h1 ph + h2 Th = ph + k Th, with k in bar/K. Nonlinear evaluation of loss: OK!
72
CO2 cycle: Maximum gain rule
73
Refrigeration cycle: Proposed control structure
CV=Measurement combination Refrigeration cycle: Proposed control structure Control c= “temperature-corrected high pressure”
74
Case study
75
Case study: control structure design using self-optimizing control for economically optimal CO2 recovery*.
Step S1. Objective function J = energy cost + cost (tax) of CO2 released to air. 4 equality and 2 inequality constraints: stripper top pressure; condenser temperature; pump pressure of recycle amine; cooler temperature; CO2 recovery ≥ 80%; reboiler duty < 1393 kW (nominal +20%).
Step S2. (a) 10 degrees of freedom: 8 valves + 2 pumps. 4 levels without steady-state effect: absorber (1), stripper (2), make-up tank (1). Disturbances: flue gas flowrate, CO2 composition in flue gas + active constraints. (b) Optimization using the Unisim steady-state simulator. Region I (nominal feedrate): no inequality constraints active; 2 unconstrained degrees of freedom = 10 – 4 – 4.
Step S3 (identify CVs). 1. Control the 4 equality constraints. 2. Identify 2 self-optimizing CVs: use the exact local method and select the CV set with minimum loss.
*M. Panahi and S. Skogestad, “Economically efficient operation of CO2 capturing process, part I: Self-optimizing procedure for selecting the best controlled variables”, Chemical Engineering and Processing, 50, (2011).
76
Exact local method* for finding CVs
The set with the minimum worst case loss is the best Juu and F, the optimal sensitivity of the measurements with respect to disturbances, are obtained numerically * I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ‘Optimal selection of controlled variables’ Ind. Eng. Chem. Res., 42 (14), (2003)
77
Identify 2 self-optimizing CVs
39 candidate CVs: 15 possible tray temperatures in the absorber; 20 possible tray temperatures in the stripper; CO2 recovery in the absorber and CO2 content at the bottom of the stripper; recycle amine flowrate and reboiler duty. Use a bidirectional branch and bound algorithm* for finding the best CVs. Best self-optimizing CV set in region I: c1 = CO2 recovery (95.26%), c2 = temperature of tray no. 16 in the stripper. These CVs are not necessarily the best if new constraints are met. * V. Kariwala and Y. Cao, “Bidirectional Branch and Bound for Controlled Variable Selection, Part II: Exact Local Method for Self-Optimizing Control”, Computers & Chemical Engineering, 33 (2009).
78
Proposed control structure with given nominal flue gas flowrate (region I)
79
Region II: large feedrates of flue gas (+30%)
Case                                    Feedrate (kmol/hr)   CO2 recovery (%)   Temp. tray 16 (°C)   Reboiler duty (kW)   Cost (USD/ton)
Optimal nominal point                   219.3                95.26              106.9                1161                 2.49
+5% feedrate                            230.3                                                        1222
+10% feedrate                           241.2                                                        1279
+15% feedrate                           252.2                                                        1339
+19.38%, reboiler duty saturates        261.8                                                        1393 (+20%)          2.50
+30% feedrate (reoptimized)             285.1                91.60              103.3                                     2.65
Rows through +19.38% are in region I; the last row is in region II (reboiler duty at max). Saturation of the reboiler duty leaves one unconstrained degree of freedom. Use the maximum gain rule to find the best CV among 37 candidates: the temperature on tray no. 13 in the stripper has the largest scaled gain, but tray 16 is also OK.
80
Proposed control structure with large flue gas flowrate (region II)
81
Conditions for switching between regions of active constraints (“supervisory control”)
Within each region of active constraints it is optimal to: control the active constraints at ca = ca,constraint, and control the self-optimizing variables at cso = cso,optimal. Define the CV set ci in each region i. Keep track of ci (active constraints and “self-optimizing” variables) for all regions i. Switch to region i when an element in ci changes sign.
82
Example – switching policies CO2 plant (”supervisory control”)
Assume we operate in region I (unconstrained) with CV = CO2 recovery = 95.26%. When we reach maximum Q: switch to Q = Qmax (region II) (obvious). The CO2 recovery will then drop below 95.26%. When the CO2 recovery again exceeds 95.26%: switch back to region I!
83
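A sketch of this switching logic for the CO2 plant, using the constraint values from the case study; the function and signal names are illustrative, and a real implementation would add deadbands to avoid chattering between regions:

```python
Q_MAX = 1393.0   # reboiler-duty constraint (kW), from the case study

def next_region(region, Q, co2_recovery, recovery_sp=95.26):
    """Decide which active-constraint region to operate in."""
    if region == "I" and Q >= Q_MAX:
        return "II"   # reboiler duty saturates: hold Q = Qmax
    if region == "II" and co2_recovery > recovery_sp:
        return "I"    # recovery exceeds its setpoint again: switch back
    return region
```

This is exactly the sign-change rule above: each region monitors the CVs of the neighboring region and switches when one crosses its constraint or setpoint value.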
Conclusion optimal operation
ALWAYS: 1. Control active constraints and control them tightly!! Good times: Maximize throughput -> tight control of bottleneck 2. Identify “self-optimizing” CVs for remaining unconstrained degrees of freedom Use offline analysis to find expected operating regions and prepare control system for this! One control policy when prices are low (nominal, unconstrained optimum) Another when prices are high (constrained optimum = bottleneck) ONLY if necessary: consider RTO on top of this