Slide 1: Effective Implementation of Optimal Operation using Self-Optimizing Control
Sigurd Skogestad
Institutt for kjemisk prosessteknologi, NTNU, Trondheim
Slide 2: Plantwide control = Control structure design
Plantwide control concerns not the tuning and behavior of each control loop, but rather the control philosophy of the overall plant, with emphasis on the structural decisions:
– Selection of controlled variables ("outputs")
– Selection of manipulated variables ("inputs")
– Selection of (extra) measurements
– Selection of control configuration (structure of the overall controller that interconnects the controlled, manipulated and measured variables)
– Selection of controller type (LQG, H-infinity, PID, decoupler, MPC, etc.)
Previous work on plantwide control:
– Page Buckley (1964): chapter on "Overall process control" (still industrial practice)
– Greg Shinskey (1967): process control systems
– Alan Foss (1973): control system structure
– Bill Luyben et al. (1975-): case studies; "snowball effect"
– George Stephanopoulos and Manfred Morari (1980): synthesis of control structures for chemical processes
– Ruel Shinnar (1981-): "dominant variables"
– Jim Downs (1991): Tennessee Eastman challenge problem
– Larsson and Skogestad (2000): review of plantwide control
Slide 3: Plantwide control design procedure
I. Top-down
Step 1: Define operational objectives (optimal operation)
– Cost function J (to be minimized)
– Operational constraints
Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances
Step 3: Select primary controlled variables c = y1 (CVs)
Step 4: Where to set the production rate? (Inventory control)
II. Bottom-up
Step 5: Regulatory/stabilizing control (PID layer)
– What more to control (y2; local CVs)?
– Pairing of inputs and outputs
Step 6: Supervisory control (MPC layer)
Step 7: Real-time optimization (do we need it?)
(Figure: control hierarchy from the MVs through the secondary measurements y2 to the primary outputs y1.)
Slide 4: Process control: implementation of optimal operation
(Figure: control hierarchy: RTO provides setpoints y1s to MPC, which provides setpoints y2s to the PID layer, which moves the inputs u (valves) of the process.)
Slide 5: Outline
– Implementation of optimal operation
– Paradigm 1: On-line optimizing control (including RTO)
– Paradigm 2: "Self-optimizing" control schemes (precomputed, off-line solution)
– Examples
– Control of optimal measurement combinations: nullspace method, exact local method
– Summary/conclusions
Slide 6: Optimal operation
A typical dynamic optimization problem (a generic formulation is sketched below).
Implementation: "open-loop" solutions are not robust to disturbances or model errors. We want to introduce feedback.
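The problem statement itself appeared as an image on the slide. A generic dynamic optimization formulation of this kind, written here as an assumption about what was shown rather than a transcription, is:

```latex
% Generic dynamic optimization problem (assumed form, not the slide's exact formula)
\min_{u(t)} \; J = \int_{0}^{T} j\bigl(x(t),u(t),d(t)\bigr)\,dt
\quad \text{s.t.} \quad \dot{x} = f(x,u,d), \qquad g(x,u,d) \le 0
```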
Slide 7: Implementation of optimal operation
Paradigm 1: On-line optimizing control, where measurements are used to update the model and states.
– Process control: steady-state "real-time optimization" (RTO).
– Updating the model and states: "data reconciliation".
Paradigm 2: "Self-optimizing" control, where the scheme is found by exploiting properties of the solution.
– Most common in practice, but ad hoc.
Slide 8: Implementation: Paradigm 1
Paradigm 1: On-line optimizing control.
– Measurements are primarily used to update the model.
– The optimization problem is re-solved on-line to compute new inputs.
– Examples: conventional MPC, RTO (real-time optimization).
This is the "obvious" approach (for someone who does not know control).
Slide 9: Example of paradigm 1: On-line optimizing control of a marathon runner
Even getting a reasonable model would require more than 10 PhDs, and the model would have to be fitted to each individual. Clearly impractical!
Slide 10: Implementation: Paradigm 2
Paradigm 2: Precomputed solutions based on off-line optimization.
– Find properties of the solution suited for simple and robust on-line implementation.
– Most common in practice, but ad hoc.
Proposed method: do this in a systematic manner ("inherent optimal operation"):
– Turn the optimization into a feedback problem.
– Find the regions of active constraints, and in each region:
1. Control the active constraints.
2. Control "self-optimizing" variables for the remaining unconstrained degrees of freedom.
Slide 11: Optimal operation: Runner
One degree of freedom (u): power.
Cost to be minimized: J = T.
Constraints:
– u ≤ u_max
– Follow the track
– Fitness (body model)
Optimal operation: minimize J with respect to u(t).
ISSUE: How do we implement optimal operation?
Slide 12: Optimal operation: Runner
Solution 2: Feedback (self-optimizing control). What should we control?
Slide 13: Self-optimizing control: Sprinter (100 m)
Optimal operation of the sprinter, J = T.
– Active constraint control: maximum speed ("no thinking required").
Slide 14: Self-optimizing control: Marathon (42 km)
Optimal operation of the marathon runner, J = T. Is there any self-optimizing variable c (to control at a constant setpoint)?
– c1 = distance to the leader of the race
– c2 = speed
– c3 = heart rate
– c4 = level of lactate in muscles
Slide 15: Implementation of paradigm 2: Feedback control of the marathon runner
Simplest case: select one measurement, c = heart rate.
– Simple and robust implementation
– Disturbances are indirectly handled by keeping a constant heart rate
– May have infrequent adjustment of the setpoint (heart rate)
Slide 16: Implementation (in practice): Local feedback control!
"Self-optimizing control": constant setpoints for c give acceptable loss.
(Figure: three implementation structures for rejecting a disturbance d: feedforward, optimizing control, and local feedback control of c (CV).)
Controlled variables c (denoted z in the book):
1. Active constraints
2. "Self-optimizing" variables c for the remaining unconstrained degrees of freedom (u)
No or infrequent on-line optimization is needed; the controlled variables c are found based on off-line analysis.
Slide 17: Further examples of self-optimizing control
– Marathon runner
– Central bank
– Cake baking
– Business systems (KPIs)
– Investment portfolio
– Biology
– Chemical process plants
Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control).
Slide 18: More on further examples
– Central bank: J = welfare, u = interest rate, c = inflation rate (2.5%)
– Cake baking: J = nice taste, u = heat input, c = temperature (200 °C)
– Business: J = profit, c = key performance indicator (KPI), e.g.:
  – Response time to order
  – Energy consumption per kg or unit
  – Number of employees
  – Research spending
  Optimal values are obtained by "benchmarking".
– Investment (portfolio management): J = profit, c = fraction of investment in shares (50%)
– Biological systems:
  – "Self-optimizing" controlled variables c have been found by natural selection.
  – We need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize.
Slide 19: "Magic" self-optimizing variables: how do we find them?
Intuition: "dominant variables" (Shinnar). Is there any systematic procedure?
Slide 20: Optimal operation: unconstrained optimum
(Figure: cost J as a function of the controlled variable c, with minimum J_opt at c = c_opt.)
Slide 21: Optimal operation: unconstrained optimum
(Figure: the same J vs. c curve.)
Two problems:
1. The optimum moves because of disturbances d: c_opt(d).
2. Implementation error: c = c_opt + n.
Slide 22: Candidate controlled variables c for self-optimizing control (unconstrained optimum)
Intuitive requirements:
1. The optimal value of c should be insensitive to disturbances (avoids problem 1).
2. The optimum should be flat (avoids problem 2, implementation error).
Equivalently, the value of c should be sensitive to the degrees of freedom u: "want a large gain" |G|, or more generally, maximize the minimum singular value σ(G).
(Figure: a flat optimum (good) vs. a sharp optimum (bad).)
Slide 23: Quantitative steady state: maximum gain rule (unconstrained optimum)
Maximum gain rule (Skogestad and Postlethwaite, 1996): look for variables that maximize the scaled gain, i.e. the minimum singular value σ(G_s) of the appropriately scaled steady-state gain matrix G_s from u to c. (A numerical sketch follows.)
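Below is a minimal numerical sketch of the maximum gain rule. All matrices are illustrative, and the scaling convention (S1 from the expected span of each candidate CV, Juu^(-1/2) on the input side) is the one the deck itself refers to on slide 36:

```python
import numpy as np

# Maximum gain rule sketch: prefer the CV set with the largest minimum
# singular value of the scaled gain Gs = S1 * G * Juu^(-1/2).
# All numbers below are hypothetical.
G = np.array([[10.0, 0.5],
              [2.0,  1.0]])        # steady-state gain from u to candidate CVs c
span = np.array([1.0, 0.2])        # span_i = |optimal variation of c_i| + |impl. error|
Juu = np.array([[2.0, 0.0],
                [0.0, 1.0]])       # Hessian of cost J with respect to u

S1 = np.diag(1.0 / span)
w, V = np.linalg.eigh(Juu)         # Juu^(-1/2) via eigendecomposition (Juu is SPD)
S2 = V @ np.diag(w**-0.5) @ V.T

Gs = S1 @ G @ S2
sigma_min = np.linalg.svd(Gs, compute_uv=False)[-1]
print(f"Minimum singular value of scaled gain: {sigma_min:.3f}")
```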
Slide 24: Why is a large gain good?
(Figure: J(u) with optimum (u_opt, J_opt); the corresponding c-axis shows c - c_opt and the resulting loss.)
Δc = G Δu. With a large gain G, a large implementation error n (in c) translates into only a small deviation of u from u_opt(d), and hence a low loss.
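In equation form, the standard second-order loss expansion behind this argument (written here for the scalar case) is:

```latex
L = J(u,d) - J_{\mathrm{opt}}(d) \approx \tfrac{1}{2}\, J_{uu}\,(u - u_{\mathrm{opt}})^{2},
\qquad \Delta c = G\,\Delta u .
% An implementation error n = c - c_opt therefore forces
u - u_{\mathrm{opt}} = n / G
\;\;\Longrightarrow\;\;
L \approx \tfrac{1}{2}\, \frac{J_{uu}}{G^{2}}\, n^{2},
% so a large gain |G| directly reduces the loss caused by n.
```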
Slide 25: Ideal "self-optimizing" variables (unconstrained degrees of freedom)
Operational objective: minimize the cost function J(u,d). The ideal "self-optimizing" variable is the gradient J_u (first-order optimality condition) (Halvorsen and Skogestad, 1997; Bonvin et al., 2001; Cao, 2003), with optimal setpoint J_u = 0.
BUT: the gradient cannot be measured in practice.
– Possible approach: estimate the gradient J_u from measurements y.
– Approach here: look directly for c as a function of measurements y (c = Hy) without going via the gradient.
References:
I.J. Halvorsen and S. Skogestad, "Optimizing control of Petlyuk distillation: Understanding the steady-state behavior", Comp. Chem. Engng., 21, S249-S254 (1997).
I.J. Halvorsen and S. Skogestad, "Indirect on-line optimization through setpoint control", AIChE Annual Meeting, 16-21 Nov. 1997, Los Angeles, Paper 194h.
Slides 26-27: (Figures: block diagrams introducing the measurement combination matrix H.)
Slide 28: Optimal measurement combination
c = Hy, where H is the combination matrix.
Candidate measurements (y): also include the inputs u.
Slide 29: Optimal H: problem definition
(Figure: block diagram and equations defining the problem of choosing the combination matrix H.)
Slide 30: Nullspace method (no measurement noise, n^y = 0)
Slide 31: Proof of the nullspace method (no measurement noise, n^y = 0)
(Cartoon: "Amazingly simple!" Sigurd is told how easy it is to find H.)
Basis: we want the optimal value of c to be independent of disturbances.
– Find the optimal solution as a function of d: u_opt(d), y_opt(d).
– Linearize this relationship: Δy_opt = F Δd, where F = dy_opt/dd is the optimal sensitivity matrix.
– Want: Δc_opt = H Δy_opt = HF Δd = 0 for all d, i.e. HF = 0.
– For an H satisfying HF = 0 to exist, we must have enough measurements: n_y ≥ n_u + n_d.
This H is optimal when we disregard the implementation error (n). (A numerical sketch follows.)
V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
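A minimal numerical sketch of the nullspace method, with an illustrative F; finding H with HF = 0 is just a left-nullspace computation, done here with scipy:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative optimal sensitivity matrix F = dy_opt/dd with
# n_y = 4 measurements, n_d = 2 disturbances, n_u = 2 inputs,
# so n_y >= n_u + n_d holds and an H with HF = 0 exists.
F = np.array([[ 0.5, -0.1],
              [ 0.2,  0.3],
              [-0.4,  0.1],
              [ 0.1,  0.2]])

# Rows of H span the left nullspace of F, i.e. H F = 0.
H = null_space(F.T).T        # shape (n_y - rank(F), n_y) = (2, 4) here
print(H @ F)                 # ~ 0: optimal value of c = Hy is insensitive to d
```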
Slide 32: Example: nullspace method for the marathon runner
u = power, d = slope [degrees]; measurements y1 = hr [beats/min], y2 = v [m/s].
F = dy_opt/dd = [0.25, -0.2]^T
H = [h1, h2]; HF = 0 gives h1 f1 + h2 f2 = 0.25 h1 - 0.2 h2 = 0.
Choose h1 = 1; then h2 = 0.25/0.2 = 1.25.
Conclusion: c = hr + 1.25 v. Controlling c = constant means hr increases when v decreases (OK uphill!).
Slide 33: Loss evaluation (with measurement noise)
Loss with c = Hy_m = 0 due to (i) disturbances d and (ii) measurement noise n^y. See Eq. (10.12) in the book.
(Figure: feedback loop where the controller K keeps c = H y_m at the constant setpoint c_s, and the loss is evaluated from u, d and J.)
Proof: see the handwritten notes. (A computational sketch follows.)
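Below is a sketch of this loss evaluation for a given H, using the local form commonly stated in the exact-local-method literature, L_avg = (1/2)||M||_F^2 with M = Juu^(1/2) (H G^y)^(-1) H Y and Y = [F W_d, W_ny]; whether this matches Eq. (10.12) exactly should be checked against the book, and all numbers are illustrative:

```python
import numpy as np

ny = 4                                            # number of measurements
Gy = np.array([[1.0], [0.5], [-0.3], [0.8]])      # gain dy/du (n_u = 1)
F = np.array([[ 0.5, -0.1], [ 0.2, 0.3],
              [-0.4,  0.1], [ 0.1, 0.2]])         # optimal sensitivity dy_opt/dd
Wd = np.diag([1.0, 0.5])                          # expected disturbance magnitudes
Wny = 0.1 * np.eye(ny)                            # expected measurement noise
Juu = np.array([[2.0]])                           # cost Hessian w.r.t. u

Y = np.hstack([F @ Wd, Wny])

def local_loss(H):
    """Average local loss 0.5 * ||Juu^(1/2) (H Gy)^(-1) H Y||_F^2."""
    R = np.linalg.cholesky(Juu).T                 # R^T R = Juu
    M = R @ np.linalg.inv(H @ Gy) @ H @ Y
    return 0.5 * np.linalg.norm(M, 'fro')**2

H = np.array([[1.0, 1.25, 0.0, 0.0]])             # some candidate combination
print(f"Average local loss for this H: {local_loss(H):.4f}")
```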
Slide 34: Optimal H (with measurement noise)
1. Kariwala: one can minimize the 2-norm (Frobenius norm) instead of the singular value.
2. BUT it is still a non-convex optimization problem (Halvorsen et al., 2003); see book p. 397.
Improvement (Alstad et al., 2009): an analytical solution for the "full" H.
– H has extra degrees of freedom: DH is also optimal for any non-singular matrix D.
– Convex optimization problem with a global solution.
– Does not need J_uu.
– The analytical solution applies when YY^T has full rank (which holds with measurement noise). (See the one-line computation below.)
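Continuing the previous sketch, the analytical "full H" solution can be written in one line. This follows the Alstad et al. (2009) form H^T = (Y Y^T)^(-1) G^y, which is optimal up to the non-singular matrix D mentioned on the slide:

```python
# Analytical "full H" solution (reuses Y, Gy and local_loss from the
# previous sketch; valid when Y Y^T has full rank).
H_opt = np.linalg.solve(Y @ Y.T, Gy).T            # H^T = (Y Y^T)^{-1} G^y
print("Optimal combination H:", H_opt)
print(f"Loss with optimal H: {local_loss(H_opt):.4f}")
```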
Slide 35: Special cases
– No noise (n^y = 0, W_ny = 0): the optimum is HF = 0 (nullspace method). But if there are many measurements, the solution is not unique.
– No disturbances (d = 0, W_d = 0) and the same noise for all measurements (W_ny = I): the optimum is H^T = G^y ("control the sensitive measurements"). Proof: use the analytical expression.
(Figure: the same feedback loop with K and H, c_s = constant.)
Slide 36: Relationship to the maximum gain rule
(Figure: the same feedback loop, annotated with the terms of the loss expression.)
– "Minimize" in the maximum gain rule corresponds to maximizing σ(S_1 G J_uu^{-1/2}), with G = HG^y.
– S_1^{-1} is the "scaling" used in the maximum gain rule.
– The "= 0" of the nullspace method is the no-noise case.
Slide 37: Special case: indirect control = estimation using a "soft sensor"
– Indirect control: control the measurement combination c = Hy such that the primary output y_1 is kept constant. Optimal sensitivity: F = dy_opt/dd = (dy/dd)_{y1}. Distillation example: y = temperature measurements, y_1 = product composition.
– Estimation: estimate the primary variables, y_1 = c = Hy, with y_1 = G_1 u and y = G^y u. This is the same as indirect control if we use the extra degrees of freedom (H_1 = DH) such that H_1 G^y = G_1.
Slide 38: Summary 1: Systematic procedure for selecting controlled variables for optimal operation
1. Define the economics (cost J) and operational constraints.
2. Identify the degrees of freedom and important disturbances.
3. Optimize for the various disturbances.
4. Identify the active-constraint regions (off-line calculations).
For each active-constraint region, do steps 5-6:
5. Identify "self-optimizing" controlled variables for the remaining unconstrained degrees of freedom:
A. "Brute force" loss evaluation
B. Sensitive variables: "maximum gain rule" (gain = minimum singular value)
C. Optimal linear combination of measurements, c = Hy
6. Identify switching policies between regions.
Slide 39: Summary 2
A systematic approach for finding CVs, c = Hy, which can also be used for estimation.
References:
– S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).
– V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
– V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009).
– R. Yelchuru, S. Skogestad and H. Manum, "MIQP formulation for controlled variable selection in self-optimizing control", IFAC symposium DYCOPS-9, Leuven, Belgium, 5-7 July 2010.
Download papers: Google "Skogestad".
Slide 40: Example: CO2 refrigeration cycle
One unconstrained degree of freedom (u). What should we control? c = ?
(Figure: CO2 refrigeration cycle flowsheet, with the high pressure p_h marked.)
Slide 41: CO2 refrigeration cycle
Step 1: One (remaining) degree of freedom, u = z.
Step 2: Objective function J = W_s (compressor work).
Step 3: Optimize operation for the disturbances (d1 = T_C, d2 = T_H, d3 = UA). The optimum is always unconstrained.
Step 4: Implementation of optimal operation:
– No good single measurements (all give large losses): p_h, T_h, z, ...
– Nullspace method: we need to combine n_u + n_d = 1 + 3 = 4 measurements to have zero disturbance loss.
– Simpler: try combining two measurements. Exact local method: c = h1 p_h + h2 T_h = p_h + k T_h, with k = -8.53 bar/K.
– Nonlinear evaluation of the loss: OK! (A small implementation sketch follows.)
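A tiny sketch of how the resulting combined CV could be computed in an implementation; the gain k = -8.53 bar/K is from the slide, while the function name, units and operating point are hypothetical:

```python
def combined_cv(p_h: float, T_h: float, k: float = -8.53) -> float:
    """Temperature-corrected high pressure, c = p_h + k*T_h (p_h in bar, T_h in K assumed)."""
    return p_h + k * T_h

# Hypothetical measurements; in operation, c would be held at its optimal
# setpoint by a simple feedback loop acting on the valve z.
print(combined_cv(p_h=97.0, T_h=308.0))
```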
Slide 42: Refrigeration cycle: proposed control structure
Control c = "temperature-corrected high pressure".
Slide 43: Conclusion: optimal operation
ALWAYS:
1. Control the active constraints, and control them tightly!
– Good times: maximize throughput by tight control of the bottleneck.
2. Identify "self-optimizing" CVs for the remaining unconstrained degrees of freedom.
– Use off-line analysis to find the expected operating regions and prepare the control system for them: one control policy when prices are low (nominal, unconstrained optimum), another when prices are high (constrained optimum = bottleneck).
ONLY if necessary: consider RTO on top of this.