Single Function Quine-McCluskey 2-Level Minimization Shantanu Dutt University of Illinois at Chicago Acknowledgement: Transcribed to Powerpoint by Huan Ren from Prof. Shantanu Dutt’s handwritten notes

Basic Two-Level Minimization Steps

The basic steps are the same for K-maps and Quine-McCluskey (QM):
1. Form all PIs.
2. Determine a minimum-cost set of PIs to cover all minterms (MTs):
   a) Determine EPIs, include them in the final expression, and delete all MTs they cover.
   b) Determine a min-cost set of the remaining PIs to cover the remaining MTs. This is a really hard problem (NP-hard): solving it optimally takes worst-case computation time that is exponential in the # of non-essential PIs.

QM: Forming all PIs using the Tabular Method (Step 1)

1. Form groups Gi of MTs/DCs, with a MT/DC belonging to Gi if it has i 1's in its binary (0, 1) or ternary (0, 1, -) representation. Note that 0 <= i <= n (the # of variables).
2. Place the groups Gi in "rows" in increasing order of i.
3. For each MT/DC a in Gi, look for MTs/DCs b with which it has logical adjacency (so that they can be combined to form a larger implicant); such b can occur only in Gi+1. If such a MT/DC b is found in Gi+1, then the larger implicant (a, b) covers both a and b; tick-mark both a and b to indicate this.
4. Steps 1-3 are repeated in each subsequent "column" of larger implicants.
5. The process stops when no adjacent implicants are found in the current column, so no larger implicants can be formed.
6. The implicants that are not tick-marked are PIs, as they are not covered by any other implicant.
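The steps above can be sketched in Python (an illustrative implementation, not from the slides; implicants are ternary strings in which '-' marks an eliminated variable):

```python
def quine_mccluskey_pis(minterms, n_vars):
    """Form all prime implicants by the tabular method (steps 1-6 above).

    Group terms by their number of 1s, combine terms in adjacent groups
    that differ in exactly one non-'-' bit, tick-mark combined terms, and
    repeat column by column; un-ticked implicants are the PIs.
    """
    # Column 1: each MT/DC as an n-bit binary string.
    current = {format(m, f"0{n_vars}b") for m in minterms}
    prime_implicants = set()
    while current:
        merged = set()    # tick-marked implicants (combined this pass)
        next_col = set()  # larger implicants for the next column
        by_ones = {}
        for imp in current:
            by_ones.setdefault(imp.count("1"), set()).add(imp)
        for i in sorted(by_ones):
            for a in by_ones[i]:
                for b in by_ones.get(i + 1, ()):
                    # Combinable iff they differ in exactly one position
                    # and neither string has '-' there.
                    diff = [k for k in range(n_vars) if a[k] != b[k]]
                    if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
                        next_col.add(a[:diff[0]] + "-" + a[diff[0] + 1:])
                        merged.update((a, b))
        prime_implicants |= current - merged  # un-ticked => prime
        current = next_col
    return prime_implicants
```

For the running example (MTs/DCs 1, 2, 3, 5, 6, 7, 8, 9, 13 of a 4-variable function), this yields the four PIs 100-, 0--1, --01, and 0-1-.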

Example: MTs/DCs 1, 2, 3, 5, 6, 7, 8, 9, 13 of a 4-variable function f(A,B,C,D); '-' denotes an eliminated variable.

Column 1 (MTs/DCs grouped by # of 1's):
  G1: 1 (0001), 2 (0010), 8 (1000)
  G2: 3 (0011), 5 (0101), 6 (0110), 9 (1001)
  G3: 7 (0111), 13 (1101)

Column 2 (implicants: MT/DC list, ABCD, diff. set):
  (1,3)   00-1  {2}
  (1,5)   0-01  {4}
  (1,9)   -001  {8}
  (2,3)   001-  {1}
  (2,6)   0-10  {4}
  (8,9)   100-  {1}   <- PI1
  (3,7)   0-11  {4}
  (5,7)   01-1  {2}
  (5,13)  -101  {8}
  (6,7)   011-  {1}
  (9,13)  1-01  {4}

Column 3 (implicants: MT/DC list, ABCD, diff. set):
  (1,3,5,7)   0--1  {2,4}  <- PI2
  (1,5,9,13)  --01  {4,8}  <- PI3
  (2,3,6,7)   0-1-  {1,4}  <- PI4

Tricks for quick PI formation. Necessary (but not sufficient) conditions for combining two implicants to form a larger implicant that covers them (only if the conditions are met do we need to look at the ternary notations to determine combinability):
- Two MTs/DCs in adjacent groups can be combined only if their integer values differ by a power of 2.
- Two implicants in adjacent groups can be combined only if their difference sets are the same and the pairwise absolute differences of their MT/DC lists (arranged in increasing order) are all the same and a power of 2. Examples: (i) g=(1,3), h=(5,7): the DS for both is {2} and |(1,3)-(5,7)| = (4,4), so g and h may be combinable (this is not a sufficient condition, so we then look at their ternary notations; from those, they are). The value 4 from |g-h| is added to the difference set of the combined implicants to obtain the difference set {2,4} of the larger implicant. (ii) g=(3,7), h=(9,13): the DS for both is {4} and |(3,7)-(9,13)| = (6,6), so g and h are not combinable: the entries of the g-h vector are equal, but 6 is not a power of 2.
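The power-of-2 pre-check above can be sketched as a small helper (hypothetical function name; `g` and `h` are sorted MT/DC tuples accompanied by their difference sets):

```python
def may_combine(g, h, ds_g, ds_h):
    """Necessary (but not sufficient) pre-check for combining implicants:
    the difference sets must match, and the pairwise absolute differences
    of the sorted MT/DC lists must all be equal and a power of 2.
    If this returns True, the ternary notations must still be checked."""
    if ds_g != ds_h or len(g) != len(h):
        return False
    diffs = {abs(a - b) for a, b in zip(g, h)}
    if len(diffs) != 1:
        return False  # pairwise differences not all the same
    d = diffs.pop()
    return d > 0 and (d & (d - 1)) == 0  # power-of-2 test via bit trick
```

This reproduces the slide's two examples: (1,3) vs (5,7) passes the check, while (3,7) vs (9,13) fails because 6 is not a power of 2.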

2a. QM: The PI Table (PIT)—Min-Cost MT Covering

Inclusion removal/deletion: removal from the PIT of those PIs that are to be included in the final expression; the MTs they cover are also removed. A * is used to indicate inclusion deletions (i.e., for EPIs and pseudo-EPIs; see later for the definition); these are the PIs included in the final minimized expression.

2b. QM: The PIT—Min-Cost MT Covering (cont.)

Exclusion removal/deletion: removal of those PIs (via row covering) from the PIT that are not to be included in the final expression; the MTs they cover are not removed. Note: some other PIs in the previous PIT, e.g., (0,4), were exclusion-deleted because they covered no MTs in the reduced PIT. How row covering works will be seen shortly.

But K-maps can be used effectively only for functions of up to 6 variables, and they are susceptible to human error, especially beyond 4 variables. QM is more methodical, applies to any # of variables, and does not rely on visual skills to identify PIs or a min-cost set of PIs to cover all MTs. It is thus less prone to human error when done manually; more importantly, it can be programmed and used as a CAD tool for logic minimization.

QM: The PIT—Min-Cost MT Covering (cont.)

[PIT figure: the EPIs PI1 and PI2 are starred (*) as inclusion deletions.]

QM: The PIT—Min-Cost MT Covering (cont.)

[Figure: covering by one PI/row of another, and covering by one MT/col of another; exclusion deletions involving PI1 and PI2. In these cases both PIs cover each other, but that may not always be the case.]

A row Ri is said to cover row Rj if Ri has X's in all the columns that Rj has X's in (and possibly more). The idea behind deleting the covered row Rj is that the covering row Ri does the job of Rj, and probably more. Question for optimality: at what cost?

A column Ci is said to cover column Cj if Ci has X's in all the rows that Cj has X's in (and possibly more). The covering column Ci is deleted. The idea is that any PI that covers/includes Cj will also cover/include the covering column Ci; the latter is thus automatically covered in the final expression when covering Cj.

After these reductions, identify "pseudo-EPIs" (EPIs in the reduced PIT), remove them (inclusion removal: removal from the PIT and inclusion in the min. expression) along with the MTs they cover, and repeat the above steps until all MTs are covered.

Both the row and column covering rules reduce the complexity of the min-cost covering problem. However, row covering does not necessarily preserve optimality (why?), while column covering does (why?).
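The two covering rules can be sketched as one reduction pass over a PIT represented as a dict from PI name to the set of MTs it covers (an illustrative sketch; the cost-based tie-breaking for rows that cover each other is omitted here):

```python
def reduce_pit(pit):
    """One pass of the PIT reduction rules on this slide.

    pit: dict mapping PI name -> set of minterms it covers (its X's).
    Row rule: if row i covers row j (a proper superset of X's), delete
    the covered row j. Col rule: if col i covers col j (X's in every row
    that col j has an X in), delete the covering col i."""
    pit = {pi: set(mts) for pi, mts in pit.items()}
    # Row covering: drop covered (proper-subset) rows.
    for pj in list(pit):
        for pi in pit:
            if pi != pj and pit[pj] < pit[pi]:
                del pit[pj]
                break
    # Column covering: drop covering (proper-superset) columns.
    cols = {m for mts in pit.values() for m in mts}
    rows_of = {c: {pi for pi, mts in pit.items() if c in mts} for c in cols}
    for ci in cols:
        for cj in cols:
            if ci != cj and rows_of[cj] < rows_of[ci]:
                for mts in pit.values():
                    mts.discard(ci)  # delete the covering column ci
                break
    return pit
```

For example, a row whose X's are a subset of another row's is dropped, and a column whose X-rows strictly contain another column's X-rows is dropped.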

QM: The PIT—Min-Cost MT Covering (cont.)

[Figure: covering by one MT/col of another leads to exclusion deletions; PI3 and PI4 then become pseudo-EPIs and are inclusion-deleted (*).] The result: f = PI1 + PI2 + PI3 + PI4.

QM: The PI Table—Min-Cost MT Covering (cont.)

[Figure: a cyclic PIT broken by randomly choosing a PI for inclusion removal (*), after which it is treated the same as a pseudo-EPI. This is actually not a good heuristic; better heuristics than random choice can be used, as shown in subsequent slides. The process includes checking whether pseudo-EPIs exist and inclusion-deleting them as before; keep applying all substeps of Step 2 until none of them apply. A more comprehensive procedure appears later.]

Note: we need to explicitly consider the cost of PIs both in breaking a cyclic PIT and in row coverings in order to better minimize the total # of gate inputs in a 2-level gate implementation of the expression. These considerations appear shortly.

Q-M: Don't Cares

Q-M for functions with don't cares; differences from Q-M without X's:
- Include X's along with MTs when forming PIs using the tabular method (or K-maps).
- Eliminate PIs that are composed of only X's.
- Don't include X's in the PI chart/table.

[K-map figure for the example function, with its MTs and X's; not recoverable from the transcript.]
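The three don't-care rules can be sketched as follows (illustrative code, not from the slides). The usage below assumes the running example has MTs {1, 3, 5, 6, 9} and X's {2, 7, 8, 13}; this split is inferred from the PI-table slide and is labeled as an assumption:

```python
def pis_with_dontcares(minterms, dontcares, n):
    """Q-M with don't cares: form PIs from MTs plus X's, drop PIs made up
    of only X's, and list only real MTs as PI-table columns."""
    def covers(imp, m):
        bits = format(m, f"0{n}b")
        return all(c in ("-", b) for c, b in zip(imp, bits))

    # Tabular method over MTs and X's together.
    current = {format(m, f"0{n}b") for m in set(minterms) | set(dontcares)}
    pis = set()
    while current:
        merged, nxt = set(), set()
        for a in current:
            for b in current:
                diff = [k for k in range(n) if a[k] != b[k]]
                if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
                    nxt.add(a[:diff[0]] + "-" + a[diff[0] + 1:])
                    merged.update((a, b))
        pis |= current - merged
        current = nxt
    # Eliminate PIs composed of only X's.
    pis = {p for p in pis if any(covers(p, m) for m in minterms)}
    # PI table columns contain only real MTs, never X's.
    return {p: {m for m in minterms if covers(p, m)} for p in pis}
```

With the assumed MT/X split, PI (8,9) = 100- covers only MT 9 in the table, since 8 is a don't care.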

Column 1 (groups):    Column 2:            Column 3:
G1: 1, 2, 8           (1,3)  00-1          (1,3,5,7)  0--1  <- PI2
G2: 3, 5, 6, 9        (1,5)  0-01          (1,5,9,13) --01  <- PI3
G3: 7, 13             (1,9)  -001          (2,3,6,7)  0-1-  <- PI4
                      (2,3)  001-
                      (2,6)  0-10
                      (8,9)  100-  <- PI1
                      (3,7)  0-11
                      (5,7)  01-1
                      (5,13) -101
                      (6,7)  011-
                      (9,13) 1-01

Note: Do not remove intermediate implicants that have only X's (why?). Only final PIs with only X's should be removed from the final set of PIs.

PI Table:

       1   3   5   6   9
PI1                    X
PI2    X   X   X
PI3    X       X       X
PI4        X       X

PI4 is an EPI (*, the only cover of MT 6). Covering arrows: cols 1 and 5 cover each other, so delete either col 1 or col 5. PI3 then becomes a pseudo-EPI (*). Do not list any DCs in the columns of the PI table; we do not need to cover them.

QM+: An Extended QM-type Iterative Technique for Min-Cost Covering

Cost of a PI = # of literals + 1 (AND-gate and OR-gate cost portions).

In the PI Table (PIT), identify all EPIs, include them in the min. expression, and delete the MTs they cover. Then:

Repeat
  Repeat
    While no pseudo-EPIs can be identified and any of Rules 2, 1', 1 can be applied do
      Reduce the RPIT further using only the col covering rule (Rule 2) and only row coverings in which the covering row has cost <= that of the covered row (Rule 1'). /* these coverings are optimal; proof? */
      If no rule could be applied above, then apply only one row covering (Rule 1) with the least cost difference between covering and covered row (break ties by max # of remaining MTs covered by the covering row). /* this part is sub-optimal, so it must be applied in a limited manner */
    End While
    Identify EPIs in the RPIT (these new EPIs are called pseudo or secondary EPIs) and, for each, inclusion-delete it and delete from the RPIT the MTs it covers.
  Until [no MTs left OR no pseudo-EPI exists]
  If the RPIT is cyclic, make it acyclic by choosing a PI for inclusion in the min expression that either:
    1. covers the max # of remaining MTs; break ties by smaller cost, and break further ties randomly (Rule 3);
    2. OR has the smallest cost/(# of covered MTs); break ties by max # of MTs covered, and further ties randomly (Rule 4).
Until (all MTs are covered)

Row covering rule (Rule 1): If row i covers row j, delete the covered (subset) row, row j. If rows i and j cover each other, delete the row with higher cost (# of literals).
Rule 1' (strong Rule 1): Apply covering from row i to row j only if row i's cost <= row j's cost. Note that Rule 1' is not in basic QM.
Col covering rule (Rule 2): If col i covers col j, delete the covering (superset) column, col i. If cols i and j cover each other, delete one of them arbitrarily.

Non-optimality example for unrestricted row covering in a 7-variable function: cost(A) = (7-1)+1 = 7, while cost(B+C) = (2+1)+(2+1) = 6. [Figure: PI A, a 32-MT PI B with 15 X's, a 32-MT PI C with 16 X's, and a 1-MT region.]
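The EPI/pseudo-EPI identification step at the heart of the loop above can be sketched as follows (an illustrative sketch; the covering rules and cycle-breaking are separate steps):

```python
def select_epis(pit):
    """Repeatedly find (pseudo-)EPIs in a PIT: any PI that is the only
    remaining cover of some minterm. Inclusion-delete each one along with
    the minterms it covers. Returns (chosen PIs, reduced PIT)."""
    pit = {pi: set(mts) for pi, mts in pit.items()}
    chosen = set()
    changed = True
    while changed:
        changed = False
        cols = {m for mts in pit.values() for m in mts}
        for m in cols:
            owners = [pi for pi, mts in pit.items() if m in mts]
            if len(owners) == 1:          # m makes owners[0] essential
                epi = owners[0]
                chosen.add(epi)
                covered = pit.pop(epi)    # inclusion deletion
                for mts in pit.values():
                    mts -= covered        # delete the MTs it covers
                changed = True
                break
    # Exclusion-delete rows that no longer cover any MT.
    pit = {pi: mts for pi, mts in pit.items() if mts}
    return chosen, pit
```

For example, in a PIT where A is the only cover of MT 0 and C the only cover of MT 3, both are chosen and the leftover row is exclusion-deleted.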

Cyclic PI Table—Examples of Rules 3 & 4

In the next set of examples we will only use Rule 1 (not 1'), just to illustrate the effect of Rules 3 and 4 without mixing in too many alternate rules.

Example 1: a cyclic PI set in the K-map (no EPIs) of a 5-variable function f. Cost of a PI = # of literals + 1. [K-map figure: a=0 and a=1 halves over bc/de.] All red PIs have the same cost (each covers 2 MTs); the blue PI has a smaller cost, as it covers 2 MTs and 2 DCs, making it the min-cost PI. Note: PI4 = (4,6,12,14) → # of literals = 5-2 = 3 → cost(PI4) = 3+1 = 4 [# of literals + 1].

Cyclic PI Table (contd.)

Rule 3: heuristic for breaking the cycle in a cyclic PI chart:
- Choose for inclusion deletion (inclusion in the expression, removal of the PI and all MTs it covers) a PI that covers the largest # of minterms.
- Break a tie by choosing the PI with the least cost.
- Break a further tie by choosing a PI in the tie arbitrarily.

For the previous example, the #'s of MTs covered are the same for all PIs. If cost is not taken into account to break ties (which then amounts to randomly choosing a PI):
- Choose PI1 arbitrarily for inclusion deletion.
- [Reduced PIT figure over the remaining MTs.] Cost = 5+5+5 = 15.
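Rules 3 and 4 can be sketched as a selection function over a cyclic PIT (illustrative names; `pit` maps each PI to the set of MTs it still covers, and `cost` gives each PI's cost):

```python
def break_cycle(pit, cost, rule=3):
    """Pick a PI to inclusion-delete from a cyclic PIT.

    Rule 3: most remaining minterms covered; ties broken by lower cost.
    Rule 4: smallest cost per covered minterm; ties broken by more
    minterms covered. (Any final ties would be broken arbitrarily.)"""
    if rule == 3:
        key = lambda pi: (-len(pit[pi]), cost[pi])
    else:
        key = lambda pi: (cost[pi] / len(pit[pi]), -len(pit[pi]))
    return min(pit, key=key)
```

The two rules can disagree: a PI covering many MTs at high cost wins under Rule 3, while a cheaper PI with a better cost-per-MT ratio wins under Rule 4.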

Cyclic PI Table (contd.)

If cost is used to break ties of max MTs covered (Rule 3), or the choice is made by least cost/MT (Rule 4):
- By either criterion, choose PI4 first (for inclusion deletion).
- [Reduced PIT figure, with the resulting total cost; lower than the 15 obtained above.]

Cyclic PI Table (contd.): Example 2

[5-variable K-map figure: A=0 and A=1 halves over BC/DE, with MTs and X's.] The PIs are:
PI1 = (0,4), PI2 = (4,5,12,13), PI3 = (9,11,13,15), PI4 = (0,16), PI5 = (9,25), PI6 = (12,28), PI7 = (15,31), PI8 = (16,17,24,25), PI9 = (17,19), PI10 = (19,23), PI11 = (23,31), PI12 = (24,28)

Taking the # of MTs covered and cost into consideration (Rule 3; in this case this results in the same solution as applying the alternate Rule 4): choose PI8 first (inclusion deletion).

[PIT figure listing each PI with its cost (# of literals + 1), with the chosen PIs starred (*).] Cost = 22 (AND gates' cost) + 6 (6 PIs → OR-gate cost) = 22+6 = 28.

If the # of MTs covered is ignored and only cost is taken into account, with ties broken randomly (neither Rule 3 nor Rule 4 used): choose PI2 first (inclusion deletion) based only on cost. (Note also: PI2's cost/MT = 4/3, while PI8's is 4/4, so this is not a good choice by Rule 4, nor by Rule 3, as PI2 does not cover the max # of MTs.)

[PIT figure with the same PI costs as before, showing the inclusion deletions (*).]

Reduced chart/table: f = PI2 + PI4 + PI12 + … Again cyclic! Choose PI3 by cost.

[Reduced PIT figure over PI3, PI5, PI7, PI8, PI9, PI10, PI11 with their costs, showing the further inclusion deletions (*).] This cost is 4 more than when using the # of MTs covered as the selection criterion, which led to an acyclic PI table.

Time-Complexity of QM

Let p be the # of PIs and m the # of MTs of an n-variable function.

# of iterations of QM (each iteration consists of EPI/pseudo-EPI inclusion deletions or applications of Rules 1 or 2): in each iteration at least one col or row is deleted (inclusion or exclusion deletions). Thus the # of worst-case iterations is O(max(p, m)) (note: it could be Theta(max(p, m)), but more analysis is needed for that).

Each iteration takes O(max(m*p^2, p*m^2)) time due to the worst case of all pairwise comparisons among rows and cols to determine row and col coverings, and O(m*p) time to determine EPIs (or pseudo-EPIs).

Thus the total time complexity is O(max(p, m) * max(m*p^2, p*m^2)) = O(max(m*p^3, p^2*m^2, p*m^3)) = O(max(m*p^3, p*m^3)), since if p < m then p*m^3 >= p^2*m^2, else (p >= m) m*p^3 >= p^2*m^2.

What are the worst-case values of p and m as functions of n?

Optimality Issues in QM

The two examples here show that bad coverings will not always lead to non-optimal solutions, but they can, as we see later. However, QM (i.e., the technique using Rule 1, Rule 2, EPI/pseudo-EPI deletion, and cycle-breaking only, with no concept of good and bad coverings) is not optimal: it will not give an optimal solution for all problems.

[Figure 1, a cyclic situation: a 2-MT PI A, 4-MT PIs B and C, 8-MT PIs D and E, and an 8-MT PI F with 4 X's, with regions of 2 MTs, 1 MT, and 2 MTs.] Optimal solution in the cyclic situation: either F+B+C or D+E+A (cost = 3n-7+3 = 3n-4). Either can be obtained by applying Rule 3 or 4 to the cyclic situation to first choose any of E, F, or D, followed by Rule 1 (one "bad covering" at the end stage, C covers E or A covers B, has to be used).

[Figure 2, a non-cyclic situation: a 2-MT PI A, 4-MT PIs B and C, an 8-MT PI D, and an 8-MT PI E with 6 X's, with regions of 2 MTs, 1 MT, and 2 MTs.] Optimal solution in the non-cyclic situation for a 4-variable function: the possibilities are E+D+A (cost = 3n-4) or B+C (cost = 2n-2). Since 3n-4 >= 2n-2 for n >= 2, B+C is optimal for n >= 2. QM gives us B+C in spite of "bad" coverings (C covers E; B covers D).

Optimality Issues in QM (contd.)

QM is not optimal due to the sub-optimal use of row covering when the covering row has greater cost than the covered one; i.e., when such "bad" coverings are applied, QM may not give an optimal solution.

Example. [Figure: 8-MT PIs A (4 X's), B (2 X's), and C, and a 32-MT PI D with 30 X's, with regions of 4 MTs, 4 MTs, and 2 MTs.] Optimal solution for this 7-variable function: A+D, of cost (4+1) + (2+1) = 8. QM gives either A+C or B+C, both of cost (4+1) + (4+1) = 10, due to "bad" coverings (initially C, of cost 4+1, covers D, of cost 2+1, and applying Rule 1 to C and D first gets rid of D).

There is no scope here for applying only Rule 1' (good covering). What is the solution if some bad coverings have to be applied? Take a broader, more global view to avoid applying bad coverings?

Optimality Issues in QM (contd.)

There was no scope for applying only Rule 1' (good covering) in the previous example. What is the solution if some bad coverings have to be applied? Take a broader, more global view to avoid applying bad coverings?

How about pair-to-pair covering? This opens up covering possibilities that were not there between single PIs, and good pair-to-pair coverings should also preserve optimality. In the above example, the PI pair A+D will cover all others, like A+C or B+C, using good covering, and will end up as the optimal solution (the only remaining PI pair). So the process to be followed is:
- Use regular QM until the only options are bad covering(s) or the PIT/RPIT is cyclic.
- If that happens, switch to a PIT whose rows are PI pairs and single PIs. What should a PI pair's cost be for 2-level and multi-level circuit implementations? The definitions of good and bad row coverings remain the same; coverings can apply between any two rows. When a PI pair Pij = {Pi, Pj} is inclusion-deleted (due to becoming a pseudo-EPI after one or more good coverings), rows with single PIs of Pij are exclusion-deleted, and the costs of other PI pairs that contain one of the PIs of Pij are appropriately reduced.

However, a similar problem comes up if the only pair-to-pair coverings are bad ones, or if there are none. So this approach can mitigate the sub-optimality of single-PI bad coverings (or of no coverings in a cyclic PIT) via the good pair-to-pair coverings that open up, but not always. We can take the concept further to triple-PIs, etc., but in the extreme this approach "degenerates" into Petrick's algorithm, which essentially considers all possible multi-PI sets that cover all MTs, and thus cover each other (and we only apply good coverings in this case, which are guaranteed to exist, since we are considering all multi-PI sets: the optimal one good-covers all others!).

Optimality Issues in QM (contd.): Proof of optimality of "good" covering

Theorem: If row PIj covers row PIk in a PIT/RPIT and cost(PIj) <= cost(PIk) (the so-called "good" covering), then exclusion-deleting PIk from the RPIT (i.e., not including PIk in the final SOP expression f) leads to a cost of f that is <= the cost of f if PIk is not exclusion-deleted (i.e., if PIk is included in the final expression).

Proof: Two cases:
1. Case 1: PIk is not in any optimal solution. Clearly, in this case, the optimality of the final solution is not affected by exclusion-deleting PIk.
2. Case 2: PIk is in some optimal solution S.
   a) If PIj is also in S, then we can reduce S's cost by deleting PIk from S without affecting MT coverage in S (since PIj covers all the MTs of PIk not covered by "previously" chosen EPIs or pseudo-EPIs). Thus we reach a contradiction with S being an optimal solution.
   b) If PIj is not in S, we can replace PIk by PIj in S without affecting MT coverage in S, and reduce the cost of S or keep it the same. In the latter case, we get another optimal solution S' that does not contain PIk.

Thus either PIk is not in any optimal solution, or, if it is, we can get another optimal solution containing PIj instead of PIk. Hence the optimality of the final solution is not affected by exclusion-deleting PIk. QED

Optimality of QM (contd.): 2nd proof of optimality of "good" covering (a more complex proof)

Theorem: If row PIj covers row PIk in a PIT/RPIT and cost(PIj) <= cost(PIk) (the so-called "good" covering), then exclusion-deleting PIk from the RPIT (i.e., not including PIk in the final SOP expression f) leads to a cost of f that is <= the cost of f if PIk is not exclusion-deleted (i.e., if PIk is included in the final expression).

Proof: For a PI PIr, let MT(PIr) be the set of MTs in the current RPIT covered by PIr.

Case 1: MT(PIj) = MT(PIk). If PIk is included in f, then a smaller- or same-cost solution can be obtained by replacing PIk by PIj and having PIj cover MT(PIk) (thus f covers the same MTs as before after this change).

Case 2: MT(PIk) is a proper subset of MT(PIj). Let MT(PIj - PIk) be the set of MTs in the current RPIT covered by PIj and not by PIk. There are two subcases:
- Case 2a: Less cost is incurred by covering MT(PIj - PIk) by PIj than by any other single or multiple PIs; in other words, PIj is part of the optimal solution. Thus, if PIk is chosen to cover MT(PIk), then this solution's cost can be reduced by deleting PIk from the solution without affecting the coverage of MT(PIk), since these MTs are covered by PIj, which is already part of the optimal solution.
- Case 2b: Less cost is incurred by covering MT(PIj - PIk) by a set S of one or more PIs other than PIj (see the figure below, where S = {PIr, PIm}). This can happen if the PIs in S become pseudo-EPIs later in the QM process due to other MTs (i.e., MTs not in MT(PIj - PIk)), and are thus needed in f in any case; they then cover MT(PIj - PIk) essentially for free. In this case too, as in Case 1, if PIk is part of the final solution (to cover MT(PIk)), then a smaller- or same-cost solution can be obtained by replacing PIk by PIj and having PIj cover MT(PIk) (and thus f covers the same MTs as before after this change).

Thus in all cases, PIj can replace PIk if the latter is present in f, resulting in a smaller- or same-cost solution while still covering all the MTs of PIk (and possibly more). Hence exclusion-deleting PIk due to "good" covering by PIj leads to a cost of the final SOP expression f that is <= the cost of f if PIk is included in it. Thus good coverings retain the optimality of a solution. QED

[Figure for Case 2b: a PIT fragment in which PIk has X's in two columns; PIj has X's in those columns plus the columns of MT(PIj - PIk); and PIr and PIm (the set S) have X's in the MT(PIj - PIk) columns, which are current or future pseudo-singleton columns.]