Logic Synthesis – Presentation transcript:

1 Logic Synthesis
Minimization of Boolean logic
– Technology-independent mapping: objective is to minimize # of implicants, # of literals, etc. Not directly tied to a precise technology (# of transistors), but correlated – consistent with objectives for any technology
– Technology-dependent mapping: tied to a precise technology/library
Technology-independent mapping
– Two-level minimization – sum of products (SOP) / product of sums (POS)
  Karnaugh maps – "visual" technique
  Quine-McCluskey method – algorithmic
  Heuristic minimization – fast and "pretty good," but not exact
– Multi-level minimization

2 Basic Definitions
Specification of a function f
– On-set f_on: set of input combinations for which f evaluates to 1
– Off-set f_off: set of input combinations for which f evaluates to 0
– Don't-care set f_dc: set of input combinations over which the function is unspecified
Cubes
– Can represent a function of k variables over a k-dimensional space
– Example: f(x1,x2,x3) = Σ m(0,3,5,6) + d(7)
  f_on = {0,3,5,6}; f_dc = {7}; f_off = {1,2,4}
– Graphically: the vertices 000 through 111 of the 3-dimensional cube over x1, x2, x3 (cube diagram on the slide)

3 k-cubes
k-cube: a k-dimensional subset of f_on
– 0-cube = vertex in f_on
– k-cube = a pair of (k-1)-cubes with a Hamming distance of 1
Examples
– A 0-cube is a vertex
– A 1-cube is an edge
– A 2-cube is a face
– A 3-cube is a 3D cube
– A 4-cube is harder to visualize, but can be drawn as a pair of 3-cubes with corresponding vertices joined (figure on the slide)

4 More definitions
Implicant
– A k-cube whose vertices all lie in f_on ∪ f_dc and which contains at least one element of f_on
Prime implicant
– A k-cube implicant such that no (k+1)-cube containing it is an implicant
Cover
– A set of implicants whose union contains all elements of f_on and no elements of f_off (it may contain some elements of f_dc)
Minimum cover
– A cover of minimum cost (e.g., cardinality)
– A minimum-cardinality cover composed only of prime implicants exists (if not, some implicants can be combined into larger prime implicants)

5 Quine-McCluskey Method
Illustration by example: f(x1,x2,x3,x4) = Σ m(0,5,7,8,9,10,11,14,15)

0-cubes          1-cubes               2-cubes
0  (0000)  x     0,8   (-000)   A      8,9,10,11    (10--)  D
5  (0101)  x     5,7   (01-1)   B      10,11,14,15  (1-1-)  E
7  (0111)  x     7,15  (-111)   C
8  (1000)  x     8,9   (100-)   x
9  (1001)  x     8,10  (10-0)   x
10 (1010)  x     9,11  (10-1)   x
11 (1011)  x     10,11 (101-)   x
14 (1110)  x     10,14 (1-10)   x
15 (1111)  x     11,15 (1-11)   x
                 14,15 (111-)   x

An "x" means the cube has been combined into a larger cube; the lettered cubes A–E are the prime implicants.
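
To make the combining step concrete, here is a small Python sketch (function and variable names are my own, not from the slides) that repeatedly merges implicants differing in exactly one specified bit and collects the cubes that can no longer be enlarged as prime implicants:

def combine(a, b):
    # Merge two cube strings that differ in exactly one position where both are
    # specified (e.g. '1000' and '1001' give '100-'); otherwise return None.
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        i = diff[0]
        return a[:i] + '-' + a[i+1:]
    return None

def prime_implicants(minterms, nbits):
    cubes = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while cubes:
        next_cubes, used = set(), set()
        for a in cubes:
            for b in cubes:
                c = combine(a, b)
                if c is not None:
                    next_cubes.add(c)
                    used.update({a, b})
        primes |= cubes - used          # cubes that could not be enlarged are prime
        cubes = next_cubes
    return primes

# f(x1,x2,x3,x4) = sum m(0,5,7,8,9,10,11,14,15)
print(sorted(prime_implicants([0, 5, 7, 8, 9, 10, 11, 14, 15], 4)))
# -> the five prime implicants A-E from the table: -000, 01-1, -111, 10--, 1-1-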

6 Prime implicant table
Essential Prime Implicants (PIs)
– A PI that is the only one covering some minterm (encircled in the table)
– Must be included in any cover
– Here, the essential PIs A, B, D, E form a cover!
– WARNING: this was luck – in general, essential PIs will not form a cover!

minterm   A (0,8)  B (5,7)  C (7,15)  D (8,9,10,11)  E (10,11,14,15)
0         x
5                  x
7                  x        x
8         x                           x
9                                     x
10                                    x              x
11                                    x              x
14                                                   x
15                          x                        x
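
A short Python sketch of reading the essential PIs off such a table (helper names are mine): a prime implicant is essential when it is the only one covering some minterm.

def covers(cube, minterm, nbits):
    # True if the cube string (e.g. '10--') contains the given minterm.
    bits = format(minterm, f'0{nbits}b')
    return all(c in ('-', b) for c, b in zip(cube, bits))

def essential_primes(primes, minterms, nbits):
    essential = set()
    for m in minterms:
        covering = [p for p in primes if covers(p, m, nbits)]
        if len(covering) == 1:              # m has only one possible cover
            essential.add(covering[0])
    return essential

primes = ['-000', '01-1', '-111', '10--', '1-1-']      # A..E from the table
print(essential_primes(primes, [0, 5, 7, 8, 9, 10, 11, 14, 15], 4))
# A, B, D, E: {'-000', '01-1', '10--', '1-1-'} (printed in some set order)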

7 Reducing the prime implicant table
PI table reduction
– In general, essential PIs will not form a cover
– Reduce the table by removing the essential PIs and the minterms they cover
Further reduction: can also remove
– Dominating rows: row m1 dominates row m2 if every PI covering m2 also covers m1; the dominating row m1 can then be dropped
– Dominated columns: column J dominates column K if J covers every minterm that K covers; the dominated column K can then be dropped
(The slide illustrates each case with a small PI table, one over PIs P, Q, R, S and one over PIs J, K, L, M; the exact table entries are not recoverable from the transcript.)

8 Branch-and-bound algorithm
May still not have a cover after these reductions
– Example: the table from the previous slide after removing dominating row m1 and the consequently empty column P, leaving PIs Q, R, S and minterms m2, m3, m4, each covered by two of the three PIs
Can enumerate the possibilities using a search tree
– Binary search tree: at each node, include or exclude a PI
– Branching on Q here: excluding Q forces the cover {R,S}; including Q leaves a reduced PI table containing only m4 (covered by R and S), yielding the covers {Q,R} and {Q,S}
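
A minimal Python sketch of the include/exclude search, with a crude bound on the number of implicants chosen so far (the helper names and the sample table are my own illustration, chosen to be consistent with the slide):

import math

def min_cover(primes, minterms, covers):
    # Exhaustive include/exclude search for a minimum-cardinality cover.
    # covers(p, m) says whether prime implicant p covers minterm m.
    best = {'cover': None, 'cost': math.inf}

    def search(chosen, remaining, candidates):
        if not remaining:                                # every minterm is covered
            if len(chosen) < best['cost']:
                best['cover'], best['cost'] = list(chosen), len(chosen)
            return
        if len(chosen) + 1 > best['cost'] or not candidates:
            return                                       # cannot improve, or dead end
        p, rest = candidates[0], candidates[1:]
        # Branch 1: include p
        search(chosen + [p], {m for m in remaining if not covers(p, m)}, rest)
        # Branch 2: exclude p (only if the remaining PIs can still cover everything)
        if all(any(covers(q, m) for q in rest) for m in remaining):
            search(chosen, remaining, rest)

    search([], set(minterms), list(primes))
    return best['cover']

# Hypothetical table consistent with the slide: Q covers m2,m3; R covers m2,m4; S covers m3,m4
table = {'Q': {'m2', 'm3'}, 'R': {'m2', 'm4'}, 'S': {'m3', 'm4'}}
print(min_cover(['Q', 'R', 'S'], ['m2', 'm3', 'm4'], lambda p, m: m in table[p]))
# -> a minimum cover of size two, e.g. ['Q', 'R']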

9 Branch-and-bound algorithm (contd.)
ESPRESSO-EXACT
– Implementation of the branching algorithm from the previous slide
– Traversal to a leaf node of the tree yields a cover (though possibly not a minimum-cost cover)
– ESPRESSO-EXACT adds bounding at any node: if Cost_node + LB_subtree > Best_cost_so_far, do not search the subtree
– Cost_node = cost (e.g., number of implicants) chosen so far
– LB_subtree = a lower bound on the cost of the subtree (can be determined by solving a maximal independent set problem)
– Best_cost_so_far = cost of the best cover found so far through the traversal of the search tree; initialized to ∞

10 Heuristic Logic Minimization
Apply a sequence of logic transformations to reduce a cost function
Transformations
– Expand
  Input expansion: enlarge a cube by combining smaller cubes; reduces the total number of cubes
  Output expansion: use a cube for one output to cover another
– Reduce: break a cube up into sub-cubes
  Increases the total number of cubes
  The hope is to allow an overall cost reduction in a later Expand operation
– Irredundant: remove redundant cubes from a cover

11 Example
Expand: input expansion
(The cube diagram on the slide marks off-set and on-set members of a 3-variable function over x, y, z)

Initial cover        After expanding 000|1      After removing the redundant cube
x y z | f            x y z | f                  x y z | f
0 0 0 | 1            0 - 0 | 1                  0 - 0 | 1
0 1 - | 1            0 1 - | 1                  - 1 1 | 1
- 1 1 | 1            - 1 1 | 1
                     (0 1 - | 1 is now redundant)

Examples from G. Hachtel and F. Somenzi, "Logic Synthesis and Verification Algorithms," Kluwer Academic Publishers, Boston, MA, 1996.

12 Example
Expand: output expansion
– Two output functions of three variables each, with the initial covers shown below; output expansion lets the F1 cube 0-1 also cover part of F2
(Cube diagrams for F1 and F2 appear on the slide)

Initial cover          After output expansion
x y z | F1 F2          x y z | F1 F2
0 - 1 | 1  0           0 - 1 | 1  1
1 - 0 | 1  0           1 - 0 | 1  0
0 0 - | 0  1           0 0 - | 0  1
- 0 0 | 0  1           - 0 0 | 0  1
- 1 1 | 0  1           - 1 1 | 0  1

Examples from G. Hachtel and F. Somenzi, "Logic Synthesis and Verification Algorithms," Kluwer Academic Publishers, Boston, MA, 1996.

13 Other operators
Reduce: a cube is shrunk into smaller sub-cubes, anticipating a future Expand operation
Irredundant: a cube identified as redundant is removed
(The slide illustrates both operators on cube diagrams.)

14 Example of an application of operators
A cover of 10 cubes appears to be reduced (splitting cubes, giving 12 cubes) and then expanded, yielding a cover of 9 cubes.
(The three intermediate covers are listed on the slide; the exact cube tables are not reproduced here.)
Example from S. Devadas, A. Ghosh and K. Keutzer, "Logic Synthesis," McGraw-Hill, New York, NY, 1994.

15 Example of a minimization loop
F = Expand(F, D)
F = Irredundant(F, D)
do {
  Cost = |F|
  F = Reduce(F, D)
  F = Expand(F, D)
  F = Irredundant(F, D)
} while (|F| < Cost)
F = Make_sparse(F, D)

Make_sparse reduces the output parts of a cube (e.g., from 11 to 10) to remove redundant connections.
Example (the output part of the second cube is reduced):
Before                 After
x y z | F1 F2          x y z | F1 F2
1 1 - | 1  0           1 1 - | 1  0
1 - 1 | 1  1           1 - 1 | 0  1
0 - 0 | 0  1           0 - 0 | 0  1

16 Implementation of operators
Uses the "unate recursive paradigm"
Definition: Shannon expansion
– F(x1, x2, …, xi, …, xn) = xi · F(x1, x2, …, 1, …, xn) + xi' · F(x1, x2, …, 0, …, xn)
  = xi · F_xi + xi' · F_xi' (notationally)
Unate function
– Positive unate in variable xi: F_xi ⊇ F_xi', so that F = xi · F_xi + F_xi'
– Negative unate in variable xi: F_xi ⊆ F_xi', so that F = F_xi + xi' · F_xi'
– Unate function: positive unate or negative unate in each variable
Unate recursive paradigm
– Recursively perform Shannon expansions about the variables until a unate function is obtained
– Why unate functions? Various operations (tautology checking, complementation, etc.) are "easy" for unate functions

17 Unateness
Example
– Unate cover (not minimum): every column has only 1's and –'s, or only 0's and –'s
w x y z
0 – 1 –
– – 1 –
0 – – 1
– Nonunate cover: nonunate in y and z (both 1 and 0 appear in those columns); the slide's second table shows such a cover over w, x, y, z
Note on notation: the tables represent the on-set
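
The column test translates directly into code; a small Python sketch with my own names:

def is_unate(cover):
    # cover: list of equal-length cube strings over {'0', '1', '-'}
    for col in range(len(cover[0])):
        values = {cube[col] for cube in cover} - {'-'}
        if values == {'0', '1'}:            # both polarities appear: binate column
            return False
    return True

print(is_unate(['0-1-', '--1-', '0--1']))   # True: the unate cover from the slide
print(is_unate(['0-1-', '1-0-']))           # False: binate in w and y (my own example)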

18 Unate recursive paradigm
Start with a cover over a, b, c, d, e:
a b c d e
1 – 1 – 0
– – 0 1 –
– 1 1 0 1
1 – 1 0 1
Expand about the binate variable c:
F_c' = { – – – 1 – }   (unate)
F_c  = { 1 – – – 0,  – 1 – 0 1,  1 – – 0 1 }   (still binate in e)
Expand F_c about e:
(F_c)_e  = { – 1 – 0 –,  1 – – 0 – }   (unate)
(F_c)_e' = { 1 – – – – }   (unate)

19 Example: Unate complementation
(Example to show that unate operations are "easy")
Basic result: if F = x·F_x + x'·F_x', then F' = x·(F_x)' + x'·(F_x')'
Proof sketch: let G = x·(F_x)' + x'·(F_x')' and show that F + G = 1 and F·G = 0
Example, for the cover F over x, y, z:
x y z
1 0 –
1 1 0
0 – 1
Cofactor about x:  F_x = { – 0 –,  – 1 0 },  F_x' = { – – 1 }
F_x is still binate in y, so cofactor it:  (F_x)_y = { – – 0 },  (F_x)_y' = { – – – }
Complement the leaves:  ((F_x)_y)' = { – – 1 },  ((F_x)_y')' = empty set (complement of a tautology),  (F_x')' = { – – 0 }
Merge back up:  (F_x)' = y·{ – – 1 } = { – 1 1 }
F' = x·(F_x)' + x'·(F_x')' = { 1 1 1,  0 – 0 }
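
A simplified Python sketch of this recursion (helper names are mine). For brevity it recurses on cofactors all the way down to trivial covers rather than stopping at unate covers as the unate recursive paradigm would, but it applies the same rule F' = x·(F_x)' + x'·(F_x')':

def cofactor(cover, var, val):
    # Cofactor of a list of cube strings w.r.t. variable index var set to val ('0' or '1').
    out = []
    for cube in cover:
        if cube[var] in ('-', val):
            out.append(cube[:var] + '-' + cube[var+1:])
    return out

def complement(cover, nvars):
    # Terminal cases: empty cover -> tautology; an all-'-' cube -> empty complement.
    if not cover:
        return ['-' * nvars]
    if any(set(cube) == {'-'} for cube in cover):
        return []
    # Split about the first variable that actually appears in the cover.
    var = next(i for i in range(nvars) if any(c[i] != '-' for c in cover))
    pos = complement(cofactor(cover, var, '1'), nvars)   # (F_x)'
    neg = complement(cofactor(cover, var, '0'), nvars)   # (F_x')'
    # F' = x (F_x)' + x' (F_x')'
    return ([c[:var] + '1' + c[var+1:] for c in pos] +
            [c[:var] + '0' + c[var+1:] for c in neg])

# Complement of the slide's cover F = {10-, 110, 0-1}
print(complement(['10-', '110', '0-1'], 3))   # ['111', '0-0'], matching F' above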

20 Cofactors with respect to sets of cubes
Can generalize the Shannon expansion to F = c·F_c + c'·F_c', where c is a cube (or, more generally, a set of cubes)
Result: c ⊆ F  ⟺  F_c is a tautology
Example of finding a cofactor with respect to a cube:
– Cofactor of F (below) with respect to c = 1 1 – –
– F_c contains the elements of F_on that agree with c at all non-don't-care positions (in this example, in variables p and q)
– If a cube agrees: replace those non-don't-care positions by "–" and copy the rest of the cube
Following this prescription:
F:            F_c:
p q r s       p q r s
1 1 0 –       – – 0 –
0 1 – 0       – – 1 1
1 1 1 1
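
A small Python sketch of this prescription (names are mine; the third cube of F is as reconstructed in the table above):

def cofactor_wrt_cube(cover, c):
    # Cofactor of a cover (list of cube strings) with respect to the cube c.
    result = []
    for cube in cover:
        # Skip cubes that conflict with c in some position specified by both.
        if any(x != '-' and y != '-' and x != y for x, y in zip(cube, c)):
            continue
        # Positions specified in c become don't cares; the rest are copied.
        result.append(''.join('-' if y != '-' else x for x, y in zip(cube, c)))
    return result

F = ['110-', '01-0', '1111']
print(cofactor_wrt_cube(F, '11--'))    # ['--0-', '--11']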

21 Checking for Tautology
Checking for tautology:
1. F is a tautology  ⟺  F_xj is a tautology and F_xj' is a tautology
2. Let C be a unate cover of F. Then F is a tautology  ⟺  C has a row of all '–'s
Example: a cover over p, q, r, s is split about a binate variable; each of the two cofactors contains (or reduces to) a row of all '–'s and is therefore a tautology. Since all leaf nodes are tautologies, the function is a tautology. (The specific cover on the slide is not fully recoverable from the transcript.)
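
A compact Python sketch of the recursive check (my own names): split about a binate variable, and declare a unate leaf a tautology exactly when it contains an all-'–' row.

def cofactor(cover, var, val):
    out = []
    for cube in cover:
        if cube[var] in ('-', val):
            out.append(cube[:var] + '-' + cube[var+1:])
    return out

def is_tautology(cover, nvars):
    if not cover:
        return False
    if any(set(cube) == {'-'} for cube in cover):
        return True                          # a row of all '-' covers everything
    # Find a binate variable (both 0 and 1 appear in its column).
    for var in range(nvars):
        col = {cube[var] for cube in cover}
        if '0' in col and '1' in col:
            return (is_tautology(cofactor(cover, var, '1'), nvars) and
                    is_tautology(cofactor(cover, var, '0'), nvars))
    return False                             # unate cover with no all-'-' row

print(is_tautology(['1-', '01', '00'], 2))   # True: x + x'y + x'y' (my own example)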

22 The Expand Operator and Tautology
Consider the function f with cover G and don't-care set f_dc specified in the table below
– Objective: to expand 0 0 0 | 1 to 0 – 0 | 1
– Need to check that the expansion is valid, i.e., it does not overlap with f_off
– Define c_i = 0 0 0 | 1
– d_i = difference between c_i and 0 – 0 | 1, here 0 1 0 | 1
– Need to know whether d_i ⊆ Q = (G \ c_i) ∪ f_dc: if so, we can expand
– In other words, check whether Q_{d_i} is a tautology
– Here Q_{d_i} is easily verified to be – – – | 1, which is a tautology

x y z | f
0 0 0 | 1
0 1 – | 1
– 1 1 | 1
1 0 0 | –

23 The Irredundant Operator and Tautology
Objective: to check whether a cube c_i in a cover G of function F is redundant
– In other words, check whether c_i ⊆ Q = (G \ c_i) ∪ f_dc
– In other words, check whether Q_{c_i} is a tautology

24 Multilevel logic optimization Motivation –Two-level optimization (SOP, POS) is too limiting –Useful for structures like PLA’s, but most circuits are not designed in that way –May require gates with a large number of inputs –Restricts “sharing” of logic gates between outputs –Multilevel optimization permits more than two levels of gates between the inputs and the outputs –Necessarily heuristic Reference for this part: G. De Micheli, “Synthesis and Optimization of Digital Circuits,” McGraw-Hill, New York, NY, 1994.

25 Basic Transformations
Elimination: r = p + a'; s = r + b'  →  s = p + a' + b'
Decomposition: v = a'd + bd + cd + a'e  →  j = a' + b + c; v = jd + a'e
Extraction: p = ce + de; t = ac + ad + bc + bd + e  →  k = c + d; p = ke; t = ka + kb + e
Simplification: u = q'c + qc' + qc  →  u = q + c
Substitution: t = ka + kb + e; q = a + b  →  t = kq + e
(Others exist; these are the most common)

26 Transformations Apply the transformations heuristically Two methods: –Algorithmic: algorithm for each transformation type –Rule-based: according to a set of rules injected into the system by a human designer

27 A typical synthesis script
script.rugged in the SIS synthesis system from Berkeley:
sweep; eliminate -1
simplify -m nocomp
eliminate -1
sweep; eliminate 5
simplify -m nocomp
resub -a
fx
resub -a; sweep
eliminate -1; sweep
full_simplify -m nocomp

Explanation
sweep: eliminates single-input vertices (w = x; y = w + z becomes y = x + z)
eliminate k: eliminate as defined earlier; eliminates vertices so that the area estimate increases by no more than k
simplify -m nocomp: simplify as defined earlier; invokes ESPRESSO to minimize without computing the full off-set ("nocomp")
full_simplify -m nocomp: as above, but uses a larger don't-care set
resub -a: algebraic substitution for vertex pairs
fx: extracts double-cube and single-cube expressions

28 Algebraic model
Also known as weak division
Manipulation according to the rules of polynomial algebra
Support of a function
– Sup(f) = set of all variables v that occur as v or v' in a minimal representation of f
– Sup(ab+c) = {a,b,c}; Sup(ab+a'b) = {b}
– f is orthogonal to g (f ⊥ g) if Sup(f) ∩ Sup(g) = ∅
g is an algebraic (or weak) divisor of f when
– f = g·h + r, provided h ≠ ∅ and g ⊥ h
– g divides f evenly if r = ∅
– Example: if f = ab + ac + d and g = b + c, then f = ag + d (here h = a, r = d)
The quotient, loosely referred to as f/g, is the largest set of cubes h such that f = g·h + r

29 Computing the quotient f/g
Given f = {set of cubes c_i}, g = {set of cubes a_i}
Define h_i = {b_j | a_i·b_j ∈ f} for all cubes a_i ∈ g (all multipliers of a cube a_i that produce elements of f)
f/g = ∩_{i=1..|g|} h_i
Example
– f = abc + abde + abh + bcd, i.e., f = {abc, abde, abh, bcd}
– g = c + de + h, i.e., g = {c, de, h}
– h_1 = f/c = ab + bd, i.e., {ab, bd}; h_2 = f/de = ab, i.e., {ab}; h_3 = f/h = ab, i.e., {ab}
– f/g = h_1 ∩ h_2 ∩ h_3 = {ab}
– (Check: f = ab(c + de + h) + bcd = (f/g)·g + r)
Complexity of this method: |f|·|g|
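
A short Python sketch of this procedure (the cube representation and names are mine): cubes are sets of literals, f/a_i collects the multipliers of a_i that land in f, and the quotient is the intersection of these sets.

def divide_by_cube(f_cubes, a):
    # Quotient of a cover by a single cube a: all multipliers b with a*b in f.
    # Cubes are frozensets of literals, e.g. frozenset({'a', 'b'}) for ab.
    return {frozenset(c - a) for c in f_cubes if a <= c}

def weak_divide(f_cubes, g_cubes):
    # Algebraic (weak) division: f/g = intersection of f/a_i over the cubes a_i of g.
    quotients = [divide_by_cube(f_cubes, a) for a in g_cubes]
    return set.intersection(*quotients) if quotients else set()

f = [frozenset(c) for c in ({'a','b','c'}, {'a','b','d','e'}, {'a','b','h'}, {'b','c','d'})]
g = [frozenset(c) for c in ({'c'}, {'d','e'}, {'h'})]
print(weak_divide(f, g))    # {frozenset({'a', 'b'})}, i.e. f/g = ab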

30 Doing this more efficiently
Encode each a_i ∈ g with an integer code that has a unique bit position for each literal in Sup(g)
– g = {c, de, h}; Sup(g) = {c, d, e, h}; encoding = {1000, 0110, 0001}
Encode each c_i ∈ f with the same encoding
– f = {abc, abde, abh, bcd}; encoding = {1000, 0110, 0001, 1100}
Sort {a_i, c_j} by their encodings to get
– 1100: bcd
– 1000: c, abc → h_1 = ab
– 0110: de, abde → h_2 = ab
– 0001: h, abh → h_3 = ab
(Not the same h_i's as before, but the intersection is the same)
Complexity = O(n log n), where n = |f| + |g|

31 Finding good divisors
Now that we know how to divide – how do we find good divisors?
Primary divisors
– P(f) = {f/c | c is a cube}
– Example: f = abc + abde
  f/a = bc + bde is a primary divisor
  f/ab = c + de is a primary divisor
– g is cube-free if the only cube dividing g evenly (i.e., with remainder zero) is 1. Example: c + de
Kernels
– K(f) = set of primary divisors that are cube-free
– f/ab belongs to the set of kernels; f/a does not (bc + bde has the cube factor b)
– Kernels are good candidates for divisors

32 Kernels and co-kernels
For f = abc + abde, f/ab = c + de
– c + de is a kernel
– ab is a co-kernel
The co-kernel of a kernel is not unique
– f = acd + bcd + ae + be
– f/a = f/b = cd + e    Kernel = cd + e    Co-kernels = {a, b}
– f/cd = f/e = a + b    Kernel = a + b     Co-kernels = {cd, e}

33 Finding all kernels
Kernel(f)
  Find the cube c_f with the largest number of literals such that f/c_f is cube-free
  K = Kernel1(0, f/c_f)
  if (f is cube-free) return (f ∪ K)
  return (K)

Kernel1(j, g)
  R = {g}
  for (i = j+1; i ≤ n; i++)
    if (the i-th literal l_i appears in 0 or 1 terms) continue
    c_e = the cube with the maximum number of literals that evenly divides g/l_i
    if (l_k is not in c_e for all k ≤ i)    /* otherwise this kernel was already identified */
      R = R ∪ Kernel1(i, (g/l_i)/c_e)
  return (R)
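
A simplified Python sketch of kernel extraction along these lines (the representation and names are mine; the pruning mirrors the "if" test above, though details may differ slightly from the slide's Kernel1). Expressions are sets of cubes, and cubes are frozensets of literals.

def literal_counts(expr):
    counts = {}
    for cube in expr:
        for lit in cube:
            counts[lit] = counts.get(lit, 0) + 1
    return counts

def divide_by_cube(expr, d):
    # Quotient of expr by the cube d.
    return {frozenset(c - d) for c in expr if d <= c}

def largest_common_cube(expr):
    cubes = list(expr)
    common = set(cubes[0])
    for c in cubes[1:]:
        common &= c
    return frozenset(common)

def kernels(expr, literals=None, start=0):
    # All kernels of expr; expr itself is included once made cube-free.
    if literals is None:
        expr = divide_by_cube(expr, largest_common_cube(expr))   # make cube-free
        literals = sorted(literal_counts(expr))
    result = {frozenset(expr)}
    counts = literal_counts(expr)
    for i in range(start, len(literals)):
        lit = literals[i]
        if counts.get(lit, 0) <= 1:
            continue
        q = divide_by_cube(expr, frozenset({lit}))
        ce = largest_common_cube(q)                 # largest cube dividing expr/lit evenly
        if any(literals[k] in ce for k in range(i)):
            continue                                # this kernel was already identified
        result |= kernels(divide_by_cube(q, ce), literals, i + 1)
    return result

# f = acd + bcd + ae + be (the example from the co-kernel slide)
f = {frozenset(c) for c in ({'a','c','d'}, {'b','c','d'}, {'a','e'}, {'b','e'})}
for k in kernels(f):
    print(' + '.join(''.join(sorted(cube)) for cube in k))
# prints the kernels acd+bcd+ae+be, cd+e, and a+b (cube order may vary)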

34 Example
F = abc(d+e)(k+l) + agh + m
F/a = bc(d+e)(k+l) + gh
F/ab = c(d+e)(k+l)   [leads to kernels (d+e) and (k+l)]
F/ac = b(d+e)(k+l)   [triggers the "if" condition, which detects that this kernel was already found and prunes the search tree here]
(The slide shows the corresponding recursion tree branching on the literals a, b, c.)

35 Example: extraction and resubstitution
F1 = ab(c(d+e) + f + g) + h
F2 = ai(c(d+e) + f + j) + k
1. Generate kernels for F1, F2
2. Select K1 ∈ K(F1) and K2 ∈ K(F2) such that K1 ∩ K2 is not a cube
3. Set the new variable v to K1 ∩ K2
4. Rewrite Fi = v·(Fi/v) + ri
For the example:
v1 = d + e:  F1 = ab(cv1 + f + g) + h;  F2 = ai(cv1 + f + j) + k
v2 = cv1 + f:  F1 = ab(v2 + g) + h;  F2 = ai(v2 + j) + k

36 Generic factorization algorithm
Factor(F)
  if (F has no factor) return (F)
  D = Divisor(F)
  (Q, R) = Divide(F, D)        /* F = QD + R */
  return (Factor(Q), Factor(D), Factor(R))

The "Divisor" function identifies divisors, for example based on a kernel-based algorithm
The "Divide" function may be algebraic (weak) division

37 Don’t care based optimization: an outline Two types of don’t cares considered here –Satisfiability don’t cares –Observability don’t cares –Others: SPFD’s (sets of pairs of functions to be differentiated) Satisfiability don’t cares (SDC’s) –Example: Consider Y 1 = a’b’ Y 2 = c’d’ Y 3 = Y 1 ’Y 2 ’ Since Y 1 = a’b’ is enforced by one equation, the minterms of “Y 1  (a’b’)” can be considered to be don’t cares In other words, Y 1 a’b’ + Y 1 (a+b) corresponds to a don’t care Similarly, “Y 2  (c’d’)” is also a don’t care

38 Don’t care based optimization (contd.) Observability don’t cares (ODC’s) –For r = p+q, if p = 1, then q is an observability don’t care –Similarly, can define ODC’s for AND operations, etc. Example of don’t care based optimization –y 1 = xw, y 2 = x’+y, f = y 1 +y 2 –Cost = 1 AND + 2 OR’s + 1 NOT –Minimize function for y 1 –SDC(y 1 ) = y 2  (x’+y) = y 2 xy’+y 2 ’x’+y 2 ’y –ODC(y 1 ) = y 2 w x y y 2 y1y1 1 1 – –1 – – – 1– – 1 0 1– – 0 – – – 1 0– y 1 = w ODC SDC’s y 1 = w, y 2 = x’+y, f = y 1 + y 2 y 2 = x’+y, f = w + y 2 (eliminate) (Cost: 2 OR’s + 1 NOT)

39 Acknowledgements Hardly anything in these notes is original, and they borrow heavily from sources such as –G. De Micheli, “Synthesis and Optimization of Digital Circuits,” McGraw-Hill, New York, NY, 1994. –S. Devadas, A. Ghosh and K. Keutzer, “Logic Synthesis,” McGraw-Hill, New York, NY, 1994 –G. Hachtel and F. Somenzi, “Logic Synthesis and Verification Algorithms,” Kluwer Academic Publishers, Boston, MA, 1996. –Notes from Prof. Brayton's synthesis class at UC Berkeley (http://www-cad.eecs.berkeley.edu/~brayton/courses/219b/219b.html) –Notes from Prof. Devadas's CAD class at MIT (http://glenfiddich.lcs.mit.edu/~devadas/6.373/lectures) –Possibly other sources that I may have omitted to acknowledge (my apologies)

