
1 COE 561 Digital System Design & Synthesis: Multiple-Level Logic Synthesis. Dr. Muhammad E. Elrabaa, Computer Engineering Department, King Fahd University of Petroleum & Minerals.

2 Outline …
- Representations.
- Taxonomy of optimization methods: goals (area/delay); algorithms (algebraic/Boolean); rule-based methods.
- Examples of transformations.
- Algebraic model: algebraic division, algebraic substitution, single-cube extraction, multiple-cube extraction, decomposition, factorization, fast extraction.

3 … Outline
- External and internal don't care sets.
- Controllability don't care sets.
- Observability don't care sets.
- Boolean simplification and substitution.
- Testability properties of multiple-level logic.
- Synthesis for testability.
- Network delay modeling.
- Algorithms for delay minimization.
- Transformations for delay reduction.

4 Motivation
- Combinational logic circuits are very often implemented as multiple-level networks of logic gates.
- The multiple-level structure provides several degrees of freedom in logic design, exploited in optimizing area and delay; different timing requirements can be placed on input/output paths.
- Multiple-level networks are viewed as interconnections of single-output gates: a single gate type (e.g. NANDs or NORs), instances of a cell library, or macro cells.
- Multilevel optimization is divided into two tasks: first, optimization neglecting implementation constraints, assuming loose models of area and delay; second, optimization in which constraints on the usable gates are taken into account.

5 Circuit Modeling
- Logic network: interconnection of logic functions; a hybrid structural/behavioral model.
- Bound (mapped) network: interconnection of logic gates; a structural model.
- Example of a bound network and a logic network: a, b, c are primary inputs; x and y are primary outputs.

6 Example of a Logic Network

7 Network Optimization
- Two-level logic: area and delay are proportional to cover size; achieving a minimum (or irredundant) cover corresponds to optimizing area and speed; achieving an irredundant cover corresponds to maximizing testability.
- Multiple-level logic: minimal-area implementations do not in general correspond to minimum-delay implementations, and vice versa. Typical objectives: minimize an area (power) estimate subject to delay constraints (power and area are strongly correlated); minimize maximum delay subject to area (power) constraints; minimize power consumption subject to delay constraints; maximize testability.

8 Estimation
- Area: number of literals (corresponds to the number of polysilicon strips, i.e. transistors); number of functions/gates.
- Delay: number of stages (unit delay per stage); refined gate delay models (relating delay to function complexity and fanout); sensitizable paths (detection of false paths); wiring delays estimated using statistical models.

9 Problem Analysis
- Multiple-level optimization is hard.
- Exact methods: exponential complexity; impractical.
- Approximate methods: heuristic algorithms; rule-based methods.
- Strategies for optimization: improve the circuit step by step with circuit transformations that preserve network behavior. Methods differ in the types of transformations and in the selection and order of transformations.

10 Elimination
- Eliminate one function from the network.
- Perform variable substitution.
- Example: s = r + b'; r = p + a' ⇒ s = p + a' + b'.

11 Decomposition
- Break one function into smaller ones.
- Introduce new vertices in the network.
- Example: v = a'd + bd + c'd + ae' ⇒ j = a' + b + c'; v = jd + ae'.

12 Factoring
- Factoring is the process of deriving a factored form from a sum-of-products form of a function.
- Factoring is like decomposition except that no additional nodes are created.
- Example: F = abc + abd + a'b'c + a'b'd + ab'e + ab'f + a'be + a'bf (24 literals). After factorization: F = (ab + a'b')(c + d) + (ab' + a'b)(e + f) (12 literals).

13 Extraction …
- Find a common sub-expression of two (or more) expressions.
- Extract the sub-expression as a new function.
- Introduce a new vertex in the network.
- Example: p = ce + de; t = ac + ad + bc + bd + e (13 literals). Factoring: p = (c + d)e; t = (c + d)(a + b) + e (8 literals). Extraction: k = c + d; p = ke; t = ka + kb + e (9 literals).

14 … Extraction

15 Simplification
- Simplify a local function (using Espresso).
- Example: u = q'c + qc' + qc ⇒ u = q + c.

16 Substitution
- Simplify a local function by using an additional input that was not previously in its support set.
- Example: t = ka + kb + e ⇒ t = kq + e, because q = a + b.

17 Example: Sequence of Transformations. Original network: 33 literals; transformed network: 20 literals.

18 Optimization Approaches
- Algorithmic approach: define an algorithm for each transformation type. Each algorithm is an operator on the network with well-defined properties, although heuristic methods are still used and the optimality properties are weak. The sequence of operators is defined by scripts, based on experience.
- Rule-based approach (IBM Logic Synthesis System): a rule database holds a set of pattern pairs; pattern replacement is driven by the rules.

19 Elimination Algorithm …
- Set a threshold k (usually 0).
- Examine all expressions (vertices) and compute their values.
- Vertex value = n*l - n - l, where l is the number of literals of the vertex expression and n is the number of times the vertex variable appears in the network.
- Eliminate an expression (vertex) if its value (i.e. the increase in literals) does not exceed the threshold.

20 … Elimination Algorithm
- Example:
  q = a + b
  s = ce + de + a' + b'
  t = ac + ad + bc + bd + e
  u = q'c + qc' + qc
  v = a'd + bd + c'd + ae'
- Value of vertex q = n*l - n - l = 3*2 - 3 - 2 = 1. Eliminating q would increase the number of literals, so it is not eliminated.
- Assume u is simplified to u = c + q. The value of vertex q becomes n*l - n - l = 1*2 - 1 - 2 = -1. Eliminating q now decreases the number of literals by 1, so it is eliminated.
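
A minimal sketch of the value computation used by the elimination algorithm above; the function name elimination_value is invented for illustration, and the counts for q are taken from the slide's example.

```python
def elimination_value(n, l):
    """Literal increase caused by eliminating a vertex: n*l - n - l,
    where the vertex expression has l literals and its variable
    appears n times in the network."""
    return n * l - n - l

# q = a + b has l = 2 literals and appears n = 3 times in u = q'c + qc' + qc
print(elimination_value(3, 2))   # 1  -> exceeds the threshold 0, so q is kept
# after u is simplified to c + q, the variable q appears only once
print(elimination_value(1, 2))   # -1 -> eliminating q now saves one literal
```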

21 MIS/SIS Rugged Script
  sweep; eliminate -1
  simplify -m nocomp
  eliminate -1
  sweep; eliminate 5
  simplify -m nocomp
  resub -a
  fx
  resub -a; sweep
  eliminate -1; sweep
  full_simplify -m nocomp
Notes: sweep eliminates single-input vertices and those with a constant function; fx extracts double-cube and single-cube expressions; resub -a performs algebraic substitution of all vertex pairs.

22 Boolean and Algebraic Methods …
- Boolean methods: exploit Boolean properties of logic functions; use don't care conditions induced by the interconnections; complex at times.
- Algebraic methods: view functions as polynomials and exploit properties of polynomial algebra; simpler and faster, but weaker.

23 … Boolean and Algebraic Methods
- Boolean substitution: h = a + bcd + e; q = a + cd ⇒ h = a + bq + e, because a + bq + e = a + b(a + cd) + e = a + bcd + e. This relies on the Boolean property b + 1 = 1.
- Algebraic substitution: t = ka + kb + e; q = a + b ⇒ t = kq + e, because k(a + b) = ka + kb; this holds regardless of any assumption of Boolean algebra.

24 The Algebraic Model …
- Represents local Boolean functions by algebraic expressions: multilinear polynomials over a set of variables with unit coefficients.
- Algebraic transformations neglect specific features of Boolean algebra:
  Only one distributive law applies: a(b + c) = ab + ac, but a + (bc) ≠ (a + b)(a + c); the symmetric distribution laws are not used.
  Complements are not defined, so properties like absorption, idempotence, involution, De Morgan's laws, a + a' = 1 and a·a' = 0 cannot be applied.
  Don't care sets are not used.

25 … The Algebraic Model
- Algebraic expressions are obtained by modeling functions in sum-of-products form and making them minimal with respect to single-cube containment.
- Algebraic operations are restricted to expressions with disjoint support, which preserves the correspondence of the result with sum-of-products forms that are minimal w.r.t. single-cube containment.
- Example:
  (a + b)(c + d) = ac + ad + bc + bd; minimal w.r.t. SCC.
  (a + b)(a + c) = aa + ac + ab + bc; non-minimal.
  (a + b)(a' + c) = aa' + ac + a'b + bc; non-minimal.

26 Algebraic Division …
- Given two algebraic expressions f_dividend and f_divisor, we say that f_divisor is an algebraic divisor of f_dividend, with f_quotient = f_dividend / f_divisor, when f_dividend = f_divisor·f_quotient + f_remainder, f_divisor·f_quotient ≠ 0, and the supports of f_divisor and f_quotient are disjoint.
- Example: let f_dividend = ac + ad + bc + bd + e and f_divisor = a + b. Then f_quotient = c + d and f_remainder = e, because (a + b)(c + d) + e = f_dividend and {a, b} ∩ {c, d} = ∅.
- Non-algebraic division: let f_i = a + bc, f_j = a + b and f_k = a + c. Then f_j·f_k = (a + b)(a + c) = f_i, but {a, b} ∩ {a, c} ≠ ∅.

27 … Algebraic Division
- An algebraic divisor is called a factor when the remainder is void: a + b is a factor of ac + ad + bc + bd.
- An expression is said to be cube free when it cannot be factored by a cube: a + b is cube free; ac + ad + bc + bd is cube free; ac + ad is not cube free; abc is not cube free.

28 Algebraic Division Algorithm …
- The quotient Q and remainder R are sums of cubes (monomials).
- The intersection is the largest subset of common monomials.

29 … Algebraic Division Algorithm …
- Example: f_dividend = ac + ad + bc + bd + e; f_divisor = a + b; A = {ac, ad, bc, bd, e} and B = {a, b}.
  i = 1: C^B_1 = a, D = {ac, ad} and D_1 = {c, d}. Q = {c, d}.
  i = 2 = n: C^B_2 = b, D = {bc, bd} and D_2 = {c, d}. Then Q = {c, d} ∩ {c, d} = {c, d}.
  Result: Q = {c, d} and R = {e}, i.e. f_quotient = c + d and f_remainder = e.

30 … Algebraic Division Algorithm
- Example: let f_dividend = axc + axd + bc + bxd + e and f_divisor = ax + b.
  i = 1: C^B_1 = ax, D = {axc, axd} and D_1 = {c, d}; Q = {c, d}.
  i = 2 = n: C^B_2 = b, D = {bc, bxd} and D_2 = {c, xd}. Then Q = {c, d} ∩ {c, xd} = {c}.
  f_quotient = c and f_remainder = axd + bxd + e.
- Theorem: given algebraic expressions f_i and f_j, f_i / f_j is empty when
  f_j contains a variable not in f_i;
  f_j contains a cube whose support is not contained in that of any cube of f_i;
  f_j contains more cubes than f_i;
  the count of any variable in f_j is larger than in f_i.
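
A minimal Python sketch of the weak (algebraic) division loop described on the preceding slides, assuming single-letter variables in positive form; the helper names parse, divide and show are invented for illustration.

```python
from functools import reduce

def parse(expr):
    """'ac+ad+bc+bd+e' -> set of cubes, each a frozenset of single-letter literals."""
    return {frozenset(c) for c in expr.replace(" ", "").split("+")}

def show(cubes):
    return " + ".join("".join(sorted(c)) or "1" for c in sorted(cubes, key=sorted)) or "0"

def divide(dividend, divisor):
    """Weak (algebraic) division of two cube sets: returns (quotient, remainder)."""
    partial = []
    for b in divisor:
        d_i = {c - b for c in dividend if b <= c}   # cubes containing cube b, with b stripped
        if not d_i:
            return set(), set(dividend)             # empty quotient (see the theorem above)
        partial.append(d_i)
    q = reduce(set.intersection, partial)           # largest common subset of monomials
    r = dividend - {qc | b for qc in q for b in divisor}
    return q, r

q, r = divide(parse("ac+ad+bc+bd+e"), parse("a+b"))
print("quotient:", show(q), "  remainder:", show(r))   # quotient c + d, remainder e
```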

31 Substitution
- Substitution replaces a subexpression by a variable associated with a vertex of the logic network.
- Consider expression pairs and apply division (in any order).
- If the quotient is not void: evaluate the area/delay gain and substitute f_dividend by j·f_quotient + f_remainder, where j = f_divisor.
- Use filters to reduce the number of divisions.
- Cannot detect non-algebraic substitutions.
- Theorem: given two algebraic expressions f_i and f_j, f_i / f_j = ∅ if there is a path from v_i to v_j in the logic network.

32 Substitution Algorithm

33 Extraction
- Search for common sub-expressions (i.e. common divisors): single-cube extraction (monomial); multiple-cube (kernel) extraction (polynomial).
- Search for appropriate divisors.
- Cube-free expression: cannot be factored by a cube.
- Kernel of an expression: cube-free quotient of the expression divided by a cube (the cube is called the co-kernel). Single-cube quotients are not kernels (a cube is divisible by itself). Cube-free expressions are kernels of themselves (the co-kernel is 1).
- Kernel set K(f) of an expression: the set of its kernels.

34 Kernel Example
- f_x = ace + bce + de + g.
- Divide f_x by a: get ce. Not cube free.
- Divide f_x by b: get ce. Not cube free.
- Divide f_x by c: get ae + be. Not cube free.
- Divide f_x by ce: get a + b. Cube free. Kernel!
- Divide f_x by d: get e. Not cube free.
- Divide f_x by e: get ac + bc + d. Cube free. Kernel!
- Divide f_x by g: get 1. Not cube free.
- The expression f_x is a kernel of itself because it is cube free.
- K(f_x) = {(a + b); (ac + bc + d); (ace + bce + de + g)}.

35 Theorem (Brayton and McMullen)
- Two expressions f_a and f_b have a common multiple-cube divisor f_d if and only if there exist kernels k_a ∈ K(f_a) and k_b ∈ K(f_b) such that f_d is the sum of 2 (or more) cubes in k_a ∩ k_b (the intersection is the largest subset of common monomials).
- Consequence: if the kernel intersection is void, the search for a common sub-expression can be dropped.
- Example:
  f_x = ace + bce + de + g; K(f_x) = {(a + b); (ac + bc + d); (ace + bce + de + g)}.
  f_y = ad + bd + cde + ge; K(f_y) = {(a + b + ce); (cd + g); (ad + bd + cde + ge)}.
  f_z = abc; the kernel set of f_z is empty.
  Select the intersection (a + b): f_w = a + b; f_x = wce + de + g; f_y = wd + cde + ge; f_z = abc.

36 Kernel Set Computation …
- Naive method: divide the function by the elements of the power set of its support set and weed out the non-cube-free quotients.
- Smart way: use recursion (kernels of kernels are kernels of the original expression) and exploit commutativity of multiplication (kernels with co-kernels ab and ba are the same).
- A kernel has level 0 if it has no kernel except itself.
- A kernel is of level n if it has at least one kernel of level n-1 and no kernels of level n or greater except itself.

37 … Kernel Set Computation
Y = adf + aef + bdf + bef + cdf + cef + g = (a + b + c)(d + e)f + g

  Kernel              Co-kernels     Level
  (a+b+c)             df, ef         0
  (d+e)               af, bf, cf     0
  (a+b+c)(d+e)        f              1
  (a+b+c)(d+e)f+g     1              2

38 Recursive Kernel Computation: Simple Algorithm
f is assumed to be cube-free; if not, divide it by its largest cube factor (obtained by intersecting the support sets of all cubes). Skip cases where f/x yields a single cube.

39 Recursive Kernel Computation Example
- f = ace + bce + de + g.
- Literals a or b: no action required.
- Literal c: select cube ce. Recursive call with argument (ace + bce + de + g)/ce = a + b; no additional kernels. Adds a + b to the kernel set at the last step.
- Literal d: no action required.
- Literal e: select cube e. Recursive call with argument ac + bc + d; the kernel a + b is rediscovered and added. Adds ac + bc + d to the kernel set at the last step.
- Literal g: no action required.
- Adds ace + bce + de + g to the kernel set.
- K = {(ace + bce + de + g); (a + b); (ac + bc + d); (a + b)}.

40 Analysis
- Some computation may be redundant. Example: dividing by a and then by b, and dividing by b and then by a, produces duplicate kernels.
- Improvement (faster algorithm): keep a pointer j to the literals used so far, initially set to 1; this avoids generating co-kernels already calculated. sup(f) = {x1, x2, …, xn}, arranged in lexicographic order. f is assumed to be cube-free; if not, divide it by its largest cube factor.

41 Recursive Kernel Computation

42 Recursive Kernel Computation Examples …
- f = ace + bce + de + g; sup(f) = {a, b, c, d, e, g}.
- Literals a or b: no action required.
- Literal c: select cube ce. Recursive call with arguments (ace + bce + de + g)/ce = a + b and pointer j = 3 + 1. The call considers variables {d, e, g}; no kernel. Adds a + b to the kernel set at the last step.
- Literal d: no action required.
- Literal e: select cube e. Recursive call with arguments ac + bc + d and pointer j = 5 + 1. The call considers variable {g}; no kernel. Adds ac + bc + d to the kernel set at the last step.
- Literal g: no action required.
- Adds ace + bce + de + g to the kernel set.
- K = {(ace + bce + de + g); (ac + bc + d); (a + b)}.

43 … Recursive Kernel Computation Examples
Y = adf + aef + bdf + bef + cdf + cef + g, lexicographic order {a, b, c, d, e, f, g}.
[Figure: recursion tree for Y. Dividing by the cubes af, bf, cf yields the kernel d+e; dividing by df, ef yields a+b+c; dividing by f yields the level-1 kernel ad+ae+bd+be+cd+ce.]
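
A sketch of the recursive kernel computation with the literal pointer described above, using the same cube-set representation as the division sketch; the helper names and single-letter variable assumption are for illustration only.

```python
from functools import reduce

def parse(expr):
    """'ace+bce+de+g' -> set of cubes, each a frozenset of single-letter literals."""
    return {frozenset(c) for c in expr.replace(" ", "").split("+")}

def show(cubes):
    return " + ".join("".join(sorted(c)) for c in sorted(cubes, key=sorted))

def cube_free(f):
    """Divide f by the largest cube common to all of its cubes."""
    common = reduce(frozenset.intersection, f)
    return frozenset(c - common for c in f)

def kernels(f, variables):
    """Recursive kernel computation; variables are in lexicographic order."""
    f = cube_free(f)
    found = set()

    def recurse(j, g):
        for i in range(j, len(variables)):
            x = variables[i]
            with_x = [c for c in g if x in c]
            if len(with_x) < 2:
                continue                            # literal in at most one cube: skip
            co_kernel = reduce(frozenset.intersection, with_x)
            if any(variables[k] in co_kernel for k in range(i)):
                continue                            # already found via an earlier literal
            recurse(i + 1, frozenset(c - co_kernel for c in with_x))
        found.add(g)                                # g itself is a cube-free kernel

    if len(f) > 1:                                  # a single cube has no kernels
        recurse(0, f)
    return found

for k in sorted(kernels(parse("ace+bce+de+g"), list("abcdeg")), key=len):
    print(show(k))        # a + b,  ac + bc + d,  ace + bce + de + g
```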

44 Matrix Representation of Kernels …
- Boolean matrix: rows are cubes; columns are variables (in both true and complemented form as needed).
- Rectangle (R, C): a subset of rows and columns with all entries equal to 1.
- Prime rectangle: a rectangle not contained in any other rectangle. A co-kernel corresponds to a prime rectangle with at least two rows.
- Co-rectangle (R, C') of a rectangle (R, C): C' are the columns not in C. A kernel corresponds to the sum of the terms in a co-rectangle, restricted to the variables in C'.

45 … Matrix Representation of Kernels …
- f_x = ace + bce + de + g.
- 1st (prime) rectangle: ({1, 2}, {3, 5}) → co-kernel ce. Co-rectangle: ({1, 2}, {1, 2, 4, 6}) → kernel a + b.
- 2nd prime rectangle: ({1, 2, 3}, {5}) → co-kernel e. Co-rectangle: ({1, 2, 3}, {1, 2, 3, 4, 6}) → kernel ac + bc + d.

46 … Matrix Representation of Kernels
- Theorem: k is a kernel of f iff it is the expression corresponding to the co-rectangle of a prime rectangle of f.
- The set of all kernels of a logic expression is in one-to-one correspondence with the set of all prime rectangles of the corresponding Boolean matrix.
- A level-0 kernel is the co-rectangle of a prime rectangle of maximal width.
- A prime rectangle of maximal height corresponds to a kernel of maximal level.

47 Single-Cube Extraction …
- Form an auxiliary function: the sum of all product terms of all functions.
- Form the matrix representation: a rectangle with two rows represents a common cube; the best choice is a prime rectangle.
- Use function IDs for cubes to identify cube intersections coming from different functions.

48 … Single-Cube Extraction
- Expressions: f_x = ace + bce + de + g; f_s = cde + b.
- Auxiliary function: f_aux = ace + bce + de + g + cde + b.
- Matrix: rows are the cubes of f_aux, columns the variables.
- Prime rectangle: ({1, 2, 5}, {3, 5}).
- Extract cube ce.

49 Single-Cube Extraction Algorithm
Extraction of an l-variable cube with multiplicity n saves n*l - n - l literals.
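
A small sketch of common-cube search scored with the n*l - n - l saving above. It scans pairwise cube intersections of the auxiliary function rather than enumerating prime rectangles, so it is a simplified stand-in for the rectangle-based algorithm; all helper names are invented for illustration.

```python
from itertools import combinations

def parse(expr):
    return [frozenset(c) for c in expr.replace(" ", "").split("+")]

def best_common_cube(expressions):
    """Score candidate common cubes by the literal saving n*l - n - l."""
    cubes = [c for f in expressions for c in f]      # cubes of the auxiliary function
    best, best_gain = None, 0
    for c1, c2 in combinations(cubes, 2):
        cand = c1 & c2
        if len(cand) < 2:
            continue                  # a single-literal cube saves n*1 - n - 1 = -1 literals
        n = sum(1 for c in cubes if cand <= c)       # multiplicity of the candidate cube
        gain = n * len(cand) - n - len(cand)
        if gain > best_gain:
            best, best_gain = cand, gain
    return best, best_gain

cube, gain = best_common_cube([parse("ace+bce+de+g"), parse("cde+b")])
print("".join(sorted(cube)), "saves", gain, "literal(s)")   # ce saves 1 literal
```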

50 Multiple-Cube Extraction …
- We need a kernel/cube matrix.
- Relabeling: cubes become new variables; kernels become cubes over these new variables.
- Form an auxiliary function: the sum of all kernels.
- Extend the cube intersection algorithm.

51 … Multiple-Cube Extraction
- f_p = ace + bce. K(f_p) = {(a + b)}.
- f_q = ae + be + d. K(f_q) = {(a + b), (ae + be + d)}.
- Relabeling: x_a = a; x_b = b; x_ae = ae; x_be = be; x_d = d.
  K(f_p) = {{x_a, x_b}}; K(f_q) = {{x_a, x_b}, {x_ae, x_be, x_d}}.
- f_aux = x_a x_b + x_a x_b + x_ae x_be x_d.
- Co-kernel: x_a x_b. The cube x_a x_b corresponds to the kernel intersection a + b.
- Extract a + b from f_p and f_q.
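
A minimal check of the Brayton-McMullen condition for this example: intersect the kernel cube sets directly and keep intersections with at least two cubes. The kernel sets are copied from the slide; after relabeling, these intersections would show up as common cubes (such as x_a x_b) of the auxiliary function.

```python
def parse(expr):
    """Expression -> frozenset of cubes (frozensets of literals)."""
    return frozenset(frozenset(c) for c in expr.replace(" ", "").split("+"))

K_p = [parse("a+b")]                     # kernel set of f_p, as on the slide
K_q = [parse("a+b"), parse("ae+be+d")]   # kernel set of f_q, as on the slide

for kp in K_p:
    for kq in K_q:
        common = kp & kq                 # largest subset of common monomials
        if len(common) >= 2:             # Brayton-McMullen condition
            print("common multiple-cube divisor:",
                  " + ".join("".join(sorted(c)) for c in sorted(common, key=sorted)))
# prints: common multiple-cube divisor: a + b
```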

52 Kernel Extraction Algorithm
N indicates the rate at which kernels are recomputed; K indicates the maximum level of the kernels computed.

53 Issues in Common Cube and Multiple-Cube Extraction
- A greedy approach can be applied to common-cube and multiple-cube extraction: rectangle selection followed by matrix update.
- The greedy approach may be myopic: only the local gain of one extraction is considered at a time.
- Non-prime rectangles can contribute to lower-cost covers than prime rectangles; Quine's theorem cannot be applied to rectangles.

54 Decomposition …
- Goals of decomposition: reduce the size of expressions to that typical of library cells; small-sized expressions are more likely to be divisors of other expressions.
- Different decomposition techniques exist.
- Algebraic-division-based decomposition: given an expression f with f_divisor as one of its divisors, associate a new variable, say t, with the divisor; reduce the original expression to f = t·f_quotient + f_remainder and t = f_divisor; apply decomposition recursively to the divisor, quotient and remainder.
- An important issue is the choice of the divisor: a kernel; a level-0 kernel; or evaluate all kernels and select the most promising one.

55 … Decomposition
- f_x = ace + bce + de + g.
- Select kernel ac + bc + d.
- Decompose: f_x = te + g; f_t = ac + bc + d.
- Recur on the divisor f_t: select kernel a + b; decompose: f_t = sc + d; f_s = a + b.

56 Decomposition Algorithm
K is a threshold that determines the size of the nodes to be decomposed.

57 Factorization Algorithm
- FACTOR(f):
    if (the number of literals in f is one) return f
    k = CHOOSE_DIVISOR(f)
    (h, r) = DIVIDE(f, k)
    return FACTOR(k)·FACTOR(h) + FACTOR(r)
- Quick factoring: the divisor is restricted to the first level-0 kernel found; fast and effective; used for area and delay estimation.
- Good factoring: the best kernel divisor is chosen.
- Example: f = ab + ac + bd + ce + cg.
  Quick factoring: f = a(b + c) + c(e + g) + bd (8 literals).
  Good factoring: f = c(a + e + g) + b(a + d) (7 literals).
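
A toy sketch of the FACTOR recursion. To stay self-contained it picks the most frequent literal as the divisor cube, which is only a crude stand-in for choosing a level-0 kernel as quick or good factoring do; it happens to reproduce the 7-literal result for the slide's example, but it is not the algorithm itself.

```python
from collections import Counter

def parse(expr):
    return [frozenset(c) for c in expr.replace(" ", "").split("+")]

def show(cubes):
    return " + ".join("".join(sorted(c)) or "1" for c in cubes)

def factor(f):
    """FACTOR(k)*FACTOR(h) + FACTOR(r), with the divisor k restricted to the
    single literal shared by the most cubes (an illustrative heuristic)."""
    counts = Counter(lit for cube in f for lit in cube)
    if not counts or counts.most_common(1)[0][1] < 2:
        return show(f)                              # no shared literal: leaf expression
    lit = counts.most_common(1)[0][0]
    quotient = [c - {lit} for c in f if lit in c]   # algebraic division by the literal
    remainder = [c for c in f if lit not in c]
    result = f"{lit}({factor(quotient)})"
    if remainder:
        result += " + " + factor(remainder)
    return result

print(factor(parse("ab+ac+bd+ce+cg")))              # c(a + e + g) + b(a + d)
```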

58 One-Level-0-Kernel
One-Level-0-Kernel(f)
  if (|f| ≤ 1) return 0
  if (L = Literal_Count(f) ≤ 1) return f
  for (i = 1; i ≤ n; i++) {
    if (L(i) > 1) {
      f_C = largest cube containing literal i s.t. CUBES(f, C) = CUBES(f, i)
      return One-Level-0-Kernel(f / f_C)
    }
  }
- Literal_Count returns a vector of literal counts, one per literal. If all counts are ≤ 1, then f is a level-0 kernel.
- The first literal with a count greater than one is chosen.

59 Boolean Methods
- Exploit Boolean properties and don't care conditions.
- Minimization of the local functions.
- Slower algorithms, better quality results.
- Don't care conditions related to the embedding of a function in an environment are called external don't care conditions.
- External don't care conditions: controllability and observability.

60 External Don't Care Conditions …
- Controllability don't care set CDC_in: input patterns never produced by the environment at the network's inputs.
- Observability don't care set ODC_out: input patterns representing conditions under which an output is not observed by the environment; relative to each output (n_o entries); vector notation ODC_out is used.

61 … External Don't Care Conditions
- Inputs driven by a de-multiplexer.
- CDC_in = x1'x2'x3'x4' + x1x2 + x1x3 + x1x4 + x2x3 + x2x4 + x3x4.
- Outputs are observed when x1 + x4 = 1.
- The vector CDC_in has n_o entries, each equal to CDC_in.

62 Internal Don't Care Conditions …
- Induced by the network structure.
- Controllability don't care conditions: patterns never produced at the inputs of a sub-network.
- Observability don't care conditions: patterns such that the outputs of a sub-network are not observed.

63 … Internal Don't Care Conditions
- Example: x = a' + b; y = abx + a'cx.
- The CDC of v_y includes ab'x + a'x':
  ab' ⇒ x = 0, so ab'x is a don't care condition;
  a' ⇒ x = 1, so a'x' is a don't care condition.
- Minimize f_y to obtain f_y = ax + a'c.

64 Satisfiability Don't Care Conditions
- Invariant of the network: x = f_x, so the condition x ≠ f_x is impossible and x ⊕ f_x belongs to the SDC set.
- Useful for computing controllability don't cares.
- Example: assume x = a' + b. Since x ≠ (a' + b) is not possible, x ⊕ (a' + b) = x'a' + x'b + xab' is a don't care condition.

65 CDC Computation …
- Network traversal algorithm: consider different cuts moving from inputs to outputs.
- The initial CDC is CDC_in.
- Move the cut forward: add the SDC contributions of the predecessors, then remove unneeded variables by consensus.
- The consensus of a function f with respect to a variable x is f_x · f_x'.

66 … CDC Computation …

67 … CDC Computation
- Assume CDC_in = x1'x4'.
- Select vertex v_a: contribution to CDC_cut is a ⊕ (x2 ⊕ x3); CDC_cut = x1'x4' + a ⊕ (x2 ⊕ x3). Drop the variables D = {x2, x3} by consensus: CDC_cut = x1'x4'.
- Select vertex v_b: contribution to CDC_cut is b ⊕ (x1 + a); CDC_cut = x1'x4' + b ⊕ (x1 + a). Drop the variable D = {x1} by consensus: CDC_cut = b'x4' + b'a.
- ...
- Finally CDC_cut = e' = z2'.
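
A tiny brute-force check of the first traversal step above. The local function f_a = x2 ⊕ x3 and CDC_in = x1'x4' are read from the slide's example; the helper names are invented, and the check simply enumerates assignments instead of manipulating covers or BDDs.

```python
from itertools import product

def cofactor(f, var, val):
    return lambda env: f({**env, var: val})

def consensus(f, var):
    """f|var=0 AND f|var=1: the largest function independent of var contained in f."""
    return lambda env: cofactor(f, var, 0)(env) & cofactor(f, var, 1)(env)

# cut variables {x1, x4, a} right after vertex a, with f_a = x2 XOR x3
def cdc_cut(env):
    cdc_in = (1 - env["x1"]) & (1 - env["x4"])
    sdc_a = env["a"] ^ (env["x2"] ^ env["x3"])      # a != f_a can never happen
    return cdc_in | sdc_a

# drop the variables {x2, x3} that no longer feed the cut
g = consensus(consensus(cdc_cut, "x2"), "x3")

for bits in product([0, 1], repeat=3):
    env = dict(zip(["x1", "x4", "a"], bits))        # x2, x3 are fixed inside the consensus
    assert g(env) == (1 - env["x1"]) & (1 - env["x4"])
print("after dropping x2 and x3: CDC_cut = x1'x4'")
```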

68 CDC Computation by Image …
- Let f be the network behavior at the cut.
- CDC_cut is the complement of the image of (CDC_in)' under f.
- CDC_cut is the complement of the range of f when CDC_in = ∅.
- The range can be computed recursively. Terminal case: a scalar function; the range of y = f(x) is y + y' (any value) unless f (or f') is a tautology, in which case the range is y (or y').

69 … CDC Computation by Image …
- range(f) = d·range((b + c)|_{d=bc=1}) + d'·range((b + c)|_{d=bc=0}).
- When d = 1, then bc = 1 ⇒ b + c = 1 is a tautology.
- When d = 0, then bc = 0 ⇒ b + c ∈ {0, 1}.
- range(f) = de + d'(e + e') = de + d' = d' + e.

70 … CDC Computation by Image …
- Assume that the variables d and e are expressed in terms of {x1, a, x4} and CDC_in = ∅.
- CDC_out = (e + d')' = de' = z1 z2'.

71 … CDC Computation by Image
- The range computation can be transformed into an existential quantification of the characteristic function χ(x, y) = 1 representing y = f(x).
- Range(f) = S_x(χ(x, y)).
- A BDD representation of the characteristic function with smoothing operators applied on the BDDs is simple and effective, and practical for large circuits.
- Example: CDC_in = x1'x4'.
  Characteristic equation: χ = (d ⊕ (x1x4 + a))'·(e ⊕ (x1 + x4 + a))' = 1, i.e.
  χ = de(x1x4 + a) + d'ea'(x1'x4 + x1x4') + d'e'a'x1'x4'.
  The range of f is S_{a,x1,x4}(χ) = de + d'e + d'e' = d' + e.
  The image of CDC_in' = x1 + x4 equals the range of χ·(x1 + x4):
  χ·(x1 + x4) = de(x1x4 + a)(x1 + x4) + d'ea'(x1'x4 + x1x4').
  Range of χ·(x1 + x4) = S_{a,x1,x4}(de(x1x4 + a)(x1 + x4) + d'ea'(x1'x4 + x1x4')) = de + d'e = e.
  Hence CDC_out = e'.
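
A brute-force enumeration of the same range and image computation, using the local functions d = x1x4 + a and e = x1 + x4 + a that appear in the characteristic equation above; it replaces the BDD smoothing by explicit enumeration, which is only practical for tiny examples.

```python
from itertools import product

def f(x1, a, x4):
    d = (x1 & x4) | a        # local functions read off the characteristic equation
    e = x1 | x4 | a
    return d, e

all_pairs = set(product([0, 1], repeat=2))

# range of f: every (d, e) pair reachable from some input assignment
rng = {f(*bits) for bits in product([0, 1], repeat=3)}
print("CDC_out with empty CDC_in:", sorted(all_pairs - rng))      # [(1, 0)]          -> de'

# image of CDC_in' = x1 + x4: only the patterns the environment can apply
img = {f(x1, a, x4) for x1, a, x4 in product([0, 1], repeat=3) if x1 | x4}
print("CDC_out with CDC_in = x1'x4':", sorted(all_pairs - img))   # [(0, 0), (1, 0)]  -> e'
```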

72 Network Perturbation
- Modify the network by adding an extra input δ.
- The extra input can flip the polarity of a signal x: replace the local function f_x by f_x ⊕ δ.
- Perturbed terminal behavior: f^x(δ).
- A variable is observable if a change in its polarity is perceived at an output.
- The observability don't care set ODC for variable x is (f^x(0) ⊕ f^x(1))'.
- Example: f^x(0) = abc, f^x(1) = a'bc, so ODC_x = (abc ⊕ a'bc)' = b' + c'. Minimizing f_x = ab with ODC_x = b' + c' leads to f_x = a.

73 Observability Don't Care Conditions
- Conditions under which a change in the polarity of a signal x is not perceived at the outputs.
- Complement of the Boolean difference: ODC_x = (∂f/∂x)' = (f|_{x=1} ⊕ f|_{x=0})'.
- Equivalently, in terms of the perturbed function: (f^x(0) ⊕ f^x(1))'.
- Observability don't care computation. Problem: the outputs are not expressed as functions of all variables, and flattening the network to obtain f may make it explode in size. Requirement: local rules for ODC computation and a network traversal.

74 Observability Don't Care Computation …
- Assume a single-output network with a tree structure.
- Traverse the network tree from the primary output to the inputs.
- At the root, ODC_out is given.
- At internal vertices, assuming y is the output of x: ODC_x = (∂f_y/∂x)' + ODC_y = (f_y|_{x=1} ⊕ f_y|_{x=0})' + ODC_y.
- Example: assume ODC_out = ODC_e = 0.
  ODC_b = (∂f_e/∂b)' = ((b + c)|_{b=1} ⊕ (b + c)|_{b=0})' = c.
  ODC_c = (∂f_e/∂c)' = b.
  ODC_x1 = ODC_b + (∂f_b/∂x1)' = c + a. …
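
A small sketch of the Boolean-difference computation of an ODC by enumeration. The slide's network is not fully shown, so the gate y = x·b·c below is only one network consistent with the perturbed outputs f^x(0) = abc and f^x(1) = a'bc given on slide 72; the function names are invented for illustration.

```python
from itertools import product

def y(x, b, c):                 # a gate consistent with f^x(0) = abc, f^x(1) = a'bc
    return x & b & c

def odc_x(b, c):                # ODC_x = (y|x=1 XOR y|x=0)'
    return 1 - (y(1, b, c) ^ y(0, b, c))

for b, c in product([0, 1], repeat=2):
    print(f"b={b} c={c}: ODC_x = {odc_x(b, c)}")    # 1 except when b = c = 1, i.e. b' + c'

# replacing f_x = ab by g_x = a is feasible: they only differ where ODC_x = 1
assert all(((a & b) == a) or odc_x(b, c) == 1 for a, b, c in product([0, 1], repeat=3))
```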

75 … Observability Don't Care Computation
- General networks have fanout reconvergence.
- For a vertex with two (or more) fanout stems, the contributions of the ODCs along the stems cannot simply be added. Intersecting them is also a wrong assumption:
  ODC_{a,b} = x1 + c = x1 + a + x4;
  ODC_{a,c} = x4 + b = x4 + a + x1;
  ODC_{a,b} ∩ ODC_{a,c} = x1 + a + x4, yet variable a is not redundant.
- There is an interplay of the different paths, so a more elaborate analysis is needed.

76 Two-way Fanout Stem …
- Compute the ODC sets associated with the edges.
- Combine the ODCs at the vertex.
- Formula derivation: assume two equal perturbations on the edges.

77 … Two-way Fanout Stem
- ODC_{a,b} = x1 + c = x1 + a2 + x4.
- ODC_{a,c} = x4 + b = x4 + a1 + x1.
- ODC_a = (ODC_{a,b}|_{a2=a'} ⊕ ODC_{a,c})' = ((x1 + a' + x4) ⊕ (x4 + a + x1))' = x1 + x4.
(Here a1 and a2 are the edge variables on the two fanout branches of a.)

78 Multi-Way Stems Theorem
- Let v_x ∈ V be any internal or input vertex.
- Let {x_i; i = 1, 2, …, p} be the edge variables corresponding to {(x, y_i); i = 1, 2, …, p}.
- Let ODC_{x,yi}, i = 1, 2, …, p, be the edge ODCs.
- For a 3-fanout stem variable x: ODC_x = ODC_{x,y1}|_{x2=x3=x'} ⊕ ODC_{x,y2}|_{x3=x'} ⊕ ODC_{x,y3}.

79 Observability Don't Care Algorithm …
- For each variable, the intersection of its ODCs at all outputs yields the condition under which no output is observed: the global ODC of the variable.
- The global ODC conditions of the input variables form the input observability don't care set ODC_in; it may be used as the external ODC set when optimizing a network feeding the one under consideration.

80 … Observability Don't Care Algorithm
Global ODC of a: (x1x4)(x1 + x4) = x1x4.

81 Transformations with Don't Cares
- Boolean simplification: use a standard minimizer (Espresso); minimize the number of literals.
- Boolean substitution: simplify a function by adding an extra input; equivalent to simplification with global don't care conditions.
- Example: substitute q = a + cd into f_h = a + bcd + e to get f_h = a + bq + e.
  SDC set: q ⊕ (a + cd) = q'a + q'cd + qa'(cd)'.
  Simplify f_h = a + bcd + e with q'a + q'cd + qa'(cd)' as the don't care set.
  Simplification yields f_h = a + bq + e: one literal less, obtained by changing the support of f_h.

82 Single-Vertex Optimization
Two simplification strategies:
1. Optimize one vertex at a time; the DC set changes as the network is traversed.
2. Optimize a group of vertices simultaneously.

83 Optimization and Perturbations …
- Replace f_x by g_x.
- Perturbation: δ_x = f_x ⊕ g_x.
- Conditions for a feasible replacement: the perturbation must be bounded by the local don't care set, δ_x ⊆ DC_ext + ODC_x.
  If f_x and g_x have the same support set S(x), then δ_x ⊆ DC_ext + ODC_x + CDC_{S(x)}.
  If S(g_x) is included in the network variables, then δ_x ⊆ DC_ext + ODC_x + SDC_x.

84 … Optimization and Perturbations
- Example: no external don't care set.
- Replace the AND by a wire: g_x = a.
- Analysis: δ_x = f_x ⊕ g_x = ab ⊕ a = ab'. ODC_x = y' = b' + c'. Since δ_x = ab' ⊆ DC_x = b' + c', the replacement is feasible.
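
A one-screen check of the feasibility condition δ_x ⊆ DC_x for this example by enumerating all input assignments; the function names are invented and the don't care set is taken from the slide (ODC_x = b' + c', no external don't cares).

```python
from itertools import product

def f_x(a, b): return a & b                # original local function (the AND gate)
def g_x(a, b): return a                    # candidate replacement (a wire from a)
def dc_x(b, c): return (1 - b) | (1 - c)   # DC_x = ODC_x = b' + c'

# feasible iff the perturbation f_x XOR g_x is contained in the local don't care set
feasible = all((f_x(a, b) ^ g_x(a, b)) <= dc_x(b, c)
               for a, b, c in product([0, 1], repeat=3))
print("replacing ab by a is feasible:", feasible)   # True: ab XOR a = ab' lies inside b' + c'
```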

85 Synthesis and Testability
- Testability: ease of testing a circuit.
- Assumptions: combinational circuit; single or multiple stuck-at faults.
- Full testability: it is possible to generate a test set for all faults.
- There is a synergy between synthesis and testing: testable networks correlate with small-area networks, and don't care conditions play a major role.

86 Test for Stuck-at Faults
- Net y stuck-at 0: apply an input pattern that sets y to true and observe the output; the output of the faulty circuit differs. Test set: {t | y(t)·ODC'_y(t) = 1}.
- Net y stuck-at 1: same, but set y to false. Test set: {t | y'(t)·ODC'_y(t) = 1}.
- Both controllability and observability are needed.

87 Using Testing Methods for Synthesis …
- Redundancy removal: use ATPG to search for untestable faults.
- If a stuck-at 0 on net y is untestable: set y = 0 and propagate the constant.
- If a stuck-at 1 on y is untestable: set y = 1 and propagate the constant.

88 … Using Testing Methods for Synthesis

89 Redundancy Removal and Perturbation Analysis
- Stuck-at 0 on y: y is set to 0, i.e. g_x = f_x|_{y=0}. The perturbation is δ = f_x ⊕ f_x|_{y=0} = y·∂f_x/∂y.
- The perturbation is feasible ⇔ the fault is untestable.
- δ = y·∂f_x/∂y ⊆ DC_x ⇔ the fault is untestable.
- Making f_x prime and irredundant with respect to DC_x guarantees that all single stuck-at faults in f_x are testable.

90 Synthesis for Testability
- Synthesize networks that are fully testable for single stuck-at faults and for multiple stuck-at faults.
- Two-level forms:
  Full testability for single stuck-at faults: prime and irredundant cover.
  Full testability for multiple stuck-at faults: prime and irredundant cover when the function is single-output, there is no product-term sharing, and each component is PI.

91 … Synthesis for Testability
- A complete single stuck-at fault test set for a single-output sum-of-products circuit is a complete test set for all multiple stuck-at faults.
- Single stuck-at fault testability of a multiple-level network does not imply multiple stuck-at fault testability.
- Fast extraction transformations preserve single stuck-at fault test sets.
- Algebraic transformations preserve multiple stuck-at fault testability but not single stuck-at fault testability: factorization, substitution (without complement), cube and kernel extraction.

92 Synthesis of Testable Multiple-Level Networks …
- Consider a logic network G_n(V, E) with local functions in sum-of-products form.
- Prime and irredundant (PI): no literal nor implicant of any local function can be dropped.
- Simultaneously prime and irredundant (SPI): no subset of literals and/or implicants can be dropped.
- A logic network is PI if and only if its AND-OR implementation is fully testable for single stuck-at faults.
- A logic network is SPI if and only if its AND-OR implementation is fully testable for multiple stuck-at faults.

93 … Synthesis of Testable Multiple-Level Networks n Compute the full local don't-care sets. Make all local functions PI with respect to their don't-care sets. n Pitfall: don't cares change as the functions change. n Solution: iterate (Espresso-MLD). If the iteration converges, the network is fully testable. n Flatten to two-level form when possible (no size explosion). n Make SPI by disjoint logic minimization. n Reconstruct the multiple-level network: algebraic transformations preserve multifault testability.
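A minimal sketch of the iteration described above, in Python-style pseudocode. The interfaces internal_vertices(), local_dont_cares() and two_level_minimize() are assumed names standing for a network data structure and a two-level minimizer, not an actual tool API.

```python
def make_locals_prime_and_irredundant(network, max_iterations=20):
    # Espresso-MLD-style loop: re-minimize each local function against its
    # current don't-care set until nothing changes (sketch under assumed interfaces).
    for _ in range(max_iterations):
        changed = False
        for v in network.internal_vertices():
            dc = network.local_dont_cares(v)                 # don't cares seen at v
            minimized = two_level_minimize(v.function, dc)   # make v PI w.r.t. dc
            if minimized != v.function:
                v.function = minimized
                changed = True       # other vertices' don't cares may now change
        if not changed:
            return True              # converged: all local functions PI, network testable
    return False                     # no convergence within the iteration bound
```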

94 Timing Issues in Multiple-Level Logic Optimization n Timing optimization is crucial for achieving competitive logic design. n Timing verification: check that a circuit runs at speed. Satisfies I/O delay constraints. Satisfies cycle-time constraints. Delay modeling. Critical paths. The false path problem. n Algorithms for timing optimization: minimum area subject to delay constraints; minimum delay (subject to area constraints).

95 Delay Modeling n Gate delay modeling: straightforward for bound networks; approximations for unbound networks. n Network delay modeling: compute signal propagation. Topological methods. Logic/topological methods. n Gate delay modeling for unbound networks: Virtual gates: logic expressions. Stage delay model: unit delay per vertex. Refined models: depending on size and fanout.

96 Network Delay Modeling … n For each vertex v_i: propagation delay d_i. n Data-ready time t_i: denotes the time at which the data is ready at the output of v_i. Input data-ready times denote when the primary inputs are available (given externally). Data-ready times of the remaining vertices are computed by forward traversal: t_i = d_i + max of t_j over the fanins v_j of v_i. n The maximum data-ready time occurring at an output vertex corresponds to the longest propagation delay path, called the topological critical path.

97 … Network Delay Modeling … n Assume t_a = 0 and t_b = 10. n Propagation delays: d_g = 3; d_h = 8; d_m = 1; d_k = 10; d_l = 3; d_n = 5; d_p = 2; d_q = 2; d_x = 2; d_y = 3. n Data-ready times: t_g = 3+0 = 3; t_h = 8+3 = 11; t_k = 10+3 = 13; t_n = 5+10 = 15; t_p = 2+max{15,3} = 17; t_l = 3+max{13,17} = 20; t_m = 1+max{3,11,20} = 21; t_x = 2+21 = 23; t_q = 2+20 = 22; t_y = 3+22 = 25. n Maximum data-ready time is t_y = 25. Topological critical path: (v_b, v_n, v_p, v_l, v_q, v_y).
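The forward traversal for this example can be written as a few lines of Python. The delays and input data-ready times are taken from the slide; the fanin lists are inferred from the slide's computations (e.g. t_p = 2+max{15,3} implies v_p reads v_n and v_g), and the code itself is an illustrative sketch, not part of the original material.

```python
# Forward traversal: data-ready times t_i = d_i + max over fanins of t_j.
fanins = {'g': ['a'], 'h': ['g'], 'k': ['g'], 'n': ['b'], 'p': ['n', 'g'],
          'l': ['k', 'p'], 'm': ['g', 'h', 'l'], 'x': ['m'], 'q': ['l'], 'y': ['q']}
delay = {'g': 3, 'h': 8, 'k': 10, 'n': 5, 'p': 2, 'l': 3, 'm': 1, 'x': 2, 'q': 2, 'y': 3}

t = {'a': 0, 'b': 10}                                           # primary-input data-ready times
for v in ['g', 'h', 'k', 'n', 'p', 'l', 'm', 'x', 'q', 'y']:    # topological order
    t[v] = delay[v] + max(t[u] for u in fanins[v])

print(t['x'], t['y'])   # 23 25 -> maximum data-ready time is t_y = 25
```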

98 … Network Delay Modeling … n For each vertex v_i: Required data-ready time: specified at the primary outputs; computed at the remaining vertices by backward traversal. Slack s_i: difference between the required and the actual data-ready time.

99 … Network Delay Modeling n Required data-ready times at the outputs: t_x = 25 and t_y = 25. n Required times and slacks: s_x = 2; s_y = 0; t_m = 25-2 = 23, s_m = 23-21 = 2; t_q = 25-3 = 22, s_q = 22-22 = 0; t_l = min{23-1, 22-2} = 20, s_l = 0; t_h = 23-1 = 22, s_h = 22-11 = 11; t_k = 20-3 = 17, s_k = 17-13 = 4; t_p = 20-3 = 17, s_p = 17-17 = 0; t_n = 17-2 = 15, s_n = 15-15 = 0; t_b = 15-5 = 10, s_b = 10-10 = 0; t_g = min{22-8, 17-10, 17-2} = 7, s_g = 4; t_a = 7-3 = 4, s_a = 4-0 = 4. n Data-ready times: t_g = 3+0 = 3; t_h = 8+3 = 11; t_k = 10+3 = 13; t_n = 5+10 = 15; t_p = 2+max{15,3} = 17; t_l = 3+max{13,17} = 20; t_m = 1+max{3,11,20} = 21; t_x = 2+21 = 23; t_q = 2+20 = 22; t_y = 3+22 = 25. n Propagation delays: d_g = 3; d_h = 8; d_m = 1; d_k = 10; d_l = 3; d_n = 5; d_p = 2; d_q = 2; d_x = 2; d_y = 3.
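The backward traversal and slack computation for the same example can be sketched as below. As before, the fanin lists are inferred from the slide's computations and the code is only an illustration; it reproduces the required times and slacks listed above and shows that the zero-slack vertices form the topological critical path.

```python
fanins = {'g': ['a'], 'h': ['g'], 'k': ['g'], 'n': ['b'], 'p': ['n', 'g'],
          'l': ['k', 'p'], 'm': ['g', 'h', 'l'], 'x': ['m'], 'q': ['l'], 'y': ['q']}
delay = {'g': 3, 'h': 8, 'k': 10, 'n': 5, 'p': 2, 'l': 3, 'm': 1, 'x': 2, 'q': 2, 'y': 3}
order = ['g', 'h', 'k', 'n', 'p', 'l', 'm', 'x', 'q', 'y']

t = {'a': 0, 'b': 10}
for v in order:                                         # forward: data-ready times
    t[v] = delay[v] + max(t[u] for u in fanins[v])

required = {'x': 25, 'y': 25}                           # required times at the outputs
fanouts = {u: [v for v in order if u in fanins[v]] for u in list(t)}
for v in reversed(['a', 'b'] + order):                  # backward: required times
    if v not in required:
        required[v] = min(required[w] - delay[w] for w in fanouts[v])

slack = {v: required[v] - t[v] for v in t}              # slack = required - actual
print(slack['y'], slack['g'], slack['a'])               # 0 4 4
print(sorted(v for v in slack if slack[v] == 0))        # ['b', 'l', 'n', 'p', 'q', 'y']
```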

100 Topological Critical Path … n Assume topological computation of: data-ready times by forward traversal; required data-ready times by backward traversal. n Topological critical path: an input/output path whose vertices all have zero slack. Any increase in the propagation delay of a vertex on it affects the output data-ready time. n A topological critical path may be false: no event can propagate along that path. A false path does not affect performance.

101 … Topological Critical Path n Topological critical path: (v_b, v_n, v_p, v_l, v_q, v_y).

102 False Path Example n All gates have unit delay. n All inputs ready at time 0. n Longest topological path: (v_a, v_c, v_d, v_y, v_z). Path delay: 4 units. It is a false path: an event cannot propagate through it. n Critical true path: (v_a, v_c, v_d, v_y). Path delay: 3 units.

103 Algorithms for Delay Minimization … n Alternate between: critical path computation; logic transformations on the critical vertices. n Consider quasi-critical paths: paths with near-critical delay, i.e. small slacks. n When the difference between the critical-path delay and the largest delay of a non-critical path is small, speeding up the critical paths alone yields only a small gain.

104 … Algorithms for Delay Minimization n Most critical delay optimization algorithms have the following framework:
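The framework itself appears only as a figure in the original slides; the Python-style pseudocode below is a hedged reconstruction based on the alternation described above. Every helper name in it (compute_slacks, is_favorable, pick_best, apply) is an assumption, not an actual tool interface.

```python
def minimize_delay(network, transformations):
    # Generic critical-path-driven loop: recompute timing, transform critical
    # vertices, and stop when no favorable move remains (sketch only).
    while True:
        compute_slacks(network)                        # forward + backward traversal
        critical = [v for v in network.vertices() if v.slack == 0]
        moves = [(v, tr) for v in critical for tr in transformations
                 if tr.is_favorable(network, v)]       # increases elsewhere bounded by slack
        if not moves:
            return network
        vertex, tr = pick_best(moves)                  # e.g. largest data-ready reduction
        tr.apply(network, vertex)
```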

105 Transformations for Delay Reduction … n Reduce propagation delay. n Reduce dependencies on critical inputs. n Favorable transformation: reduces the local data-ready time; any data-ready time increase at other vertices is bounded by the local slack. n Example: unit gate delay; transformation: elimination. Always favorable. Obtains several area/delay trade-off points.

106 … Transformations for Delay Reduction n W is a minimum-weight separation set from U. n Iteration 1: values of v_p, v_q, v_u = -1; value of v_s = 0. Eliminate v_p, v_q (no literal increase). n Iteration 2: value of v_s = 2, value of v_u = -1. Eliminate v_u (no literal increase). n Iteration 3: eliminate v_r, v_s, v_t (literals increase).

107 More Refined Delay Models n Propagation delay grows with the size of the expression and with the fanout load. n Elimination: reduces one stage; yields more complex and slower gates; may slow other paths. n Substitution: adds one dependency; loads and slows a gate; may slow other paths. Useful if the arrival time of the critical input is larger than that of the other inputs.

108 Speed-Up Algorithm … n Decompose the network into two-input NAND gates and inverters. n Determine a subnetwork W of depth d. n Collapse the subnetwork by elimination. n Duplicate input vertices with successors outside W; record the area penalty. n Resynthesize W by timing-driven decomposition. n Heuristics: choice of W; monitor area penalty and potential speed-up.
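A hedged sketch of the steps listed above in Python-style pseudocode. All helper names (decompose_to_nand2_and_inverters, select_subnetwork, and so on) are assumed placeholders for the operations named on the slide, and a real implementation would add the selection and area-monitoring heuristics mentioned there.

```python
def speed_up(network, depth_d, max_area_penalty):
    # Speed-up loop: NAND2/INV decomposition, then repeated collapse-and-resynthesize
    # of a depth-d subnetwork rooted on the critical path (sketch only).
    decompose_to_nand2_and_inverters(network)
    while True:
        compute_slacks(network)                              # timing by traversal
        W = select_subnetwork(network, depth_d)              # rooted at a critical vertex
        penalty = duplicate_inputs_with_outside_successors(W)
        if penalty > max_area_penalty:
            return network                                   # stop: area cost too high
        collapse_by_elimination(W)
        timing_driven_decompose(W)                           # resynthesize W for speed
        if not critical_delay_improved(network):
            return network
```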

109 … Speed-Up Algorithm n Example: NAND delay = 2; inverter delay = 1. All input data-ready times are 0 except t_d = 3. Critical path: from v_d to v_x (11 delay units). Assume v_x is selected and d = 5. New critical path: 8 delay units.

