The Synergy between Logic Synthesis and Equivalence Checking

Presentation transcript:

The Synergy between Logic Synthesis and Equivalence Checking R. Brayton UC Berkeley Thanks to SRC, NSF, and industrial sponsors, Actel, Altera, Calypto, Intel, Magma, Synplicity, Synopsys, Xilinx

Outline
- Mostly emphasizes synthesis
- Look at the operations of classical logic synthesis
- Contrast these with newer methods based on ideas borrowed from verification
- Themes are scalability and verifiability
- Look at some newer approaches to sequential logic synthesis and verification

Two Kinds of Synergy The algorithms and advancements in verification can be used in synthesis, and vice versa: one enables the other.
- Verification enables synthesis: equivalence-checking capability enables acceptance of sequential operations (retiming, use of unreachable states, sequential signal correspondence, etc.)
- Synthesis enables verification: the desire to use sequential synthesis operations (shown to give superior results) spurs verification developments

Examples of The Synergy
- Similar solutions, e.g. retiming in synthesis / retiming in verification
- Algorithm migration, e.g. BDDs, SAT, induction, interpolation, rewriting
- Related complexity: scalable synthesis <=> scalable verification (approximately)
- Common data structures, e.g. combinational and sequential AIGs

Quick Overview of "Classical" (Technology-Independent) Logic Synthesis
- Boolean network
- Network manipulation (algebraic): elimination (substituting a node into its fanouts), decomposition (common-divisor extraction)
- Node minimization (Boolean): Espresso, don't-cares
- Resubstitution (algebraic or Boolean)

"Classical" Logic Synthesis The figure contrasts a Boolean network in SIS with the equivalent AIG in ABC. An AIG is a Boolean network of 2-input AND nodes and inverters (shown as dotted lines).
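To make the AIG data structure concrete, here is a minimal Python sketch of an AIG manager with 2-input AND nodes, complemented edges, and structural hashing. The class and method names are illustrative and do not follow ABC's actual API.

```python
# Minimal AIG sketch: 2-input AND nodes plus complemented edges.
# Illustrative only; names do not follow ABC's actual API.

class AigManager:
    def __init__(self):
        self.nodes = []                 # AND node id -> (lit0, lit1)
        self.table = {}                 # structural hashing: (lit0, lit1) -> literal
        self.const0 = self.new_var()    # node 0 reserved for constant 0 (literal 1 = const 1)

    def new_var(self):
        """Create a primary input (or constant) node with no fanins."""
        self.nodes.append(None)
        return 2 * (len(self.nodes) - 1)    # even literal = non-complemented

    def lit_not(self, lit):
        return lit ^ 1                       # toggle the complement bit

    def lit_and(self, a, b):
        """Return the literal of AND(a, b), reusing structurally identical nodes."""
        if a > b:
            a, b = b, a                      # canonical fanin order
        if a == 0:                           # const0 & b = 0
            return 0
        if a == 1:                           # const1 & b = b
            return b
        if a == b:                           # x & x = x
            return a
        if a == (b ^ 1):                     # x & !x = 0
            return 0
        key = (a, b)
        if key not in self.table:
            self.nodes.append(key)
            self.table[key] = 2 * (len(self.nodes) - 1)
        return self.table[key]

    def lit_or(self, a, b):
        return self.lit_not(self.lit_and(self.lit_not(a), self.lit_not(b)))

# Structural hashing in action: building the same cone twice yields the same literal.
mgr = AigManager()
a, b, c = mgr.new_var(), mgr.new_var(), mgr.new_var()
f = mgr.lit_or(mgr.lit_and(a, b), mgr.lit_and(a, mgr.lit_not(c)))
g = mgr.lit_or(mgr.lit_and(a, b), mgr.lit_and(a, mgr.lit_not(c)))
assert f == g
```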

One AIG Node – Many Cuts The figure shows different cuts for the same node of a combinational AIG.
- An AIG can be used to compute many cuts for each node
- Each cut in the AIG represents a different SIS node
- There are no a priori fixed boundaries
- This implies that AIG manipulation with cuts is equivalent to working on many Boolean networks at the same time
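The cut computation can be sketched in a few lines: cuts are enumerated bottom-up, merging the cut sets of the two fanins and keeping only cuts with at most k leaves. This is a simplified illustration; real cut enumeration also prunes dominated cuts and bounds the number of cuts stored per node.

```python
# Sketch of k-feasible cut enumeration on an AIG (illustrative, not ABC's code).
# 'fanins' maps an AND node to its two fanin nodes; leaves have no entry.

def enumerate_cuts(nodes_in_topo_order, fanins, k=4):
    """Return a dict: node -> list of cuts (each cut is a frozenset of leaves)."""
    cuts = {}
    for n in nodes_in_topo_order:
        result = {frozenset([n])}            # the trivial cut: the node itself
        if n in fanins:                      # an AND node
            a, b = fanins[n]
            for ca in cuts[a]:
                for cb in cuts[b]:
                    merged = ca | cb
                    if len(merged) <= k:     # keep only k-feasible cuts
                        result.add(merged)
        cuts[n] = list(result)
    return cuts

# Example: f = AND(AND(a, b), AND(c, d))
fanins = {"ab": ("a", "b"), "cd": ("c", "d"), "f": ("ab", "cd")}
order = ["a", "b", "c", "d", "ab", "cd", "f"]
print(enumerate_cuts(order, fanins)["f"])
# cuts of f include {f}, {ab, cd}, {a, b, cd}, {ab, c, d}, {a, b, c, d}
```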

Combinational Rewriting

    iterate 10 times {
        for each AIG node {
            for each k-cut {
                derive the node output as a function of the cut variables
                if ( a smaller AIG is in the pre-computed library )
                    rewrite using the improved AIG structure
            }
        }
    }

Note: for 4-cuts, each AIG node has on average 5 cuts, compared to a SIS node with only 1 cut. Rewriting at a node can be very fast, using hash-table lookups, truth-table manipulation, and disjoint decomposition.
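The "derive node output as a function of the cut variables" step amounts to computing a small truth table over the cut leaves, which then serves as the hash-table key into the precomputed library. A simplified sketch (complemented edges ignored; in practice the truth table is also brought into NPN-canonical form before lookup):

```python
# Sketch: computing the local function of a node over one of its cuts.
# The resulting truth table is the key for the precomputed-library lookup.

def cut_truth_table(node, cut, fanins):
    """Return the function of 'node' over the cut leaves as a tuple of bits."""
    leaves = sorted(cut)
    sims = {leaf: [(m >> i) & 1 for m in range(2 ** len(leaves))]
            for i, leaf in enumerate(leaves)}       # elementary variables
    def eval_node(n):
        if n not in sims:
            a, b = fanins[n]
            sims[n] = [x & y for x, y in zip(eval_node(a), eval_node(b))]
        return sims[n]
    return tuple(eval_node(node))

# f = AND(AND(a, b), AND(c, d)) over the cut {a, b, c, d} is the 4-input AND:
fanins = {"ab": ("a", "b"), "cd": ("c", "d"), "f": ("ab", "cd")}
print(cut_truth_table("f", {"a", "b", "c", "d"}, fanins))
# -> 16 bits, all 0 except the last minterm (a = b = c = d = 1)
```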

Combinational Rewriting Illustrated The figure shows the Working AIG and the History AIG before and after rewriting a node n.
- AIG rewriting looks at one AIG node, n, at a time
- A set of new nodes replaces the old fanin cone of n
- The rewriting can account for a better implementation that reuses existing nodes in the network (DAG-aware)
- A History AIG can be used to keep track of the transformations: the old root and the new root nodes are grouped into an equivalence class (more on this later)

Comparison of Two Syntheses
"Classical" synthesis:
- Boolean network
- Network manipulation (algebraic): elimination, decomposition (common-kernel extraction)
- Node minimization: Espresso, don't-cares computed using BDDs
- Resubstitution
"Contemporary" synthesis:
- AIG network
- DAG-aware AIG rewriting (Boolean): several related algorithms (rewriting, refactoring, balancing)
- Node minimization: Boolean decomposition, don't-cares computed using simulation and SAT
- Resubstitution with don't-cares
Note: here all algorithms are scalable: no SOPs, no BDDs, no Espresso.

Node Minimization Comparison
- Classical flow: use BDDs to find don't-cares, express the node as an SOP, and call Espresso
- Contemporary flow: evaluate the rewriting gain for all k-cuts of the node and take the best result; use don't-cares later
Note: computing cuts becomes a fundamental computation.

Types of Don't-Cares
- SDCs: input patterns that never appear at the inputs of a node due to its transitive fanin
- ODCs: input patterns for which the output of a node is not observable
- EXDCs: pre-specified or computed external don't-cares (e.g. subsets of unreachable states)

Illustration of SDCs and ODCs (combinational) In the figure, the pattern x = 0, y = 1 is an SDC for node F: it never appears at F's inputs (limited satisfiability). The pattern a = 1, b = 1 is an ODC for F: under it, the output of F is not observable (limited observability).
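Both notions can be checked by brute force on a small example. The network below is hypothetical (not necessarily the circuit drawn on the slide): node F = x XOR y with fanins x = p OR q and y = p AND q, and a primary output that masks F whenever q = 1.

```python
# Brute-force illustration of SDCs and ODCs on a small hypothetical network.
from itertools import product

def x_fn(p, q): return p | q
def y_fn(p, q): return p & q
def F_fn(x, y): return x ^ y                 # the node under analysis
def z_fn(F, p, q): return F & (1 - q)        # primary output: q = 1 masks F

occurring, odc_minterms = set(), set()
for p, q in product([0, 1], repeat=2):
    x, y = x_fn(p, q), y_fn(p, q)
    occurring.add((x, y))
    # flip F's output; if the primary output does not change, F is unobservable
    if z_fn(F_fn(x, y), p, q) == z_fn(F_fn(x, y) ^ 1, p, q):
        odc_minterms.add((p, q))

sdc_patterns = set(product([0, 1], repeat=2)) - occurring
print("SDC patterns at F's inputs (x, y):", sdc_patterns)  # {(0, 1)}: x = 0 forces y = 0
print("ODC minterms (p, q):", odc_minterms)                # q = 1 masks F at the output
```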

Scalability of Don't-Care Computation
- Scalability is achieved by windowing: a window defines the local context of a node
- Don't-cares are computed and used in:
  - post-mapping resynthesis (a Boolean network derived from the AIG by technology mapping)
  - high-effort AIG minimization (an AIG with some nodes clustered)

Windowing a Node in the Network A window for a node in a Boolean (SIS) network is the context in which its don't-cares are computed. It includes:
- n levels of the TFI (e.g. n = 3)
- m levels of the TFO (e.g. m = 3)
- all reconvergent paths captured in this scope
A window with its PIs and POs can be considered as a separate network.
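A simplified sketch of window construction, assuming the network is given as fanin and fanout dictionaries. The handling of reconvergent paths between the collected sets is omitted, and all names are illustrative.

```python
# Simplified window construction: m levels of TFO around the node, plus enough
# TFI of those nodes to make the window a self-contained sub-network.

def collect(start, neighbors, depth):
    """Collect nodes reachable from 'start' within 'depth' levels."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {m for n in frontier for m in neighbors.get(n, ())} - seen
        seen |= frontier
    return seen

def window(node, fanins, fanouts, n=3, m=3):
    """Return the node set of a window: m levels of TFO and (n + m) levels of
    TFI of every TFO node."""
    tfo = collect(node, fanouts, m)
    nodes = set(tfo)
    for t in tfo:
        nodes |= collect(t, fanins, n + m)
    return nodes

def window_boundaries(nodes, fanins, fanouts):
    """Window PIs: nodes with a fanin outside the window.
       Window POs: nodes with a fanout outside the window."""
    pis = {v for v in nodes if not set(fanins.get(v, ())) <= nodes}
    pos = {v for v in nodes if not set(fanouts.get(v, ())) <= nodes}
    return pis, pos
```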

Don't-Care Computation Framework A "miter" is constructed for the window: two copies of the window share the window inputs X, one copy has node n inverted, and the window POs of the two copies are compared. The miter characterizes the care set C(X) of node n.

Resubstitution Resubstitution considers a node in a Boolean network and expresses it using a different set of fanins. The computation can be enhanced by the use of don't-cares.

Resubstitution with Don't-Cares - Overview Consider all or some nodes in the Boolean network. For each node:
- Create a window
- Select possible fanin nodes (divisors)
- For each candidate subset of divisors:
  - If possible, rule it out with simulation
  - Check resubstitution feasibility using SAT
  - Compute the resubstitution function using interpolation (a low-cost by-product of the proof of unsatisfiability)
- Update the network if there is an improvement
A sketch of the simulation filter follows.
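Here is a sketch of the "rule it out with simulation" step, using Python integers as bit-parallel simulation signatures: any divisor set that already fails the distinguishing condition on the simulated care patterns is discarded without calling SAT. Names are illustrative.

```python
# Cheap pre-filter before SAT: two simulated care patterns that disagree on F
# but agree on all divisors rule the divisor set out immediately.

def ruled_out_by_simulation(f_sig, care_sig, divisor_sigs, num_patterns):
    """Return True if the simulated patterns already refute the divisor set."""
    seen = {}   # divisor value combination -> value of F
    for i in range(num_patterns):
        if not (care_sig >> i) & 1:
            continue                      # don't-care pattern: skip
        key = tuple((d >> i) & 1 for d in divisor_sigs)
        f_val = (f_sig >> i) & 1
        if seen.setdefault(key, f_val) != f_val:
            return True                   # counterexample pair found
    return False

# Four care patterns; F = 1 on patterns 0 and 3.
# A single divisor equal to F's complement is (of course) not ruled out;
# a divisor that fails to distinguish patterns 0 and 1 is ruled out.
print(ruled_out_by_simulation(0b1001, 0b1111, [0b0110], 4))   # False
print(ruled_out_by_simulation(0b1001, 0b1111, [0b0011], 4))   # True
```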

Resubstitution with Don't-Cares Given:
- a node function F(x) to be replaced
- a care set C(x) for the node
- a candidate set of divisors {gi(x)} for re-expressing F(x)
Find: a resubstitution function h(y) such that F(x) = h(g(x)) on the care set.
Substitution Theorem: the function h(y) exists if and only if, for every pair of care minterms x1 and x2 with F(x1) != F(x2), there exists k such that gk(x1) != gk(x2). The theorem is made concrete in the sketch below.
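A brute-force rendering of the theorem: the real flow proves feasibility with SAT and extracts h by interpolation, whereas this sketch only enumerates the minterms of a small function to make the statement concrete. When feasible, h is obtained by mapping each reachable divisor-value combination to the corresponding value of F.

```python
from itertools import product

def resubstitute(F, care, divisors, num_vars):
    """Return h as a dict {divisor values -> output}, or None if infeasible."""
    h = {}
    for x in product([0, 1], repeat=num_vars):
        if not care(*x):
            continue                          # don't-care minterm: skip
        key = tuple(g(*x) for g in divisors)
        if h.setdefault(key, F(*x)) != F(*x):
            return None                       # two care minterms violate the theorem
    return h
```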

Example of Resubstitution Substitution Theorem: any care-minterm pair distinguished by F(x) must also be distinguished by at least one of the candidates {gk(x)}.
F(x) = (x1 ⊕ x2)(x2 ⊕ x3)
Two candidate sets: {g1 = x1'x2, g2 = x1x2'x3} and {g3 = x1 ⊕ x2, g4 = x2 ⊕ x3}.

    x1x2x3   F   g1   g2   g3   g4
     000     0    0    0    0    0
     001     0    0    0    0    1
     010     1    1    0    1    1
     011     0    1    0    1    0
     100     0    0    0    1    0
     101     1    0    1    1    1
     110     0    0    0    0    1
     111     0    0    0    0    0

Set {g1, g2} cannot be used for resubstitution (care minterms 010 and 011 differ in F but agree on both g1 and g2), while set {g3, g4} can (checking all minterm pairs confirms it; indeed h(g3, g4) = g3 g4).
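Running the brute-force checker sketched above (the hypothetical resubstitute function) on this example confirms both claims:

```python
# Applying the brute-force checker to the slide's example.
F  = lambda x1, x2, x3: (x1 ^ x2) & (x2 ^ x3)
g1 = lambda x1, x2, x3: (1 - x1) & x2
g2 = lambda x1, x2, x3: x1 & (1 - x2) & x3
g3 = lambda x1, x2, x3: x1 ^ x2
g4 = lambda x1, x2, x3: x2 ^ x3
care = lambda x1, x2, x3: 1                 # no external don't-cares here

print(resubstitute(F, care, [g1, g2], 3))   # None: minterms 010 and 011 clash
print(resubstitute(F, care, [g3, g4], 3))   # a map realizing h(g3, g4) = g3 AND g4
```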

Checking Resubstitution Using SAT The miter for the resubstitution check asks whether two care minterms exist on which F(x) differs while all candidate divisors gk(x) agree (Substitution Theorem: any care-minterm pair distinguished by F(x) must also be distinguished by at least one of the candidates {gk(x)}). Note the use of the care set. The resubstitution function exists if and only if this problem is unsatisfiable.

Computing Dependency Function h by Interpolation (Theory) Consider two sets of clauses, A(x, y) and B(y, z), such that A(x, y) ∧ B(y, z) is unsatisfiable, where y are the only variables common to A and B. An interpolant of the pair (A(x, y), B(y, z)) is a function h(y), depending only on the common variables y, such that A(x, y) ⇒ h(y) and h(y) ∧ B(y, z) is unsatisfiable. (In the Boolean space (x, y, z), the interpolant h contains A and is disjoint from B.)

Computing Dependency Function h by Interpolation (Implementation) Problem: find a function h(y) such that C(x) ⇒ [h(g(x)) ≡ F(x)], i.e. F(x) is expressed in terms of {gk} on the care set. Solution:
- Prove the corresponding SAT problem unsatisfiable
- Derive the resolution proof of unsatisfiability [Goldberg/Novikov, DATE'03]
- Divide the clauses into A clauses and B clauses
- Derive the interpolant from the unsatisfiability proof [McMillan, CAV'03]
- Use the interpolant as the dependency function h(g)
- Replace F(x) by h(g) if the cost function improves
Notes on this solution: it uses don't-cares, does not use Espresso, and is more scalable.

Sequential Synthesis and Sequential Equivalence Checking (SEC)
- Sequential SAT sweeping
- Retiming
- Sequential equivalence checking
- Focus: ensuring verifiability

SAT Sweeping Combinational CEC applies SAT to the output of a miter.
- Naive approach: build the output miter and call SAT; works well for many easy problems
- Better approach: SAT sweeping based on incremental SAT solving
  - Detects possibly equivalent nodes using simulation (candidate constant nodes, candidate equivalent nodes)
  - Runs SAT on the intermediate miters in a topological order, proving internal equivalences as it goes
  - Refines the candidates using counterexamples
A sketch of the simulation-based candidate detection and refinement follows.
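The simulation half of the sweeping loop can be sketched as follows; the SAT calls that actually prove or refute each candidate pair, and the incremental-solver bookkeeping, are left out. Here eval_node is assumed to evaluate a node under one input pattern; all names are illustrative.

```python
# Sketch: grouping nodes into candidate equivalence classes by simulation
# signatures, and refining the classes with counterexample patterns.
import random
from collections import defaultdict

def candidate_classes(eval_node, nodes, num_inputs, num_patterns=64):
    """Simulate random patterns; nodes with equal signatures are candidates."""
    patterns = [[random.randint(0, 1) for _ in range(num_inputs)]
                for _ in range(num_patterns)]
    classes = defaultdict(list)
    for n in nodes:
        signature = tuple(eval_node(n, p) for p in patterns)
        classes[signature].append(n)
    return [c for c in classes.values() if len(c) > 1]

def refine(classes, eval_node, counterexample):
    """Split classes whose members disagree on a SAT counterexample pattern."""
    refined = []
    for c in classes:
        buckets = defaultdict(list)
        for n in c:
            buckets[eval_node(n, counterexample)].append(n)
        refined.extend(b for b in buckets.values() if len(b) > 1)
    return refined
```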

Sequential SAT Sweeping Similar to combinational SAT sweeping in that it detects node equivalences, but the equivalences are sequential: they are guaranteed to hold only in the reachable state space.
- Every combinational equivalence is a sequential one, but not vice versa, so combinational SAT sweeping is run beforehand
- Sequential equivalence is proved by k-step induction: a base case and an inductive case
- An efficient implementation of induction is key!

k-step Induction (illustrated in the figure for k = 2 and candidate equivalences {A = B}, {C = D})
- Base case: prove the internal equivalences in the initialized time frames 1 through k (starting from the initial state)
- Inductive case: assume the internal equivalences in uninitialized frames 1 through k (a symbolic starting state), and prove them in frame k+1, in a topological order
A brute-force illustration of the two checks follows.
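For intuition, here is a brute-force rendering of the two checks on a toy machine, with the SAT solver replaced by explicit enumeration of input sequences and, for the inductive case, arbitrary starting states. All names are illustrative.

```python
# Brute-force k-step induction for proving two nodes sequentially equivalent
# (the real flow discharges these checks with SAT).
from itertools import product

def k_step_induction(step, init, outA, outB, states, inputs, k):
    # Base case (BMC): equivalence holds in the first k initialized frames.
    for seq in product(inputs, repeat=k):
        s = init
        for i in seq:
            if outA(s, i) != outB(s, i):
                return False, "base case fails"
            s = step(s, i)
    # Inductive case: from ANY starting state, if the equivalence holds in
    # frames 1..k, it also holds in frame k+1.
    for s0 in states:
        for seq in product(inputs, repeat=k + 1):
            s, assumed = s0, True
            for i in seq[:k]:
                assumed = assumed and (outA(s, i) == outB(s, i))
                s = step(s, i)
            if assumed and outA(s, seq[k]) != outB(s, seq[k]):
                return False, "induction fails; try a larger k"
    return True, "sequentially equivalent (proved by %d-step induction)" % k

# Two registers that start at 0 and toggle together are sequentially equal:
step = lambda s, i: (s[0] ^ i, s[1] ^ i)
outA = lambda s, i: s[0]
outB = lambda s, i: s[1]
print(k_step_induction(step, (0, 0), outA, outB,
                       list(product([0, 1], repeat=2)), [0, 1], k=1))
```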

Efficient Implementation Two observations:
- Both the base and inductive cases of k-step induction are combinational SAT sweeping problems, so the tricks and know-how from combinational sweeping are applicable (the base case is just BMC)
- The same integrated package can be used: it starts with simulation, performs node checking in a topological order, and benefits from simulating the counterexamples
Speculative reduction deals with how the assumptions are used in the inductive case.

Speculative Reduction Given: a sequential circuit, the number of frames to unroll (k), and candidate equivalence classes, with one node in each class designated as the representative. Speculative reduction moves fanouts to the representatives.
- Makes 80% of the constraints redundant
- Dramatically simplifies the time frames (observed 3x reductions)
- Leads to savings of 100-1000x in runtime during incremental SAT

Guaranteed Verifiability for Sequential SAT Sweeping Observation: the circuit resulting from sequential SAT sweeping using k-step induction can itself be sequentially verified by k-step induction (using some other k-step induction prover).

Experimental Synthesis Results
- Academic benchmarks: 25 test cases (ITC '99, ISCAS '89, IWLS '05)
- Industrial benchmarks: 50 test cases
Comparing three experimental synthesis runs:
- Baseline: combinational synthesis and mapping
- Register correspondence (Reg Corr): structural register sweep and register correspondence
- Signal correspondence (Sig Corr): signal correspondence

Experimental Synthesis Results The result tables for the academic and industrial benchmarks are shown on the slides; the numbers are geometric averages and their ratios. The source of the industrial benchmarks is not disclosed; they have a single clock domain.

Sequential Synthesis and Equivalence Checking Problem: iterated retiming and sequential synthesis have been shown to be very effective, but sequential equivalence checking is PSPACE-complete. How can it be made simpler? Leave a trail of results (a history).

Recording a History Observation: each transformation can be broken down into a sequence of small steps:
- Combinational rewriting
- Sequential rewriting
- Retiming
- Using don't-cares obtained from a window

Recording Synthesis History Two AIG managers are used: a Working AIG (WAIG) and a History AIG (HAIG). Combinational structural hashing is used in both managers. Two node mappings are supported:
- Every node in the WAIG points to a node in the HAIG
- Some nodes in the HAIG point to other nodes in the HAIG that are sequentially equivalent

Sequential Rewriting The figure shows a rewriting step on the History AIG over the sequential cut {a, b, b1, c1, c}: new nodes are added, and the old and new root nodes are recorded as sequentially equivalent. The History AIG accumulates sequential equivalence classes.

Practicality Conceptually this is easy: just modify each synthesis algorithm so that every Working AIG operation is mirrored in the History AIG:

    Working AIG                 History AIG
    createAigManager    <--->   createAigManager
    deleteAigManager    <--->   deleteAigManager
    createNode          <--->   createNode, setWaigToHaigMapping
    replaceNode         <--->   setEquivalentHaigMapping
    deleteNode_recur    <--->   do nothing

Practically, it is harder than we thought. Since there was little interest, we did not make the effort to put it fully into ABC. It might still be of interest to a company that does both synthesis and verification. A sketch of such mirroring follows.
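Here is one way the table above could be realized as thin wrappers that mirror every Working-AIG operation into the History AIG. The waig/haig objects and their methods are assumptions made for illustration, not ABC's actual code.

```python
# Sketch of recording synthesis history: every WAIG operation updates the HAIG.

class HaigRecorder:
    def __init__(self, waig, haig):
        self.waig, self.haig = waig, haig
        self.w2h = {}                        # WAIG node -> HAIG node

    def create_node(self, fanin0, fanin1):
        w = self.waig.create_node(fanin0, fanin1)
        h = self.haig.create_node(self.w2h.get(fanin0, fanin0),
                                  self.w2h.get(fanin1, fanin1))
        self.w2h[w] = h                      # setWaigToHaigMapping
        return w

    def replace_node(self, old, new):
        self.waig.replace_node(old, new)
        # setEquivalentHaigMapping: record that the two HAIG nodes are
        # (sequentially) equivalent; nothing is deleted from the HAIG
        self.haig.set_equivalent(self.w2h[old], self.w2h[new])

    def delete_node_recur(self, node):
        self.waig.delete_node_recur(node)    # History AIG: do nothing
```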

Summary and Conclusions
- Algorithms developed for either synthesis or verification are effective in the other
- This leads to new, improved (faster and more scalable) ways to synthesize and to equivalence check
- Sequential synthesis can be effective, but one must be able to check equivalence: limit the scope of sequential synthesis and leave a history trail

end