SAT-Based Optimization with Don’t-Cares Revisited


Alan Mishchenko, Robert Brayton
Department of EECS, UC Berkeley

Overview
- Motivation for revisiting don’t-cares
- Overview of proposed improvements in:
  - Target node selection
  - Windowing and candidate divisor selection
  - Target node support minimization
  - Deriving updated circuit structure
- Experiments
- Conclusion

Motivation
- Don’t-cares are often not used because:
  - Methods are hard to follow and not general enough
  - Computation is believed to be slow and unscalable
  - Improvements are not significant
- In this work, we address these limitations:
  - Clarify and unify the use of don’t-cares
  - Show how to use them efficiently and scalably
  - Demonstrate a substantial improvement in QoR

A Don’t-Care-Based Optimization Flow
- Input netlist
- Iterator over nodes to be optimized (node selection / ordering depends on the optimization criteria: area, delay, power, etc.)
- For each target node:
  - Candidate divisor selection and windowing
  - Node support minimization
  - Synthesis of the updated logic structure
- If a transform is accepted, update the network, resource limits, and node selection criteria
- Output netlist

Improved Target Node Selection
- In the past, we considered all nodes in a topological order
- Now, selection depends on the optimization goal
  - In delay optimization, we maintain a priority queue of delay-critical nodes, sorted by the number of critical PI-to-PO paths going through them; each time, we target the node that can reduce delay the most
  - In area optimization, we maintain a priority queue of nodes, sorted by the size of their MFFC; each time, we target the node that can reduce area the most
- In both cases, we choose nodes to maximize the gains (see the sketch below)
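
The goal-dependent ordering can be illustrated with a small piece of code. The sketch below is a hypothetical, simplified stand-in for the data the flow would maintain: each node carries a count of critical PI-to-PO paths (for delay mode) and an MFFC size (for area mode), and the next target is simply the node with the largest gain metric.

    /* Hypothetical sketch of goal-dependent target-node ordering.
       The Node fields stand in for data the flow would maintain. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int id;
        int nCritPaths;  /* critical PI-to-PO paths through the node (delay mode) */
        int mffcSize;    /* size of the node's MFFC (area mode) */
    } Node;

    static int gUseDelayMode = 1;  /* 1: rank by criticality; 0: rank by MFFC size */

    /* Higher-gain nodes first, so the next target is always at the front. */
    static int compareNodes(const void* pa, const void* pb)
    {
        const Node* a = (const Node*)pa;
        const Node* b = (const Node*)pb;
        int ka = gUseDelayMode ? a->nCritPaths : a->mffcSize;
        int kb = gUseDelayMode ? b->nCritPaths : b->mffcSize;
        return kb - ka;
    }

    int main(void)
    {
        Node nodes[] = { {0, 5, 2}, {1, 12, 1}, {2, 3, 7}, {3, 9, 4} };
        int n = sizeof(nodes) / sizeof(nodes[0]);

        qsort(nodes, n, sizeof(Node), compareNodes);
        printf("next delay target: node %d (%d critical paths)\n",
               nodes[0].id, nodes[0].nCritPaths);

        gUseDelayMode = 0;
        qsort(nodes, n, sizeof(Node), compareNodes);
        printf("next area target: node %d (MFFC size %d)\n",
               nodes[0].id, nodes[0].mffcSize);
        return 0;
    }

A real implementation would keep the nodes in a priority queue (heap) that is updated as transforms are accepted, rather than re-sorting an array after every change.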

Improved Divisor Selection
- In the past, we did windowing before divisor selection
- Now, we do divisor selection before windowing
  - If we start with windowing, we may end up including logic that is not needed for achieving the optimization goal
  - For example, if the goal is to reduce delay at a node, we should find useful candidate divisors first and then compute a window for them, not vice versa
- Divisor selection depends on the optimization goal
  - For example, to reduce delay at a node, we consider divisors in the TFI of the critical fanins; other candidate divisors are not useful and can be skipped to save runtime (see the sketch below)
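
One possible way to restrict divisor candidates to the TFI of the critical fanin is sketched below: starting from the target’s latest-arriving fanin, a DFS over fanins collects candidates. The fixed-size arrays, the toy network, and the arrival-time field are illustrative assumptions, not the actual data structures used by the tool.

    /* Hypothetical sketch: collect candidate divisors from the TFI of the
       target's most timing-critical fanin. Graph and arrival times are toy data. */
    #include <stdio.h>

    #define MAX_NODES  16
    #define MAX_FANINS 2

    typedef struct {
        int nFanins;
        int fanins[MAX_FANINS];
        int arrival;             /* toy arrival time (logic level) */
    } Node;

    static Node g_nodes[MAX_NODES];
    static int  g_visited[MAX_NODES];

    /* Depth-first collection of the transitive fanin cone. */
    static void collectTfi(int id, int* pDivs, int* pCount)
    {
        int i;
        if (g_visited[id])
            return;
        g_visited[id] = 1;
        pDivs[(*pCount)++] = id;
        for (i = 0; i < g_nodes[id].nFanins; i++)
            collectTfi(g_nodes[id].fanins[i], pDivs, pCount);
    }

    int main(void)
    {
        /* Toy network: primary inputs 0..2, internal nodes 3..6, target = 6. */
        g_nodes[3] = (Node){2, {0, 1}, 1};
        g_nodes[4] = (Node){2, {1, 2}, 1};
        g_nodes[5] = (Node){2, {3, 4}, 2};
        g_nodes[6] = (Node){2, {0, 5}, 3};   /* target: fanins 0 (arrival 0) and 5 (arrival 2) */

        /* Pick the latest-arriving fanin of the target as the critical one. */
        int target = 6, critFanin = -1, i;
        for (i = 0; i < g_nodes[target].nFanins; i++) {
            int f = g_nodes[target].fanins[i];
            if (critFanin == -1 || g_nodes[f].arrival > g_nodes[critFanin].arrival)
                critFanin = f;
        }

        /* Divisor candidates come from the TFI of the critical fanin only. */
        int divs[MAX_NODES], nDivs = 0;
        collectTfi(critFanin, divs, &nDivs);

        printf("critical fanin: %d; candidate divisors:", critFanin);
        for (i = 0; i < nDivs; i++)
            printf(" %d", divs[i]);
        printf("\n");
        return 0;
    }

A real implementation would also bound the traversal depth and the number of collected divisors, and would exclude nodes (such as those in the target’s MFFC) that cannot serve as divisors.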

Improved Windowing
- In the past, the window scope was fixed in advance
- Now, windowing is divisor-dependent
  - That is, we construct a window for a given set of divisors
- Windowing is reconvergence-driven (a simplified sketch of the reconvergence check follows this list)
  - Only that part of the TFO of the target node is included in the window which has reconvergence with the included part of the TFI
  - For example, if a part of the TFI or the TFO is disjoint-support with the rest of the window, we replace it by a free variable; this reduces window size and runtime without losing any optimization capability, because disjoint-support logic cones do not generate additional don’t-cares
- Windowing takes into account “observability leaks”
  - For example, if one of the fanouts of the target node is a PO, there is no need to consider observability don’t-cares even if the TFO has reconvergence with the TFI, because the node’s function cannot change
  - The same principle holds in a more general sense: anytime a fanout f of a node m in the TFO of the target node n has no reconvergence with the TFI, the window boundary stops at node m (there is no need to include the fanouts of node m in the window of node n)
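
The core reconvergence test can be illustrated as follows. This is a deliberately simplified, hypothetical one-level version: it only inspects the immediate fanins of a fanout node, whereas the real windowing applies the idea transitively over the TFO.

    /* Hypothetical sketch of the reconvergence test used to decide whether a
       fanout of the target node belongs in the window. One-level simplification:
       a fanout is kept only if one of its other fanins lies in the marked TFI. */
    #include <stdio.h>

    #define MAX_FANINS 2

    typedef struct {
        int nFanins;
        int fanins[MAX_FANINS];
    } Node;

    /* Returns 1 if fanout node m reconverges with the marked TFI of target n,
       i.e., m also depends on a window node through a path that avoids n. */
    static int fanoutReconverges(const Node* nodes, int m, int target, const int* inTfi)
    {
        int i;
        for (i = 0; i < nodes[m].nFanins; i++) {
            int f = nodes[m].fanins[i];
            if (f != target && inTfi[f])
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        /* Toy network: PIs 0..2; node 3 = target with fanins {0,1};
           node 4 reconverges (fanins {3,1}); node 5 does not (fanins {3,2}). */
        Node nodes[6] = { {0}, {0}, {0}, {2, {0, 1}}, {2, {3, 1}}, {2, {3, 2}} };
        int  inTfi[6] = { 1, 1, 0, 0, 0, 0 };   /* marked TFI of the target: nodes 0 and 1 */
        int  fanouts[2] = { 4, 5 }, i;

        for (i = 0; i < 2; i++)
            printf("fanout %d: %s\n", fanouts[i],
                   fanoutReconverges(nodes, fanouts[i], 3, inTfi)
                       ? "include (reconverges with TFI)"
                       : "stop here (no reconvergence, no ODCs)");
        return 0;
    }

A full implementation would apply the test transitively over the TFO rather than only to immediate fanouts, and would also handle the PO “observability leak” case mentioned above.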

Improved Support Minimization
- In the past (“mfs” and “mfs2”), we used a heuristic procedure that tries to remove some old fanins and add new fanins in such a way that (1) the optimization goal is achieved, (2) the support is feasible, and (3) the support is minimized
- Now we use a new procedure (sat_minimize_assumptions), which takes a priority-ordered list of all candidate divisors and selects a minimal feasible subset of them satisfying the following criterion: a candidate divisor is included in the subset if and only if there is no other node with lower priority that can do the job

New Procedure: sat_minimize_assumptions()
- The procedure takes a set of assumptions that makes the given problem instance UNSAT
- It returns a minimized set, for which the problem is still UNSAT, while only including the necessary assumptions in the priority order

    int sat_minimize_assumptions( sat_solver* s, int * pLits, int nLits )
    {
        if only one assumption is left, check if the problem is UNSAT
            if the problem is UNSAT, return 0; otherwise, return 1
        divide the assumptions into two parts (left and right)
        assume the left part and solve recursively for the right part
        swap the parts: assume the minimized right part and solve recursively for the left part
        return the union of the two minimized parts
    }
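
To make the recursion concrete, here is a self-contained toy version. Instead of a real SAT solver, it uses a hypothetical is_unsat() oracle that reports UNSAT whenever the assumption set contains both of two “essential” literals; the names, the oracle, and the toy data are illustrative assumptions, not the actual ABC implementation.

    /* Toy version of recursive assumption minimization.
       is_unsat() is a hypothetical oracle standing in for a SAT call. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_LITS 64
    static int g_assumed[MAX_LITS];   /* currently assumed literals */
    static int g_nAssumed = 0;

    /* The toy "problem" is UNSAT iff both literal 3 and literal 7 are assumed. */
    static int is_unsat(void)
    {
        int i, has3 = 0, has7 = 0;
        for (i = 0; i < g_nAssumed; i++) {
            if (g_assumed[i] == 3) has3 = 1;
            if (g_assumed[i] == 7) has7 = 1;
        }
        return has3 && has7;
    }

    static void assume(const int* pLits, int nLits)
    {
        memcpy(g_assumed + g_nAssumed, pLits, nLits * sizeof(int));
        g_nAssumed += nLits;
    }

    static void unassume(int nLits) { g_nAssumed -= nLits; }

    /* Minimizes pLits in place (keeping priority order) so that the problem is
       still UNSAT under the already-assumed literals plus the kept ones.
       Returns the number of kept literals. Mirrors the slide's pseudocode. */
    static int minimize_assumptions(int* pLits, int nLits)
    {
        int nLeft, nRightKept, nLeftKept;
        if (nLits == 1)  /* is the last literal needed, given what is assumed? */
            return is_unsat() ? 0 : 1;
        nLeft = nLits / 2;

        /* Assume the left part; minimize the right part. */
        assume(pLits, nLeft);
        nRightKept = minimize_assumptions(pLits + nLeft, nLits - nLeft);
        unassume(nLeft);

        /* Assume the minimized right part; minimize the left part. */
        assume(pLits + nLeft, nRightKept);
        nLeftKept = minimize_assumptions(pLits, nLeft);
        unassume(nRightKept);

        /* Union: compact the kept right part to follow the kept left part. */
        memmove(pLits + nLeftKept, pLits + nLeft, nRightKept * sizeof(int));
        return nLeftKept + nRightKept;
    }

    int main(void)
    {
        int lits[] = { 1, 3, 5, 7, 9 };   /* UNSAT together; only 3 and 7 are needed */
        int i, n = minimize_assumptions(lits, 5);
        printf("kept %d assumptions:", n);
        for (i = 0; i < n; i++)
            printf(" %d", lits[i]);
        printf("\n");                     /* prints: kept 2 assumptions: 3 7 */
        return 0;
    }

With a real SAT solver, is_unsat() becomes a solve call under the currently assumed literals, and the kept assumptions correspond to the divisors that must remain in the node’s support.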

Improved Logic Structure Synthesis
- In the past, one node at a time was targeted and changed by adding/removing fanins
- Now, we use QBF to synthesize a LUT structure meeting the optimization goal at each target node
  - QBF can handle multi-output targets
  - QBF can produce multi-node LUT structures
- QBF is fast in this application because:
  - Relatively small LUT structures are considered
  - Connectivity is limited due to the delay profile
  - Don’t-cares lead to fewer care minterms to satisfy
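
As a rough illustration of the kind of 2QBF question being solved (the symbols and the exact formulation below are an assumption about the general scheme, not the paper’s precise encoding), the resynthesis problem for a single-output target can be posed as

    exists p . forall x . care(x) -> ( F_struct(x, p) == f(x) )

where x are the window inputs, care(x) encodes the care set of the target (its complement is the don’t-care set computed from the window), f(x) is the target’s function, p are the configuration bits of the candidate LUT structure, and F_struct(x, p) is the function realized by that structure under configuration p. Fewer care minterms mean fewer constraints on p, which is one reason don’t-cares make this QBF step faster.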

Conclusion
- This work proposes new heuristics and improved algorithms for using don’t-cares
- For the first time in our work on don’t-cares:
  - SAT-based optimization is used for delay optimization
  - Goal-oriented target-node selection is used
  - Multi-node LUT structures are synthesized
  - Multi-output targets can be used

Abstract
The paper describes a SAT-based framework for logic optimization with don’t-cares aimed at reducing delay and area after LUT mapping. While individual components of the framework are known, its novelty is in synergistically combining the following aspects of SAT-based optimization for the first time: a) improved computation of delay and area criticality, b) novel reconvergence-driven windowing and divisor selection, c) the use of complete don’t-cares, and d) SAT-based generation of new useful cut-points in the network. Experimental results show that a preliminary implementation improves delay after LUT mapping at the cost of some area increase, compared to previous methods.