AN ITERATIVE TECHNIQUE FOR IMPROVED TWO-LEVEL LOGIC MINIMIZATION Kunal R. Shenoy, Nikhil S. Saluja (University of Colorado, Boulder), Sunil P. Khatri (Texas A&M University, College Station, TX)

Presentation transcript:

AN ITERATIVE TECHNIQUE FOR IMPROVED TWO-LEVEL LOGIC MINIMIZATION Kunal R. Shenoy, Nikhil S. Saluja (University of Colorado, Boulder), Sunil P. Khatri (Texas A&M University, College Station, TX)

MOTIVATION  QUINE-McCLUSKEY  Exact method for logic minimization  Compute time is prohibitive (doubly exponential)  Times out on medium-complexity benchmarks  ESPRESSO  Heuristic method for logic minimization  Based on the unate recursive paradigm  Inferior to Quine-McCluskey in minimization quality  Sacrifices some optimization for a large reduction in run time  Relatively unchanged for 20 years. [Chart: performance comparison between Quine-McCluskey and ESPRESSO.] HyperExact bridges the gap between these two methods.

HyperExact  A heuristic that extracts a user-defined number of cubes (hyper-cubes) from the onset, places them in a 'Hyper-Set' (HYPER_COVER), and runs ESPRESSO iteratively.  Improved minimization is obtained by treating all the hyper-cubes as don't cares.  Falls between Quine-McCluskey and ESPRESSO in both optimization quality and run time.  HyperExact trades off an acceptable increase in run time for better optimization than ESPRESSO.  Three variants were simulated on benchmarks.  One variant showed an improvement in 27 of the 58 cases in which Quine-McCluskey could potentially improve on ESPRESSO, with a peak improvement of 18%.

HYPER-BLACK  Initially, HYPER_COVER is set to nil. Next, a set of cubes D* = {Ci} is removed from the cover F and added to both the don't-care set D and HYPER_COVER. ESPRESSO is then called on the cover, and the process repeats until F becomes empty.
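The loop above can be sketched as follows. This is a toy model, not the authors' implementation: cubes are strings over {'0', '1', '-'}, the `espresso()` function is a stand-in that returns its onset unchanged, and selecting D* by cube size anticipates the SIZE heuristic described below.

```python
def cube_size(cube):
    """Number of minterms a cube covers: 2 ** (number of free literals)."""
    return 2 ** cube.count('-')

def espresso(onset, dont_cares):
    """Placeholder for the real ESPRESSO call; returns the cover unchanged."""
    return list(onset)

def hyper_black(F, D, k):
    """Repeatedly move k cubes of F into D (and HYPER_COVER), re-running
    the minimizer on the shrunken cover until F is empty."""
    hyper_cover = []                      # HYPER_COVER initialized to nil
    F, D = list(F), list(D)
    while F:
        F.sort(key=cube_size, reverse=True)
        d_star, F = F[:k], F[k:]          # D* = k cubes removed from F
        D += d_star                       # ... added to the don't-care set
        hyper_cover += d_star             # ... and to HYPER_COVER
        F = espresso(F, D)                # minimize what remains
    return hyper_cover, D

cover, dc = hyper_black(['1--', '0-1', '011'], [], k=1)
```

With a real minimizer in place of the stub, each iteration sees a larger don't-care set, which is what drives the reductions discussed below.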

HyperExact philosophy  GASP-style techniques try to find new primes for ESPRESSO to use  Our method instead allows the exploration of more and newer reductions, improving overall result quality  This is achieved by growing the don't-care set in each iteration  The technique is orthogonal to the GASP algorithm.

Two methods to choose D*  The first method chooses the k largest cubes in each iteration. The intuition is that the largest cubes, when inserted into D, maximize the chances of other cubes being reduced in different ways. We refer to this as the SIZE heuristic.  The second method computes the distance of each cube of F from the remaining cubes using the cdist() routine in ESPRESSO. The cubes of F are sorted in ascending order of the sum of their distances to the remaining cubes, and the k best cubes are selected from this sorted list. The intuition is that a cube with the lowest total distance to the other cubes in the cover is the one most likely to let other cubes be reduced in new ways. We refer to this as the DISTANCE heuristic.
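The two heuristics can be illustrated with a small sketch. The encoding is my own (cubes as strings over {'0', '1', '-'}), and `cdist()` here only mimics ESPRESSO's cube-distance routine: it counts the variables in which two cubes take opposing literals.

```python
def cdist(a, b):
    """Toy cube distance: number of positions where one cube has '0'
    and the other has '1'."""
    return sum(1 for x, y in zip(a, b) if {x, y} == {'0', '1'})

def size_heuristic(F, k):
    """SIZE: pick the k largest cubes (most free '-' literals)."""
    return sorted(F, key=lambda c: c.count('-'), reverse=True)[:k]

def distance_heuristic(F, k):
    """DISTANCE: pick the k cubes with the smallest total distance
    to the remaining cubes of the cover."""
    def total(c):
        return sum(cdist(c, other) for other in F if other is not c)
    return sorted(F, key=total)[:k]

F = ['1--', '010', '01-']
best_by_size = size_heuristic(F, 1)          # the widest cube
best_by_distance = distance_heuristic(F, 1)  # the most "central" cube
```

On this cover the two heuristics pick different cubes: SIZE favors '1--' (two free literals), while DISTANCE favors '010', whose total distance to the rest of the cover is smallest.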

HyperExact Variants  HYPER-RED  The motivating idea behind this variant is to reduce run-time overhead by folding the HYPER computations into the main ESPRESSO routine.  The HYPER-RED algorithm computes D inside the ESPRESSO routine itself.  It does reduce run time compared to HYPER-BLACK, but its results are inferior to HYPER-BLACK, as described later.  HYPER-BLUE  The major difference from HYPER-RED is that this algorithm extracts essential primes from F in each iteration.  This improves run time over HYPER-BLACK and HYPER-RED, though quality is still inferior to HYPER-BLACK.  The run-time decrease comes from each iteration manipulating fewer cubes once essential primes are removed.

Observations  Shown for the BLACK variant  The graph shows that run time decreases exponentially with the iteration number, which is the primary reason for the heuristic's success.  Don't cares grow with the iteration number, resulting in faster reduce/expand/irredundant computations.

Results  The BLACK method typically performs the best, with the most wins compared to RED or BLUE.  The BLACK method with the SIZE heuristic was the best choice in 17 cases at 10 iterations.  The RED method is faster than the BLACK method since the HYPER computations run inside the main ESPRESSO routine. However, RED performs worse than BLACK, since non-essential cubes may be inserted into HYPER_COVER, reducing the quality of its results.  BLUE is typically faster than RED, since extracting essential primes in every iteration shrinks the problem solved in each iteration. However, its results are only comparable to RED, and inferior to BLACK.