How Much Randomness Makes a Tool Randomized?
Petr Fišer, Jan Schmidt
Faculty of Information Technology, Czech Technical University in Prague
(IWLS'11)

Slide 2: Outline
Logic synthesis algorithms: what are they, and what is their nature?
Randomization – yes or no? Pros & cons.
Some randomized algorithms: successful examples.
Necessary measure of randomness: how much randomness is needed?

Slide 3: Logic Synthesis Algorithms
…are, for example: two-level minimization (Espresso), decomposition, resynthesis, resubstitution, …
In fact, they are state space search (exploration) algorithms, usually of a local search nature.
Complete strategies: branch & bound, best-first, BFS, DFS, …
Approximate strategies: best-only, first-improvement, …
Sometimes they can be iterated.

Slide 4: Logic Synthesis Algorithms
Most of these algorithms are deterministic: deterministic heuristics, usually of a “best-only” or “first-improvement” nature, with no random decisions. Two runs of the algorithm produce equal results; there is no chance of obtaining different results.
Some rare exceptions: simulated annealing, genetic algorithms, genetic programming.

Slide 5: Logic Synthesis Algorithms
State space search strategies:
- Deterministic heuristic approach: when there are two (or more) equally valued choices, take the first one → is it always a good choice?
- Exhaustive (complete) approach: when there are two (or more) equally valued choices, try all of them → computationally infeasible.
- Randomized approach: when there are two (or more) equally valued choices, take one randomly → all choices have equal chances, while exponential explosion is avoided.
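
To make the three tie-breaking policies concrete, here is a minimal Python sketch (not from the talk; the `moves` list, the `score` function, and all names are illustrative assumptions):

```python
import random

def best_candidates(moves, score):
    """Return all moves that share the best (highest) score."""
    best = max(score(m) for m in moves)
    return [m for m in moves if score(m) == best]

def pick_first(moves, score):
    # Deterministic heuristic: among equally valued choices, take the first.
    return best_candidates(moves, score)[0]

def pick_all(moves, score):
    # Exhaustive (complete) approach: branch on every equally valued
    # choice; done recursively, this explodes exponentially.
    return best_candidates(moves, score)

def pick_random(moves, score, rng=random):
    # Randomized approach: every equally valued choice has the same chance.
    return rng.choice(best_candidates(moves, score))
```

Only `pick_random` needs a source of randomness, and, as the later slides show, a surprisingly weak one may suffice.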

Slide 6: Why Randomness?
Randomized move selection: all choices have equal chances, while exponential explosion is avoided.
It may be used in an iterative way: run the randomized algorithm several times and take the best solution; solutions from different iterations may be combined.
Randomized local search is a compromise between best-only (first-improvement) and exhaustive search: solution quality may be improved at the cost of runtime → a quality / runtime tradeoff.
Anything new? Indeed not – see SA, GA, GP, … But these are generic algorithms adapted to logic synthesis → a call for dedicated randomized logic synthesis algorithms?
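
The iterative use described above is just a restart loop that keeps the best result; a minimal sketch, assuming a `minimize_once` callable that returns a (solution, cost) pair:

```python
import random

def iterated_minimize(minimize_once, problem, iterations, seed=None):
    """Run a randomized algorithm repeatedly, keep the best solution.

    More iterations trade runtime for solution quality (the
    quality / runtime tradeoff from the slide).
    """
    rng = random.Random(seed)
    best_solution, best_cost = None, float("inf")
    for _ in range(iterations):
        solution, cost = minimize_once(problem, rng)
        if cost < best_cost:
            best_solution, best_cost = solution, cost
    return best_solution, best_cost
```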

Slide 7: Where Randomness?
“My algorithm uses smart heuristics; they are deterministic, so there is no place for random decisions. …So how can the algorithm be randomized? …Probably it’s not possible.”

Slide 8: “Deterministic” Reality
“Deterministic” algorithms are sensitive to variable ordering: the results differ for different variable orderings in the input file.
- Espresso: up to 20% quality difference (in literal count)
- Espresso-exact: up to 6% quality difference (in literal count)
- ABC, just “balance”: up to 10% quality difference (in AIG node count)
- ABC, “choice” script followed by “map”: up to 67% quality difference (in gate count), 11% on average
- CUDD BDD package: up to an exponential difference in node count
The default variable ordering is lexicographical.

Slide 9: “Deterministic” Reality Concluded
Facts: many “deterministic” algorithms are heavily influenced by the variable ordering in the source file, and the source file variable ordering is random from the point of view of synthesis → synthesis may produce very bad results, without any apparent reason.
Possible solutions: design better heuristics, or accept randomness – note that the source file is random itself!

Slide 10: Why Randomness Not!
Synthesis may produce unpredictable results → the results of repeated synthesis may differ significantly → small changes in the source may induce big changes in the final design → bad for area & delay estimation, bad for debugging, bad for anything…
BUT: what if I swap two ports in the VHDL header? Should I expect a 70% quality difference?
Not random, actually, but pseudorandom: a “pseudo-randomized” algorithm produces equal results under a given seed – though with a fixed seed there is no chance of obtaining possibly better solutions.

Slide 11: Randomness – Concluded
+ Chance of obtaining different solutions → a quality / runtime trade-off.
+ Influence of lexicographical ordering diminished.
- Possibility of unexpected behavior… but this can happen to deterministic algorithms too (lexicographical ordering), and if the algorithm produces near-optimum solutions, no big differences are expected.
- Two synthesis runs produce different results… they do not, if the seed is fixed → the seed becomes a synthesis parameter.
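
A toy illustration of the seed-as-a-parameter point; the `synthesize` wrapper below is purely hypothetical, with a shuffle standing in for a tool's real random decisions:

```python
import random

def synthesize(netlist, seed=0):
    # The seed is exposed as a synthesis parameter: two runs with the
    # same seed produce identical results (pseudorandomness), while a
    # different seed explores a different sequence of random decisions.
    rng = random.Random(seed)
    result = list(netlist)
    rng.shuffle(result)  # stand-in for the tool's real random decisions
    return result

# Reproducible under a fixed seed:
assert synthesize("abcd", seed=42) == synthesize("abcd", seed=42)
```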

Slide 12: Example 1 – BOOM
A two-level (SOP) minimizer. Algorithm (very simplified):
1. Generate an on-set cover – this step is randomized.
2. Put all the generated implicants into a common pool.
3. Solve the covering problem.
4. Go to 1 or stop.
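
A hedged sketch of this outer loop; `generate_cover` and `solve_covering` are placeholder callables for BOOM's actual heuristics (not its real API), and implicant count stands in for the real cost measure:

```python
import random

def boom(on_set, off_set, iterations, generate_cover, solve_covering, seed=None):
    """Very simplified BOOM outer loop, following the slide's four steps."""
    rng = random.Random(seed)
    pool, best = set(), None                          # common pool of implicants
    for _ in range(iterations):
        pool |= generate_cover(on_set, off_set, rng)  # 1. randomized cover
        solution = solve_covering(pool, on_set)       # 2.+3. pool, covering
        if best is None or len(solution) < len(best):
            best = solution
    return best                                       # 4. stop
```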

Slide 13: Example 1 – BOOM (figure: experimental results)

Slide 14: Derandomization
The random number generator is restricted to produce only a given number of distinct values → the randomness factor, RF.
RF = 1: only 1 value is produced (0).
RF = 2: only 2 values are produced (0, MAXINT).
RF = ∞: no restriction.
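
One possible reading of this restriction as a Python sketch; the slide fixes only the RF = 1 and RF = 2 cases, so the even spacing of the remaining values over [0, MAXINT] is an assumption:

```python
import random

MAXINT = 2**31 - 1

def restricted_rand(rf, rng=random):
    """Random generator limited to `rf` distinct values (randomness factor).

    rf = 1 always yields 0; rf = 2 yields 0 or MAXINT; as rf grows,
    the generator approaches an unrestricted one.
    """
    if rf == 1:
        return 0
    step = MAXINT // (rf - 1)
    return rng.randrange(rf) * step  # one of rf evenly spaced values
```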

Slides 15–17: BOOM – Derandomized (figures: experimental results for n = 20)

Slide 18: BOOM – Cover Generation
Generation of the cover, the basic idea: implicants are generated greedily, by reducing a tautological cube – adding literals one by one. Literals that appear most frequently in the on-set are preferred; if two or more literals have the same frequency, one is selected randomly → there are at most 2n choices (n variables, each in two polarities) → the random number generator needs to produce no more than 2n different values.
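
A minimal sketch of this cube-reducing heuristic; encoding minterms as frozensets of (variable, polarity) literals and assuming disjoint on- and off-sets are choices of the sketch, not of BOOM itself:

```python
import random
from collections import Counter

def generate_implicant(on_set, off_set, rng=random):
    """Reduce a tautological cube by adding literals until it is an implicant.

    A cube covers a minterm iff all its literals appear in the minterm.
    """
    cube = set()                             # tautological cube: no literals
    covered = list(on_set)                   # on-set minterms still covered
    while any(cube <= m for m in off_set):   # cube still intersects off-set
        freq = Counter(lit for m in covered for lit in m if lit not in cube)
        top = max(freq.values())
        ties = [lit for lit, f in freq.items() if f == top]
        cube.add(rng.choice(ties))           # random tie-break: <= 2n choices
        covered = [m for m in covered if cube <= m]
    return frozenset(cube)
```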

Slide 19: Example 2 – Resynthesis by Parts
Multi-level optimization. Algorithm (very simplified):
1. Pick one network node (the pivot) – this step is randomized.
2. Extract a window of all nodes whose distance from the pivot is at most a given value.
3. Resynthesize the window.
4. Put the window back.
5. Go to 1 or stop.
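
A sketch of this loop; the adjacency-map model of the network (node → fanin and fanout neighbors) and the injected `resynthesize` callable are assumptions made here:

```python
import random
from collections import deque

def extract_window(adjacency, pivot, max_dist):
    """BFS from the pivot: all nodes within distance `max_dist`."""
    window, frontier = {pivot}, deque([(pivot, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_dist:
            continue
        for nxt in adjacency[node]:
            if nxt not in window:
                window.add(nxt)
                frontier.append((nxt, dist + 1))
    return window

def resynthesize_by_parts(adjacency, resynthesize, max_dist, iterations, seed=None):
    rng = random.Random(seed)
    for _ in range(iterations):
        pivot = rng.choice(list(adjacency))                  # 1. random pivot
        window = extract_window(adjacency, pivot, max_dist)  # 2. window
        resynthesize(window)                                 # 3.+4. resynthesize, put back
```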

Slide 20: Example 2 – Resynthesis by Parts (figure: results for the e64 benchmark)

Slide 21: Resynthesis by Parts – Derandomized (figure)

Slide 22: Example 3 – FC-Min
A two-level (SOP) minimizer for multi-output functions. Algorithm (very simplified):
1. Generate an on-set cover: group implicants are generated directly, by solving a rectangle cover problem on the PLA output matrix (a greedy heuristic).
2. Put all the generated implicants into a common pool.
3. Solve the covering problem.
4. Go to 1 or stop.

Slide 23: Example 3 – FC-Min
Rectangle cover solving algorithm:
1. Select a row having the maximum number of 1’s.
2. Append a row that does not decrease the number of covered 1’s.
3. IF (random() ≤ DF) go to 2.
The more rows the rectangle has, the more on-set 1’s are covered, but the less likely it is to form a valid implicant → probabilistic implicant generation.
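
A sketch of this probabilistic growth; modeling each PLA output-matrix row as the set of its 1-columns, and greedily keeping the widest column intersection, are assumptions beyond what the slide states:

```python
import random

def grow_rectangle(rows, df, rng=random):
    """Grow a rectangle probabilistically: continue with probability df."""
    rect = [max(rows, key=len)]        # 1. row with the maximum number of 1's
    cols = set(rect[0])                # columns shared by all rows so far
    while rng.random() <= df:          # 3. go back to 2 with probability df
        best = None
        for row in rows:
            if row in rect:
                continue
            new_cols = cols & set(row)
            # 2. append only rows not decreasing the covered 1's
            # (covered 1's of the rectangle = #rows * #columns)
            if (len(rect) + 1) * len(new_cols) >= len(rect) * len(cols):
                if best is None or len(new_cols) > len(best[1]):
                    best = (row, new_cols)
        if best is None:
            break                      # no admissible row left
        rect.append(best[0])
        cols = best[1]
    return rect, cols
```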

Slides 24–25: Derandomized FC-Min (figures: experimental results)

Slide 26: Conclusions
Randomness:
- allows for obtaining different solutions (repeated runs produce different results),
- allows for a cost / time trade-off (iterative runs),
- can be introduced into most algorithms (the lexicographical ordering aspect),
- is no obstacle to reproducibility (pseudo-randomness).
The necessary measure of randomness:
- can be derived analytically, by analyzing the possible numbers of choices,
- some algorithms are “random enough” for RF = 2,
- some algorithms need a very high level of randomness (probabilistic algorithms, SA, …).