Slide 1: How Much Randomness Makes a Tool Randomized?
Petr Fišer, Jan Schmidt
Faculty of Information Technology, Czech Technical University in Prague
fiserp@fit.cvut.cz, schmidt@fit.cvut.cz
Slide 2: Outline
- Logic synthesis algorithms: what are they, and of what nature?
- Randomization: yes or no? Pros and cons
- Some randomized algorithms: successful examples
- Necessary measure of randomness: how much randomness is needed?
Slide 3: Logic Synthesis Algorithms
…examples include:
- Two-level minimization (Espresso)
- Decomposition
- Resynthesis
- Resubstitution
- …
In fact, they are state space search (exploration) algorithms, usually of a local search nature.
Strategies - complete: B&B, best-first, BFS, DFS, …
Strategies - approximate: best-only, first-improvement, …
Sometimes they can be iterated.
Slide 4: Logic Synthesis Algorithms
Most of these algorithms are deterministic:
- Deterministic heuristics, usually of a "best-only" or "first-improvement" nature
- No random decisions: two runs of the algorithm produce equal results
- There is no chance of obtaining different results
Some rare exceptions: simulated annealing, genetic algorithms, genetic programming.
Slide 5: Logic Synthesis Algorithms
State space search strategies (a minimal sketch of the first and third follows this list):
- Deterministic heuristic approach: when there are two (or more) equally valued choices, take the first one. Is it always a good choice?
- Exhaustive (complete) approach: when there are two (or more) equally valued choices, try all of them. Computationally infeasible.
- Randomized approach: when there are two (or more) equally valued choices, take one randomly. All choices have equal chances, while exponential explosion is avoided.
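A minimal sketch of the first and third policies (pick_first_best, pick_random_best, and the score function are illustrative names, not from the talk):

```python
import random

def pick_first_best(candidates, score):
    """Deterministic heuristic: among equally valued choices, take the
    first one that attains the maximum score."""
    best = max(score(c) for c in candidates)
    return next(c for c in candidates if score(c) == best)

def pick_random_best(candidates, score, rng=random):
    """Randomized approach: choose uniformly among all choices that
    attain the maximum score; every tied choice has an equal chance."""
    best = max(score(c) for c in candidates)
    return rng.choice([c for c in candidates if score(c) == best])
```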
Slide 6: Why Randomness?
Randomized move selection: all choices have equal chances, while exponential explosion is avoided.
May be used in an iterative way (see the sketch after this slide):
- Run the randomized algorithm several times and take the best solution
- Solutions from different iterations may be combined
Randomized local search is a compromise between best-only (first-improvement) and exhaustive search: solution quality may be improved at the cost of runtime, a quality/runtime trade-off.
Anything new? Not really; see SA, GA, GP, … But those are generic algorithms adapted to logic synthesis. Does this call for dedicated randomized logic synthesis algorithms?
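A sketch of this iterative use, with the single-run synthesizer and the cost function passed in as hypothetical parameters:

```python
def iterated_synthesis(circuit, iterations, synthesize_once, cost):
    """Run the randomized algorithm several times and keep the best
    result: the quality/runtime trade-off in its simplest form."""
    best = synthesize_once(circuit)
    for _ in range(iterations - 1):
        candidate = synthesize_once(circuit)
        if cost(candidate) < cost(best):
            best = candidate
    return best
```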
Slide 7: Where Randomness?
"My algorithm uses smart heuristics; they are deterministic, so there is no place for random decisions… so how can the algorithm be randomized? …probably it's not possible."
Slide 8: "Deterministic" Reality
"Deterministic" algorithms are sensitive to variable ordering: the results differ for different variable orderings in the input file.
- Espresso: up to 20% quality difference (in literal count)
- Espresso-exact: up to 6% quality difference (in literal count)
- ABC, just "balance": up to 10% quality difference (in AIG node count)
- ABC, "choice" script followed by "map": up to 67% quality difference (in gate count), 11% on average
- CUDD BDD package: up to exponential difference in node count; the default variable ordering is lexicographical
Slide 9: "Deterministic" Reality
Facts:
- Many "deterministic" algorithms are heavily influenced by the variable ordering in the source file.
- From the point of view of synthesis, the source file variable ordering is random.
- Synthesis may produce very bad results, without the designer knowing the reason.
Possible solutions:
1. Design better heuristics
2. Accept randomness (note that the source file ordering is random itself!)
Slide 10: Why Not Randomness?
Synthesis may produce unpredictable results:
- The results of repeated synthesis runs may differ significantly
- Small changes in the source may induce big changes in the final design
- Bad for area & delay estimation, bad for debugging, bad for anything…
BUT:
1. What if I swap two ports in the VHDL header? Should I expect a 70% quality difference?
2. Not random, actually, but pseudorandom: the "pseudo-randomized" algorithm produces equal results under a given seed
3. With a deterministic algorithm, there is no chance of obtaining possibly better solutions
Slide 11: Randomness, Concluded
Pros:
+ Chance of obtaining different solutions: a quality/runtime trade-off
+ Influence of lexicographical ordering diminished
Cons:
- Possibility of unexpected behavior; but this can happen to deterministic algorithms too (lexicographical ordering), and if the algorithm produces near-optimum solutions, no big differences are expected
- Two synthesis runs produce different results; they do not if the seed is fixed: seed as a synthesis parameter (see the sketch below)
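What "seed as a synthesis parameter" can look like in practice; a minimal sketch with a stand-in tool interface (synthesize here is hypothetical, not a real tool's API):

```python
import random

def synthesize(circuit, seed=0):
    rng = random.Random(seed)   # the seed is exposed as a tool parameter
    # ... all randomized decisions draw from rng only ...
    return rng.random()         # stand-in for the synthesized result

# Same seed: reproducible results. Different seeds: different solutions.
assert synthesize("c17", seed=42) == synthesize("c17", seed=42)
assert synthesize("c17", seed=1) != synthesize("c17", seed=2)
```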
Slide 12: Example 1: BOOM
Two-level (SOP) minimizer. Algorithm (very simplified):
1. Generate an on-set cover
2. Put all the generated implicants into a common pool
3. Solve the covering problem
4. Go to 1 or stop
Step 1 (cover generation) is the randomized part; a sketch follows.
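A hedged sketch of this loop, with the cover generator and covering-problem solver passed in as placeholders (the names are illustrative, not BOOM's actual API):

```python
def boom(on_set, off_set, iterations, generate_cover, solve_covering, rng):
    """Very simplified BOOM skeleton; generate_cover is the randomized
    step (it breaks ties in literal selection via rng)."""
    pool = set()                          # common pool of implicants
    solution = None
    for _ in range(iterations):           # step 4: go to 1 or stop
        cover = generate_cover(on_set, off_set, rng)      # step 1
        pool.update(cover)                                # step 2
        solution = solve_covering(on_set, pool)           # step 3
    return solution
```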
Slide 13: Example 1: BOOM (results chart)
Slide 14: Derandomization
The random number generator is restricted to produce only a given number of distinct values, the randomness factor (RF):
- RF = 1: only 1 value is produced (0)
- RF = 2: only 2 values are produced (0, MAXINT)
- RF = infinity: no restriction
A sketch of one possible such restriction follows.
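One possible implementation of the restriction, assuming the RF distinct values are spread evenly over [0, MAXINT] (the talk fixes only the RF = 1 and RF = 2 cases):

```python
import math
import random

MAXINT = 2**31 - 1

def restricted_random(rf, rng=random):
    """Generator restricted to rf distinct values (one possible reading):
    rf = 1 -> always 0; rf = 2 -> 0 or MAXINT; rf = inf -> unrestricted."""
    if math.isinf(rf):
        return rng.randint(0, MAXINT)
    if rf == 1:
        return 0
    # Quantize onto rf evenly spaced levels 0, MAXINT/(rf-1), ..., MAXINT.
    return rng.randrange(rf) * MAXINT // (rf - 1)
```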
Slides 15-17: BOOM, Derandomized (results charts, n = 20)
Slide 18: BOOM, Cover Generation
Generation of the cover, the basic idea:
- Implicants are generated greedily, by reducing a tautological cube, i.e., by adding literals
- Literals that appear most frequently in the on-set are preferred
- If two or more literals have the same frequency, select one randomly
- There are at most 2n choices (two literals per variable), so the random number generator needs to produce no more than 2n different values
A sketch of this selection step follows.
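A sketch of the selection step under a simplified data representation (minterms as sets of literals; the names are illustrative):

```python
def select_literal(on_set_minterms, cube_literals, rng):
    """One greedy step of cover generation: count how often each literal
    not yet in the cube occurs in the on-set, prefer the most frequent,
    and break ties randomly. With n variables there are at most 2n
    candidate literals, so no more than 2n distinct random values are
    ever needed."""
    freq = {}
    for minterm in on_set_minterms:       # each minterm: a set of literals
        for lit in minterm:
            if lit not in cube_literals:
                freq[lit] = freq.get(lit, 0) + 1
    top = max(freq.values())
    ties = sorted(lit for lit, f in freq.items() if f == top)
    return ties[rng.randrange(len(ties))]
```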
Slide 19: Example 2: Resynthesis by Parts
Multi-level optimization. Algorithm (very simplified):
1. Pick one network node (the pivot)
2. Extract a window of all nodes within a given distance from the pivot
3. Resynthesize the window
4. Put the window back
5. Go to 1 or stop
Step 1 (pivot selection) is the randomized part; a sketch of the window extraction (step 2) follows.
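A sketch of the window extraction in step 2, assuming a neighbors function that returns the nodes adjacent to a given node (the names are illustrative):

```python
from collections import deque

def extract_window(pivot, neighbors, max_dist):
    """Step 2 of the resynthesis loop: breadth-first search from the
    pivot, collecting every node whose distance is at most max_dist."""
    window = {pivot}
    frontier = deque([(pivot, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_dist:
            continue
        for nxt in neighbors(node):
            if nxt not in window:
                window.add(nxt)
                frontier.append((nxt, dist + 1))
    return window
```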
Slide 20: Example 2: Resynthesis by Parts (results chart, benchmark e64)
Slide 21: Resynthesis by Parts, Derandomized (results chart)
Slide 22: Example 3: FC-Min
Two-level (SOP) minimizer for multi-output functions. Algorithm (very simplified):
1. Generate an on-set cover: group implicants are generated directly, by solving a rectangle cover problem for the PLA output matrix with a greedy heuristic
2. Put all the generated implicants into a common pool
3. Solve the covering problem
4. Go to 1 or stop
Slide 23: Example 3: FC-Min
Rectangle cover solving algorithm:
1. Select a row having the maximum number of 1's
2. Append a row that does not decrease the number of covered 1's
3. IF (random() < DF) go to 2
The more rows the rectangle has, the more on-set 1's are covered, and the less likely it is to be a valid implicant: probabilistic implicant generation. A sketch follows.
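A sketch of this growth loop, assuming random() returns a value in [0, 1) compared against a threshold DF, a dense 0/1 output matrix, and a first-admissible-row rule standing in for FC-Min's actual greedy row choice:

```python
def grow_rectangle(matrix, df, rng):
    """Probabilistic rectangle growth on a 0/1 output matrix. A rectangle
    is a set of rows plus the columns in which all those rows have a 1;
    it covers len(rows) * len(cols) ones. Growth continues only with
    probability df, so larger (riskier) rectangles are less likely."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    # 1. Select a row having the maximum number of 1's.
    first = max(range(n_rows), key=lambda r: sum(matrix[r]))
    rows = {first}
    cols = {c for c in range(n_cols) if matrix[first][c]}
    while True:
        covered = len(rows) * len(cols)
        # 2. Append a row that does not decrease the number of covered 1's.
        candidate = None
        for r in range(n_rows):
            if r in rows:
                continue
            new_cols = {c for c in cols if matrix[r][c]}
            if (len(rows) + 1) * len(new_cols) >= covered:
                candidate = (r, new_cols)
                break
        if candidate is None:
            break                          # no admissible row remains
        rows.add(candidate[0])
        cols = candidate[1]
        # 3. IF (random() < DF) go to 2, otherwise stop.
        if rng.random() >= df:
            break
    return rows, cols
```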
Slides 24-25: Derandomized FC-Min (results charts)
Slide 26: Conclusions
Randomness:
- Allows for obtaining different solutions: repeated runs produce different results
- Allows for a cost/time trade-off: iterative runs
- Can be introduced into most algorithms: the lexicographical-ordering aspect
- Reproducibility is not a problem: pseudo-randomness
The necessary measure of randomness:
- Can be analytically derived, by analyzing the possible numbers of choices
- Some algorithms are "random enough" already for RF = 2
- Some algorithms need a very high level of randomness: probabilistic algorithms, SA, …