1
Multi-Threaded Collision-Aware Global Routing with Bounded-Length Maze Routing
© KLMH Lienig
2
Contributions
- Optimal bounded-length maze routing (BLMR)
- Heuristic bounded-length maze routing
- A parallel, multi-threaded collision-aware strategy for multi-core platforms
3
Bounded Length vs. Bounded Box

Bounded-length algorithm (start with the Manhattan distance between the pins):

    BL = Manh(s, t);
    do {
        viol = Route(net, BL);
        if (viol)
            increase BL;
    } while (viol);

Bounded-box algorithm (start with the minimum bounding box, MBB):

    BB = MBB(net);
    do {
        viol = Route(net, BB);
        if (viol)
            expand BB;
    } while (viol);
4
Maze Routing: Dijkstra's Algorithm (G, s, t)

    cost[s] = 0; cost[u] = infinity for every other node u
    parent[u] = NULL for all nodes             // parents are initially undefined
    Q = Fibonacci heap, initially empty        // may hold multiple copies of a node
    curr_node = s
    while (curr_node != t) {
        for (each neighbour u of curr_node) {
            if (cost[curr_node] + w(curr_node, u) < cost[u]) {
                parent[u] = curr_node
                cost[u] = cost[curr_node] + w(curr_node, u)
                insert_to_Q(u)
            }
        }
        curr_node = Extract_min(Q)             // extracts the minimum-cost node from Q
    }

Complexity: O(|E| + |V| log |V|)
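A minimal C++ sketch of this search on a grid graph, assuming an adjacency-list representation; std::priority_queue with lazy deletion stands in for the Fibonacci heap (hence the "multiple copies of a node" noted above). The Edge struct and all names are illustrative, not from the paper.

```cpp
// Minimal sketch: Dijkstra-based maze routing with lazy deletion.
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; double cost; };

// Returns the parent array encoding the cheapest-path tree from `source`;
// the route is recovered by walking parent[] back from `target`.
std::vector<int> dijkstra(const std::vector<std::vector<Edge>>& adj,
                          int source, int target) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> cost(adj.size(), INF);
    std::vector<int> parent(adj.size(), -1);
    // Min-heap of (cost, node); a node may be pushed several times, and
    // stale copies are skipped when popped (lazy deletion).
    using Entry = std::pair<double, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

    cost[source] = 0.0;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [c, u] = pq.top(); pq.pop();
        if (c > cost[u]) continue;     // stale copy
        if (u == target) break;        // target settled: done
        for (const Edge& e : adj[u]) {
            if (cost[u] + e.cost < cost[e.to]) {
                cost[e.to] = cost[u] + e.cost;
                parent[e.to] = u;
                pq.push({cost[e.to], e.to});
            }
        }
    }
    return parent;
}
```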
5
Proposed Router Flow

1. MST decomposition
2. Initial routing of each net to build the congestion graph G
3. Rip-up & reroute loop:

       do {
           viol = NCRRoute(net, BL);
           if (viol)
               BL = Relax(BL);
       } while (viol);

4. Post refinement
5. Layer assignment

NCRRoute: negotiation-based congestion rip-up & reroute
6
Optimal Bounded-Length Maze Routing (BLMR)

Idea: discard a partial path P_i(s, v) if wl(P_i) + Manh(v, t) > BL.

Comparison to traditional maze routing:
1. Prunes all paths that would violate BL, making the search space smaller.
2. Keeps more than one partial path per node, unlike traditional routing.
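A minimal sketch of this pruning test as it might be applied during wave propagation; Point, Label, and the helper names are assumptions for illustration, not the paper's code.

```cpp
// Sketch of the optimal-BLMR pruning test.
#include <cstdlib>

struct Point { int x, y; };

struct Label {          // a partial path P_i(s, v)
    double cost;        // accumulated routing cost of P_i
    int    wirelength;  // wl(P_i), in grid edges
};

int manhattan(Point a, Point b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// Keep the partial path only if it can still reach t within the length bound:
// wl(P_i) + Manh(v, t) <= BL.  Otherwise it is discarded during wave propagation.
bool withinBound(const Label& p, Point v, Point t, int boundedLength) {
    return p.wirelength + manhattan(v, t) <= boundedLength;
}
```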
7
Optimal BLMR (cont'd)

What happens if we keep only the path with the lower cost?

Example (from the figure): cost(P_1) = 80, cost(P_2) = 90; wl(P_1) = 11, wl(P_2) = 5; BL = 16.
If we keep only P_1 (the lower-cost path), it does not have enough slack left to detour around the congested region near t. Thus, both P_1 and P_2 must be kept.

However, if cost(P_i) < cost(P_j) and wl(P_i) < wl(P_j), then P_j is inferior to P_i and can be discarded.
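A sketch of how the label set at a node could be maintained under this dominance rule; PathLabel, insertLabel, and the use of non-strict comparisons are assumptions for illustration.

```cpp
// Sketch of the dominance rule for the labels kept at a node v:
// P_j is inferior, and can be dropped, when another label is no worse
// in both cost and wirelength (the slide states it with strict inequalities).
#include <algorithm>
#include <vector>

struct PathLabel {
    double cost;        // cost(P)
    int    wirelength;  // wl(P)
};

bool dominates(const PathLabel& pi, const PathLabel& pj) {
    return pi.cost <= pj.cost && pi.wirelength <= pj.wirelength;
}

// Insert a new label at v, keeping only non-dominated (cost, wirelength) pairs;
// in the slide's example both P1 (cost 80, wl 11) and P2 (cost 90, wl 5) survive.
void insertLabel(std::vector<PathLabel>& labelsAtV, const PathLabel& p) {
    for (const PathLabel& q : labelsAtV)
        if (dominates(q, p)) return;   // p is inferior: discard it
    labelsAtV.erase(std::remove_if(labelsAtV.begin(), labelsAtV.end(),
                                   [&](const PathLabel& q) { return dominates(p, q); }),
                    labelsAtV.end());
    labelsAtV.push_back(p);
}
```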
8
Heuristic BLMR

Problem with optimal BLMR: any number of paths may meet the pruning criteria, so the search is slower.

Solution: use a heuristic to select a single path per node.
- Examine each candidate path for the required wirelength slack.
- Select the lowest-cost path with enough slack.
- If no candidate path has enough slack, select the shortest path.
9
Heuristic BLMR (cont'd)

Heuristic estimate:      ew_k(v, t) = Manh(v, t) × ( L_{k-1}(s, t) / Manh(s, t) )    (1)
Acceptance condition:    wl(P_i) + ew_k(v, t) ≤ BL                                   (2)

where:
- ew_k(v, t): estimated wirelength from v to t in the k-th iteration
- L_{k-1}(s, t): actual routed wirelength from s to t in the (k-1)-th iteration
- P_i(s, v): partial path from s to v
- wl(P_i): wirelength of path P_i
- Manh(v, t), Manh(s, t): Manhattan distances from v to t and from s to t

The heuristic improves with each iteration:
1. If the wirelength from v to t is over-estimated in iteration k, the path may be discarded by condition (2); the estimate then improves in iteration k+1.
2. If it is under-estimated in iteration k, the actual routed wirelength L_k corrects it in the next iteration.
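A sketch of the estimate in equation (1) and the slack test in equation (2); the Point type, helper names, and parameters are assumptions for illustration.

```cpp
// Sketch of the heuristic wirelength estimate used to pick a single path.
#include <cstdlib>

struct Point { int x, y; };

int manhattan(Point a, Point b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// ew_k(v,t) = Manh(v,t) * ( L_{k-1}(s,t) / Manh(s,t) ):
// the detour ratio observed in iteration k-1 scales the Manhattan lower bound.
double estimatedWirelength(Point v, Point t, Point s, int routedLenPrevIter) {
    double detourRatio = static_cast<double>(routedLenPrevIter) / manhattan(s, t);
    return manhattan(v, t) * detourRatio;
}

// Accept v as an expansion only if the partial path still fits the bound:
// wl(P_i) + ew_k(v,t) <= BL.
bool hasEnoughSlack(int wlPartial, Point v, Point t, Point s,
                    int routedLenPrevIter, int boundedLength) {
    return wlPartial + estimatedWirelength(v, t, s, routedLenPrevIter)
           <= boundedLength;
}
```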
10
Bounded-Length Relaxation

With each iteration of rip-up & reroute:
1. Overflow decreases.
2. Wirelength increases.

For the nets to be routed in the next iteration, BL is relaxed:

    BL_nk = Manh(s_n, t_n) × ( arctan(k − α) + β )

α and β are user-defined (this paper uses α = 9, β = 2.5).
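A one-function sketch of this relaxation schedule; the function name and signature are illustrative.

```cpp
// Sketch of BL_nk = Manh(s_n, t_n) * (arctan(k - alpha) + beta),
// with the paper's alpha = 9, beta = 2.5.
#include <cmath>

double relaxedBound(int manhattanDist, int iteration,
                    double alpha = 9.0, double beta = 2.5) {
    // arctan saturates, so BL grows across early iterations and then
    // flattens out near Manh * (pi/2 + beta) for large k.
    return manhattanDist * (std::atan(iteration - alpha) + beta);
}
```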
11
Task-Based Concurrency

Parallelism: rip-up & reroute still takes 99.6% of the total routing time on one of the difficult ISPD 2007 benchmarks.

Task-based vs. partition-based concurrency: with partitioning, load might not be shared evenly between the threads, because different parts of the grid graph have different congestion.
12
Partition-Based vs. Task-Based Concurrency: Challenges

Task-based concurrency:
- Each entry in the task queue is a two-pin routing task.
- All threads pull tasks out of the queue (see the sketch below).
- Issue: the same routing resource can be used by two threads that are unaware of each other.

Partition-based concurrency:
- No common routing resources, since each thread's search is restricted to its partition.
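A sketch of the task-based scheme: a mutex-protected queue of two-pin routing tasks and worker threads that each pull the next task when idle. RoutingTask, TaskQueue, routeTask, and the thread setup are assumptions for illustration, not the paper's implementation.

```cpp
// Sketch of task-based concurrency for rip-up & reroute.
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

struct RoutingTask { int netId; int pinA; int pinB; };  // one two-pin routing task

class TaskQueue {
public:
    void push(RoutingTask t) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(t);
    }
    std::optional<RoutingTask> pop() {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        RoutingTask t = q_.front();
        q_.pop();
        return t;
    }
private:
    std::mutex m_;
    std::queue<RoutingTask> q_;
};

void routeTask(const RoutingTask&) { /* collision-aware BLMR would go here */ }

void runWorkers(TaskQueue& tq, unsigned numThreads) {
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < numThreads; ++i)
        workers.emplace_back([&tq] {
            // Pulling whole tasks (rather than grid partitions) keeps the
            // threads evenly loaded even when congestion is uneven.
            while (auto task = tq.pop())
                routeTask(*task);
        });
    for (auto& w : workers) w.join();
}
```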
13
Challenges (cont'd)
14
Maze Routing & Collisions

Maze routing happens in two phases:
1. Wave propagation: explore every possible move.
2. Back-tracing: identify the new routing path based on the paths explored.

When does it become clear that a collision has occurred?
- Not during wave propagation.
- Not during back-tracing.
- Only after back-tracing, when both threads have already used the same resource.
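A minimal back-tracing sketch that matches the Dijkstra sketch earlier: it walks the assumed parent[] array from t back to s. Only after this step is the set of edges the net will actually occupy known, which is why a collision only becomes visible after back-tracing.

```cpp
// Sketch of back-tracing: recover the routed path from the parent[] array
// produced during wave propagation (names follow the earlier Dijkstra sketch).
#include <algorithm>
#include <vector>

std::vector<int> backTrace(const std::vector<int>& parent, int source, int target) {
    std::vector<int> path;
    for (int v = target; v != -1; v = parent[v]) {
        path.push_back(v);
        if (v == source) break;
    }
    std::reverse(path.begin(), path.end());   // path now runs from source to target
    return path;
}
```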
15
Collision-Aware Rip-up & Reroute

Observations:
1. Nets that are close to each other are the most likely candidates for collisions.
2. About 41% of overflow nets in rip-up & reroute are due to collisions.
3. An overflow net has only a few overflow edges.
4. When rerouted, it reuses most of its edges (about 80% of edges are reused).
16
Using the Observations in Collision-Aware RR

- Thread T2 marks the edges previously used by the net it is rerouting.
- Thread T1 sees the increased cost of the common edges during its own search.
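A sketch of how the edge marking could be realized with atomic counters: T2 bumps a mark on each edge its net previously used, and any other thread computing edge costs (T1) sees those edges as more expensive. GridEdge, the penalty weight, and the atomics are assumptions for illustration, not the paper's data structures.

```cpp
// Sketch of edge marking for collision-aware rip-up & reroute.
#include <atomic>
#include <vector>

struct GridEdge {
    std::atomic<int> marks{0};   // how many threads currently "own" this edge
    double baseCost = 1.0;
};

constexpr double kCollisionPenalty = 10.0;   // assumed penalty weight

void markEdges(const std::vector<GridEdge*>& oldPath) {
    for (GridEdge* e : oldPath) e->marks.fetch_add(1, std::memory_order_relaxed);
}

void unmarkEdges(const std::vector<GridEdge*>& oldPath) {
    for (GridEdge* e : oldPath) e->marks.fetch_sub(1, std::memory_order_relaxed);
}

// Cost seen by other threads during wave propagation: marked edges look more
// expensive, so a concurrently routed net tends to detour around them.
double edgeCost(const GridEdge& e) {
    return e.baseCost + kCollisionPenalty * e.marks.load(std::memory_order_relaxed);
}
```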
17
Algorithms

Collision-Aware Rip-up and Reroute
    Input: grid graph G
    TaskQueue TQ
    while (G has overflows)
        update(TQ)                                  // insert the overflow nets into TQ
        // the inner loop is executed in parallel by each thread
        while (TQ is not empty)
            N ← extract_a_task(TQ)
            BL_n^k ← relax_bounded_length(N)
            collision_aware_BLMR(G, N, BL_n^k)
        end while
    end while
end

Collision-Aware BLMR
    Input: grid graph G, net N, bounded length BL
    mark_grid_edge(path(N), G)                      // raise the cost of N's old edges for other threads
    ripup(path(N), G)
    collision_aware_wave_propagation(N, G, BL)
    newPath ← back_tracing(N, G)
    unmark_grid_edge(path(N), G)
    path(N) ← newPath
end
18
Evaluation
19
Evaluation (cont'd)
20
Summary

BLMR: bounded-length maze routing
- Optimal BLMR: prunes paths based on the slack left to reach the target.
- Heuristic BLMR: selects a single path per node based on a heuristic estimate.
- The heuristic gets better with each iteration of rip-up & reroute.

Task-based concurrency
- Better for load sharing than partition-based concurrency.
- Collisions may occur when the same resource is used by more than one thread.
- Collision-aware RR: avoids overflow due to such race conditions.
21
THANK YOU