
1 METRICS: A System Architecture for Design Process Optimization
Andrew B. Kahng and Stefanus Mantik*
UCSD CSE and ECE Depts., La Jolla, CA
*UCLA CS Dept., Los Angeles, CA

2 Motivations
- How do we improve design productivity?
- Does our design technology / capability yield better productivity than it did last year?
- How do we formally capture best known methods, and how do we identify them in the first place?
- Does our design environment support continuous improvement of the design process?
- Does our design environment support what-if / exploratory design? Does it have early predictors of success / failure?
Currently, there are no standards or infrastructure for measuring and recording the semiconductor design process.

3 Purpose of METRICS
- Standard infrastructure for the collection and storage of design process information
- Standard list of design metrics and process metrics
- Analyses and reports that are useful for design process optimization
METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve

4 Related Work
- OxSigen LLC (Siemens 97-99): enterprise- and project-level metrics ("normalized transistors")
- Numetrics Management Systems DPMS
- Other in-house data collection systems, e.g., TI (DAC 96 BOF)
- Web-based design support: IPSymphony, WELD, VELA, etc.
- E-commerce infrastructure: Toolwire, iAxess, etc.
- Continuous process improvement
- Data mining and visualization

5 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

6 Potential Data Collection / Diagnoses
- What happened within the tool as it ran? What was CPU/memory/solution quality? What were the key attributes of the instance? What iterations/branches were made, under what conditions?
- What else was occurring in the project? Spec revisions, constraint and netlist changes, ...
Example diagnosis: user performs the same operation repeatedly with nearly identical inputs → the tool is not acting as expected → solution quality is poor, and knobs are being twiddled

7 Benefits
Benefits for project management:
- accurate resource prediction at any point in the design cycle: up-front estimates for people, time, technology, EDA licenses, IP re-use, ...
- accurate project post-mortems: everything tracked (tools, flows, users, notes); no "loose", random data left at project end
- management console: web-based, status-at-a-glance view of tools, designs and systems at any point in the project
Benefits for tool R&D:
- feedback on tool usage and the parameters used
- improved benchmarking

8 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

9 METRICS System Architecture
[Architecture diagram: tools, instrumented via a transmitter wrapper or a transmitter API, send metrics as XML over the inter/intranet to a metrics data warehouse (DB); a web server with Java applets provides data mining and reporting on top of the warehouse.]

10 METRICS Performance
Transmitter:
- low CPU overhead: multi-threads / processes (non-blocking scheme); buffering (reduces the number of transmissions)
- small memory footprint: limited buffer size
Reporting:
- web-based: platform and location independent
- dynamic report generation: always up-to-date
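The slide does not show transmitter internals; the following is a minimal C++ sketch of the buffering / non-blocking idea it describes (a bounded queue drained by a background thread), with all names (BufferedTransmitter, sendMetric, transmitBatch) invented for illustration:

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <string>
    #include <thread>

    // Hypothetical sketch: the instrumented tool calls sendMetric(), which
    // only enqueues (non-blocking with respect to the network); a worker
    // thread batches queued records and ships them, so tool CPU overhead
    // stays low and the bounded buffer keeps the memory footprint small.
    class BufferedTransmitter {
    public:
      explicit BufferedTransmitter(std::size_t maxBuffered = 256)
          : maxBuffered_(maxBuffered), done_(false),
            worker_([this] { drain(); }) {}

      ~BufferedTransmitter() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
      }

      // Called from the tool: cheap enqueue; drops the oldest record when
      // full rather than blocking the tool (one possible policy).
      void sendMetric(const std::string& name, const std::string& value,
                      const std::string& type) {
        std::lock_guard<std::mutex> lk(m_);
        if (buf_.size() >= maxBuffered_) buf_.pop_front();
        buf_.push_back(name + " " + value + " " + type);
        cv_.notify_one();
      }

    private:
      void drain() {
        std::unique_lock<std::mutex> lk(m_);
        while (!done_ || !buf_.empty()) {
          cv_.wait(lk, [this] { return done_ || !buf_.empty(); });
          std::deque<std::string> batch;
          batch.swap(buf_);          // take the whole batch at once
          lk.unlock();
          transmitBatch(batch);      // one network round trip per batch
          lk.lock();
        }
      }

      void transmitBatch(const std::deque<std::string>& batch) {
        // Placeholder: a real implementation would POST these records as
        // XML to the METRICS web server.
        (void)batch;
      }

      std::mutex m_;
      std::condition_variable cv_;
      std::deque<std::string> buf_;
      std::size_t maxBuffered_;
      bool done_;
      std::thread worker_;
    };

Batching amortizes transmission cost, which is the "buffering - reduces the number of transmissions" point above.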

11 XML Example
Each metric record is a (name, value, type, timestamp) tuple, e.g.:
TOTAL_WIRELENGTH 14250347 INTEGER 010312:220512
TOTAL_CPU_TIME 2150.28 DOUBLE 010312:220514
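The XML element tags themselves were lost in this transcript; a plausible reconstruction of how such a record might be marked up (the tag names are assumptions, not the METRICS standard) is:

    <METRIC>
      <NAME>TOTAL_WIRELENGTH</NAME>
      <VALUE>14250347</VALUE>
      <TYPE>INTEGER</TYPE>
      <TIMESTAMP>010312:220512</TIMESTAMP>
    </METRIC>
    <METRIC>
      <NAME>TOTAL_CPU_TIME</NAME>
      <VALUE>2150.28</VALUE>
      <TYPE>DOUBLE</TYPE>
      <TIMESTAMP>010312:220514</TIMESTAMP>
    </METRIC>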

12 Transmitter Examples

Wrapper-based transmitter (Perl; the filehandle in the while condition was lost in this transcript and is restored here as a hypothetical LOGFILE):

    #!/usr/local/bin/perl -w
    $TOOL = $0;
    $PID = `initProject`;
    $FID = `initFlow -pid ${PID}`;
    $TID = `initToolRun -pid ${PID} -fid ${FID}`;
    system "sendMetrics TOOL_NAME ${TOOL} STRING";
    ...
    while( <LOGFILE> ) {
      ...
      system "sendMetrics ${NAME} ${VALUE} ${TYPE}";
      ...
    }
    system "terminateToolRun";
    system "terminateFlow -pid ${PID} -fid ${FID}";
    system "terminateProject -pid ${PID}";
    exit 0;

API-based transmitter (C++):

    #include "transmitter.h"
    int main(int argc, char* argv[]) {
      Transmitter MTR;
      MTR.initProject();
      MTR.initFlow();
      MTR.initToolRun();
      MTR.sendMetrics("TOOL_NAME", argv[0], "STRING");
      ...
      MTR.sendMetrics(Name, Value, Type);
      ...
      MTR.terminateToolRun();
      MTR.terminateFlow();
      MTR.terminateProject();
      exit(0);
    }

13 Example Reports
[Pie charts: % aborted per machine - hen 95%, bull 2%, donkey 2%, rat 1%; % aborted per task - ATPG 22%, synthesis 20%, physical 18%, postSyntTA 13%, BA 8%, placedTA 7%, funcSim 7%, LVS 5%.]
[Scatter plot with fitted model: CPU_TIME = 12 + 0.027 NUM_CELLS, correlation = 0.93]

14 METRICS Server
[Diagram: transmitter requests arrive through an Apache web server at transmitter servlets, which write metrics to Oracle 8i via SQL over JDBC; reporting servlets read the database via SQL over JDBC and return reports.]

15 Open Source Architecture
METRICS components are industry standards, e.g., Oracle 8i, Java servlets, XML, Apache web server, PERL/TCL scripts, etc.
Custom-generated code for wrappers and APIs is publicly available:
- collaboration in the development of wrappers and APIs
- porting to different operating systems
Code is available at: http://vlsicad.cs.ucla.edu/GSRC/METRICS

16 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

17 METRICS Standards
Standard metrics naming across tools:
- same name → same meaning, independent of tool supplier
- generic metrics and tool-specific metrics
- no more ad hoc, incomparable log files
Standard schema for the metrics database (a hypothetical sketch follows below)
Standard middleware for the database interface
For the complete current lists see: http://vlsicad.cs.ucla.edu/GSRC/METRICS
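The standard schema itself is not reproduced in this transcript; as an illustration only, a minimal relational schema consistent with the project / flow / tool-run / metric hierarchy used by the transmitters might look like the following (all table and column names are assumptions, not the METRICS standard):

    -- Hypothetical sketch of a metrics warehouse schema; the actual
    -- standard schema is documented at the URL above.
    CREATE TABLE project  (project_id   INTEGER PRIMARY KEY);
    CREATE TABLE flow     (flow_id      INTEGER PRIMARY KEY,
                           project_id   INTEGER REFERENCES project(project_id));
    CREATE TABLE tool_run (tool_run_id  INTEGER PRIMARY KEY,
                           flow_id      INTEGER REFERENCES flow(flow_id),
                           tool_name    VARCHAR2(64));
    CREATE TABLE metric   (tool_run_id  INTEGER REFERENCES tool_run(tool_run_id),
                           metric_name  VARCHAR2(64),   -- e.g., TOTAL_CPU_TIME
                           metric_value VARCHAR2(256),  -- stored as text, typed below
                           value_type   VARCHAR2(16),   -- INTEGER, DOUBLE, STRING, ...
                           time_stamp   VARCHAR2(16));  -- e.g., 010312:220514

Keying each metric to a tool run, each run to a flow, and each flow to a project is what lets the reports on later slides aggregate per tool, per flow, or per project.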

18 Generic and Specific Tool Metrics
[Table: partial list of metrics now being collected in Oracle8i]

19 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

20 Flow Metrics
Tool metrics alone are not enough:
- a design process consists of more than one tool
- a given tool can be run multiple times
- design quality depends on the design flow and methodology (the order of the tools and the iteration within the flow)
Flow definition (a code sketch of this formalism follows below):
- directed graph G(V, E)
- V = T ∪ {S, F}, where T = {T1, T2, T3, ..., Tn} is the set of tasks, S is the starting node, and F is the ending node
- E = {Es1, E11, E12, ..., Exy} is the set of edges; for an edge Exy: x < y → forward path, x = y → self-loop, x > y → backward path
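As an illustration of the edge taxonomy above, here is a minimal C++ sketch (names invented for illustration) that classifies the edges induced by a recorded task sequence:

    #include <iostream>
    #include <utility>
    #include <vector>

    // An edge Exy goes from task x to task y; per the definition above,
    // x < y is a forward path, x == y a self-loop, x > y a backward path.
    enum class EdgeKind { Forward, SelfLoop, Backward };

    EdgeKind classify(int x, int y) {
      if (x < y)  return EdgeKind::Forward;
      if (x == y) return EdgeKind::SelfLoop;
      return EdgeKind::Backward;
    }

    int main() {
      // Edges induced by the prefix T1,T2,T1,T2,T3,T3,T3,T4 of the task
      // sequence on the next slide (consecutive pairs of tool runs).
      std::vector<std::pair<int,int>> edges =
          {{1,2},{2,1},{1,2},{2,3},{3,3},{3,3},{3,4}};
      for (auto [x, y] : edges) {
        const char* kind[] = {"forward", "self-loop", "backward"};
        std::cout << "E" << x << y << ": "
                  << kind[static_cast<int>(classify(x, y))] << "\n";
      }
      return 0;
    }

Each consecutive pair of tool runs in the log becomes one edge, which is how the flow-tracking slide reconstructs the traversed path.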

21 Flow Example
[Flow graph: S → T1 → T2 → T3 → T4 → F, with one task marked optional and with self-loop and backward edges allowing repetition.]
Task sequence: T1, T2, T1, T2, T3, T3, T3, T4, T2, T1, T2, T4
[Unrolled traversal: S → T1 → T2 → T1 → T2 → T3 → T3 → T3 → T4 → T2 → T1 → T2 → T4 → F]

22 Flow Tracking
Task sequence: T1, T2, T1, T2, T3, T3, T3, T4, T2, T1, T2, T4
[Diagram: the recorded task sequence unrolled as the path S → T1 → T2 → T1 → T2 → T3 → T3 → T3 → T4 → T2 → T1 → T2 → T4 → F through the flow graph]

23 Optimization of Incremental Multilevel FM Partitioning
Motivation: incremental netlist partitioning
- netlist ECOs are made; we want top-down placement to remain similar to the previous result
- a good approach [CaldwellKM00]: "V-cycling"-based multilevel Fiduccia-Mattheyses
- what is the best tuning of the approach for a given instance? Break up the ECO perturbation into multiple smaller perturbations? How many starts of the partitioner? Within a specified CPU budget?

24 Optimization of Incremental Multilevel FM Partitioning
Given: an initial partitioning solution, a CPU budget, and instance perturbations (ΔI)
Find: the number of parts of incremental partitioning and the number of starts
- Ti = incremental multilevel FM partitioning
- self-loop → multistart
- n = number of breakups (ΔI = Δ1 + Δ2 + Δ3 + ... + Δn)
[Flow graph: S → T1 → T2 → T3 → ... → Tn → F, each task with a self-loop]

25 Multilevel FM Experiment Flow Setup

    foreach testcase
      foreach ΔI
        foreach CPU budget
          foreach breakup
            I_current = I_initial
            S_current = S_initial
            for i = 1 to n
              I_next = I_current + Δi
              run incremental multilevel FM partitioner
                on I_next to produce S_next
              if CPU_current > CPU_budget then break
              I_current = I_next
              S_current = S_next
    end

26 Flow Optimization Results
If (27401 < num edges ≤ 34826) and (143.09 < cpu time ≤ 165.28) and (perturbation delta ≤ 0.1)
then num_inc_parts = 4 and num_starts = 3
If (27401 < num edges ≤ 34826) and (85.27 < cpu time ≤ 143.09) and (perturbation delta ≤ 0.1)
then num_inc_parts = 2 and num_starts = 1
...
[Scatter plot: predicted vs. actual CPU time (secs)]

27 Identifying the Effect of Wire Load Model
A wire load model (WLM) is used for pre-layout estimation of wire delays
Three different WLMs:
- statistical WLM
- structural WLM
- custom WLM
Motivation:
- identify whether WLMs are useful for estimation
- identify whether WLMs are necessary for optimization
- identify the best role of WLMs

28 Wireload Model Flow
WLM flows for finding the appropriate role of the WLM:
- T1 = synthesis & technology mapping
- T2 = load wireload model (WLM)
- T3 = pre-placement optimization
- T4 = placement
- T5 = post-placement optimization
- T6 = global routing
- T7 = final routing
- T8 = custom WLM generation
[Flow graph over S, T1-T8, F]

29 WLM Experiment Setup

    foreach testcase
      foreach WLM (statistical, structural, custom, and no WLM)
        foreach flow variant
          run PKS flow
          if WLM = structural then generate custom WLM
    end

6 different flow variants

30 WLM Flow Results
[Chart: slack comparison for the 6 flow variants]
- Post-placement and pre-placement optimizations are important steps
- The choice of WLM depends on the design

31 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

32 Datamining Integration
[Diagram: a datamining interface (Java servlets) sits between the database and the datamining tool(s); DM requests arrive over the inter-/intranet, the interface pulls SQL tables from the database, feeds them to the datamining tool(s), and returns the results.]

33 Categories of Data for DataMining
(1) Design instances and design parameters: attributes and metrics of the design instances, e.g., number of gates, target clock frequency, number of metal layers, etc.
(2) CAD tools and invocation options: the list of tools and user options that are available, e.g., tool version, optimism level, timing-driven option, etc.
(3) Design solutions and result qualities: qualities of the solutions obtained from given tools and design instances, e.g., number of timing violations, total tool runtime, layout area, etc.

34 Possible Usage of DataMining
Using the three data categories from the previous slide - (1) design instances and design parameters, (2) CAD tools and invocation options, (3) design solutions and result qualities:
- Given (1) and (2), estimate the expected quality of (3), e.g., runtime predictions, wirelength estimations, etc.
- Given (1) and (3), find the appropriate setting of (2), e.g., the best value for a specific option, etc.
- Given (2) and (3), identify the subspace of (1) that is "doable" for the tool, e.g., the category of designs that are suitable for the given tools, etc.
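To make the three inference directions concrete, here is a hedged C++ sketch of what such datamining queries could look like as an API; every type and function name is invented for illustration and is not part of METRICS:

    #include <map>
    #include <string>
    #include <vector>

    // (1) design instance and parameters, (2) tool and invocation options,
    // (3) solution qualities -- all modeled as simple name -> value maps.
    using DesignParams  = std::map<std::string, double>;
    using ToolOptions   = std::map<std::string, std::string>;
    using QualityReport = std::map<std::string, double>;

    // Given (1) and (3), find (2): option settings expected to reach the
    // target quality. Declaration only; a real body would come from mined
    // rules like those on the results slides.
    ToolOptions suggestOptions(const DesignParams& d, const QualityReport& target);

    // Given (2) and (3), characterize (1): the subspace of designs for
    // which the tool, with these options, can reach the required quality.
    std::vector<DesignParams> feasibleDesigns(const ToolOptions& t,
                                              const QualityReport& required);

    // Given (1) and (2), estimate (3). The toy body reuses the fitted
    // report from slide 13: CPU_TIME = 12 + 0.027 * NUM_CELLS.
    QualityReport estimateQuality(const DesignParams& d, const ToolOptions&) {
      QualityReport q;
      q["CPU_TIME"] = 12 + 0.027 * d.at("NUM_CELLS");
      return q;
    }

    int main() {
      DesignParams d{{"NUM_CELLS", 10000.0}};
      QualityReport q = estimateQuality(d, {});
      return q.empty() ? 1 : 0;
    }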

35 Example Applications with DM
- Parameter sensitivity analysis: the input parameters that have the most impact on results
- Field-of-use analysis: limits at which the tool will break; tool sweet spots at which the tool will give the best results
- Process monitoring: identify possible failures in the process (e.g., timing constraints are too tight, row utilization is too high, etc.)
- Resource monitoring: analysis of resource demands (e.g., disk space, memory, etc.)

36 DM Results: QPlace CPU Time
If (num nets ≤ 7332)
then CPU time = 21.9 + 0.0019 num cells + 0.0005 num nets + 0.07 num pads - 0.0002 num fixed cells
If (num overlap layers = 0) and (num cells ≤ 71413) and (TD routing option = false)
then CPU time = -15.6 + 0.0888 num nets - 0.0559 num cells - 0.0015 num fixed cells - num routing layer
...
[Scatter plot: predicted vs. actual CPU time (secs)]

37 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

38 Testbed I: Metricized P&R Flow
[Flow diagram, all steps reporting to METRICS: LEF/DEF → Capo Placer → Placed DEF → QP ECO → Legal DEF → WRoute → Routed DEF → Congestion Analysis → Congestion Map → Incr. WRoute → Final DEF]

39 Testbed II: Metricized Cadence SLC Flow
[Flow diagram, all steps reporting to METRICS; inputs include LEF, DEF, GCF/TLF, and constraints: QP placement (Placed DEF), QP Opt (Optimized DEF), CTGen clock tree generation (Clocked DEF), WRoute and Incr. WRoute routing (Routed DEF), with Pearl timing analysis]

40 Testbed III: Metricized Cadence PKS Flow
[Flow diagram, all steps reporting to METRICS: BuildGates synthesis & tech mapping → pre-placement optimization → QP placement → post-placement optimization → GRoute global routing → WRoute final routing]

41 NELSIS Flow Manager Integration
[Screenshot: a design flow managed by NELSIS]

42 Outline
- Data collection process and potential benefits
- METRICS system architecture
- METRICS standards
- METRICS for design flow
- METRICS integration with datamining
- Current implementation
- Issues and conclusions

43 Current Status
- A complete prototype of the METRICS system is working at UCLA with Oracle8i, Java servlets and XML (other working prototypes are installed at Intel and Cadence)
- METRICS wrappers exist for the Cadence and Cadence-UCLA flows and for front-end tools (Ambit BuildGates and NCSim)
- Easiest proof of value: via use of regression suites
- The METRICS system is integrated with the Cubist datamining tool and the NELSIS flow manager
- A complete METRICS system can be installed on a laptop and configured to work behind firewalls

44 Issues and Ongoing Work
Issues for METRICS constituencies to solve:
- security: proprietary and confidential information
- standardization: flow, terminology, data management, etc.
- social: "big brother", collection of social metrics, etc.
Ongoing work with the EDA and designer communities to identify tool metrics of interest:
- users: metrics needed for design process insight and optimization
- vendors: implementation of the requested metrics, with standardized naming / semantics

45 http://vlsicad.cs.ucla.edu/GSRC/METRICS

