1
Design Cost Modeling and Data Collection Infrastructure
Andrew B. Kahng and Stefanus Mantik*
UCSD CSE and ECE Departments
(*) Cadence Design Systems, Inc.
http://vlsicad.ucsd.edu/
2
ITRS Design Cost Model
- Engineer cost/year increases 5% / year ($181,568 in 1990)
- EDA tool cost/year (per engineer) increases 3.9% / year
- Productivity due to 8 major Design Technology innovations: RTL methodology ... large-block reuse, IC implementation suite, intelligent testbench, electronic system-level methodology
- Matched up against SOC-LP PDA content:
  - SOC-LP PDA design cost = $20M in 2003
  - would have been $630M without EDA innovations
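As a quick sanity check of these growth assumptions (simple annual compounding from 1990 is assumed here): the engineer cost grows to roughly $181,568 × 1.05^13 ≈ $342K per engineer-year by 2003, the kind of per-head figure from which the SOC-LP PDA design cost totals above are built.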
3
SOC Design Cost
4
Outline
- Introduction and motivations
- METRICS system architecture
- Design quality metrics and tool quality metrics
- Applications of the METRICS system
- Issues and conclusions
5
Motivations
- How do we improve design productivity?
- Is our design technology / capability better than last year?
- How do we formally capture best known methods, and how do we identify them in the first place?
- Does our design environment support continuous improvement, exploratory what-if design, early predictions of success / failure, ...?
- Currently, no standards or infrastructure for measuring and recording the semiconductor design process
- Can benefit project management:
  - accurate resource prediction at any point in design cycle
  - accurate project post-mortems
- Can benefit tool R&D:
  - feedback on tool usage and parameters used
  - improved benchmarking
6
Fundamental Gaps
- Data to be measured is not available
  - data is only available through tool log files
  - metrics naming and semantics are not consistent among different tools
- We do not always know what data should be measured
  - some metrics are less obviously useful
  - other metrics are almost impossible to discern
7
Purpose of METRICS
- Standard infrastructure for the collection and the storage of design process information
- Standard list of design metrics and process metrics
- Analyses and reports that are useful for design process optimization
- METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve
8
Outline
- Introduction and motivations
- METRICS system architecture
  - components of the METRICS system
  - flow tracking
  - METRICS standard
- Design quality metrics and tool quality metrics
- Applications of the METRICS system
- Issues and conclusions
9
METRICS System Architecture [DAC'00]
[Diagram: EDA tools instrumented with wrapper- or API-based transmitters send XML-encoded metrics over the inter/intranet to a metrics data warehouse (DB); a web server with Java applets provides data mining and reporting on top of the database.]
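To make the transmission path concrete, a single transmitted metrics record could look roughly like the XML sketch below; the element and attribute names are illustrative assumptions, since the published METRICS schema is not reproduced on this slide.

    <metricsRecord project="P017" flow="F003" toolRun="T0425">
      <metric name="TOOL_NAME" type="STRING" value="qplace"/>
      <metric name="CPU_TIME"  type="FLOAT"  value="334.3"/>
      <metric name="NUM_CELLS" type="INT"    value="10492"/>
    </metricsRecord>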
10
Wrapper-based Transmitter Example

    #!/usr/local/bin/perl -w
    $TOOL = $0;
    $PID  = `initProject`;
    $FID  = `initFlow -pid ${PID}`;
    $TID  = `initToolRun -pid ${PID} -fid ${FID}`;
    system "sendMetrics TOOL_NAME ${TOOL} STRING";
    ...
    while ( <> ) {                # scan the tool's log/output stream line by line
        ...
        system "sendMetrics ${NAME} ${VALUE} ${TYPE}";
        ...
    }
    system "terminateToolRun";
    system "terminateFlow -pid ${PID} -fid ${FID}";
    system "terminateProject -pid ${PID}";
    exit 0;

[Diagram: stream parsers read the wrapped tool's stdout/stderr and log/output files; extracted metrics are XML-encoded, encrypted, and sent by the transmitter over the internet/intranet.]
11
API-based Transmitter Example

    #include "transmitter.h"

    int main(int argc, char* argv[]) {
        Transmitter MTR;
        MTR.initProject();
        MTR.initFlow();
        MTR.initToolRun();
        MTR.sendMetrics("TOOL_NAME", argv[0], "STRING");
        ...
        MTR.sendMetrics(Name, Value, Type);
        ...
        MTR.terminateToolRun();
        MTR.terminateFlow();
        MTR.terminateProject();
        return 0;
    }

[Diagram: the tool is linked against the API library; sendMetrics records are XML-encoded, encrypted, and sent by the transmitter over the internet/intranet.]
12
METRICS Server
[Diagram: the receiver (Apache + servlets) decrypts and XML-parses incoming metrics and stores them in the DB via Java Beans / JDBC; an input-form receiver servlet, reporting servlet, external interface, data translator, and dataminer sit on top of the same database, all reached over the internet/intranet.]
13
Example Reports
- % aborted per machine: nexus4 95%, nexus10 1%, nexus11 2%, nexus12 2%
- % aborted per task: ATPG 22%, synthesis 20%, physical 18%, postSyntTA 13%, BA 8%, placedTA 7%, funcSim 7%, LVS 5%
- Fitted runtime model: CPU_TIME = 12 + 0.027 NUM_CELLS (correlation = 0.93)
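As a worked example of the fitted runtime model above (units taken as they appear in the tool logs): a 100,000-cell block would be predicted at CPU_TIME ≈ 12 + 0.027 × 100,000 = 2,712, and the 0.93 correlation indicates the linear fit captures most of the observed variation.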
14
METRICS Performance
- Transmitter
  - low CPU overhead: multiple threads / processes (non-blocking scheme); buffering reduces the number of transmissions
  - small memory footprint: limited buffer size
- Reporting
  - web-based: platform- and location-independent
  - dynamic report generation: always up-to-date
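A minimal sketch of the non-blocking, bounded-buffer transmitter described above, assuming a simple thread-plus-queue design; the class name, drop-oldest buffer policy, and the stand-in for the network send are illustrative, not the actual METRICS transmitter.

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <mutex>
    #include <string>
    #include <thread>

    class BufferedTransmitter {
    public:
        explicit BufferedTransmitter(std::size_t maxBuffer = 256)
            : maxBuffer_(maxBuffer), done_(false),
              worker_(&BufferedTransmitter::drain, this) {}

        ~BufferedTransmitter() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
            worker_.join();                       // flushes remaining records
        }

        // Called from the tool's thread: never blocks the tool.  If the
        // buffer is full, the oldest record is dropped (bounded memory).
        void sendMetrics(const std::string& name, const std::string& value) {
            std::lock_guard<std::mutex> lk(m_);
            if (buffer_.size() >= maxBuffer_) buffer_.pop_front();
            buffer_.push_back(name + "=" + value);
            cv_.notify_one();
        }

    private:
        void drain() {                            // background thread: batched sends
            std::unique_lock<std::mutex> lk(m_);
            while (!done_ || !buffer_.empty()) {
                cv_.wait(lk, [this] { return done_ || !buffer_.empty(); });
                std::deque<std::string> batch;
                batch.swap(buffer_);              // take the whole batch at once
                lk.unlock();
                for (const auto& rec : batch)
                    std::cout << "send: " << rec << "\n";   // stand-in for network send
                lk.lock();
            }
        }

        std::size_t maxBuffer_;
        bool done_;
        std::deque<std::string> buffer_;
        std::mutex m_;
        std::condition_variable cv_;
        std::thread worker_;                      // declared last: starts after members exist
    };

    int main() {
        BufferedTransmitter mtr;
        mtr.sendMetrics("TOOL_NAME", "qplace");
        mtr.sendMetrics("CPU_TIME", "334.3");
        return 0;                                 // destructor drains the buffer
    }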
15
Flow Tracking
- Task sequence: T1, T2, T1, T2, T3, T3, T3, T4, T2, T1, T2, T4
[Diagram: the sequence recorded as a flow graph from start node S through T1 ... T4 to finish node F, with an edge for each observed task-to-task transition.]
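A minimal sketch of how a flow tracker might turn such a recorded task sequence into the flow graph on the slide; the data structures and console output are illustrative assumptions, not the actual METRICS flow-tracking code.

    // Build a flow graph (edge counts) from a recorded task sequence.
    // "S" and "F" are the synthetic start/finish nodes shown on the slide.
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<std::string> sequence = {
            "T1","T2","T1","T2","T3","T3","T3","T4","T2","T1","T2","T4"};

        std::map<std::pair<std::string, std::string>, int> edgeCount;
        std::string prev = "S";                    // start node
        for (const std::string& task : sequence) {
            ++edgeCount[{prev, task}];             // transition prev -> task
            prev = task;
        }
        ++edgeCount[{prev, "F"}];                  // finish node

        for (const auto& e : edgeCount)
            std::cout << e.first.first << " -> " << e.first.second
                      << "  (x" << e.second << ")\n";
        return 0;
    }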
16
Testbeds: Metricized P&R Flow
[Diagram: three metricized flows reporting into METRICS — a UCLA + Cadence flow, a Cadence PKS flow, and a Cadence SLC flow — whose steps include synthesis & tech map (Ambit PKS), placement (Capo Placer, QP, QP ECO, pre-/post-placement optimization), clock tree generation (CTGen), timing (Pearl), congestion analysis / congestion maps, and routing (GRoute, WRoute, incremental WRoute), exchanging data as LEF/DEF (placed, legal, clocked, routed, optimized, final DEF) plus GCF/TLF and constraints.]
17
METRICS Standards
- Standard metrics naming across tools
  - same name → same meaning, independent of tool supplier
  - generic metrics and tool-specific metrics
  - no more ad hoc, incomparable log files
- Standard schema for the metrics database
- Standard middleware for the database interface
18
Generic and Specific Tool Metrics
[Table: partial list of metrics now being collected in Oracle8i]
19
Open Source Architecture
- METRICS components are industry standards
  - e.g., Oracle 8i, Java servlets, XML, Apache web server, Perl/Tcl scripts, etc.
- Custom-generated code for wrappers and APIs is publicly available
  - collaboration in development of wrappers and APIs
  - porting to different operating systems
- Code is available at: http://www.gigascale.org/metrics
20
Outline
- Introduction and motivations
- METRICS system architecture
- Design quality metrics and tool quality metrics
- Applications of the METRICS system
- Issues and conclusions
21
Tool Quality Metric: Behavior in the Presence of Input Noise [ISQED02]
- Goal: tool predictability
  - ideal scenario: can predict final solution quality even before running the tool
  - requires understanding of tool behavior
- Heuristic nature of tools: predicting results is difficult
- Lower bound on prediction accuracy: inherent tool noise
- Input noise: "insignificant" variations in input data (sorting, scaling, naming, ...) that can nevertheless affect solution quality
- Goal: understand how tools behave in the presence of noise, and possibly exploit inherent tool noise
22
Monotone Behavior
- Monotonicity: monotone solutions w.r.t. inputs
[Plots: solution quality as a function of an input parameter, for monotone vs. non-monotone behavior]
23
Smooth Behavior
- Smoothness: "similar" solutions after perturbation
[Figure: neighborhoods of similar solutions in the solution space]
24
Monotonicity Studies
- OptimizationLevel: 1 (fast/worst) ... 10 (slow/best)

  Opt Level      1      2      3      4      5      6      7      8      9
  QP WL        2.50   0.97  -0.20  -0.11   1.43   0.58   1.29   0.64   1.70
  QP CPU      -59.7  -51.6  -40.4  -39.3  -31.5  -31.3  -17.3  -11.9  -6.73
  WR WL        2.95   1.52  -0.29   0.07   1.59   0.92   0.89   0.94   1.52
  Total CPU    4.19  -6.77  -16.2  -15.2  -7.23  -10.6  -6.99  -3.75  -0.51

- Note: OptimizationLevel is the tool's own knob for "effort"; it may or may not be well-conceived with respect to the underlying heuristics (bottom line is that the tool behavior is "non-monotone" from the user's viewpoint)
25
Noise Studies: Random Seeds
- 200 runs with different random seeds
- ½-percent spread in solution quality due to the random seed alone
[Histogram: distribution of % quality change across the 200 runs]
26
Noise: Random Ordering & Naming
- Data sorting: reordering has no effect
- Five naming perturbations (two of them sketched in code below):
  - random cell names without hierarchy (CR), e.g., AFDX|CTRL|AX239 → CELL00134
  - random net names without hierarchy (NR)
  - random cell names with hierarchy (CH), e.g., AFDX|CTRL|AX129 → ID012|ID79|ID216
  - random net names with hierarchy (NH)
  - random master cell names (MC), e.g., NAND3X4 → MCELL0123
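A minimal sketch of the CR (flat) and CH (hierarchical) renamings listed above, reproducing the renaming pattern of the slide's examples; the counter-based IDs stand in for whatever random index generator the actual experiments used.

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // CR: replace the whole hierarchical cell name with a flat new name,
    // e.g. AFDX|CTRL|AX239 becomes CELL0, CELL1, ... (the slide's CELL00134
    // uses zero-padded random indices instead of a counter).
    std::string renameFlat(const std::string& name, std::map<std::string, int>& ids) {
        auto it = ids.find(name);
        if (it == ids.end()) it = ids.emplace(name, (int)ids.size()).first;
        return "CELL" + std::to_string(it->second);
    }

    // CH: rename every hierarchy level separately, keeping the '|' separators,
    // e.g. AFDX|CTRL|AX129 becomes ID0|ID1|ID2 (cf. ID012|ID79|ID216 on the slide).
    std::string renameHierarchical(const std::string& name, std::map<std::string, int>& ids) {
        std::istringstream in(name);
        std::ostringstream out;
        std::string level;
        bool first = true;
        while (std::getline(in, level, '|')) {
            auto it = ids.find(level);
            if (it == ids.end()) it = ids.emplace(level, (int)ids.size()).first;
            out << (first ? "" : "|") << "ID" << it->second;
            first = false;
        }
        return out.str();
    }

    int main() {
        std::map<std::string, int> flatIds, hierIds;
        std::cout << renameFlat("AFDX|CTRL|AX239", flatIds) << "\n";          // CELL0
        std::cout << renameHierarchical("AFDX|CTRL|AX129", hierIds) << "\n";  // ID0|ID1|ID2
        return 0;
    }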
27
Noise: Random Naming (contd.)
- Wide range of variation (±3%)
- Hierarchy matters
[Histogram: number of runs vs. % quality loss for the naming perturbations]
28
Noise: Hierarchy
- Swap hierarchy prefixes, e.g., AA|BB|C03 → XX|YY|C03 and XX|YY|Z12 → AA|BB|Z12
[Histogram: number of runs vs. % quality loss]
29
Noise is not Additive
- Noise1 + Noise2 ≠ (Noise1 & Noise2)
[Table: per-run % quality changes for the CR and NR perturbations applied separately and in combination]
30
Outline
- Introduction and motivations
- METRICS system architecture
- Design quality and tool quality
- Applications of the METRICS system
- Issues and conclusions
31
Categories of Collected Data
- Design instances and design parameters
  - attributes and metrics of the design instances
  - e.g., number of gates, target clock frequency, number of metal layers, etc.
- CAD tools and invocation options
  - list of tools and user options that are available
  - e.g., tool version, optimization level, timing-driven option, etc.
- Design solutions and result qualities
  - qualities of the solutions obtained from given tools and design instances
  - e.g., number of timing violations, total tool runtime, layout area, etc.
32
Three Basic Application Types
- The collected data falls into the three categories above: design instances and design parameters (D), CAD tools and invocation options (T), and design solutions and result qualities (S)
- Given D and T, estimate the expected quality of S
  - e.g., runtime predictions, wirelength estimations, etc.
- Given D and S, find the appropriate setting of T
  - e.g., best value for a specific option, etc.
- Given T and S, identify the subspace of D that is "doable" for the tool
  - e.g., category of designs that are suitable for the given tools, etc.
33
Estimation of QP CPU and Wirelength
- Goal:
  - estimate QPlace runtime for CPU budgeting and block partitioning
  - estimate placement quality (total wirelength)
- Collect QPlace metrics from 2000+ regression logfiles
- Use data mining (Cubist 1.07) to classify and predict (see the sketch after this list), e.g.:
  - Rule 1: [101 cases, mean 334.3, range 64 to 3881, est err 276.3]
    if ROW_UTILIZATION <= 76.15
    then CPU_TIME = -249 + 6.7 ROW_UTILIZATION + 55 NUM_ROUTING_LAYER - 14 NUM_LAYER
  - Rule 2: [168 cases, mean 365.7, range 20 to 5352, est err 281.6]
    if NUM_ROUTING_LAYER <= 4
    then CPU_TIME = -1153 + 192 NUM_ROUTING_LAYER + 12.9 ROW_UTILIZATION - 49 NUM_LAYER
  - Rule 3: [161 cases, mean 795.8, range 126 to 1509, est err 1069.4]
    if NUM_ROUTING_LAYER > 4 and ROW_UTILIZATION > 76.15
    then CPU_TIME = -33 + 8.2 ROW_UTILIZATION + 55 NUM_ROUTING_LAYER - 14 NUM_LAYER
- Data mining limitation: sparseness of data
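A minimal sketch of how the quoted rules act as a CPU_TIME predictor; the only behavior assumed beyond the rule text itself is that Cubist averages the predictions of every rule whose conditions cover a case.

    #include <iostream>
    #include <vector>

    // Apply the three rules quoted on the slide and average the covering ones.
    double predictCpuTime(double rowUtil, double numRoutingLayer, double numLayer) {
        std::vector<double> preds;
        if (rowUtil <= 76.15)                                   // Rule 1
            preds.push_back(-249 + 6.7 * rowUtil + 55 * numRoutingLayer - 14 * numLayer);
        if (numRoutingLayer <= 4)                               // Rule 2
            preds.push_back(-1153 + 192 * numRoutingLayer + 12.9 * rowUtil - 49 * numLayer);
        if (numRoutingLayer > 4 && rowUtil > 76.15)             // Rule 3
            preds.push_back(-33 + 8.2 * rowUtil + 55 * numRoutingLayer - 14 * numLayer);

        double sum = 0;
        for (double p : preds) sum += p;
        return preds.empty() ? 0.0 : sum / preds.size();
    }

    int main() {
        // Example: 80% row utilization, 5 routing layers, 6 total layers;
        // only Rule 3 covers this case and predicts about 814 CPU units.
        std::cout << predictCpuTime(80.0, 5, 6) << "\n";
        return 0;
    }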
34
Cubist 1.07 Predictor for Total Wirelength
35
Optimization of Incremental Multilevel FM Partitioning
- Motivation: incremental netlist partitioning
- Scenario: design changes (netlist ECOs) are made, but we want the top-down placement result to remain similar to the previous result
[Diagram: multilevel partitioning V-cycle — clustering (coarsening) on one side, refinement (uncoarsening) on the other]
36
Multilevel Partitioning
[Diagram: the clustering and refinement phases of the multilevel partitioning V-cycle]
37
Multi-level Partitioning
- Multi-level FM (MLFM)
  - much better than "flat" FM; very good solutions in near-linear time
  - critical to performance of top-down placers
- Implementation choices in MLFM
  - V-cycling ("iterative ML") - Karypis et al., DAC'97
    - a method of using an initial solution
    - avoids clustering vertices that are in different sets
    - allows us to run ML to improve results of previous runs
  - top-level partitioning with small tolerance
    - first partition the top level with a lax tolerance
    - use the result as the initial solution for another FM run
    - decrease the tolerance to what it should be, run FM
  - new clustering methods
  - tuning (solution pool size, un/coarsening ratios, etc.)
- Locally optimized implementations look very different!!!
38
Optimization of Incremental Multilevel FM Partitioning
- Motivation: incremental netlist partitioning
  - scenario: design changes (netlist ECOs) are made, but we want the top-down placement result to remain similar to the previous result
- Good approach [CaldwellKM00]: "V-cycling" based multilevel Fiduccia-Mattheyses
- Our goal: what is the best tuning of the approach for a given instance?
  - break up the ECO perturbation into multiple smaller perturbations?
  - #starts of the partitioner?
  - within a specified CPU budget?
39
Optimization of Incremental Multilevel FM Partitioning (contd.)
- Given: initial partitioning solution, CPU budget, and instance perturbation (ΔI)
- Find: number of stages of incremental partitioning (i.e., how to break up ΔI) and number of starts
- Ti = incremental multilevel FM partitioning; self-loop = multistart; n = number of breakups (ΔI = Δ1 + Δ2 + Δ3 + ... + Δn)
[Diagram: a chain of incremental partitioning stages S → T1 → T2 → T3 → ... → Tn → F, each stage with a self-loop for multistart]
40
Flow Optimization Results
- If (27401 < num_edges <= 34826) and (143.09 < cpu_time <= 165.28) and (perturbation_delta <= 0.1)
  then num_inc_stages = 4 and num_starts = 3
- If (27401 < num_edges <= 34826) and (85.27 < cpu_time <= 143.09) and (perturbation_delta <= 0.1)
  then num_inc_stages = 2 and num_starts = 1
- ...
- Up to 10% cutsize reduction with the same CPU budget, using tuned #starts, #stages, etc. in MLFM
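A minimal sketch of how learned rules like the two above could drive the incremental flow: look up (#stages, #starts) for an instance and split the ECO perturbation accordingly. Only the two quoted rules are encoded; the default fallback and the even move-count split are illustrative assumptions.

    #include <iostream>
    #include <vector>

    struct Tuning { int numIncStages; int numStarts; };

    // Encode the two rules quoted on the slide; anything else gets a default.
    Tuning chooseTuning(int numEdges, double cpuBudget, double perturbationDelta) {
        if (numEdges > 27401 && numEdges <= 34826 && perturbationDelta <= 0.1) {
            if (cpuBudget > 143.09 && cpuBudget <= 165.28) return {4, 3};
            if (cpuBudget > 85.27  && cpuBudget <= 143.09) return {2, 1};
        }
        return {1, 1};                         // default: one stage, one start
    }

    int main() {
        Tuning t = chooseTuning(30000, 150.0, 0.05);
        std::cout << "stages=" << t.numIncStages << " starts=" << t.numStarts << "\n";

        // Break a perturbation of 1200 ECO moves into roughly equal stages.
        int totalMoves = 1200;
        std::vector<int> stageSize(t.numIncStages, totalMoves / t.numIncStages);
        stageSize.back() += totalMoves % t.numIncStages;
        for (int s : stageSize) std::cout << "stage of " << s << " moves\n";
        return 0;
    }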
41
Outline
- Introduction and motivations
- METRICS system architecture
- Design quality and tool quality
- Applications of the METRICS system
- Issues and conclusions
42
METRICS Deployment and Adoption
- Security: proprietary and confidential information cannot pass across the company firewall
  - may be difficult to develop metrics and predictors across multiple companies
- Standardization: flow, terminology, data management
- Social: "big brother", collection of social metrics
- Data cleanup: obsolete designs, old methodology, old tools
- Data availability with standards: log files, API, or somewhere in between?
- "Design Factories" are using METRICS
43
Conclusions
- METRICS system: automatic data collection and real-time reporting
- New design and process metrics with standard naming
- Analysis of EDA tool quality in the presence of input noise
- Applications of METRICS: tool solution quality estimator (e.g., placement) and instance-specific tool parameter tuning (e.g., incremental partitioner)
- Ongoing work:
  - construct active feedback from METRICS to the design process for automated process improvement
  - expand the current metrics list to include enterprise metrics (e.g., number of engineers, number of spec revisions, etc.)
44
Thank You