A METRICS System for Design Process Optimization
Andrew B. Kahng and Stefanus Mantik*
UCSD CSE and ECE Depts., La Jolla, CA
*UCLA CS Dept., Los Angeles, CA
Purpose of METRICS
- Standard infrastructure for the collection and storage of design process information
- Standard list of design metrics and process metrics
- Analyses and reports that are useful for design process optimization
- METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve
METRICS System Architecture
[Architecture diagram] Components: tool instrumented by a transmitter (wrapper-based or via the transmitter API), XML messages sent over the inter-/intranet to a web server, a metrics data warehouse (DB), and Java applets for data mining and reporting.
XML Example
Metric records (name, value, type, timestamp):
- TOTAL_WIRELENGTH, 14250347, INTEGER, 010312:220512
- TOTAL_CPU_TIME, 2150.28, DOUBLE, 010312:220514
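A plausible shape for one such record is sketched below; the tag names (metric, name, value, type, timestamp) are assumptions for illustration, not taken from the actual METRICS schema:

  <metric>
    <name>TOTAL_WIRELENGTH</name>
    <value>14250347</value>
    <type>INTEGER</type>
    <timestamp>010312:220512</timestamp>
  </metric>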
Transmitter Examples

Wrapper-based transmitter:

  #!/usr/local/bin/perl -w
  $TOOL = $0;
  $PID = `initProject`;
  $FID = `initFlow -pid ${PID}`;
  $TID = `initToolRun -pid ${PID} -fid ${FID}`;
  system "sendMetrics TOOL_NAME ${TOOL} STRING";
  ...
  while (<LOG>) {   # scan the tool's log; the filehandle name was lost in transcription
    ...
    system "sendMetrics ${NAME} ${VALUE} ${TYPE}";
    ...
  }
  system "terminateToolRun";
  system "terminateFlow -pid ${PID} -fid ${FID}";
  system "terminateProject -pid ${PID}";
  exit 0;

API-based transmitter:

  #include "transmitter.h"

  int main(int argc, char* argv[]) {
    Transmitter MTR;
    MTR.initProject();
    MTR.initFlow();
    MTR.initToolRun();
    MTR.sendMetrics("TOOL_NAME", argv[0], "STRING");
    ...
    MTR.sendMetrics(Name, Value, Type);
    ...
    MTR.terminateToolRun();
    MTR.terminateFlow();
    MTR.terminateProject();
    return 0;
  }
Example Reports
[Pie chart] % aborted per machine: hen 95%, rat 1%, bull 2%, donkey 2%
[Pie chart] % aborted per task: ATPG 22%, synthesis 20%, physical 18%, postSyntTA 13%, BA 8%, placedTA 7%, funcSim 7%, LVS 5%
[Scatter plot] CPU_TIME = 12 + 0.027 × NUM_CELLS, correlation = 0.93
METRICS Server
[Diagram] Components: Apache web server (handling requests and reports), transmitter servlets and reporting servlets, Report EJB, DB XFace EJB, and X'Mit EJB in front of an Oracle 8i database.
Open Source Architecture
- METRICS components are industry standards, e.g., Oracle 8i, Java servlets, XML, Apache web server, PERL/TCL scripts, etc.
- Custom-generated code for wrappers and APIs is publicly available, enabling:
  - collaboration in the development of wrappers and APIs
  - porting to different operating systems
- Code is available at: http://vlsicad.cs.ucla.edu/GSRC/METRICS
METRICS Standards
- Standard metrics naming across tools:
  - same name, same meaning, independent of tool supplier
  - generic metrics and tool-specific metrics
  - no more ad hoc, incomparable log files
- Standard schema for the metrics database
- Standard middleware for the database interface
For complete current lists, see: http://vlsicad.cs.ucla.edu/GSRC/METRICS
Generic and Specific Tool Metrics
[Table] Partial list of metrics now being collected in Oracle 8i.
Flow Metrics
- Tool metrics alone are not enough:
  - a design process consists of more than one tool
  - a given tool can be run multiple times
  - design quality depends on the design flow and methodology (the order of the tools and the iterations within the flow)
- Flow definition: a directed graph G = (V, E)
  - V = T ∪ {S, F}, where T = {T1, T2, T3, ..., Tn} is the set of tasks, S is the starting node, and F is the ending node
  - E = {E_S1, E_11, E_12, ..., E_xy} is the set of edges; an edge E_xy with x < y is a forward path, with x = y a self-loop, and with x > y a backward path
Flow Example
[Flow graph] S → T1 → T2 → T3 → T4 → F, where some tasks are optional and edges allow loops.
Task sequence: T1, T2, T1, T2, T3, T3, T3, T4, T2, T1, T2, T4
[Unrolled trace] S → T1 → T2 → T1 → T2 → T3 → T3 → T3 → T4 → T2 → T1 → T2 → T4 → F
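As a small illustration of the edge taxonomy applied to this task sequence, the following C++ sketch (not part of the original system; all names invented) classifies each consecutive transition:

  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Classify edge E_xy per the flow definition:
  // x < y: forward path, x == y: self-loop, x > y: backward path.
  const char* classify(int x, int y) {
      if (x < y)  return "forward";
      if (x == y) return "self-loop";
      return "backward";
  }

  int main() {
      // The example task sequence above, by task index.
      std::vector<int> seq = {1, 2, 1, 2, 3, 3, 3, 4, 2, 1, 2, 4};
      for (std::size_t i = 0; i + 1 < seq.size(); ++i)
          std::cout << "T" << seq[i] << " -> T" << seq[i + 1] << ": "
                    << classify(seq[i], seq[i + 1]) << "\n";
      return 0;
  }

For this sequence, T2 → T1 is reported as a backward path and T3 → T3 as a self-loop, matching the flow definition above.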
Flow Tracking
[Flow graph annotated with the recorded execution]
Task sequence: T1, T2, T1, T2, T3, T3, T3, T4, T2, T1, T2, T4
Chip Design Flow Example
Simple chip design flow:
- T1 = synthesis & technology mapping
- T2 = load wireload model (WLM)
- T3 = pre-placement optimization
- T4 = placement
- T5 = post-placement optimization
- T6 = global routing
- T7 = final routing
- T8 = custom WLM generation
[Flow graph] S → T1 → ... → T7 → F, with T8 (custom WLM generation) on a backward path.
Optimization of Incremental Multilevel FM Partitioning
Motivation: incremental netlist partitioning.
- Given: an initial partitioning solution, a CPU budget, and instance perturbations (ΔI)
- Find: the number of parts for incremental partitioning and the number of starts
- T_i = incremental multilevel FM partitioning; self-loop = multistart
- n = number of breakups of the perturbation (ΔI = Δ1 + Δ2 + Δ3 + ... + Δn)
[Flow graph] S → T1 → T2 → T3 → ... → Tn → F, with self-loops on the tasks.
Flow Optimization Results
- If (27401 < num_edges ≤ 34826) and (143.09 < cpu_time ≤ 165.28) and (perturbation_delta ≤ 0.1), then num_inc_parts = 4 and num_starts = 3
- If (27401 < num_edges ≤ 34826) and (85.27 < cpu_time ≤ 143.09) and (perturbation_delta ≤ 0.1), then num_inc_parts = 2 and num_starts = 1
- ...
[Scatter plot] Predicted vs. actual CPU time (secs)
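To show how such learned rules could drive flow parameters, here is a minimal C++ sketch encoding only the two rules quoted above; the struct, function, and parameter names are invented, and the remaining rules elided by "..." are left out:

  #include <iostream>
  #include <optional>

  struct FlowParams { int num_inc_parts; int num_starts; };

  // Apply the two learned rules above; return std::nullopt if neither fires.
  std::optional<FlowParams> choose_params(int num_edges, double cpu_time,
                                          double perturbation_delta) {
      bool edge_bin = (27401 < num_edges && num_edges <= 34826);
      if (edge_bin && perturbation_delta <= 0.1) {
          if (143.09 < cpu_time && cpu_time <= 165.28) return FlowParams{4, 3};
          if (85.27 < cpu_time && cpu_time <= 143.09)  return FlowParams{2, 1};
      }
      return std::nullopt;  // remaining rules are elided in the slide
  }

  int main() {
      if (auto p = choose_params(30000, 150.0, 0.05))
          std::cout << "num_inc_parts=" << p->num_inc_parts
                    << ", num_starts=" << p->num_starts << "\n";
      return 0;
  }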
Datamining Integration
[Diagram] Components: database (SQL tables) and datamining tool(s) linked by a datamining interface; Java servlets pass DM requests and results over the inter-/intranet.
Categories of Data for DataMining
1. Design instances and design parameters: attributes and metrics of the design instances, e.g., number of gates, target clock frequency, number of metal layers, etc.
2. CAD tools and invocation options: the tools and user options that are available, e.g., tool version, optimism level, timing-driven option, etc.
3. Design solutions and result qualities: qualities of the solutions obtained from given tools and design instances, e.g., number of timing violations, total tool runtime, layout area, etc.
Possible Usage of DataMining
Using the three categories above: (1) design instances and design parameters, (2) CAD tools and invocation options, (3) design solutions and result qualities:
- Given (1) and (2), estimate the expected quality of (3), e.g., runtime predictions, wirelength estimations, etc.
- Given (1) and (3), find the appropriate setting of (2), e.g., the best value for a specific option, etc.
- Given (2) and (3), identify the subspace of (1) that is "doable" for the tool, e.g., the category of designs that are suitable for the given tools, etc.
DM Results: QPlace CPU Time
- If (num_nets ≤ 7332), then CPU_time = 21.9 + 0.0019 num_cells + 0.0005 num_nets + 0.07 num_pads - 0.0002 num_fixed_cells
- If (num_overlap_layers = 0) and (num_cells ≤ 71413) and (TD_routing_option = false), then CPU_time = -15.6 + 0.0888 num_nets - 0.0559 num_cells - 0.0015 num_fixed_cells - num_routing_layer
- ...
[Scatter plot] Predicted vs. actual CPU time (secs)
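The first rule above translates directly into a predictor; a sketch follows, with invented function and parameter names, and with the second rule omitted because the coefficient of its num_routing_layer term appears to be missing:

  // Predicted QPlace CPU time (secs) under the first rule above;
  // returns a negative value when the rule's guard does not hold.
  double predict_qplace_cpu_time(int num_nets, int num_cells,
                                 int num_pads, int num_fixed_cells) {
      if (num_nets <= 7332)
          return 21.9 + 0.0019 * num_cells + 0.0005 * num_nets
               + 0.07 * num_pads - 0.0002 * num_fixed_cells;
      return -1.0;  // other rules elided in the slide
  }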
Testbed: Metricized Cadence PKS Flow
[Flow diagram] BuildGates (synthesis & tech map) → pre-placement optimization → QP (placement) → post-placement optimization → GRoute → WRoute, with each step reporting to METRICS.
NELSIS Flow Manager Integration
[Figure] Flow managed by the NELSIS flow manager.
Issues
- Tool interface: each tool has a unique interface
- Security: proprietary and confidential information
- Standardization: flow, terminology, data management, etc.
- Cost of metrics collection: how much data is too much?
- Other non-EDA tools: LSF, license manager, etc.
- Social: "big brother" concerns, collection of social metrics, etc.
- Bug detection: reporting the configuration that triggers a bug, etc.
Conclusions
- Metrics collection should be automatic and transparent; the API-based transmitter is the "best" approach
- Ongoing work with the EDA and designer communities to identify the tool metrics of interest:
  - users: metrics needed for design-process insight and optimization
  - vendors: implementation of the requested metrics, with standardized naming and semantics