
Slide 1: Measure, Then Improve
Andrew B. Kahng, April 9, 1999

Slide 2: What do we want to improve?
• Profits?
• = Design success (TT$)?
• = Design capability (design technology + mfg technology)?
• = Design process?
• = Designer productivity?
• = CAD tools?
  – what is the $ value of a “better” scheduler, mapper, placer, …?
  – what is the $ value of GUI, usability, …?
• Improving individual CAD tools is probably not the answer

Slide 3: What is the problem?
• Complexity
  – ability to make silicon has outpaced ability to design it
  – complexity of data, system interactions
• SOC
  – more functionality and customization, in less time
  – design at higher levels of abstraction, reuse existing design components
  – customized circuitry must be developed predictably, with less risk
• Key question: “Will the project succeed, i.e., finish on schedule, under budget, while meeting performance goals?”
• SOC design requires an organized, optimized design process

Slide 4: What is the design process?
• Not like any “flow/methodology” bubble chart
  – backs of envelopes, budgeting wars
  – changed specs, silent decisions, e-mails, lunch discussions
  – ad hoc assignments of people, tools to meet current needs
  – proprietary databases, incompatible scripts/tools, platform-dependent GUIs, lack of usable standards
  – design managers operate on intuition, engineers focus on tool shortcomings
• Why did it fail?
  – “CAD tools”
  – “inexperienced engineers”
• Must measure to diagnose, and diagnose to improve

Slide 5: What should be measured?
• We don’t have a clue:
  – running a tool with wrong options, wrong subset of a standard
  – bug in a translator/reader
  – assignment of a junior designer to a project with multiple clocks
  – difference between 300 MHz and 200 MHz in the spec
  – changing an 18-bit adder into a 28-bit adder midstream
  – decision to use domino in critical paths
  – one group stops attending budget/floorplan meetings
• Solution: record everything, then mine the data

Slide 6: Design process data collection
• What revision of what block was what tool called on?
  – by whom?
  – when?
  – how many times? with what keystrokes?
• What happened within the tool as it ran?
  – what was CPU/memory/solution quality?
  – what were the key attributes of the instance?
  – what iterations/branches were made, under what conditions?
• What else was occurring in the project?
  – e-mails, spec revisions, constraint and netlist changes, …
• Everything is fair game; bound only by server bandwidth
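As a concrete illustration, a per-tool-run record of this kind could be captured in a few lines of code. The following is a minimal sketch, not a schema from the talk; the field names (tool, block_revision, quality_metrics, etc.) and the JSON-lines store are assumptions made purely for illustration.

```python
# Hypothetical per-run metrics record; field names are illustrative only.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class ToolRunRecord:
    tool: str                  # e.g. "placer"
    tool_version: str
    block: str                 # design block the tool was invoked on
    block_revision: str
    user: str
    start_time: float = field(default_factory=time.time)
    cpu_seconds: float = 0.0
    peak_memory_mb: float = 0.0
    quality_metrics: dict = field(default_factory=dict)      # e.g. {"hpwl": 1.9e6}
    instance_attributes: dict = field(default_factory=dict)  # e.g. {"num_cells": 50000}

def log_run(record: ToolRunRecord, path: str = "runs.jsonl") -> None:
    """Append one run record to a JSON-lines repository."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a single placer run.
log_run(ToolRunRecord(tool="placer", tool_version="4.2", block="alu_core",
                      block_revision="r17", user="designer_a",
                      cpu_seconds=812.4, peak_memory_mb=1450.0,
                      quality_metrics={"hpwl": 1.9e6, "worst_slack_ns": -0.12}))
```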

Slide 7: Example diagnoses
• User performs same operation repeatedly with nearly identical inputs
  – tool is not acting as expected
  – solution quality is poor, and knobs are being twiddled
• Email traffic in a project:
  – missed deadline, revised deadline; disengagement; project failed
  [chart: email traffic vs. time]
• Infinite possibilities! (and lots of interesting research…)
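The first diagnosis above can be phrased as a simple rule over the run log. The sketch below reuses the hypothetical runs.jsonl format from the earlier example; the flag_repeated_runs helper and the window/threshold values are invented for illustration.

```python
# Flag users who re-run the same tool on the same block many times in a short
# window, which may indicate knob-twiddling or a tool not acting as expected.
import json
from collections import defaultdict

def flag_repeated_runs(path="runs.jsonl", window_s=3600, threshold=5):
    runs_by_key = defaultdict(list)
    with open(path) as f:
        for line in f:
            r = json.loads(line)
            runs_by_key[(r["user"], r["tool"], r["block"])].append(r["start_time"])
    flags = []
    for key, times in runs_by_key.items():
        times.sort()
        # Sliding window: count runs starting within window_s of each run.
        for i, t0 in enumerate(times):
            n = sum(1 for t in times[i:] if t - t0 <= window_s)
            if n >= threshold:
                flags.append(key)
                break
    return flags

print(flag_repeated_runs())
```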

Slide 8: Benefits to project management
• Resource projections before projects start
  – go/no-go at the earliest point
• Accurate project post-mortems
  – everything was tracked: tools, flows, communications, changes
  – optimize the next project based on past results
  – no data or information “loose” at project end
• Less wasted resources
  – recover from bad runs (don’t make the same mistakes twice)
  – prevent out-of-sync runs
  – no duplication of data/effort
  – R&D playground differentiated from Design
• Efficient communications: email templates, auto-advisors, ...
• Reproducibility: software assembly line; science, not art

Slide 9: Benefits to tools R&D
• Methodology for continuously tracking data over the entire lifecycle of instrumented tools
• More efficient analysis of realistic data
  – no need to rely only on extrapolations of tiny artificial “benchmarks”
  – no need to collect source files for test cases and re-run them in house
• Facilitates identification of key design metrics and their effects on tools
  – standardized vocabulary, schema for design/instance attributes
  – cf. Fujitsu CSI?
• Improves benchmarking
  – apples to apples, and what are the apples in the first place?
  – apples to oranges as well, given enough correlation research

Slide 10: First steps
• Schema for information within the design process
• Repository for this information
  – data warehouse, APIs, ...
• Instrument a design process and collect real data
  – scripts around existing reports/logfiles (see the sketch below)
  – new releases of tools that are compliant with the metrics schema
  – possible initial scope: RTL to layout (Fabrics)
  – candidates: EDA vendors, GSRC tool research, design driver projects
  – (cf. LSIL, IP Symphony, Semicustom Highway, …)
• Data mining and data visualization tools
• We should outsource most of the above and concentrate on the mining, diagnosis, and metrics definition
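A “script around existing reports/logfiles” could be as simple as a regex scraper that turns a tool’s log into metrics records. The log format and patterns below are invented for the sketch; a real flow would adapt them to each tool’s actual output.

```python
# Illustrative logfile scraper feeding the same JSON-lines repository as above.
import re
import json

PATTERNS = {
    "cpu_seconds":    re.compile(r"Total CPU time:\s*([\d.]+)\s*s"),
    "peak_memory_mb": re.compile(r"Peak memory:\s*([\d.]+)\s*MB"),
    "hpwl":           re.compile(r"Final HPWL:\s*([\d.eE+]+)"),
}

def scrape_logfile(logfile: str) -> dict:
    """Extract whichever known metrics appear in an existing tool log."""
    metrics = {}
    with open(logfile) as f:
        text = f.read()
    for name, pat in PATTERNS.items():
        m = pat.search(text)
        if m:
            metrics[name] = float(m.group(1))
    return metrics

# Example: append scraped metrics as one more record in runs.jsonl.
with open("runs.jsonl", "a") as out:
    out.write(json.dumps({"tool": "placer", **scrape_logfile("placer.log")}) + "\n")
```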

Slide 11: Schema fragments (placement)
• Basic
  – runtime, peak memory, HPWL/RSMT, partitioner/analytic placer traces, timing/noise estimates, resynthesis ops
  – technology, layer stack, site map, cell library EEQ/LEQ flexibility, porosity measures, power distribution scheme, current delivery capability, ...
• Hierarchy-related
  – how repeaters are treated in the hierarchy, spare methodology
  – for each node: hierarchy level, xtor count, layout area, node’s placed bounding box, fanout, ...
  – for each net: timing budget/slack, LCA in hierarchy, route controls, membership in a bus, …
• Routability
  – congestion, pin density, cell density, crossing, ...
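One way a few of these fragments might be laid out is as relational tables keyed by run, node, and net. The sketch below uses SQLite with illustrative, non-standardized table and column names covering a subset of the attributes listed above.

```python
# Hypothetical relational layout for a few placement schema fragments.
import sqlite3

conn = sqlite3.connect("metrics.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS placement_run (
    run_id        INTEGER PRIMARY KEY,
    block         TEXT,
    runtime_s     REAL,
    peak_mem_mb   REAL,
    hpwl          REAL,
    rsmt          REAL,
    layer_stack   TEXT,
    cell_library  TEXT
);
CREATE TABLE IF NOT EXISTS node_attr (          -- one row per hierarchy node
    run_id          INTEGER REFERENCES placement_run(run_id),
    node            TEXT,
    hierarchy_level INTEGER,
    xtor_count      INTEGER,
    layout_area_um2 REAL,
    bbox            TEXT     -- placed bounding box, e.g. "x1,y1,x2,y2"
);
CREATE TABLE IF NOT EXISTS net_attr (           -- one row per net
    run_id    INTEGER REFERENCES placement_run(run_id),
    net       TEXT,
    slack_ns  REAL,
    fanout    INTEGER,
    in_bus    INTEGER        -- 0/1 flag for bus membership
);
""")
conn.commit()
```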

Slide 12: Recall: Optimization/Prediction
• We need:
  – most relevant formulations and objectives
  – most relevant parameters of an instance
  – most relevant models of optimization heuristics (i.e., tools)
  – what should be passed down/up/between tools
  – bookshelf of formulations, interfaces, evaluators, …
  – culture change: standards for reporting/comparison, lower barriers to entry for research
• Metrics implicitly gives all of these?!

Slide 13: Manufacturability metrics
• Impact of mfg on design productivity
• Inter- and intra-die variation
• Topography effects
• Impact and tradeoffs of newer lithography techniques and materials
• What is the appropriate abstraction of the manufacturing process for design?

Slide 14: Potential research: new metrics
• Tools:
  – scope of applicability
  – predictability
  – usability
• Designs:
  – difficulty of design or manufacturing
  – verifiability, debuggability/probe-ability
  – likelihood of a bug escape
  – $ cost (function of design effort, integratability, migratability, …)
• Statistical metrics, time-varying metrics
• Key areas:
  – T1-T2 interface
  – T2-T3 interface

Slide 15: Other potential outcomes
• Techniques for optimizing design processes
• New measures of design value (as opposed to cost)
  – what should be the price of a design?
• Meta-result: documentation and measurement of GSRC’s impact on the system design process
• Metrics repository is the design repository?
• Summary: Record it, Mine it, Measure it, Diagnose it, … Then Improve it

