Test and Test Equipment
July 2011, San Francisco, California
Dave Armstrong
ITRS 2011 Test and Test Equipment – San Francisco, CA
2011 Test Team
Akitoshi Nishimura, Amit Majumdar, Anne Gattiker, Atul Goel, Bill Price, Burnie West, Calvin Cheung, Chris Portelli-Hale, Dave Armstrong, Dennis Conti, Erik Volkerink, Francois-Fabien Ferhani, Frank Poehl, Hirofumi Tsuboshita, Hiroki Ikeda, Hisao Horibe, Brion Keller, Nilanjan Mukherjee, Rohit Kapur, Sanjiv Taneja, Satoru Takeda, Sejang Oh, Shawn Fetterolf, Shoji Iwasaki, Stefan Eichenberger, Steve Comen, Steve Tilden, Steven Slupsky, Takairo Nagata, Takuya Kobayashi, Tetsuo Tada, Ulrich Schoettmer, Wendy Chen, Yasuo Sato, Yervant Zorian, Yi Cai, Jerry Mcbride, Jody Van Horn, Kazumi Hatayama, Ken Lanier, Ken Taoka, Ken-ichi Anzou, Khushru Chhor, Masaaki Namba, Masahiro Kanase, Michio Maekawa, Mike Bienek, Mike Peng Li, Mike Rodgers, Paul Roddy, Peter Maxwell, Phil Nigh, Prasad Mantri, Rene Segers, Rob Aitken, Roger Barth
2011 Changes
– New section on 3D device test challenges
– Updated Adaptive Testing section
– Logic / DFT: major rewrite of this section, thanks to the addition of new team members representing the three major EDA vendors
– Numerous other changes to specialty-device information
– Test Cost: test cost survey completed that quantifies the industry view
– Other updates will be published for the Logic, Consumer/SOC, RF, and Analog sections
Test Cost Components
– NRE: DFT design and validation; test development
– Device: die area increase; yield loss; untested units
– Test cell: building, people, consumables, DUT interface, test equipment, handling tools, factory automation
– Outputs: good units, reject units, false-pass units, false-fail units
(Previous data; more challenges in the future)
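The recurring components above are often rolled up into a per-device figure. The sketch below is a minimal, hypothetical cost-of-test model (not a roadmap formula): test-cell cost per hour amortized over good devices tested in parallel, discounted by equipment efficiency and yield. All parameter names and values are illustrative assumptions.

```python
def cost_per_good_device(cell_cost_per_hour, test_time_s, sites,
                         oee=0.8, yield_frac=0.95):
    """Hypothetical cost-of-test sketch (not an ITRS formula).

    cell_cost_per_hour: fully loaded test-cell cost ($/hour)
    test_time_s:        test time per insertion, per device (seconds)
    sites:              devices tested in parallel (multi-site count)
    oee:                overall equipment efficiency (0..1)
    yield_frac:         fraction of tested devices that are good
    """
    devices_per_hour = 3600.0 / test_time_s * sites * oee
    good_per_hour = devices_per_hour * yield_frac
    return cell_cost_per_hour / good_per_hour

# Doubling the site count halves the per-device cost -- the
# parallelism lever the deck's later slides keep returning to.
base = cost_per_good_device(100.0, 10.0, sites=8)
doubled = cost_per_good_device(100.0, 10.0, sites=16)
```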
Test Cost Components – 3D Technology Adds Many Challenges
– NRE (DFT design and validation, test development) is repeated for each die in the stack
– Device: die area increase; yield loss; untested units
– Pre-stack test cells deliver "probably good" units into die stacking, followed by a post-stack test cell
– Outputs: good units, rejected units, false-pass units, false-fail units, and good die in a failing stack
– Pass/fail analysis and smart manufacturing close the loop
Adaptive Test Flow
– Test insertions: wafer probe, final test, burn-in, stack / card / system test; assembly operations (this includes test operations at any level of assembly)
– Inputs: fab data (E-test, optical inspection, other inline data), design data, business data, customer specs, assembly/build data, and field operation data
– RT A/O (Real-Time Analysis & Optimization) runs at each test insertion
– PTAD (Post-Test Analysis & Dispositioning) runs after each insertion
– Databases & automated data analysis (this may include multiple databases; analysis includes capabilities like post-test statistical analysis, dynamic routings, and feed-forward data)
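As a concrete illustration of the PTAD step, the sketch below applies a DPAT-style robust outlier screen to one parametric measurement per part: parts that passed their static limits are re-dispositioned if they are statistical outliers within the lot. The part IDs, values, and the k = 6 threshold are hypothetical, not roadmap content.

```python
import statistics

def ptad_screen(measurements, k=6.0):
    """PTAD sketch: flag parts that passed static test limits but are
    statistical outliers within their lot (a DPAT-style screen).
    Uses median/MAD so the outlier itself doesn't inflate the limits.
    `measurements` maps part_id -> one parametric test result."""
    vals = list(measurements.values())
    med = statistics.median(vals)
    mad = statistics.median(abs(v - med) for v in vals)
    sigma = 1.4826 * mad  # MAD-to-sigma factor for a normal distribution
    lo, hi = med - k * sigma, med + k * sigma
    return {pid: ("pass" if lo <= v <= hi else "outlier")
            for pid, v in measurements.items()}

# Hypothetical lot: u10 passed its static limits but is an outlier.
lot = {"u1": 1.00, "u2": 1.01, "u3": 0.99, "u4": 1.02, "u5": 0.98,
       "u6": 1.00, "u7": 1.01, "u8": 0.99, "u9": 1.00, "u10": 1.75}
```

In a real flow this decision would feed the dynamic-routing and feed-forward paths the diagram describes; here it just returns a disposition per part.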
2011 Drivers (slide legend: Unchanged / Revised / New / Drop)
Device trends
– Increasing device interface bandwidth and data rates
– Increasing device integration (SoC, SiP, MCP, 3D packaging)
– Integration of emerging and non-digital CMOS technologies
– Complex package electrical and mechanical characteristics
– Device characteristics beyond the deterministic stimulus/response model
– 3-dimensional silicon: multi-die and multi-layer
– Multiple power modes and multiple time domains
– Fault-tolerant architectures and protocols
Test process complexity
– Device customization / configuration during the test process
– Distributed test to maintain cost scaling
– Feedback data for tuning manufacturing
– Adaptive test and feedback data
– Higher-order dimensionality of test conditions
– Concurrent test within a DUT
– Maintaining unit-level test traceability
2011 Drivers (2)
Economic scaling of test
– Physical and economic limits of packaged test parallelism
– Test data volume and feedback data volume
– Effective limit for the speed difference of HVM ATE versus the DUT
– Managing interface hardware and (test) socket costs
– Trade-off between the cost of test and the cost of quality
– Balancing general-purpose equipment vs. multiple insertions for system test and BIST
2011 Difficult Challenges
Cost of test and overall equipment efficiency
– Progress made in terms of test time, capital cost, and multi-site test
– Continued innovation in DFT, concurrent test, and balancing DPM vs. cost
– Gains in some cases are now limited by overall equipment efficiency
Test development as a gate to volume production (time to market)
– Increasing device complexity is driving more complex test development
– Complexity is also driven by the diversity of device interface types on a single chip
Potential yield losses
– Tester inaccuracies (timing, voltage, current, temperature control, etc.)
– Over-testing (e.g., delay faults on non-functional paths)
– Mechanical damage during the testing process
– Defects in test-only circuitry, or spec failures in a test mode (e.g., BIST, power, noise)
– Some IDDQ-only failures
– Faulty repairs of normally repairable circuits
– Decisions made on overly aggressive statistical post-processing
– Multi-die stacks / TSVs
– Power management issues
2011 Difficult Challenges (2)
Detecting systemic defects
– Testing for local non-uniformities, not just hard defects
– Detecting symptoms and effects of line-width variations, finite dopant distributions, and systemic process defects
Screening for reliability
– Effectiveness and implementation of burn-in, IDDQ, and Vstress testing
– Screening of multiple power-down modes and binning based on power requirements
– Detection of erratic, non-deterministic, and intermittent device behavior
2011 Future Opportunities
Test program automation
– Automatic generation of an entire test program
– Tester-independent test programming language
– Mixed-signal remains a test programming challenge
Scan diagnosis in the presence of compression
Simulation and modeling
– Seamless integration of simulation and modeling into the testing process
– A move to a higher level of abstraction with protocol-aware test resources
– Focused test generation based on layout, modeling, and fed-back fabrication data
Convergence of test and system reliability solutions
– Re-use of test collateral in different environments (ATE, burn-in, system, field)
Summary
Stacked devices change many things for test.
– The methods and approach seem available.
– Considerable work lies ahead to implement them.
Adaptive testing is becoming a standard approach.
– Significant test data accumulation, distribution, and analysis challenges.
Ongoing changes to the RF, Analog, and Specialty device sections.
Many more details to be published in the final document.
Thank You!
Backup
Adaptive Test Database Architecture Example
– Tester / local database: for real-time data analysis and actions; resident on the tester or in the test cell. Latency: <1 second. Retention: hours.
– Production database: data availability for production (lot setup or dispositioning). Latency: minutes. Retention: hours to days.
– Large database: for long-term storage, with longer-term retrieval options (data is available forever). Latency: minutes. Retention: months.
– World-wide, cross-company databases
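The three tiers above can be captured in a small routing model: given a consumer's latency and retention needs, pick the nearest tier that satisfies both. The numeric values below are only order-of-magnitude stand-ins for the slide's "seconds / minutes" and "hours / days / months" figures, not a specification.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_s: float    # worst-case read latency, seconds
    retention_s: float  # how long data stays available, seconds

# Order-of-magnitude stand-ins for the slide's tiers (assumed values).
TIERS = [
    Tier("tester-local", 1.0,       3600 * 8),        # <1 s, hours
    Tier("production",   60.0 * 5,  3600 * 24 * 3),   # minutes, days
    Tier("long-term",    60.0 * 5,  3600 * 24 * 180), # minutes, months
]

def tier_for(max_latency_s, min_retention_s):
    """Return the first (fastest) tier meeting both requirements,
    or None if no tier qualifies."""
    for t in TIERS:
        if t.latency_s <= max_latency_s and t.retention_s >= min_retention_s:
            return t.name
    return None
```

A real-time RT A/O consumer would land on the tester-local tier; a months-later yield-learning query would land on the long-term store.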
3D Device Testing Challenges
– Test access: die-level access; die-in-stack access
– Test flow / cost / resources
– Heterogeneous die in the stack
– Die-in-stack testing; die-to-die interactions
– Debug / diagnosis
– DFT
– Test data management, distribution, and security
– Power implications
SOC / Logic Update
Takes the device roadmap data and calculates:
– Fault expectations both inside and outside the various cores, using multiple fault models
– Required test pattern lengths under five different assumptions:
» Flat test patterns
» Tests implemented taking advantage of the circuit hierarchy
» Tests implemented using compressed flat patterns
» Tests implemented using compressed hierarchical test patterns
» Tests implemented using a low-power scan test approach
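A back-of-the-envelope version of that pattern-length calculation: flat scan test data volume scales with patterns × scan cells (stimulus plus expected response), and on-chip compression divides it. The device sizes and the 50× ratio below are illustrative assumptions, not roadmap values.

```python
def scan_data_volume_bits(patterns, scan_cells, compression=1.0):
    """Rough scan-test data volume in bits: one scan load plus one
    scan unload per pattern, reduced by an assumed compression ratio.
    All inputs here are illustrative, not ITRS roadmap numbers."""
    flat_bits = 2 * patterns * scan_cells  # stimulus + response
    return flat_bits / compression

# Hypothetical device: 10k patterns, 2M scan cells.
flat = scan_data_volume_bits(10_000, 2_000_000)            # flat patterns
compressed = scan_data_volume_bits(10_000, 2_000_000, 50)  # assumed 50x
```

Even this crude model shows why the slide distinguishes flat, hierarchical, and compressed flows: the compression ratio directly divides tester memory and test time.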
LCD Device Probing Challenge Overcome
A new probe needle arrangement (4 layers + 4 layers = 8 layers) could provide a solution as LCD driver probe pads continue to narrow.
Higher Site Count Camera Chips
[Figure: chief-ray geometry (chief, max, and min ray angles) for single-site vs. four-site test.]
tan(θ) = (D/2) / EPD, where D = pupil diameter
F-number = EPD / D
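The slide's two relations can be checked numerically. The sketch below follows the slide's own labels (D = pupil diameter, EPD as drawn in the figure); the sample values are hypothetical.

```python
import math

def chief_ray_angle_deg(pupil_diameter, epd):
    """Ray angle from the slide's relation tan(theta) = (D/2)/EPD.
    Both arguments must be in the same length unit."""
    return math.degrees(math.atan((pupil_diameter / 2) / epd))

def f_number(epd, pupil_diameter):
    """F-number per the slide's definition: F/# = EPD / D."""
    return epd / pupil_diameter

# Hypothetical optics: 2 mm pupil at 8 mm EPD -> F/4.
angle = chief_ray_angle_deg(2.0, 8.0)
fnum = f_number(8.0, 2.0)
```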
MEMS Sensors for Handheld Devices
– Gyro, accelerometers, e-compass, pressure
– Expect a 10% yearly growth