Rouwaida Kanj, *Rajiv Joshi, and Sani Nassif


Mixture Importance Sampling and Its Application to the Analysis of SRAM Designs in the Presence of Rare Failure Events
Rouwaida Kanj, *Rajiv Joshi, and Sani Nassif
IBM Austin Research Labs, Austin, TX; *IBM TJ Watson Labs, Yorktown Heights, NY
July 25, 2006

Motivation
SRAM comprises 50% or more of modern designs. For density, SRAM uses the smallest possible devices and is therefore very sensitive to the impact of manufacturing variations:
VTH variations from random dopant fluctuations.
Dimension variations from lithography, line-edge roughness, and other factors.
Variations can cause an SRAM cell to fail catastrophically. Example: cannot write a 1 into the cell! This is not a continuous event, unlike timing variations.

Analyze This!
[Plot: array yield (%) vs. number of cells per array (1000 to 1,000,000), for cell fail probabilities from 1% down to 0.000001%]
Since failures are largely independent, achieving good yield on a large array requires very low cell fail probabilities.
We need robust statistical methods to accurately estimate such rare fail probabilities.

Why not use Standard Monte Carlo?
Pf = shaded tail area = 0.00001%
In a 1-Mbit array design, Yield = 90% implies a cell Pf of about 1e-5%, i.e. z_eqv ≈ 5.3σ.
Millions of samples are needed to estimate such low failure probabilities.
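The "millions of samples" claim follows from the binomial statistics of plain Monte Carlo. A back-of-the-envelope sketch (the 10% relative-error target is an illustrative choice, not from the slides):

```python
import math

def mc_samples_needed(pf, rel_err):
    # Each Monte Carlo sample is a Bernoulli(pf) trial; after N samples the
    # estimator's relative standard error is sqrt((1 - pf) / (pf * N)).
    # Solve for the N that achieves the target relative error.
    return math.ceil((1.0 - pf) / (pf * rel_err ** 2))

# A ~5.3-sigma cell fail probability (1e-5 % = 1e-7):
print(mc_samples_needed(1e-7, 0.10))  # on the order of 1e9 samples
```

Even relaxing the accuracy target barely helps: the cost scales as 1/Pf, which is exactly what importance sampling attacks.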

Importance Sampling
Combine the best of the available ideas:
Monte Carlo works well, but wastes a lot of time sampling around the mean rather than in the tail!
Importance Sampling gets around this problem by distorting the natural sampling function, f(x), into a new density g(x) that produces more samples in the important region(s).
A mathematical manipulation then un-biases the estimates: use weighted averages, weighting each sample by the likelihood ratio f(x)/g(x).
[Figure: samples drawn from f(x) vs. from g(x) = U(x); the distorted density yields many more failed samples]
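The un-biasing step can be sketched in one dimension: draw from a shifted Gaussian g, and whenever a sample lands in the fail region, accumulate the weight f(x)/g(x). The 4σ threshold and the choice of shift here are illustrative assumptions, not values from the slides:

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def is_estimate(n, threshold=4.0, shift=4.0, seed=0):
    # Estimate P(X > threshold) for X ~ N(0,1) by sampling from the
    # shifted density g(x) = N(shift, 1) and re-weighting by f(x)/g(x).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += normal_pdf(x) / normal_pdf(x, mu=shift)
    return total / n

pf = is_estimate(20000)  # close to the true 4-sigma tail, ~3.17e-5
```

With only 20k samples this gets within a few percent of the true tail probability, whereas plain Monte Carlo would see a handful of fails at best.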

Mixture Importance Sampling
Uniform sampling: g(x) = U(x).
Can do better: shifted Gaussian, g(x) = f(x − μs). We refer to μs as the shift center.
Mixture Importance Sampling (MixIS): use a mixture of distributions adjusted so that the "right" area gets the attention, without leaving any cold spots. E.g.:
g(x) = λ1·f(x) + λ2·U(x) + (1 − λ1 − λ2)·f(x − μs)
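A minimal 1-D sketch of the three-component mixture above. The λ values, the uniform support, and the 4σ fail threshold are illustrative assumptions; the slides do not prescribe them:

```python
import math
import random

def norm_pdf(x, mu=0.0):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def mixis_estimate(n, threshold=4.0, mu_s=4.0, lam1=0.1, lam2=0.1,
                   u_lo=-6.0, u_hi=6.0, seed=1):
    # Mixture density: g(x) = lam1*f(x) + lam2*U(x) + (1-lam1-lam2)*f(x - mu_s)
    rng = random.Random(seed)
    lam3 = 1.0 - lam1 - lam2
    u_pdf = 1.0 / (u_hi - u_lo)
    total = 0.0
    for _ in range(n):
        r = rng.random()
        if r < lam1:
            x = rng.gauss(0.0, 1.0)          # natural component f(x)
        elif r < lam1 + lam2:
            x = rng.uniform(u_lo, u_hi)      # uniform component U(x)
        else:
            x = rng.gauss(mu_s, 1.0)         # shifted component f(x - mu_s)
        g = lam1 * norm_pdf(x) + lam2 * u_pdf + lam3 * norm_pdf(x, mu_s)
        if x > threshold:
            total += norm_pdf(x) / g         # un-bias by f(x)/g(x)
    return total / n
```

Keeping a little natural and uniform mass in g bounds the weights everywhere, which is what "no cold spots" buys: the estimator stays well-behaved even if μs is off.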

Approximating the Shift Center μs
For a given performance metric (array function: stability, writability, …):
1. Uniformly sample the space.*
2. If the number of failures is < 40** (we are estimating a mean), go to 1.
3. Find the center of gravity (C.O.G.), μs, of the bad points.
[Figure: failing samples in the (VTN1, VTN2) plane along two fail mechanisms, with μs at their center of gravity]

Approximating the Shift Center μs (cont'd)
* Quasi-random sequences, like Sobol sequences, guarantee that the samples are maximally spread in the space:
The sample is a good representative of the population.
The failure-region mean offers a good approximation to the population mean; it does not result from a clustered set of failing samples.
** The accuracy of μs is not very important:
A small number of failures estimates a mean with good confidence.
We only want an approximate C.O.G.; importance sampling will follow.
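The shift-center search above can be sketched as follows. The toy linear fail boundary, the ±6σ uniform range, and the batch size are assumptions for illustration (the slides use quasi-random Sobol points rather than pseudo-random ones):

```python
import random

def find_shift_center(is_fail, dims, batch=1000, min_fails=40,
                      lo=-6.0, hi=6.0, seed=2, max_batches=100):
    # Uniformly sample the normalized variable space until at least
    # min_fails failing points are collected, then return their
    # center of gravity as the shift center mu_s.
    rng = random.Random(seed)
    fails = []
    for _ in range(max_batches):
        for _ in range(batch):
            x = [rng.uniform(lo, hi) for _ in range(dims)]
            if is_fail(x):
                fails.append(x)
        if len(fails) >= min_fails:
            break
    return [sum(col) / len(fails) for col in zip(*fails)]

# Toy linear fail boundary in 2-D: fail when the deviations sum past ~5.5*sqrt(2)
mu_s = find_shift_center(lambda x: x[0] + x[1] > 7.78, dims=2)
```

Because only an approximate C.O.G. is needed, a few batches of uniform samples suffice; the importance-sampling stage absorbs any residual error in μs.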

Case Study: µs convergence VTN1 VTN2 6s - 6s z=5.5 a Assume linear Fail Boundary All variables contribute to fail equally 1 variable contributes to the fail Find expected number of fails due to uniform sampling Shaded region  failure Pf corresponds to z (@ 5.5)

Case Study: µs Convergence (cont'd)
log(Nf) ≈ −0.06·nD + 1.71
Nf: expected number of fails per 1000 uniform samples
nD: number of dimensions
A few thousand uniform samples are sufficient for finding µs.
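Taking the slide's empirical fit at face value (assuming the log is base 10), a quick evaluation shows why a few thousand uniform samples are enough to collect the ~40 fails needed for the C.O.G.:

```python
def expected_fails_per_1000(n_dims):
    # Empirical fit from the case study: log10(Nf) ~ -0.06 * nD + 1.71
    return 10 ** (-0.06 * n_dims + 1.71)

# For a 6-variable cell (six VT deviations), roughly 22 fails per 1000
# uniform samples, so ~40 fails needs only a couple thousand samples.
samples_for_40_fails = 1000 * 40 / expected_fails_per_1000(6)
```

The fit also shows the gentle decay with dimension: even at a dozen variables the uniform pre-pass still turns up several fails per thousand samples.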

Great Accuracy and Speedup
Compare the number of simulations needed to obtain a converging estimate:
Speedup = N_MC / N_MixIS
N_MC: number of Monte Carlo simulations
N_MixIS: number of importance-sampling simulations

Great Accuracy and Speedup (cont'd)
[Table: analytical yield vs. MixIS yield, in equivalent-σ terms, for several analytical test functions fun(xi)]
Great accuracy, and impressive speedup of up to 10^5× vs. Monte Carlo.

SRAM Stability Analysis
6T SRAM cells, 65nm technology, 64 cells per bitline.
VTi (i = 1 to 6) are independent Gaussians ~ N(0, σVT).
Study the read/write stability of the cell:
Destructive read: during a read of '0', the cell may erroneously flip.
Writability: it is possible that we cannot write a '0' to the cell.
[Schematic: 6T cell with left (L) and right (R) bitlines and the word line]

SRAM Yield: MC vs. MixIS
Application: writability yield of an SRAM cell.
MixIS is very efficient: a few thousand samples are needed to converge.
The Monte Carlo method loses its efficiency as EQV increases: for EQV ~ 4, more than 100k samples were needed, and at EQV = 4 standard Monte Carlo had still not converged.

Estimated yield (equivalent σ):
Monte Carlo: 1.49, 2.56, 3.49, 4.15, then too long to converge
MixIS: 1.53, 2.51, 3.42, 4.22, 4.96, 5.68, 6.06
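The EQV axis reads as an equivalent-sigma transform of the estimated cell fail probability; a small helper (assuming the usual one-sided Gaussian-tail convention) makes the mapping explicit:

```python
from statistics import NormalDist

def eqv_sigma(pf):
    # Map a fail probability to its "equivalent sigma": the z for which
    # P(N(0,1) > z) = pf.
    return NormalDist().inv_cdf(1.0 - pf)

z = eqv_sigma(1e-7)  # roughly 5.2, near the 5.3-sigma point quoted earlier
```

This is why plain Monte Carlo stalls past EQV ≈ 4: each additional equivalent sigma shrinks Pf, and hence the expected fail count per sample, by orders of magnitude.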

Conclusions
An efficient methodology for analyzing SRAM designs in the presence of variability, in terms of:
Evaluating SRAM stability, writability, readability, and other performance metrics.
Yield estimation of memory designs.
Comparison across a variety of SRAM designs and technologies.
With the proper choice of sampling distribution, Importance Sampling lends itself as a comprehensive and computationally efficient methodology:
Monte-Carlo-like accuracy.
Several orders of magnitude of gain in computational speed.
Verified with hardware data as well.