
EE201C Final Project Adeel Mazhar, Charwak Apte

Problem Statement
– Need to consider both read and write failure.
– Pick the design point which minimizes the likelihood of failure.
– (Diagrams taken from the problem statement PDF file; credited to Fang Gong.)

Exhaustive Search
– Done in two phases. First phase: 10 steps per design variable, 100 QMC samples per design point. Second phase: 5 steps per design variable, QMC samples per design point [count lost].
– Yield was given priority over power and area.
– Multiple points were found with the same yield; we sorted those by area and picked the one with the lowest area (a grid-search sketch follows below).
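A minimal sketch of this two-phase grid search, assuming a hypothetical evaluate(design, n_samples) wrapper around the QMC-sampled SPICE flow (all names and the sort key are illustrative, not the project's actual code):

```python
import itertools

# Hypothetical wrapper around the QMC-sampled SPICE flow: returns
# (yield, power, area) for one design point.  Illustrative only.
def evaluate(design, n_samples):
    raise NotImplementedError("hook this up to the SPICE/QMC flow")

def grid_search(bounds, steps, n_samples):
    """Evaluate every point on a uniform grid over the design variables,
    then rank by yield first and area second (yield has priority)."""
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for (lo, hi) in bounds
    ]
    results = []
    for design in itertools.product(*axes):
        y, power, area = evaluate(design, n_samples)
        results.append((design, y, power, area))
    results.sort(key=lambda r: (-r[1], r[3]))  # max yield, then min area
    return results[0]

# Phase 1: coarse grid, 10 steps per variable, 100 QMC samples per point;
# phase 2 refines around the phase-1 winner with 5 steps per variable.
# best = grid_search(bounds, steps=10, n_samples=100)
```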

Many Points with the Same Yield
[Table with columns M2 Vth | M5 Vth | M2 Leff | M5 Leff | Yield | Power | Area, listing several design points that achieved the same yield; the numeric entries were lost in transcription.]

Results of Exhaustive Search
– M2 Length = 95 nm
– M5 Length = 95 nm
– M2 Vth = 0.5 V
– M5 Vth = 0.2 V
– Yield = [value lost]
– Power = 3.18E-05 (nominal = 3.2130e-5)
– Area = [value lost] (nominal = 1.1516)

Analytical Study
– We wanted to see the effect on yield of moving each parameter away from its nominal value.
– We found that certain parameters had a very strong and consistent effect on the yield.
– We used this to reduce the solution space.

Analytical Study
[Table with columns Parameter | Effect of setting to max | Effect of setting to min | Setpoint, with separate read and write rows for M2 Len, M5 Len, M2 Vth, and M5 Vth. Most numeric entries were lost in transcription; the recoverable values are the setpoints M2 Vth = 0.5 V and M5 Vth = 0.2 V, a read criterion of "less than 164.3 mV", and a write criterion of "less than 629 mV".]

Finding the Optimum Design Point
– Start at a point and use gradient descent: explicitly optimize a cost function with a minimal number of wasted samples.
– Leverage WLHS to quickly estimate the yield of each design point.
– The solution space is effectively only two variables, since M5 can be fixed at its optimum value based on the given yield criteria.
– The method has to be problem-specific: gradient descent is a poor choice when there are many local minima, but we have not encountered any here.

Gradient Descent
– Pick a starting point.
– For each parameter, simulate at the starting point with that parameter perturbed by +/- one step.
– Pick the best resulting point as the new starting point and repeat.
– If the difference between the old point and the current point is small, exit (see the sketch below).
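A minimal sketch of this descent loop (greedy coordinate descent on the yield), reusing the same kind of hypothetical yield-estimation wrapper as above; step sizes, tolerance, and names are illustrative:

```python
def descend(start, steps, estimate_yield, tol=1e-3, max_iters=50):
    """Greedy coordinate descent on the design point, maximizing yield.

    start:          list of design-variable values
    steps:          perturbation size per variable
    estimate_yield: callable mapping a design point to an estimated yield
                    (e.g. a WLHS-based estimator)
    """
    point = list(start)
    best_yield = estimate_yield(point)
    for _ in range(max_iters):
        candidates = []
        for i in range(len(point)):
            for delta in (+steps[i], -steps[i]):
                trial = list(point)
                trial[i] += delta
                candidates.append((estimate_yield(trial), trial))
        trial_yield, trial = max(candidates, key=lambda c: c[0])
        # Stop when the best neighbor no longer improves meaningfully.
        if trial_yield - best_yield < tol:
            break
        point, best_yield = trial, trial_yield
    return point, best_yield
```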

Initial Results
– Used QMC at each design point, 500 samples per design point.
– 123 SPICE runs, ~45 min; exhaustive search takes 10,000 SPICE runs, 4-7 hours.
– Yield = [value lost]
– M2len = [value lost] (need to fix constraint code)
– M5len = [value lost]
– M2vth = [value lost]
– M5vth = [value lost]

Sampling Methods Tried
1. Monte Carlo
2. Quasi Monte Carlo
3. Latin Hypercube Sampling
4. Weighted Latin Hypercube Sampling
5. Importance Sampling
We have already covered MC and QMC; the next few slides look at LHS, WLHS, and IS.

Latin Hypercube Sampling
– The CDF is used to divide each variable's span into equiprobable partitions, so the variable's Gaussian distribution is preserved.
– LHS samples for L1, L2, Vt1, and Vt2 are obtained and randomly permuted (a sketch follows below).
– In our experiments it was only about as good as QMC.
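A minimal sketch of this construction for independent Gaussian variables, assuming NumPy and SciPy are available (the four-variable setup and names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def lhs_gaussian(n_samples, means, sigmas, rng=None):
    """Latin hypercube samples for independent Gaussian variables.

    Each variable's CDF range [0, 1] is split into n_samples equiprobable
    bins; one uniform draw is taken per bin and mapped back through the
    inverse Gaussian CDF, preserving the Gaussian distribution.  Each
    variable's column is then independently permuted so the pairings
    across variables are random.
    """
    rng = rng or np.random.default_rng()
    n_vars = len(means)
    samples = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # One point in each of the n equiprobable strata of the CDF.
        u = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
        x = norm.ppf(u, loc=means[j], scale=sigmas[j])
        rng.shuffle(x)  # random permutation across strata
        samples[:, j] = x
    return samples

# e.g. samples = lhs_gaussian(500, means=[L1, L2, Vt1, Vt2], sigmas=[...])
```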

LHS vs. QMC
1. Clusters and voids can be identified in LHS and stratified LHS.
2. Convergence rate and accuracy are lower than QMC's.

Weighted LHS
– Samples are amplified in the predicted failure zone.
– The failures are assigned appropriate weights to keep the estimate unbiased (see the sketch below).
– The critical point at which we start oversampling affects the accuracy of weighted LHS.
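A one-dimensional sketch of the idea, assuming the predicted failure zone sits in the lower tail; the nonuniform CDF partition and weighting scheme here are illustrative and may differ from the project's actual implementation:

```python
import numpy as np
from scipy.stats import norm

def weighted_lhs_1d(n, mean, sigma, tail_power=2.0, rng=None):
    """Weighted LHS along one variable, oversampling the lower tail.

    Strata boundaries b_k = (k/n)**tail_power concentrate strata near
    CDF = 0 (the predicted failure zone).  One sample is drawn per
    stratum; its weight is n times the stratum's probability mass, so
    the weighted average of failure indicators stays unbiased.
    """
    rng = rng or np.random.default_rng()
    b = (np.arange(n + 1) / n) ** tail_power  # nonuniform CDF partition
    u = rng.uniform(b[:-1], b[1:])            # one draw per stratum
    x = norm.ppf(u, loc=mean, scale=sigma)
    w = n * np.diff(b)                        # per-sample weights
    return x, w

# P_fail ~= np.mean(w * fails), where fails[i] = 1 if sample x[i] fails.
```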

Importance Sampling
– The PDF itself is shifted to generate more samples in the predicted failure zone.
– Accuracy depends on the choice of the shift vector and the nature of the solution space.
– The failures are weighted by the likelihood ratio P(x)_old / P(x)_new (see the sketch below).
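A minimal sketch of mean-shift importance sampling with the P(x)_old / P(x)_new weighting from the slide; the shift vector, Gaussian process model, and is_failure predicate are illustrative assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def importance_sample_fail_rate(mean, cov, shift, is_failure, n, rng=None):
    """Estimate P(failure) by sampling from a shifted Gaussian.

    Samples are drawn from N(mean + shift, cov) so more of them land in
    the predicted failure zone; each failing sample is weighted by the
    likelihood ratio p_old(x) / p_new(x) to keep the estimate unbiased.
    """
    rng = rng or np.random.default_rng()
    p_old = multivariate_normal(mean, cov)          # nominal process PDF
    p_new = multivariate_normal(np.asarray(mean) + shift, cov)  # shifted PDF
    x = p_new.rvs(size=n, random_state=rng)
    weights = p_old.pdf(x) / p_new.pdf(x)
    fails = np.array([is_failure(xi) for xi in x])
    return np.mean(weights * fails)

# e.g. shift the variation variables toward the read-failure corner:
# rate = importance_sample_fail_rate(mu, Sigma, shift, read_fails, n=5000)
```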

Results
[Table with columns Technique | Number of Samples | Yield | Spice Runtime | Spice Runtime Degradation | Wrapper Runtime, with rows for Monte Carlo, Quasi-MC, LHS, Weighted LHS, and Importance Sampling. Most numeric entries were lost in transcription; only scattered relative-runtime fragments (e.g. 1.2x, 1.42x, 1.7x) survive.]

Convergence Rates
[Plot comparing the convergence rates of MC, QMC, LHS, WLHS, and IS.]

Conclusions
– Tested 5 different sampling algorithms; WLHS and IS are the most promising.
– We will leverage WLHS to find the optimum design point.
– Gradient descent works for this problem because of the lack of local minima in the solution space.
– Monte Carlo is ~81x slower than WLHS and ~70x slower than IS.