1 SRAM Yield Rate Optimization EE201C Final Project
Spring 2010 Chia-Hao Chang, Chien-An Chen

2 Exhaustive Search: Too Slow!
Four variables: Leff1, Vth1, Leff2, Vth2. Evenly slice each variable and evaluate all possible combinations. Slicing each Vth into 10 steps and each Leff into 3 gives 10 × 3 × 10 × 3 = 900 nominal points. Run a Monte Carlo simulation at each nominal point and compare the yield rates. To reduce the number of samples required, we run a Quasi Monte Carlo simulation with 200 samples per nominal point. Even so, 900 × 200 = 180,000 points need to be simulated. Too slow!

|                | Power(W)  | Area(m²)  | Voltage(mV) | Yield |
| Initial Design | 8.988e-6  | 1.881e-13 |             | 0.534 |
| Optimal Design | 8.8051e-6 | 1.292e-13 |             | 0.998 |
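The brute-force sweep above can be sketched as a nested grid search. This is a toy illustration, not the project's actual flow: the parameter ranges and `simulate_yield` surrogate are assumptions standing in for the real 200-sample Quasi Monte Carlo SPICE run.

```python
# Sketch of the exhaustive grid search over (Leff1, Vth1, Leff2, Vth2).
# simulate_yield() is a hypothetical placeholder for a SPICE + QMC run.
import itertools

def simulate_yield(leff1, vth1, leff2, vth2):
    """Placeholder for a 200-sample Quasi Monte Carlo SPICE evaluation."""
    # Toy surrogate: yield peaks near an (assumed) sweet spot.
    return max(0.0, 1.0 - 1e14 * abs(leff1 - 9.7e-8) - 2.0 * abs(vth1 - 0.44))

def grid(lo, hi, n):
    """n evenly spaced slices of [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

# 3 slices per Leff, 10 per Vth -> 3 * 10 * 3 * 10 = 900 nominal points
leffs = grid(9.0e-8, 1.1e-7, 3)
vths = grid(0.2, 0.5, 10)

best = max(itertools.product(leffs, vths, leffs, vths),
           key=lambda p: simulate_yield(*p))
print(len(leffs) ** 2 * len(vths) ** 2)  # 900 nominal points
```

With 200 samples behind every call, all 900 nominal points cost 180,000 simulations, which is the bottleneck the two strategies below attack.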

3 Improvements: Strategy 1
Instead of running a Monte Carlo simulation at each nominal point, we spend the effort on a more detailed uniform sweep of the SRAM cell. The result is a 4-D matrix of about 40k entries, the same effort as 40 nominal points with 1k Monte Carlo samples each.

4 Strategy 1
Idea: with the 4-D matrix, use nearby data to estimate a yield matrix. Assume the grid step equals one standard deviation of the variation. Then we can approximate the yield rate as an expectation over neighboring grid points. Binning a standard normal distribution at one-sigma steps gives the weights for offsets n-2, n-1, n, n+1, n+2: 0.061, 0.24, 0.38, 0.24, 0.061.
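The weights quoted on the slide (0.38, 0.24, 0.24, 0.061, 0.061) are the standard normal probability masses of landing within half a grid step of each neighbor; a minimal sketch of how they come out of the normal CDF:

```python
# Standard normal probability mass per one-sigma grid bin, via math.erf.
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability mass assigned to offsets -2..+2 (grid step = 1 sigma):
# each bin covers [k - 0.5, k + 0.5] standard deviations.
weights = [phi(k + 0.5) - phi(k - 0.5) for k in range(-2, 3)]
print([round(w, 3) for w in weights])  # [0.061, 0.242, 0.383, 0.242, 0.061]
```

The five bins capture about 98.8% of the distribution; mass beyond ±2.5 sigma is simply ignored by the approximation.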

5 Strategy 1
The yield is the expectation of the success function under the variation distribution: Y(v) = E[success(v)], approximated as the weighted sum of the pass/fail values at neighboring grid points. In 4-D, with offsets -2 to +2 per axis, this means checking 5^4 = 625 neighbor points. Result: a yield value for each nominal point.
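A minimal sketch of that 625-neighbor sum, assuming a boolean 4-D success matrix `S` from the uniform sweep (the matrix here is a toy all-pass example, and NumPy is used in place of the original MATLAB):

```python
# Yield estimate at one nominal point: sum the 5x5x5x5 = 625 neighboring
# pass/fail entries, weighted by the product of per-axis one-sigma
# Gaussian bin weights.
import numpy as np

w = np.array([0.061, 0.242, 0.383, 0.242, 0.061])  # offsets -2..+2

def yield_at(S, idx):
    """Expected yield at nominal index idx of the 4-D success matrix S."""
    y = 0.0
    for o in np.ndindex(5, 5, 5, 5):          # 625 neighbor offsets
        pos = tuple(i + k - 2 for i, k in zip(idx, o))
        if all(0 <= p < n for p, n in zip(pos, S.shape)):
            y += np.prod([w[k] for k in o]) * S[pos]
    return y

S = np.ones((5, 5, 5, 5), dtype=float)        # toy: every sample passes
print(round(yield_at(S, (2, 2, 2, 2)), 3))    # (sum of weights)^4 ~ 0.957
```

Since the weights are separable per axis, the full yield matrix could equally be computed as four 1-D convolutions, which is much cheaper than looping over all 625 offsets at every nominal point.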

6 Strategy 1 Results:
Computational effort: SPICE simulation ~30 min; MATLAB computation (625 neighbor points per nominal point) ~30 min.

|                | Mn1 Leff(m) | Mn1 Vth(V) | Mn2 Leff(m) | Mn2 Vth(V) |
| Initial Design | 1e-07       | 0.2607     | 1e-07       | 0.2607     |
| Optimal Design | 9.722e-08   | 0.4368     | 9.500e-08   | 0.2        |

|                | Power(W) | Area(m²)   | Voltage(mV) | Yield  |
| Initial Design | 8.988e-6 | 1.881e-13  |             | 0.534  |
| Optimal Design | 8.854e-6 | 1.1496e-13 |             | 0.9999 |

7 Strategy 2
Idea: the "success" data points are not sparse; they are located in clusters. If we can efficiently locate these clusters, the search time can be greatly reduced: after locating them, we only need to search the points inside them. Intuitively, the centroid of the biggest cluster should have the highest yield rate.

8 Strategy 2
Since points inside the clusters tend to have high yield rates, more samples are needed there to approximate the yield accurately. However, traditional Monte Carlo simulation is time-consuming and does not take the distribution of the data into consideration. We can put more emphasis on the "boundary" instead of the center. "Importance Sampling" can help us here!
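A hedged one-dimensional sketch of the importance-sampling idea: draw variations from a widened proposal distribution (putting more samples near the failure boundary) and reweight each sample by the ratio of the true density to the proposal density. The pass/fail test and the sigmas here are assumptions, standing in for the real SPICE check and device variation.

```python
# Importance sampling for a yield estimate: sample from a wide proposal
# N(0, sigma_prop) to emphasize the boundary, then reweight toward the
# true variation distribution N(0, sigma_true).
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def is_success(x):
    """Placeholder pass/fail criterion (assumed threshold at |x| < 2)."""
    return abs(x) < 2.0

def importance_yield(n, sigma_true=1.0, sigma_prop=3.0, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma_prop)              # sample from proposal
        weight = normal_pdf(x, 0.0, sigma_true) / normal_pdf(x, 0.0, sigma_prop)
        total += weight * is_success(x)
    return total / n

est = importance_yield(20000)
# True yield here is P(|Z| < 2) for Z ~ N(0, 1), about 0.9545.
print(round(est, 3))
```

The wider proposal spends proportionally more simulations near the boundary where pass/fail actually flips, which is exactly the emphasis the slide argues for.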

9 Strategy 2 Flow
1. Given the ranges of Vth and Leff, evenly segment each variable; 2500 points are simulated in total.
2. Run the k-means algorithm to find the clusters in the 4-dimensional space; we locate the 5 most clustered points.
3. Check ±10% variation around the centroids and apply importance sampling to approximate the yield rate.
4. Extract the 10 points with the highest yield rate during the importance sampling and run a more accurate Quasi Monte Carlo simulation on them.
5. Sort the final results by the decision rules: Yield Rate -> Power -> Area.
Result: 99.98% yield rate at Leff1: 1e-07, Vth1: 0.49, Leff2: 1e-07, Vth2: 0.2.
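Quasi Monte Carlo, used in the final refinement step, replaces random draws with low-discrepancy points that fill the space more evenly. A minimal sketch using a Halton sequence (the choice of bases and the 2-D example are illustrative; the real flow would map such points onto the 4-D Leff/Vth variation space before each SPICE evaluation):

```python
# Halton low-discrepancy sequence: the building block of a simple
# Quasi Monte Carlo sampler.
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# First few 2-D Halton points in the unit square (bases 2 and 3)
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
print(points[0])  # (0.5, 0.3333...)
```

Because successive points deliberately avoid each other, QMC converges faster than plain Monte Carlo for a fixed sample budget, which is why 200 QMC samples per point sufficed earlier in the project.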

10 K-means Algorithm
Given an initial set of k means m1(1), ..., mk(1), which can be randomly assigned:
Assignment step: assign each observation to the cluster with the closest mean.
Update step: recompute each mean as the centroid of the observations in its cluster.
Iterate until the assignment step no longer changes anything!
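The two-step loop above can be sketched compactly in pure Python. The project ran k-means on 4-D SPICE sample points; the 1-D toy data here are only for illustration.

```python
# Minimal k-means: alternate assignment and centroid-update steps until
# the assignment stops changing.
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    means = rng.sample(points, k)                 # random initial means
    assign = None
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest mean
        new_assign = [min(range(k), key=lambda j: abs(p - means[j]))
                      for p in points]
        if new_assign == assign:                  # converged
            break
        assign = new_assign
        # Update step: each mean becomes the centroid of its cluster
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                means[j] = sum(members) / len(members)
    return means, assign

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
means, assign = kmeans(data, 2)
print(sorted(round(m, 2) for m in means))  # two centroids, near 0.15 and 5.03
```

In the 4-D case the absolute difference simply becomes a Euclidean distance over the (Leff1, Vth1, Leff2, Vth2) coordinates.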

11 Strategy 2 Results:
Computational effort: 2500 + 5 × 100 × 50 = 27,500 simulations. SPICE simulation: ~15 min; k-means algorithm: ~10 s.

|                | Mn1 Leff(m) | Mn1 Vth(V) | Mn2 Leff(m) | Mn2 Vth(V) |
| Initial Design | 1e-07       | 0.2607     | 1e-07       | 0.2607     |
| Optimal Design | 1e-07       | 0.49       | 1e-07       | 0.2        |

|                | Power(W) | Area(m²)  | Voltage(mV) | Yield |
| Initial Design | 8.988e-6 | 1.881e-13 |             | 0.534 |
| Optimal Design | 8.667e-6 |           |             | 0.998 |

12 Conclusion:
These two strategies can be chosen based on the nature of the distribution: Strategy 1 is favored for sparser data, while Strategy 2 is favored for clustered data. Exhaustive search is guaranteed to find the global optimum, but it is usually impractical in a real design. The two methods we proposed can both find the optimal points far more efficiently.

