ECE 539 Project Jialin Zhang

Presentation transcript:

The Optimization of a Neural Network Model for X-ray Lithography of Semiconductors
ECE 539 Project, Jialin Zhang

Introduction
X-ray lithography, with its nm-scale wavelengths, offers structural resolution as fine as 0.1 μm together with a wide range of advantages for semiconductor production. Process parameters such as the gap, bias, and absorber thickness largely determine the quality of the lithography. This project addresses the optimization of these parameters for semiconductor manufacturing in the context of x-ray lithography.

Data and Existing Approach
Data source: 1327 training samples and 125 test samples, provided by the Department of Electrical and Computer Engineering and the Center for X-ray Lithography.
Data structure:
- 3 inputs: absorber thickness, gap, bias
- 3 outputs: linewidth, integrated modulation transfer function (IMTF), fidelity
Existing approach: a neural network based on radial-basis functions (RBF), modeling the multivariate mapping
(linewidth, IMTF, fidelity) = F(absorber thickness, gap, bias)
- 125 training samples, regularly distributed in the input space
- Error performance ("point to point", evaluated on the test samples): mean error 0.2%~0.4%, maximum error 4%
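As a rough illustration of this kind of RBF mapping (a sketch, not the project's actual code), the snippet below fits a Gaussian-kernel RBF model with a center at every training point; the random stand-in data and the values of the width sigma and ridge term lam are placeholder assumptions.

```python
import numpy as np

def rbf_fit(X, Y, sigma, lam=1e-8):
    """Fit weights W so that Phi(X) @ W approximates Y.

    X: (n, 3) inputs (absorber thickness, gap, bias)
    Y: (n, 3) outputs (linewidth, IMTF, fidelity)
    sigma: Gaussian kernel width; lam: ridge regularizer.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian design matrix
    return np.linalg.solve(Phi + lam * np.eye(len(X)), Y)

def rbf_predict(X_train, W, X_new, sigma):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ W

# Placeholder data: 125 random samples standing in for the real grid.
rng = np.random.default_rng(0)
X = rng.uniform(-0.2, 0.4, size=(125, 3))   # inputs already normalized
Y = rng.uniform(size=(125, 3))              # stand-in for (linewidth, IMTF, fidelity)
W = rbf_fit(X, Y, sigma=0.1)
print(rbf_predict(X, W, X[:5], sigma=0.1))
```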

Goal
- Decrease the number of training samples needed to obtain a mapping from the inputs to the outputs.
- Improve the error performance; the ideal maximum error is below 0.1%.

Decrease the Number of Training Samples
Pre-process the training data. After recombining the data set, the 1452 samples cover a regular grid:
- Absorber thickness: 200, 220, 240, 260, 280, 300, 320, 340, 360, 380, 400 (11 values)
- Gap: 10000, 12000, 14000, 16000, 18000, 20000, 22000, 24000, 26000, 28000, 30000 (11 values)
- Bias: -18, -14, -10, -6, -2, 2, 6, 10, 14, 18, 22, 26 (12 values)
Inputs normalized to the range -0.2~0.4. Training samples: 64; test samples: 125.
Approach: radial-basis function network, with the parameters (λ, σ) to be chosen; a sketch of one possible subsampling follows below.
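The slides do not say exactly how the 64 training samples are drawn from the grid; a plausible sketch, assuming 4 regularly spaced values per axis (4 x 4 x 4 = 64) and a linear normalization onto -0.2~0.4:

```python
import numpy as np

# Grid axes from the slide (raw units).
thickness = np.arange(200, 401, 20)          # 11 values
gap       = np.arange(10000, 30001, 2000)    # 11 values
bias      = np.arange(-18, 27, 4)            # 12 values

def normalize(v, lo=-0.2, hi=0.4):
    """Map a raw axis linearly onto the slide's input range -0.2~0.4."""
    return lo + (v - v.min()) * (hi - lo) / (v.max() - v.min())

def pick4(axis):
    """Assumption: keep 4 regularly spaced values along the axis."""
    idx = np.linspace(0, len(axis) - 1, 4).round().astype(int)
    return axis[idx]

sub = [pick4(normalize(a)) for a in (thickness, gap, bias)]
X_train = np.array(np.meshgrid(*sub, indexing="ij")).reshape(3, -1).T
print(X_train.shape)  # (64, 3)
```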

Decrease the Number of Training Samples
Result: a mapping from the inputs to the outputs based on radial-basis functions is obtained by training on 64 samples and choosing the optimal parameters for the radial-basis function.
- The "point to point" mean errors: 0.7%~0.9%
- The "point to point" maximum error: 5.6%
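The slides do not define the "point to point" metric precisely; a minimal sketch, assuming it means the per-sample relative error |pred - true| / |true|, summarized by its mean and maximum over the 125 test samples:

```python
import numpy as np

def point_to_point_errors(Y_true, Y_pred):
    """Relative error at every test point and output channel (an assumed reading
    of 'point to point'), reduced to mean and maximum over the test set."""
    rel = np.abs(Y_pred - Y_true) / np.abs(Y_true)
    return rel.mean(), rel.max()

# Hypothetical usage with stand-in arrays of shape (125, 3):
Y_true = np.ones((125, 3))
Y_pred = Y_true * (1 + 0.005 * np.random.default_rng(1).standard_normal((125, 3)))
mean_err, max_err = point_to_point_errors(Y_true, Y_pred)
print(f"mean {mean_err:.2%}, max {max_err:.2%}")
```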

Improve the Error Performance
Approaches:
- Increase the number of training samples: the smallest "point to point" maximum error achieved so far is 0.4%.
- Use a different type of neural network (a multi-layer perceptron): a better error performance is expected; see the sketch below.
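As a sketch of the MLP alternative (the slides do not name a toolbox; scikit-learn's MLPRegressor, the stand-in data, and all network settings below are assumptions, not the project's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in data with the slide's shapes: 64 training and 125 test samples.
rng = np.random.default_rng(2)
X_train, Y_train = rng.uniform(-0.2, 0.4, (64, 3)), rng.uniform(size=(64, 3))
X_test,  Y_test  = rng.uniform(-0.2, 0.4, (125, 3)), rng.uniform(size=(125, 3))

# One hidden tanh layer; the layer size and iteration budget are guesses.
mlp = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(X_train, Y_train)

# Point-to-point relative errors on the test set, as defined earlier.
rel = np.abs(mlp.predict(X_test) - Y_test) / np.abs(Y_test)
print(f"mean {rel.mean():.2%}, max {rel.max():.2%}")
```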

Current Result
A mapping from the inputs to the outputs based on radial-basis functions was obtained by training on 64 samples (compared with the original 125) and choosing the optimal RBF parameters. The "point to point" mean errors are 0.7%~0.9% (compared with 0.2%~0.4%) and the maximum error is 5.6% (compared with 4%). The error performance of the mapping is improved by increasing the number of training samples; the smallest "point to point" maximum error achieved is 0.4% (the ideal error performance is below 0.1%).