Process – Structure Linkage for Cellulose Nanocrystals (CNC)

Presentation transcript:

Process – Structure Linkage for Cellulose Nanocrystals (CNC). Presentation 3 – Sezen Yucel

Dataset – Process Parameters: Acid concentration & Process time
C = 58, 62, 64 (%)
T = 45, 60, 75, 105, 120 (min)
Samples: C = 58% with T = 60, 120 min; C = 62% with T = 105 min; C = 64% with T = 45, 75, 105 min

Image Analysis: rods are approximated as ellipses; the major axis corresponds to particle length and the minor axis to particle width. Histograms are built for length and width.
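A minimal MATLAB sketch of this step, assuming a grayscale micrograph and a simple global threshold (the file name, threshold, and cleanup size below are illustrative, not from the slides):

```matlab
% Sketch: approximate each rod by an ellipse and collect axis lengths.
img = imread('cnc_micrograph.tif');        % hypothetical input image (assumed grayscale)
bw  = imbinarize(img);                     % segment particles from background
bw  = bwareaopen(bw, 20);                  % remove tiny noise regions

stats   = regionprops(bw, 'MajorAxisLength', 'MinorAxisLength');
lengths = [stats.MajorAxisLength];         % major axis ~ particle length
widths  = [stats.MinorAxisLength];         % minor axis ~ particle width

histogram(lengths); xlabel('Length (pixels)');
figure; histogram(widths); xlabel('Width (pixels)');
```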

Size Distributions (Length) – probability density distributions.

Size Distributions (Width) – probability density distributions.
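One way to obtain such probability density curves is a kernel density estimate over the measured axis lengths; the slides do not specify the estimator, so this is only a sketch that reuses the `lengths` and `widths` vectors from the image-analysis step:

```matlab
% Sketch: kernel density estimates of the length and width distributions.
[fL, xL] = ksdensity(lengths);
[fW, xW] = ksdensity(widths);

plot(xL, fL);  xlabel('Length'); ylabel('Probability density');
figure;
plot(xW, fW);  xlabel('Width');  ylabel('Probability density');
```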

Two Approaches for the PCA Input
Example data set = {3, 5, 5, 6, 6, 6, 6, 7, 9, 9}; histogram of the data over bins 1-10.
1st approach (probability only): each bin carries its probability, with the same bin range for every sample and 0 for bins with no data: p = 0.1 (value 3), 0.2 (value 5), 0.4 (value 6), 0.1 (value 7), 0.2 (value 9).
2nd approach (probability & data): each bin probability is weighted by its bin value and normalized by the sum of the weighted values (6.2, the mean): weighted = 0.3, 1.0, 2.4, 0.7, 1.8; weighted/sum = 0.048, 0.16, 0.39, 0.11, 0.29.
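The two representations can be reproduced for the example data set as follows (a sketch; the bin edges are chosen so the bins are centered on the integers 1-10):

```matlab
% Sketch: the two histogram representations used as PCA inputs.
data  = [3 5 5 6 6 6 6 7 9 9];
edges = 0.5:1:10.5;                               % bins centered on 1..10
p     = histcounts(data, edges, 'Normalization', 'probability');

% 1st approach: bin probabilities only (0 where there is no data)
x1 = p;                                           % e.g. 0.4 at value 6

% 2nd approach: probabilities weighted by the bin value, then normalized
centers  = 1:10;
weighted = p .* centers;                          % e.g. 0.4*6 = 2.4
x2 = weighted / sum(weighted);                    % sum(weighted) = 6.2 (the mean)
```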

PCA results (1st approach – probability only). Score plot grouped by process condition: C = 64%, T = 45, 75, 105 min; C = 62%, T = 105 min; C = 58%, T = 60, 120 min.
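The scores and basis vectors behind these plots can be obtained with MATLAB's pca; this is a sketch assuming X is a samples-by-bins matrix stacking the six samples' histogram representations (x1- or x2-style vectors) as rows, since the slides do not show the call itself:

```matlab
% Sketch: PCA of the stacked histogram representations.
[coeff, score, latent] = pca(X);        % coeff: basis vectors, score: PC scores
explained = 100 * latent / sum(latent); % variance captured by each PC (%)

scatter(score(:,1), score(:,2), 'filled');
xlabel('PC1'); ylabel('PC2');
```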

PCA results (1st approach – probability only). Basis vectors for length (components 1-6):
PC1: -0.20, -0.24, -0.08, 0.07, 0.25, 0.20
PC2: -0.04, -0.05, 0.12, 0.03, -0.01
PC3: 0.06, 0.004, 0.001, -0.007

PCA results (2nd approach – probability & data). Score plot grouped by process condition: C = 64%, T = 45, 75, 105 min; C = 62%, T = 105 min; C = 58%, T = 60, 120 min.

PCA results (2nd approach – probability & data). Basis vectors for length (components 1-6):
PC1: -0.19, -0.22, -0.08, 0.05, 0.26, 0.19
PC2: -0.049, -0.06, 0.135, 0.028, -0.039, -0.016
PC3: -0.062, -0.059, -0.012, 0.025, 0.011, 0.004

Desired model – process–structure linkage.
Process (acid hydrolysis): Concentration = 58, 62, 64 (%); Time = 45, 60, 75, 105, 120 (min).
Structure (particle morphology): size distributions L_d and W_d.
Inputs: process parameters. Outputs: size distributions (L_d, W_d).

First model – polynomial regression (1st order)
Polynomial equations are fitted for the PC scores: PC1 = f1(C, T), PC2 = f2(C, T), PC3 = f3(C, T), with f1, f2, f3 obtained from MultiPolyRegress(X, Y, 1), where X holds the process parameters (C, T) of the six samples and Y holds their PC scores:
X (C, T): (58, 60), (58, 120), (62, 105), (64, 45), (64, 75), (64, 105)
Y (PC1, PC2, PC3):
-0.20, -0.037, 0.057
-0.237, -0.054, -0.05
0.083, 0.115, -0.006
0.067, 0.026, 0.004
0.255, 0.043, 0.001
0.198, -0.007
Fit quality: f1 → R² = 0.85; f2 → R² = 0.14; f3 → R² = 0.57.
2nd-order polynomial fitting → overfitting.
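The slides use MultiPolyRegress (a MATLAB File Exchange utility) for the first-order fit; an equivalent sketch with fitlm is shown below, with the (C, T) pairs taken from the dataset slide and the PC scores assumed to come from the PCA sketch above:

```matlab
% Sketch: 1st-order polynomial (linear) fit of each PC score on (C, T).
X = [58 60; 58 120; 62 105; 64 45; 64 75; 64 105];   % acid conc. (%), time (min)
Y = score(:, 1:3);                                    % PC1, PC2, PC3 scores

mdl1 = fitlm(X, Y(:,1));      % PC1 = f1(C, T)
mdl2 = fitlm(X, Y(:,2));      % PC2 = f2(C, T)
mdl3 = fitlm(X, Y(:,3));      % PC3 = f3(C, T)

r2 = [mdl1.Rsquared.Ordinary, mdl2.Rsquared.Ordinary, mdl3.Rsquared.Ordinary];
```

With only six samples, a second-order polynomial in (C, T) has six coefficients (1, C, T, C², CT, T²) and can interpolate the data exactly, which is consistent with the overfitting noted on the slide.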

Future work
A new model suited to a small number of data points, such as Bayesian Linear Regression (BLR) or Gaussian Process Regression (GPR).
Hyperparameter optimization.
Validation of the new model on an image with different process parameters (C = 62%, T = 75 min) but fewer particles (~50).
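A minimal sketch of the GPR option using fitrgp is shown below; the kernel choice, standardization, and the use of the held-out C = 62%, T = 75 min condition as the prediction point are assumptions about how the plan might be implemented, not details given in the slides:

```matlab
% Sketch: Gaussian Process Regression of PC1 on (C, T), then prediction
% with uncertainty at the held-out validation condition.
gpr1 = fitrgp(X, Y(:,1), 'KernelFunction', 'squaredexponential', ...
              'Standardize', true);

[pc1_pred, pc1_sd] = predict(gpr1, [62 75]);   % mean and std. dev. at C = 62%, T = 75 min
```

By default fitrgp estimates the kernel hyperparameters by maximizing the marginal likelihood, which covers the hyperparameter-optimization item above.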

Thank you & Questions