Using Machine Learning for Epistemic Uncertainty Quantification in Combustion and Turbulence Modeling

Epistemic UQ
- Use machine learning to learn the error between the low-fidelity model and the high-fidelity model.
- We want to use it both as a correction and as an estimate of the remaining error.
- We are working on two aspects:
  - Approximating the true source term (in the progress-variable equation) given a RANS+FPVA solution.
  - Approximating the true Reynolds stress anisotropy given an eddy-viscosity-based RANS solution.
- This is preliminary work: we show a way it could be done, not how it should be done.

Basic Idea
- We can compare low-fidelity results to high-fidelity results and learn an error model.
- The model answers: "What is the true value, given the low-fidelity result?"
- If the error model is stochastic (and correct), draws from that model give us estimates of uncertainty.
- To make model fitting tractable, we decouple the problem into:
  - a model of local uncertainty based on flow features, and
  - a model of the coupling of uncertainty on a macro scale.
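As a toy illustration of the stochastic-error-model idea above (not the implementation from these slides; the binning scheme and all data below are made up), one can tabulate the mean and spread of the high-fidelity value conditioned on the low-fidelity result, and then use random draws as uncertainty estimates:

```python
import numpy as np

# Synthetic paired low-/high-fidelity data (stand-ins for real solutions).
rng = np.random.default_rng(0)
y_lo = rng.uniform(0.0, 1.0, 5000)
y_hi = y_lo + 0.2 * np.sin(6.0 * y_lo) + 0.05 * rng.normal(size=5000)

# Stochastic error model: per-bin mean and std of the true value.
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(y_lo, bins) - 1
mean = np.array([y_hi[idx == i].mean() for i in range(20)])
std = np.array([y_hi[idx == i].std() for i in range(20)])

# Given a new low-fidelity result, the bin mean is the correction and
# draws from the bin distribution estimate the uncertainty.
i = np.digitize(0.5, bins) - 1
draws = rng.normal(mean[i], std[i], size=1000)
```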

Local Model

Model Generation Outline
1. Get a training set consisting of low-fidelity solutions alongside the high-fidelity results.
2. Choose a set of high-fidelity features to be learned (y).
3. Choose a set of low-fidelity features that are good representations of the error (x).
4. Learn a model for the true output given the input flow features.

Example
- In the RANS/DNS case, we are interested in the errors of the RANS turbulence model.
- The model's input is the RANS location in the barycentric map, the marker, the wall distance, and p/ε (5-dimensional).
- The model's output is the DNS location in the barycentric map (2-dimensional).
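In code, assembling this training set might look like the sketch below. All array names are hypothetical, and random numbers stand in for paired RANS/DNS fields sampled at matching mesh points:

```python
import numpy as np

n = 1000  # number of matched mesh points (synthetic stand-ins below)
rng = np.random.default_rng(0)
xb_rans, yb_rans = rng.random(n), rng.random(n)  # RANS barycentric coords
marker = rng.random(n)                           # marker feature
wall_distance = rng.random(n)                    # (normalized) wall distance
p_over_eps = rng.random(n)                       # p/eps feature
xb_dns, yb_dns = rng.random(n), rng.random(n)    # DNS barycentric coords

# 5-dimensional input x and 2-dimensional output y, as described above.
x = np.column_stack([xb_rans, yb_rans, marker, wall_distance, p_over_eps])
y = np.column_stack([xb_dns, yb_dns])
```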

Local Model

Sinker
- For a test location, each point in the training set is given a weight set by a kernel function.
- Then, using the true result at the training points and the weights, compute a probability distribution over the true result.
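A minimal sketch of such a kernel-weighted estimator follows. The Gaussian kernel, the fixed bandwidth, and the function name are assumptions on our part, not the actual Sinker implementation:

```python
import numpy as np

def kernel_estimate(x_test, X_train, Y_train, bandwidth=0.1):
    # Gaussian kernel weights from squared distances in feature space.
    d2 = np.sum((X_train - x_test) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    w /= w.sum()

    # Weighted mean and covariance of the high-fidelity targets define
    # a (Gaussian) distribution over the true result at this location.
    mean = w @ Y_train
    resid = Y_train - mean
    cov = resid.T @ (w[:, None] * resid)
    return mean, cov

# Example call, reusing the arrays x and y from the previous sketch:
# mean, cov = kernel_estimate(x[0], x, y)
```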

Example Problem
[Figure slides: estimator results with 30, 100, 300, 1000, and 10000 training samples]

Combustion Modeling
- A DNS finite-rate chemistry dataset is the high-fidelity model; a RANS flamelet model is the low-fidelity model.
- Input flow features are the flamelet-table variables (mixture fraction, mixture fraction variance, progress variable).
- The output flow variable is the source term in the progress-variable equation.
- Use a GP as the spatial fit.
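A minimal sketch of such a GP fit, using scikit-learn with synthetic stand-ins for the flamelet-table inputs and the DNS source term (the kernel choice and the data here are assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.random((200, 3))             # columns: Z, Zvar, C (synthetic)
y = np.sin(4.0 * X[:, 0]) * X[:, 2]  # placeholder "source term"

# Anisotropic RBF over the three table variables, plus a noise term.
kernel = RBF(length_scale=[0.1, 0.1, 0.1]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictive mean and standard deviation at new table points.
mean, std = gp.predict(X[:5], return_std=True)
```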

‘Truth’ Model
- Dataset used: snapshots of temporal mixing-layer data from Amirreza.

[Figure: trajectory, random draws, and the FPVA table]

Initial condition

Results of ML scheme

Application to EUQ of RANS

Input Data
- Add the marker, the normalized wall distance, and p/ε as additional flow features, and use Sinker.

Model Output

Not perfect, but way better

Generating Error Bars
- Each point also has a variance associated with it (represented as an ellipse, for now).
- We can use these uncertainties to generate error bars on macroscopic quantities:
  - Draw two Gaussian random variables, and tweak the barycentric coordinate by that many standard deviations in x and y.
  - If the point goes off the triangle, project it back onto the triangle.
- This gives us a family of new turbulence models.
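A sketch of this resampling step is below. The triangle vertex placement follows a common barycentric-map convention, and the clip-and-renormalize projection is a simple choice on our part; the slides do not specify either:

```python
import numpy as np

# Vertices of the barycentric map triangle (1C, 2C, 3C corners); this
# placement is an assumption, not taken from the slides.
V = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])

def to_barycentric(p):
    # Solve p = w0*V0 + w1*V1 + w2*V2 with w0 + w1 + w2 = 1.
    T = np.column_stack([V[0] - V[2], V[1] - V[2]])
    w01 = np.linalg.solve(T, p - V[2])
    return np.array([w01[0], w01[1], 1.0 - w01.sum()])

def perturb_and_project(p, std_x, std_y, rng):
    # Tweak the point by Gaussian draws scaled by its std. deviations.
    q = p + rng.normal(size=2) * np.array([std_x, std_y])
    # Clip negative barycentric weights and renormalize: a simple way to
    # pull the point back inside the triangle (not an exact Euclidean
    # projection).
    w = np.clip(to_barycentric(q), 0.0, None)
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(1)
new_point = perturb_and_project(np.array([0.4, 0.2]), 0.05, 0.05, rng)
```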

Random Draws

Conclusions
- Promising early results.
- Basic idea: learn the 'mean and variance' of the error distribution of modeling terms in the space of flow features.
- There is a lot of work to be done:
  - feature selection,
  - better uncertainty modeling (non-Gaussian),
  - kernel selection,
  - developing a progressive, logical test suite to evaluate the quality of a model.