Approximate Models and Noise
State of the Art

Sources of uncertainty
– Uncertainty in inputs
– Uncertainty in external factors
– Uncertainty in model output
– Uncertainty in constraints

Sources of noise in models
– Experimental noise
– Lack of coverage of models
– Inaccurate/incomplete validation
– Choice/availability of descriptors

How do we deal with this?
– Probabilistic modelling
– Robustness techniques: sensitivity to noise
– Normal distributions
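The "sensitivity to noise" idea above can be sketched with a Monte Carlo perturbation: feed the model inputs jittered by Gaussian noise and measure the spread of the outputs. The model, input values, and noise level below are hypothetical placeholders, not anything from the source.

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical deterministic model: a simple linear response.
    return 2.0 * x1 - 0.5 * x2

def sensitivity_to_noise(model, inputs, sigma, n_samples=10000, seed=0):
    """Monte Carlo sensitivity probe: perturb each input with Gaussian
    noise of standard deviation sigma and report the mean and spread
    of the resulting model outputs."""
    rng = random.Random(seed)
    outputs = [
        model(*(x + rng.gauss(0.0, sigma) for x in inputs))
        for _ in range(n_samples)
    ]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, spread = sensitivity_to_noise(model, (1.0, 3.0), sigma=0.1)
# For this linear model the propagated noise is analytically
# sigma * sqrt(2.0**2 + 0.5**2) ≈ 0.206, so the estimated spread
# should land close to that.
```

The same probe works for any black-box model; for nonlinear models the spread becomes input-dependent, which is exactly the robustness information one wants.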
Problems

Don’t fully understand form of probability distributions
– Prior distributions: no data!
Descriptors typically have low information content
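The low-information-content point can be made concrete with Shannon entropy: a binary descriptor that is almost always 0 (or almost always 1) carries far less than one bit. The 2% occupancy figure below is an illustrative assumption, not a number from the source.

```python
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a binary descriptor set with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A structural-key bit set in only 2% of compounds carries ~0.14 bits,
# versus the full 1 bit of a perfectly balanced descriptor.
rare_bit = entropy_bits(0.02)
balanced_bit = entropy_bits(0.5)
```

Summing such per-descriptor entropies (ignoring correlations) gives a rough upper bound on how much a fingerprint can tell a model.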
Promising Approaches

Multiple models based on different approaches – consensus
– But need multiple sets of training data
– Global vs local models

Non-dimensional transforms (Buckingham Pi theorem) to reduce noise in input data
– E.g. pKi vs Ki
– But are there other approaches?

Distribution fitting to data (when/if available)

Better models: accuracy and transferability
– E.g. quantum mechanical descriptors
– Capture the underlying physical model

Estimate of inaccuracy of current models

More data
– Directly comparable data – where from?
– Use computationally expensive calculations as input to empirical methods – but accuracy is still limited
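The pKi vs Ki point above can be sketched as follows: Ki measurements typically carry multiplicative experimental error, and the negative log transform turns that into a constant additive shift on the pKi scale, the same for potent and weak compounds. The specific Ki values below are illustrative only.

```python
import math

def pKi(Ki_molar):
    """Negative log transform of an inhibition constant Ki given in mol/L."""
    return -math.log10(Ki_molar)

# A two-fold error in Ki (a typical multiplicative experimental error)
# becomes a fixed additive shift of log10(2) ≈ 0.30 pKi units,
# independent of the compound's potency:
shift_potent = pKi(5e-9) - pKi(1e-8)   # around a nanomolar compound
shift_weak = pKi(5e-6) - pKi(1e-5)     # around a micromolar compound
```

This homogeneity of the error scale is one reason pKi (rather than raw Ki) is the usual regression target, and it is the kind of noise reduction a non-dimensional transform is meant to buy.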