Evolutionary Computational Intelligence Lecture 9: Noisy Fitness Ferrante Neri University of Jyväskylä

Real world optimization problems Many real-world optimization problems are characterized by uncertainties. This means that the same solution takes different fitness values depending on the time at which it is evaluated.

Classification of uncertainties Uncertainties in optimization can be categorized into three classes: noisy fitness function, approximated fitness function, and robustness.

Noisy Fitness Noise in fitness evaluations may come from many different sources, such as sensory measurement errors or randomized simulations. Example: optimization based on an experimental setup, e.g. a motor drive.

Approximated Fitness Function When the fitness function is very expensive to evaluate, or an analytical fitness function is not available, approximated fitness functions are often used instead. These approximated models implicitly introduce a noise, namely the difference between the approximated value and the real fitness value, which is unknown.

Perturbation in the environment Often, when a solution is implemented, the design variables or the environmental parameters are subject to perturbations or changes. Example, satellite problem: due to the movement of the Earth, the fitness value of the same solution changes over time.

General formulation of uncertain problem A classical formulation of a noisy/uncertain fitness is given by $\tilde{f}(x) = f(x) + z$, with $z \sim N(0, \sigma^2)$: each evaluation returns the true fitness $f(x)$ corrupted by an additive Gaussian disturbance $z$. We are not really interested in the fact that the noise is Gaussian, but it is fundamental that the noise is zero mean!
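
As an illustrative sketch (not part of the original slides), such a noisy fitness can be simulated as follows; the sphere function and the noise level sigma are arbitrary choices made for this example:

```python
import random

def true_fitness(x):
    # Hypothetical underlying objective: the sphere function.
    return sum(xi ** 2 for xi in x)

def noisy_fitness(x, sigma=0.1):
    # One noisy evaluation: f~(x) = f(x) + z, with z ~ N(0, sigma^2),
    # i.e. zero-mean Gaussian noise added to the true fitness.
    return true_fitness(x) + random.gauss(0.0, sigma)
```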

Zero mean: Explicit Averaging If the noise is zero mean, the average over a certain number of samples gives a ”good” estimate of the actual fitness value. Thus, the most classical approach is to compute the fitness of each solution a certain number of times (samples) and then calculate the average.
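
A minimal sketch of explicit averaging, reusing the hypothetical noisy_fitness above (the default of 30 samples is an arbitrary illustration, not a recommendation from the lecture):

```python
def averaged_fitness(x, n_samples=30, sigma=0.1):
    # Explicit averaging: re-sample the noisy fitness n_samples times.
    # For zero-mean noise, the sample mean is an unbiased estimate of the
    # true fitness, with standard error sigma / sqrt(n_samples).
    samples = [noisy_fitness(x, sigma) for _ in range(n_samples)]
    return sum(samples) / n_samples
```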

Failing of deterministic algorithms The noise introduces some ”false optima” into the fitness landscape, and obviously a method which employs implicit or explicit information about the gradient is likely to fail. The estimation of the neighborhood cannot be done properly because the search is misled by the noise.

Better success of EAs Evolutionary algorithms, due to their inner structure, do not perform comparisons among neighbors and have thus been shown to perform better in noisy environments. Some recent papers in fact state that even rather standard EAs (e.g. a self-adaptive ES) can lead to good results in noisy environments.

Not universal success of EAs This success is restricted to specific cases and strongly depends on the problem under examination. EAs, like all optimization algorithms, contain some comparison amongst solutions in order to determine which one is better and which one is worse. In EAs this role is played by parent and survivor selection.

Population based: Implicit Averaging EAs are population-based algorithms, thus another kind of averaging can be carried out. Many scientists have observed that a large population size is efficient in defeating the noise, since the algorithm gets the chance to evaluate several neighboring solutions and thus to detect promising areas, as sketched below.
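
A hypothetical sketch of implicit averaging in action, continuing the example above: no solution is re-sampled, but selection over a large set of single noisy evaluations still tends to keep the genuinely promising areas (population size, mutation strength, and offspring count are arbitrary choices of this example):

```python
def one_generation(population, offspring_per_parent=2, sigma=0.1):
    # Each parent produces several mutated offspring; none is re-sampled.
    offspring = []
    for parent in population:
        for _ in range(offspring_per_parent):
            child = [xi + random.gauss(0.0, 0.05) for xi in parent]
            offspring.append(child)
    # Truncation selection on single noisy evaluations: with a large
    # population, evaluation errors tend to cancel out across the many
    # solutions sampled in the same region (implicit averaging).
    offspring.sort(key=lambda x: noisy_fitness(x, sigma))
    return offspring[:len(population)]
```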

Another kind of averaging Explicit and implicit averaging both belong to the class of averaging over time. Branke proposed averaging over the space: calculate the fitness by averaging over the neighborhood of the point to be evaluated. Implicit assumption: the noise in the neighborhood has the same characteristics as the noise at the point to be evaluated, and the fitness landscape is locally smooth. This is not always true! E.g. systems with unstable regions.
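
A sketch of averaging over the space under the stated assumptions (the neighborhood radius and sample count are arbitrary illustrative values):

```python
def spatially_averaged_fitness(x, n_neighbors=10, radius=0.01, sigma=0.1):
    # Averaging over the space: evaluate random points in a small
    # neighborhood of x and average them, instead of re-sampling x itself.
    # Valid only if the landscape is locally smooth and the noise in the
    # neighborhood behaves like the noise at x.
    total = 0.0
    for _ in range(n_neighbors):
        neighbor = [xi + random.uniform(-radius, radius) for xi in x]
        total += noisy_fitness(neighbor, sigma)
    return total / n_neighbors
```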

High computational cost It is clear that an averaging operation (above all, averaging over time) requires extra fitness evaluations and thus increases the computational overhead. In some cases, obtaining reliable results requires a considerable effort.

Adaptive Averaging example: Explicit vs. Implicit (figure)

Prudent-daring survivor selection If not all the individuals are re-sampled, two cooperative selection schemes can be applied – Prudent: selects individuals which are reliable (re-sampled) and fairly promising – Daring: selects individuals which are unreliable (fitness calculated only once) but look very promising. The result is reliable solutions at a reduced computational cost; a sketch is given below.
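
A hypothetical sketch of the prudent/daring idea (an illustration of the principle, not the exact APDEA operator; the dict layout and the counts n_prudent and n_daring are assumptions of this example):

```python
def prudent_daring_selection(population, n_prudent, n_daring):
    # Each individual is a dict with keys:
    #   'fitness'   - current (possibly averaged) fitness, lower is better
    #   'resampled' - True if the fitness was estimated by re-sampling
    reliable = [p for p in population if p['resampled']]
    unreliable = [p for p in population if not p['resampled']]

    # Prudent scheme: the most promising among the reliable individuals.
    prudent = sorted(reliable, key=lambda p: p['fitness'])[:n_prudent]
    # Daring scheme: the most promising among the unreliable ones,
    # accepted on the basis of a single (cheap) evaluation.
    daring = sorted(unreliable, key=lambda p: p['fitness'])[:n_daring]
    return prudent + daring
```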

Adaptive Prudent Daring Evolutionary Algorithm

APDEA Results

Tolerance Interval 1/2 Assume that the noise is Gaussian and that its standard deviation has the same constant value over the whole decision space. Under these assumptions, a tolerance interval for the Gaussian distribution can be constructed around the measured fitness: an interval which contains a prescribed proportion of the fitness samples with a given confidence level.

Tolerance Interval 2/2 If solution A is better than solution B by a quantity equal to half of the width of the tolerance interval, it is surely better (with a certain confidence level). If the distance in fitness is smaller, then re-sampling is required.
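
A minimal sketch of this comparison rule for a minimization problem (the function name and the way the interval width is passed in are assumptions of this example):

```python
def compare_with_tolerance(fitness_a, fitness_b, tolerance_width):
    # If the fitness gap exceeds half the tolerance interval width,
    # the better solution wins at the chosen confidence level;
    # otherwise the comparison is unreliable and re-sampling is needed.
    half_width = tolerance_width / 2.0
    if fitness_a <= fitness_b - half_width:
        return 'A'
    if fitness_b <= fitness_a - half_width:
        return 'B'
    return 'resample'
```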

Adaptive Tolerant Evolutionary Algorithm

Comparison APDEA vs. ATEA

Comparative analysis APDEA is more general, since it only requires that the noise is zero mean. APDEA performs better in terms of convergence velocity. ATEA requires a preliminary analysis of the noise. APDEA requires a more extensive parameter setting.