
Approximation and Visualization of Interactive Decision Maps Short course of lectures Alexander V. Lotov Dorodnicyn Computing Center of Russian Academy of Sciences and Lomonosov Moscow State University

Lecture 7. Non-linear Feasible Goals Method and its applications

Plan of the lecture
1. Approximation for visualization of the feasible objective set
2. Application of the FGNL for conceptual design of future aircraft
3. Identification of economic systems by visualization of Y=f(X)
4. Approximation for visualization of the EPH
5. Hybrid methods for approximating the EPH in the non-convex case
6. Statistical tests
7. A simple hybrid method: combination of the two-phase, three-phase and plastering methods
8. Study of cooling equipment in continuous casting of steel
9. Parallel computing

The main problems that arise in the non-linear case are: (1) non-convexity of the set Y = f(X); and (2) the need for time-consuming algorithms for global scalar optimization.

Approximation for visualization of the feasible objective set

Approximation of the feasible objective set f(X) may be needed in at least two cases: (1) the decision maker does not want to maximize or minimize the performance indicators; (2) an identification problem is considered. We approximate the set f(X) by simulating random decisions, filtering their outputs, and covering f(X) by a system of cubes. Then, on-line visualization of the feasible objective set is possible. Thus, we apply simulation-based multi-criteria optimization. Such an approach can also be applied in the case of models given by computational modules.
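
As an illustration, the sampling-filtering-covering scheme might be sketched as follows (all names and parameter values here are assumptions for the sketch, not the actual FGNL implementation):

```python
# A minimal sketch: sample random decisions, compute their outputs,
# and cover the cloud of output points by a grid of cubes.
import numpy as np

def approximate_feasible_set(f, lower, upper, n_samples=100_000,
                             cube_size=0.1, seed=0):
    """Cover f(X) by a system of cubes of edge `cube_size`."""
    rng = np.random.default_rng(seed)
    # 1. Simulate random decisions uniformly in the box [lower, upper].
    X = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    # 2. Compute their outputs in the objective space.
    Y = np.apply_along_axis(f, 1, X)
    # 3. Snap each output to the cube containing it; the set of distinct
    #    occupied cubes is the approximation of f(X) used for display.
    cubes = {tuple(idx) for idx in np.floor(Y / cube_size).astype(int)}
    return cubes  # each tuple is the integer index of one occupied cube
```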

Example model: the well-known Peaks function

$$f(x, y) = 3(1 - x)^2 e^{-x^2 - (y+1)^2} - 10\left(\tfrac{x}{5} - x^3 - y^5\right) e^{-x^2 - y^2} - \tfrac{1}{3}\, e^{-(x+1)^2 - y^2}$$

Let us consider an example. Imagine that we want to locate a monitoring station at the point where the maximal pollution occurs. Let $f_i(x, y)$ be the pollution level forecast by the i-th expert. Different values of the criteria characterize the difference in the experts' knowledge of the pollution distribution. Let us consider five criterion functions, which are subject to maximization.
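
A hedged illustration of the five-expert setting: the Peaks function below is the standard one, while the per-expert shifts are hypothetical values chosen only to make the forecasts differ.

```python
import numpy as np

def peaks(x, y):
    """The classical Peaks function used as the example model."""
    return (3 * (1 - x) ** 2 * np.exp(-x ** 2 - (y + 1) ** 2)
            - 10 * (x / 5 - x ** 3 - y ** 5) * np.exp(-x ** 2 - y ** 2)
            - np.exp(-(x + 1) ** 2 - y ** 2) / 3)

# Hypothetical shifts modelling the difference in the experts' knowledge.
SHIFTS = [(0.0, 0.0), (0.2, -0.1), (-0.15, 0.25), (0.3, 0.3), (-0.25, -0.2)]

def expert_forecasts(x, y):
    """f_i(x, y): the pollution level forecast by the i-th expert, i = 1..5."""
    return np.array([peaks(x - dx, y - dy) for dx, dy in SHIFTS])
```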

Let us consider several three-criterion graphs

Software demonstration

Application of the FGNL for conceptual design of future aircraft

Four construction parameters were considered:
1. Engine thrust per unit weight (P0);
2. Overall drag coefficient of the aircraft (Cx0);
3. Induced drag coefficient of the aircraft (A);
4. Lift coefficient of the aircraft (CY).

The aircraft was described by flight characteristics:
1. Turn rate at a given altitude (W_max5000);
2. Time of climb to a given altitude (TimeH);
3. Time of acceleration to a given speed (TimeV).

Exploration of the decision space: the squeezed set of feasible values of engine thrust (P0), drag coefficient (Cx0) and lift coefficient (CY).

Identification of economic systems by visualization of Y=f(X)

Approximation for visualization of the EPH

The EPH is approximated by the set T*, which is the union of the non-negative cones with apexes at a finite number of points y of the set Y = f(X). The set of such points y is called the approximation base and is denoted by T. Multiple slices of such an approximation can be computed and displayed fairly fast.
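
A minimal sketch of a membership test for such an approximation, assuming all criteria are minimized so that the cones point in the non-negative direction (function names are illustrative):

```python
import numpy as np

def in_eph(z, T, eps=0.0):
    """True if z lies in (T*)_eps: some base point dominates z + eps."""
    return bool(np.any(np.all(T <= z + eps, axis=1)))

def nondominated(T):
    """Drop base points dominated by another base point (minimization)."""
    keep = []
    for i, y in enumerate(T):
        others = np.delete(T, i, axis=0)
        dominated = np.any(np.all(others <= y, axis=1)
                           & np.any(others < y, axis=1))
        if not dominated:
            keep.append(y)
    return np.array(keep)
```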

Visualization example for 8 criteria

Goal identification

Demonstration of Pareto Front Viewer

Hybrid methods for approximating the EPH in the non-convex case

The models under study

The computation of the objective functions (the model) can be given by a computational module (a black box) provided by the user and unknown to the researcher. Thus, a very broad scope of non-linear models can be studied. Our methods provide inputs, which depend on the method; the module computes the outputs for these inputs (by simulation, by solving a boundary-value problem, or in some other way). Due to this, we can even use simulation-based local optimization of random decisions or genetic optimization.
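
The only contract the methods rely on can be sketched as follows (the class name and fields are illustrative, not part of the original software):

```python
# The black-box contract: the user's module is any callable mapping a
# decision vector to the criterion vector; how the outputs are computed
# is invisible to the approximation methods.
from typing import Callable
import numpy as np

class BlackBoxModel:
    def __init__(self, module: Callable[[np.ndarray], np.ndarray],
                 lower: np.ndarray, upper: np.ndarray):
        self.module = module                   # user-provided computational module
        self.lower, self.upper = lower, upper  # box constraints on X

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self.module(x)                  # the methods only ever call this
```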

The scheme of the methods

We apply hybrid methods that include: (1) global random search; (2) adaptive local optimization; (3) importance sampling; (4) a genetic algorithm. Statistical tests of the approximation quality play the leading role in the approximation process.

Statistical tests

The quality of an approximation T* is studied using the concept of completeness $h_T = \Pr\{f(x) \in T^* : x \in X\}$. We estimate $\Pr\{h_T > h^*\} \ge \chi$ for a given reliability $\chi$ by using a random sample $H_N = \{x_1, \dots, x_N\}$. Let $h_T^{(N)} = n/N$, where $n = |\{x_i : f(x_i) \in T^*\}|$. Then $h_T^{(N)}$ is an unbiased estimate of $h_T$. Moreover, $h_T^{(N)} - \sqrt{-\ln(1 - \chi)/(2N)}$ gives the lower bound of the confidence interval.
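
A direct transcription of this test (the lower bound follows from the Hoeffding inequality; `in_approx` stands for a membership test such as the `in_eph` sketch above):

```python
import numpy as np

def completeness_test(f, in_approx, sample, chi=0.95):
    """Estimate h_T = Pr{f(x) in T*} and its lower confidence bound."""
    hits = sum(in_approx(f(x)) for x in sample)  # n = |{x_i : f(x_i) in T*}|
    h = hits / len(sample)                       # unbiased estimate of h_T
    bound = h - np.sqrt(-np.log(1 - chi) / (2 * len(sample)))
    return h, max(bound, 0.0)
```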

Completeness function

Let $(T^*)_\varepsilon$ be the $\varepsilon$-neighborhood of T*. Then $h_T(\varepsilon) = \Pr\{f(x) \in (T^*)_\varepsilon : x \in X\}$ is the completeness function. An important characteristic of the function $h_T^{(N)}(\varepsilon)$ is the value $\varepsilon_{\max} = \delta(f(H_N), T^*)$.
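
Under the same assumptions, the sample completeness function and $\varepsilon_{\max}$ can be computed directly from the distances of the sampled objective points to T* (here $\delta(f(H_N), T^*)$ is taken to be the largest of these distances):

```python
import numpy as np

def completeness_function(dists_to_Tstar):
    """Return h_T^(N)(eps) as a function, together with eps_max."""
    d = np.sort(np.asarray(dists_to_Tstar))
    eps_max = d[-1]                       # eps_max = delta(f(H_N), T*)
    def h(eps):                           # fraction of points within eps of T*
        return np.searchsorted(d, eps, side='right') / len(d)
    return h, eps_max
```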

Two optimization-based completeness functions for different iterations (1 and 7)

The optimization-based completeness

In problems with a high-dimensional decision variable, it can happen that the sample completeness equals 1 while the approximation is bad. In this case, the optimization-based completeness function is used: $h_T(\varepsilon) = \Pr\{f(\Phi(x)) \in (T^*)_\varepsilon : x \in X\}$, where $\Phi: X \to X$ is the "improvement" mapping, usually based on local optimization of a scalar function of the criteria. The mapping moves the point f(x) in the direction of the Pareto frontier. As usual, a random sample $H_N = \{x_1, \dots, x_N\}$ is generated and $h_T^{(N)}(\varepsilon) = n(\varepsilon)/N$ is computed, where $n(\varepsilon) = |\{x_i : f(\Phi(x_i)) \in (T^*)_\varepsilon\}|$.
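
A minimal sketch of one possible improvement mapping $\Phi$, assuming minimization of all criteria and a smooth weighted-sum scalarization solved with scipy's L-BFGS-B; the scalarization actually used by the method may differ.

```python
import numpy as np
from scipy.optimize import minimize

def make_improvement_mapping(f, lower, upper, weights):
    """Build Phi: X -> X by local optimization of a scalar function of criteria."""
    bounds = list(zip(lower, upper))
    def phi(x0):
        # Minimize a weighted sum of the criteria, starting from x0.
        res = minimize(lambda x: weights @ f(x), x0,
                       method='L-BFGS-B', bounds=bounds)
        return res.x  # the "improved" decision, closer to the Pareto frontier
    return phi
```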

One-phase method

An iteration. A current approximation base T must be given.
1. Testing the base T. Generate a random sample $H_N \subset X$ and compute $h_T^{(N)}(\varepsilon)$. If $h_T^{(N)}(\varepsilon)$ (or, in automatic testing, values such as $h_T^{(N)}(0)$ and $\varepsilon_{\max} = \delta(f(H_N), T^*)$) satisfies the requirements, stop.
2. Forming a new base. Form a list that includes the points of T and the sample points that do not belong to T*, and exclude the dominated points. This gives a new approximation base. Start the next iteration.

Two-phase method

An iteration. A current approximation base T must be given.
1. Testing the base T. Generate a random sample $H_N \subset X$ and compute $\Phi(H_N)$. Construct $h_T^{(N)}(\varepsilon)$. If the function $h_T^{(N)}(\varepsilon)$ (or, in the case of automatic testing, values such as $h_T^{(N)}(0)$ and $\varepsilon_{\max} = \delta(f(\Phi(H_N)), T^*)$) satisfies the requirements, stop.
2. Forming a new base. As usual. Start the next iteration.
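
For illustration, one iteration of the two-phase method might look as follows: a hedged sketch reusing the hypothetical `in_eph` and `nondominated` helpers from above, with `stop_ok` standing for the stopping requirements on $h_T^{(N)}(0)$ and $\varepsilon_{\max}$.

```python
import numpy as np

def two_phase_iteration(f, phi, sample, T, in_approx, stop_ok):
    """One iteration: improve the sample, test the base, extend the base."""
    improved = np.array([f(phi(x)) for x in sample])  # images of Phi(H_N)
    hits = np.array([in_approx(y) for y in improved])
    if stop_ok(hits.mean(), improved):                # testing the base T
        return T, True
    new_base = np.vstack([T, improved[~hits]])        # forming a new base
    return nondominated(new_base), False              # exclude dominated points
```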

Three-phase method

An iteration. A current base T and a neighborhood B of the decisions whose images constitute T must be given.
1. Testing the base T. Generate two random samples $H_1 \subset X$ and $H_2 \subset B$, and compute $\Phi(H_1)$ and $\Phi(H_2)$. Construct $h_T^{(N)}(\varepsilon)$. If $h_T^{(N)}(\varepsilon)$ satisfies the requirements, stop.
2. Forming a new base. As usual.
3. Forming a new neighborhood B using the statistics of extreme values. Start the next iteration.

Forming a new neighborhood B

The neighborhood B is constituted of balls in the decision space centered at the current Pareto-optimal decisions; they all have the same radius $\rho_k$. The value of the radius is computed using the statistics of extreme values. Namely, we consider the distances from the new Pareto-optimal decisions to the old Pareto-optimal decisions and order them by growth, ..., d(N-l), ..., d(N-1), d(N), where d(N) is the largest distance. Then $\rho_k = d(N) + \theta$, where $\theta = r(l, \chi)\,(d(N) - d(N-l))$ and $\chi$ is the reliability, $0 < \chi < 1$. Here

$$r(l, \chi) = \left\{\left[1 - (1 - \chi)^{1/l}\right]^{-1/a} - 1\right\}^{-1}, \qquad a = \frac{\ln l}{\ln\left[(d(N) - d(N-l)) \,/\, (d(N) - d(N-1))\right]},$$

with $l \ll N$ (we took l = 10).
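
A direct transcription of these formulas (names are illustrative; `dists` holds the distances from the new Pareto-optimal decisions to the old ones and must contain more than l values):

```python
import numpy as np

def neighborhood_radius(dists, l=10, chi=0.95):
    """Radius rho_k of the neighborhood balls via extreme-value statistics."""
    d = np.sort(np.asarray(dists))
    dN, dN_1, dN_l = d[-1], d[-2], d[-1 - l]   # d(N), d(N-1), d(N-l)
    a = np.log(l) / np.log((dN - dN_l) / (dN - dN_1))
    r = 1.0 / ((1.0 - (1.0 - chi) ** (1.0 / l)) ** (-1.0 / a) - 1.0)
    return dN + r * (dN - dN_l)                # rho_k = d(N) + theta
```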

Plastering method

The "plastering" method, which has some properties of genetic algorithms (such as cross-over), is used at the very end of the approximation process.
An iteration. A current approximation base T and numbers q, $\delta_1$, $\delta_2$ must be given.
1. Testing the base T. Let H be the set of inputs whose images are the points of the approximation base T. Select N random pairs $(h_i, h_j)$ from H that satisfy $\delta_1 \le d(f(h_i), f(h_j)) \le \delta_2$. Select q random points on the segment connecting $h_i$ and $h_j$ and denote them by $H_l$, $l = 1, \dots, N$. Compute the objective points for the points $x \in H_l$, $l = 1, \dots, N$. Construct $h_T^{(N)}(\varepsilon)$. If $h_T^{(N)}(\varepsilon)$ satisfies the requirements, stop.
2. Forming a new base.
3. Filtering, if needed. Start the next iteration.
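
A hedged sketch of the pair-selection and segment-sampling step; it assumes pairs within the distance band $[\delta_1, \delta_2]$ actually exist in H.

```python
import numpy as np

def plastering_candidates(H, f, n_pairs, q, delta1, delta2, rng):
    """Sample q points on segments between base decisions whose images
    are delta1..delta2 apart (the cross-over-like recombination)."""
    out = []
    while len(out) < n_pairs * q:
        hi, hj = H[rng.choice(len(H), 2, replace=False)]
        if delta1 <= np.linalg.norm(f(hi) - f(hj)) <= delta2:
            t = rng.uniform(0.0, 1.0, size=q)  # q random points on the segment
            out.extend(hi + ti * (hj - hi) for ti in t)
    return np.array(out)
```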

A simple hybrid method: combination of the two-phase, three-phase and plastering methods

Iterations of the two-phase method are carried out until $h_T^{(N)}(0)$ and $\varepsilon_{\max} = \delta(f(\Phi(H_N)), T^*)$ are close to zero. Iterations of the three-phase method are then carried out until $h_T^{(N)}(0)$ and $\varepsilon_{\max}$ for it are close to zero. Finally, iterations of the plastering (genetic-type) method are carried out until $h_T^{(N)}(0)$ and $\varepsilon_{\max}$ for it satisfy the requirements.
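
Schematically, the combination can be driven as below; each stage function is assumed to return the updated base together with a flag indicating that its own stopping test has passed:

```python
def run_hybrid(two_phase, three_phase, plastering, T):
    """Run the stages in order, iterating each until its stopping test passes."""
    for stage in (two_phase, three_phase, plastering):
        done = False
        while not done:
            T, done = stage(T)  # one iteration of the current stage
    return T
```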

Study of cooling equipment in continuous casting of steel. The research was carried out jointly with Dr. Kaisa Miettinen at the University of Jyväskylä, Finland.

Cooling in the continuous casting process

Criteria

J1 is the original single optimization criterion: the deviation from the desired surface temperature of the steel strand must be minimized. J2 to J5 are penalty criteria introduced to describe the violation of constraints imposed on:
J2 – the surface temperature;
J3 – the gradient of the surface temperature along the strand;
J4 – the temperature after point z3; and
J5 – the temperature at point z5.
J2 to J5 were considered in this study.

Description of the module

The FEM/FDM module was developed in Finland by researchers from the University of Jyväskylä. Properties of the model: 325 control variables that describe the intensity of water application. Properties of local simulation-based optimization: one local optimization required a large number of computations of the gradient and a large number of additional computations of the value of f(x).

The next pictures demonstrate the approximation.

Parallel computing

Parallel computing (processor clusters and grid computing)

The method has a form suitable for parallel computing. Thus, it can easily be implemented on parallel platforms: it is sufficient to separate data generation from data analysis (research in the framework of a contract with the Russian Federal Agency for Science).

An important property of our hybrid methods

Since our methods are based on random sampling, a partial loss of results is not dangerous: it affects the reliability of the results but does not destroy the process. Due to this, application in a GRID network is possible.

A two-platform implementation is needed

Example of a scenario template for the hybrid method

Scenario editor - 1

Scenario editor - 2