
1 Approximation and Visualization of Interactive Decision Maps Short course of lectures Alexander V. Lotov Dorodnicyn Computing Center of Russian Academy of Sciences and Lomonosov Moscow State University

2 Lecture 6. Mathematical basis of the IDM technique
Plan of the lecture
1. Computational basis of the IDM technique
2. Approximating the EPH for convex models
3. The Estimation Refinement method
4. Theory of polyhedral approximation of convex bodies
5. Properties of the ER method
6. Constructing the convex hull of a polyhedron and a point
7. Application of the ER method for approximating the EPH
8. Constructing the bi-objective slices

3 Computational basis of the IDM technique The IDM technique is based on visualization of the decision maps by overlapping bi-criterion slices of the EPH.

4 Recall that the EPH is given by Y + R^m_+ or, in terms of the decision variables, by f(X) + R^m_+ = { y ∈ R^m : y = f(x) + c, x ∈ X, c ≥ 0 }.

5 Illustration of the EPH (figure showing the set f(X) and its Pareto frontier P(Y)).

6 Approximating the EPH
Due to the stability of the EPH, a stable approximation process can be developed. Approximation of the EPH for 3 to 8 criteria is the main computational problem we have solved. The EPH is approximated: in the convex case, by a polytope plus the non-negative cone; in the non-convex case, by a system of cones. Here the convex case is considered. The Estimation Refinement method, developed by V.Bushenkov and A.Lotov at the beginning of the 1980s, is now the main computational tool for approximating multidimensional convex sets (including the EPH).

7 The Estimation Refinement method
The Estimation Refinement method is an adaptive method for polyhedral approximation of multidimensional convex bodies. We consider it for the case of approximating the feasible criterion set Y. The method is based on computing the support function of the set Y, i.e. g_Y(u) = max { ⟨u, y⟩ : y ∈ Y }, for specially selected directions u that belong to the unit sphere S = { u ∈ R^m : ||u|| = 1 }. In the case of an MOO problem, that is, y = f(x) where x ∈ X, computing the value of the support function for a direction u from the unit sphere consists in solving a convex optimization problem g_Y(u) = max { ⟨u, f(x)⟩ : x ∈ X }. Various optimization techniques exist that can solve this problem for particular sets X and mappings f, especially in the case of a convex set f(X). In the case of linear models, various linear programming methods can be used to compute the values of the support function for directions u ∈ S.
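For illustration, here is a minimal Python sketch of such a computation for a hypothetical linear model (the decision constraints Ax ≤ b and the criterion map y = Cx below are made-up toy data, not part of the lecture); it uses scipy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

def support_value(u, C, A, b):
    """Support function g_Y(u) = max{<u, y> : y = C x, A x <= b} for a linear model.

    linprog minimizes, so we maximize <u, C x> by minimizing its negation.
    Returns (g_Y(u), y*) with y* a criterion point attaining the maximum."""
    res = linprog(c=-(C.T @ u), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    if not res.success:
        raise ValueError("support function is unbounded or the model is infeasible")
    y_star = C @ res.x
    return float(u @ y_star), y_star

# Hypothetical toy model: two decisions in a box, two criteria y = C x.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])          # 0 <= x <= 1
C = np.array([[1.0, 2.0], [3.0, 1.0]])
print(support_value(np.array([1.0, 0.0]), C, A, b))
```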

8 Non-adaptive approximation of convex bodies
Polyhedral approximation of convex compact bodies is often based on computing the values of the support function for a given grid, i.e. a finite system of directions {u_1, u_2, ..., u_L}. It is clear that such a grid neglects the actual shape of the body being approximated. For this reason, the approach based on a priori grids is not the best one. G.Sonnevend (1983) proved that methods using grids of this kind are not optimal: they require too many evaluations of the support function, and the polytopes constructed by them have too many vertices and faces.

9 Adaptive approximation of convex bodies
The Estimation Refinement (ER) method is one of the adaptive methods for polyhedral approximation of compact convex bodies; such methods can be asymptotically optimal. The optimality of the adaptive methods is related to their ability to adapt the directions {u_1, u_2, ..., u_L} to the shape of the approximated body: they compute the support function for directions identified in the course of the approximation process. To the best of our knowledge, the first adaptive method (NISE) was proposed by J.L.Cohon (1978). It was used for approximating a two-dimensional convex body (actually, its non-dominated frontier).

10 The ER method is an iterative method
The ER method is an iterative adaptive method. A compact convex body C is approximated adaptively by a sequence P_0, P_1, ..., P_k, ... of internal polyhedra with an increasing number of vertices, all of which belong to the boundary of C. The vertices of the polyhedron P_k include all vertices of P_{k-1} plus one new vertex. Simultaneously, external approximating polyhedra are constructed.

11 Initial approximation
The initial approximation is given by a simplex described both as a list of m+1 vertices and as the solution set of a linear inequality system. Some problems may arise if the approximated body is “too flat”. We do not discuss such problems here.

12 Description of the ER method
Let U(P) denote the finite set of unit outer normals to the facets of the approximating internal polyhedron P. The set U(P) is directly available when the polyhedron P is given as the solution set of a linear inequality system. Let us describe the (k+1)-th iteration of the method. Prior to the iteration, the internal polyhedron P^i_k must have been constructed both as the solution set of the linear inequality system D_k y ≤ d_k and as a list of its vertices; the external polyhedron P^e_k must be given as the solution set of a linear inequality system.
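As a small bookkeeping sketch (the names and types are illustrative assumptions, not the authors' data structures), the objects carried between iterations could be collected as follows:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ERState:
    """Data carried from one ER iteration to the next (illustrative sketch).

    The internal polyhedron is kept both as the solution set of D @ y <= d
    and as a list of its vertices; the external polyhedron only as the
    solution set of E @ y <= e."""
    D: np.ndarray          # inequality matrix of the internal polyhedron P^i_k
    d: np.ndarray          # its right-hand sides
    vertices: np.ndarray   # vertices of P^i_k, one per row
    E: np.ndarray          # inequality matrix of the external polyhedron P^e_k
    e: np.ndarray          # its right-hand sides
```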

13 Step 1
A direction u* ∈ U(P^i_k) is found that solves max { g_C(u) − g_{P^i_k}(u) : u ∈ U(P^i_k) }. If g_C(u*) − g_{P^i_k}(u*) ≤ ε, then stop. Else a point y* ∈ C is selected such that ⟨u*, y*⟩ = g_C(u*).

14 Step 2
Let P^i_{k+1} = conv { y*, P^i_k }. The set U(P^i_{k+1}) is obtained by constructing conv { y*, P^i_k } in the form of the solution set of a linear inequality system. The external approximation is given by the polyhedron P^e_{k+1} = P^e_k ∩ { y ∈ R^m : ⟨u, y⟩ ≤ g_C(u), u ∈ U(P^i_k) }.
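A minimal Python sketch of one iteration of this scheme, assuming a user-supplied support-function oracle for the body C and using scipy's Qhull wrapper to obtain the facet normals of the internal polytope (an illustration only, not the stable convolution-based implementation discussed later in the lecture):

```python
import numpy as np
from scipy.spatial import ConvexHull

def er_iteration(vertices, support_fn, eps=1e-3):
    """One iteration of the ER scheme (sketch).

    vertices   -- (N, m) array of vertices of the internal polytope P^i_k
    support_fn -- callable u -> (g_C(u), y*) giving the support value of the
                  approximated body C and a boundary point attaining it
    Returns the vertex array of P^i_{k+1}, or None once the precision eps is reached."""
    hull = ConvexHull(vertices)
    best_gap, best_point = -np.inf, None
    for eq in hull.equations:          # each row is [u, -g_P(u)] with <u, y> <= g_P(u)
        u, g_P = eq[:-1], -eq[-1]      # unit outer facet normal and support value of P^i_k
        g_C, y_star = support_fn(u)    # Step 1: evaluate the support function of C
        if g_C - g_P > best_gap:
            best_gap, best_point = g_C - g_P, y_star
    if best_gap <= eps:                # stopping test
        return None
    return np.vstack([vertices, best_point])   # Step 2: P^i_{k+1} = conv{y*, P^i_k}
```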

15 (figure)

16 Approximation precision
Note that both the internal and the external polyhedra are constructed. Owing to this, the approximation precision is controlled automatically, by evaluating the value δ(P^i_k, P^e_k), as well as graphically, since P^i_k ⊂ C ⊂ P^e_k. Here δ(C_1, C_2) denotes the Hausdorff distance between the sets C_1 and C_2, i.e. the value δ(C_1, C_2) = max { sup { d(x, C_2) : x ∈ C_1 }, sup { d(x, C_1) : x ∈ C_2 } }. It was proven that δ(C, P^i_k) → 0 as k → ∞ for any convex compact body C.
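For reference, the Hausdorff distance between two finite point sets (following the definition above) can be computed with a few lines of Python; this is only a sketch for point samples, not for the bodies themselves:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(C1, C2):
    """Hausdorff distance between two finite point sets given as rows of C1, C2."""
    d = cdist(C1, C2)                       # pairwise Euclidean distances
    return max(d.min(axis=1).max(),         # sup over C1 of the distance to C2
               d.min(axis=0).max())         # sup over C2 of the distance to C1

# Tiny example with two triangles in the plane.
print(hausdorff(np.array([[0., 0.], [1., 0.], [0., 1.]]),
                np.array([[0., 0.], [2., 0.], [0., 1.]])))
```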

17 Theory of polyhedral approximation

18 Polyhedra of best approximation
To evaluate the quality of iterative methods, a ‘reference’ sequence of polyhedra must be considered that gives the best approximation of the convex compact body C. As was proved by P.M. Gruber (1983), among the polyhedra with a given number of vertices N there always exists a polyhedron P^N at minimal distance from the body C. Such a polyhedron is called a polyhedron of best approximation (PBA). It is known that δ(C, P^N) → 0 as N → ∞. For m > 2, the polyhedra of best approximation cannot be constructed effectively.

19 Known estimates
If the body C has a sufficiently smooth boundary, there exist positive constants k_C and K_C such that k_C / N^{2/(m-1)} ≤ δ(C, P^N) ≤ K_C / N^{2/(m-1)} (Bronshtein and Ivanov, 1975; Schneider and Wieacker, 1981; Gruber and Kenderov, 1982). Thus, the distance of the PBA from the approximated body C decreases with the order of convergence 2/(m-1). Examples:
for m=2 one obtains 1/N^2,
for m=3 one obtains 1/N,
for m=5 one obtains 1/N^{1/2},
for m=7 one obtains 1/N^{1/3}.

20 The PBA as an ideal sequence
The PBA cannot be found in practice, but their sequence can be used as the ‘reference’ sequence of approximating polyhedra in the general case. Note that the PBA do not form an iterative sequence at all, since the vertices of P^N are not related to the vertices of P^{N-1}. Thus, one cannot even dream of an iterative procedure that constructs the sequence of PBA. However, it is important to remember that the PBA-based ‘reference’ sequence of polyhedra provides an ideal that is not attainable in reality.

21 Hausdorff class of methods
Thus, any sequence of polyhedra generated by an approximation method can be compared with the sequence of PBA. Note that the polytopes generated by an iterative method cannot approximate the body C better than the PBA. G.Kamenev (1992, 1993) introduced the notion of the Hausdorff class of methods for iterative polyhedral approximation of compact convex bodies. A method is called a Hausdorff method with a constant γ > 0 for a body C if it generates a sequence of polyhedra {P_k}, k = 0, 1, ..., for which δ(P_k, P_{k+1}) ≥ γ δ(P_k, C), k = 0, 1, ...

22 Asymptotic optimality of the Hausdorff methods
Hausdorff methods construct polyhedra that are close to the sequence of PBA (Kamenev, 1993): for convex compact bodies with a sufficiently smooth boundary, the distance δ(P_k, C) has the order of convergence 2/(m-1). This statement means that the Hausdorff methods are asymptotically optimal with respect to the order of the number of vertices. Since the number of vertices of the polyhedron P_k is related to the number of iterations, the Hausdorff methods are also asymptotically optimal with respect to the order of the number of iterations.

23 Asymptotic efficiency of an optimal iterative method
Since the Hausdorff methods are optimal with respect to the order of the number of vertices, it is interesting to know the ratio of the distances δ(P^N, C) and δ(P_k, C). Let us consider a sequence of polyhedra F = {P_k}, k = 0, 1, .... The value h(F) = lim inf_{k→∞} δ(P^{N_k}, C) / δ(P_k, C), where N_k is the number of vertices of P_k, is called the asymptotic efficiency of the method used for generating the sequence F. Evidently, h(F) = 1 is achieved by the sequence of PBA. For a non-optimal sequence, h(F) = 0; for an optimal sequence, 0 < h(F) ≤ 1. It was shown that for a sequence produced by a Hausdorff method for convex compact bodies with a sufficiently smooth boundary, the asymptotic efficiency is bounded from below by a positive constant.

24 Properties of the ER method

25 Main theoretical results concerning the ER method
Kamenev (1994) proved that, for any compact convex body with a sufficiently smooth boundary, the ER method is a Hausdorff method with some constant γ and that asymptotically (as N_k → ∞) the value of γ is close to 1. Thus, for convex compact bodies with a sufficiently smooth boundary, the ER method provides the same order of asymptotic convergence as the PBA, i.e. δ(C, P_k) ~ Const / N_k^{2/(m-1)}. Moreover, a positive lower bound for its asymptotic efficiency holds.

26 Asymptotic optimality with respect to the number of support function evaluations
Recently (Kamenev and Efremov, 2006) it was proven that the ER method is asymptotically optimal with respect to the order of the number of support function evaluations.

27 Mutually Converging Polyhedra (MCP)
Note that the external polyhedron plays a fairly passive role in the ER method: it only helps to evaluate the distance δ(C, P_k), but plays no role in the iterations of the method. In contrast, external polytopes play an active role in other iterative methods for polyhedral approximation of multidimensional compact convex bodies, for example, in the method of Mutually Converging Polyhedra (MCP).

28 The scheme of the MCP method
Prior to the (k+1)-th iteration of the method, two polyhedra must have been constructed: the internal polyhedron P_k and the external polyhedron Q_k, both in the form of the solution set of a linear inequality system. Then the iteration consists of two steps:
Step 1. Find the direction u* ∈ U(P_k) that solves max { g_{Q_k}(u) − g_{P_k}(u) : u ∈ U(P_k) }. Compute a point y* ∈ C such that ⟨u*, y*⟩ = g_C(u*).
Step 2. (a) Find U(P_{k+1}) for P_{k+1} = conv { y*, P_k } by constructing a linear inequality system whose solution set coincides with P_{k+1} = conv { y*, P_k }; (b) let Q_{k+1} = Q_k ∩ { y ∈ R^m : ⟨u*, y⟩ ≤ g_C(u*) }.
The vertices of the initial polyhedron P_0 are supposed to be located on the faces of the initial polyhedron Q_0. For this reason, the resultant polyhedra also possess this property.

29 Features of the MCP method
As one can see, the MCP method is based on the incremental scheme. However, due to the active role of the external polyhedron Q_k, the MCP method requires only one evaluation of the support function of the body C per iteration. In this sense, it differs from the ER method, which requires a fairly large number of such evaluations. The direction u*, for which the support function of the body C is computed, is selected on the basis of the polyhedron Q_k. Due to this, the direction can be found faster than in the case of the ER method. However, it can result in an inefficient selection of the new vertex. Experiments confirm this concern.

30 Modified MCP method
To avoid the problems of the MCP method, a new method named the Modified MCP (MMCP) method was proposed. The modification looks fairly simple: one has to introduce a threshold β with values between zero and one and, before Step 2 of the method is started, to check the inequality ⟨u*, y*⟩ − g_{P_k}(u*) > β ( g_{Q_k}(u*) − g_{P_k}(u*) ). The condition means that the potential vertex y* ∈ C is sufficiently distant from the polyhedron P_k. If the condition is satisfied, the new vertex y* is included into the polyhedron P_k and the related inequality ⟨u*, y⟩ ≤ g_C(u*) is included into the description of the polyhedron Q_k. In the opposite case, the vertex y* is not included into the polyhedron P_k, but the related inequality ⟨u*, y⟩ ≤ g_C(u*) is still included into the description of the polyhedron Q_k. Then the process returns to Step 1. The iteration is completed only when a direction u* satisfying the condition is found.

31 Comparison of the ER and MCP methods
Note that for β = 0 the modified MCP method coincides with the MCP method, while for β = 1 its sequence of polyhedra coincides with the polyhedral sequence constructed by the ER method. However, the ER method does not require constructing the sequence of polyhedra Q_k or searching for the best direction u* over the polyhedra Q_k. By the way, the latter problem is not as simple as it looks at first glance. Though the polyhedra Q_k belong to a space of small dimension m, they are described by linear inequality systems that usually have only a small number of zeros among their coefficients. For this reason, solving the optimization problems at Step 1 of the modified MCP method may require computational time comparable to that of Step 1 of the ER method. The above small modification of the MCP method transforms it into a Hausdorff method.

32 Constructing the convex hull of a polyhedron and a point (beneath-beyond method)

33 Suppose that one needs to construct the faces of the convex hull of the polytope ABCDEF and the point G (Figure a). It is clear that the convex hull will include all faces of the original polytope invisible from point G (that is, ABCD, BCE, CDE, DEF and ADF). However, it will not include any of the faces visible from point G (that is, ABF and BEF, shown shaded in the Figure). Instead, the convex hull acquires new faces (Figure b). These new faces are the faces of the minimal cone with apex at G that contains the polytope ABCDEF. Each such face lies in a plane passing through an edge of the polytope located on the boundary between the visible and invisible parts of the polytope's surface. It is clear that this boundary consists of intersections of pairs of adjacent faces of the polytope, one visible and the other invisible. For example, the boundary in Figure 4.2.1 consists of the edges AB, BE, EF, and FA. Say, the edge BE is the intersection of the visible face BEF and the invisible face BCE; the new face BEG passes through the edge BE. This scheme can easily be generalized to a space of arbitrary dimension m if we recall that the boundary between the parts of the polytope that are visible and invisible from a point in R^m consists of (m-2)-dimensional faces.
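For intuition, the point-after-point construction that the beneath-beyond scheme formalizes can be reproduced with scipy's incremental Qhull wrapper (which uses Qhull's own incremental algorithm, not the convolution-based variant described on the following slides):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Start from a simplex and attach further points one after another.
points = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
hull = ConvexHull(points, incremental=True)

for new_point in ([1., 1., 1.], [2., 0., 0.]):
    hull.add_points(np.array([new_point]))   # conv{new point, current polytope}
    # hull.equations lists one inequality <n, y> <= -offset per facet,
    # i.e. the H-representation of the current convex hull.
    print(len(hull.vertices), "vertices,", len(hull.equations), "facets")

hull.close()                                 # leave Qhull's incremental mode
```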

34 Stable variant of the beneath-beyond method
Particular methods that implement the beneath-beyond scheme differ in the way they solve the three following problems: how to determine whether a facet is visible from a point; how to determine whether two facets are adjacent or not; and how to transform the representation of the polyhedron into a representation of the convex hull. A stable form of the beneath-beyond method was developed by O.Chernykh in 1986; it is based on the convolution of linear inequality systems. The convolution of such systems was proposed by Fourier in 1826 and further developed in Russia in the 1960s.

35 The idea of the method due to Chernykh
Though the method due to Chernykh is applied to a sequence of vertices arriving one after another, let us describe it by considering the construction of the convex hull of a collection of s points {v_1, v_2, ..., v_s} ⊂ R^m. The convex hull must be constructed in the form of the solution set of a linear inequality system. By definition, a point y ∈ R^m belongs to the convex hull of the above points if there exist values λ_1, λ_2, ..., λ_s such that y = λ_1 v_1 + λ_2 v_2 + ... + λ_s v_s, λ_1 + λ_2 + ... + λ_s = 1, λ_i ≥ 0, i = 1, ..., s. Consider the space R^{s+m} of the variables λ_1, λ_2, ..., λ_s and y. The above system describes a polyhedral set in this space, and the convex hull of the points {v_1, v_2, ..., v_s} is the (orthogonal) projection of this set onto the space R^m of the variables y. Therefore, in order to construct the desired convex hull, one can construct the projection of this polyhedral set from the space R^{s+m} of the variables λ_1, λ_2, ..., λ_s and y onto the space R^m of the variables y.

36 Projections of polyhedral sets
DEFINITION. Let a set M ⊂ R^p × R^q be specified. The set M_w = { w ∈ R^q : there exists v ∈ R^p such that (v, w) ∈ M } is known as the (orthogonal) projection of the set M onto R^q. To construct the desired projection of the set M, one can use the methods proposed for the convolution of systems of linear inequalities by J.B.Fourier (1826). The convolution methods help to construct the projection M_w of a convex polyhedral set M ⊂ R^p × R^q in the form M_w = { w ∈ R^q : Dw ≤ d }.

37 Convolution of linear inequality systems
Let the convex polyhedral set M be specified as M = { (v, w) ∈ R^p × R^q : Av + Bw ≤ c }, where A, B are given matrices and c is a given vector. One needs to find a matrix D and a vector d describing its projection M_w. Fourier proposed the convolution method, which eliminates the vector v from the finite system of linear inequalities in a way that results in constructing the projection. The method starts with the elimination of one coordinate of the vector v, i.e. with constructing the projection of the set M onto R^{p-1} × R^q. Then the process continues until all components of the vector v are eliminated. Eliminating one component of the vector v is carried out by summing pairs of inequalities. The Fourier method resembles to some degree the method proposed by Gauss for solving systems of linear equations; however, the Fourier method is more sophisticated.

38 The Fourier method for p = 1
In the case p = 1, the system may be recast as a_i v + ⟨b_i, w⟩ ≤ c_i, i = 1, 2, ..., N, where the a_i are numbers and the b_i are vectors. We break up all inequalities into three groups, P+, P- and P0, according to the sign of the coefficient of the variable v. The system describing the projection includes all inequalities that have zero coefficient at the variable v (i.e. the inequalities that belong to P0) and all possible combinations of pairs of inequalities with opposite signs of the coefficients, i.e. all inequalities a_j ⟨b_i, w⟩ − a_i ⟨b_j, w⟩ ≤ a_j c_i − a_i c_j, where i ∈ P-, j ∈ P+. When a greater number of variables is eliminated by the Fourier method, the procedure remains the same: at the current elimination step, the system resulting from the elimination of the previous coordinate is taken as the starting system.
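A minimal sketch of this single-variable elimination step (exact zero tests and no redundancy removal, which is what the reduced fundamental convolution discussed next takes care of):

```python
import numpy as np

def fourier_eliminate_v(a, B, c):
    """Eliminate v from the system  a[i]*v + <B[i], w> <= c[i],  i = 1..N  (sketch).

    Returns (B2, c2) such that the projection onto w is {w : B2 @ w <= c2}.
    Redundant inequalities are not removed."""
    P0 = np.where(a == 0)[0]
    Pplus = np.where(a > 0)[0]
    Pminus = np.where(a < 0)[0]
    rows = [B[i] for i in P0]
    rhs = [c[i] for i in P0]
    for i in Pminus:                      # combine pairs with opposite signs so that
        for j in Pplus:                   # the coefficient of v cancels out
            rows.append(a[j] * B[i] - a[i] * B[j])
            rhs.append(a[j] * c[i] - a[i] * c[j])
    return np.array(rows), np.array(rhs)

# Example: project {(v, w) : v + w <= 1, -v + w <= 1, -w <= 1} onto the w axis.
a = np.array([1., -1., 0.])
B = np.array([[1.], [1.], [-1.]])
c = np.array([1., 1., 1.])
print(fourier_eliminate_v(a, B, c))   # result: -w <= 1 (kept) and 2w <= 2, i.e. -1 <= w <= 1
```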

39 Reduced fundamental convolution (S.N.Chernikov)
In the 1960s, S.N.Chernikov developed the reduced fundamental method for the convolution of linear inequality systems. It constructs matrices D and vectors d with the minimal number of rows (while the b_i and c_i are considered as parameters). The reduced fundamental method establishes the relation between the vertices and the facets of the projection (inequality index storage).

40 Constructing the convex hull-1
Now we can construct the convex hull by projecting the polyhedral set { (λ, y) ∈ R^{s+m} : y = λ_1 v_1 + ... + λ_s v_s, λ_1 + ... + λ_s = 1, λ_i ≥ 0, i = 1, ..., s } onto the space of the variables y. The projection can be constructed by eliminating the variables λ_1, λ_2, ..., λ_s from the above system of equalities and inequalities. We eliminate the variables λ from the system in the same order as they are numbered. The first m+1 variables λ_1, λ_2, ..., λ_{m+1} can be eliminated from the system by expressing them in terms of the other variables using the equalities of the system. The remaining variables can be eliminated by the method of reduced fundamental convolution.
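Putting the last few slides together, here is a naive self-contained sketch: the lifted system is built for a made-up triangle and all λ variables are eliminated by plain Fourier convolution (no reduced convolution, so redundant inequalities survive and the row count may grow quickly):

```python
import numpy as np

def eliminate_first(A, c):
    """One Fourier elimination step for the first variable of {z : A @ z <= c} (sketch)."""
    a, rest = A[:, 0], A[:, 1:]
    rows = [rest[i] for i in np.where(a == 0)[0]]
    rhs = [c[i] for i in np.where(a == 0)[0]]
    for i in np.where(a < 0)[0]:
        for j in np.where(a > 0)[0]:
            rows.append(a[j] * rest[i] - a[i] * rest[j])
            rhs.append(a[j] * c[i] - a[i] * c[j])
    return np.array(rows), np.array(rhs)

# Convex hull of {v1, ..., vs} as a projection: variables z = (lambda_1..lambda_s, y),
# constraints y = sum_i lambda_i v_i, sum_i lambda_i = 1, lambda_i >= 0.
V = np.array([[0., 0.], [1., 0.], [0., 1.]])        # toy data: a triangle in R^2
s, m = V.shape
A_eq = np.hstack([-V.T, np.eye(m)])                 # y - sum_i lambda_i v_i = 0
A_one = np.hstack([np.ones((1, s)), np.zeros((1, m))])
A = np.vstack([A_eq, -A_eq, A_one, -A_one,          # equalities as pairs of inequalities
               np.hstack([-np.eye(s), np.zeros((s, m))])])
c = np.concatenate([np.zeros(2 * m), [1., -1.], np.zeros(s)])
for _ in range(s):                                  # eliminate all lambda variables
    A, c = eliminate_first(A, c)
# Now {y : A @ y <= c} equals conv{v1, ..., vs}, up to redundant rows.
print(A, c)
```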

41 Constructing the convex hull-2
Let us consider the system obtained after the variables λ_1, λ_2, ..., λ_q, where q ≥ m+1, have been eliminated. The system provides, in effect, the description of the convex hull of the points {v_1, v_2, ..., v_q}: one only needs to set the variables λ_{q+1}, ..., λ_s to zero in it. Eliminating the next variable λ_{q+1} from this system is equivalent to attaching the point v_{q+1} to the convex hull of the points {v_1, v_2, ..., v_q}. Thus, when constructing a convex hull, information about the points that have yet to be attached is not needed; it is not even necessary to know the future points themselves or their number.

42 Result: a stable beneath-beyond scheme
Thus, this method can be used for constructing the polyhedron sequentially, one point after another. It is important that this turns the procedure into a method based on the beneath-beyond scheme. The inequality index storage used in the reduced fundamental convolution method is equivalent to a partial storage of the combinatorial structure of the intermediate polyhedra. Therefore, a polyhedron is, in effect, stored as a system of inequalities, each of which corresponds to a face of the polyhedron; each inequality is stored along with the numbers of the vertices that belong to the corresponding face. This information helps to solve the three problems listed above. For example, the question whether a given face is visible from a new point is answered by substituting the point being attached into the linear inequality corresponding to that face.
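The visibility test mentioned at the end is just a sign check; a minimal sketch for a polyhedron given as D y ≤ d:

```python
import numpy as np

def visible_facets(D, d, g, tol=1e-9):
    """Facets of the polyhedron {y : D @ y <= d} visible from the point g (sketch).

    A facet is visible from g exactly when g violates its inequality."""
    return np.where(D @ g > d + tol)[0]

# Unit square, viewed from the point (2, 0.5): only the facet y1 <= 1 is visible.
D = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
d = np.array([1., 0., 1., 0.])
print(visible_facets(D, d, np.array([2., 0.5])))    # -> [0]
```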

43 Application of the ER method for approximating the EPH
In the process of approximating the EPH in the convex case, the only special feature of the ER method is the construction of the initial approximation. In contrast to approximating a compact body, the initial approximation is the non-negative cone with its vertex at a Pareto optimal criterion point.

44 Constructing the bi-objective slices
Let a polyhedral set be given by { x ∈ R^n : Σ_j a_{ij} x_j ≤ b_i, i = 1, ..., N }. Then its two-dimensional slice, which is parallel to the axes x_{j1} and x_{j2} (J = {j1, j2}) and passes through the point x*, is given by { (x_{j1}, x_{j2}) : a_{i,j1} x_{j1} + a_{i,j2} x_{j2} ≤ b_i − Σ_{j∉J} a_{ij} x*_j, i = 1, ..., N }.
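A minimal sketch of this computation (the matrix A, the vector b, and the point x* are assumed to be given as numpy arrays):

```python
import numpy as np

def bi_objective_slice(A, b, x_star, j1, j2):
    """Two-dimensional slice of {x : A @ x <= b} parallel to the axes j1, j2
    and passing through x_star (sketch).

    Returns (A2, b2) with the slice given as {(x_j1, x_j2) : A2 @ (x_j1, x_j2) <= b2}."""
    other = [j for j in range(A.shape[1]) if j not in (j1, j2)]
    A2 = A[:, [j1, j2]]                     # coefficients of the two retained variables
    b2 = b - A[:, other] @ x_star[other]    # fixed variables move to the right-hand side
    return A2, b2
```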

45 One can construct multiple slices for the same pair of variables very fast: one simply has first to order the two-dimensional vectors (a_{i,j1}, a_{i,j2}), say by their angle. The order does not depend on the values of the right-hand sides b_i − Σ_{j∉J} a_{ij} x*_j, so it is computed only once and reused for all slice points x*.
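A sketch of the corresponding precomputation (the constraint matrix and the column pair below are made-up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))       # hypothetical constraint matrix of {x : A @ x <= b}
j1, j2 = 0, 1                      # the retained pair of variables

# The angular order of the 2-D normals (a_i,j1, a_i,j2) depends on A only,
# so it is computed once and reused for every slice point x*.
order = np.argsort(np.arctan2(A[:, j2], A[:, j1]))
print(order)
```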

