Constrained stress majorization using diagonally scaled gradient projection
Tim Dwyer and Kim Marriott
Clayton School of Information Technology, Monash University, Australia

Constrained stress majorization layout
 Separation constraints x1 + d ≤ x2, y1 + d ≤ y2 can be used with force-directed layout to impose spacing requirements. For boxes of widths w1, w2 and heights h2, h3, the constraints x1 + (w1 + w2)/2 ≤ x2 and y3 + (h2 + h3)/2 ≤ y2 keep the boxes from overlapping.
 In this talk we present:
 diagonal scaling for faster gradient projection
 changes to our active-set solver
 an evaluation of the new method
 Constrained stress majorization:
 stress majorization reduces overall layout stress
 gradient projection solves the quadratic programs
 an active-set solver performs the projection step
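As a concrete illustration (not from the paper), separation constraints can be represented as simple triples; the SeparationConstraint type and helper names below are hypothetical:

```python
# Hypothetical representation of a separation constraint x_left + gap <= x_right.
from dataclasses import dataclass

@dataclass
class SeparationConstraint:
    left: int    # index of the left-hand variable
    right: int   # index of the right-hand variable
    gap: float   # minimum required separation d

def non_overlap_x(i, j, widths):
    """Horizontal non-overlap between boxes i and j: x_i + (w_i + w_j)/2 <= x_j."""
    return SeparationConstraint(left=i, right=j, gap=(widths[i] + widths[j]) / 2)

def satisfied(c, x, tol=1e-9):
    return x[c.left] + c.gap <= x[c.right] + tol
```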

Example: “Unix” graph data

Stress majorization
 Minimize stress(X) = Σi<j wij (‖Xi − Xj‖ − dij)², where dij is the ideal distance between nodes i and j.
 Each iteration replaces the stress with a majorizing quadratic form and minimizes it separately in each axis to obtain new positions x*, y*.
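For reference, a minimal numpy evaluation of the stress function; the weighting wij = dij⁻² is the common convention and is assumed here rather than taken from the slides:

```python
import numpy as np

def stress(X, D):
    """stress(X) = sum_{i<j} w_ij * (||X_i - X_j|| - d_ij)^2, with w_ij = d_ij**-2.

    X: (n, 2) node positions; D: (n, n) ideal (e.g. graph-theoretic) distances.
    """
    total = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            w = D[i, j] ** -2
            total += w * (np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2
    return total
```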

Constrained stress majorization
 Instead of solving unconstrained quadratic forms at each iteration, we solve them subject to separation constraints, i.e. quadratic programming:
 minimize ½ xᵀQx − bᵀx subject to the separation constraints, separately in each axis, for new positions x*, y*.
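As a sketch of the subproblem only (not the paper's solver), one axis can be handed to scipy's general-purpose trust-constr method; Q and b are assumed to have been assembled by the majorization step, and constraints are (l, r, gap) triples:

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

def solve_axis_subproblem(Q, b, constraints, x0):
    """min 0.5*x^T Q x - b^T x  s.t.  x[l] + gap <= x[r] for each (l, r, gap)."""
    A = np.zeros((len(constraints), len(x0)))
    lb = np.zeros(len(constraints))
    for k, (l, r, gap) in enumerate(constraints):
        A[k, r], A[k, l], lb[k] = 1.0, -1.0, gap   # encodes x_r - x_l >= gap
    res = minimize(lambda x: 0.5 * x @ Q @ x - b @ x, x0,
                   jac=lambda x: Q @ x - b, method="trust-constr",
                   constraints=[LinearConstraint(A, lb, np.inf)])
    return res.x
```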

Gradient projection
 [Figure walkthrough: from the current point x0, take an unconstrained descent step x1 = x0 − αg along the negative gradient g; project the result onto the feasible region; then step along the projected direction d by βd to reach x2; repeating converges to the constrained optimum x*.]
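A compact sketch of this loop, assuming a quadratic objective so the step sizes α and β have closed forms; project() is the projection operation described later in the talk:

```python
import numpy as np

def gradient_projection(Q, b, project, x, iters=100, tol=1e-7):
    """Minimize 0.5*x^T Q x - b^T x over the feasible set defined by project()."""
    for _ in range(iters):
        g = Q @ x - b                            # gradient at x
        gQg = g @ Q @ g
        if gQg <= 0:
            break
        alpha = (g @ g) / gQg                    # exact descent step for a quadratic
        d = project(x - alpha * g) - x           # feasible search direction
        if np.linalg.norm(d) < tol:
            break
        dQd = d @ Q @ d
        beta = min(1.0, max(0.0, (d @ (b - Q @ x)) / dQd)) if dQd > 0 else 1.0
        x = x + beta * d                         # line search along d, clipped to stay feasible
    return x
```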

Convergence
 A badly scaled problem can have poor GP convergence.
 The condition number of Q is κ(Q) = λmax(Q) / λmin(Q).

Convergence
 A badly scaled problem can have poor GP convergence; perfect scaling should give immediate convergence.
 Newton's method: x ← x − Q⁻¹(Qx − b) reaches the optimum of a quadratic in a single step.
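A tiny numpy illustration (not from the slides) of both points, conditioning and the one-step Newton solve:

```python
import numpy as np

Q = np.diag([100.0, 1.0])              # badly scaled toy Hessian
b = np.array([1.0, 1.0])

print(np.linalg.cond(Q))               # kappa(Q) = lambda_max / lambda_min = 100

x = np.zeros(2)
x = x - np.linalg.solve(Q, Q @ x - b)  # Newton step: x <- x - Q^{-1} g
assert np.allclose(Q @ x, b)           # optimum reached in one step
```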

Scaled gradient projection
 Transform the entire problem by a change of variables x = Sy, s.t. the scaled form (Hessian SQS, linear term Sb) is better conditioned.

Projection operation
 minimize Σi (xi − ui)² subject to separation constraints xl + d ≤ xr
 Is itself a quadratic program
 Solve with an active-set style method (see the sketch after this slide):
 Move each xi to its desired position ui
 Build blocks of active constraints:
 Find the most violated constraint xl + d ≤ xr
 Satisfy it and add it to a block B
 Move B to the average desired position of its constituent variables
 Repeat until no constraint is violated
 [Figure: variables a–e merging into blocks step by step]
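Here is a merge-only sketch of the block-based projection, under the simplifying assumption that no block needs to split within a call (splitting is handled by the full solver, as the next slide explains); variable and helper names are illustrative:

```python
import numpy as np

def project(u, constraints):
    """Nearest point to desired positions u subject to x[l] + gap <= x[r].

    Merge-only active-set sketch: each variable lives in a block at
    pos[block] + offset; a violated constraint merges two blocks, and each
    block sits at the average of its members' desired positions minus offsets.
    """
    n = len(u)
    block = list(range(n))                     # block id per variable
    offset = np.zeros(n)                       # position within the block
    members = {i: [i] for i in range(n)}
    pos = np.array(u, dtype=float)             # block positions

    def x(i):
        return pos[block[i]] + offset[i]

    while True:
        worst, v = None, 1e-9
        for (l, r, gap) in constraints:        # find the most violated constraint
            viol = x(l) + gap - x(r)
            if viol > v:
                worst, v = (l, r, gap), viol
        if worst is None:
            return np.array([x(i) for i in range(n)])
        l, r, gap = worst
        bl, br = block[l], block[r]
        if bl == br:                           # needs splitting; out of scope here
            raise NotImplementedError("in-block violation: full solver splits the block")
        delta = offset[l] + gap - offset[r]    # shift that makes the constraint active
        for i in members[br]:
            block[i], offset[i] = bl, offset[i] + delta
        members[bl] += members.pop(br)
        # optimal block position: average of desired positions minus offsets
        pos[bl] = np.mean([u[i] - offset[i] for i in members[bl]])
```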

Projection operation: incremental
 Block structure is preserved between projection operations.
 Before each projection, the previous blocks are checked for split points (this ensures convergence).
 In the next projection, blocks are moved as one to their new weighted-average desired positions.

Projection operation under scaling
 The projection is itself a quadratic program.
 Scaling by a full n×n matrix would turn the two-variable separation constraints into general linear constraints over n variables, defeating the fast block-based method.

Scaling for stress majorization
 Q is diagonally dominant: qii ≥ Σj≠i |qij|.
 Choose a diagonal scaling S = diag(s1, …, sn) with si = 1/√qii, so that SQS has unit diagonal and remains diagonally dominant.

Scaled separation constraints
 Under diagonal scaling x = Sy, a separation constraint xl + d ≤ xr becomes the scaled separation constraint sl yl + d ≤ sr yr, still over just two variables.
 We need new expressions for:
 the optimal block position
 the Lagrange multipliers for active constraints
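A short sketch of the scaling, assuming the Jacobi-style choice si = 1/√qii from the previous slide; note the scaled constraints keep their two-variable structure:

```python
import numpy as np

def diagonally_scale(Q, b, constraints):
    """Scale with s_i = 1/sqrt(q_ii): SQS gets unit diagonal, b becomes Sb,
    and x_l + gap <= x_r becomes s_l*y_l + gap <= s_r*y_r (still 2 variables)."""
    s = 1.0 / np.sqrt(np.diag(Q))
    Qs = Q * np.outer(s, s)                    # S Q S
    bs = s * b                                 # S b
    scaled = [(l, r, gap, s[l], s[r]) for (l, r, gap) in constraints]
    return Qs, bs, scaled

# The conditioning payoff on a toy diagonally dominant matrix:
Q = np.array([[100.0, 1.0], [1.0, 2.0]])
s = 1.0 / np.sqrt(np.diag(Q))
print(np.linalg.cond(Q), np.linalg.cond(Q * np.outer(s, s)))  # condition drops sharply
```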

Optimum block position
 Minimize the projection objective over a block's variables subject to its active constraints; with scaled constraints the minimum is again a weighted average of the constituent variables' desired positions.
 [The closed-form expressions on this slide were not captured in the transcript.]

Test cases
 Unconstrained vs. constrained layouts. [Figures not captured in the transcript.]

Results

Improved convergence

Summary
 Diagonal scaling:
 is cheap to compute
 transforms separation constraints into scaled separation constraints, not full linear constraints, so we can still use the block tricks
 is appropriate for improving the condition of graph Laplacian matrices, because they are diagonally dominant
 particularly improves the Laplacian's condition when the graph has wide variation in degree (common in practical applications)