Network Systems Lab., Korea Advanced Institute of Science and Technology

No.1 Some useful Contraction Mappings: results for a particular choice of norms (Prop. 1.12)

No.2 Some useful Contraction Mappings: Prop. 1.13. Assume the following:
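Props. 1.12 and 1.13 build on the basic contraction-mapping framework. For orientation only (the propositions' exact hypotheses, for the particular norms used on the slides, are not reproduced here), the general statement is:

```latex
% Contraction mapping with respect to a norm \|\cdot\| (general form; the slides'
% Prop. 1.12 / 1.13 give conditions under which specific iteration mappings satisfy this)
\|T(x) - T(y)\| \le \alpha\,\|x - y\| \quad \text{for all } x, y, \qquad 0 \le \alpha < 1 .
% Consequence: T has a unique fixed point x^{*}, and the iteration x(t+1) = T(x(t))
% converges geometrically:
\|x(t) - x^{*}\| \le \alpha^{t}\,\|x(0) - x^{*}\| .
```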

No.3 Unconstrained Optimization
- Jacobi algorithm (generalization of the JOR method for linear equations)
- Gauss-Seidel algorithm (generalization of the SOR method for linear equations)
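A minimal Python sketch of these two coordinate-wise schemes, assuming f has positive diagonal second derivatives; the function names, the callables grad and hess_diag, and the constant step size gamma are illustrative choices, not taken from the slides:

```python
import numpy as np

# Illustrative sketch: grad(x) returns the gradient of f, hess_diag(x) the diagonal
# second derivatives; names and the step size gamma are assumptions, not slide notation.

def jacobi_step(x, grad, hess_diag, gamma=1.0):
    """Nonlinear Jacobi iteration (JOR-like): all coordinates are updated
    simultaneously with a diagonally scaled gradient step."""
    g = grad(x)
    d = hess_diag(x)          # assumed positive
    return x - gamma * g / d

def gauss_seidel_step(x, grad, hess_diag, gamma=1.0):
    """Nonlinear Gauss-Seidel iteration (SOR-like): coordinates are updated one
    at a time, each update immediately using the latest values of the others."""
    x = x.copy()
    for i in range(len(x)):
        x[i] -= gamma * grad(x)[i] / hess_diag(x)[i]  # recomputing grad(x) keeps the sketch simple
    return x
```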

No.4
- Gradient algorithm (generalization of Richardson's method for linear equations)
- Gauss-Seidel variant of the Gradient algorithm
- The above four algorithms are called descent algorithms; in fact, the Gradient algorithm is also called the Steepest Descent algorithm.
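A corresponding sketch of the gradient (steepest descent) algorithm and its Gauss-Seidel variant; as above, the names and the constant step size gamma are illustrative assumptions:

```python
import numpy as np

def gradient_step(x, grad, gamma=0.1):
    """Steepest descent: x(t+1) = x(t) - gamma * grad_f(x(t))."""
    return x - gamma * grad(x)

def gauss_seidel_gradient_step(x, grad, gamma=0.1):
    """Gauss-Seidel variant: coordinates take gradient steps one at a time,
    each step using the most recently updated iterate."""
    x = x.copy()
    for i in range(len(x)):
        x[i] -= gamma * grad(x)[i]
    return x

# Example on a quadratic f(x) = 0.5 x'Qx - b'x (hypothetical data):
# Q, b = np.array([[2.0, 0.5], [0.5, 1.0]]), np.array([1.0, 0.0])
# x = np.zeros(2)
# for _ in range(100):
#     x = gradient_step(x, grad=lambda z: Q @ z - b)
```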

No.5 Descent Direction (figure showing the angle θ)
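For reference, the standard definition that the slide's figure (with the angle θ) illustrates: a vector d is a descent direction of f at x if

```latex
\nabla f(x)^{\top} d < 0 ,
```

equivalently, the angle θ between d and the negative gradient is strictly smaller than 90 degrees; then f(x + γd) < f(x) for all sufficiently small γ > 0.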

No.6 Scaled Gradient algorithm
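In its usual form (which may differ in notation from the slide), the scaled gradient iteration replaces the identity scaling of steepest descent by a symmetric positive definite matrix M(t):

```latex
x(t+1) \;=\; x(t) \;-\; \gamma\, M(t)^{-1} \nabla f\bigl(x(t)\bigr),
\qquad M(t) = M(t)^{\top} \succ 0 .
```

Choosing M(t) = I recovers the gradient algorithm; choosing M(t) equal to the Hessian gives Newton's method, introduced on the next slides.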

No.7 Newton and Approximate Newton Methods
- Even in the nonquadratic case, Newton's algorithm converges much faster (under certain assumptions) than the previously introduced algorithms, particularly in a neighborhood of the optimal solution [OrR 70].

No.8
- The Jacobi algorithm can be viewed as an approximation of Newton's algorithm in which the off-diagonal entries of the Hessian matrix are ignored.
- Approximate Newton Method
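A sketch of a (damped) Newton step and of the diagonal approximation that the slide relates to the Jacobi algorithm; the function names and the use of a dense Hessian are illustrative assumptions:

```python
import numpy as np

def newton_step(x, grad, hess, gamma=1.0):
    """Newton update: solve hess(x) d = -grad(x), then move x(t+1) = x(t) + gamma * d."""
    d = np.linalg.solve(hess(x), -grad(x))
    return x + gamma * d

def approximate_newton_step(x, grad, hess, gamma=1.0):
    """Approximate Newton update keeping only the diagonal of the Hessian;
    ignoring the off-diagonal entries in this way recovers the Jacobi algorithm."""
    d = -grad(x) / np.diag(hess(x))
    return x + gamma * d
```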

No.9 Convergence Analysis Using the Descent Approach
- Assumption 2.1
- Lemma 2.1 (Descent Lemma)
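Assumption 2.1 is presumably the standard Lipschitz condition on the gradient, ||∇f(x) - ∇f(y)|| ≤ K ||x - y|| for all x, y; under that assumption the Descent Lemma takes the familiar form (constants may be named differently on the slide):

```latex
f(y) \;\le\; f(x) + \nabla f(x)^{\top}(y - x) + \frac{K}{2}\,\|y - x\|^{2}
\qquad \text{for all } x, y .
```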

No.10 Prop. 2.1 (Convergence of Descent Algorithms)
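A standard convergence result of this type, stated here only for orientation (the exact hypotheses and constants of Prop. 2.1 may differ): suppose x(t+1) = x(t) + γ s(t), where the directions s(t) satisfy, for some K1, K2 > 0,

```latex
K_1\,\|\nabla f(x(t))\|^{2} \;\le\; -\,\nabla f(x(t))^{\top} s(t),
\qquad
\|s(t)\| \;\le\; K_2\,\|\nabla f(x(t))\| .
```

If, in addition, the step size γ is small enough relative to the Lipschitz constant of Assumption 2.1, then f(x(t)) is nonincreasing and ∇f(x(t)) tends to zero, so every limit point of the sequence x(t) is a stationary point of f.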

No.13 Show that the Jacobi, Gradient, Scaled Gradient, Newton, and Approximate Newton algorithms satisfy the conditions of Prop. 2.1 (under certain assumptions), which implies that the convergence result of Prop. 2.1 holds for these algorithms.

No.14
- Gradient Algorithm
- Scaled Gradient Algorithm

No.15 Prop. 2.2 (Convergence of the Gauss-Seidel Algorithm)

No.17 The case of a convex cost function
- Prop. 2.3 (Convergence of Descent Methods in Convex Optimization)
- Prop. 2.4 (Geometric Convergence for Strictly Convex Optimization)

No.19 Convexity
- Definition A.13
- (Figure: a convex set vs. a non-convex set; a strictly convex function, a convex but not strictly convex function, and a non-convex function.)
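Definition A.13 is presumably the standard one; for reference:

```latex
% A set C is convex if it contains its line segments:
\alpha x + (1-\alpha)y \in C \quad \text{for all } x, y \in C,\ \alpha \in [0,1].
% A function f : C \to \mathbb{R} on a convex set C is convex if
f\bigl(\alpha x + (1-\alpha)y\bigr) \;\le\; \alpha f(x) + (1-\alpha) f(y)
\quad \text{for all } x, y \in C,\ \alpha \in [0,1],
% and strictly convex if the inequality is strict whenever x \ne y and \alpha \in (0,1).
```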

No.20 Convexity (cont'd)
- Proposition A.35: a linear function is convex; a weighted sum of convex functions with positive weights is convex; any vector norm is convex.
- Proposition A.36
- Proposition A.39

No.21 Convexity (cont'd)
- Proposition A.40
- Proposition A.41 (Strong Convexity)
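Strong convexity is commonly stated as follows; the slide's Proposition A.41 may use a different but equivalent formulation. A differentiable f is strongly convex with modulus α > 0 if

```latex
\bigl(\nabla f(x) - \nabla f(y)\bigr)^{\top}(x - y) \;\ge\; \alpha\,\|x - y\|^{2}
\quad \text{for all } x, y,
```

equivalently, f(y) ≥ f(x) + ∇f(x)ᵀ(y - x) + (α/2)||y - x||² for all x, y.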

No.22 Constrained Optimization
- Proposition 3.1 (Optimality Condition)
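The standard first-order optimality condition for minimizing a continuously differentiable f over a closed convex set X, which is presumably the content of Prop. 3.1: if x* minimizes f over X, then

```latex
\nabla f(x^{*})^{\top}\,(x - x^{*}) \;\ge\; 0 \qquad \text{for all } x \in X ,
```

and when f is convex this condition is also sufficient for optimality.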

No.23 Constrained Optimization (cont'd)
- Proposition 3.2 (Projection Theorem)
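The Projection Theorem in its usual form, stated here for reference (the slide's Prop. 3.2 may include additional parts): for a nonempty closed convex set X and any point z,

```latex
% The projection of z onto X exists and is unique:
[z]^{+} \;=\; \arg\min_{x \in X} \|x - z\|_{2} ,
% it is characterized by the variational inequality
\bigl(z - [z]^{+}\bigr)^{\top}\bigl(x - [z]^{+}\bigr) \;\le\; 0 \qquad \text{for all } x \in X ,
% and the projection mapping is nonexpansive:
\bigl\|[z_1]^{+} - [z_2]^{+}\bigr\| \;\le\; \|z_1 - z_2\| .
```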

No.24 Proof of Proposition 3.2

No.25 Gradient Projection Algorithm
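A minimal sketch of the gradient projection iteration x(t+1) = [x(t) - γ ∇f(x(t))]⁺, using a box constraint as a concrete, easily computed example of the convex set X; the function names, the box example, and the constant step size gamma are illustrative assumptions:

```python
import numpy as np

def project_box(z, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}; a simple example of X
    whose projection is cheap to compute."""
    return np.clip(z, lo, hi)

def gradient_projection_step(x, grad, lo, hi, gamma=0.1):
    """One gradient projection update: take a gradient step, then project back onto X."""
    return project_box(x - gamma * grad(x), lo, hi)
```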

No.26 Proposition 3.3
- Assumption 3.1: same as Assumption 2.1 (as in unconstrained optimization)
- Prop. 3.3 (Properties of the gradient projection mapping)

No.27 Proof of Proposition 3.3 (a)

No.28 Proof of Proposition 3.3, parts (b) and (c)

No.29 Proposition 3.4 (Convergence of the Gradient Projection Algorithm). Proof: see Proposition 3.3.

No.30 Proposition 3.5 (Geometric Convergence for Strongly Convex Problems)

No.31 Scaled Gradient Projection Algorithms

No.32 Proposition 3.7

No.33 The case of a product constraint set: parallel implementations

No.34 The assumption that X is a Cartesian product opens up the possibility of a Gauss-Seidel version of the gradient projection algorithm.
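A sketch of one sweep of that Gauss-Seidel version, assuming X = X₁ × ... × Xₘ and that a projection onto each factor Xᵢ is available; the argument names and the fixed step size gamma are illustrative assumptions:

```python
import numpy as np

def gauss_seidel_gradient_projection_step(blocks, grad_blocks, projections, gamma=0.1):
    """One Gauss-Seidel sweep of gradient projection on a product set
    X = X_1 x ... x X_m: block i takes a gradient step and is projected onto X_i,
    using the most recent values of all other blocks.

    blocks      : list of numpy arrays, the current block iterates x_i
    grad_blocks : grad_blocks(blocks, i) returns the gradient of f w.r.t. block i
    projections : projections[i](z) projects z onto X_i
    (all names are illustrative, not slide notation)
    """
    blocks = [b.copy() for b in blocks]
    for i in range(len(blocks)):
        step = blocks[i] - gamma * grad_blocks(blocks, i)
        blocks[i] = projections[i](step)
    return blocks
```

Updating all blocks simultaneously instead of sequentially gives the parallel (Jacobi-like) implementation mentioned on the previous slide.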

No.35 Proposition 3.8 (Convergence of the Gauss-Seidel Gradient Projection Algorithm)