Nonnegative polynomials and applications to learning

1 Nonnegative polynomials and applications to learning
Georgina Hall (Princeton, ORFE). Joint work with Amir Ali Ahmadi (Princeton, ORFE) and Mihaela Curmei (formerly Princeton, ORFE).

2 Nonnegative polynomials
A polynomial p(x) := p(x_1, …, x_n) is nonnegative if p(x) ≥ 0 for all x ∈ ℝ^n.
Example: p(x) = x^4 − 5x^2 − x + 10. Is this polynomial nonnegative?

3 Optimizing over nonnegative polynomials
Interested in more than checking nonnegativity of a given polynomial. Problems of the type:

min_p C(p)
s.t.  A p = b
      p(x) ≥ 0 for all x

The decision variables are the coefficients of the polynomial p. The objective C(p) is linear and the constraints A p = b are affine in these coefficients (e.g., sum of coefficients = 1); the last constraint is the nonnegativity condition. Why would we be interested in problems of this type?

4 1. Shape-constrained regression
Impose, e.g., monotonicity or convexity on the regressor. Example: price of a car with respect to age. How does this relate to optimizing over nonnegative polynomials?
- Monotonicity of a polynomial regressor over a range ⇔ nonnegativity of its partial derivatives over that range.
- Convexity of a polynomial regressor ⇔ its Hessian satisfies H(x) ⪰ 0 for all x, i.e., y^T H(x) y ≥ 0 for all x, y.

5 2. Difference of Convex (DC) programming
Problems of the form:

min f_0(x)
s.t. f_i(x) ≤ 0

where f_i(x) := g_i(x) − h_i(x), with g_i, h_i convex. (For polynomials, convexity ⇔ y^T H(x) y ≥ 0 for all x, y, where H is the Hessian.)
ML applications: sparse PCA, kernel selection, feature selection in SVM.
Studied for quadratics (Hiriart-Urruty, 1985; Tuy, 1995); for polynomials, the question is nice to study computationally.

6 Outline of the rest of the talk
- Very brief introduction to sum of squares
- Revisit shape-constrained regression
- Revisit difference of convex programming

7 Sum of squares polynomials
Is this polynomial nonnegative? Deciding this is NP-hard for degree ≥ 4. What if p can be written as a sum of squares (sos), i.e., p(x) = Σ_i q_i(x)^2 for some polynomials q_i? This is a sufficient condition for nonnegativity, and one can optimize over the set of sos polynomials using SDP.
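As an illustration, here is a minimal sketch of the SDP behind an sos certificate for the earlier example p(x) = x^4 − 5x^2 − x + 10: p is sos iff p(x) = z(x)^T Q z(x) for some positive semidefinite matrix Q, with z(x) = [1, x, x^2]. The use of cvxpy and the SCS solver is an illustrative choice, not something from the talk.

```python
import cvxpy as cp

# Gram-matrix parameterization with z(x) = [1, x, x^2]:
# z(x)^T Q z(x) = Q00 + 2*Q01*x + (2*Q02 + Q11)*x^2 + 2*Q12*x^3 + Q22*x^4
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                        # Q positive semidefinite
    Q[0, 0] == 10,                 # constant term of p
    2 * Q[0, 1] == -1,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -5,   # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" means an sos certificate of nonnegativity was found
```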

8 Revisiting Monotone Regression
[Ahmadi, Curmei, GH, 2017]

9 Monotone regression: problem definition
N data points (x_i, y_i), with x_i ∈ ℝ^n and y_i ∈ ℝ noisy measurements of a monotone function: y_i = f(x_i) + ε_i.
Feature domain: a box B ⊆ ℝ^n.
Monotonicity profile ρ, with, for j = 1, …, n:
- ρ_j = 1 if f is monotonically increasing w.r.t. x_j,
- ρ_j = −1 if f is monotonically decreasing w.r.t. x_j,
- ρ_j = 0 if there is no monotonicity requirement on f w.r.t. x_j.
Goal: fit a polynomial to the data that has monotonicity profile ρ over B.
Can this be done computationally? How good is this approximation?

10 NP-hardness and SOS relaxation
Theorem: Given a cubic polynomial p, a box B, and a monotonicity profile ρ, it is NP-hard to test whether p has profile ρ over B.
Target condition: ∂p(x)/∂x_j ≥ 0 for all x ∈ B, where B = [b_1^-, b_1^+] × … × [b_n^-, b_n^+].
SOS relaxation:
- If p has odd degree: ∂p(x)/∂x_j = σ_0(x) + Σ_i σ_i(x) (b_i^+ − x_i)(x_i − b_i^-), where σ_i, i = 0, …, n, are sos polynomials.
- If p has even degree: ∂p(x)/∂x_j = σ_0(x) + Σ_i σ_i(x) (b_i^+ − x_i) + Σ_i τ_i(x) (x_i − b_i^-), where the σ_i, τ_i are sos polynomials.

11 Approximation theorem
Theorem: For any ε > 0 and any C^1 function f with monotonicity profile ρ, there exists a polynomial p with the same profile ρ such that max_{x ∈ B} |f(x) − p(x)| < ε. Moreover, one can certify its monotonicity profile using SOS.
The proof uses results from approximation theory and Putinar's Positivstellensatz.

12 Numerical experiments (1/2)
[Figures: fits to the data in a low-noise environment and a high-noise environment.]

13 Numerical experiments (2/2)
[Figures: fits to the data in a low-noise environment and a high-noise environment, with n = 4, d = 7.]

14 Revisiting difference of convex programming
[Ahmadi, GH*, 2016] * Winner of the 2016 INFORMS Computing Society Best Student Paper Award

15 Difference of Convex (dc) decomposition
Interested in problems of the form:

min f_0(x)
s.t. f_i(x) ≤ 0

where f_i(x) := g_i(x) − h_i(x), with g_i, h_i convex.
This leads to the difference of convex (dc) decomposition problem: given a polynomial f, find convex polynomials g and h such that f = g − h.
Does such a decomposition always exist? Can it be efficiently computed? Is it unique?

16 Existence of dc decomposition (1/3)
Recall (sos-convexity): f(x) is convex ⇔ y^T H_f(x) y ≥ 0 for all x, y ∈ ℝ^n, where H_f is the Hessian of f. A sufficient condition is that y^T H_f(x) y be sos (sos-convexity), which can be checked with an SDP.
Theorem: Any polynomial can be written as the difference of two sos-convex polynomials.
Corollary: Any polynomial can be written as the difference of two convex polynomials.

17 Existence of dc decomposition (2/3)
Lemma: Let K be a full-dimensional cone in a vector space E. Then any v ∈ E can be written as v = k_1 − k_2, with k_1, k_2 ∈ K.
Proof sketch: take any k in the interior of K. There exists α ∈ (0, 1) such that k′ := (1 − α) v + α k ∈ K. Then v = (1/(1 − α)) k′ − (α/(1 − α)) k, with k_1 := (1/(1 − α)) k′ ∈ K and k_2 := (α/(1 − α)) k ∈ K.

18 Existence of dc decomposition (3/3)
Here, E = {polynomials of degree 2d in n variables} and K = {sos-convex polynomials of degree 2d in n variables}. It remains to show that K is full dimensional: Σ_i x_i^{2d} can be shown to be in the interior of K.
This also shows that a decomposition can be obtained efficiently: solving f = g − h with g, h sos-convex is an SDP (see the sketch below). In fact, we show that a decomposition can also be found via LP and SOCP (not covered here).
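A minimal sketch of that SDP in the univariate case, where sos-convexity of a polynomial q reduces to its second derivative q'' being sos. The choice of f (the quartic used later in the undominated-decomposition example), cvxpy, and the SCS solver are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# f(x) = x^4 - 3x^2 + 2x - 1, coefficients in increasing degree order
f = np.array([-1.0, 2.0, -3.0, 0.0, 1.0])

# Decision variables: coefficients of g and h (also degree 4, increasing order)
g = cp.Variable(5)
h = cp.Variable(5)

def convexity_constraints(q):
    # For q(x) = q0 + q1 x + ... + q4 x^4, q''(x) = 2 q2 + 6 q3 x + 12 q4 x^2.
    # Convexity of q <=> q'' is sos, i.e. q''(x) = [1, x] P [1, x]^T with P PSD.
    P = cp.Variable((2, 2), symmetric=True)
    return [
        P >> 0,
        P[0, 0] == 2 * q[2],       # constant term of q''
        2 * P[0, 1] == 6 * q[3],   # coefficient of x in q''
        P[1, 1] == 12 * q[4],      # coefficient of x^2 in q''
    ]

constraints = [g - h == f] + convexity_constraints(g) + convexity_constraints(h)

# Any feasible point is a dc decomposition, so a feasibility problem suffices.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("g coefficients:", g.value)
print("h coefficients:", h.value)
```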

19 Uniqueness of dc decomposition
Dc decomposition: given a polynomial f, find convex polynomials g and h such that f = g − h.
Questions:
- Does such a decomposition always exist? Yes.
- Can I obtain such a decomposition efficiently? Yes, through sos-convexity.
- Is this decomposition unique? No: given an initial decomposition f(x) = g(x) − h(x), any convex polynomial p(x) yields an alternative decomposition f(x) = (g(x) + p(x)) − (h(x) + p(x)).
So which is the "best" decomposition?

20 Convex-Concave Procedure (CCP)
min 𝑓 0 (π‘₯) 𝑠.𝑑. 𝑓 𝑖 π‘₯ ≀0, 𝑖=1,…,π‘š Heuristic for minimizing DC programming problems. Idea: Input π‘˜β‰”0 x π‘₯ 0 , initial point 𝑓 𝑖 = 𝑔 𝑖 βˆ’ β„Ž 𝑖 , 𝑖=0,…,π‘š Convexify by linearizing 𝒉 x 𝒇 π’Š π’Œ 𝒙 = 𝑔 𝑖 π‘₯ βˆ’( β„Ž 𝑖 π‘₯ π‘˜ +𝛻 β„Ž 𝑖 π‘₯ π‘˜ 𝑇 π‘₯βˆ’ π‘₯ π‘˜ ) Solve convex subproblem Take π‘₯ π‘˜+1 to be the solution of min 𝑓 0 π‘˜ π‘₯ 𝑠.𝑑. 𝑓 𝑖 π‘˜ π‘₯ ≀0, 𝑖=1,…,π‘š convex convex affine π‘˜β‰”π‘˜+1 𝒇 π’Š π’Œ 𝒙 𝒇 π’Š (𝒙)

21 Convex-Concave Procedure (CCP)
Toy example: min_x f(x), where f(x) := g(x) − h(x).
- Initial point: x_0 = 2.
- Convexify f(x) around x_0 to obtain f^0(x).
- Minimize f^0(x) and obtain x_1.
- Reiterate.
[Figure: the iterates x_0, x_1, x_2, x_3, x_4, … converge to a point x_∞.]
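A runnable sketch of this loop. The toy f in the figure is not specified, so the quartic f(x) = x^4 − 3x^2 + 2x − 1 and the decomposition g(x) = x^4 + x^2, h(x) = 4x^2 − 2x + 1 from the undominated-decomposition slide are used as stand-ins; scipy is an illustrative choice for the one-dimensional convex subproblem.

```python
from scipy.optimize import minimize_scalar

# Decomposition f = g - h of f(x) = x^4 - 3x^2 + 2x - 1:
g  = lambda x: x**4 + x**2
h  = lambda x: 4 * x**2 - 2 * x + 1
dh = lambda x: 8 * x - 2           # derivative of h

x_k = 2.0                          # initial point x_0
for k in range(20):
    # Convexified surrogate f^k: g minus the linearization of h at x_k
    surrogate = lambda x, xk=x_k: g(x) - (h(xk) + dh(xk) * (x - xk))
    # The surrogate is convex; a bounded 1-D search is enough here
    x_k = minimize_scalar(surrogate, bounds=(-10, 10), method="bounded").x
print("CCP iterate after 20 steps:", x_k)
```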

22 Picking the β€œbest” decomposition for CCP
Algorithm: linearize h(x) around a point x_k to obtain a convexified version of f(x).
Idea: pick h(x) such that it is as close as possible to affine around x_k.
Mathematical translation: minimize the curvature of h.

23 Undominated decompositions (1/2)
Definition: (g, h), with h := g − f, is an undominated decomposition of f if no other decomposition of f can be obtained by subtracting a (nonaffine) convex function from both g and h.
Example: f(x) = x^4 − 3x^2 + 2x − 1.
- Decomposition 1: g(x) = x^4 + x^2, h(x) = 4x^2 − 2x + 1. Convexify around x_0 = 2 to get f^0(x).
- Decomposition 2: g'(x) = x^4, h'(x) = 3x^2 − 2x + 1. Convexify around x_0 = 2 to get f^{0'}(x).
Decomposition 1 is dominated by decomposition 2 (subtract the convex function x^2 from both g and h). For decomposition 2, one cannot subtract something convex from both g' and h' and keep both convex, so it is undominated.
If (g', h') dominates (g, h), then the next iterate in CCP obtained using (g', h') always beats the one obtained using (g, h).
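A quick worked check of the domination claim at x_0 = 2, using the decompositions listed above: linearizing h at x_0 = 2 gives 13 + 14(x − 2) = 14x − 15, so f^0(x) = x^4 + x^2 − 14x + 15; linearizing h' at x_0 = 2 gives 9 + 10(x − 2) = 10x − 11, so f^{0'}(x) = x^4 − 10x + 11. Both surrogates equal f at x_0 = 2, and f^0(x) − f^{0'}(x) = x^2 − 4x + 4 = (x − 2)^2 ≥ 0, so the surrogate from the undominated decomposition is pointwise tighter.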

24 Undominated decompositions (2/2)
Theorem: Given a polynomial f, consider the problem

min_{g,h} (1/A_n) ∫_{S^{n−1}} Tr H_h(σ) dσ     (⋆)
s.t. f = g − h, g convex, h convex,

where A_n = 2 π^{n/2} / Γ(n/2) is the area of the unit sphere S^{n−1}. Any optimal solution is an undominated dcd of f (and an optimal solution always exists).
Theorem: If f has degree 4, it is NP-hard to solve (⋆).
Idea: replace "f = g − h, g, h convex" by "f = g − h, g, h sos-convex".

25 Comparing different decompositions (1/2)
Solving the problem min_{x ∈ B} f_0(x), where B = {x : ||x|| ≤ R} and f_0 has n = 8 variables and degree d = 4. Decompose f_0, run CCP for 4 minutes, and compare objective values. Two initial decompositions are compared:
- Feasibility: min_{g,h} 0 s.t. f_0 = g − h, g, h sos-convex.
- Undominated: min_{g,h} (1/A_n) ∫_{S^{n−1}} Tr H_g(σ) dσ s.t. f_0 = g − h, g, h sos-convex.

26 Comparing different decompositions (2/2)
Setup: average over 30 instances. Solver: MOSEK. Machine: 8 GB RAM, 2.40 GHz processor.
[Table: final CCP objective values for the Feasibility vs. Undominated initial decompositions.]
Conclusion: the performance of CCP is strongly affected by the initial decomposition.

27 Main messages
Optimization over nonnegative polynomials has many applications, and powerful SDP/SOS-based relaxations are available. Two particular applications were covered here: monotone regression and difference of convex programming.
Future directions:
- Recent algorithmic developments to improve scalability of SDP.
- Using DC programming for sparse regression: min_x ||Ax − b||^2 + λ ||x||_0.

28 Thank you for listening
Questions? Want to learn more?

29 Imposing monotonicity
Example: For what values of a and b is the following polynomial monotone on [0,1]? p(x) = x^4 + a x^3 + b x^2 − (a + b) x.
Theorem: A polynomial p(x) of degree 2d is monotone (nondecreasing) on [0,1] if and only if p'(x) = x s_1(x) + (1 − x) s_2(x), where s_1(x) and s_2(x) are some sos polynomials of degree 2d − 2.
Search for the sos polynomials using SDP! (A small sketch is given below.)
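A minimal sketch of that SDP for the example polynomial, with one illustrative choice of (a, b) (the values, cvxpy, and the SCS solver are assumptions for illustration): write s_1(x) = [1, x] Q_1 [1, x]^T and s_2(x) = [1, x] Q_2 [1, x]^T with Q_1, Q_2 ⪰ 0, and match coefficients with p'(x).

```python
import cvxpy as cp

# Illustrative parameter values (not from the slides):
a, b = -2.0, 2.0
# p(x) = x^4 + a x^3 + b x^2 - (a + b) x, so
# p'(x) = 4 x^3 + 3a x^2 + 2b x - (a + b).
dp = [-(a + b), 2 * b, 3 * a, 4.0]   # coefficients of p', increasing degree

# Degree-2 sos multipliers s1, s2 via 2x2 PSD Gram matrices on [1, x]:
Q1 = cp.Variable((2, 2), symmetric=True)
Q2 = cp.Variable((2, 2), symmetric=True)

constraints = [
    Q1 >> 0, Q2 >> 0,
    # Match coefficients of x*s1(x) + (1 - x)*s2(x) with those of p'(x):
    Q2[0, 0] == dp[0],                                # constant term
    Q1[0, 0] + 2 * Q2[0, 1] - Q2[0, 0] == dp[1],      # x
    2 * Q1[0, 1] + Q2[1, 1] - 2 * Q2[0, 1] == dp[2],  # x^2
    Q1[1, 1] - Q2[1, 1] == dp[3],                     # x^3
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" certifies p is nondecreasing on [0, 1]
```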

30 1. Polynomial optimization
𝜸 βˆ— min π‘₯ 𝑝(π‘₯) 𝑠.𝑑. 𝑓 𝑖 π‘₯ ≀0 𝑔 𝑗 π‘₯ =0 max 𝛾 𝛾 𝑠.𝑑. 𝑝 π‘₯ βˆ’π›Ύβ‰₯0, βˆ€π‘₯∈{ 𝑓 𝑖 π‘₯ ≀0, 𝑔 𝑗 π‘₯ =0} ML applications: Low-rank matrix completion Training deep nets with polynomial activation functions Nonnegative matrix factorization Dictionary learning Sparse recovery with nonconvex regularizers

