Subdivision methods for solving polynomial equations


Subdivision methods for solving polynomial equations, B. Mourrain, J.P. Pavone, 2009

abstract The purpose of today's talk is to present a new algorithm for solving a system of polynomials in a domain of ℝ^n. It uses a powerful reduction strategy based on a univariate root finder that exploits the Bernstein basis representation and Descartes' rule of signs.

motivation Solving a system of polynomials is a subject that has been studied extensively throughout history and has practical applications in many different fields: physics, computer graphics, mechanics, the film and game industry, robotics, financial information processing, bioinformatics, coding, signal and image processing, computer vision, dynamics and flow, and many many more.

motivation We will briefly review two examples of such uses to emphasize the importance of solving these problems efficiently, quickly and precisely.

Example #1 - GPS Our GPS device receives from each satellite a signal with the following (known) information packet: the satellite identity (serial number k), the satellite coordinates in space (x_k, y_k, z_k) relative to the center of the earth, and the time t_k at which the information was transmitted, relative to an agreed global clock.

Example #1 - GPS We want to find our position in space (x, y, z). In addition, suppose that the time at which we receive the information from all the satellites is also unknown and is denoted by t. For each satellite we can express the distance between the satellite and the device in two ways: from the coordinates, and from the time difference between transmission and reception.
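As a worked illustration (the slide's own equations did not survive transcription; c denotes the speed of light), equating the two distance expressions for satellite k gives one equation in the four unknowns (x, y, z, t):

```latex
\sqrt{(x-x_k)^2 + (y-y_k)^2 + (z-z_k)^2} \;=\; c\,(t - t_k),
\qquad\text{equivalently}\qquad
(x-x_k)^2 + (y-y_k)^2 + (z-z_k)^2 \;=\; c^2 (t - t_k)^2 .
```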

Example #1 - GPS Each satellite gives us one such equation in four variables. If we subtract the first satellite's equation from each of the others, the quadratic terms cancel and we obtain a simple system of linear equations in four variables, so from five satellites we can solve a simple linear system. Of course, our location changes within a fraction of a second, and in practice we rely on a larger number of satellites. Hence, we must be able to solve large systems of equations quickly.
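To see why the subtraction linearizes the system (my own worked step, not on the slide), expand the squared-distance equation of satellite k and subtract it from that of satellite 1; the terms x^2 + y^2 + z^2 and c^2 t^2 cancel:

```latex
2x(x_k-x_1) + 2y(y_k-y_1) + 2z(z_k-z_1) - 2c^2 t\,(t_k-t_1)
  \;=\; (x_k^2+y_k^2+z_k^2) - (x_1^2+y_1^2+z_1^2) - c^2\,(t_k^2-t_1^2),
```

which is linear in (x, y, z, t).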

Example #2 - Computer Graphics Computer graphics is an area that deals with methods for creating and processing digital visual content by computer. In order to create a "smooth" view of different objects, a display with a high refresh rate is required, meaning that any change of the scene and/or the viewer's perspective should be displayed within a fraction of a second. A refresh rate of 50 or 60 frames per second produces a display that looks completely smooth.

Example #2 - Computer Graphics A common method for representing objects in computer graphics is to assemble them from different polygons. Each polygon is represented by an equation, and we need to be able to find their intersections (solving a system of equations), for example in order to emphasize certain parts of an object.

introduction As stated, solving systems of polynomials is the basis of many geometric problems. In the solution discussed today, we exploit the properties of polynomial representations in the Bernstein basis to easily deduce information on the corresponding real functions in a domain of ℝ^n.

introduction In previous lectures we dealt extensively with the Bernstein representation, which is known to be more numerically stable. Extensive use of these curves was made following the work of the French engineer Pierre Bézier, who in 1962 used them to design a Renault vehicle body. Their properties (control points, convex hull, etc.), combined with reduction methods, explain the variety of algorithms proposed for solving univariate equations.

The situation in the multivariate case has not been studied so extensively. Two main subfamilies coexist: a family based on subdivision techniques, and a family based on reduction approaches.

subdivision techniques The subdivision approaches use an exclusion test whose result is either "no solution exists" or "there may be a solution". If the result is that there is no solution, we reject the given domain; otherwise, we divide the domain. We continue this process until a certain criterion is satisfied (the size of the domain, or something much more elaborate). This approach yields algorithms with many iterations, especially in cases of multiple roots. However, the cost per iteration is significantly lower than in the second approach.

Reduction approaches The power of the method is based on the ability to focus on the parts of the domain where the roots are located. A reduction cannot completely replace subdivision, because it is not always possible to reduce the given domain. However, reduction significantly reduces the number of iterations, and this of course has drastic effects on performance.

Bernstein polynomial representation So far, we have dealt with Bernstein polynomials on the interval [0,1]. Let us now look at a general representation of a polynomial on an arbitrary interval [a,b]. Every univariate polynomial f(x) ∈ 𝕂[x] of degree d can be represented as follows: for any a < b ∈ ℝ,

f(x) = \sum_{i=0}^{d} b_i \binom{d}{i} \frac{1}{(b-a)^d} (x-a)^i (b-x)^{d-i}.

We denote the Bernstein polynomials by

B_d^i(x; a, b) = \binom{d}{i} \frac{1}{(b-a)^d} (x-a)^i (b-x)^{d-i},

and then f(x) = \sum_{i=0}^{d} b_i\, B_d^i(x; a, b).

Lemma #1 - Descartes' rule of signs The number of real roots of f(x) = \sum_{i=0}^{d} b_i\, B_d^i(x; a, b) in [a,b] is bounded by the number V(b) of sign changes of b = (b_i)_{i=0,\dots,d}. As a consequence, if V(b) = 0 there is no root in [a,b], and if V(b) = 1 there is exactly one root in [a,b].

An example of the Lemma in the standard monomial representation: f(x) = x^3 + x^2 - x - 1. The sequence of coefficient signs is [+ + - -], hence there is at most one positive root. For negative roots we look at f(-x) = -x^3 + x^2 + x - 1, whose sequence of coefficient signs is [- + + -], so there are at most two negative roots. Indeed, f(x) = (x+1)^2 (x-1), and the roots are -1, with multiplicity 2, and 1.
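Since the exclusion test V(b) = 0 used later amounts to counting sign changes in a coefficient vector, here is a minimal C helper of my own (not from the slides) that computes V:

```c
#include <stdio.h>

/* Count sign changes in a coefficient vector, ignoring zero coefficients.
   This is the quantity V(b) in Descartes' rule of signs. */
static int sign_changes(const double *b, int len)
{
    int changes = 0;
    int prev = 0;                          /* sign of last nonzero coefficient */
    for (int i = 0; i < len; ++i) {
        int s = (b[i] > 0.0) - (b[i] < 0.0);
        if (s != 0) {
            if (prev != 0 && s != prev)
                ++changes;
            prev = s;
        }
    }
    return changes;
}

int main(void)
{
    /* coefficients of f(x) = x^3 + x^2 - x - 1, constant term first */
    double f[] = { -1.0, -1.0, 1.0, 1.0 };
    printf("V = %d\n", sign_changes(f, 4));   /* prints V = 1 */
    return 0;
}
```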

If we extend the representation to the multivariate case, any polynomial f(x_1, \dots, x_n) ∈ 𝕂[x_1, \dots, x_n] of degree d_i in x_i can be represented as follows:

f(x_1, \dots, x_n) = \sum_{i_1=0}^{d_1} \cdots \sum_{i_n=0}^{d_n} b_{i_1,\dots,i_n}\, B_{d_1}^{i_1}(x_1; a_1, b_1) \cdots B_{d_n}^{i_n}(x_n; a_n, b_n).

definition For any polynomial f(x_1, \dots, x_n) ∈ 𝕂[x_1, \dots, x_n] and j = 1, 2, \dots, n:

m_j(f; x_j) = \sum_{i_j=0}^{d_j} \Big( \min_{0 \le i_k \le d_k,\, k \ne j} b_{i_1,\dots,i_n} \Big)\, B_{d_j}^{i_j}(x_j; a_j, b_j)

M_j(f; x_j) = \sum_{i_j=0}^{d_j} \Big( \max_{0 \le i_k \le d_k,\, k \ne j} b_{i_1,\dots,i_n} \Big)\, B_{d_j}^{i_j}(x_j; a_j, b_j)
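For concreteness, a small C sketch of my own (assuming a bivariate polynomial whose control coefficients are stored row by row) showing how the coefficients of m_1(f; x_1) and M_1(f; x_1) are obtained by taking the min/max over the other index:

```c
#include <stddef.h>

/* Bivariate case: b[i1*(d2+1) + i2] are the Bernstein control coefficients
   of f, 0 <= i1 <= d1, 0 <= i2 <= d2.  The Bernstein coefficients of the
   enveloping univariate polynomials m_1(f; x_1) and M_1(f; x_1) are the
   min/max over i2 for each fixed i1. */
void project_x1(const double *b, size_t d1, size_t d2,
                double *m_coef, double *M_coef)
{
    for (size_t i1 = 0; i1 <= d1; ++i1) {
        double lo = b[i1 * (d2 + 1)];
        double hi = lo;
        for (size_t i2 = 1; i2 <= d2; ++i2) {
            double v = b[i1 * (d2 + 1) + i2];
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        m_coef[i1] = lo;   /* i1-th coefficient of m_1(f; x_1) */
        M_coef[i1] = hi;   /* i1-th coefficient of M_1(f; x_1) */
    }
}
```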

The picture illustrates the projection of the control points and the enveloping univariate polynomials.

Projection Lemma For any u = (u_1, \dots, u_n) ∈ D and j = 1, 2, \dots, n we have

m_j(f; u_j) \le f(u) \le M_j(f; u_j).

Projection Lemma - proof First, recall that we have previously shown that for any k = 1, \dots, n, \sum_{i_k=0}^{d_k} B_{d_k}^{i_k}(u_k; a_k, b_k) = 1. Then:

f(u) = \sum_{i_1=0}^{d_1} \cdots \sum_{i_n=0}^{d_n} b_{i_1,\dots,i_n}\, B_{d_1}^{i_1}(u_1; a_1, b_1) \cdots B_{d_n}^{i_n}(u_n; a_n, b_n)
\le \Big( \sum_{i_j=0}^{d_j} \max_{0 \le i_k \le d_k,\, k \ne j} b_{i_1,\dots,i_n}\, B_{d_j}^{i_j}(u_j; a_j, b_j) \Big) \prod_{k \ne j} \sum_{i_k=0}^{d_k} B_{d_k}^{i_k}(u_k; a_k, b_k) = M_j(f; u_j).

In the same way we can show the other direction, the lower bound by the minimum polynomial.

Corollary For any root ξ = (ξ_1, \dots, ξ_n) ∈ ℝ^n of the equation f(x) = 0 in the domain D, we have \mu_j \le ξ_j \le \bar{\mu}_j, where: \mu_j is a root of m_j(f; x_j) = 0 and \bar{\mu}_j is a root of M_j(f; x_j) = 0 in [a_j, b_j]; \mu_j = a_j (resp. \bar{\mu}_j = b_j) if m_j(f; x_j) = 0 (resp. M_j(f; x_j) = 0) has no root in [a_j, b_j]. In particular, there is no root in the domain if M_j(f; x_j) < 0 or m_j(f; x_j) > 0 on all of [a_j, b_j].

Univariate root solver Our approach is based on an efficient univariate root finder. Common methods for approximating a root are based on bisection: we split the interval into two sub-intervals and select the sub-interval containing the root. The division can be done in several ways:

Method #1 - in the Bernstein basis, using the de Casteljau algorithm This is a recursive algorithm that allows us to evaluate a Bernstein-basis polynomial. The algorithm is based on the formula:

b_i^{(0)} = b_i, \quad i = 0, \dots, d
b_i^{(r)} = (1-t)\, b_i^{(r-1)} + t\, b_{i+1}^{(r-1)}, \quad i = 0, \dots, d-r

The coefficients b_i^{(r)} at a given level are obtained as the (1-t, t) barycenter of two consecutive coefficients b_i^{(r-1)}, b_{i+1}^{(r-1)} of the previous level. We repeat the process until we reach a single point. The intermediate coefficients also provide the Bernstein coefficients of the polynomial on the two sub-intervals, which is what makes this method suitable for interval splitting.

Method #1 - in the Bernstein basis, using the de Casteljau algorithm C# implementation: the algorithm requires O(d^2) arithmetic operations.
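The slide's C# listing is not reproduced in the transcript; a minimal C sketch of the same evaluation idea (my own illustration) is:

```c
/* Evaluate a degree-d Bernstein polynomial on [0,1] at parameter t by the
   de Casteljau algorithm.  b[0..d] holds the control coefficients and is
   used as scratch space (overwritten).  Requires O(d^2) operations. */
double de_casteljau(double *b, int d, double t)
{
    for (int r = 1; r <= d; ++r)              /* d levels                   */
        for (int i = 0; i <= d - r; ++i)      /* barycenters of neighbours  */
            b[i] = (1.0 - t) * b[i] + t * b[i + 1];
    return b[0];                              /* the value f(t)             */
}
```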

Method #2 - in the monomial basis, using Horner's method For the polynomial p(x) = \sum_{i=0}^{n} a_i x^i and some point x_0, we perform the following steps:

b_n = a_n
b_{n-1} = a_{n-1} + b_n x_0
\vdots
b_0 = a_0 + b_1 x_0

and get p(x_0) = b_0.

Example - Horner's method Take the polynomial f(x) = 2x^3 - 6x^2 + 2x - 1 and the point x_0 = 3. Indeed, f(3) = 5.
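The slide shows the synthetic-division table as an image; the worked steps (my own arithmetic, following the recurrence above) are:

```latex
b_3 = 2,\qquad
b_2 = -6 + 2\cdot 3 = 0,\qquad
b_1 = 2 + 0\cdot 3 = 2,\qquad
b_0 = -1 + 2\cdot 3 = 5 = f(3).
```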

Example - Horner's method The algorithm also gives us additional important information. If we divide f(x) by (x - x_0), the remainder of the division is the last coefficient in the bottom row, and the other coefficients represent a polynomial of one degree less, which is the quotient of the division. In our example we get:

2x^3 - 6x^2 + 2x - 1 = (2x^2 + 2)(x - 3) + 5.

Horner's method C implementation: the algorithm requires O(d) arithmetic operations.
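The C listing itself did not survive transcription; a minimal sketch of Horner evaluation (my own, not the slide's code) is:

```c
/* Evaluate p(x) = a[0] + a[1]*x + ... + a[n]*x^n at x0 by Horner's rule.
   Requires O(n) arithmetic operations. */
double horner(const double *a, int n, double x0)
{
    double b = a[n];                   /* b_n = a_n                  */
    for (int i = n - 1; i >= 0; --i)   /* b_i = a_i + b_{i+1} * x0   */
        b = a[i] + b * x0;
    return b;                          /* p(x0) = b_0                */
}
```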

How to use evaluation methods for root finding For a polynomial p_n(x) with roots z_n < \dots < z_1, we run the following algorithm (a sketch is given below): find z_1 by another method starting from an initial guess (e.g. Newton-Raphson); use Horner's method to deflate, p_{n-1}(x) = p_n(x) / (x - z_1); go back to the first step with p_{n-1} (z_1 can serve as the new initial guess); repeat until all the roots have been found.
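A compact C sketch of this Newton-plus-deflation loop, written by me for illustration (no safeguards against divergence or clustered roots):

```c
#include <math.h>

/* One Newton-Raphson root of p (coefficients a[0..n], constant term first),
   starting from the guess x.  p and p' are both evaluated by Horner. */
static double newton_root(const double *a, int n, double x)
{
    for (int it = 0; it < 100; ++it) {
        double p = a[n], dp = 0.0;
        for (int i = n - 1; i >= 0; --i) {   /* Horner for p and p'       */
            dp = dp * x + p;
            p  = p  * x + a[i];
        }
        if (fabs(dp) < 1e-30) break;         /* flat tangent: give up     */
        double step = p / dp;
        x -= step;
        if (fabs(step) < 1e-12) break;       /* converged                 */
    }
    return x;
}

/* Deflate in place: a[0..n-1] <- coefficients of p(x) / (x - z). */
static void deflate(double *a, int n, double z)
{
    double b = a[n];                 /* leading quotient coefficient       */
    for (int i = n - 1; i >= 0; --i) {
        double t = a[i];
        a[i] = b;                    /* store quotient coefficient of x^i  */
        b = t + b * z;               /* synthetic division step            */
    }
    /* after the loop, b is the remainder p(z), ideally ~0 */
}
```

Calling newton_root, then deflate, and repeating on the degree-reduced polynomial recovers the roots one by one.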

Method #3 - a secant-like method A two-point method in which we draw a line between two points on the graph; the position where this secant intersects the x-axis becomes our new point. We repeat the steps iteratively. C implementation:
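The slide's C code is not in the transcript; a minimal secant-iteration sketch of my own:

```c
#include <math.h>

/* Secant iteration for f(x) = 0, starting from two points x0, x1.
   f can be any univariate function, e.g. a polynomial evaluated by Horner. */
double secant(double (*f)(double), double x0, double x1,
              double eps, int max_iter)
{
    double f0 = f(x0), f1 = f(x1);
    for (int it = 0; it < max_iter; ++it) {
        if (fabs(f1 - f0) < 1e-30) break;            /* nearly flat secant   */
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0); /* secant's x-intercept */
        if (fabs(x2 - x1) < eps) return x2;
        x0 = x1; f0 = f1;
        x1 = x2; f1 = f(x2);
    }
    return x1;
}
```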

Method #4 Computing iteratively the first intersection of the convex hull of the control polygon with the x-axis, and subdividing the polynomial representation at this point.

Method #5 - Newton-like method in the monomial basis A single-point method in which we split the domain at the point where the tangent cuts the x-axis. If there is no such point, we split in the middle of the interval.
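A minimal sketch of one such splitting step (my own illustration, not the slides' code): take the tangent's x-intercept at the midpoint if it lands inside the interval, otherwise fall back to plain bisection.

```c
#include <math.h>

/* One splitting step of a Newton-like bisection on [a, b]: return the split
   point, preferring the x-intercept of the tangent at the midpoint when it
   falls inside the interval, and the midpoint itself otherwise. */
double newton_split(double (*f)(double), double (*df)(double),
                    double a, double b)
{
    double m = 0.5 * (a + b);
    double d = df(m);
    if (fabs(d) > 1e-30) {
        double t = m - f(m) / d;       /* where the tangent cuts the x-axis */
        if (t > a && t < b)
            return t;
    }
    return m;                          /* fallback: plain bisection */
}
```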

Experiments on polynomials with random roots show the superiority of the Horner and Newton iterations (Methods 2 and 5). These methods allow us to solve more than 10^6 equations with a precision of ε = 10^{-12} (the experiments were performed on an Intel Pentium 4 2.0 GHz processor). Newton outperforms the Horner-based method in "simple" situations.

Multivariate root finding Now we discuss a system of s polynomial equations in n variables with coefficients in ℝ: f_1(x_1, \dots, x_n) = 0, \dots, f_s(x_1, \dots, x_n) = 0. We are looking for an approximation of the real roots of f(x) = 0 in the domain D = [a_1, b_1] \times \dots \times [a_n, b_n] with precision ε.

General schema of the subdivision algorithm Step #1 - applying a preconditioning step to the equations. Step #2 - reducing the domain. Step #3 - if the reduction ratio is too small, splitting the domain.

We start from polynomials with exact rational coefficients, convert them into the Bernstein basis using exact arithmetic, and round their coefficients up and down to the closest machine-precision floating point numbers. The preconditioning, reduction and subdivision steps are then performed on the enveloping polynomials obtained this way. All along the algorithm, the enveloping polynomials remain upper and lower bounds of the actual polynomials.

General schema of the subdivision algorithm Step #1 - applying a preconditioning step to the equations. Step #2 - reducing the domain. Step #3 - if the reduction ratio is too small, splitting the domain.

Preconditioner First, we multiply the equation system by an s × s matrix (i.e. we take linear combinations of the equations). Such a transformation may increase the degree of some equations, and if the system is sparse it may destroy the sparsity. To avoid this, we might prefer a partial preconditioner acting on subsets of the equations that share a subset of variables. For simplicity, let us suppose that the polynomials f_1, \dots, f_s are already expressed in the Bernstein basis. We now discuss two types of transformations:

Step #1 - applying a preconditioning step to the equations: global transformation, or local straightening.

Global transformation A typical difficult situation appears when two of the functions have similar graphs in domain D. A way to avoid such a situation is to transform these equations in order to increase the differences between them.

Global transformation For f, g ∈ ℝ[x], let dist_2(f, g) = \|f - g\|_2. In order to use this formula, we define a norm on polynomials in the Bernstein basis B in the following way:

\|f\|_2^2 = \sum_{0 \le i_1 \le d_1, \dots, 0 \le i_n \le d_n} \big( b^f_{i_1,\dots,i_n} \big)^2,

where b^f is the vector of control coefficients of the function f. This norm on the vector space of polynomials generated by the basis B is associated to a scalar product that we denote by ⟨ | ⟩.

Global transformation Goal: improve the angle between the vectors by creating a system that is orthogonal for <|>.

Proposition Let Q = ( ⟨f_i | f_j⟩ )_{1 \le i,j \le s} and let E be a matrix of unitary eigenvectors of Q. Then \tilde{f} = E^t f is a system of polynomials which are orthogonal for the scalar product ⟨ | ⟩.

proof Let \tilde{f} = E^t f = (\tilde{f}_1, \dots, \tilde{f}_s). Then the matrix of scalar products ( ⟨\tilde{f}_i | \tilde{f}_j⟩ ) is

\tilde{f}\,\tilde{f}^t = (E^t f)(E^t f)^t = E^t f f^t E = E^t Q E = diag(\sigma_1, \dots, \sigma_s),

where \sigma_1, \dots, \sigma_s are the positive eigenvalues of Q. This shows that the system \tilde{f} is orthogonal for the scalar product ⟨ | ⟩.

The picture illustrates the impact of the global preconditioner on two bivariate functions whose graphs are very close to each other before the preconditioning, and well separated after this preconditioning step.

Local straightening We consider square systems, for which s = n. Since we are going to use the projection Lemma, the interesting situation for reduction steps is when the zero-level set of each function f_i is orthogonal to the x_i-direction. We illustrate this remark:

Local straightening In case (a), the reduction based on the corollary of the projection Lemma is of no use: the projections of the graphs cover the whole intervals. In case (b), a good reduction strategy will yield a good approximation of the roots.

The idea of this preconditioner is to transform the system so as to be close to case (b). We transform the system f locally into the system \tilde{f} = J_f^{-1}(u_0)\, f, where J_f(u_0) = ( \partial_{x_i} f_j(u_0) )_{1 \le i,j \le s} is the Jacobian matrix of f at the point u_0 ∈ D. A direct computation shows that locally (in a neighborhood of u_0) the level-sets of the transformed functions \tilde{f}_i (i = 1, \dots, n) are orthogonal to the x_i-axes.

General schema of the subdivision algorithm Step #1 - applying a preconditioning step to the equations. Step #2 - reducing the domain. Step #3 - if the reduction ratio is too small, splitting the domain.

Reducing the domain A possible reduction strategy, in the given domain [a_j, b_j], is: find the first root \mu_j of the polynomial m_j(f_k; u_j); find the last root \bar{\mu}_j of the polynomial M_j(f_k; u_j); and reduce the domain to [\mu_j, \bar{\mu}_j], as justified by the corollary of the projection Lemma. An effective search for these roots results in a fast and efficient process.

Reducing the domain Another approach is IPP, the "Interval Projected Polyhedron" method. It is based on the convex hull property: we reduce the search domain for the root to the region where the convex hull of the control polygon cuts the axis.

The following picture shows the improvement of the first approach compared with IPP, which can lead to a significant change in the behavior of the algorithm.

General schema of the subdivision algorithm Step #1 - applying a preconditioning step to the equations. Step #2 - reducing the domain. Step #3 - if the reduction ratio is too small, splitting the domain.

Subdivision strategy We check the signs of the control coefficients of the f_i in the domain. If for one of the nonzero polynomials f_k its control coefficient vector has no sign change, then D does not contain any root and should be excluded. Otherwise, we split the domain in half in the direction j where |b_j - a_j| is maximal and larger than a given size (a schematic sketch follows).
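A schematic C sketch of this decision, written by me for illustration (the sign-change tests per equation are assumed to be computed as in the earlier helper):

```c
#include <stddef.h>

enum action { EXCLUDE_DOMAIN, SPLIT_DOMAIN, ACCEPT_DOMAIN };

/* Decide what to do with the box [a_j, b_j], j = 0..n-1.  For each of the
   n_eqs equations, has_sign_change[k] says whether its control coefficients
   change sign on the box and is_zero[k] whether it vanishes identically.
   eps is the minimal box size. */
enum action subdivide_step(const double *a, const double *b, size_t n,
                           const int *has_sign_change, const int *is_zero,
                           size_t n_eqs, double eps, size_t *split_dir)
{
    /* Exclusion test: a nonzero equation whose coefficients have constant
       sign cannot vanish on the box (convex hull property). */
    for (size_t k = 0; k < n_eqs; ++k)
        if (!is_zero[k] && !has_sign_change[k])
            return EXCLUDE_DOMAIN;

    /* Otherwise split along the widest direction, if still wide enough. */
    size_t jmax = 0;
    double wmax = b[0] - a[0];
    for (size_t j = 1; j < n; ++j)
        if (b[j] - a[j] > wmax) { wmax = b[j] - a[j]; jmax = j; }

    if (wmax > eps) { *split_dir = jmax; return SPLIT_DOMAIN; }
    return ACCEPT_DOMAIN;   /* small enough: report the box as a root box */
}
```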

Subdivision strategy Another variant of the method splits the domain in the direction j where b_j - a_j > ε and the coefficients of M_j(f_k; u_j) are not all positive and those of m_j(f_k; u_j) are not all negative. This method allows us to accept domains that are better suited to the geometry of the roots, but a post-processing step for gluing together connected domains may be required.

Good to know: the algorithm we presented was implemented in the C++ library "synaps": http://www-sop.inria.fr/teams/galaad-static/

experiments Our objective in these experiments is to evaluate the impact of reduction approaches compared with subdivision techniques, with and without preconditioning. The methods that we compare are the following: sbd stands for subdivision; rd stands for reduction; sbds stands for subdivision using the global preconditioner; rds stands for reduction using the global preconditioner; rdl stands for reduction using the local preconditioner.

For each method and example, we present the following data: the number of iterations in the process, the number of subdivision steps, the number of domains computed, the time in milliseconds on an Intel Pentium 4 2.0 GHz with 512 MB RAM, and the size of the result.

Details: The most interesting characteristic for us is the number of iterations. Simple roots are found using bisection; changing the algorithm can improve the running time, but it will not change the number of iterations. The subdivision rules are based on splitting along the largest variable. The calculations are done with an accuracy of 10^{-6}.

results Cases #1 and #2: implicit curve intersection problems defined by bi-homogeneous polynomials.

results Cases #3 and #4: intersection of two curves.

results Case #5: self-intersection. Case #6: rational curves.

Analysis of results The superiority of the reduction approach was observed, mainly reduction with the local preconditioner (rdl), which converged in the lowest number of iterations. In complex cases (case 4, for example) the reduction and subdivision methods with preconditioning (sbds, rdl, rds) have better performance than the classical reduction or subdivision (rd, sbd).

Analysis of results In most examples only rds and rdl provide a good answer. In these examples, the maximal difference between the coefficients of the upper and the lower enveloping polynomials (which indicates potential instability during the computations) remains much smaller than the precision we expect on the roots. Since our reduction principle is based on projection, difficulties may arise when we increase the number of variables in our systems.

Another experiment presents problems in 6 variables of degree ≤ 2 that come from the robotics community. In this example we keep the three reduction methods rd, rds, rdl, and add combinations of them that use projections: before and after local straightening (rd + rdl); before and after global preconditioning (rd + rds); all three techniques (rd + rdl + rds).

The combination of projections is an improvement, and the global preconditioner tends to be better than local straightening. If we look at the first table, we see that the numbers of iterations of rdl and of rd are close, but this is not the case for rds. The reason is that rdl uses local information (a Jacobian evaluation) while rds uses global information computed from all the coefficients in the system.

We first observe that both subdivision and reduction methods are poor solutions if they are not preconditioned. However, when the same preconditioner is used, reduction will, most of the time, beat subdivision.

Conclusion We presented algorithms for solving polynomial equations based on subdivision methods. The innovation in this approach is composed of two main additions: integration of preconditioning steps that transform the system in order to achieve convergence in fewer iterations, and improved use of reduction strategies. We have shown how to improve the reduction strategy by basing it on efficient root-finding methods for the projected univariate polynomials.

Conclusion In the past, articles in the field have argued that the reduction strategy is not interesting because these methods do not prevent many subdivisions. In our experiments it can clearly be seen that reduction may save many subdivisions. We emphasize that, despite the many advantages of the reduction strategy, the experiments demonstrate that it is better to use a preconditioned subdivision than a pure reduction.

In conclusion, we need to understand what kind of problem we are dealing with and adapt the best strategies to solve it with the tools we have acquired today.