1
Classical mathematics and new challenges
Theorems and Algorithms
László Lovász
Microsoft Research, One Microsoft Way, Redmond, WA 98052
lovasz@microsoft.com
2
Algorithmic vs. structural mathematics
Ancient and classical algorithms:
- geometric constructions
- Euclidean algorithm
- Newton's method
- Gaussian elimination
3
An example: diophantine approximation and continued fractions
Given a real number α, find a rational approximation p/q such that |α − p/q| < 1/q², using the continued fraction expansion of α.
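The convergents of the continued fraction expansion supply exactly such approximations. A minimal sketch (the function names are mine, not from the slides):

```python
from fractions import Fraction
from math import floor

def continued_fraction(x, n_terms):
    """Return the first n_terms partial quotients of x's continued fraction."""
    terms = []
    for _ in range(n_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def convergents(terms):
    """Yield the convergents p/q built from the partial quotients via the
    standard recurrence p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}."""
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)
```

For √2 the partial quotients are [1, 2, 2, 2, …], giving the convergents 1, 3/2, 7/5, 17/12, 41/29, 99/70, …, and each satisfies the 1/q² bound above.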
4
A mini-history of algorithms
30's: the mathematical notion of an algorithm
Church, Turing, Post: recursive functions, λ-calculus, Turing machines
Church, Gödel: algorithmic and logical undecidability
5
50's, 60's: Computers
The significance of running time: simple and complex problems.
Simple: sorting, searching, arithmetic, …
Complex: Travelling Salesman, matching, network flows, factoring, …
6
Late 60's–80's: Complexity theory
P = NP?
Time, space, information complexity
Polynomial hierarchy
Nondeterminism, good characterization, completeness
Randomization, parallelism
Classification of many real-life problems into P vs. NP-complete
7
90's: Increasing sophistication
Upper and lower bounds on complexity: algorithms and negative results
Factoring, volume computation, semidefinite optimization
Tools from topology, algebraic geometry, coding theory
8
Highlights of the 90's:
Approximation algorithms: positive and negative results
Probabilistic algorithms: Markov chains, high concentration, nibble methods, phase transitions
Pseudorandom number generators: from art to science, theory and constructions
9
Approximation algorithms: the Max Cut problem
Given a graph, partition its nodes into two classes so as to maximize the number of edges between the two classes.
NP-hard. … Approximations?
10
Easy with 50% error (Erdős ~'65)
Polynomial with 12% error: semidefinite optimization (Goemans-Williamson '93)
NP-hard with 6% error: interactive proof systems, PCP (Arora-Lund-Motwani-Sudan-Szegedy '92; Håstad)
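The 50%-error bound is elementary: a simple local search (move any vertex whose flip increases the cut) terminates with a cut containing at least half of all edges. A hedged sketch, not the slides' own algorithm:

```python
import random

def local_search_max_cut(n, edges, seed=None):
    """Local-search cut on a graph with nodes 0..n-1: repeatedly flip any
    vertex with more same-side than cut neighbors. At a local optimum every
    vertex has at least half its edges in the cut, so the cut has >= |E|/2
    edges (the 50%-error guarantee)."""
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut = sum(1 for u in adj[v] if side[u] != side[v])
            if len(adj[v]) - cut > cut:  # flipping v gains edges in the cut
                side[v] = not side[v]
                improved = True
    cut_size = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut_size
```

On K₄ (6 edges, maximum cut 4) any starting assignment converges to a balanced 2-2 partition cutting all 4 cross edges. The 12% bound of Goemans-Williamson requires solving a semidefinite program and is not sketched here.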
11
Algorithms and probability
Randomized algorithms (making coin flips): difficult to analyze; important applications (primality testing, integration, optimization, volume computation, simulation).
Algorithms with stochastic input: even more difficult to analyze; even more important applications.
Difficulty: after a few iterations, complicated functions of the original random variables arise.
12
New methods in probability: strong concentration (Talagrand)
Laws of Large Numbers: sums of independent random variables are strongly concentrated.
General strong concentration: very general "smooth" functions of independent random variables are strongly concentrated.
Nibble, martingales, rapidly mixing Markov chains, …
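The Law-of-Large-Numbers case can be seen empirically: a sum of n independent ±1 steps ranges over [−n, n], yet its typical deviation is only about √n. A small simulation (my own illustration, not from the slides):

```python
import random
import statistics

def spread_of_sum(n, trials, seed=0):
    """Empirical standard deviation of S_n = sum of n independent +-1 steps.
    Concentration: the spread is about sqrt(n), far below the full range 2n."""
    rng = random.Random(seed)
    sums = [sum(rng.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]
    return statistics.pstdev(sums)
```

For n = 400 the empirical spread comes out near √400 = 20, while the range of possible values is 800. Talagrand-type inequalities extend this picture from sums to general smooth functions of the coordinates.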
13
Example
Want: a set S of vectors in F_q³ such that
- any 3 of them are linearly independent,
- every vector is a linear combination of 2 of them.
Few vectors: O(√q)? (was open for 30 years)
Kim-Vu: every finite projective plane of order q has a complete arc of size √q · polylog(q).
14
First idea: use algebraic constructions (conics, …); gives only about q.
Second idea: choose at random. ?????
Solution: Rödl nibble + strong concentration results.
15
Driving forces for the next decade
- New areas of applications
- The study of very large structures
- More tools from classical areas of mathematics
16
New areas of application: interaction between discrete and continuous
Biology: genetic code, population dynamics, protein folding
Physics: elementary particles, quarks, etc. (Feynman graphs); statistical mechanics (graph theory, discrete probability)
Economics: indivisibilities (integer programming, game theory)
Computing: algorithms, complexity, databases, networks, VLSI, …
17
Very large structures
- internet, VLSI, databases
- genetic code, brain, animal, ecosystem, economy, society
How to model them? Non-constant but stable; partly random.
18
Very large structures: how to model them?
Graph minors (Robertson, Seymour, Thomas): if a graph does not contain a given minor, then it is essentially a 1-dimensional structure (a tree-decomposition) of essentially 2-dimensional pieces (embeddable in a fixed surface, except for "fringes" of bounded depth), up to a bounded number of additional nodes.
19
Very large structures: how to model them?
Regularity Lemma (Szemerédi '74): the nodes of a graph can be partitioned into a bounded number of essentially equal parts (sizes differing by at most 1) so that almost all bipartite graphs between 2 parts are essentially random (with different densities). Given ε > 0 and k > 1, the number of parts is between k and f(k, ε); at most εk² pairs of parts are exceptional; and for subsets X, Y of two non-exceptional parts, the number of edges between X and Y is p|X||Y| ± εn².
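The "essentially random" condition can be made concrete: a pair of parts (A, B) is ε-regular if every pair of large subsets X ⊆ A, Y ⊆ B has edge density close to d(A, B). A heuristic checker under my own simplifications (random subsets instead of all subsets, so it can only give evidence of regularity, not a proof):

```python
import random

def density(edges, X, Y):
    """Edge density d(X, Y) = e(X, Y) / (|X| |Y|) between disjoint node sets."""
    Xs, Ys = set(X), set(Y)
    e = sum(1 for u, v in edges if (u in Xs and v in Ys) or (u in Ys and v in Xs))
    return e / (len(Xs) * len(Ys))

def looks_regular(edges, A, B, eps, samples=200, seed=0):
    """Sampled test of eps-regularity: draw random subsets X of A and Y of B
    of size about eps*|A| and eps*|B|, and check |d(X,Y) - d(A,B)| <= eps.
    A true check would quantify over all large subsets."""
    rng = random.Random(seed)
    d = density(edges, A, B)
    for _ in range(samples):
        X = rng.sample(list(A), max(1, int(eps * len(A))))
        Y = rng.sample(list(B), max(1, int(eps * len(B))))
        if abs(density(edges, X, Y) - d) > eps:
            return False
    return True
```

A bipartite graph whose edges are placed independently with a fixed probability passes this test easily, which is the sense in which the lemma's parts behave like random bipartite graphs.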
20
Very large structures
- internet, VLSI, databases
- genetic code, brain, animal, ecosystem, economy, society
How to model them? How to handle them algorithmically?
Heuristics / approximation algorithms; linear time algorithms; sublinear time algorithms (sampling).
A complexity theory of linear time?
21
More and more tools from classical math
Example: volume computation
Given: a convex body K ⊆ Rⁿ, by a membership oracle.
Want: the volume of K, with relative error ε.
Not possible in deterministic polynomial time, even if ε = n^(cn) (Elekes; Bárány-Füredi).
Possible in randomized polynomial time, for arbitrarily small ε (Dyer-Frieze-Kannan).
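In fixed low dimension the problem is easy by naive Monte Carlo: sample uniformly from a bounding cube and count oracle hits. The sketch below (my own illustration) also hints at why this fails in high dimension: the hit rate decays exponentially with the dimension, which is what forces the Markov chain machinery of the randomized algorithm.

```python
import random
from math import pi

def mc_volume(indicator, dim, box_side, samples, seed=0):
    """Naive Monte Carlo volume estimate: sample uniformly from the cube
    [-box_side, box_side]^dim and scale the hit fraction by the cube volume.
    Only usable in low dimension; the hit rate shrinks exponentially in dim."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.uniform(-box_side, box_side) for _ in range(dim)]
        if indicator(x):
            hits += 1
    return (2 * box_side) ** dim * hits / samples

def unit_ball(x):
    """Membership oracle for the Euclidean unit ball (true volume 4*pi/3 in R^3)."""
    return sum(t * t for t in x) <= 1.0
```

With a few hundred thousand samples the estimate for the unit ball in R³ lands close to 4π/3 ≈ 4.19.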
22
Complexity: for self-reducible problems, counting ≈ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies. (The error of any deterministic algorithm must be exponential in n.)
Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair). Enough to estimate the mixing rate of a random walk on a lattice in K.
Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon; Jerrum-Sinclair).
Probability: use the eigenvalue gap.
Differential geometry: isoperimetric inequality (Dyer-Frieze-Kannan 1989). Enough to prove an isoperimetric inequality for subsets of K.
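The walks whose mixing rates these tools control are simple to state. A toy version of the ball walk (a simplified sketch, not the analyzed algorithm itself): from the current point, propose a uniform point in a small ball around it and move there if the membership oracle accepts.

```python
import random

def ball_walk(indicator, start, delta, steps, seed=0):
    """One trajectory of a ball walk in a convex body K given by a membership
    oracle: from x, propose y uniform in the delta-ball around x; move if
    y is in K, else stay. Isoperimetric inequalities for K bound how fast
    this walk approaches the uniform distribution."""
    rng = random.Random(seed)
    x = list(start)
    dim = len(x)
    for _ in range(steps):
        # uniform proposal in the delta-ball, by rejection from the cube
        while True:
            step = [rng.uniform(-delta, delta) for _ in range(dim)]
            if sum(s * s for s in step) <= delta * delta:
                break
        y = [xi + si for xi, si in zip(x, step)]
        if indicator(y):
            x = y
    return x

def in_unit_ball(x):
    """Example body: the Euclidean unit ball, given only by membership."""
    return sum(t * t for t in x) <= 1.0
```

The walk never leaves K by construction; the hard part, bounding how many steps suffice before the current point is nearly uniform, is exactly what the conductance and isoperimetry results above deliver.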
27
Differential equations: bounds on the Poincaré constant (Payne-Weinberger); bisection method, improved isoperimetric inequality (LL-Simonovits 1990)
Log-concave functions: reduction to integration (Applegate-Kannan 1992)
Convex geometry: ball walk (LL 1992)
Statistics: better error handling (Dyer-Frieze 1993)
Optimization: better preprocessing (LL-Simonovits 1995)
Functional analysis: isotropic position of convex bodies; achieving isotropic position (Kannan-LL-Simonovits 1998)
28
Geometry: projective (Hilbert) distance, affine invariant isoperimetric inequality, analysis of the hit-and-run walk (LL 1999)
Differential equations: log-Sobolev inequality, elimination of the "start penalty" for the lattice walk (Frieze-Kannan 1999); log-Cheeger inequality, elimination of the "start penalty" for the ball walk (Kannan-LL 1999)
Scientific computing: non-reversible chains mix better; lifting (Diaconis-Holmes-Neal; Feng-LL-Pak); walk with inertia (Aspnes-Kannan-LL)
29
More and more tools from classical math
Linear algebra: eigenvalues, semidefinite optimization, higher incidence matrices, homology theory
Geometry: geometric representations, convexity
Analysis: generating functions, Fourier analysis, quantum computing
Number theory: cryptography
Topology, group theory, algebraic geometry, special functions, differential equations, …