
1 Randomness in Computation
Example 1: Breaking symmetry.
Example 2: Finding witnesses.
Example 3: Monte Carlo integration.
Example 4: Approximation algorithms.
Example 5: The probabilistic method.

2 QuickSort
QuickSort(a){
  if(length(a) <= 1) return a;
  x = select(a);
  first = QuickSort(less(a,x));
  last = QuickSort(greater(a,x));
  return first.x.last;
}
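As a concrete illustration (not from the slides), the pseudocode can be written as a short Python function: `less`/`greater` become list comprehensions, and `select` is the random-pivot rule discussed on the next slide.

```python
import random

def quicksort(a):
    """Randomized QuickSort: expected O(n log n) comparisons on any input."""
    if len(a) <= 1:
        return a
    x = random.choice(a)                       # select(a) = a[random(1..n)]
    less = [v for v in a if v < x]
    equal = [v for v in a if v == x]
    greater = [v for v in a if v > x]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))     # [1, 1, 2, 3, 4, 5, 6, 9]
```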

3 Variations of QuickSort
select(a) = a[1] (or a[n/2]): worst case O(n²) time, average case O(n log n) time.
select(a) = a[random(1..n)]: worst case expected O(n log n) time.
select(a) = median(a), found deterministically: worst case O(n log n) time. A derandomization.

4 QuickSort
Designing the derandomized version of QuickSort required detailed knowledge of randomized QuickSort and its analysis. Can this be avoided?
Even though we had to be clever, we did not redesign the algorithm: we merely replaced the random element with a specific element. This will be typical.
Derandomized QuickSort retains the asymptotic complexity of randomized QuickSort but is less practical. Sad but seemingly unavoidable; in general we will even be prepared to accept a polynomial slowdown.

5 The Simplex Method for Linear Programming

6 The Simplex Method Pivoting rule: Rule for determining which neighbor corner to go to. Several rules: Largest Coefficient, Largest Increase, Smallest Subscript,... All known deterministic rules have worst case exponential behavior. Random Pivoting is conjectured to be worst case expected polynomial.

7 The Simplex Method
Can we prove that Random Pivoting leads to an expected polynomial time simplex method? A very deep and difficult question, related to Hirsch's conjecture.
If Random Pivoting is expected polynomial, can we then find a polynomial deterministic pivoting rule? To be dealt with!

8 Example 2: Finding Witnesses
Is (x-y)(x+y) = (x·x) – (y·y)?
Is (x-y)(x+y)(z-u)+u-(v+u)(x-y)+uz = (y+z)(u-v)+u+(uv-uz)+(u-v)?

9 Equivalence of Arithmetic Expressions Obvious Deterministic Algorithm: Convert to sum-of-terms by applying distributive law. Collect terms. Compare. Exponential complexity.

10 Randomized Algorithm
Let n be the total length of the expressions. Pick random values a, b, c, … in {1,…,10n} for the variables. If the two expressions evaluate to equal values, say "yes", otherwise say "no". Polynomial complexity.

11 Analysis (a,b,c,…) is a potential witness of the inequivalence of the expressions. If the expressions are equivalent, the procedure says “yes” with probability 1. If they are not, the procedure says “no” with probability at least 9/10 by the Schwartz-Zippel Lemma. The procedure has one-sided error probability at most 1/10. Arithmetic Expression Equivalence is in coRP.

12 Amplification by repetition
Repeat t times with independently chosen values. Return "no" if at least one test returns "no". If the expressions are equivalent, the procedure returns "yes" with probability 1. If they are not equivalent, the procedure returns "yes" with probability at most 10⁻ᵗ.
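A minimal Python sketch of the test with amplification by repetition (the expressions are supplied as functions; the choice of t = 20 rounds is illustrative, not from the slides):

```python
import random

def equivalent(f, g, nvars, n, t=20):
    """One-sided randomized equivalence test: returns False only when a
    witness to inequivalence is found. By the Schwartz-Zippel Lemma, each
    round with values from {1,...,10n} errs with probability <= 1/10, so
    t independent rounds push the error below (1/10)**t."""
    for _ in range(t):
        vals = [random.randint(1, 10 * n) for _ in range(nvars)]
        if f(*vals) != g(*vals):
            return False          # witness found: definitely not equivalent
    return True                   # probably equivalent

# (x-y)(x+y) vs x*x - y*y : equivalent, so every evaluation agrees.
print(equivalent(lambda x, y: (x - y) * (x + y),
                 lambda x, y: x * x - y * y, 2, 20))    # True
# (x+y)^2 vs x^2 + y^2 : the difference 2xy is nonzero on positive values.
print(equivalent(lambda x, y: (x + y) ** 2,
                 lambda x, y: x ** 2 + y ** 2, 2, 20))  # False
```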

13 Derandomizing RP algorithms
Can we find a polynomial time algorithm which always finds a witness or reports that none exists?
Can we achieve error probability 10⁻ᵗ using much less than ≈ t·r random bits, where r is the number of bits used to achieve error probability 1/10?

14 Complexity classes
Why will I not focus much on RP, BPP, coRP, ZPP, …? Equivalence of Arithmetic Expressions is essentially the only important decision problem for which we have an efficient Monte Carlo algorithm but no known efficient deterministic algorithm (Primality used to be the other example)!

15 Example 3: Monte Carlo Integration
What is the area of the region A? (Figure: a region A inside the unit square, with area approximately 4/10.)

16 Monte Carlo Integration
volume(A,m){
  c := 0;
  for(j=1; j<=m; j++)
    if(random(U) ∈ A) c++;
  return c/m;
}
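A runnable Python version of the estimator. The particular region A (the unit disc) and universe U (the square [-1,1]²) are illustrative choices, not from the slides:

```python
import random

def volume(in_A, sample_U, m):
    """Monte Carlo volume estimate: the fraction of m uniform samples
    from the universe U that land inside the region A."""
    c = 0
    for _ in range(m):
        if in_A(sample_U()):
            c += 1
    return c / m

# Region A = unit disc; universe U = the square [-1,1] x [-1,1], area 4.
in_disc = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
sample_square = lambda: (random.uniform(-1, 1), random.uniform(-1, 1))

frac = volume(in_disc, sample_square, 100_000)
print(4 * frac)   # close to pi = 3.14159...
```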

17 Chernoff Bound
Let X₁, X₂, …, Xₘ be independent 0-1 variables. Let X = Σᵢ Xᵢ and μ = E[X]. Then
Pr[|X − μ| ≥ d] ≤ 2e^(−2d²/m).

18 Analysis of Monte Carlo Integration
Let m = 10·(1/ε)²·log(1/δ). By the Chernoff bound, with probability at least 1 − δ, volume(A,m) correctly estimates the volume of A within additive error ε.

19 Derandomizing Monte Carlo Integration Can we find an efficient deterministic algorithm which estimates the volume of A within additive error 1/100 (with A given as “procedure parameter”)? Can we achieve error probability δ using much less than ≈ log(|U|) log(1/δ) random bits?

20 Complexity Classes P=BPP is not known to imply that Monte Carlo Integration can be derandomized.

21 Example 4: Approximation Algorithms and Heuristics
Randomized Approximation Algorithms: solve an (NP-hard) optimization problem with a specified expected approximation ratio. Example: the Goemans-Williamson MAXCUT algorithm.
Randomized Approximation Heuristics: no approximation ratio guarantee, but may do well on many natural instances.

22 What is the best Traveling Salesman Tour?

23 Local Search

24 Local Search

25 Local Search

26 Local Search
LocalSearch(x){
  y := feasible solution to x;
  while(exists z in N(y) with val(z) < val(y))
    y := z;
  return y;
}
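A minimal instantiation of this scheme for the Traveling Salesman example, assuming the well-known 2-opt neighborhood (neighbors of a tour are the tours obtained by reversing one segment); the four-city instance is an illustrative toy:

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def local_search_2opt(dist):
    """Local search for TSP with the 2-opt neighborhood: start from a random
    feasible tour and move to an improving neighbor until none exists."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)                  # y := feasible solution
    improved = True
    while improved:                       # while an improving neighbor exists
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                z = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # neighbor
                if tour_length(z, dist) < tour_length(tour, dist):
                    tour, improved = z, True
    return tour

# Four points on a unit square; the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
print(tour_length(local_search_2opt(dist), dist))   # 4.0
```

On this tiny instance 2-opt always reaches the optimum; on larger instances it can stop in a bad local optimum, which motivates the randomized variants on the next slides.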

27 Local Search Local Search often finds good solutions to instances of optimization problems, but may be trapped in bad local optima. Randomized versions of local search often perform better.

28 Simulated Annealing
SimulatedAnnealing(x, T){
  y := feasible solution to x;
  repeat{
    Pick a random neighbor z of y;
    y := z with probability min(1, exp((val(y)-val(z))/T));
  }until(tired);
  return the best y found;
}
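A Python sketch of this pseudocode with a geometric cooling schedule added; the toy 1-D landscape f, the ±1 neighborhood, and all parameter values are illustrative assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(x0, neighbors, val, T0=2.0, cooling=0.999, steps=5000):
    """Simulated annealing as in the pseudocode: always accept improvements,
    accept a worse neighbor z with probability exp((val(y)-val(z))/T),
    and return the best solution seen while the temperature T cools."""
    y = x0
    best = y
    T = T0
    for _ in range(steps):
        z = random.choice(neighbors(y))
        if random.random() < min(1.0, math.exp((val(y) - val(z)) / T)):
            y = z
        if val(y) < val(best):
            best = y              # remember the best y found
        T *= cooling              # gradually lower the temperature
    return best

# Toy landscape on {0,...,99}: a parabola plus ripples, so single-bit moves
# see many local minima; the global minimum lies near x = 50.
f = lambda x: (x - 50) ** 2 / 100 + 3 * math.cos(x)
nbrs = lambda x: [max(0, x - 1), min(99, x + 1)]
print(simulated_annealing(0, nbrs, f))   # often close to 50
```

At high temperature the uphill moves let the search escape the ripples; as T shrinks, the process behaves more and more like plain local search.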

29 Derandomizing approximation algorithms and heuristics. Can a randomized approximation algorithm with a known approximation ratio be converted into a deterministic approximation algorithm with similar ratio? Can a randomized approximation heuristic which behaves well on certain instances be converted into a deterministic heuristic which behaves well on the same instances?