CMPS 3120: Computational Geometry, Spring 2013. Expected Runtimes. Carola Wenk. (2/14/13)

Probability

Let S be a sample space of possible outcomes. An event is a subset E ⊆ S. The (Laplacian) probability of E is defined as P(E) = |E|/|S|, so P(s) = 1/|S| for every s ∈ S.

Example: Rolling a (six-sided) die. S = {1,2,3,4,5,6}, so P(2) = P({2}) = 1/|S| = 1/6. Let E = {2,6}. Then P(E) = 2/6 = 1/3 = P(rolling a 2 or a 6).

Note: This is a special case of a probability distribution. In general P(s) can be quite arbitrary. For a loaded die the probabilities could be, for example, P(6) = 1/2 and P(1) = P(2) = P(3) = P(4) = P(5) = 1/10.

In general, for any s ∈ S and any E ⊆ S:
0 ≤ P(s) ≤ 1,  Σ_{s∈S} P(s) = 1,  P(E) = Σ_{s∈E} P(s).
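The Laplacian probability P(E) = |E|/|S| can be checked with a short sketch (exact arithmetic via Python's fractions module; the function name is ours, not from the slides):

```python
from fractions import Fraction

def laplace_prob(event, sample_space):
    """P(E) = |E| / |S| for a uniform (Laplacian) distribution."""
    return Fraction(len(event), len(sample_space))

S = {1, 2, 3, 4, 5, 6}           # six-sided die
print(laplace_prob({2}, S))      # 1/6
print(laplace_prob({2, 6}, S))   # 1/3
```

Using Fraction instead of floats keeps the probabilities exact, matching the hand computation 2/6 = 1/3.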

Random Variable

A random variable X on S is a function from S to ℝ, written X: S → ℝ.

Example 1: Flip a coin three times. S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. Let X(s) = # heads in s. Then
X(HHH) = 3
X(HHT) = X(HTH) = X(THH) = 2
X(HTT) = X(THT) = X(TTH) = 1
X(TTT) = 0

Example 2: Play a game: win $5 when getting HHH, pay $1 otherwise. Let Y(s) be the win/loss for the outcome s. Then Y(HHH) = 5 and Y(HHT) = Y(HTH) = … = −1. What is the average win/loss?
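The two random variables above can be written out directly as functions on the enumerated sample space (a minimal sketch; the names X, Y, S follow the slides):

```python
from itertools import product

# Sample space: all 8 outcomes of three coin flips, e.g. 'HHT'
S = [''.join(flips) for flips in product('HT', repeat=3)]

def X(s):
    """Number of heads in outcome s."""
    return s.count('H')

def Y(s):
    """Win/loss: win $5 for HHH, pay $1 otherwise."""
    return 5 if s == 'HHH' else -1

print({s: X(s) for s in S})  # X(HHH)=3, X(HHT)=2, ..., X(TTT)=0
```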

Expected Value

The expected value of a random variable X: S → ℝ is defined as
E(X) = Σ_{s∈S} P(s)·X(s) = Σ_{x∈ℝ} P({X=x})·x

Example 2 (continued):
E(Y) = Σ_{s∈S} P(s)·Y(s) = P(HHH)·5 + Σ_{s∈S\{HHH}} P(s)·(−1)
     = (1/2³)·5 + 7·(1/2³)·(−1) = (5−7)/2³ = −2/8 = −1/4
Equivalently,
E(Y) = Σ_{y∈ℝ} P({Y=y})·y = P(HHH)·5 + P(HHT)·(−1) + P(HTH)·(−1) + P(HTT)·(−1) + P(THH)·(−1) + P(THT)·(−1) + P(TTH)·(−1) + P(TTT)·(−1)
⇒ The average win/loss is E(Y) = −1/4. Notice the similarity to the arithmetic mean (or average).

Theorem (Linearity of Expectation): Let X, Y be two random variables on S. Then the following holds: E(X+Y) = E(X) + E(Y).
Proof: E(X+Y) = Σ_{s∈S} P(s)·(X(s)+Y(s)) = Σ_{s∈S} P(s)·X(s) + Σ_{s∈S} P(s)·Y(s) = E(X) + E(Y).
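Both the value E(Y) = −1/4 and the linearity theorem can be verified exactly on the eight-outcome sample space (a sketch in exact rational arithmetic; the helper E is our notation for the defining sum):

```python
from fractions import Fraction
from itertools import product

S = [''.join(f) for f in product('HT', repeat=3)]
P = Fraction(1, 8)                       # three fair coins: uniform on 8 outcomes

def X(s): return s.count('H')            # number of heads
def Y(s): return 5 if s == 'HHH' else -1 # win/loss of the game

def E(Z):
    """E(Z) = sum over s in S of P(s) * Z(s)."""
    return sum(P * Z(s) for s in S)

print(E(Y))                                       # -1/4
assert E(lambda s: X(s) + Y(s)) == E(X) + E(Y)    # linearity of expectation
```

Because the probabilities are Fractions, linearity holds exactly here rather than up to floating-point error.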

Randomized algorithms

Allow random choices during the algorithm. The sample space is S = {all sequences of random choices}, and the runtime T: S → ℝ is a random variable: T(s) depends on the particular sequence s of random choices. ⇒ Consider the expected runtime E(T).

Example 3 (Quicksort): Pick a random pivot to partition the array around. The runtime depends on the random pivot choices.
Best case: a ½ : ½ split in each step (in fact, a 1/10 : 9/10 split would work as well) ⇒ O(n log n) runtime.
Worst case: a 0 : (n−1) split ⇒ O(n²) runtime.
The expected runtime E(T) "averages" over all possible splits, and one can show that E(T) = O(n log n).
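A minimal sketch of randomized quicksort (out-of-place for clarity, not the in-place partitioning variant): each recursive call picks its pivot uniformly at random, which is exactly the random choice that makes E(T) = O(n log n) for every input.

```python
import random

def quicksort(a):
    """Randomized quicksort: the random pivot choice makes the
    expected number of comparisons O(n log n) on any input."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

data = random.sample(range(1000), 100)
assert quicksort(data) == sorted(data)
```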

Randomized algorithms

Example 4 (Trapezoidal map): The runtime depends on the order in which trapezoids are inserted:
– k_i = # new trapezoids created by the i-th insertion; the total runtime is Σ_{i=1}^n k_i
– Worst case: k_i = O(i) ⇒ total worst-case runtime is Σ_{i=1}^n i = O(n²)
– Best case: k_i = O(1) ⇒ total best-case runtime is Σ_{i=1}^n 1 = O(n)
Insert the segments in random order:
– Ω = {all possible permutations/orders of the segments}; |Ω| = n! for n segments
– k_i = k_i(ω) for some random order ω
– E(k_i) = O(1) (see theorem on slides)
– Expected runtime E(T) = E(Σ_{i=1}^n k_i) = Σ_{i=1}^n E(k_i) (by linearity of expectation) = O(Σ_{i=1}^n 1) = O(n)
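The last step above relies only on linearity of expectation, not on what k_i actually is. A toy model can illustrate it (a sketch only, not the trapezoidal-map analysis itself): if each step's cost k_i is random with constant mean c, the expected total is c·n, i.e. linear in n, even though individual runs vary.

```python
import random

def simulate_total(n, trials=2000):
    """Toy model: step i creates k_i ~ Uniform{1,...,5} 'trapezoids'
    (mean c = 3). By linearity of expectation, E(sum of k_i) = 3n."""
    totals = [sum(random.randint(1, 5) for _ in range(n))
              for _ in range(trials)]
    return sum(totals) / trials

n = 200
avg = simulate_total(n)
print(avg / n)  # close to the per-step mean 3, so the expected total is about 3n
```

The per-step distribution here is invented purely for illustration; for the real trapezoidal map, E(k_i) = O(1) comes from the backward analysis cited on the slides.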