Recursion Jeff Edmonds York University COSC 6111 Lecture 3 Friends & Steps for Recursion Derivatives Recursive Images Multiplying Parsing Ackermann.

Friends & Strong Induction. Example: X = 33, Y = 12, ac = 3, bd = 6, (a+b)(c+d) = 18, XY = 396. MULT(X,Y): If |X| = |Y| = 1 then RETURN XY. Break X into a;b and Y into c;d. e = MULT(a,c) and f = MULT(b,d). RETURN e·10^n + (MULT(a+b, c+d) - e - f)·10^(n/2) + f. Consider your input instance. Allocate work. Construct one or more subinstances. Assume by magic your friends give you the answers for these. Use this help to solve your own instance. Do not worry about anything else: not who your boss is, not how your friends solve their instances. Friends' subinstances: X = 3, Y = 1, XY = 3; X = 3, Y = 2, XY = 6; X = 6, Y = 3, XY = 18.
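To make the friends view concrete, here is a minimal Python sketch of the slide's MULT (not the course's code), written on decimal strings so the "Break X into a;b and Y into c;d" step is explicit; it reproduces the X = 33, Y = 12 example.

# A minimal sketch of the slide's MULT on decimal strings.
def mult(x: str, y: str) -> int:
    if len(x) == 1 and len(y) == 1:          # solve small instances yourself
        return int(x) * int(y)
    n = max(len(x), len(y))
    n += n % 2                               # pad to an even number of digits
    x, y = x.zfill(n), y.zfill(n)
    a, b = x[:n // 2], x[n // 2:]            # X = a*10^(n/2) + b
    c, d = y[:n // 2], y[n // 2:]            # Y = c*10^(n/2) + d
    e = mult(a, c)                           # friend 1
    f = mult(b, d)                           # friend 2
    g = mult(str(int(a) + int(b)), str(int(c) + int(d)))   # friend 3
    # Combine the friends' answers to solve your own instance.
    return e * 10**n + (g - e - f) * 10**(n // 2) + f

print(mult("33", "12"))                      # 396, as on the slide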

Trust your friends to solve the subinstances. Each subinstance given must be smaller and must be an instance of the same problem. Combine the solutions given by your friends to construct your own solution for your instance. Focus on one step. Do not talk of their friends' friends' friends. Solve small instances on your own. I am obsessed with the Friends / Strong-Induction view of recursion.

Input Size. A measure of size, Size(I), is a function that takes as input the computation's input I and outputs a real number. E.g., for an input with parameters n and m, Size(I) = n or 3n + 7m. The Algorithm Designer gets to decide! According to this measure: your friend's instance must be smaller than yours, and you must solve sufficiently small instances on your own.

Input Size. Does the program always halt? m keeps getting bigger! The Algorithm Designer gets to define the measure of size: Size(I) = n. When the size is sufficiently small, the program halts.

Input Size. Does the program always halt? Each subinstance is smaller, but according to what measure of size? Size(I) = ? There is an infinite path!

Input Size. Does the program always halt? According to what single measure of size are both instances smaller? Size(I) = ? Maximum depth = ? Either every friend's instance is smaller, or there is an infinite path!

Recursion on Trees Evaluate Equation Tree = ? 12 7 Get help from friends

Recursion on Trees. Evaluate Equation Tree: the root's two subtrees evaluate to 12 and 7, so the answer = root op(value on left, value on right) = root op(12, 7) = 12 + 7 = 19.

Recursion on Trees. Evaluate Equation Tree: Base Case? A leaf holding a value (here 7) simply returns that value.

Recursion on Trees
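The slide's tree-evaluation code is an image, so here is a hedged Python sketch of the same idea; the Node fields (op, value, left, right) are assumptions, not the course's names.

# A hedged sketch of evaluating an equation tree with the friends view.
import operator
from dataclasses import dataclass
from typing import Optional

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.floordiv}

@dataclass
class Node:
    op: Optional[str] = None         # '+', '-', '*', '/' at internal nodes
    value: Optional[int] = None      # set only at leaves
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def evaluate(t: Node) -> int:
    if t.op is None:                 # base case: a leaf is its own answer
        return t.value
    return OPS[t.op](evaluate(t.left), evaluate(t.right))   # two friends

# One tree matching the slide's numbers: (3 * 4) + 7 = root op(12, 7) = 19.
tree = Node('+', left=Node('*', left=Node(value=3), right=Node(value=4)),
                 right=Node(value=7))
print(evaluate(tree))                # 19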

Derivatives Input: a function f. Output: Its derivative df / dx.

Derivatives

Input: a function f. Output: Its derivative df / dx.

Derivatives Input: a function f. Output: Its derivative df / dx.

Derivatives Input: a function f. Output: Its derivative df / dx.
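The slides' differentiation code is an image; the following is a hedged Python sketch of the same recursive idea for '+' and '*' only, on an assumed nested-tuple representation of expressions.

# An expression is 'x', a number, or a tuple (op, f, g).
def d(f):
    if f == 'x':
        return 1                                     # d/dx x = 1
    if isinstance(f, (int, float)):
        return 0                                     # d/dx c = 0
    op, g, h = f
    if op == '+':
        return ('+', d(g), d(h))                     # sum rule
    if op == '*':
        return ('+', ('*', d(g), h), ('*', g, d(h))) # product rule
    raise ValueError(f"unhandled operator {op!r}")

print(d(('*', 'x', ('+', 'x', 3))))
# ('+', ('*', 1, ('+', 'x', 3)), ('*', 'x', ('+', 1, 0))),
# which a Simplify pass would reduce to 2x + 3.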

Simplify Input: a function f. Output: f simplified.

Simplify

Recursive Images if n=0, draw else draw And recursively Draw here with n-1 if n=1 n=0

Recursive Images if n=0, draw else draw And recursively Draw here with n-1 if n=2 n=1

Recursive Images if n=0, draw else draw And recursively Draw here with n-1 if n=3 n=2

Recursive Images if n=0, draw else draw And recursively Draw here with n-1 if n=30

Recursive Images if n=1if n=2if n=3 if n=4 if n=5 if n=0

Recursive Images if n=1 if n=0 if n=2if n=3if n=4 if n=5

Recursive Images. L(n) = (4/3)·L(n-1), so L(n) = (4/3)^n.
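The 4/3 growth in length per level matches the classic Koch construction, so here is a hedged sketch using Python's turtle module, assuming the recursive image is that curve or something like it.

import turtle

def koch(pen, length, n):
    if n == 0:
        pen.forward(length)              # base case: a straight segment
        return
    for angle in (60, -120, 60, 0):      # 4 subsegments, each 1/3 as long
        koch(pen, length / 3, n - 1)
        pen.left(angle)

if __name__ == "__main__":
    pen = turtle.Turtle()
    pen.speed(0)
    koch(pen, 300, 4)                    # total length 300 * (4/3)^4
    turtle.done()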

Grade School Revisited: How To Multiply Two Numbers 2 X 2 = 5 A Few Example Algorithms Rudich

Complex Numbers. Remember how to multiply 2 complex numbers? (a+bi)(c+di) = [ac - bd] + [ad + bc] i. Input: a, b, c, d. Output: ac-bd, ad+bc. If a real multiplication costs $1 and an addition costs a penny, what is the cheapest way to obtain the output from the input? Can you do better than $4.02?

Gauss’ $3.05 Method: Input: a, b, c, d. Output: ac-bd, ad+bc. m_1 = ac, m_2 = bd, m_3 = (a+b)(c+d) = ac + ad + bc + bd. A_1 = m_1 - m_2 = ac - bd. A_2 = m_3 - m_1 - m_2 = ad + bc.
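A quick Python check (not from the slides) of the $3.05 method: 3 dollar multiplications plus 5 one-cent additions/subtractions.

def complex_mult(a, b, c, d):
    m1 = a * c
    m2 = b * d
    m3 = (a + b) * (c + d)            # = ac + ad + bc + bd
    return m1 - m2, m3 - m1 - m2      # (ac - bd, ad + bc)

print(complex_mult(1, 2, 3, 4))       # (-5, 10)
print((1 + 2j) * (3 + 4j))            # (-5+10j), the same answer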

Question: The Gauss “hack” saves one multiplication out of four. It requires 25% less work. Could there be a context where performing 3 multiplications for every 4 provides a more dramatic savings?

How to add 2 n-bit numbers: work right to left, one column at a time, adding the two bits plus the carry from the previous column and writing a sum bit and a new carry bit. (The slides animate the columns of bits and carries; the asterisk frames are omitted here.) Tom Lehrer: New Math.

Time complexity of grade school addition: T(n) = the amount of time grade school addition uses to add two n-bit numbers = θ(n) = linear time. On any reasonable computer, adding 3 bits can be done in constant time.

f = θ(n) means that f can be sandwiched between two lines on a plot of time versus the # of bits in the numbers.

Please feel free to ask questions! Rudich

Is there a faster way to add? QUESTION: Is there an algorithm to add two n-bit numbers whose time grows sub-linearly in n?

Any algorithm for addition must read all of the input bits. Suppose there is a mystery algorithm that does not examine each bit. Give the algorithm a pair of numbers; there must be some unexamined bit position i in one of the numbers. If the algorithm is not correct on the numbers, we have found a bug. If the algorithm is correct, flip the bit at position i and give the algorithm the new pair of numbers. It gives the same answer as before, so it must now be wrong, since the sum has changed.

So any algorithm for addition must use time at least linear in the size of the numbers. Grade school addition is essentially as good as it can be.

How to multiply 2 n-bit numbers (grade school): write down n rows of partial products, each n bits wide, and add them up, about n^2 bit operations in total. (The slide's diagram of asterisks is omitted.)

I get it! The total time is bounded by cn^2. Can we do it faster?

How to multiply 2 n-bit numbers: the Kindergarten Algorithm. a × b = a + a + a + ⋯ + a (b copies of a). T(n) = time to multiply = θ(b) = linear time. Fast? I easily ask the question; you take a lifetime to answer it.

How to multiply 2 n-bit numbers: the Kindergarten Algorithm. a × b = a + a + a + ⋯ + a (b copies of a). T(n) = time to multiply two n-bit numbers = θ(b). But the value is b while the number of bits is only n = log_2(b), e.g. n = 60, so the time is θ(b) = θ(2^n). Way slow!
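A throwaway Python sketch (not from the slides) that makes the unit of measure concrete: the loop runs b times, and b is about 2^n when b has n bits.

def kindergarten_mult(a, b):
    total = 0
    for _ in range(b):        # Theta(b) = Theta(2^n) additions
        total += a
    return total

print(kindergarten_mult(33, 12))   # 396; now try a 60-bit b and wait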

Grade School Addition: Linear time. Grade School Multiplication: Quadratic time. Kindergarten Multiplication: Exponential time. No matter how dramatic the difference in the constants, the quadratic function will eventually dominate the linear function (plotted as time versus # of bits in the numbers). And all exponential functions are VERY fast growing.

Neat! We have demonstrated that as things scale multiplication is a harder problem than addition. Mathematical confirmation of our common sense.

Don’t jump to conclusions! We have argued that grade school multiplication uses more time than grade school addition. This is a comparison of the complexity of two algorithms. To argue that multiplication is an inherently harder problem than addition we would have to show that no possible multiplication algorithm runs in linear time.

Grade School Addition: θ(n) time. Grade School Multiplication: θ(n^2) time. Is there a clever algorithm to multiply two numbers in linear time?

Despite years of research, no one knows! If you resolve this question, York will give you a PhD!

Is there a faster way to multiply two numbers than the way you learned in grade school?

Divide And Conquer (an approach to faster algorithms) DIVIDE my instance to the problem into smaller instances to the same problem. Have a friend (recursively) solve them. Do not worry about it yourself. GLUE the answers together so as to obtain the answer to your larger instance.

Multiplication of 2 n-bit numbers: write X = a·2^(n/2) + b and Y = c·2^(n/2) + d, where a;b and c;d are the high and low halves of X and Y. Then XY = ac·2^n + (ad+bc)·2^(n/2) + bd.

Multiplication of 2 n-bit numbers: XY = ac·2^n + (ad+bc)·2^(n/2) + bd. MULT(X,Y): If |X| = |Y| = 1 then RETURN XY. Break X into a;b and Y into c;d. RETURN MULT(a,c)·2^n + (MULT(a,d) + MULT(b,c))·2^(n/2) + MULT(b,d).
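A hedged Python sketch of this 4-call divide-and-conquer MULT on binary strings (the slides give only the pseudocode); it follows the identity XY = ac·2^n + (ad+bc)·2^(n/2) + bd.

def mult4(x: str, y: str) -> int:
    if len(x) == 1 and len(y) == 1:
        return int(x) * int(y)
    n = max(len(x), len(y))
    n += n % 2                       # pad to an even number of bits
    x, y = x.zfill(n), y.zfill(n)
    a, b = x[:n // 2], x[n // 2:]
    c, d = y[:n // 2], y[n // 2:]
    return (mult4(a, c) * 2**n
            + (mult4(a, d) + mult4(b, c)) * 2**(n // 2)
            + mult4(b, d))

print(mult4(bin(33)[2:], bin(12)[2:]))   # 396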

Time required by MULT: T(n) = time taken by MULT on two n-bit numbers. What is T(n)? What is its growth rate? Is it θ(n^2)?

Recurrence Relation: T(1) = k for some constant k; T(n) = 4T(n/2) + k′n + k″ for some constants k′ and k″. MULT(X,Y): If |X| = |Y| = 1 then RETURN XY. Break X into a;b and Y into c;d. RETURN MULT(a,c)·2^n + (MULT(a,d) + MULT(b,c))·2^(n/2) + MULT(b,d).

Let’s be concrete T(1) = 1 T(n) = 4 T(n/2) + n How do we unravel T(n) so that we can determine its growth rate?

Technique 1: Guess and Verify. Recurrence Relation: T(1) = 1 & T(n) = 4T(n/2) + n. Guess: G(n) = 2n^2 - n. Verify: left hand side T(1) = 2(1)^2 - 1 = 1, matching the right hand side 1; left hand side T(n) = 2n^2 - n versus right hand side 4T(n/2) + n = 4[2(n/2)^2 - (n/2)] + n = 2n^2 - n. They match.
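A small Python sanity check (not from the slides) comparing the recurrence against the guess for a few powers of two.

def T(n):
    return 1 if n == 1 else 4 * T(n // 2) + n

for n in (1, 2, 4, 8, 16, 32):
    print(n, T(n), 2 * n * n - n)   # the last two columns agree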

Technique 2: Decorate The Tree. T(n) = n + 4 T(n/2), with base case T(1) = 1. Draw the recursion tree: the root does n work and has four children, each a T(n/2) subtree; each of those does n/2 work and has four children doing T(n/4), and so on down to the T(1) leaves.

Level i is the sum of 4^i copies of n/2^i:
level 0:     1 · n
level 1:     4 · (n/2)
level 2:     16 · (n/4)
level i:     4^i · (n/2^i)
level log n: 4^(log n) · (n/2^(log n)) = n^(log 4) · 1
(using a^(log n) = (2^(log a))^(log n) = 2^(log a · log n) = (2^(log n))^(log a) = n^(log a))
Total: ?

Geometric Sum

∑_{i=0..n} r^i = r^0 + r^1 + r^2 + ⋯ + r^n = θ(biggest term) when the terms are geometrically increasing. True whenever the terms increase quickly.

Level sums: 1·n, 4·(n/2), 16·(n/4), …, 4^i·(n/2^i), …, 4^(log n)·(n/2^(log n)) = n^(log 4)·1. Total = biggest term, so Total: θ(n^(log 4)) = θ(n^2).
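Written out as a sum (a sketch, assuming n is a power of 2), the decorated tree gives back exactly the guessed closed form:

T(n) = \sum_{i=0}^{\log n} 4^{i}\,\frac{n}{2^{i}}
     = n \sum_{i=0}^{\log n} 2^{i}
     = n\left(2^{\log n + 1} - 1\right)
     = 2n^{2} - n
     = \Theta\!\left(n^{\log 4}\right) = \Theta\!\left(n^{2}\right).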

Divide and Conquer MULT: θ(n^2) time. Grade School Multiplication: θ(n^2) time. All that work for nothing! Let’s understand the math better.

Evaluating: T(n) = aT(n/b) + f(n)

Level | Instance size | Work in stack frame | # stack frames           | Work in level
0     | n             | f(n)                | 1                        | 1 · f(n)
1     | n/b           | f(n/b)              | a                        | a · f(n/b)
2     | n/b^2         | f(n/b^2)            | a^2                      | a^2 · f(n/b^2)
i     | n/b^i         | f(n/b^i)            | a^i                      | a^i · f(n/b^i)
h     | n/b^h = 1     | T(1)                | a^h = n^(log a / log b)  | n^(log a / log b) · T(1)

The height h is set by the base case: n/b^h = 1, so b^h = n, h·log b = log n, and h = log n / log b. The number of stack frames at the base level is a^h = a^(log n / log b) = n^(log a / log b).

Total Work: T(n) = ∑_{i=0..h} a^i · f(n/b^i)

Evaluating: T(n) = aT(n/b) + f(n) = ∑_{i=0..h} a^i · f(n/b^i). If this is a geometric sum, then ∑_{i=0..n} x^i ∈ θ(max(first term, last term)).

So the total T(n) = ∑_{i=0..h} a^i · f(n/b^i) is dominated either by the top level, 1·f(n), or by the base cases, n^(log a / log b) · T(1).
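For the common case f(n) = n^c this sum is geometric, and the table collapses to the familiar three-way comparison (a hedged summary, matching how the following slides use it):

T(n) = \sum_{i=0}^{h} a^{i}\left(\frac{n}{b^{i}}\right)^{c} =
\begin{cases}
  \Theta(n^{c}) & \text{if } c > \log a/\log b \quad\text{(top level dominates)}\\
  \Theta(n^{c}\log n) & \text{if } c = \log a/\log b \quad\text{(all levels comparable)}\\
  \Theta\!\left(n^{\log a/\log b}\right) & \text{if } c < \log a/\log b \quad\text{(base cases dominate)}
\end{cases}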

Divide and Conquer MULT: θ(n^2) time. Grade School Multiplication: θ(n^2) time. All that work for nothing! Let’s try to multiply faster.

MULT revisited. MULT calls itself 4 times. Can you see a way to reduce the number of calls? MULT(X,Y): If |X| = |Y| = 1 then RETURN XY. Break X into a;b and Y into c;d. RETURN MULT(a,c)·2^n + (MULT(a,d) + MULT(b,c))·2^(n/2) + MULT(b,d).

Gauss’ Hack: Input: a, b, c, d. Output: ac, ad+bc, bd. A_1 = ac, A_3 = bd, m_3 = (a+b)(c+d) = ac + ad + bc + bd, A_2 = m_3 - A_1 - A_3 = ad + bc. Computing m_3 takes 2 additions and one multiplication, so one multiplication replaces two.

Gaussified MULT (Karatsuba 1962). T(n) = 3T(n/2) + n. Actually: T(n) = 2T(n/2) + T(n/2 + 1) + kn. MULT(X,Y): If |X| = |Y| = 1 then RETURN XY. Break X into a;b and Y into c;d. e = MULT(a,c) and f = MULT(b,d). RETURN e·2^n + (MULT(a+b, c+d) - e - f)·2^(n/2) + f.

Decorate the tree for the Gaussified recurrence T(n) = n + 3 T(n/2): the root still does n work, but now has only three children, each a T(n/2) subtree, and so on down to the leaves.

Level i is the sum of 3^i copies of n/2^i:
level 0:     1 · n
level 1:     3 · (n/2)
level 2:     9 · (n/4)
level i:     3^i · (n/2^i)
level log n: 3^(log n) · (n/2^(log n)) = n^(log 3) · 1
Total = biggest term: θ(n^(log 3)) = θ(n^1.58…)

Recall: evaluating T(n) = aT(n/b) + f(n) = ∑_{i=0..h} a^i · f(n/b^i), the total is dominated by the top level or by the base cases (see the table above).

Evaluating: T(n) = aT(n/b) + n^c = 3T(n/2) + n. Time for top level: n^c = n^1. Time for base cases: θ(n^(log a / log b)) = θ(n^(log 3 / log 2)) = θ(n^1.58). Dominated?: c = 1 < 1.58 = log a / log b, so the base cases dominate. Hence, T(n) = θ(base cases) = θ(n^(log a / log b)) = θ(n^1.58).

Dramatic improvement for large n. Not just a 25% savings! θ(n^2) vs θ(n^1.58).

Cutting into d pieces. X = ∑_{i=0..d-1} a_i·2^(in/d) and Y = ∑_{j=0..d-1} b_j·2^(jn/d), where X = a_{d-1} … a_2 a_1 a_0 and Y = b_{d-1} … b_2 b_1 b_0 are split into d pieces of n/d bits each. Then XY = ∑_{i=0..d-1} ∑_{j=0..d-1} a_i b_j·2^((i+j)n/d). Instances for friends?

Cutting into d pieces. XY = ∑_{i=0..d-1} ∑_{j=0..d-1} a_i b_j·2^((i+j)n/d). Each product a_i·b_j is an instance for a friend, so the number of friends is d^2.

Cutting into d pieces. Evaluating: T(n) = aT(n/b) + n^c = d^2·T(n/d) + n^1. Time for top level: n^c = n^1. Time for base cases: θ(n^(log a / log b)) = θ(n^(log d^2 / log d)) = θ(n^2). Dominated?: c = 1 < 2 = log a / log b, so the base cases dominate. Hence, T(n) = θ(base cases) = θ(n^(log a / log b)) = θ(n^2).

All that work for nothing! Cutting into d pieces.

Fancy Gaussian trick. XY = ∑_{i=0..d-1} ∑_{j=0..d-1} a_i b_j·2^((i+j)n/d) = ∑_{k=0..2d-2} (∑_j a_j b_{k-j})·2^(kn/d). The values needed are these 2d-1 coefficients, so the number of friends drops from d^2 to 2d-1; each instance for a friend is still a product of n/d-bit pieces.

Cutting into d pieces. Evaluating: T(n) = aT(n/b) + n^c = (2d-1)·T(n/d) + n^1. Now log a / log b = log(2d-1) / log d ≈ log(2d) / log d = (log d + 1) / log d → 1. Say d = log n.

Cutting into d pieces. Evaluating: T(n) = aT(n/b) + n^c = (2d-1)·T(n/d) + n^1. Time for top level: n^c = n^1. Time for base cases: θ(n^(log a / log b)) ≈ θ(n^1). Dominated?: c = 1 ≈ log a / log b, so neither end dominates. Hence, T(n) = θ(n log n). (Actually θ(n log n log log n).)

Multiplication Algorithms:
Kindergarten:  n·2^n
Grade School:  n^2
Karatsuba:     n^1.58…
Fastest Known: n·log n·log log n
3 × 4 = ?

Parsing/Compiling. Input: s = 6*8+((2+42)*(5+12)+987*7*123+15*54). Output: a parsing (parse tree) of the expression.

Input: Java Code Parsing/Compiling Output: MARIE Machine Code simulating the Java code.

Parsing/Compiling. Input: Java Code. Output: MARIE Machine Code simulating the Java code. Challenge: keep track of three algorithms simultaneously: the compiler, the Java code being compiled, and the MARIE code being produced.

Parsing/Compiling

Algorithm: GetExp( s, i ) Input: s is a string of tokens i is a start index Output: p is a parsing of the longest valid expression j is the end index s=6*8+((2+42)*(5+12)+987*7*123+15*54)

Parsing/Compiling Algorithm: GetTerm( s, i ) Input: s is a string of tokens i is a start index Output: p is a parsing of the longest valid term j is the end index s=6*8+((2+42)*(5+12)+987*7*123+15*54)

Parsing/Compiling Algorithm: GetFact( s, i ) Input: s is a string of tokens i is a start index Output: p is a parsing of the longest valid factor j is the end index s=6*8+((2+42)*(5+12)+987*7*123+15*54)
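A hedged Python sketch (not the course's code) of the three mutually recursive routines, returning (parsing, end index) pairs for the grammar Exp → Term ('+' Term)*, Term → Fact ('*' Fact)*, Fact → int | '(' Exp ')'. Note how get_fact decides what to do by looking only at the next character, the look-ahead-one property discussed below.

def get_exp(s, i):
    p, j = get_term(s, i)
    while j < len(s) and s[j] == '+':
        q, j = get_term(s, j + 1)
        p = ('+', p, q)
    return p, j

def get_term(s, i):
    p, j = get_fact(s, i)
    while j < len(s) and s[j] == '*':
        q, j = get_fact(s, j + 1)
        p = ('*', p, q)
    return p, j

def get_fact(s, i):
    if s[i] == '(':                       # Fact -> '(' Exp ')'
        p, j = get_exp(s, i + 1)
        assert s[j] == ')', "expected ')'"
        return p, j + 1
    j = i                                 # Fact -> integer
    while j < len(s) and s[j].isdigit():
        j += 1
    return int(s[i:j]), j

print(get_exp("6*8+((2+42)*(5+12)+987*7*123+15*54)", 0)[0])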

Algorithm: GetExp( s, i ) s=6*8+((2+42)*(5+12)+987*7*123+15*54) p

Algorithm: GetExp( s, i ): an expression is a sequence of terms joined by '+'; the parsing p returned is a '+' node whose children are the parsings of those terms.

Algorithm: GetExp( m ) MARIE Machine Code that evaluates an expression and stores its value in memory cell indexed by m. Output:

Algorithm: GetTerm( s, i ) s=6*8+((2+42)*(5+12)+987*7*123+15*54) p

Algorithm: GetTerm( s, i ): a term is a sequence of factors joined by '*'; the parsing p returned is a '*' node whose children are the parsings of those factors.

Algorithm: GetTerm( m ) MARIE Machine Code that evaluates a term and stores its value in memory cell indexed by m. Output:

Parsing Algorithm: GetFact( s, i ) s=6*8+((2+42)*(5+12)+987*7*123+15*54)

Parsing Algorithm: GetFact( s, i ): in the first case the factor is a single integer token, e.g. Fact → 42.

Algorithm: GetFact( s, i ) s=6*8+((2+42)*(5+12)+987*7*123+15*54) p

Algorithm: GetFact( s, i ): in the other case the factor is '(' expression ')', and the parsing p returned is the parsing of the inner expression.

Algorithm: GetFact( m ) MARIE Machine Code that evaluates a factor and stores its value in memory cell indexed by m. Next token determines which case the factor is. Output:

Algorithm: GetFactArray( m ) MARIE Machine Code that evaluates a factor and stores its value in memory cell indexed by m. Output:

Algorithm: GetIfStatement() MARIE Machine Code that executes an IfStatement Output:

Parsing/Compiling. Look Ahead One: A grammar is said to be look ahead one if, given any two rules for the same non-terminal, the first place that the rules differ is at an actual character (terminal). This feature allows our parsing algorithm to look only at the next token in order to decide what to do next. Thus the algorithm runs in linear time. Examples: A ⇒ B ’u’ C ’w’ E; A ⇒ B ’u’ C ’x’ F; A ⇒ B ’u’ C (and the next character is not ‘w’ or ‘x’); A ⇒ B ’v’ G H; A ⇒ F G (OK if the first character(s) within a B are different than within an F). This parsing algorithm only works for Look Ahead One Grammars.

Parsing/Compiling. This parsing algorithm only works for Look Ahead One Grammars. A ⇒ ( A ); A ⇒ ε. Generates (((()))). GetA(s,i): if( s[i] = ‘(‘ ) ⟨p_A, j_A⟩ = GetA(s, i+1); if( s[j_A] = ‘)‘ ) return( ⟨‘(‘ p_A ’)’, j_A+1⟩ ) else return( error ) else return( ⟨ε, i⟩ ). Not Look Ahead One: A ⇒ a A a; A ⇒ ε. Generates aaaaaaaa, but the next character is always ‘a’, so the parser can’t find the middle. A ⇒ A B; A ⇒ A C: recurses forever.

Parsing/Compiling. This parsing algorithm only works for Look Ahead One Grammars. A ⇒ BC; A ⇒ DE; B ⇒ b…; D ⇒ d…. GetA(s,i): if( s[i] = ‘b‘ ) ⟨p_B, j_B⟩ = GetB(s, i); ⟨p_C, j_C⟩ = GetC(s, j_B); return( ⟨p_B p_C, j_C⟩ ) elseif( s[i] = ‘d‘ ) ⟨p_D, j_D⟩ = GetD(s, i); ⟨p_E, j_E⟩ = GetE(s, j_D); return( ⟨p_D p_E, j_E⟩ ). Not Look Ahead One: A ⇒ BC; A ⇒ DE; B ⇒ bbb…; D ⇒ bbb…. We don’t know whether to call GetB or GetD.

Parsing: the number of stack frames equals the number of nodes in the parse tree, so the work is linear in the size of the parse.

Ackermann’s Function: each level of the function is defined by n applications of the level below it (the equations on these slides are images). How big is A(5,5)?
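A hedged Python sketch of one standard two-argument Ackermann variant; the slides' exact definition is an image and may differ in indexing, but the explosive growth is the same.

def A(k, n):
    if k == 0:
        return n + 1                    # bottom level: successor
    if n == 0:
        return A(k - 1, 1)
    return A(k - 1, A(k, n - 1))        # level k unrolls to repeated level k-1

print(A(2, 3))    # 9
print(A(3, 3))    # 61
# A(4, 2) already has 19,729 decimal digits; A(5, 5) is unimaginably larger.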

End