
1 Reducing number of operations: The joy of algebraic transformations CS498DHP Program Optimization

2 Number of operations and execution time
Executing fewer operations does not necessarily mean shorter execution time:
– because of scheduling in a parallel environment,
– because of locality,
– because of communication in a parallel program.
Nevertheless, although it has to be applied carefully, reducing the number of operations is one of the important optimizations.
In this presentation, we discuss transformations that reduce the number of operations, or that reduce the length of the schedule in an idealized parallel environment where communication costs are zero.

3 Scheduling
Consider the expression tree in the figure below. It can be shortened by applying
– associativity and commutativity: a + h + b*(c + g + d*e*f), or
– associativity, commutativity, and distributivity: a + h + b*c + b*g + b*d*e*f.
The distributed expression yields the shortest tree of the three, so with enough resources it is the fastest to evaluate even though it has the most operations.
[Figure: expression tree for a + h + b*(c + g + d*e*f)]
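Counting steps makes the trade-off concrete. Assuming one operation per processor per step and free communication, the distributed form can be scheduled as:

Step 1: a+h, b*c, b*g, b*d, e*f
Step 2: (a+h)+(b*c), (b*d)*(e*f)
Step 3: ((a+h)+(b*c))+(b*g)
Step 4: add in (b*d)*(e*f)

That is 4 steps and 9 operations, versus 5 steps and 7 operations for the parenthesized form a + h + b*(c + g + d*e*f).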

4 Locality
Consider the two versions:

Version 1:
do i=1,n
  c(i) = a(i)+b(i)+a(i)/b(i)
end do
…
do i=1,n
  x(i) = (a(i)+b(i))*t(i)+a(i)/b(i)
end do

Version 2:
do i=1,n
  d(i) = a(i)/b(i)
  c(i) = a(i)+b(i)+d(i)
end do
…
do i=1,n
  x(i) = (a(i)+b(i))*t(i)+d(i)
end do

Version 2 executes fewer operations, since each quotient a(i)/b(i) is computed once instead of twice, but, if n is large enough, it also incurs more cache misses: the array d must be written in the first loop and read back in the second. (We assume that t is computed between the two loops, so the loops cannot be fused.)

5 Communication in parallel programs
Consider the two versions:

Version 1:
cobegin
  …
  do i=1,n
    a(i) = …
  end do
  send a(1:n)
  …
//
  …
  receive a(1:n)
  …
coend

Version 2:
cobegin
  …
  do i=1,n
    a(i) = …
  end do
  …
//
  …
  do i=1,n
    a(i) = …
  end do
  …
coend

Version 2 executes more operations, since the computation of a is replicated in both parallel branches, but it executes faster if the send operation is expensive.

6 Approaches to reducing the cost of computation
– Eliminate (syntactically) redundant computations.
– Apply algebraic transformations to reduce the number of operations.
– Decompose sequential computations for parallel execution.
– Apply algebraic transformations to reduce the height of expression trees and thus reduce execution time in a parallel environment.

7 Elimination of redundant computations
Many of these transformations were discussed in the context of compiler transformations:
– Common subexpression elimination
– Loop invariant removal
– Elimination of redundant counters
– Loop unrolling (not discussed earlier, but it should have been; it eliminates bookkeeping operations, as the sketch below shows).
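A minimal C sketch of the loop-unrolling point (the function name, the factor of 4, and the assumption that 4 divides n are all illustrative):

/* Unrolled summation: one loop test and one index increment per four
   elements instead of per element, assuming n is a multiple of 4. */
double sum4(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i += 4)
        s += a[i] + a[i+1] + a[i+2] + a[i+3];
    return s;
}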

8 However, compilers will not eliminate all redundant computations. Here is an example where user intervention is needed. The following sequence

do i=1,n
  s = a(i)+s
end do
…
do i=1,n-1
  t = a(i)+t
end do
…t…

9 may be replaced by

do i=1,n-1
  t = a(i)+t
end do
s = t+a(n)
…
…t…

(assuming s and t start with the same value). This transformation is not usually done by compilers.

10 2. Another example, from C, is the loop

for (i = 0; i < n; i++)
  for (j = 0; j < n; j++)
    a[i][j] = 0;

which, if a is n × n, can be transformed into the loop below, which has fewer bookkeeping operations (b is a pointer to the first element of a):

b = &a[0][0];
for (i = 0; i < n*n; i++) {
  *b = 0;
  b++;
}

11 Applying algebraic transformations to reduce the number of operations
For example, the expression a*(b*c)+(b*a)*d+a*e can be transformed by distributivity into (a*b)*(c+d)+a*e, and then by associativity and distributivity into a*(b*(c+d)+e).
Notice that associativity has to be applied with care. For example, suppose we are operating on floating-point values, that x is very much larger than y, and that z = -x. Then (y+x)+z may give 0 as a result, while y+(x+z) gives y.
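The floating-point caveat is easy to reproduce; a minimal C demonstration (the particular magnitudes are illustrative):

#include <stdio.h>

int main(void) {
    double x = 1.0e20, y = 1.0, z = -1.0e20;  /* z = -x, |x| >> |y| */
    printf("%g\n", (y + x) + z);   /* prints 0: y is absorbed into x  */
    printf("%g\n", y + (x + z));   /* prints 1: x + z cancels exactly */
    return 0;
}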

12 The application of algebraic rules can be very sophisticated. Consider the computation of x^n. A naïve implementation would require n-1 multiplications. However, if we represent n in binary as n = b0 + 2(b1 + 2(b2 + …)) and notice that x^n = x^b0 * (x^(b1 + 2(b2 + …)))^2, the number of multiplications can be reduced to O(log n).

13 A version of the algorithm in C (assume n > 0):

/* Binary exponentiation: O(log n) multiplications. */
double power(double x, int n) {
    if (n == 1) return x;
    if (n % 2 == 1) return x * power(x, n - 1);  /* odd: peel off one factor */
    double h = power(x, n / 2);                  /* even: square the half power */
    return h * h;
}

14 Horner's rule
A polynomial A(x) = a0 + a1*x + a2*x² + a3*x³ + … may be written as A(x) = a0 + x(a1 + x(a2 + x(a3 + …))). As a result, a polynomial may be evaluated at a point x', that is, A(x') computed, in Θ(n) time using Horner's rule: repeated multiplications and additions, rather than the naive method of raising x to successive powers, multiplying by the coefficients, and accumulating.
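A minimal C sketch of Horner's rule (the names and the coefficient layout are illustrative): a degree-n polynomial is evaluated with n multiplications and n additions.

/* Evaluate a[0] + a[1]*x + ... + a[n]*x^n by Horner's rule. */
double horner(const double a[], int n, double x) {
    double r = a[n];
    for (int i = n - 1; i >= 0; i--)
        r = r * x + a[i];   /* one multiplication and one addition per level */
    return r;
}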

15 Conventional matrix multiplication
Asymptotic complexity: 2n³ operations.
Each recursion step (blocked version): 8 multiplications, 4 additions.
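For reference, a minimal C version of the conventional algorithm (row-major n×n arrays assumed); the inner statement performs one multiplication and one addition, executed n³ times, hence 2n³ operations:

/* C = A * B for n x n matrices stored row-major. */
void matmul(int n, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += A[i*n + k] * B[k*n + j];  /* 1 mult + 1 add */
            C[i*n + j] = s;
        }
}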

16 Strassen's Algorithm
Asymptotic complexity: O(n^(log2 7)) = O(n^2.8…) operations.
Each recursion step: 7 multiplications, 18 additions/subtractions.
The asymptotic complexity is the solution of T(n) = 7T(n/2) + 18(n/2)².
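One recursion step, written out in C for 2×2 scalar blocks (in the real algorithm the entries are n/2 × n/2 submatrices); the 10 additions/subtractions on the inputs plus 8 on the outputs account for the 18 above:

/* Strassen's seven products for a 2x2 multiplication. */
void strassen_2x2(const double A[2][2], const double B[2][2], double C[2][2]) {
    double m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    double m2 = (A[1][0] + A[1][1]) * B[0][0];
    double m3 = A[0][0] * (B[0][1] - B[1][1]);
    double m4 = A[1][1] * (B[1][0] - B[0][0]);
    double m5 = (A[0][0] + A[0][1]) * B[1][1];
    double m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    double m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    C[0][0] = m1 + m4 - m5 + m7;
    C[0][1] = m3 + m5;
    C[1][0] = m2 + m4;
    C[1][1] = m1 - m2 + m3 + m6;
}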

17 Winograd
Asymptotic complexity: O(n^2.8…) operations.
Each recursion step: 7 multiplications, 15 additions/subtractions.

18 Parallel matrix multiplication
Parallel matrix multiplication can be accomplished without redundant operations. First observe that the time to compute a sum of n elements, given enough resources, is ⌈log₂ n⌉ steps.
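A minimal C sketch of that schedule (sequential code, but each pass of the outer loop models one parallel step, since the pair additions within a pass are independent):

/* Sum n elements in ceil(log2 n) parallel steps; overwrites a[]. */
double tree_sum(double *a, int n) {
    for (int stride = 1; stride < n; stride *= 2)      /* one parallel step */
        for (int i = 0; i + stride < n; i += 2 * stride)
            a[i] += a[i + stride];                     /* independent pairs */
    return a[0];
}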

19 [Figure: summing n elements with a balanced binary tree, annotated with the time at each level]


21 With sufficient replication and computational resources, matrix multiplication can take just one multiplication step plus ⌈log₂ n⌉ addition steps: all n³ products are formed in parallel, and each of the n² result elements is then obtained by a tree sum of n products.

22 Copying can also be done in a logarithmic number of steps: with recursive doubling, the number of copies doubles at each step, so n copies take ⌈log₂ n⌉ steps.

23 Parallelism and redundancy
Algebraic rules can be applied to reduce tree height. In some cases, the height of the tree is reduced at the expense of an increase in the number of operations.


29 Parallel Prefix
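A minimal C sketch of a data-parallel (Hillis-Steele style) inclusive prefix sum, assuming the operation is addition: it performs O(n log n) additions, more than the n-1 of the sequential scan, but completes in ⌈log₂ n⌉ parallel steps, trading redundancy for height.

#include <string.h>

/* After the step with distance d, x[i] holds the sum of the
   up to 2d elements ending at position i. */
void prefix_sum(double *x, double *tmp, int n) {
    for (int d = 1; d < n; d *= 2) {          /* one parallel step */
        memcpy(tmp, x, n * sizeof(double));   /* read old values   */
        for (int i = d; i < n; i++)
            x[i] = tmp[i] + tmp[i - d];       /* all i in parallel */
    }
}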


38 Redundancy in parallel sorting. Sorting networks.

39 Comparator (2-sorter)
A comparator takes two inputs, x and y, and produces two outputs: min(x, y) and max(x, y).
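As code, a comparator is a single compare-exchange; a minimal C sketch:

/* Route the smaller value to *x and the larger to *y. */
void compare_exchange(int *x, int *y) {
    if (*x > *y) { int t = *x; *x = *y; *y = t; }
}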

40 Comparison Network
A comparison network arranges comparators in stages, with up to n/2 comparisons per stage and d stages in total. [Figure: a comparison network operating on 0/1 inputs]

41 Sorting Networks
A sorting network is a comparison network whose outputs are sorted for every input. [Figure: a sorting network taking the inputs 1,0,0,1,0,0,1,1 to the sorted outputs 0,0,0,0,1,1,1,1]

42 Insertion Sort Network
[Figure: insertion sort realized as a sorting network] Depth: 2n-3.

43 Comparator stages and comparator counts for common sorting networks:

Network                       Comparator stages   Comparators
Odd-even transposition sort   O(n)                O(n²)
Bubblesort                    O(n)                O(n²)
Bitonic sort                  O(log² n)           O(n log² n)
Odd-even mergesort            O(log² n)           O(n log² n)
Shellsort                     O(log² n)           O(n log² n)
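A minimal C sketch of the first entry, odd-even transposition sort: there are n stages, and the comparators within a stage touch disjoint pairs, so each stage can run in parallel.

/* n stages of up to n/2 independent compare-exchanges each. */
void odd_even_transposition_sort(int *a, int n) {
    for (int stage = 0; stage < n; stage++)
        for (int i = stage % 2; i + 1 < n; i += 2)   /* disjoint pairs */
            if (a[i] > a[i+1]) { int t = a[i]; a[i] = a[i+1]; a[i+1] = t; }
}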

