Algorithm Correctness A correct algorithm is one in which every valid input instance produces the correct output. The correctness must be proved mathematically.



Algorithm Complexity Algorithm complexity is a measure of the resources an algorithm uses. The two resources we care about are time and space. For a given algorithm, we express these quantities as a function of the input size.

Algorithm Efficiency Time efficiency indicates how fast an algorithm runs. Space efficiency refers to the *extra* space (beyond that needed for the input) that the algorithm requires. For most programming applications, the extra space required is of less concern than the time required.

Time is the major efficiency measure Time efficiency indicates how fast an algorithm runs. Observation: almost all algorithms run longer on larger inputs. Therefore, it is logical to investigate an algorithm's efficiency as a function of some parameter n indicating the algorithm's input size. The size of the input is usually obvious, but the running time may be influenced by factors other than the number of items in an array.

Algorithm efficiency may vary for different instances of equal size Some algorithms take the same amount of time on all input instances of a given size. For others, there are best-case, worst-case, and average-case input instances with running times that depend on more than just the input size. For algorithm A on input of size n:
Worst-case input: the input(s) for which A executes the most steps among all inputs of size n.
Best-case input: the input(s) for which A executes the fewest steps among all inputs of size n.

Analyzing Algorithms Analyzing an algorithm in this course involves predicting the number of steps executed by an algorithm without implementing it. We can do this in a machine- and language-independent way using:
1. the RAM model of computation (single processor)
2. asymptotic analysis of worst-case complexity

RAM Model of Computation Single-processor RAM; instructions are executed sequentially. To make the notion of a step as machine-independent as possible, assume: each execution of the i-th line takes time c_i, where c_i is a constant.

Algorithm Efficiency Imprecise metric: experimental measurement of running time. Better metric: count the number of times each of the algorithm's operations is executed. However, this exact count is often overkill. Best metric: identify the operation that contributes most to the total running time and count the number of times that operation is executed: the basic operation. The basic operation is usually the statement or set of statements inside the most deeply nested loop.

Counting Steps in Pseudocode Pitfalls: If the line is a subroutine call, then the actual call takes constant time, but the execution of the subroutine being called might not. If the line specifies operations other than primitive ones, then it might take more than constant time. Example: "sort the points by x-coordinate."

Pseudocode
MaxElement(A[1…n])
INPUT: An array A[1…n] of comparable items
OUTPUT: The value of the largest element in A
1. maxval = A[1]
2. for i = 2 to n
3.    if A[i] > maxval
4.       maxval = A[i]
5. return maxval
Give the line number(s) of the basic operation(s). Does this algorithm have different running times on different input arrays of size n? If so, give examples of best- and worst-case input instances.
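The pseudocode above can be transcribed directly into runnable Python; the function name, the 0-based list indexing, and the docstring are conveniences of the sketch, not part of the slide:

```python
def max_element(a):
    """Return the value of the largest element in a non-empty list a."""
    maxval = a[0]                  # line 1: maxval = A[1]
    for i in range(1, len(a)):     # line 2: for i = 2 to n
        if a[i] > maxval:          # line 3: the comparison
            maxval = a[i]          # line 4
    return maxval                  # line 5
```

Note that the comparison on line 3 executes exactly once per loop iteration regardless of the contents of the array.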

Expressing Loops as Summations Express the worst-case running time for the basic operation on the last slide as a summation using the bounds given in the code: T(n) = Σ_{i=2}^{n} 1. Where does the 1 come from?

Rule 1 for sum manipulation In general, the solution to a summation of the form shown on the last slide is (u is for upper and l is for lower): Σ_{i=l}^{u} 1 = u - l + 1. So the solution to the summation we just saw is Σ_{i=2}^{n} 1 = n - 2 + 1 = n - 1.

Pseudocode
SequentialSearch(A[1…n], k)
INPUT: An array A[1…n] of comparable items and a search key k
OUTPUT: The index of the leftmost occurrence of k in A, or -1 if k is not in A
1. i = 1
2. while i < n+1 and A[i] ≠ k
3.    i = i + 1
4. if i < n+1 return i
5. else return -1
Does this algorithm have different running times on different input arrays of size n? What is the worst-case input instance? Give the line number(s) of the basic operation(s).

SequentialSearch(A[1…n], k)
INPUT: An array A[1…n] of comparable items and a search key k
OUTPUT: The index of the leftmost occurrence of k in A, or -1 if k is not in A
1. i = 1
2. while i < n+1 and A[i] ≠ k
3.    i = i + 1
4. if i < n+1 return i
5. else return -1
Express this while loop as a summation and solve it. Since the running time depends on the input instance, the summation depends on whether we use the best, worst, or average case.
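A runnable Python version of the same search may be useful for experimenting with best- and worst-case inputs; it uses 0-based indexing, unlike the slides' 1-based pseudocode:

```python
def sequential_search(a, k):
    """Return the index of the leftmost occurrence of k in a, or -1."""
    i = 0
    while i < len(a) and a[i] != k:   # the comparison a[i] != k drives the cost
        i += 1
    return i if i < len(a) else -1
```

A key that is absent (or sits in the last position) forces the comparison to run for every element.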

Sum expressing the running time of SequentialSearch on a worst-case input In the worst case the key is not in the array, so the comparison executes for every i. The summation and its solution: T(n) = Σ_{i=1}^{n} 1 = n.

General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms
1. Decide on a parameter indicating input size.
2. Identify the algorithm's basic operation.
3. Figure out whether the number of times the basic operation is executed depends only on the size of the input. If it also depends on some additional property, the worst-case, average-case, and best-case efficiencies can be given separately.
4. Set up a sum expressing the number of times the basic operation is executed.
5. Use standard rules of sum manipulation to find a closed-form formula for the count.

InsertionSort(A)
INPUT: An array A[1…n] of comparable items ⟨a_1, a_2, ..., a_n⟩
OUTPUT: A permutation of the input array such that a_1 ≤ a_2 ≤ ... ≤ a_n
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
      // Insert A[j] into the sorted sequence A[1…j-1]
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key
Are best-case and worst-case different?

Counting primitive operations InsertionSort sorts an array of elements from lowest to highest position.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key
Assume that the i-th line takes time c_i, which is a constant. When a while or for loop is executed, the test in the loop header is executed one more time than the loop body. The while loop may be executed a different number of times for different values of j, so let t_j be the number of times the test on line 4 is executed for each value of j.

Analysis of InsertionSort
InsertionSort(A)                      times executed
1. for j = 2 to length[A]             n
2.    key = A[j]                      n-1
3.    i = j - 1                       n-1
4.    while i > 0 and A[i] > key      Σ_{j=2}^{n} t_j
5.       A[i+1] = A[i]                Σ_{j=2}^{n} (t_j - 1)
6.       i = i - 1                    Σ_{j=2}^{n} (t_j - 1)
7.    A[i+1] = key                    n-1

Analysis of InsertionSort
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key
For insertion sort, the running time varies for different input instances. What is the "exact" running time for the best case? What is the "exact" running time for the worst case?
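Counting the while-loop tests empirically shows how t_j varies with the input instance. This Python sketch (0-based indices, instrumented return value added for illustration) returns the total number of line-4 test executions, i.e. Σ t_j:

```python
def insertion_sort(a):
    """Sort list a in place; return how many times the while-test executes."""
    tests = 0
    for j in range(1, len(a)):          # "for j = 2 to length[A]" (0-based)
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:    # line 4
            tests += 1                  # a test that enters the loop body
            a[i + 1] = a[i]
            i -= 1
        tests += 1                      # the final test that ends the loop
        a[i + 1] = key
    return tests
```

On already-sorted input each t_j is 1, giving n-1 tests; on reverse-sorted input t_j = j, giving Σ_{j=2}^{n} j tests.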

Rule 2 for sum manipulation Σ_{j=1}^{n} j = n(n+1)/2. For insertion sort, the inner while loop runs the most when it is always the case that A[i] > key, so the upper limit is t_j = j, and the count for line 4 becomes Σ_{j=2}^{n} j = n(n+1)/2 - 1.

Why analyze running time on worst-case inputs? The worst-case time gives a guaranteed upper bound for any input. For some algorithms, the worst case occurs often. For example, when searching for a given key in an array, the worst case often occurs because the key is not in the array. The average case frequently differs from the worst case by a factor of ½.

Homework 1 (due next class): UniqueElements
INPUT: An array A[1…n] of comparable items
OUTPUT: Returns "true" if all items are unique and "false" otherwise
UniqueElements(A)
1. for i = 1 to length[A]-1
2.    for j = i+1 to length[A]
3.       if A[i] = A[j] return false
4. return true
1. Is there a difference in T(n) for best- and worst-case input? Give examples of a best- and a worst-case instance.
2. Give the line number(s) of the basic operation.
3. Set up a sum representing the number of times the basic operation is executed in the worst case (don't solve it).
4. What is the running time in the worst case? (Solve the summation here.)
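For experimentation, here is a direct Python transcription of the pseudocode (0-based indices; it does not answer the analysis questions above):

```python
def unique_elements(a):
    """Return True if all items in a are unique, False otherwise."""
    for i in range(len(a) - 1):              # "for i = 1 to length[A]-1"
        for j in range(i + 1, len(a)):       # "for j = i+1 to length[A]"
            if a[i] == a[j]:                 # line 3's comparison
                return False
    return True
```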

Proving Correctness Using Loop Invariants Using loop invariants is like mathematical induction: To prove that a property holds, you prove a base case and an inductive step. Showing that the invariant holds before the first iteration is like the base case. Showing that the invariant holds from iteration to iteration is like the inductive step. The termination part differs from the usual use of mathematical induction, in which the inductive step is used infinitely. We stop the “induction” when the loop terminates.

Correctness of InsertionSort
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key
In order to show that InsertionSort actually sorts the items in A in non-decreasing order, what do we need to prove?
1. The algorithm terminates.
2. Let A' denote the output of InsertionSort. Then at termination, A'[1] ≤ A'[2] ≤ A'[3] ≤ … ≤ A'[n].
3. The elements of A' form a permutation of A.

Proving Correctness--Insertion Sort
Loop invariant: Let j be the position of the key in the array A. At the start of each iteration of the for loop, the sub-array A[1...j-1] consists of the elements originally in A[1...j-1], but in sorted order.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key
We need to show that the loop invariant is true
- prior to the first iteration (basis or initialization)
- before each iteration (inductive hypothesis), so it remains true for the next iteration (inductive step or maintenance)
- when the loop terminates, so the invariant allows us to argue that the algorithm is correct (termination).

Proving Correctness--Insertion Sort
Basis (initialization): When j = 2, A[1...j-1] has a single element and is therefore trivially sorted.
Loop invariant: Let j be the position of the key in the array A. At the start of each iteration of the for loop, the sub-array A[1...j-1] consists of the elements originally in A[1...j-1], but in sorted order.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key

Proving Correctness--Insertion Sort
Maintenance Step or Inductive Hypothesis (IHOP): Assume the invariant holds at the beginning of the iteration in which j = k.
Loop invariant: Let j be the position of the key in the array A. At the start of each iteration of the for loop, the sub-array A[1...j-1] consists of the elements originally in A[1...j-1], but in sorted order.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key

Proving Correctness--Insertion Sort
Loop invariant: Let j be the position of the key in the array A. At the start of each iteration of the for loop, the sub-array A[1...j-1] consists of the elements originally in A[1...j-1], but in sorted order.
When j = k, key = A[k]. By the IHOP, we know that the sub-array A[1…k-1] is in sorted order. During this iteration, items A[k-1], A[k-2], A[k-3], and so on are each moved one position to the right until either a value less than or equal to key is found or until k-1 values have been shifted right, at which point the value of key is inserted. Due to the total ordering on the elements, key is inserted in the correct position among the values A[1…k-1], so at the end of iteration k, the sub-array A[1…k] contains only the elements that were originally in A[1…k], but in sorted order. Therefore, the loop invariant holds at the start of the iteration in which j = k+1.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key

Proving Correctness--Insertion Sort
Loop invariant: Let j be the position of the key in the array A. At the start of each iteration of the for loop, the sub-array A[1...j-1] consists of the elements originally in A[1...j-1], but in sorted order.
Termination: The for loop ends when j = n+1. By the loop invariant, the subarray A[1...n] is in sorted order. Therefore, the entire array is sorted and the algorithm ends correctly.
InsertionSort(A)
1. for j = 2 to length[A]
2.    key = A[j]
3.    i = j - 1
4.    while i > 0 and A[i] > key
5.       A[i+1] = A[i]
6.       i = i - 1
7.    A[i+1] = key

In-class exercise: Correctness of BubbleSort
BubbleSort(A) (assume the problem statement is that of InsertionSort)
1. for i = 1 to A.length - 1
2.    for j = A.length downto i+1
3.       if A[j] < A[j-1]
4.          swap A[j] with A[j-1]
- How do we express the data size?
- Does the algorithm have the same or different running time for all inputs of size n?
- Give a summation for the running time of BubbleSort and solve it.
- In order to show that BubbleSort actually sorts the items in A in non-decreasing order, what do we need to prove?
- State a loop invariant for the outer for loop and prove that the loop invariant holds.
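A runnable transcription of the BubbleSort pseudocode (0-based indices, so the bounds shift by one relative to the slide) may help in checking the exercise:

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly bubbling the smallest item left."""
    n = len(a)
    for i in range(n - 1):                  # "for i = 1 to A.length - 1"
        for j in range(n - 1, i, -1):       # "for j = A.length downto i+1"
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]   # swap A[j] with A[j-1]
```

Note that the inner comparison runs the same number of times for every input of size n, even though the number of swaps varies.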

Order of Growth An abstraction to ease analysis, usually expressed in Θ notation. Asymptotic analysis calculates algorithm running time in terms of its rate of growth with increasing problem size. To make this task easier, we can
- drop lower-order terms
- ignore the constant coefficient in the leading term.
For insertion sort the order of growth is n², meaning that when the input size doubles, the running time quadruples. We usually consider one algorithm to be more efficient than another if its worst-case running time has a smaller order of growth.

Asymptotic Analysis Main idea: We are interested in the running time in the limit as the input size grows to infinity. InsertionSort is an example of an incremental algorithm because it processes each element in sequence. In the worst case, T(n) for insertion sort grows like n² (but it is a mistake to say T(n) equals n²). In the best case, T(n) for insertion sort grows like n. We usually consider one algorithm to be more efficient than another if its worst-case running time has a smaller order of growth.

Analysis of Divide-and-Conquer Algorithms The divide-and-conquer paradigm (Ch. 2, Sect. 3):
- Divide the problem into a number of subproblems.
- Conquer the subproblems (solve them). Base case: if the subproblems are small enough, solve them by brute force.
- Combine the subproblem solutions to get the solution to the original problem.
Example: Merge Sort
- Divide the n-element sequence to be sorted into two n/2-element sequences.
- Conquer the subproblems recursively. (The base case occurs when the sequences are of size 1.)
- Combine the resulting two sorted n/2-element sequences by merging them together.
Divide-and-conquer algorithms generally involve recursion. To analyze recursive algorithms, we need to take a closer look at logarithms.

Review of Logarithms Notation convention for logarithms: lg n = log_2 n (binary logarithm); ln n = log_e n (natural logarithm). A logarithm is an inverse exponential function: saying b^x = y is equivalent to saying log_b y = x. While exponential functions grow very fast, log functions grow very slowly.
Properties of logarithms:
log_b(xy) = log_b x + log_b y
log_b(x/y) = log_b x - log_b y
log_b x^a = a log_b x
log_b a = log_x a / log_x b
a = b^(log_b a) (e.g., n^(lg 2) = 2^(lg n) = n)
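These identities are easy to spot-check numerically (a sanity check, not a proof; the particular values of x, y, b, a, and n below are arbitrary):

```python
import math

x, y, b, a, n = 8.0, 32.0, 2.0, 7.0, 1024.0
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))
assert math.isclose(math.log(x ** 3, b), 3 * math.log(x, b))
assert math.isclose(math.log(a, b), math.log(a, 10) / math.log(b, 10))
assert math.isclose(b ** math.log(a, b), a)     # a = b^(log_b a)
assert math.isclose(n ** math.log(2, 2), n)     # n^(lg 2) = n
```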

More Notation
Floor: ⌊x⌋ = the largest integer ≤ x
Ceiling: ⌈x⌉ = the smallest integer ≥ x
Geometric series: Σ_{i=0}^{n} x^i = (x^{n+1} - 1)/(x - 1), for x ≠ 1
Harmonic series: Σ_{i=1}^{n} 1/i = ln n + O(1)
Telescoping series: Σ_{i=1}^{n} (a_i - a_{i-1}) = a_n - a_0

MergeSort(A,p,r)
1. if p < r
2.    q = ⌊(p+r)/2⌋
3.    MergeSort(A,p,q)
4.    MergeSort(A,q+1,r)
5.    Merge(A,p,q,r)
Initial call: MergeSort(A,1,length(A))

Merge(A,p,q,r)
1.  n1 = q-p+1; n2 = r-q
2.  create arrays L[1...n1+1] and R[1...n2+1]
3.  for i = 1 to n1
4.     L[i] = A[p+i-1]
5.  for i = 1 to n2
6.     R[i] = A[q+i]
7.  L[n1+1] = R[n2+1] = ∞
8.  i = j = 1
9.  for k = p to r
10.    if L[i] ≤ R[j]
11.       A[k] = L[i]
12.       i = i + 1
13.    else A[k] = R[j]
14.       j = j + 1
Since Merge is non-recursive, we can analyze it using the rules we have already seen. What are the lines of the basic operation? What is the running time?
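The same algorithm in runnable Python, keeping the sentinel (∞) technique from the pseudocode but using 0-based inclusive indices:

```python
import math

def merge(a, p, q, r):
    """Merge sorted runs a[p..q] and a[q+1..r] (inclusive) using sentinels."""
    left = a[p:q + 1] + [math.inf]       # L with sentinel L[n1+1] = infinity
    right = a[q + 1:r + 1] + [math.inf]  # R with sentinel R[n2+1] = infinity
    i = j = 0
    for k in range(p, r + 1):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

def merge_sort(a, p=0, r=None):
    """Sort a[p..r] in place; defaults cover the whole list."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)
```

The sentinels mean the merge loop never has to test whether either run is exhausted, matching lines 7 and 10 of the pseudocode.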

General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter indicating input size.
2. Identify the algorithm's basic operation.
3. Figure out whether the number of times the basic operation is executed varies on different inputs of the same size; if it can, you may need to consider the cases separately. Usually, concentrate on the worst case.
4. Set up a recurrence relation, with the appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or otherwise ascertain the order of growth.

Analyzing Divide-and-Conquer Algorithms A recursive algorithm can often be described by a recurrence relation that describes the overall runtime on a problem of size n in terms of the runtime on smaller inputs. For divide-and-conquer algorithms, we get recurrences like:
T(n) = Θ(1)                     if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)    otherwise
where
a = the number of subproblems we divide the problem into
n/b = the size of the subproblems
D(n) = the time to divide the size-n problem into subproblems
C(n) = the time to combine the subproblem solutions to get the answer for the problem of size n

Analyzing Merge-Sort Divide (lg n + 1 levels): D(n) = Θ(1), since each division takes constant time. (Θ indicates order of growth.)

Analyzing Merge-Sort Merge (lg n + 1 levels): at each level, merging takes linear time, so C(n) = Θ(n). Recurrence for the worst-case running time of Merge-Sort:
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    otherwise

Analyzing Merge-Sort In T(n) = aT(n/b) + D(n) + C(n):
a = 2 (two subproblems)
n/b = n/2 (each subproblem has size approx. n/2)
D(n) = Θ(1) (just compute the midpoint of the array)
C(n) = Θ(n) (merging can be done by scanning the sorted subarrays)
Recurrence for the worst-case running time of Merge-Sort:
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    otherwise

Solving the Merge-Sort recurrence There are several methods to solve a recurrence relation like the one for merge-sort. The easiest for this exact form of relation is to use the Master Theorem, which we will see in Ch. 4. Another way is to use a recursion tree which shows successive expansions of the recurrence. Start with an initial cost of cn as the root of the tree. The original problem has 2 subproblems of size n/2. Each of those subproblems has 2 subproblems of size n/4. Continue until the size of the subproblems is 1.

Recursion Tree for Merge-Sort Recurrence for the worst-case running time of Merge-Sort:
T(n) = c                 if n = 1
T(n) = 2T(n/2) + cn      otherwise
The tree has lg n + 1 levels (height h = lg n). The root costs cn; the next level has two nodes costing cn/2 each (total cn); the next has four nodes costing cn/4 each (total cn); the bottom level has n leaves costing c each (total cn). Each level sums to cn, so the total is cn lg n + cn.
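The recursion-tree total can be checked by expanding the recurrence directly. This sketch evaluates T(n) = 2T(n/2) + cn with T(1) = c for powers of 2 and compares it to the closed form cn lg n + cn (with c = 1 for simplicity):

```python
def T(n, c=1):
    """Worst-case merge-sort recurrence, for n a power of 2."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# every level of the recursion tree contributes cn, over lg n + 1 levels
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n    # cn*lg n + cn, with c = 1 and lg n = k
```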

END of Lecture 2 (Appendix follows)

Another example: Algorithm PrefixAverages(X):
Input: An array X[1…n] of numbers.
Output: An array A[1…n] of numbers such that A[i] is the average of elements X[1], ..., X[i].
1. create an array A such that length[A] = n
2. s = 0
3. for j = 1 to n
4.    s = s + X[j]
5.    A[j] = s / j
6. return A
What is the line number(s) of the basic operation? Set up a sum for the number of times the basic operation is run. Give the worst-case running time.
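A runnable Python version of PrefixAverages (0-based, so A[j] holds the average of the first j+1 elements):

```python
def prefix_averages(x):
    """Return a list a where a[j] is the average of x[0], ..., x[j]."""
    a = [0.0] * len(x)
    s = 0.0
    for j in range(len(x)):
        s += x[j]            # running sum keeps each iteration O(1)
        a[j] = s / (j + 1)   # "A[j] = s / j" in the 1-based pseudocode
    return a
```

Maintaining the running sum s avoids recomputing X[1]+…+X[j] from scratch on every iteration, which would make the algorithm quadratic instead of linear.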

Proving Correctness--PrefixAverages
PrefixAverages(X)
1. create array A of length n
2. s = 0
3. for j = 1 to n
4.    s = s + X[j]
5.    A[j] = s/j
6. return A
Loop invariant: At the start of each iteration of the for loop, s = X[1] + … + X[j-1] and A[j-1] = s/(j-1).