Introduction to Algorithms


Introduction to Algorithms Lecture # 1

Algorithm
An algorithm is a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transforms the input into the output. An algorithm can also be viewed as a tool for solving a well-defined computational problem: the statement of the problem specifies in general terms the desired input/output relationship, and the algorithm describes a specific computational procedure for achieving that relationship.

Algorithm example
Problem: Sort a sequence of numbers into nondecreasing order.
Input: A sequence of n numbers (a1, a2, a3, ..., an).
Output: A permutation (reordering) of the input sequence such that a1 <= a2 <= a3 <= ... <= an.

Algorithm example
We have a number of good sorting algorithms (quicksort, bubble sort, heapsort, selection sort, insertion sort, Shell sort, etc.). The choice among them depends on:
- the number of items to be sorted
- the extent to which the items are already somewhat sorted
- possible restrictions on the item values
- the architecture of the computer
- the kind of storage to be used (RAM, virtual memory, disk, etc.)
One of these algorithms, insertion sort, is sketched below.
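As a minimal sketch (in Java, which is assumed here; the slides do not give code for any sorting algorithm), here is insertion sort, one of the algorithms listed above. It rearranges the array in place into nondecreasing order:

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];            // element to insert into the sorted prefix a[0..i-1]
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];       // shift larger elements one place to the right
                j--;
            }
            a[j + 1] = key;
        }
    }

For example, calling insertionSort on the array {5, 2, 4, 6, 1, 3} leaves it as {1, 2, 3, 4, 5, 6}.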

An algorithm is said to be correct if, for every input instance, it halts with the correct output.

What kind of problems are solved by algorithms?
Bioinformatics: the Human Genome Project involves determining the sequence of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis; each of these tasks requires sophisticated algorithms.
The Internet: search, for example.
Electronic commerce: public-key cryptography and digital signature verification.

Algorithms as a technology
Even with good software engineering, two limited resources make the choice of algorithm matter: computer speed (time) and memory (space).

Be careful to differentiate between:
Performance: how much time/memory/disk/... is actually used when a program is run. This depends on the machine, the compiler, etc., as well as on the code.
Complexity: how the resource requirements of a program or algorithm scale, i.e., what happens as the size of the problem being solved gets larger.
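A small Java sketch (an illustration assumed here, not part of the original slides) of the difference: the time printed below is a performance measurement that depends on the machine and JVM, while the complexity of the scan is linear in the array length no matter where it runs.

    public class PerformanceVsComplexity {
        static long sum(int[] a) {
            long total = 0;
            for (int x : a) {
                total += x;            // one basic operation per element: O(N)
            }
            return total;
        }

        public static void main(String[] args) {
            for (int n : new int[] {1_000_000, 2_000_000}) {
                int[] a = new int[n];
                long start = System.nanoTime();         // measured performance
                long result = sum(a);
                long elapsed = System.nanoTime() - start;
                System.out.println("n = " + n + ", sum = " + result + ", time = " + elapsed + " ns");
            }
        }
    }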

Time requirement of a method
The time required by a method is proportional to the number of "basic operations" that it performs. Some examples of basic operations:
- one arithmetic operation (e.g., +, *)
- one assignment
- one test (e.g., x == 0)
- one read
- one write (of a primitive type)
The short fragment below illustrates this kind of counting.
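A small illustrative fragment (assumed, not from the slides), annotated with the basic operations each line performs:

    static int example(int[] a, int[] b, int i) {
        int x = a[i] + b[i];    // two array reads, one addition, one assignment
        if (x == 0) {           // one test
            x = 1;              // one assignment
        }
        return x;               // one write of a primitive value
    }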

Constant Time
Some methods perform the same number of operations every time they are called. For example, the size method of the List class always performs just one operation (return numItems;), so the number of operations is independent of the size of the list. We say that methods like this (that always perform a fixed number of basic operations) require constant time.
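A minimal sketch of such a List class (the class name and fields are assumptions; the slides only show the statement return numItems;):

    public class SimpleList {
        private int[] items = new int[10];   // backing array
        private int numItems = 0;            // number of elements currently stored

        public int size() {
            return numItems;                 // one operation regardless of list size: O(1)
        }
    }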

The Problem Size or the Input Size
Other methods may perform different numbers of operations, depending on the value of a parameter or a field. For example, in the array implementation of the List class, the remove method has to move all of the items that were to the right of the removed item (to fill in the gap). The number of moves depends both on the position of the removed item and on the number of items in the list. We call the important factors (the parameters and/or fields whose values affect the number of operations performed) the problem size or the input size.

When we consider the complexity of a method, we don't really care about the exact number of operations that are performed; instead, we care about how the number of operations relates to the problem size. If the problem size doubles, does the number of operations stay the same? double? increase in some other way? For constant-time methods like the size method, doubling the problem size does not affect the number of operations (which stays the same).

Furthermore, we are usually interested in the worst case: the largest number of operations that might be performed for a given problem size (the other cases being the best case and the average case).

For example, as discussed earlier, the remove method has to move all of the items that come after the removed item one place to the left in the array. In the worst case, all of the items in the array must be moved. Therefore, in the worst case, the time for remove is proportional to the number of items in the list, and we say that the worst-case time for remove is linear in the number of items in the list. For a linear-time method, if the problem size doubles, the number of operations also doubles.
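A sketch of such a remove method for an array-based list (the field names items and numItems are assumptions, matching the sketch shown earlier; this is an illustration, not the course's exact code):

    public void remove(int pos) {
        for (int k = pos + 1; k < numItems; k++) {
            items[k - 1] = items[k];   // each later item moves one place to the left
        }
        numItems--;
        // Worst case (pos == 0): numItems - 1 moves, so the time is linear
        // in the number of items in the list.
    }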

Big O Notation
Big O notation is used in computer science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and it can be used to describe the execution time required or the space used (e.g., in memory or on disk) by an algorithm.

O(1)
O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

The add procedure

    int add(int a, int b) {
        int c;            // constant
        c = a + b;        // constant
        return c;         // constant
    }

O(N)
O(N) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set. The example below also demonstrates how Big O favors the worst-case scenario: a matching value could be found during any iteration of the for loop and the function would return early, but Big O notation always assumes the upper limit, where the algorithm performs the maximum number of iterations.

Example: searching within an array

    boolean ContainsValue(int[] sample, int value) {
        for (int i = 0; i < sample.length; i++) {   // up to n iterations
            if (sample[i] == value) {
                return true;
            }
        }
        return false;
    }

O(N²)
O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc.

Example

    boolean ContainsDuplicates(String[] strings) {
        for (int i = 0; i < strings.length; i++) {
            for (int j = 0; j < strings.length; j++) {
                if (i == j) {                        // don't compare an element with itself
                    continue;
                }
                if (strings[i].equals(strings[j])) { // value comparison, not reference comparison
                    return true;
                }
            }
        }
        return false;
    }

Logarithms
Binary search is a technique used to search sorted data sets. It works by selecting the middle element of the data set, essentially the median, and comparing it against a target value. If the values match, it returns success. If the target value is higher than the value of the probe element, it takes the upper half of the data set and performs the same operation on it. Likewise, if the target value is lower than the value of the probe element, it performs the operation on the lower half. It continues to halve the data set with each iteration until the value has been found or until it can no longer split the data set.
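An iterative Java sketch of the search just described (an illustration assumed here, not code from the slides); it returns the index of the target, or -1 if the target is not present:

    static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;   // middle element of the remaining range
            if (sorted[mid] == target) {
                return mid;                     // found
            } else if (sorted[mid] < target) {
                low = mid + 1;                  // target can only be in the upper half
            } else {
                high = mid - 1;                 // target can only be in the lower half
            }
        }
        return -1;                              // range is empty: not found
    }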

This type of algorithm is described as O(log N). The iterative halving of the data set described in the binary search example produces a growth curve that rises steeply at first and flattens out as the size of the data set increases: for example, if an input data set containing 10 items takes one second to complete, a data set containing 100 items takes two seconds, and a data set containing 1000 items takes three seconds.

Doubling the size of the input data set has little effect on the running time: after a single iteration of the algorithm the data set has been halved, putting it on a par with an input data set of half the original size. This makes algorithms like binary search extremely efficient when dealing with large data sets.

Test yourself
What is the worst-case complexity of each of the following code fragments?
Two loops in a row:

    for (i = 0; i < N; i++) {
        // sequence of statements
    }
    for (j = 0; j < M; j++) {
        // sequence of statements
    }

How would the complexity change if the second loop went to N instead of M?

A nested loop followed by a non-nested loop:

    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            // sequence of statements
        }
    }
    for (k = 0; k < N; k++) {
        // sequence of statements
    }

A nested loop in which the number of times the inner loop executes depends on the value of the outer loop index:

    for (i = 0; i < N; i++) {
        for (j = N; j > i; j--) {
            // sequence of statements
        }
    }