
 The amount of time it takes a computer to solve a particular problem depends on:
 The hardware capabilities of the computer
 The efficiency of the software used to solve the problem.

 The more you use your computer, the slower it tends to get, largely because people customize their systems to make work easier.
 These customizations typically make the boot process take much longer than when the computer was new.
 They can also make the computer less stable and more prone to crashing.

 Often when you install free software, it installs other software alongside it, which consumes system resources and slows your system down.
 "Free" software still has to be paid for somehow. Sometimes this is done by driving sales of other products from the same vendor; other times other vendors pay the original vendor to piggyback their software on the install; still other software bundles ads.
 The net result is that over time most computers become slower.

 Computational complexity is the study of the complexity of algorithms.
 Problems can often be solved in several ways, so we measure the complexity of a particular solution rather than the complexity of the problem itself.
 Complexity refers to the amount of resources an algorithm takes, not how difficult the algorithm is to understand.

 The resources an algorithm typically needs are space and time.
 Space measures the amount of memory necessary to store the data used in implementing the algorithm.
 Time measures the number of steps used in executing the algorithm. We do not measure time in seconds, because the same algorithm runs in fewer seconds on a faster computer; seconds measure the hardware as much as the algorithm.

 Both time and space are typically measured as functions of the amount of data to be processed.
 For example, alphabetizing 100 names will take more space and more steps than alphabetizing 25 names.
 Thus computational complexity is typically measured in terms of the number of items, n, being processed.

 Since we are interested in speed, we will examine only the number of steps taken to execute an algorithm.

 To determine whether an integer is odd or even, one only needs to look at its last digit, so the time this problem takes is independent of the number of digits in the number. This algorithm has constant time complexity.
 To add two integers with n digits (as we learned in elementary school), we start at the rightmost digits, add them, write down the result, carry one or not, move one place to the left, and repeat this process until the numbers are finished. This algorithm has time complexity n.
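Both examples can be sketched in Python. The function names `is_even` and `add_by_digits` are made up for illustration; `add_by_digits` mimics the elementary-school procedure digit by digit rather than relying on the machine's built-in addition.

```python
def is_even(n):
    # Constant time: only the last digit decides parity,
    # no matter how many digits n has.
    return n % 10 % 2 == 0

def add_by_digits(a, b):
    # Grade-school addition: one right-to-left pass over the digits,
    # carrying as needed -- about n steps for two n-digit numbers.
    a, b = str(a), str(b)
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for i in range(width - 1, -1, -1):
        total = int(a[i]) + int(b[i]) + carry
        out.append(str(total % 10))
        carry = total // 10
    if carry:
        out.append(str(carry))
    return int("".join(reversed(out)))

print(is_even(123456788))       # True
print(add_by_digits(478, 896))  # 1374
```

Note that the loop in `add_by_digits` runs exactly once per digit column, which is where the complexity n comes from.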

 Multiplying two n-digit integers has time complexity n². Start at the rightmost digit of the bottom number, multiply it by the rightmost digit of the top number, write down the last digit of the product, and carry the rest to the next column. Repeat this step for each digit of the top number, adding in the carry each time. This takes n steps, since the top number has n digits. Now repeat this n-step process for each of the remaining digits of the bottom number, moving the result one place to the left each time. That is n groups of n steps, or n² steps in total; adding the columns up takes about n more steps.
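A sketch of the same grade-school procedure in Python (`multiply_by_digits` is a hypothetical name); the two nested loops are exactly the n groups of n steps described above.

```python
def multiply_by_digits(a, b):
    a_digits = [int(d) for d in reversed(str(a))]  # rightmost digit first
    b_digits = [int(d) for d in reversed(str(b))]
    total = 0
    # Outer loop: one pass per digit of the bottom number (n passes).
    for shift, bd in enumerate(b_digits):
        partial, carry = 0, 0
        # Inner loop: multiply that digit against every digit of the
        # top number (n steps), carrying between columns.
        for place, ad in enumerate(a_digits):
            prod = ad * bd + carry
            partial += (prod % 10) * 10 ** place
            carry = prod // 10
        partial += carry * 10 ** len(a_digits)
        # Shift the partial product one place left per pass, then add it in.
        total += partial * 10 ** shift
    return total

print(multiply_by_digits(1234, 5678))  # 7006652
```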

 If we have a list of n items, we can search for a particular value by starting at the beginning and going through the list until we either find the value or reach the end of the list. This takes n steps, so the algorithm has complexity n. It is called linear search.
 On the other hand, if the list is in order we can use binary search. First look at the value in the middle. If it is too small, it is eliminated, and so are all the values before it. We then perform the same step on the part of the list that is left. Each time we eliminate half the remaining data, until we have either found the value or exhausted the list. This algorithm has complexity log n, because doubling the size of the list adds only one more step to the process.
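Both searches are easy to sketch in Python; note that binary search requires the list to already be in order.

```python
def linear_search(items, target):
    # Worst case: every one of the n items is examined.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # Each comparison throws away half of what remains,
    # so about log2(n) comparisons suffice.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # middle value too small: discard it and all before it
        else:
            hi = mid - 1  # too large: discard it and all after it
    return -1

names = sorted(["Ada", "Grace", "Alan", "Edsger", "Donald"])
print(binary_search(names, "Grace"))  # 4
print(linear_search(names, "Grace"))  # 4
```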

 A salesman must meet with clients and chooses the order in which to meet them so as to minimize travel time, mileage, expense, or some other cost.
 One possible method of finding the most efficient order is to look at all the possibilities and choose the best one. This method is referred to as exhaustive search. For example, if there are three clients A, B, and C, the salesman could go from home to A to B to C and back home: HABCH. There are five other possible choices: HACBH, HBACH, HBCAH, HCABH, and HCBAH.

 For N clients there are N! (N factorial) possible orders in which to meet them: the first client can be chosen in N ways, the second in N−1 ways, and so on. N! = 1*2*3*…*N, so 4! = 1*2*3*4 = 24 different routes.
 If the salesman had 10 clients, there would be 10! = 3,628,800 different routes to check.
 Thus the time complexity of this algorithm is N!.
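Exhaustive search over all orderings can be sketched with `itertools.permutations`. The client names and straight-line coordinates here are made up purely for illustration.

```python
import math
from itertools import permutations

# Hypothetical locations: home plus three clients on a unit square.
coords = {"H": (0, 0), "A": (1, 0), "B": (1, 1), "C": (0, 1)}

def dist(p, q):
    (x1, y1), (x2, y2) = coords[p], coords[q]
    return math.hypot(x1 - x2, y1 - y2)

def route_length(order):
    # Total distance for Home -> clients in the given order -> Home.
    stops = ["H", *order, "H"]
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

def best_route(clients):
    # Exhaustive search: every one of the n! orders is tried.
    return min(permutations(clients), key=route_length)

print(best_route(["A", "B", "C"]))  # ('A', 'B', 'C'), total length 4.0
```

This is fine for a handful of clients, but since `permutations` generates all n! orders, the running time explodes factorially as clients are added.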

 In chess, the two opponents are referred to as White and Black, based on the color of the pieces that each player uses.
 White always moves first. A move in chess is completed when White has moved and Black has made a move in reply.
 Half a move is referred to as a ply.
 One method of determining what move to make is to examine how your opponent could respond to each move you might make, and see which of your moves works out best.

 In order to refine the estimate, you also need to see how you might respond to the move that your opponent makes in response to your move.
 So, for example, with a 4-ply search you would evaluate a move by looking at how your opponent would respond to the move you made in response to his move in response to yours.
 This algorithm assumes that your opponent will always make the best move. Typically, at any given position there are about 20 reasonable moves that you can make in chess.

 Thus a four-ply search means examining 20 × 20 × 20 × 20 = 20⁴ = 160,000 positions.
 This algorithm has exponential time complexity: each additional ply multiplies the work by about 20.
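Counting positions for a fixed branching factor makes the exponential growth concrete; 20 reasonable moves per position is the rough figure assumed above.

```python
def positions_at_depth(depth, branching=20):
    # Number of positions a full search examines at the given ply depth,
    # assuming a fixed branching factor at every position.
    count = 1
    for _ in range(depth):
        count *= branching
    return count

for ply in (1, 2, 3, 4):
    print(ply, positions_at_depth(ply))
# A 4-ply search: 20 * 20 * 20 * 20 = 160,000 positions
```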

 Constant time complexity means that an algorithm requires approximately the same amount of time regardless of the amount of data.
 Linear time complexity doubles the time when the amount of data doubles.
 n² time complexity quadruples the time when the amount of data doubles.
 Log n time complexity adds only one step when the amount of data doubles.
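These doubling rules can be checked with a line of arithmetic each; a minimal sketch:

```python
import math

n = 1024
# Constant: the step count is the same for n items and 2n items.
# Linear: 2n items means twice the steps.
print((2 * n) // n)                      # 2   -> time doubles
# Quadratic: (2n)^2 / n^2 = 4.
print((2 * n) ** 2 // n ** 2)            # 4   -> time quadruples
# Logarithmic: log2(2n) - log2(n) = 1.
print(math.log2(2 * n) - math.log2(n))   # 1.0 -> just one more step
```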

 With exponential and factorial time complexity, the time can become intractable.
 For example, take the chess problem and a computer that can analyze one billion positions per second (which may be faster than any computer currently in existence). A search of 2, 4, or 6 ply could be evaluated in less than one second, but an 8-ply search takes 25.6 seconds per move, and a 10-ply search takes almost 3 hours per move.
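These figures follow from simple arithmetic, assuming about 20 moves per position and a (hypothetical) machine evaluating one billion positions per second:

```python
RATE = 1_000_000_000  # positions evaluated per second (hypothetical machine)

for ply in (2, 4, 6, 8, 10):
    seconds = 20 ** ply / RATE
    print(f"{ply:2d} ply: {seconds:12.4f} seconds")
# 8 ply -> 25.6 seconds; 10 ply -> 10,240 seconds, almost 3 hours
```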

 Factorial complexity is even worse.
 Consider the traveling salesman problem and a computer that can check a billion routes per second:
 3, 5, or 10 clients can be evaluated in less than one second.
 But 15 clients takes almost 22 minutes to evaluate (15! is about 1.3 trillion routes),
 And 20 clients takes more than 77 years to evaluate.
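The same arithmetic, this time with factorials and a (hypothetical) billion routes checked per second:

```python
import math

RATE = 1_000_000_000  # routes checked per second (hypothetical machine)

for clients in (5, 10, 15, 20):
    seconds = math.factorial(clients) / RATE
    print(f"{clients:2d} clients: {seconds:.6g} seconds")
# 15 clients -> about 1,308 seconds (almost 22 minutes)
# 20 clients -> about 2.4e9 seconds (more than 77 years)
```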

 Some time complexity levels do not scale.
 For both exponential and factorial time complexity, as the data grows, algorithms of this level of complexity can no longer be run in a practical amount of time.
 The only recourse in this case is to use a different algorithm or to accept a sub-optimal solution to the problem, i.e., find a good route or a good chess move, but not necessarily the best one.