Pattern Matching
© 2004 Goodrich, Tamassia

Strings
A string is a sequence of characters. Examples of strings: a Java program, an HTML document, a DNA sequence, a digitized image.
An alphabet Σ is the set of possible characters for a family of strings. Examples of alphabets: ASCII, Unicode, {0, 1}, {A, C, G, T}.
Let P be a string of size m:
- A substring P[i .. j] of P is the subsequence of P consisting of the characters with ranks between i and j.
- A prefix of P is a substring of the form P[0 .. i].
- A suffix of P is a substring of the form P[i .. m - 1].
Given strings T (the text) and P (the pattern), the pattern matching problem consists of finding a substring of T equal to P.
Applications: text editors, search engines, biological research.

Brute-Force Algorithm
The brute-force pattern matching algorithm compares the pattern P with the text T for each possible shift of P relative to T, until either a match is found or all placements of the pattern have been tried.

Algorithm BruteForceMatch(T, P)
  Input: text T of size n and pattern P of size m
  Output: starting index of a substring of T equal to P, or -1 if no such substring exists
  for i ← 0 to n - m            { test shift i of the pattern }
    j ← 0
    while j < m ∧ T[i + j] = P[j]
      j ← j + 1
    if j = m
      return i                  { match at i }
    { else: mismatch, try the next shift }
  return -1                     { no match anywhere }

Brute-force pattern matching runs in time O(nm). Example of worst case: T = aaa … ah, P = aaah. Such worst cases may occur in images and DNA sequences, but are unlikely in English text.
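A direct transcription of the pseudocode above (a minimal sketch; the function name is my own):

```python
def brute_force_match(text, pattern):
    """Return the starting index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # test shift i of the pattern
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:
            return i                    # match at i
    return -1                           # no match anywhere
```

On the worst-case inputs described above, every shift scans nearly the whole pattern, which is exactly the O(nm) bound.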

Boyer-Moore Heuristics
The Boyer-Moore pattern matching algorithm is based on two heuristics:
- Looking-glass heuristic: compare P with a subsequence of T moving backwards.
- Character-jump heuristic: when a mismatch occurs at T[i] = c:
  - if P contains c, shift P to align the last occurrence of c in P with T[i];
  - otherwise, shift P to align P[0] with T[i + 1].

Last-Occurrence Function
Boyer-Moore's algorithm preprocesses the pattern P and the alphabet Σ to build the last-occurrence function L, mapping Σ to integers, where L(c) is defined as the largest index i such that P[i] = c, or -1 if no such index exists.
Example: Σ = {a, b, c, d}, P = abacab

  c      a   b   c   d
  L(c)   4   5   3  -1

The last-occurrence function can be represented by an array indexed by the numeric codes of the characters, and can be computed in time O(m + s), where m is the size of P and s is the size of Σ.

The Boyer-Moore Algorithm

Algorithm BoyerMooreMatch(T, P, Σ)
  L ← lastOccurrenceFunction(P, Σ)
  i ← m - 1
  j ← m - 1
  repeat
    if T[i] = P[j]
      if j = 0
        return i                { match at i }
      else
        i ← i - 1
        j ← j - 1
    else                        { character jump }
      l ← L[T[i]]
      i ← i + m - min(j, 1 + l)
      j ← m - 1
  until i > n - 1
  return -1                     { no match }

The shift m - min(j, 1 + l) covers two cases:
- Case 1 (1 + l ≤ j): the last occurrence of T[i] in P is to the left of the mismatch, so the pattern is shifted to align that occurrence with T[i].
- Case 2 (1 + l > j): aligning the last occurrence would shift the pattern backwards, so the pattern is shifted forward by m - j instead.
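A runnable sketch of the simplified Boyer-Moore above, using only the character-jump (last-occurrence) heuristic; the function names are mine, and the text is assumed to contain only characters from the given alphabet:

```python
def last_occurrence(pattern, alphabet):
    """L(c): largest index i with pattern[i] == c, or -1 if c is not in pattern."""
    L = {c: -1 for c in alphabet}
    for i, c in enumerate(pattern):
        L[c] = i
    return L

def boyer_moore_match(text, pattern, alphabet):
    """Simplified Boyer-Moore using only the character-jump heuristic."""
    n, m = len(text), len(pattern)
    if m == 0:
        return 0
    L = last_occurrence(pattern, alphabet)
    i = j = m - 1
    while i <= n - 1:
        if text[i] == pattern[j]:
            if j == 0:
                return i                # match at i
            i -= 1                      # looking-glass: scan backwards
            j -= 1
        else:                           # character jump
            l = L[text[i]]
            i += m - min(j, 1 + l)
            j = m - 1
    return -1                           # no match
```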


Analysis
Boyer-Moore's algorithm runs in time O(nm + s). Example of worst case: T = aaa … a, P = baaa. The worst case may occur in images and DNA sequences, but is unlikely in English text. Boyer-Moore's algorithm is significantly faster than the brute-force algorithm on English text.

The KMP Algorithm - Motivation
The Knuth-Morris-Pratt algorithm compares the pattern to the text left to right, but shifts the pattern more intelligently than the brute-force algorithm. When a mismatch occurs, what is the most we can shift the pattern so as to avoid redundant comparisons? Answer: the largest prefix of P[0..j] that is a suffix of P[1..j].

  T:  . . a b a a b x . . . .
  P:      a b a a b a
                    ^ mismatch at j = 5
  P:            a b a a b a    (after the shift)

There is no need to repeat the comparisons against the matched prefix; comparing resumes at the mismatch position in the text.

KMP Failure Function
Knuth-Morris-Pratt's algorithm preprocesses the pattern to find matches of prefixes of the pattern with the pattern itself. The failure function F(j) is defined as the size of the largest prefix of P[0..j] that is also a suffix of P[1..j]. Knuth-Morris-Pratt's algorithm modifies the brute-force algorithm so that if a mismatch occurs at P[j] ≠ T[i], we set j ← F(j - 1).

Example: P = abacab

  j     0  1  2  3  4  5
  P[j]  a  b  a  c  a  b
  F(j)  0  0  1  0  1  2

The KMP Algorithm
The failure function can be represented by an array and can be computed in O(m) time. At each iteration of the while-loop, either i increases by one, or the shift amount i - j increases by at least one (observe that F(j - 1) < j). Hence, there are no more than 2n iterations of the while-loop. Thus, KMP's algorithm runs in optimal time O(m + n).

Algorithm KMPMatch(T, P)
  F ← failureFunction(P)
  i ← 0
  j ← 0
  while i < n
    if T[i] = P[j]
      if j = m - 1
        return i - j            { match }
      else
        i ← i + 1
        j ← j + 1
    else
      if j > 0
        j ← F[j - 1]
      else
        i ← i + 1
  return -1                     { no match }

Computing the Failure Function
The construction is similar to the KMP algorithm itself. At each iteration of the while-loop, either i increases by one, or the shift amount i - j increases by at least one (observe that F(j - 1) < j). Hence, there are no more than 2m iterations of the while-loop, and the failure function can be computed in O(m) time.

Algorithm failureFunction(P)
  F[0] ← 0
  i ← 1
  j ← 0
  while i < m
    if P[i] = P[j]
      F[i] ← j + 1              { we have matched j + 1 characters }
      i ← i + 1
      j ← j + 1
    else if j > 0
      j ← F[j - 1]              { use failure function to shift P }
    else
      F[i] ← 0                  { no match }
      i ← i + 1
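Both pseudocode fragments above transcribe directly into Python (a sketch; the function names are mine):

```python
def failure_function(pattern):
    """F[j]: size of the largest prefix of pattern[0..j] that is
    also a suffix of pattern[1..j]."""
    m = len(pattern)
    F = [0] * m
    i, j = 1, 0
    while i < m:
        if pattern[i] == pattern[j]:
            F[i] = j + 1            # we have matched j + 1 characters
            i += 1
            j += 1
        elif j > 0:
            j = F[j - 1]            # use failure function to shift
        else:
            F[i] = 0                # no match
            i += 1
    return F

def kmp_match(text, pattern):
    """O(n + m) pattern matching; returns first match index or -1."""
    n, m = len(text), len(pattern)
    if m == 0:
        return 0
    F = failure_function(pattern)
    i = j = 0
    while i < n:
        if text[i] == pattern[j]:
            if j == m - 1:
                return i - j        # match
            i += 1
            j += 1
        elif j > 0:
            j = F[j - 1]
        else:
            i += 1
    return -1                       # no match
```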

Example: P = abacab

  j     0  1  2  3  4  5
  P[j]  a  b  a  c  a  b
  F(j)  0  0  1  0  1  2

Binary Failure Function
For your assignment, you are to compute the binary failure function. Since there are only two possible characters, when you fail at a character you know what you were looking at when you failed. Thus, you store the maximum number of characters that match the previous characters of the pattern AND the opposite of the current character.

Tries: Basic Ideas
- Preprocess a fixed text rather than the pattern
- Store strings in trees, one character per node
- Used in search engines, dictionaries, and prefix queries
- Assumes a fixed alphabet with a canonical ordering
- Use a special character as a word terminator

Tries are great if:
- you are doing word matching (you know where each word begins)
- the text is large, immutable, and searched often
Web crawlers (for example) can afford to preprocess text ahead of time, knowing that MANY people will want to search the contents of all web pages.

Facts
- Prefixes of length i stop at level i
- # leaves = # strings (words in the text)
- A trie is a multi-way tree, used similarly to the way we use a binary search tree
- Tree height = length of the longest word
- Tree size is O(combined length of all words)
- Insertion and search work as in multi-way ordered trees: O(word length)
- Supports word matching, not substring matching
- Could use 27-ary trees instead (26 letters plus the terminator)
- Stop words are excluded from the trie, as no one will search for them
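A minimal trie sketch along these lines (it uses a dict of children rather than the 27-ary array mentioned above, and a boolean flag as the word terminator; class names are mine):

```python
class TrieNode:
    def __init__(self):
        self.children = {}          # character -> TrieNode
        self.is_word = False        # plays the role of the word terminator

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        """O(word length): walk down, creating nodes as needed."""
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        """O(word length): exact word matching (not substring matching)."""
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word
```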

Trie Example (figure not reproduced in this transcript)

Compressed Tries
When a node has only one child, space is wasted, so store substrings at the nodes instead of single characters. The tree size is then O(s), where s is the number of words.

Compressed Tries with Indexes
Storing index ranges into the text at each node, instead of the substrings themselves, avoids storing variable-length strings.

Suffix Tries
- A tree of all suffixes of a string
- Used for substring matching, not just full words
- Used in pattern matching: a substring is the prefix of a suffix (all words come from the same string)
- Changes a linear search for the beginning of the pattern into a tree search

Suffix Tries are efficient
- In space: O(n) rather than O(n²), because in the compressed representation characters only need to appear once
- In time: O(dn) to construct and O(dm) to use, where d is the size of the alphabet
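The idea can be sketched with nested dicts. Note that this naive, uncompressed version takes O(n²) space in the worst case; the O(n) space bound quoted above requires the compressed (suffix tree) representation. The class name is mine:

```python
class SuffixTrie:
    """Naive (uncompressed) suffix trie built from every suffix of the text."""
    def __init__(self, text):
        self.root = {}
        for i in range(len(text)):          # insert every suffix
            node = self.root
            for ch in text[i:]:
                node = node.setdefault(ch, {})

    def contains(self, pattern):
        """Substring search: a substring is the prefix of some suffix,
        so it is a path starting at the root."""
        node = self.root
        for ch in pattern:
            if ch not in node:
                return False
            node = node[ch]
        return True
```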

Search Engines
- An inverted index (inverted file) has words as keys and occurrence lists (web pages) as values (access by content)
- Also called a concordance; stop words are omitted
- A trie can be used effectively to store the keys
- Multiple keywords return the intersection of their occurrence lists
- Keeping occurrence lists as sequences in a fixed order allows intersection by merging
- Ranking the results is the major challenge
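A toy inverted index along the lines described above (the stop-word list and page data are illustrative, not from the slides; set intersection stands in for list merging):

```python
from collections import defaultdict

STOP_WORDS = {"the", "a", "of", "to"}       # illustrative stop-word list

def build_inverted_index(pages):
    """Map each non-stop word to the sorted list of page ids containing it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            if word not in STOP_WORDS:
                index[word].add(page_id)
    return {w: sorted(ids) for w, ids in index.items()}

def search(index, *keywords):
    """Multiple keywords return the intersection of their occurrence lists."""
    result = None
    for kw in keywords:
        occurrences = set(index.get(kw, []))
        result = occurrences if result is None else result & occurrences
    return sorted(result) if result else []
```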

Text Compression and Similarity
A. Text Compression
1. Text characters are encoded as binary integers; different encodings may result in more or fewer bits to represent the original text.
   a. Compression is achieved by using variable-size, rather than fixed-size (e.g. ASCII or Unicode), encodings.
   b. Compression is valuable for reduced-bandwidth communication and for minimizing storage space.

2. Huffman encoding
   a. Shorter encodings for more frequently occurring characters.
   b. Prefix code: no code can be a prefix of another.
   c. Most useful when character frequencies differ widely.
The encoding may change from text to text, or may be defined for a class of texts, like Morse code.

The Huffman algorithm uses binary trees:
- Start with an individual tree for each character, storing the character and its frequency at the root.
- Iteratively merge the two trees with the smallest frequencies at the root, writing the sum of the frequencies of the children at each new internal node.
This is a greedy algorithm.

Complexity is O(n + d log d), where the text of n characters has d distinct characters:
- n is to process the text, calculating frequencies
- d log d is the cost of heaping the frequency trees, then iteratively removing two, merging, and inserting one
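The merge loop can be sketched with a binary heap, matching the analysis above. This version returns the codes rather than the trees; the tie-breaking counter is only an implementation detail to keep heap entries comparable:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Greedy Huffman construction: repeatedly merge the two lowest-
    frequency trees. Returns a dict mapping character -> bit string."""
    freq = Counter(text)                          # O(n): count frequencies
    # Heap entry: (frequency, tiebreaker, tree); a tree is either a
    # character (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                           # O(d)
    if not heap:
        return {}
    if len(heap) == 1:                            # degenerate one-symbol text
        return {heap[0][2]: "0"}
    tie = len(heap)
    while len(heap) > 1:                          # O(d log d) merging
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (t1, t2)))
        tie += 1
    codes = {}
    def walk(tree, prefix):                       # read codes off the tree
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```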

Text Similarity
Detect similarity in order to focus on, or ignore, slight differences:
a. DNA analysis
b. Web crawlers: omit duplicate pages, distinguish between similar ones
c. Updated files, archiving, delta files, and editing distance

Longest Common Subsequence
One measure of similarity is the length of the longest common subsequence between two texts. This is NOT a contiguous substring, so it loses a great deal of structure. I doubt that it is an effective metric for similarity, unless the subsequence is a substantial part of the whole text.

The LCS algorithm uses the dynamic programming approach. How do we write LCS in terms of other LCS problems? The parameters for the smaller problems being composed to solve a larger problem are the lengths of a prefix of X and a prefix of Y.

Find the recursion: let L(i, j) be the length of the LCS between the strings X(0..i) and Y(0..j). Suppose we know L(i, j), L(i+1, j) and L(i, j+1), and we want to know L(i+1, j+1):
a. If X[i+1] = Y[j+1], then it is L(i, j) + 1.
b. If X[i+1] ≠ Y[j+1], then it is max(L(i, j+1), L(i+1, j)).



The algorithm initializes the table for L by putting 0's along the borders, then a simple nested loop fills in the values row by row. Thus it runs in O(nm). While the algorithm only gives the length of the LCS, the actual string can easily be found by working backward through the table (and strings), noting the points at which the two characters are equal.
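A sketch of the table-filling and backtracking described above (function name mine):

```python
def lcs(X, Y):
    """L[i][j] = length of the LCS of X[:i] and Y[:j]; O(nm) time."""
    n, m = len(X), len(Y)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # 0's along the borders
    for i in range(1, n + 1):                   # simple nested loop,
        for j in range(1, m + 1):               # filling row by row
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Work backward through the table to recover the actual string,
    # noting the points at which the two characters are equal.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```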

This material is not in your text (except as exercises).
Sequence Comparisons
Problems in molecular biology involve finding the minimum number of edit steps required to change one string into another. Three types of edit steps: insert, delete, replace.
Example: changing abbc into babb:
  abbc → bbc → bbb → babb (3 steps)
  abbc → babbc → babb (2 steps)
We are trying to minimize the number of steps.

Idea: look at making just one position right. Find all the ways you could use, count how long each would take, and recursively figure the total cost. This is an orderly way of limiting the exponential number of combinations to think about. For ease in coding, we make the last character right (rather than any other).

Let C(n, m) be the cost of changing the first n characters of A into the first m characters of B. There are four possibilities (pick the cheapest):
1. If we delete a_n, we still need to change A(0..n-1) to B(0..m). The cost is C(n, m) = C(n-1, m) + 1.
2. If we insert a new value at the end of A(n) to match b_m, we still have to change A(n) to B(m-1). The cost is C(n, m) = C(n, m-1) + 1.
3. If we replace a_n with b_m, we still have to change A(n-1) to B(m-1). The cost is C(n, m) = C(n-1, m-1) + 1.
4. If we match a_n with b_m (they are equal), we still have to change A(n-1) to B(m-1). The cost is C(n, m) = C(n-1, m-1).

We have turned one problem into three problems, each just slightly smaller: a bad situation, unless we can reuse results. Dynamic programming: we store the results of C(i, j) for i = 1..n and j = 1..m. If we need to reconstruct how we would achieve the change, we store both the cost and an indication of which set of subproblems was used.

Complexity: O(mn), but it needs O(mn) space as well: M(i, j) indicates which of the four decisions led to the best result. Consider changing "do" to "redo", and changing "mane" to "mean":

Changing “do” to “redo” (assume a match is free; the other operations cost 1; each entry is an operation-cost pair, with I = insert, D = delete, R = replace, M = match):

        *     r     e     d     o
  *    I-0   I-1   I-2   I-3   I-4
  d    D-1   R-1   R-2   M-2   I-3
  o    D-2   R-2   R-2   R-3   M-2

The final cost, in the bottom-right corner, is 2.

Changing “mane” to “mean” (same conventions):

        *     m     e     a     n
  *    I-0   I-1   I-2   I-3   I-4
  m    D-1   M-0   I-1   I-2   I-3
  a    D-2   D-1   R-1   M-1   I-2
  n    D-3   D-2   R-2   R-2   M-1
  e    D-4   D-3   M-2   R-3   D-2

The final cost is again 2.
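A sketch of the cost table C from the recurrence above. This version returns only the cost; storing the winning decision in a parallel table M(i, j) would allow reconstructing the edit sequence:

```python
def edit_distance(a, b):
    """C[i][j] = minimum edit steps (insert, delete, replace; match is
    free) to change a[:i] into b[:j]. O(nm) time and space."""
    n, m = len(a), len(b)
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = i                     # delete all of a[:i]
    for j in range(m + 1):
        C[0][j] = j                     # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:    # match: free
                C[i][j] = C[i - 1][j - 1]
            else:
                C[i][j] = 1 + min(C[i - 1][j],      # delete
                                  C[i][j - 1],      # insert
                                  C[i - 1][j - 1])  # replace
    return C[n][m]
```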

Longest Increasing Subsequence of a Single List
Find the longest increasing subsequence in a sequence of distinct integers.
Idea 1: given a sequence of size less than m, we can find its longest increasing subsequence (recursion). The problem is that we don't know how to increase the length when the next element arrives:
- Case 1: it either can be added to the longest subsequence or not.
- Case 2: it is possible that it can be added to a non-selected subsequence, creating a sequence of equal length but having a smaller ending point.
- Case 3: it can be added to a non-selected subsequence, creating a sequence of smaller length whose successors make it a good choice.
Example: 5 1 10 2 20 30 40 4 5 6 7 8 9 10 11

Idea 2: given a sequence of size less than m, we know how to find all the longest increasing subsequences. Hard: there are many, and we would need them for all lengths.

Idea 3: given a sequence of size less than m, find the longest subsequence with the smallest ending point. We might have to create a smaller subsequence before we create a longer one.

Idea 4: given a sequence of size less than m, find the best increasing sequence (BIS) for every length k < m - 1. For each new item in the sequence, ask: when we add it to the sequence of length 3, will it be better than the current sequence of length 4?

For s = 1 to n (or recursively the other way):
  For k = s downto 1, until the correct spot is found:
    If BIS(k) > A_s and BIS(k-1) < A_s
      BIS(k) ← A_s

Actually, we don't need the sequential search, as we can do a binary search (the BIS values are sorted).
Example: 5 1 10 2 12 8 15 18 45 6 7 3 8 9

  Length  1  2  3  4  5  6
  BIS     1  2  3  7  8  9

To output the sequence itself would be difficult, as you don't know where the sequence is; you would have to reconstruct it.

Try: 8 1 4 2 9 10 3 5 14 11 12 7
Trace the table of Length, End Position, 1st Replacement, 2nd Replacement for yourself.
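A sketch of Idea 4 with binary search replacing the sequential scan (Python's bisect plays the role of the inner loop; the function name is mine):

```python
from bisect import bisect_left

def lis_length(seq):
    """bis[k-1] holds the smallest possible ending value of an increasing
    subsequence of length k seen so far; bis stays sorted, so binary
    search finds the spot in O(log n), for O(n log n) overall.
    Returns (length of the LIS, final BIS array)."""
    bis = []
    for x in seq:
        pos = bisect_left(bis, x)   # first k with BIS(k) >= x
        if pos == len(bis):
            bis.append(x)           # extends the longest sequence so far
        else:
            bis[pos] = x            # smaller ending point for length pos+1
    return len(bis), bis
```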

Probabilistic Algorithms
Suppose we wanted to find a number that is greater than the median (the number for which half are bigger).
- We could sort them, O(n log n), and then select one.
- We could find the biggest, but stop looking halfway through: O(n/2).
We cannot guarantee finding one in the upper half in fewer than n/2 comparisons. What if you just wanted good odds? Pick two numbers and keep the larger one. What is the probability it is in the lower half?

There are four equally likely possibilities:
- both are lower
- the first is lower, the other higher
- the first is higher, the other lower
- both are higher
We only lose if both are in the lower half, so we will be right 75% of the time!

If we select k elements and pick the biggest, the probability of being correct is 1 - 1/2^k: good odds, and controlled odds. This is termed a Monte Carlo algorithm: it may give the wrong result, with very small probability. Another type of probabilistic algorithm never gives a wrong result, but its running time is not guaranteed. This is termed a Las Vegas algorithm, as you are guaranteed success if you try long enough.
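A minimal Monte Carlo sketch of the sampling idea (function name mine; it samples with replacement, which keeps the draws independent, so the 1 - 1/2^k bound still applies):

```python
import random

def probably_above_median(values, k=10):
    """Monte Carlo: sample k elements and return the largest. The result
    fails to be above the median with probability at most 1/2**k."""
    return max(random.choice(values) for _ in range(k))
```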

A Coloring Problem: Las Vegas Style
Let S be a set with n elements (n only affects the complexity, not the algorithm). Let S1, S2, ..., Sk be a collection of distinct subsets of S, each containing exactly r elements, such that k ≤ 2^(r-2) (we use this fact to bound the time).
GOAL: color each element of S with one of two colors (red or blue) such that each subset Si contains at least one red and one blue element.

Idea: try coloring the elements randomly, then just check whether you happen to have won. Checking is fast: you can quit checking a subset as soon as you see one element of each color, and you can quit checking the collection as soon as any single-color subset is found. What is the probability that all items in a given subset are red? 1/2^r, since each color is assigned with equal probability and there are r items in the subset.

What is the probability that any one subset of the collection is all red? At most k/2^r: since we are looking for the OR of a set of events, we add their probabilities. k is bounded by 2^(r-2), so k · 1/2^r ≤ 1/4. The probability of an all-blue or all-red subset is therefore at most one half (double the probability of all red). If our random coloring fails, we simply try again until success; the expected number of attempts is at most 2.
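A Las Vegas sketch of the coloring algorithm (names are mine; the rng parameter is only for reproducibility):

```python
import random

def two_color(elements, subsets, rng=None):
    """Las Vegas: randomly 2-color until every subset gets both colors.
    When k <= 2**(r-2), the expected number of attempts is at most 2,
    so the expected running time is finite even though no single run
    is guaranteed to terminate."""
    rng = rng or random.Random()
    while True:
        color = {e: rng.choice(("red", "blue")) for e in elements}
        # Checking is fast: a subset passes once both colors appear.
        if all({color[e] for e in s} == {"red", "blue"} for s in subsets):
            return color
```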

Finding a Majority
Let E be a sequence of integers x1, x2, x3, ..., xn. The multiplicity of x in E is the number of times x appears in E. A number z is a majority in E if its multiplicity is greater than n/2.
Problem: given a sequence of numbers, find the majority in the sequence, or determine that none exists.
NOTE: we don't want merely to find who has the most votes, but to determine who has more than half of the votes.

For example, suppose there is an election. Candidates are represented as integers, and the votes are a list of candidate numbers. We assume no limit on the number of possible candidates.

Ideas
1. Sort the list: O(n log n).
2. With a balanced tree of candidate names, the complexity would be O(n log c), where c is the number of candidates. Note: if we don't know how many candidates there are, we can't give them indices.
3. See if the median (found with the kth-largest algorithm) occurs more than n/2 times: O(n).
4. Take a small sample, find its majority, then count how many times that value occurs in the whole list. This is a probabilistic approach (right?).
5. Make one pass, discarding elements that won't affect the majority.

Note: if xi  xj and we remove both of them, then the majority in the old list is the majority in the new list. If xi is a majority, there are m xi’s out of n, where m > n/2. If we remove two elements, (m-1 > (n-2)/2). The converse is not true. If there is no majority, removing two may make something a majority in the smaller list: 1,2,4,5,5. Pattern Matching

Thus, our algorithm will find a possible majority (a candidate). Algorithm: find two unequal elements and delete them; find the majority in the smaller list; then see if it is a majority in the original list. How do we remove elements? It is easy: we scan the list in order, looking for a pair to eliminate. Let i be the current position. All the items before xi which have not been eliminated have the same value, so all you really need to keep is the current candidate value C and the number of times, Occurs, it occurs (and has not been deleted).

For example:

  List:       1 4 6 3 4 4 4 2 9 0 2 4 1 4 2 2 3 2 4 2
  Occurs:     X X 1 X 1 2 3 2 1 X 1 X 1 X 1 2 1 2 1 2
  Candidate:      1 6 4 4 4 4 4 ? 2 ? 1 ? 2 2 2 2 2 2 2

2 is a candidate, but is not a majority in the whole list.
Complexity: n - 1 compares to find a candidate, and n - 1 compares to test whether it is a majority. So why do this over the other approaches? It is simple to code: no different in terms of complexity, but interesting to think about.
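The one-pass candidate scan plus the verification pass can be sketched as follows (this pairing idea is the Boyer-Moore majority vote algorithm; the function name is mine):

```python
def find_majority(votes):
    """One pass keeps only a candidate and its surviving count (Occurs);
    a second pass verifies the candidate. Returns the majority or None."""
    candidate, occurs = None, 0
    for x in votes:                     # n - 1 compares: find a candidate
        if occurs == 0:
            candidate, occurs = x, 1
        elif x == candidate:
            occurs += 1
        else:
            occurs -= 1                 # eliminate the unequal pair
    # n - 1 compares: check the candidate against the whole list.
    if candidate is not None and votes.count(candidate) > len(votes) // 2:
        return candidate
    return None
```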