Asynchronous Pattern Matching - Metrics Amihood Amir

Motivation

In the “old” days: Pattern and text are given in correct sequential order. It is possible that the content is erroneous. New paradigm: Content is exact, but the order of the pattern symbols may be scrambled. Why? Transmitted asynchronously? The nature of the application?

Example: Swaps. Tehse knids of typing mistakes are very common. So when searching for the pattern "These" we are seeking the symbols of the pattern, but with an order changed by swaps. Surprisingly, pattern matching with swaps is easier than pattern matching with mismatches (ACHLP:01).

Example: Reversals. AAAGGCCCTTTGAGCCC → AAAGAGTTTCCCGGCCC. Given a DNA substring, a piece of it can detach and reverse. This process is still computationally tough to handle. Question: What is the minimum number of reversals necessary to sort a permutation of 1,…,n?

Global Rearrangements? Berman & Hannenhalli (1996) called this Global Rearrangement as opposed to Local Rearrangement (edit distance). Showed it is NP-hard. Our Thesis: This is a special case of errors in the address rather than content.

Example: Transpositions. AAAGGCCCTTTGAGCCC → AATTTGAGGCCCAGCCC. Given a DNA substring, a piece of it can be transposed to another area. Question: What is the minimum number of transpositions necessary to sort a permutation of 1,…,n?

Complexity? Bafna & Pevzner (1998), Christie (1998), Hartman (2001): 1.5-approximation algorithms that run in polynomial time. It is not known whether the exact distance is efficiently computable. This is another special case of errors in the address rather than the content.

Example: Block Interchanges. AAAGGCCCTTTGAGCCC → AAGTTTAGGCCCAGCCC. Given a DNA substring, two non-empty subsequences can be interchanged. Question: What is the minimum number of block interchanges necessary to sort a permutation of 1,…,n? Christie (1996): O(n²).

Summary
Biology: sorting permutations
Reversals (Berman & Hannenhalli, 1996): NP-hard
Transpositions (Bafna & Pevzner, 1998): ?
Block interchanges (Christie, 1996): O(n²)
Pattern Matching:
Swaps (Amir, Lewenstein & Porat, 2002): O(n log m)
Note: A swap is a block interchange simplified in three ways: 1. block size, 2. only once, 3. adjacent.

Edit operations map
Reversal, Transposition, Block interchange: 1. arbitrary block size, 2. not once, 3. non-adjacent, 4. permutation, 5. optimization.
Interchange: 1. block of size 1, 2. not once, 3. non-adjacent, 4. permutation, 5. optimization.
Generalized-swap: 1. block of size 1, 2. once, 3. non-adjacent, 4. repetitions, 5. optimization/decision.
Swap: 1. block of size 1, 2. once, 3. adjacent, 4. repetitions, 5. optimization/decision.

Definitions
S = abacb, F = bbaca: one interchange.
S = abacb, F = bbaac: interchange matches, via S1 = bbaca, S2 = bbaac.
S = abacb, F = bcaba: generalized-swap matches, via S1 = bbaca, S2 = bcaba.
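
To make the definitions concrete, here is a small sketch (not from the slides) that checks the generalized-swap condition directly for two same-length strings. It uses the observation that, since every position may take part in at most one interchange, each mismatched position of type (a,b) must be paired with a position of type (b,a); the function name is illustrative.

```python
from collections import Counter

def generalized_swap_match(S, F):
    """True iff F can be reached from S by interchanges touching each position at most once."""
    if len(S) != len(F):
        return False
    mismatch = Counter((a, b) for a, b in zip(S, F) if a != b)
    # Every mismatched position of type (a, b) needs a partner of type (b, a).
    return all(mismatch[(a, b)] == mismatch[(b, a)] for (a, b) in mismatch)

print(generalized_swap_match("abacb", "bcaba"))   # True, as in the slide
print(generalized_swap_match("abacb", "bbaac"))   # False: the two interchanges in the
                                                  # slide's derivation reuse a position
```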

Generalized Swap Matching
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: all i s.t. P generalized-swap matches T[i..i+m]
Reminder: Convolution. The convolution of the strings t[1..n] and p[1..m] is the string t*p such that (t*p)[i] = Σ_{j=1..m} t[i+j-1]·p[j].
Fact: The convolution of an n-length text and an m-length pattern can be done in O(n log m) time using FFT.
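
As a reminder of the tool itself, here is a minimal sketch of this convolution computed via FFT, assuming numpy is available; the function name and padding details are illustrative, not part of the slides. The stated O(n log m) bound comes from cutting the text into overlapping chunks of length about 2m; the sketch simply runs one FFT of length O(n+m).

```python
import numpy as np

def convolution(t, p):
    """All sums sum_j t[i+j]*p[j] for i = 0..len(t)-len(p), via one FFT."""
    n, m = len(t), len(p)
    size = 1
    while size < n + m:
        size *= 2
    # Correlation = convolution with the reversed pattern.
    ft = np.fft.rfft(np.asarray(t, dtype=float), size)
    fp = np.fft.rfft(np.asarray(p[::-1], dtype=float), size)
    full = np.fft.irfft(ft * fp, size)
    # Entry m-1+i of the full convolution equals sum_j t[i+j]*p[j].
    return np.rint(full[m - 1 : n]).astype(int)

# Example with 0/1 strings: counts aligned 1-1 products at each shift.
print(convolution([1, 0, 1, 1, 0, 1], [1, 0, 1]))   # [2, 1, 1, 2]
```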

In pattern matching, convolutions take O(n log m) time using the FFT.

Problem: O(n log m) holds only in algebraically closed fields, e.g. ℂ. Solution: Reduce the problem to (Boolean/integer/real) multiplication. This reduction costs! Example: Hamming distance. Counting mismatches is equivalent to counting matches. (Example text: A B A B C A B B B A)

Example: Count all “hits” of 1 in pattern and 1 in text

For a, b ∈ Σ define: χ_a(b) = 1 if a = b, 0 o/w. Example: applying χ_A position-wise to the text ABCA gives 1 0 0 1.

For Σ = {a, b, c} do: χ_a(T) * χ_a(P) + χ_b(T) * χ_b(P) + χ_c(T) * χ_c(P). Result: the number of times a in the pattern matches a in the text + the number of times b in the pattern matches b in the text + the number of times c in the pattern matches c in the text.
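
A small sketch of this counting scheme, assuming numpy: one indicator convolution per alphabet symbol, summed over the alphabet. The text string is the one shown two slides back; the pattern "ABB" is an arbitrary illustration.

```python
import numpy as np

def count_matches(text, pattern):
    """Number of matching positions for every alignment of pattern against text."""
    n, m = len(text), len(pattern)
    size = 1
    while size < n + m:
        size *= 2
    total = np.zeros(n - m + 1)
    for a in set(pattern):
        # 0/1 indicators of symbol a in the text and in the (reversed) pattern.
        chi_t = np.fft.rfft([1.0 if c == a else 0.0 for c in text], size)
        chi_p = np.fft.rfft([1.0 if c == a else 0.0 for c in reversed(pattern)], size)
        total += np.fft.irfft(chi_t * chi_p, size)[m - 1 : n]
    return np.rint(total).astype(int)

print(count_matches("ABABCABBBA", "ABB"))   # [2, 1, 2, 0, 1, 3, 2, 1]
```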

Generalized Swap Matching: a Randomized Algorithm
Idea: assign natural numbers to the alphabet symbols, and construct:
T’: replacing the number a by the pair a², -a
P’: replacing the number b by the pair b, b².
The convolution of T’ and P’ gives at every location 2i: Σ_{j=0..m} h(T[i+j], P[j]), where h(a,b) = a²b - ab² = ab(a-b): a degree-3 multivariate polynomial.

Since h(a,a) = 0 and h(a,b) + h(b,a) = ab(a-b) + ab(b-a) = 0, a generalized-swap match ⇒ 0 polynomial.
Example (with A=1, B=2, C=3):
Text: A B C B A A B B C
Pattern: C C A A B A B B B
T’ (pairs a², -a): (1,-1) (4,-2) (9,-3) (4,-2) (1,-1) (1,-1) (4,-2) (4,-2) (9,-3)
P’ (pairs b, b²): (3,9) (3,9) (1,1) (1,1) (2,4) (1,1) (2,4) (2,4) (2,4)
Aligned terms (a²b, -ab²): (3,-9) (12,-18) (9,-3) (4,-2) (2,-4) (1,-1) (8,-8) (8,-8) (18,-12), summing to 0.

Generalized Swap Matching: a Randomized Algorithm…
Problem: It is possible that coincidentally the result will be 0 even if there is no swap match. Example: for text ace and pattern bdf we get the degree-3 multivariate polynomial h(a,b) + h(c,d) + h(e,f) = a²b - ab² + c²d - cd² + e²f - ef², which can happen to evaluate to 0. We have to make sure that the probability of such a possibility is quite small.

What can we say about the 0’s of the polynomial? By the Schwartz-Zippel Lemma, the probability of a 0 is ≤ degree/|domain|. Conclude: Theorem: There exists an O(n log m) algorithm that reports all generalized-swap matches and reports false matches with probability ≤ 1/n.
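
A sketch of the whole randomized algorithm described on the last few slides: encode each text value a as the pair (a², -a) and each pattern value b as the pair (b, b²), convolve, and report the alignments whose sum is 0, repeating with fresh random assignments to shrink the false-positive probability. numpy's FFT is used here; in practice an exact integer convolution avoids floating-point issues. Names, the number of trials, and the domain size are illustrative.

```python
import random
import numpy as np

def generalized_swap_matches(text, pattern, trials=2):
    """Alignments i where pattern may generalized-swap match text[i:i+m].

    One-sided error: a true match is always reported; a reported match may be
    false with small probability, reduced by repeating with new assignments.
    """
    n, m = len(text), len(pattern)
    candidates = set(range(n - m + 1))
    alphabet = set(text) | set(pattern)
    for _ in range(trials):
        # Random values for the symbols (the Schwartz-Zippel step); any domain
        # much larger than the polynomial degree keeps the error probability small.
        val = {a: random.randint(1, max(n, 2) ** 3) for a in alphabet}
        t_enc, p_enc = [], []
        for c in text:
            a = val[c]
            t_enc.extend([a * a, -a])      # pair (a^2, -a)
        for c in pattern:
            b = val[c]
            p_enc.extend([b, b * b])       # pair (b, b^2)
        size = 1
        while size < len(t_enc) + len(p_enc):
            size *= 2
        ft = np.fft.rfft(np.array(t_enc, dtype=float), size)
        fp = np.fft.rfft(np.array(p_enc[::-1], dtype=float), size)
        conv = np.fft.irfft(ft * fp, size)
        # Entry 2m-1+2i of the full convolution is sum_j h(text[i+j], pattern[j]).
        candidates = {i for i in candidates if abs(conv[2 * m - 1 + 2 * i]) < 0.5}
    return sorted(candidates)

print(generalized_swap_matches("ABCBAABBC", "CCAABABBB"))   # [0]: the example match
```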

Generalized Swap Matching: De-randomization? Can we detect 0’s and thus de-randomize the algorithm? Suggestion: Take h_1,…,h_k having no common root. It won’t work: k would have to be too large!

Generalized Swap Matching: De-randomization?…
Theorem: Ω(m/log m) polynomial functions are required to guarantee that a 0 convolution value is a 0 polynomial.
Proof: By a linear reduction from word equality. Given: m-bit words w1, w2 at processors P1, P2. Construct: T = w1,1,2,…,m and P = 1,2,…,m,w2. Now T generalized-swap matches P iff w1 = w2.
P1 computes: w1 * (1,2,…,m); P2 computes: (1,2,…,m) * w2; each convolution value is a log m-bit result.
Communication complexity: word equality requires exchanging Ω(m) bits. We get k · log m = Ω(m), so k must be Ω(m/log m).

Interchange Distance Problem
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: the minimum number of interchanges s.t. T[i..i+m] interchange matches P.
Reminder: permutation cycle. The cycles (1 4 3), a 3-cycle, and (2), a 1-cycle, represent the permutation 4 2 1 3.
Fact: The representation of a permutation as a product of disjoint permutation cycles is unique.

Interchange Distance Problem…
Lemma: Sorting a k-length permutation cycle requires exactly k-1 interchanges.
Proof: By induction on k; base cases (1), (2 1), (3 1 2).
Theorem: The interchange distance of an m-length permutation π is m - c(π), where c(π) is the number of permutation cycles in π.
Result: An O(nm) algorithm to solve the interchange distance problem.
A connection between sorting by interchanges and generalized-swap matching?
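
A sketch of the distance formula for the permutation case (distinct symbols), where counting cycles is all that is needed; pairing repeated symbols is the extra work the O(nm) algorithm has to do and is not shown here. The permutation is given in 0-based one-line notation (position i holds perm[i]); the helper name is illustrative.

```python
def interchange_distance(perm):
    """Minimum number of interchanges to sort a permutation of 0..m-1: m - (#cycles)."""
    m = len(perm)
    seen = [False] * m
    cycles = 0
    for i in range(m):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:          # walk one cycle
                seen[j] = True
                j = perm[j]
    return m - cycles

# The cycles (1 4 3)(2) from the reminder, written 0-based in one-line notation:
print(interchange_distance([3, 1, 0, 2]))   # 2 interchanges (a 3-cycle plus a fixed point)
```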

Interchange Generation Distance Problem
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: the minimum number of interchange-generations s.t. T[i..i+m] interchange matches P.
Definition: Let S = S_1, S_2, …, S_k = F, where S_{l+1} is derived from S_l via interchange I_l. An interchange-generation is a subsequence of I_1,…,I_{k-1} s.t. the interchanges have no index in common.
Note: Interchanges in a generation may occur in parallel.

Interchange Generation Distance Problem…
Lemma: Let π be a cycle of length k > 2. It is possible to sort π in 2 generations and k-1 interchanges.
Example: (1,2,3,4,5,6,7,8,0)
generation 1: (1,8),(2,7),(3,6),(4,5) → (8,7,6,5,4,3,2,1,0)
generation 2: (0,8),(1,7),(2,6),(3,5) → (0,1,2,3,4,5,6,7,8)

Sorting a General Cycle in Two Rounds
Algorithm: Exchange the contents of locations 2 and n, 3 and n-1, 4 and n-2, and so on. Then bring every element to its place simultaneously.
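
A sketch of the two-round scheme on the cyclic-shift cycle used in the previous slide's example; the interchanges are written as value pairs, exactly as in that example, and a general k-cycle reduces to this case by relabeling positions along the cycle. The function name and the final assertion are illustrative.

```python
def sort_cycle_two_generations(k):
    """Sort the k-cycle (1, 2, ..., k-1, 0) in 2 generations and k-1 interchanges."""
    a = list(range(1, k)) + [0]
    pos = {v: i for i, v in enumerate(a)}

    def swap_values(u, v):
        i, j = pos[u], pos[v]
        a[i], a[j] = a[j], a[i]
        pos[u], pos[v] = j, i

    # Generation 1 reverses the array; generation 2 sends every value home.
    gen1 = [(i, k - i) for i in range(1, (k + 1) // 2)]
    gen2 = [(i, k - 1 - i) for i in range(k // 2)]
    for u, v in gen1:
        swap_values(u, v)
    for u, v in gen2:
        swap_values(u, v)
    assert a == sorted(a) and len(gen1) + len(gen2) == k - 1
    return gen1, gen2

print(sort_cycle_two_generations(9))
# ([(1, 8), (2, 7), (3, 6), (4, 5)], [(0, 8), (1, 7), (2, 6), (3, 5)])
```

Within each generation no value is touched twice, so the interchanges of a generation could be performed in parallel, matching the lemma.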

EXAMPLE (table with columns Index, Sorted, Permutation).

Why Does it Work? We want to send 0 to its place, so the last location should hold the number that belongs where 0 is now. This number is in location 2. The same reasoning holds for 1, 2, …

Interchange Generation Distance Problem…
Theorem: Let maxl(π) be the length of the longest permutation cycle in an m-length permutation π. The interchange generation distance of π is exactly:
1. 0, if maxl(π) = 1.
2. 1, if maxl(π) = 2.
3. 2, if maxl(π) > 2.
Note: There is a generalized-swap match iff sorting by interchanges can be done in 1 generation.
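
A sketch of the theorem as a function of the longest cycle length; the scan mirrors the cycle count used for the interchange distance above, and the function name is illustrative.

```python
def interchange_generation_distance(perm):
    """0 if perm is the identity, 1 if its longest cycle has length 2, else 2."""
    m, seen, maxlen = len(perm), [False] * len(perm), 1
    for i in range(m):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            maxlen = max(maxlen, length)
    if maxlen == 1:
        return 0
    return 1 if maxlen == 2 else 2

print(interchange_generation_distance([3, 1, 0, 2]))   # longest cycle has length 3 -> 2
```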

L1-Distance Problem
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: the L1-distance between T[i..i+m] and P for all i ∈ [0, n-m], where L1-distance(T[i..i+m], P) = Σ_j |j - M_{T^(i)}(j)| for a pairing M_{T^(i)} of the symbols of P with those of T[i..i+m].
Do we need to try all pairings of same letters? How do we pair the symbols?
Example: Text: ABCBAABBC Pattern: CCAABABBB

L1-Distance Problem
Definition: Let T1[0..n] be a string over Σ and T2[0..n] a permutation of T1. A pairing between T1 and T2 is a bijection M: {0,…,n} → {0,…,n} where T1[i] = T2[M(i)], for all i = 0,…,n.
The L1-distance between T1 and T2 under M is d^M_L1(T1,T2) = Σ_{i=0..n} |i - M(i)|.
The L1-distance between T1 and T2 is d_L1(T1,T2) = min over pairings M of d^M_L1(T1,T2).
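
A tiny sketch of the definition, with a pairing given as a list M satisfying T1[i] == T2[M[i]]; names are illustrative.

```python
def l1_under_pairing(T1, T2, M):
    """d^M_L1(T1, T2) = sum of |i - M(i)| over a pairing M with T1[i] == T2[M(i)]."""
    assert all(T1[i] == T2[M[i]] for i in range(len(T1)))
    return sum(abs(i - M[i]) for i in range(len(T1)))

# A valid pairing of T1 = "ab" with T2 = "ba" must cross: cost |0-1| + |1-0| = 2.
print(l1_under_pairing("ab", "ba", [1, 0]))   # 2
```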

L1-Distance Problem
EXAMPLE (pairing M1):
T1 = A B B A B C A C B
T2 = C A C B B B B A A
d^M1_L1(T1,T2) = 8+5+2+4+1+3+5+7+5 = 40

L1-Distance Problem
EXAMPLE (pairing M2):
T1 = A B B A B C A C B
T2 = C A C B B B B A A
d^M2_L1(T1,T2) = 1+2+2+4+1+5+2+5+2 = 24
Note that d^M1_L1(T1,T2) = 40. How do we choose the pairing function?

L1-Distance Problem…
Fortunately, we know how an optimal pairing looks...
Lemma: Let T1, T2 ∈ Σ^m be two strings s.t. T2 is a permutation of T1. Let M be the pairing function that, for all a and k, moves the k-th a in T1 to the location of the k-th a in T2. Then d_L1(T1,T2) = d^M_L1(T1,T2).
Result: An O(nm) algorithm to solve the L1-distance problem.
Example of an optimal pairing: Text: ABCBAABBC Pattern: CCAABABBB
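
A sketch of the O(nm) approach implied by the lemma: pair k-th occurrences of each symbol, sum the displacements, and repeat per window (only windows that are rearrangements of the pattern have a defined distance). Function names are illustrative; the example strings are the ones from the M1/M2 slides above.

```python
from collections import defaultdict

def l1_distance(T1, T2):
    """L1 rearrangement distance, assuming T2 is a rearrangement of T1.

    By the pairing lemma, pairing the k-th occurrence of each symbol in T1
    with the k-th occurrence of that symbol in T2 is optimal.
    """
    occ = defaultdict(list)
    for j, c in enumerate(T2):
        occ[c].append(j)
    used = defaultdict(int)
    total = 0
    for i, c in enumerate(T1):
        total += abs(i - occ[c][used[c]])
        used[c] += 1
    return total

def l1_all_windows(text, pattern):
    """Distance of the pattern against every window that is a rearrangement of it."""
    m, ref = len(pattern), sorted(pattern)
    return {i: l1_distance(text[i:i + m], pattern)
            for i in range(len(text) - m + 1)
            if sorted(text[i:i + m]) == ref}

print(l1_distance("ABBABCACB", "CACBBBBAA"))   # 24, the value reached by pairing M2
```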

L1-Distance Problem…
Proof of pairing lemma: Note that in M, for a fixed alphabet symbol a, there are no “crossovers”. We will show that crossovers do not help. Cases (M vs. the crossover pairing M’, with gaps x, y, z in the diagram):
1) Cost in M = x+y+y+z; cost in M’ = x+y+z+y.

L1-Distance Problem…
Proof of pairing lemma (continued…):
2) Cost in M = x+z; cost in M’ = x+y+y+z.

L1-Distance Problem…
Proof of pairing lemma (continued…):
3) Cost in M = x+z; cost in M’ = x+y+z+y.

L1-Distance Problem…
For a pattern with distinct entries we can do better...
Idea: Use the computation at position i to obtain position i+1.
Example: Pattern: 1 2 3; text window at position 1: 2 3 1.
Dist(1) = |1-3| + |2-1| + |3-2| = 4
Direction relative to pattern: Left: 2, 3; Match: none; Right: 1

L1-Distance Problem…
Dist(2) = ? Now we move to the next location…
Pattern: 1 2 3; text window at position 2: 3 1 2.
Direction relative to pattern: Left (+1 to Dist): 2, 3; Match: none; Right (-1 to Dist): 1, 2

L1-Distance Problem…
Next location distance computation: Pattern: 1 2 3; text window at position 2: 3 1 2.
Dist(2) = Dist(1) + |Left| - |Right| - leftmost + rightmost = 4 + 2 - 1 - 2 + |3-2| = 4 (= |1-2| + |2-3| + |3-1|)
Result: An O(n) algorithm to solve the L1-distance problem for a pattern with distinct entries.

L2-Distance Problem
Definition: Let T1[0..n] be a string over Σ and T2[0..n] a permutation of T1. A pairing between T1 and T2 is a bijection M: {0,…,n} → {0,…,n} where T1[i] = T2[M(i)], for all i = 0,…,n.
The L2-distance between T1 and T2 under M is d^M_L2(T1,T2) = Σ_{i=0..n} (i - M(i))².
The L2-distance between T1 and T2 is d_L2(T1,T2) = min over pairings M of d^M_L2(T1,T2).

L2-Distance Problem
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: the L2-distance between T[i..i+m] and P for all i ∈ [0, n-m], where L2-distance(T^(i), P) = Σ_j |j - M_{T^(i)}(j)|².
Do we need to try all pairings of same letters? No, the pairing lemma works here too.
Result: An O(nm) algorithm to solve the L2-distance problem. We can do better…

L2-Distance Problem…
Idea: Consider T and P of the same length. Let list_a(P) and list_a(T), for each letter a, be the lists of locations in which a occurs in P and in T. The L2-distance between T and P is:
d_L2(T,P) = Σ_{a∈P} Σ_{j∈[0,…,|list(a)|]} (list_a(P)[j] - list_a(T)[j])²
Schematically, fix letter a: list(T) = i_1, i_2, …, i_k; list(P) = j_1, j_2, …, j_k.

L2-Distance Problem…
Expanding each square, (list_a(P)[j] - list_a(T)[j])² = list_a(P)[j]² + list_a(T)[j]² - 2·list_a(P)[j]·list_a(T)[j]: the two squared sums are easy to compute, and the cross term is the convolution term.
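
A sketch for the same-length case, using the occurrence-list formula directly together with the pairing lemma; the comment notes the expansion whose cross term becomes a convolution in the sliding-window version. Names and the example output are illustrative.

```python
from collections import defaultdict

def l2_distance(T, P):
    """L2 rearrangement distance for same-length T, P (P a rearrangement of T).

    d_L2(T, P) = sum over symbols a and ranks k of (list_a(P)[k] - list_a(T)[k])^2.
    Expanding each square gives sum of squares of pattern indices + sum of squares
    of text indices - 2 * sum of products; the product sum is the convolution term.
    """
    occ_t, occ_p = defaultdict(list), defaultdict(list)
    for i, c in enumerate(T):
        occ_t[c].append(i)
    for j, c in enumerate(P):
        occ_p[c].append(j)
    return sum((jp - jt) ** 2
               for a in occ_p
               for jp, jt in zip(occ_p[a], occ_t[a]))

print(l2_distance("ABBABCACB", "CACBBBBAA"))   # 84 for the running example strings
```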

L2-Distance Problem… large text
Idea: Consider list_a(P) and list_a(T^(i)) for each letter a: the lists of locations in which a occurs in P and in T^(i). The L2-distance between T^(i) and P is:
d_L2(T^(i),P) = Σ_{a∈P} Σ_{j∈[0,…,|list(a)|]} (list_a(P)[j] - list_a(T^(i))[j])²
Now we want to use list_a(T) instead of list_a(T^(i)), but then the text location indices are not fixed…

L2-Distance Problem… large text
Schematically, fix letter a: list(T) = i_1, i_2, …, i_k, …, i_r; list(P) = j_1, j_2, …, j_k.
For a window starting at text location x, the text indices need to be shifted: i_2 - x, i_3 - x, …
Convolution formula: expanding the shifted squares, the cross terms for all text locations x are obtained from a single convolution of list_a(P) with list_a(T); the remaining squared terms are easy to compute.

L2-Distance Problem… large text
Expanding (list_a(P)[j] - (list_a(T)[j] - x))² gives three kinds of terms: the squared pattern indices (easy to compute), the cross term (a convolution), and the squared shifted text indices (easy to compute).

L2-Distance Problem… large alphabet
Problem: What happens with an unbounded number of alphabet symbols? How many convolutions?
Let Σ = {a_1,…,a_k} and let n_i be the number of times a_i occurs in T, i = 1,…,k. Clearly, n_1 + n_2 + … + n_k = n.
Time: for each symbol a_i, one convolution over its occurrence lists, O(n_i log m).
Total time: Σ_i n_i log m = O(n log m).
Result: An O(n log m) algorithm to solve the L2-distance problem.

Open Problems
1. Interchange distance faster than O(nm)?
2. Asynchronous communication – different errors in address bits.
3. Different error measures than interchange/block interchange/transposition/reversals for errors arising from address bit errors.
Note: The techniques employed in asynchronous pattern matching have so far proven new and different from traditional pattern matching.

The End