
Overlap Detection      2012-03-05 05:05:36

How does one detect when two strings overlap? In the case below, the four-letter suffix of string 1 matches the four-letter prefix of string 2.

1: "Fire at Will"
2:         "William Riker is number one"

Sometimes there are several matches; finding the longest is not always straightforward.

1: "Have some CoCo and CoCo"
2:                      "CoCo and CoCo is here."
2:                    "CoCo and CoCo is here."
2:           "CoCo and CoCo is here."

The naïve solution is to take ever-smaller substrings of each string, compare them, then bail out when the first match is found. This is quick to program, very reliable, and very inefficient for long strings.

  def commonOverlapNaive(text1, text2):
    x = min(len(text1), len(text2))
    while x > 0:
      if text1[-x:] == text2[:x]:
        break
      x -= 1
    return x
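Run on the two examples above (the function is repeated here so the snippet stands on its own), it finds the four-letter and thirteen-letter overlaps:

```python
def commonOverlapNaive(text1, text2):
  x = min(len(text1), len(text2))
  while x > 0:
    if text1[-x:] == text2[:x]:
      break
    x -= 1
  return x

print(commonOverlapNaive("Fire at Will", "William Riker is number one"))
# -> 4 ("Will")
print(commonOverlapNaive("Have some CoCo and CoCo", "CoCo and CoCo is here."))
# -> 13 ("CoCo and CoCo")
```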

A more efficient solution is to use a modified version of the Knuth-Morris-Pratt algorithm to scan the strings while keeping track of partial matches. This is very tricky to program, with many opportunities for off-by-one errors.

  def commonOverlapKmp(text1, text2):
    # Cache the text lengths to prevent multiple calls.
    text1_length = len(text1)
    text2_length = len(text2)
    # Eliminate the null case.
    if text1_length == 0 or text2_length == 0:
      return 0
    # Truncate the longer string.
    if text1_length > text2_length:
      text1 = text1[-text2_length:]
    elif text1_length < text2_length:
      text2 = text2[:text1_length]
    text_length = min(text1_length, text2_length)
    # Quick check for the worst case.
    if text1 == text2:
      return text_length
    # Build the partial match table from text2.
    table = [0] * text_length
    table[0] = -1
    # table[1] is already 0 from the initialisation above.
    pos = 2
    cnd = 0
    while pos < text_length:
      if text2[pos - 1] == text2[cnd]:
        cnd += 1
        table[pos] = cnd
        pos += 1
      elif cnd > 0:
        cnd = table[cnd]
      else:
        table[pos] = 0
        pos += 1
    # Search text1 for the longest suffix that is a prefix of text2.
    m = 0
    i = 0
    while m + i < text_length:
      if text2[i] == text1[m + i]:
        i += 1
        if m + i == text_length:
          return i
      else:
        m += i - table[i]
        if table[i] > -1:
          i = table[i]
        else:
          i = 0
    return 0

A somewhat simpler approach is to leverage the highly efficient indexOf function built into most languages (called find in Python). Start by assuming a single-letter overlap and search for that letter in the second string. If it is found, check the two substrings for equality. Then use indexOf to locate the confirmed substring plus one more character. Keep repeating until no match is found, then return the length of the last confirmed match.

  def commonOverlapIndexOf(text1, text2):
    # Cache the text lengths to prevent multiple calls.
    text1_length = len(text1)
    text2_length = len(text2)
    # Eliminate the null case.
    if text1_length == 0 or text2_length == 0:
      return 0
    # Truncate the longer string.
    if text1_length > text2_length:
      text1 = text1[-text2_length:]
    elif text1_length < text2_length:
      text2 = text2[:text1_length]
    # Quick check for the worst case.
    if text1 == text2:
      return min(text1_length, text2_length)
    # Start by looking for a single character match
    # and increase length until no match is found.
    best = 0
    length = 1
    while True:
      pattern = text1[-length:]
      found = text2.find(pattern)
      if found == -1:
        return best
      length += found
      if text1[-length:] == text2[:length]:
        best = length
        length += 1

Now that we have three functions that all do the same thing, which one is faster? Well, that depends. Let's feed random strings to each function, like this:
  cAx&"[|J{aL[xJu081e:(grxnV`kOOe#&`y#AxfA/;o2~WVE1qMUVqk~ ^]>...
The resulting logarithmic timing plots are fairly clear. The naïve algorithm scales worse than O(n) (it is quadratic), though it beats KMP until string lengths reach 10,000. KMP is solidly O(n), as advertised. The IndexOf algorithm is also O(n) -- but one hundred times faster than KMP.

[Graph of timings on overlap detection in random strings]
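A minimal sketch of such a benchmark, timing only the naïve version on random printable text. The sizes, seed and repeat count here are arbitrary choices for illustration, not the article's setup:

```python
import random
import string
import timeit

def overlap_naive(t1, t2):
    # Condensed version of commonOverlapNaive above.
    x = min(len(t1), len(t2))
    while x and t1[-x:] != t2[:x]:
        x -= 1
    return x

def random_text(n, rng):
    return "".join(rng.choice(string.printable) for _ in range(n))

rng = random.Random(42)
timings = []
for n in (100, 300, 1000):
    t1, t2 = random_text(n, rng), random_text(n, rng)
    seconds = timeit.timeit(lambda: overlap_naive(t1, t2), number=5)
    timings.append((n, seconds))
    print(f"n={n}: {seconds:.4f}s")
```

On random input the overlap is almost always tiny, so the naïve loop runs all the way down from n to 0, which is where its quadratic behaviour comes from.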

However, the IndexOf algorithm relies on there not being too many coincidental matches. Let's see what happens when we feed it a pathological input string.
The resulting logarithmic timing plots show a sudden change in behaviour. The naïve and KMP algorithms are unchanged, while IndexOf becomes the worst of the bunch.

[Graph of timings on overlap detection in pathological strings]
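The article does not reproduce its pathological string, but one input with the same effect is a long run of a single repeated character: every candidate prefix matches, so the search creeps forward one character at a time. A sketch, with the IndexOf function condensed from above:

```python
def overlap_indexof(t1, t2):
    # Condensed version of commonOverlapIndexOf above.
    if not t1 or not t2:
        return 0
    n = min(len(t1), len(t2))
    t1, t2 = t1[-n:], t2[:n]
    if t1 == t2:
        return n
    best, length = 0, 1
    while True:
        found = t2.find(t1[-length:])
        if found == -1:
            return best
        length += found
        if t1[-length:] == t2[:length]:
            best, length = length, length + 1

# A run of one character: every probe "succeeds", so the loop performs
# roughly n find() calls plus n equality checks over strings of length n.
n = 2000
print(overlap_indexof("a" * n, "a" * (n - 1) + "b"))  # expected: 1999
```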

Now we have a problem. The best algorithm can become the worst algorithm, depending on the inputs. KMP offers a consistently scalable solution regardless of input, but potentially at a cost of 100x in performance. Let's take a closer look at the inputs. Can we recreate the pathological behaviour with real-world data? Let's start by creating fractal strings with lots of repetitions.
This is similar to patterns found in source code. The resulting logarithmic timing plots show that performance returns to near-optimum levels.

[Graph of timings on overlap detection in fractal strings]
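The article does not show its generator. One simple scheme that yields this kind of source-code-like repetition keeps doubling a seed around a separator; the seed and separator below are arbitrary choices:

```python
def fractal_text(depth, seed="ab"):
    # Each step doubles the string: ab -> ab|ab -> ab|ab|ab|ab -> ...
    # producing many repeated substrings at every scale.
    s = seed
    for _ in range(depth):
        s = s + "|" + s
    return s

s = fractal_text(4)
print(len(s), s.count("ab"))  # 47 characters containing 16 copies of the seed
```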

A major user of string-manipulation utilities these days is the genetics world. DNA has an alphabet of four letters: A, C, G and T. This will certainly result in large numbers of coincidental matches. Let's download the human genome and feed that in.
The resulting logarithmic timing plots once again show this as near-optimum. I fed it a fifth of a human and got an answer back in fifteen seconds.

[Graph of timings on overlap detection in DNA]
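Lacking the genome download, a random A/C/G/T string makes a reasonable stand-in for a small experiment: cut two overlapping "reads" from it with a known 200-base overlap and check that the detector recovers it.

```python
import random

def overlap_naive(t1, t2):
    # Condensed version of commonOverlapNaive above.
    x = min(len(t1), len(t2))
    while x and t1[-x:] != t2[:x]:
        x -= 1
    return x

rng = random.Random(7)
genome = "".join(rng.choice("ACGT") for _ in range(2000))
read1 = genome[:1200]   # first read
read2 = genome[1000:]   # second read; shares bases 1000-1199 with read1
print(overlap_naive(read1, read2))  # expected: 200
```

A coincidental overlap longer than the constructed 200 bases would require a long periodic run in the random string, which is vanishingly unlikely.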

Computer Science (always suspect any discipline which feels the need to add the word 'science' to its name) is often an exercise in compromises. Does one choose the algorithm that works best most of the time (IndexOf)? Does one choose the algorithm that will never fail badly (KMP)? Does one choose the algorithm that's easiest to program and least likely to contain bugs (Naïve)? Does one spawn off two threads each running different algorithms to execute on separate processors with the winner terminating the loser?
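The race idea can be sketched in a few lines. This is an illustration, not the article's code: overlap_naive and overlap_indexof are condensed stand-ins for the functions above, and since CPython threads share one interpreter lock, a real race for speed would use separate processes.

```python
import queue
import threading

def overlap_naive(t1, t2):
    x = min(len(t1), len(t2))
    while x and t1[-x:] != t2[:x]:
        x -= 1
    return x

def overlap_indexof(t1, t2):
    if not t1 or not t2:
        return 0
    n = min(len(t1), len(t2))
    t1, t2 = t1[-n:], t2[:n]
    if t1 == t2:
        return n
    best, length = 0, 1
    while True:
        found = t2.find(t1[-length:])
        if found == -1:
            return best
        length += found
        if t1[-length:] == t2[:length]:
            best, length = length, length + 1

def race(funcs, *args):
    # Start one daemon thread per implementation and take whichever result
    # arrives first; the losers are simply abandoned rather than terminated,
    # since Python threads cannot be killed from outside.
    results = queue.Queue()
    for func in funcs:
        threading.Thread(target=lambda f=func: results.put(f(*args)),
                         daemon=True).start()
    return results.get()

print(race([overlap_naive, overlap_indexof],
           "Fire at Will", "William Riker is number one"))  # expected: 4
```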

Special thanks to Tancred Lindholm for continually asking awkward questions every time I thought I was finished.





By sonic0002