Boyer–Moore string search algorithm
From Wikipedia, the free encyclopedia
The Boyer–Moore string search algorithm is a particularly efficient string searching algorithm. It was developed by Bob Boyer and J Strother Moore in 1977. The algorithm preprocesses the target string (key) that is being searched for, but not the string being searched (unlike some algorithms that preprocess the string to be searched and can then amortize the expense of the preprocessing by searching repeatedly). The execution time of the Boyer–Moore algorithm can be sub-linear: it does not need to check every character of the string to be searched, but rather skips over some of them. Generally the algorithm gets faster as the key being searched for becomes longer. Its efficiency derives from the fact that, with each unsuccessful attempt to find a match between the search string and the text it is searching, it uses the information gained from that attempt to rule out as many positions of the text as possible where the string could not match.
How the algorithm works
- | - | - | - | - | - | - | X | - | - | - | - | - | - | - |
A | N | P | A | N | M | A | N | - | - | - | - | - | - | - |
- | A | N | P | A | N | M | A | N | - | - | - | - | - | - |
- | - | A | N | P | A | N | M | A | N | - | - | - | - | - |
- | - | - | A | N | P | A | N | M | A | N | - | - | - | - |
- | - | - | - | A | N | P | A | N | M | A | N | - | - | - |
- | - | - | - | - | A | N | P | A | N | M | A | N | - | - |
- | - | - | - | - | - | A | N | P | A | N | M | A | N | - |
- | - | - | - | - | - | - | A | N | P | A | N | M | A | N |
What people frequently find surprising about the Boyer–Moore algorithm when they first encounter it is that its verifications – its attempts to check whether a match exists at a particular position – work backwards. If it starts a search at the beginning of a text for the word "ANPANMAN", for instance, it checks the eighth position of the text to see if it contains an "N". If it finds the "N", it moves to the seventh position to see if that contains the last "A" of the word, and so on until it checks the first position of the text for an "A".
Why Boyer-Moore takes this backward approach is clearer when we consider what happens if the verification fails – for instance, if instead of an "N" in the eighth position, we find an "X". The "X" doesn't appear anywhere in "ANPANMAN", and this means there is no match for the search string at the very start of the text – or at the next seven positions following it, since those would all fall across the "X" as well. After checking just one character, we're able to skip ahead and start looking for a match starting at the ninth position of the text, just after the "X".
This explains why the best-case performance of the algorithm, for a text of length N and a fixed pattern of length M, is N/M: in the best case, only one in every M characters needs to be checked. It also explains the somewhat counter-intuitive result that the longer the pattern we are looking for, the faster the algorithm will usually be able to find it.
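To make this concrete, here is a minimal C sketch (illustrative only, and separate from the implementation given later) of the decision just described: if the character found in the text occurs nowhere in the key, every alignment that overlaps it can be ruled out, and the search can jump ahead by the full key length.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *key = "ANPANMAN";
    char found = 'X';   /* the character the text contained instead of 'N' */

    /* No alignment of the key that overlaps this character can succeed,
     * so all of them can be ruled out after a single comparison. */
    if (strchr(key, found) == NULL)
        printf("'%c' does not occur in \"%s\": skip %zu positions\n",
               found, key, strlen(key));
    return 0;
}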
The algorithm precomputes two tables to process the information it obtains in each failed verification: one table calculates how many positions ahead to start the next search based on the identity of the character that caused the match attempt to fail; the other makes a similar calculation based on how many characters were matched successfully before the match attempt failed. (Because these two tables return results indicating how far ahead in the text to "jump", they are sometimes called "jump tables", which should not be confused with the more common meaning of jump tables in computer science.)
The first table is easy to calculate: start at the last character of the search string with a count of 0; whenever the character you are on is not yet in the table, add it along with the current count; then move one character towards the start, increment the count by 1, and repeat until the first character has been processed. All other characters receive a count equal to the length of the search string.
Example: For the string ANPANMAN, the first table would be as shown (for clarity, entries are shown in the order they would be added to the table):
Character | Shift |
---|---|
N | 0 |
A | 1 |
M | 2 |
P | 5 |
all other characters | 8 |
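A short C sketch of this construction (using a hypothetical helper build_char_shift_table, separate from the full implementation given later) might look as follows; for "ANPANMAN" it yields exactly the values in the table above.

#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Build the first table as described above: each character's entry is its
 * distance from the last character of the key, keeping the first (i.e.
 * rightmost) occurrence seen; every other character gets the key length. */
static void build_char_shift_table(const char *key, size_t shift[UCHAR_MAX + 1])
{
    size_t len = strlen(key);

    for (size_t c = 0; c <= UCHAR_MAX; ++c)
        shift[c] = len;                      /* "all other characters" */

    /* Walk from the last character towards the first, counting as we go;
     * a character enters the table only the first time it is seen. */
    for (size_t count = 0; count < len; ++count) {
        unsigned char c = (unsigned char)key[len - 1 - count];
        if (shift[c] == len)
            shift[c] = count;
    }
}

int main(void)
{
    size_t shift[UCHAR_MAX + 1];
    build_char_shift_table("ANPANMAN", shift);

    printf("N=%zu A=%zu M=%zu P=%zu other=%zu\n",
           shift['N'], shift['A'], shift['M'], shift['P'], shift['X']);
    /* Prints: N=0 A=1 M=2 P=5 other=8 */
    return 0;
}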
The second table is slightly more difficult to calculate: for each value of N less than the length of the search string, we must first construct the pattern consisting of the last N characters of the search string, preceded by a mismatch for the character before it; then we line it up with the search string and determine the least number of characters the partial pattern must be shifted left before the two patterns match. For instance, for the search string ANPANMAN, the table would be as follows (the character that must not match is written in lower case):
N | Pattern | Shift |
---|---|---|
0 | n | 1 |
1 | aN | 8 |
2 | mAN | 3 |
3 | nMAN | 6 |
4 | aNMAN | 6 |
5 | pANMAN | 6 |
6 | nPANMAN | 6 |
7 | aNPANMAN | 6 |
It may be easier to see how we derived the figures in the "shift" column if we look at the following table, which actually shows each partial pattern shifted as many places to the left as needed to match against the search string. (The character which must not match is shown here as the lower-case version of that character; thus the pattern "mAN", for instance, can be read as "The string 'AN', preceded by any character except an 'M'".)
      ANPANMAN
      --------
            n   1
    aN          8
        mAN     3
    nMAN        6
   aNMAN        6
  pANMAN        6
 nPANMAN        6
aNPANMAN        6
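A brute-force C sketch of this second-table construction (again a hypothetical helper, in the same simple-but-slow spirit as the skip[] preprocessing in the implementation below) could look like this; for "ANPANMAN" it reproduces the shifts 1, 8, 3, 6, 6, 6, 6 and 6 shown above.

#include <stdio.h>
#include <string.h>

/* shift[n] is the smallest left shift after which the partial pattern
 * (the last n characters of the key, preceded by a character that must
 * NOT match) is again consistent with the key. */
static void build_match_shift_table(const char *key, size_t shift[])
{
    size_t len = strlen(key);

    for (size_t n = 0; n < len; ++n) {
        size_t s;
        for (s = 1; s <= len; ++s) {
            int ok = 1;

            /* The n matched characters must still match after the shift
             * (positions pushed off the left end are ignored). */
            for (size_t i = len - n; i < len; ++i)
                if (i >= s && key[i - s] != key[i]) { ok = 0; break; }

            /* The character in the mismatch slot must now differ
             * (again, unless it has been pushed off the left end). */
            size_t j = len - n - 1;
            if (ok && j >= s && key[j - s] == key[j])
                ok = 0;

            if (ok) break;
        }
        shift[n] = s;
    }
}

int main(void)
{
    const char *key = "ANPANMAN";
    size_t shift[8];             /* one entry per value of N (key length 8) */

    build_match_shift_table(key, shift);
    for (size_t n = 0; n < strlen(key); ++n)
        printf("%zu ", shift[n]);   /* prints: 1 8 3 6 6 6 6 6 */
    printf("\n");
    return 0;
}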
The worst-case performance of the algorithm to find all matches is approximately N*M. This worst case is hit, for example, when both the string to be searched and the target string consist of repetitions of a single character (a text of N letters "A" and a target of M letters "A"): the algorithm finds a match at every one of the N-M+1 possible offsets in the text, each verification takes M comparisons, and after each match the pattern can be shifted ahead by only one position.
When the pattern does not occur in the text, the algorithm needs approximately 3*N comparisons in the worst case, hence the search is O(N) in that case. The proof is due to Richard Cole; see R. Cole, Tight bounds on the complexity of the Boyer-Moore algorithm, Proceedings of the 2nd Annual ACM-SIAM Symposium on Discrete Algorithms (1991), for details. This bound took some years to pin down: in 1977, the year the algorithm was devised, the maximum was shown to be 6*N; in 1980 it was shown to be no more than 4*N; Cole's tight bound of roughly 3*N followed in 1991.
Example implementation
Here is an example implementation of the Boyer-Moore algorithm, written in C.
Note: For simplicity of implementation, the method used here to construct the good-match table (skip[]) is slower than it needs to be, so this code does not provide a fair basis for comparing the algorithm's speed against other algorithms; a faster construction method should be used for such comparisons.
#include <string.h>
#include <limits.h>
#include <sys/types.h> /* for ssize_t (POSIX) */

/* This helper function checks whether the last "portion" bytes
 * of "needle" (which is "nlen" bytes long) exist within the needle
 * at offset "offset" (counted from the end of the string),
 * and whether the character preceding "offset" is not a match.
 * Notice that the range being checked may reach beyond the
 * beginning of the string; such a range is ignored.
 */
static int boyermoore_needlematch(const unsigned char* needle, size_t nlen,
                                  size_t portion, size_t offset)
{
    ssize_t virtual_begin = nlen - offset - portion;
    ssize_t ignore = 0;
    if(virtual_begin < 0) { ignore = -virtual_begin; virtual_begin = 0; }

    if(virtual_begin > 0 && needle[virtual_begin - 1] == needle[nlen - portion - 1])
        return 0;

    return memcmp(needle + nlen - portion + ignore,
                  needle + virtual_begin,
                  portion - ignore) == 0;
}

static size_t max(ssize_t a, ssize_t b) { return a > b ? a : b; }

/* Returns a pointer to the first occurrence of "needle"
 * within "haystack", or NULL if not found. */
const unsigned char* memmem_boyermoore(const unsigned char* haystack, size_t hlen,
                                       const unsigned char* needle, size_t nlen)
{
    size_t a, hpos;

    if(nlen > hlen || nlen == 0 || !haystack || !needle) return NULL;

    size_t skip[nlen];        /* Array of shifts with self-substring match check (C99 VLA) */
    ssize_t occ[UCHAR_MAX+1]; /* Array of last occurrence of each character */

    /* Preprocess #1: init occ[] */

    /* Initialize the table to the default value */
    for(a=0; a<UCHAR_MAX+1; ++a) occ[a] = -1;

    /* Then populate it with the analysis of the needle, */
    /* but ignoring the last letter */
    for(a=0; a<nlen-1; ++a) occ[needle[a]] = a;

    /* Preprocess #2: init skip[] */
    /* Note: This step could be made a lot faster.
     * A simple implementation is shown here. */
    for(a=0; a<nlen; ++a)
    {
        size_t value = 0;
        while(value < nlen && !boyermoore_needlematch(needle, nlen, a, value))
            ++value;
        skip[nlen-a-1] = value;
    }

    /* Search: */
    for(hpos=0; hpos <= hlen-nlen; )
    {
        size_t npos = nlen-1;
        while(needle[npos] == haystack[npos+hpos])
        {
            if(npos == 0) return haystack + hpos;
            --npos;
        }
        hpos += max(skip[npos], npos - occ[haystack[npos+hpos]]);
    }
    return NULL;
}
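A minimal usage sketch follows, assuming it is compiled in the same translation unit as the functions above (the sample text here is illustrative only):

#include <stdio.h>

int main(void)
{
    const char *text = "WHO FINDS A NEEDLE IN THE HAYSTACK ANPANMAN?";
    const char *key  = "ANPANMAN";

    const unsigned char *hit = memmem_boyermoore(
        (const unsigned char*)text, strlen(text),
        (const unsigned char*)key,  strlen(key));

    if (hit)
        printf("found at offset %ld\n", (long)(hit - (const unsigned char*)text));
    else
        printf("not found\n");
    return 0;
}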
Variants
Turbo Boyer-Moore algorithm
The Turbo Boyer-Moore algorithm is a variant of the Boyer-Moore string search algorithm that uses an additional constant amount of space to complete a search within 2n comparisons (as opposed to 3n for Boyer-Moore), where n is the number of characters in the text to be searched.[1]
See also
External links
- Animation of the Boyer-Moore algorithm
- An example of the Boyer-Moore algorithm from the homepage of J Strother Moore, co-inventor of the algorithm
- An explanation of the algorithm (with sample C code)
- Cole et al, Tighter lower bounds on the exact complexity of string matching