Bit array

A bit array (or bitmap, in some cases) is an array data structure which compactly stores individual bits (boolean values). It implements a simple set data structure storing a subset of {1,2,...,n} and is effective at exploiting bit-level parallelism in hardware to perform operations quickly. A typical bit array stores kw bits, where w is the number of bits in the unit of storage, such as a byte or word, and k is some integer. If w does not evenly divide the number of bits to be stored, some space is wasted due to internal fragmentation.

Basic operations

Although most machines are not able to address individual bits in memory, nor have instructions to manipulate single bits, each bit in a word can be singled out and manipulated using bitwise operations. In particular:

  • OR can be used to set a bit to one: 11101010 OR 00000100 = 11101110
  • AND and NOT can be used to set a bit to zero: 11101010 AND (NOT 00000010) = 11101000
  • AND together with zero-testing can be used to determine if a bit is set:
11101010 AND 00010000 = 00000000 = 0
11101010 AND 00000010 = 00000010 ≠ 0
  • XOR can be used to invert or toggle a bit:
11101010 XOR 00000100 = 11101110
11101110 XOR 00000100 = 11101010

To obtain the bit mask needed for these operations, we can use a bit shift operator to shift the number 1 to the left by the appropriate number of places.
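For example, a minimal sketch in C of these single-bit operations on a bit array of 32-bit words (the helper names and the choice of word size are assumptions of this example, not a standard API):

#include <stddef.h>
#include <stdint.h>

#define WORD_BITS 32

/* Set, clear, toggle, and test bit k of a bit array stored LSB-first in 32-bit words. */
static inline void bit_set(uint32_t *a, size_t k)        { a[k / WORD_BITS] |=  (uint32_t)1 << (k % WORD_BITS); }
static inline void bit_clear(uint32_t *a, size_t k)      { a[k / WORD_BITS] &= ~((uint32_t)1 << (k % WORD_BITS)); }
static inline void bit_toggle(uint32_t *a, size_t k)     { a[k / WORD_BITS] ^=  (uint32_t)1 << (k % WORD_BITS); }
static inline int  bit_test(const uint32_t *a, size_t k) { return (a[k / WORD_BITS] >> (k % WORD_BITS)) & 1; }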

We can view a bit array as a subset of {1,2,...,n}, where a 1 bit indicates a number in the set and a 0 bit a number not in the set. This set data structure uses about n/w words of space, where w is the number of bits in each machine word. Whether the least significant bit or the most significant bit indicates the smallest-index number is largely irrelevant, but the former tends to be preferred.

Given two bit arrays of the same size representing sets, we can compute their union, intersection, and set-theoretic difference using n/w simple bit operations each (2n/w for difference), as well as the complement of either:

 for i from 0 to n/w-1
     complement_a[i] := not a[i]
     union[i]        := a[i] or b[i]
     intersection[i] := a[i] and b[i]
     difference[i]   := a[i] and (not b[i])
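The same word-at-a-time loop in C might look like the following sketch (the 32-bit word type and the nwords length parameter are choices of this example; the output array is called out_union because union is a C keyword):

#include <stddef.h>
#include <stdint.h>

/* a and b are bit arrays of nwords 32-bit words each. */
void set_operations(const uint32_t *a, const uint32_t *b,
                    uint32_t *complement_a, uint32_t *out_union,
                    uint32_t *intersection, uint32_t *difference,
                    size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        complement_a[i] = ~a[i];           /* complement of a        */
        out_union[i]    = a[i] | b[i];     /* union                  */
        intersection[i] = a[i] & b[i];     /* intersection           */
        difference[i]   = a[i] & ~b[i];    /* a minus b (difference) */
    }
}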

If we wish to iterate through the bits of a bit array, we can do this efficiently using a doubly nested loop that processes one word at a time. Only n/w memory accesses are required:

 for i from 0 to n/w-1
     index := 0    // if needed
     word := a[i]
     for b from 0 to w-1
         value := (word and 1) ≠ 0
         word := word shift right 1
         // do something with value
         index := index + 1   // if needed
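For instance, a C version of this loop whose per-bit work is simply counting the 1 bits might look like the following sketch (the function name and the 32-bit word size are choices of this example):

#include <stddef.h>
#include <stdint.h>

size_t count_with_loop(const uint32_t *a, size_t nwords)
{
    size_t ones = 0;
    for (size_t i = 0; i < nwords; i++) {
        uint32_t word = a[i];
        for (int b = 0; b < 32; b++) {
            int value = (word & 1) != 0;   /* bit at overall index i*32 + b */
            word >>= 1;
            if (value)                     /* "do something with value"     */
                ones++;
        }
    }
    return ones;
}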

Both of these code samples exhibit ideal locality of reference, and so get a large performance boost from a data cache. If a cache line is k words, only about n/(wk) cache misses will occur.

More complex operations

If we wish to find the number of 1 bits in a bit array, sometimes called the population count, there are efficient branch-free algorithms that can compute the number of bits set in a word using a series of simple bit operations. We simply run such an algorithm on each word and keep a running total. Counting zeros is similar.

As an example of such an algorithm, taken from Hank Warren's Hacker's Delight, note that the bitwise AND operator can be used to extract either the odd-position or the even-position bits of a word. If we extract both, shift the odd-position bits right by one, and add the two results, each 2-bit field of the sum holds the number of 1 bits in the corresponding pair of bits of the original word. We can repeat this trick to sum adjacent pairs of 2-bit fields, then 4-bit fields, and so on until only one field remains. Here is the algorithm in C for a 16-bit number, together with a worked example:

int population16(short x) {
    x = ((x & 0xAAAA) >> 1) + (x & 0x5555);  /* sum adjacent bits into 2-bit fields  */
    x = ((x & 0xCCCC) >> 2) + (x & 0x3333);  /* sum 2-bit fields into 4-bit fields   */
    x = ((x & 0xF0F0) >> 4) + (x & 0x0F0F);  /* sum 4-bit fields into 8-bit fields   */
    x = ((x & 0xFF00) >> 8) + (x & 0x00FF);  /* sum the two 8-bit fields             */
    return x;
}

The 16-bit population-count algorithm. Some of the later bit masks could be removed and replaced with a single mask at the end, since overflow becomes impossible after the second step.

Example, starting from the 16-bit word 1100111111011011:

 1 1 0 0 1 1 1 1 1 1 0 1 1 0 1 1    (the original bits)
 10 00 10 10 10 01 01 10            (sums of adjacent bits)
 0010 0100 0011 0011                (sums of adjacent 2-bit fields)
 00000110 00000110                  (sums of adjacent 4-bit fields)
 0000000000001100                   (the final count, 12)
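To count the 1 bits of an entire bit array, the word-level routine is simply applied to each word with a running total, roughly as in this sketch (population16 is the routine above; many compilers also provide a builtin such as GCC/Clang's __builtin_popcount that could replace it):

#include <stddef.h>

int population16(short x);               /* the word-level routine above */

/* Count all 1 bits in an array of nwords 16-bit words. */
size_t population(const short *a, size_t nwords)
{
    size_t total = 0;
    for (size_t i = 0; i < nwords; i++)
        total += (size_t)population16(a[i]);
    return total;
}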

Similarly, sorting a bit array is trivial to do in O(n) time using counting sort: we count the number of ones k, fill the last ⌊k/w⌋ words entirely with ones, set the appropriate k mod w bits of the word just before them (the high-order bits, under the least-significant-bit-first convention described above), and clear everything else.
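A sketch of this word-at-a-time rebuild in C, assuming for simplicity that n is a multiple of the 32-bit word size and using the GCC/Clang builtin __builtin_popcount for the count:

#include <stddef.h>
#include <stdint.h>

/* Sort an n-bit array in place so that all 0 bits precede all 1 bits
   (bits stored LSB-first in 32-bit words; n assumed to be a multiple of 32). */
void bit_sort(uint32_t *a, size_t n)
{
    size_t nwords = n / 32;
    size_t k = 0;
    for (size_t i = 0; i < nwords; i++)
        k += (size_t)__builtin_popcount(a[i]);   /* count the 1 bits */

    size_t zeros = n - k;                        /* the first n-k bits become 0 */
    for (size_t i = 0; i < nwords; i++) {
        size_t lo = i * 32;                      /* index of this word's first bit */
        if (lo + 32 <= zeros)
            a[i] = 0;                            /* entirely in the zero region */
        else if (lo >= zeros)
            a[i] = 0xFFFFFFFFu;                  /* entirely in the one region  */
        else
            a[i] = 0xFFFFFFFFu << (zeros - lo);  /* boundary word: top bits set */
    }
}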

Bit arrays are useful in some contexts as priority queues. The goal in such a context is to identify the 1 bit of smallest index. Some machines have a find first one or find first zero operation that does this on a single word. With this, the operation is straightforward: find the first nonzero word and run find first one on it, or find first zero on its complement. When this instruction is not available, there are other sequences of bit operations that can accomplish the same thing, if not quite as quickly.
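A sketch of this scan in C, using the GCC/Clang builtin __builtin_ctz (count trailing zeros) as the "find first one" primitive:

#include <stddef.h>
#include <stdint.h>

/* Return the index of the lowest 1 bit, or (size_t)-1 if the array is all zero. */
size_t find_first_one(const uint32_t *a, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        if (a[i] != 0)
            return i * 32 + __builtin_ctz(a[i]);  /* position of lowest set bit in this word */
    return (size_t)-1;
}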

Advantages and disadvantages

Bit arrays, despite their simplicity, have a number of marked advantages over other data structures for the same problems:

  • They are extremely compact; few other data structures can store n independent pieces of data in n/w words.
  • They allow small arrays of bits to be stored and manipulated in the register set for long periods of time with no memory accesses.
  • Because of their ability to exploit bit-level parallelism, limit memory access, and maximally utilize the data cache, they often outperform many other data structures on practical data sets, even those which are more efficient asymptotically.

However, bit arrays aren't the solution to everything. In particular:

  • They are wasteful set data structures for sparse sets (those with few elements compared to their range) in both time and space. For such applications, Judy arrays, tries, or even Bloom filters should be considered instead.
  • Accessing individual elements can be expensive and difficult to express in some languages. If random access is more common than sequential and the array is relatively small, a byte array may be preferable on a machine with byte addressing. A word array, however, is probably not justified due to the huge space overhead and additional cache misses it causes, unless the machine only has word addressing.

Applications

Because of their compactness, bit arrays have a number of applications in areas where space or efficiency is at a premium. Most commonly, they are used to represent a simple group of boolean flags or an ordered sequence of boolean values.

We mentioned above that bit arrays are used for priority queues, where the bit at index k is set if and only if k is in the queue; this data structure is used, for example, by the Linux kernel, and benefits strongly from a hardware find-first-one operation.

Bit arrays can be used for the allocation of memory pages, inodes, disk sectors, etc. In such cases, the term bitmap may be used. However, this term is frequently used to refer to raster images, which may use multiple bits per pixel.

Another application of bit arrays is the Bloom filter, a probabilistic set data structure that can store large sets in a small space in exchange for a small probability of error. It is also possible to build probabilistic hash tables based on bit arrays that accept either false positives or false negatives.

Bit arrays and the operations on them are also important for constructing succinct data structures, which use close to the minimum possible space. In this context, operations like finding the nth 1 bit (select) or counting the number of 1 bits up to a certain position (rank) become important.
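For instance, a naive rank operation (counting the 1 bits before a given position) can be written directly with word-level population counts; succinct data structures add auxiliary tables so that this runs in constant time, but the sketch below shows the underlying idea (again relying on the GCC/Clang builtin __builtin_popcount):

#include <stddef.h>
#include <stdint.h>

/* Number of 1 bits in a[0..pos-1], with bits stored LSB-first in 32-bit words. */
size_t rank1(const uint32_t *a, size_t pos)
{
    size_t count = 0;
    for (size_t i = 0; i < pos / 32; i++)
        count += (size_t)__builtin_popcount(a[i]);          /* full words before pos */
    if (pos % 32)
        count += (size_t)__builtin_popcount(a[pos / 32] & (((uint32_t)1 << (pos % 32)) - 1));
                                                            /* low bits of the partial word */
    return count;
}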

Bit arrays are also a useful abstraction for examining streams of compressed data, which often contain elements that occupy portions of bytes or are not byte-aligned. For example, the compressed Huffman coding representation of a single 8-bit character can be anywhere from 1 to 255 bits long.

Language support

The C programming language's bit fields, pseudo-objects found in structs with size equal to some number of bits, are in fact small bit arrays; they are limited in that they cannot span words. Although they give a convenient syntax, the bits are still accessed using bitwise operators on most machines, and their widths must be fixed at compile time. It is also a common idiom for C programmers to use words as small bit arrays and access bits of them using bit operators.
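A brief illustration of both idioms (the field and flag names here are arbitrary examples):

#include <stdint.h>

/* A struct of bit fields: a few flags packed into part of one word. */
struct flags {
    unsigned int is_open  : 1;
    unsigned int is_dirty : 1;
    unsigned int mode     : 2;     /* a 2-bit field */
};

/* The equivalent "word as a small bit array" idiom. */
#define IS_OPEN   (1u << 0)
#define IS_DIRTY  (1u << 1)

void example(void)
{
    struct flags f = {0};
    f.is_dirty = 1;                /* set a flag via the bit-field syntax   */

    uint32_t g = 0;
    g |= IS_DIRTY;                 /* set the same flag with bit operators  */
    (void)f; (void)g;              /* silence unused-variable warnings      */
}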

In C++, although individual bools typically occupy a byte, the STL type vector<bool> is a partial specialization which packs bits into words. Programmers should note, however, that its operator[] does not return a reference to a bool element but a proxy object. This might seem a minor point, but it means that vector<bool> does not meet the requirements of a standard STL container, which is why its use is generally discouraged. Another STL class, bitset, creates an array of bits whose size is fixed at compile time, and which in its interface and syntax more resembles the idiomatic use of words as bit sets by C programmers. It also has some additional power, such as the ability to efficiently count the number of bits that are set.

In Java, the class java.util.BitSet creates a bit array which is then manipulated with functions named after bitwise operators familiar to C programmers. Unlike the bitset in C++, the Java BitSet expands dynamically if a bit is set at an index beyond the current size of the bit vector.

The .NET Framework supplies a BitArray collection class. It stores boolean values, supports random access and bitwise operators, can be iterated over, and its Length property can be changed to grow or truncate it.

Although Standard ML has no support for bit arrays, Standard ML of New Jersey has an extension, the BitArray structure, in its SML/NJ Library. It is not fixed in size and supports set operations and bit operations, including, unusually, shift operations.

See also