Image segmentation strives to partition a digital image into regions of pixels with similar properties, e.g. homogeneity[1]. The higher-level region representation simplifies image analysis tasks such as counting objects or detecting changes, because region attributes (e.g. average intensity or shape[2]) can be compared more readily than raw pixels.
To speed up segmentation of large images, the work can be divided among several CPUs. One way to accomplish this is to split the image into tiles that are processed independently. However, regions straddling a tile border may be split up or lost if the resulting fragments do not meet the segmentation algorithm's minimum size requirements. A simple workaround is to overlap the tiles, i.e. to let each processor consider additional pixels around its tile's border. Unfortunately this increases the computational load, since the processors on both sides of a tile boundary perform redundant work. Moreover, only objects smaller than the tile overlap are guaranteed to be preserved, so elongated objects such as rivers in aerial imagery are still likely to be split. In some cases, the independent tiles' results can be fused to approximate the true results[3]. Graph-based segmentation methods offer an alternative: the connectivity information inherent to graphs makes it possible to process parts of the image independently and then reconnect the partial results, yielding exactly the result that global processing would have produced.
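To make the overlap scheme concrete, the following minimal sketch enumerates overlapping tile bounds; the function name, tile size and overlap value are illustrative assumptions, not taken from the cited work:

```python
def tile_bounds(width, height, tile, overlap):
    """Yield (x0, y0, x1, y1) pixel bounds of overlapping tiles.

    Each tile is grown by `overlap` pixels on every side (clamped to the
    image), so only objects up to `overlap` pixels across a tile border
    are guaranteed to be seen whole by at least one processor.
    """
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (max(0, x - overlap), max(0, y - overlap),
                   min(width, x + tile + overlap),
                   min(height, y + tile + overlap))

# Example: a 1024x768 image cut into 256-pixel tiles with 16 pixels of
# overlap; each tuple could be handed to a separate worker.
tiles = list(tile_bounds(1024, 768, 256, 16))
```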
The possibility of stitching together independent sub-images motivates adding connectivity information to the pixels. This can be viewed as a graph whose nodes are pixels and whose edges represent connections between pixels. A simple and comparatively space-efficient variant is a grid graph, in which each pixel is connected to its neighbors in the four cardinal directions. Since the pixel neighbor relation is symmetric, the resulting graph is undirected and only half its edges (e.g. each pixel's eastern and southern neighbors) need be stored. The final step is to encode pixel similarity information in the edge weights, after which the original image is no longer needed. In the simplest case, an edge weight is the absolute difference of the two pixels' intensities.
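The sketch below builds such a grid graph for a grayscale image given as a list of rows, storing only each pixel's eastern and southern edges and using absolute intensity differences as weights, as described above; the function name and edge representation are illustrative choices:

```python
def grid_graph_edges(image):
    """Build the 4-connected grid graph of a grayscale image.

    Only eastern and southern edges are stored, since the neighbor
    relation is symmetric. Returns a list of (weight, a, b) tuples,
    where a and b are flattened pixel indices and the weight is the
    absolute intensity difference (the simplest similarity measure).
    """
    height, width = len(image), len(image[0])
    edges = []
    for y in range(height):
        for x in range(width):
            a = y * width + x
            if x + 1 < width:   # eastern neighbor
                edges.append((abs(image[y][x] - image[y][x + 1]), a, a + 1))
            if y + 1 < height:  # southern neighbor
                edges.append((abs(image[y][x] - image[y + 1][x]), a, a + width))
    return edges
```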
A minimum spanning tree (MST) is a minimum-weight, cycle-free subset of a graph's edges that connects all nodes. In 2004, Felzenszwalb and Huttenlocher introduced a segmentation method[4] based on Kruskal's MST algorithm. Edges are considered in increasing order of weight; an edge's endpoint pixels are merged into a region if this does not create a cycle and if the pixels are 'similar' to the existing regions' pixels. Cycles can be detected in near-constant amortized time with the aid of a disjoint-set data structure[5]. Pixel similarity is judged by a heuristic that compares the edge weight to a per-segment threshold. The algorithm outputs multiple disjoint MSTs, i.e. a forest; each tree corresponds to a segment. Because the integer edge weights can be sorted in linear time via counting sort, the overall complexity is quasi-linear, dominated by the near-linear cost of the disjoint-set operations.
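A compact sketch of this procedure, following the published description: each segment maintains a threshold Int(C) + k/|C|, where Int(C) is the heaviest edge merged into the segment so far and k is a parameter steering segment size. The parameter value and data layout here are illustrative choices:

```python
class DisjointSet:
    """Union-find with path halving and union by rank; find/union run
    in near-constant (inverse-Ackermann) amortized time."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):  # a and b must be roots
        if self.rank[a] < self.rank[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        if self.rank[a] == self.rank[b]:
            self.rank[a] += 1
        return a

def felzenszwalb_segment(edges, num_pixels, k=300.0):
    """Kruskal-style region merging with the per-segment threshold.

    `edges` is a list of (weight, a, b). Since edges arrive in
    increasing weight order, the merging edge's weight is the new
    Int(C) of the merged segment.
    """
    ds = DisjointSet(num_pixels)
    threshold = [k] * num_pixels  # Int(C) + k/|C| with Int=0, |C|=1
    for w, a, b in sorted(edges):  # counting sort would be linear here
        ra, rb = ds.find(a), ds.find(b)
        if ra == rb:
            continue  # endpoints already in one segment: would be a cycle
        if w <= threshold[ra] and w <= threshold[rb]:  # similarity heuristic
            r = ds.union(ra, rb)
            threshold[r] = w + k / ds.size[r]
    return ds  # ds.find(pixel) yields the pixel's segment representative
```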
In 2009, Wassenberg et al. developed an algorithm[6] that computes multiple independent minimum spanning forests and then stitches them together, enabling parallel processing without splitting objects at tile borders. Instead of a fixed weight threshold, an initial connected-component labeling is used to estimate a lower bound on the threshold, which can reduce both over- and undersegmentation. Measurements show that the implementation outperforms Felzenszwalb's sequential algorithm by an order of magnitude.
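The stitching principle can be illustrated schematically. The sketch below is not the authors' published algorithm: it computes a plain minimum spanning forest rather than applying the segmentation heuristic, reuses the DisjointSet class from the previous sketch, and assumes the edges have already been partitioned into per-tile interior edges and border-crossing edges:

```python
from concurrent.futures import ProcessPoolExecutor

def kruskal_msf(edges, ds):
    """Kruskal's algorithm: keep every edge joining two different trees."""
    forest = []
    for w, a, b in sorted(edges):
        ra, rb = ds.find(a), ds.find(b)
        if ra != rb:
            ds.union(ra, rb)
            forest.append((w, a, b))
    return forest

def tile_msf(args):
    # A fresh DisjointSet over all pixel ids per tile is wasteful but
    # keeps this sketch simple; only the tile's own edges are replayed.
    edges, num_pixels = args
    return kruskal_msf(edges, DisjointSet(num_pixels))

def parallel_msf(tile_edges, border_edges, num_pixels):
    """Per-tile forests are computed independently, then stitched.

    By the MST cycle property, an edge rejected inside a tile is a
    maximum-weight edge on a cycle there, so no minimum spanning forest
    of the whole graph requires it. Hence only the tile forests plus
    the border edges need the final, sequential Kruskal pass, which
    reproduces the global result exactly.
    """
    with ProcessPoolExecutor() as pool:
        forests = pool.map(tile_msf, [(e, num_pixels) for e in tile_edges])
    surviving = [e for f in forests for e in f] + list(border_edges)
    return kruskal_msf(surviving, DisjointSet(num_pixels))
```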