Image segmentation strives to partition a digital image into regions of pixels with similar properties, e.g. homogeneity. The higher-level region representation simplifies image analysis tasks such as counting objects or detecting changes, because region attributes can be compared more readily than raw pixels.
Motivation for graph-based methods
To speed up the segmentation of large images, the work can be divided among several CPUs. One way of accomplishing this is to split the image into tiles that are processed independently. However, regions that straddle a tile border might be split up or lost if the fragments do not meet the segmentation algorithm's minimum size requirements. A trivial workaround is to overlap the tiles, i.e. to allow each processor to consider additional pixels around its tile's border. Unfortunately this increases the computational load, since the processors on both sides of a tile boundary perform redundant work. Furthermore, only objects smaller than the tile overlap are guaranteed to be preserved, which means that long objects such as rivers in aerial imagery are still likely to be split. In some cases, the results of the independent tiles can be fused to approximate the true result. An alternative exists in the form of graph-based segmentation methods. The connectivity information inherent to graphs allows independent work on parts of the original image to be reconnected afterwards, yielding an exact result as if processing had occurred globally.
From images to graphs
The possibility of stitching together independently processed sub-images motivates adding connectivity information to the pixels. The image can then be viewed as a graph whose nodes are pixels and whose edges represent connections between neighboring pixels. A simple and comparatively space-efficient variant is the grid graph, in which each pixel is connected to its neighbors in the four cardinal directions. Since the pixel neighborhood relation is symmetric, the resulting graph is undirected and only half of its edges need to be stored. The final step encodes pixel similarity in the edge weights, so that the original image is no longer needed. In the simplest case, edge weights are computed as the difference of pixel intensities.
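As a concrete illustration, the following Python sketch builds such a 4-connected grid graph. It assumes a grayscale image stored as a 2D NumPy array; the function name build_grid_graph and the (weight, u, v) edge representation are illustrative choices, not part of any particular library.

import numpy as np

def build_grid_graph(image):
    """Return a list of (weight, u, v) edges for a 4-connected grid graph.

    Nodes are pixel indices (row * width + column); the weight of an edge
    is the absolute intensity difference of its two endpoint pixels.
    Because the neighbor relation is symmetric, each edge is stored once
    (only the east and south neighbors are visited).
    """
    height, width = image.shape
    edges = []
    for r in range(height):
        for c in range(width):
            u = r * width + c
            if c + 1 < width:   # east neighbor
                edges.append((abs(int(image[r, c]) - int(image[r, c + 1])), u, u + 1))
            if r + 1 < height:  # south neighbor
                edges.append((abs(int(image[r, c]) - int(image[r + 1, c])), u, u + width))
    return edges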
Minimum spanning tree segmentation algorithms
A minimum spanning tree (MST) is a minimum-weight, cycle-free subset of a graph's edges such that all nodes are connected. In 2004, Felzenszwalb and Huttenlocher introduced a segmentation method based on Kruskal's MST algorithm. Edges are considered in increasing order of weight; their endpoint pixels are merged into a region if this does not create a cycle, and if the pixels are 'similar' to the existing regions' pixels. Detecting cycles is possible in near-constant time with the aid of a disjoint-set data structure. Pixel similarity is judged by a heuristic, which compares the weight to a per-segment threshold. The algorithm outputs multiple disjoint MSTs, i.e. a forest; each tree corresponds to a segment. The complexity of the algorithm is quasi-linear because the edges can be sorted in linear time via counting sort.
In 2009, Wassenberg et al. developed an algorithm that computes multiple independent minimum spanning forests and then stitches them together. This enables parallel processing without splitting objects on tile borders. Instead of a fixed weight threshold, an initial connected-component labeling is used to estimate a lower bound on the threshold, which can reduce both over- and undersegmentation. Measurements show that the implementation outperforms Felzenszwalb's sequential algorithm by an order of magnitude.
In 2017, Saglam and Baykan used Prim's sequential MST algorithm and proposed a new cutting criterion for image segmentation. They construct the MST with Prim's algorithm using a Fibonacci heap data structure. The method achieves good segmentation results on the test images with fast execution times.
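The following sketch illustrates the Kruskal-style merging loop described above. It is not Felzenszwalb's published implementation: the per-segment threshold Int(C) + k/|C| follows the commonly cited form of the region-comparison predicate, the scale parameter k is an assumed tuning constant, and the edge list is the one produced by the grid-graph sketch in the previous section.

class DisjointSet:
    """Union-find with path compression, augmented with per-segment data."""

    def __init__(self, num_nodes, k):
        self.parent = list(range(num_nodes))
        self.size = [1] * num_nodes        # |C|: number of pixels in the segment
        self.internal = [0.0] * num_nodes  # Int(C): largest merged edge weight
        self.k = k                         # scale parameter (assumed tuning constant)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def threshold(self, root):
        # Per-segment merging threshold: Int(C) + k / |C|.
        return self.internal[root] + self.k / self.size[root]

    def union(self, root_a, root_b, weight):
        self.parent[root_b] = root_a
        self.size[root_a] += self.size[root_b]
        self.internal[root_a] = weight  # valid because edges arrive in increasing order


def segment(edges, num_pixels, k=300.0):
    """Merge pixels into segments; returns the union-find forest."""
    forest = DisjointSet(num_pixels, k)
    # Sorting dominates the cost; counting sort over integer weights would make it linear.
    for weight, u, v in sorted(edges):
        root_u, root_v = forest.find(u), forest.find(v)
        if root_u == root_v:
            continue  # the edge would close a cycle within one segment
        # Merge only if the edge is light compared to both segments' thresholds.
        if weight <= forest.threshold(root_u) and weight <= forest.threshold(root_v):
            forest.union(root_u, root_v, weight)
    return forest

After the loop, two pixels u and v belong to the same segment exactly when forest.find(u) == forest.find(v); a minimum segment size could be enforced in a final pass over the remaining edges.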