Heap (data structure)

This article is about the programming data structure. For the dynamic memory area, see Dynamic memory allocation.
Example of a complete binary max-heap with node keys being integers from 1 to 100

In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if A is a parent node of B, then the key of node A is ordered with respect to the key of node B, with the same ordering applying across the heap. A heap can be further classified as either a "max heap" or a "min heap". In a max heap, the keys of parent nodes are always greater than or equal to those of the children and the highest key is in the root node. In a min heap, the keys of parent nodes are less than or equal to those of the children and the lowest key is in the root node. Heaps are crucial in several efficient graph algorithms such as Dijkstra's algorithm, and in the sorting algorithm heapsort. A common implementation of a heap is the binary heap, in which the tree is a complete binary tree (see figure).

In a heap, the highest (or lowest) priority element is always stored at the root, hence the name heap. A heap is not a sorted structure; it can be regarded as partially ordered. As the heap diagram shows, there is no particular relationship among nodes on any given level, even among the siblings. When a heap is a complete binary tree, it has the smallest possible height: a heap with N nodes always has height O(log N). A heap is a useful data structure when it is necessary to repeatedly remove the object with the highest (or lowest) priority.

Note that, as shown in the graphic, there is no implied ordering between siblings or cousins and no implied sequence for an in-order traversal (as there would be in, e.g., a binary search tree). The heap relation mentioned above applies only between nodes and their parents, grandparents, etc. The maximum number of children each node can have depends on the type of heap, but in many types it is at most two, which is known as a binary heap.

The heap is one maximally efficient implementation of an abstract data type called a priority queue, and in fact priority queues are often referred to as "heaps", regardless of how they may be implemented. Note that despite the similarity of the name "heap" to "stack" and "queue", the latter two are abstract data types, while a heap is a specific data structure, and "priority queue" is the proper term for the abstract data type.

The heap data structure should not be confused with "the heap", a common name for the pool of memory used for dynamic memory allocation. The term was originally used only for the data structure.

Operations

The common operations involving heaps are:

Basic
  find-max (or find-min): find the maximum item of a max-heap, or the minimum item of a min-heap (a.k.a. peek)
  insert: add a new key to the heap (a.k.a. push)
  extract-max (or extract-min): return the node of maximum value from a max heap (or minimum value from a min heap) after removing it from the heap (a.k.a. pop)
  delete-max (or delete-min): remove the root node of a max heap (or min heap), respectively
  replace: pop the root and push a new key; more efficient than a pop followed by a push, since it only needs to rebalance once
Creation
  create-heap: create an empty heap
  heapify: create a heap out of a given array of elements
  merge (union): join two heaps to form a valid new heap containing all the elements of both, preserving the original heaps
  meld: join two heaps to form a valid new heap containing all the elements of both, destroying the original heaps
Inspection
  size: return the number of items in the heap
  is-empty: return true if the heap is empty, false otherwise
Internal
  increase-key or decrease-key: update a key within a max- or min-heap, respectively
  delete: delete an arbitrary node (followed by moving the last node into its place and shifting to maintain the heap property)
  sift-up (shift-up): move a node up in the tree, as long as needed; used to restore the heap condition after insertion
  sift-down (shift-down): move a node down in the tree, similar to sift-up; used to restore the heap condition after deletion or replacement
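
As a concrete point of reference, Python's standard heapq module (cited in the references below) implements a binary min-heap on top of a plain list and covers most of the basic and creation operations; the sketch below maps the operation names above to that API.

    import heapq

    heap = [5, 1, 4, 2]
    heapq.heapify(heap)                # heapify: O(n) in-place construction from an array
    heapq.heappush(heap, 3)            # insert (push)
    print(heap[0])                     # find-min (peek): the root is always at index 0 -> 1
    print(heapq.heappop(heap))         # extract-min (pop) -> 1
    print(heapq.heapreplace(heap, 6))  # replace: pop the root (2) and push 6 in one rebalancing step
    print(len(heap) == 0)              # size / is-empty -> False
    print(list(heapq.merge([1, 3], [2, 4])))  # k-way merge of sorted inputs (not a heap meld) -> [1, 2, 3, 4]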

Implementation

Heaps are usually implemented in an array (fixed size or dynamic array), and do not require pointers between elements. After an element is inserted into or deleted from a heap, the heap property may be violated and the heap must be balanced by internal operations.

Full and almost full binary heaps may be represented in a very space-efficient way (as an implicit data structure) using an array alone. The first (or last) element will contain the root. The next two elements of the array contain its children. The next four contain the four children of the two child nodes, etc. Thus the children of the node at position n would be at positions 2n and 2n + 1 in a one-based array, or 2n + 1 and 2n + 2 in a zero-based array. This allows moving up or down the tree by doing simple index computations. Balancing a heap is done by shift-up or shift-down operations (swapping elements which are out of order). As we can build a heap from an array without requiring extra memory (for the nodes, for example), heapsort can be used to sort an array in-place.
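
The index arithmetic above is easy to express directly. The following minimal sketch (the function names are illustrative, not from any particular library) computes parent and child positions for a zero-based array and checks the min-heap property:

    # Index arithmetic for a binary heap stored in a zero-based array.
    def parent(i):
        return (i - 1) // 2

    def left(i):
        return 2 * i + 1

    def right(i):
        return 2 * i + 2

    def is_min_heap(a):
        """Return True if the list a satisfies the min-heap property."""
        n = len(a)
        for i in range(n):
            for c in (left(i), right(i)):
                if c < n and a[i] > a[c]:
                    return False
        return True

    print(is_min_heap([1, 3, 2, 7, 4, 5]))  # True
    print(is_min_heap([3, 1, 2]))           # False: parent 3 is greater than child 1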

Different types of heaps implement the operations in different ways, but notably, insertion is often done by adding the new element at the end of the heap in the first available free space. This will generally violate the heap property, so the new element is then shifted up until the heap property has been re-established. Similarly, deleting the root is done by removing the root, putting the last element in its place, and shifting it down to rebalance. Replacing is therefore done by deleting the root, putting the new element in the root, and shifting it down; compared to a pop (shift-down of the last element) followed by a push (shift-up of the new element), this avoids one shift-up step.
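
The following sketch of a binary min-heap on top of a Python list illustrates insertion with shift-up, root deletion with shift-down, and the cheaper replace operation described above; the class and method names are purely illustrative, not a standard API.

    class MinHeap:
        """A binary min-heap stored in a zero-based Python list (illustrative sketch)."""

        def __init__(self):
            self.a = []

        def push(self, key):
            # Insert at the first free position, then shift up to restore the heap property.
            self.a.append(key)
            i = len(self.a) - 1
            while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
                self.a[(i - 1) // 2], self.a[i] = self.a[i], self.a[(i - 1) // 2]
                i = (i - 1) // 2

        def pop(self):
            # Remove the root, move the last element into its place, then shift down.
            root = self.a[0]
            last = self.a.pop()
            if self.a:
                self.a[0] = last
                self._shift_down(0)
            return root

        def replace(self, key):
            # Pop the root and push a new key in one pass: only a single shift-down is needed.
            root = self.a[0]
            self.a[0] = key
            self._shift_down(0)
            return root

        def _shift_down(self, i):
            n = len(self.a)
            while True:
                smallest = i
                for c in (2 * i + 1, 2 * i + 2):
                    if c < n and self.a[c] < self.a[smallest]:
                        smallest = c
                if smallest == i:
                    return
                self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
                i = smallest

    h = MinHeap()
    for x in (5, 3, 8, 1):
        h.push(x)
    print(h.pop())       # 1
    print(h.replace(7))  # 3 (the new key 7 is shifted down from the root)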

Construction of a binary (or d-ary) heap out of a given array of elements may be performed in linear time using the classic Floyd algorithm, with the worst-case number of comparisons equal to 2N − 2s₂(N) − e₂(N) (for a binary heap), where s₂(N) is the sum of all digits of the binary representation of N and e₂(N) is the exponent of 2 in the prime factorization of N.[4] This is faster than a sequence of consecutive insertions into an originally empty heap, which is log-linear (or linearithmic).[note 1]
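
A sketch of Floyd's bottom-up construction under the same zero-based array layout: every internal node is shifted down, starting from the last one, while the leaves are left untouched because they are already one-element heaps. The helper below is illustrative, not any particular library routine.

    def shift_down(a, i, n):
        # Move a[i] down until no child is smaller (or a leaf is reached).
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and a[c] < a[smallest]:
                    smallest = c
            if smallest == i:
                return
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest

    def build_min_heap(a):
        """Rearrange the list a into a binary min-heap in place (bottom-up construction)."""
        n = len(a)
        # Positions n // 2 ... n - 1 hold leaves, which are already valid one-element heaps,
        # so only the internal nodes are shifted down, from the last one towards the root.
        for i in range(n // 2 - 1, -1, -1):
            shift_down(a, i, n)

    data = [9, 4, 7, 1, 3, 8]
    build_min_heap(data)
    print(data[0])  # 1: the minimum is now at the root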

Variants

Comparison of theoretic bounds for variants

In the following time complexities,[5] O(f) is an asymptotic upper bound and Θ(f) is an asymptotically tight bound (see Big O notation). Function names assume a min-heap.

Operation     Binary[5]   Binomial[5]        Fibonacci[5]       Pairing[6]                 Brodal[7][note 2]   Rank-pairing[9]    Strict Fibonacci[10]
find-min      Θ(1)        Θ(1)               Θ(1)               Θ(1)                       Θ(1)                Θ(1)               Θ(1)
delete-min    Θ(log n)    Θ(log n)           O(log n)[note 3]   O(log n)[note 3]           O(log n)            O(log n)[note 3]   O(log n)
insert        Θ(log n)    Θ(1)[note 3]       Θ(1)               Θ(1)                       Θ(1)                Θ(1)               Θ(1)
decrease-key  Θ(log n)    Θ(log n)           Θ(1)[note 3]       o(log n)[note 3][note 4]   Θ(1)                Θ(1)[note 3]       Θ(1)
merge         Θ(n)        O(log n)[note 5]   Θ(1)               Θ(1)                       Θ(1)                Θ(1)               Θ(1)

Notes:
  1. Each insertion takes O(\log k) time, where k is the current size of the heap, so the total is \sum_{k=1}^n O(\log k). Since \log(n/2) = \log n - 1, a constant fraction (half) of these insertions are within a constant factor of the maximum, so asymptotically we can take k = n; formally the time is n O(\log n) - O(n) = O(n \log n). This can also be readily seen from Stirling's approximation.
  2. Brodal and Okasaki later describe a persistent variant with the same bounds except for decrease-key, which is not supported. Heaps with n elements can be constructed bottom-up in O(n).[8]
  3. Amortized time.
  4. Lower bound of \Omega(\log\log n), upper bound of O(2^{2\sqrt{\log\log n}}).[11][12]
  5. n is the size of the larger heap.

Applications

The heap data structure has many applications:

Heapsort: one of the best sorting methods, being in-place and having no quadratic worst case.
Selection algorithms: finding the minimum, maximum, both, the median, or even the k-th largest element can be done in linear time (often constant time) using heaps.[13]
Graph algorithms: heaps serve as internal traversal data structures in, for example, Prim's minimal-spanning-tree algorithm and Dijkstra's shortest-path algorithm.
Priority queues: a priority queue is an abstract concept like "a list" or "a map"; just as a list can be implemented with a linked list or an array, a priority queue can be implemented with a heap or a variety of other methods.
K-way merging: a heap data structure is useful to merge many already-sorted input streams into a single sorted output stream.
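
As a small illustration of the first application, heapsort can be sketched with the standard heapq module; unlike a textbook in-place heapsort, this version copies the input into a separate list.

    import heapq

    def heapsort(iterable):
        """Sort by building a min-heap and repeatedly extracting the minimum."""
        heap = list(iterable)
        heapq.heapify(heap)   # linear-time bottom-up construction
        return [heapq.heappop(heap) for _ in range(len(heap))]

    print(heapsort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]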

Implementations

The C++ Standard Library provides the make_heap, push_heap and pop_heap algorithms for heaps (usually implemented as binary heaps), which operate on arbitrary random-access iterators, as well as the std::priority_queue container adaptor. The Python standard library includes the heapq module, which implements a priority queue as a binary min-heap on top of a plain list. Java's java.util.PriorityQueue class is likewise backed by a binary heap.

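For example, the heapq module can serve as a simple priority queue by storing (priority, item) pairs; this is only a minimal sketch, and real applications often add a tie-breaking counter for items with equal priority.

    import heapq

    # A min-heap of (priority, task) pairs: the lowest priority number is served first.
    tasks = []
    heapq.heappush(tasks, (3, "write documentation"))
    heapq.heappush(tasks, (1, "fix critical bug"))
    heapq.heappush(tasks, (2, "review pull request"))

    while tasks:
        priority, task = heapq.heappop(tasks)
        print(priority, task)
    # 1 fix critical bug
    # 2 review pull request
    # 3 write documentation

Tuples compare lexicographically, so the integer priority decides the order; with equal priorities the task strings would be compared, which is why a tie-breaking counter is commonly inserted between them.
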
See also

References

  1. The Python Standard Library, 8.4. heapq — Heap queue algorithm, heapq.heappush
  2. The Python Standard Library, 8.4. heapq — Heap queue algorithm, heapq.heappop
  3. The Python Standard Library, 8.4. heapq — Heap queue algorithm, heapq.heapreplace
  4. Suchenek, Marek A. (2012), "Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program", Fundamenta Informaticae (IOS Press) 120 (1): 75–92, doi:10.3233/FI-2012-751.
  5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  6. Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory, Lecture Notes in Computer Science 1851, Springer-Verlag, pp. 63–77, doi:10.1007/3-540-44985-X_5
  7. Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues", Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms (PDF), pp. 52–58
  8. Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341.
  9. Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (2009). "Rank-pairing heaps" (PDF). SIAM J. Computing: 1463–1485.
  10. Brodal, G. S. L.; Lagogiannis, G.; Tarjan, R. E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. p. 1177. doi:10.1145/2213977.2214082. ISBN 9781450312455.
  11. Fredman, Michael Lawrence; Tarjan, Robert E. (1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery 34 (3): 596–615. doi:10.1145/28869.28874.
  12. Pettie, Seth (2005). "Towards a Final Analysis of Pairing Heaps" (PDF). Max Planck Institut für Informatik.
  13. Frederickson, Greg N. (1993), "An Optimal Algorithm for Selection in a Min-Heap", Information and Computation (PDF) 104 (2), Academic Press, pp. 197–214, doi:10.1006/inco.1993.1030

External links

Wikimedia Commons has media related to Heaps.
The Wikibook Data Structures has a page on the topic of: Min and Max Heaps