Garbage collection (computer science)

From Wikipedia, the free encyclopedia

In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector, or just collector, attempts to reclaim garbage, or memory used by objects that will never be accessed or mutated again by the application. Garbage collection was invented by John McCarthy around 1959 to solve the problems of manual memory management in his Lisp programming language.[1]

Garbage collection is often portrayed as the opposite of manual memory management, which requires the programmer to specify which objects to deallocate and return to the memory system. However, many systems use a combination of the two approaches, and there are other techniques being studied (such as region inference) to solve the same fundamental problem.[1]

Description

The basic principle of how a garbage collector works is:

  1. Determine what data objects in a program will not be accessed in the future
  2. Reclaim the resources used by those objects

By making manual memory deallocation unnecessary (and often forbidding it), garbage collection frees the programmer from having to worry about releasing objects that are no longer needed, which can otherwise consume a significant amount of design effort. It also helps make programs more stable, because it prevents several classes of runtime errors. For example, it prevents dangling pointer errors, in which a reference to a deallocated object is used. (The pointer still points to the location in memory where the object or data was, even though the object or data has since been deleted and the memory may now be used for other purposes.)

Many computer languages require garbage collection, either as part of the language specification (e.g. Java, C#, and most scripting languages) or effectively for practical implementation (e.g. formal languages like lambda calculus); these are said to be garbage-collected languages. Other languages were designed for use with manual memory management, but have garbage-collected implementations available (e.g. C and C++). Newer Delphi versions support garbage-collected dynamic arrays, long strings, variants and interfaces. Some languages, like Modula-3, allow garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects; others, like D, are garbage-collected but allow the user to delete objects manually and to disable garbage collection entirely when speed is required. In any case, it is far easier to implement garbage collection as part of the language's compiler and runtime system,[citation needed] but post hoc GC systems exist, including ones that do not require recompilation. The garbage collector is almost always closely integrated with the memory allocator.

Benefits

Garbage collection frees the programmer from manually dealing with memory allocation and deallocation. As a result, certain categories of bugs are eliminated or substantially reduced:

  • Dangling pointer bugs, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is used.
  • Double free bugs, which occur when the program attempts to free a region of memory that is already free.
  • Certain kinds of memory leaks, in which a program fails to free memory that is no longer referenced by any variable, leading over time to memory exhaustion.

Researchers draw a distinction between "physical" and "logical" memory leaks. In a physical memory leak, the last pointer to a region of allocated memory is removed, but the memory is not freed. In a logical memory leak, a region of memory is still referenced by a pointer, but is never actually used.[2] Garbage collectors generally can do nothing about logical memory leaks. Novice programmers sometimes believe that garbage collection makes memory leaks impossible, not realizing that logical leaks are still possible.

Tracing garbage collectors

Tracing garbage collectors are the most common type of garbage collector. They first determine which objects are reachable (or potentially reachable), and then discard all remaining objects.

Reachability of an object

Informally, a reachable object can be defined as an object for which there exists some variable in the program environment that leads to it, either directly or through references from other reachable objects. More precisely, objects can be reachable in only two ways:

  1. A distinguished set of objects are assumed to be reachable—these are known as the roots. Typically, these include all the objects referenced from anywhere in the call stack (that is, all local variables and parameters in the functions currently being invoked), and any global variables.
  2. Anything referenced from a reachable object is itself reachable; more formally, the set of reachable objects is the transitive closure of the root set under the reference relation (a short code sketch of this computation follows the list).
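
The reachable set can be computed with a simple worklist traversal. The sketch below only illustrates the definition above and is not any particular collector's implementation; the HeapObject interface and its children() method are placeholders for "an object and the references it contains".

import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Placeholder for "an object and the references it contains".
interface HeapObject {
    List<HeapObject> children();
}

class Reachability {
    // The reachable set is the transitive closure of the root set under the
    // "directly references" relation; everything outside it is syntactic garbage.
    static Set<HeapObject> reachableFrom(Collection<HeapObject> roots) {
        Set<HeapObject> reachable = new HashSet<>();   // a real collector would key on identity/address
        Deque<HeapObject> worklist = new ArrayDeque<>(roots);
        while (!worklist.isEmpty()) {
            HeapObject obj = worklist.pop();
            if (reachable.add(obj)) {                  // first visit only
                worklist.addAll(obj.children());       // rule 2: referenced by a reachable object
            }
        }
        return reachable;
    }
}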

The reachability definition of "garbage" is not optimal, insofar as the last time a program uses an object could be long before that object falls out of the environment scope. A distinction is sometimes drawn between syntactic garbage, those objects the program cannot possibly reach, and semantic garbage, those objects the program will in fact never again use. For example:

Object x = new Foo();
Object y = new Bar();
x = new Quux();
/* at this point, we know that the Foo object will
 * never be accessed - it is syntactic garbage
 */
 
if(x.check_something()) {
  x.do_something(y);
}
System.exit(0);
/* in the above block, y *could* be semantic garbage,
 * but we won't know until x.check_something() returns
 * some value - if it returns at all
 */

The problem of precisely identifying semantic garbage can easily be shown to be undecidable: for a program that allocates an object X, runs an arbitrary input program P, and uses X if and only if P finishes, a semantic garbage collector would have to solve the halting problem in order to decide whether X can be collected. Although conservative heuristic methods for semantic garbage detection remain an active research area, essentially all practical garbage collectors focus on syntactic garbage as described here.

Basic algorithm

Tracing collectors are so called because they trace through the working set of memory. These garbage collectors perform collection in cycles. A cycle is started when the collector decides (or is notified) that it needs to reclaim storage, typically when the system is low on memory. The original method is a naive mark-and-sweep, in which the entire memory set is touched several times.

In this method, each object in memory has a flag (typically a single bit) reserved for garbage collection use only. This flag is always cleared (counter-intuitively), except during the collection cycle. The first stage of collection traverses the object graph starting from the 'root set', marking each object it can reach as 'in-use'; all objects transitively accessible from the root set are marked in this way. Finally, each object in memory is examined again: those with the in-use flag still cleared are not reachable by any program or data, and their memory is freed. (For objects that are marked in-use, the flag is cleared again, in preparation for the next cycle.)
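
A minimal sketch of this cycle, assuming a toy heap in which every object carries a single mark flag and the collector is handed both the root set and the list of all allocated objects (Node, roots and allObjects are illustrative names, not any real runtime's API):

import java.util.ArrayList;
import java.util.List;

// Toy heap object with a mark bit and outgoing references (illustrative only).
class Node {
    boolean marked;                    // cleared between cycles
    List<Node> refs = new ArrayList<>();
}

class MarkSweep {
    List<Node> allObjects = new ArrayList<>(); // every object the allocator has handed out
    List<Node> roots = new ArrayList<>();      // stack slots, globals, and so on

    void collect() {
        // Mark phase: flag everything reachable from the roots as in-use.
        for (Node root : roots) {
            mark(root);
        }
        // Sweep phase: examine every object; unmarked ones are unreachable and freed.
        List<Node> live = new ArrayList<>();
        for (Node obj : allObjects) {
            if (obj.marked) {
                obj.marked = false;    // clear the flag, ready for the next cycle
                live.add(obj);
            }
            // unmarked objects are simply dropped; in a real heap their storage
            // would be returned to the allocator here
        }
        allObjects = live;
    }

    private void mark(Node obj) {
        if (obj.marked) return;
        obj.marked = true;
        for (Node child : obj.refs) {
            mark(child);               // recursion stands in for an explicit mark stack
        }
    }
}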

This method has several disadvantages, the most notable being that the entire system must be suspended during collection; no mutation of the working set can be allowed. This will cause programs to 'freeze' periodically (and generally unpredictably), making real-time and time-critical applications impossible. In addition, the entire working memory must be examined, much of it twice, potentially causing problems in paged memory systems.

Because of these pitfalls, most modern tracing garbage collectors implement some variant of the tri-colour marking abstraction, although simple collectors (such as the mark-and-sweep collector) often do not make this abstraction explicit. Tri-colour marking works as follows (a short code sketch of these steps follows the list):

  1. Create initial white, grey, and black sets; these sets will be used to maintain progress during the cycle. Initially the white set or condemned set is the set of objects that are candidates for having their memory recycled. The black set is the set of objects that cheaply can be proven to have no references to objects in the white set; in many implementations the black set starts off empty. The grey set is all the remaining objects that may or may not have references to objects in the white set (and elsewhere). These sets partition memory; every object in the system, including the root set, is in precisely one set.
  2. Pick an object from the grey set. Grey all the white objects it references directly, then blacken the object itself (move it to the black set).
  3. Repeat the previous step until the grey set is empty.
  4. When there are no more objects in the grey set, then all the objects remaining in the white set are provably not reachable and the storage occupied by them can be reclaimed.
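
The steps above can be written almost literally in terms of the three sets. The following sketch assumes the sets have already been initialised as described in step 1, with the roots placed in the grey set, and uses a placeholder Obj type; the set representations are chosen for clarity rather than efficiency.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Obj {
    List<Obj> refs = new ArrayList<>();
}

class TriColour {
    Set<Obj> white = new HashSet<>(); // candidates for collection
    Set<Obj> grey  = new HashSet<>(); // reachable, references not yet scanned
    Set<Obj> black = new HashSet<>(); // reachable, all references scanned

    // Assumes the sets partition the heap and the roots start out grey.
    void markCycle() {
        while (!grey.isEmpty()) {
            Obj obj = grey.iterator().next();   // step 2: pick any grey object
            for (Obj child : obj.refs) {
                if (white.remove(child)) {      // grey every white object it references
                    grey.add(child);
                }
            }
            grey.remove(obj);
            black.add(obj);                     // blacken it: no black-to-white edge is created
        }
        // step 4: everything still white is unreachable and can be reclaimed
        white.clear();
    }
}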

The tri-colour marking algorithm preserves an important invariant:

No black object points directly to a white object.

This ensures that the white objects can be safely destroyed once the grey set is empty. (Some variations on the algorithm do not preserve the tricolour invariant but they use a modified form for which all the important properties hold.)

The tri-colour method has an important advantage: it can be performed 'on-the-fly', without halting the system for significant time periods. This is accomplished by marking objects as they are allocated and during mutation, maintaining the various sets. By monitoring the size of the sets, the system can perform garbage collection periodically, rather than as-needed. Also, the need to touch the entire working set each cycle is avoided.

Implementation strategies

In order to implement the basic tri-colour algorithm, several important design decisions must be made, which can significantly affect the performance characteristics of the garbage collector.

Moving vs. non-moving

Once the unreachable set has been determined, the garbage collector may simply release the unreachable objects and leave everything else as it is, or it may copy some or all of the reachable objects into a new area of memory, updating all references to those objects as needed. These are called "non-moving" and "moving" garbage collectors, respectively.

At first, a moving GC strategy may seem inefficient and costly compared to the non-moving approach, since much more work would appear to be required on each cycle. In fact, however, the moving GC strategy leads to several performance advantages, both during the garbage collection cycle itself and during actual program execution:

  • No additional work is required to reclaim the space freed by dead objects; the entire region of memory from which reachable objects were moved can be considered free space. In contrast, a non-moving GC must visit each unreachable object and somehow record that the memory it alone occupied is available.
  • Similarly, new objects can be allocated very quickly. Since large contiguous regions of memory are usually made available by the moving GC strategy, new objects can be allocated by simply incrementing a 'free memory' pointer. A non-moving strategy may, after some time, lead to a heavily fragmented heap, requiring expensive consultation of "free lists" of small available blocks of memory in order to allocate new objects.
  • If an appropriate traversal order is used, objects that refer to each other frequently can be moved very close to each other in memory, increasing the likelihood that they will be located in the same cache line or virtual memory page. This can significantly speed up access to these objects through these references.

One disadvantage of a moving garbage collector is that it only allows access through references that are managed by the garbage collected environment, and does not allow pointer arithmetic. This is because any native pointers to objects will be invalidated when the garbage collector moves the object (they become dangling pointers). For interoperability with native code, the garbage collector must copy the object contents to a location outside of the garbage collected region of memory. An alternative approach is to allow pinning the object in memory, preventing the garbage collector from moving it, and allowing the memory to be directly shared with native pointers (and possibly allowing pointer arithmetic).[2]

Copying vs. mark-and-sweep vs. mark-and-don't-sweep

To further refine the distinction, tracing collectors can also be divided by considering how the three sets of objects (white, grey, and black) are maintained during a collection cycle.

The most straightforward approach is the semi-space collector, which dates to 1969. In this moving GC scheme, memory is partitioned into a "from space" and a "to space". Initially, objects are allocated into "to space" until it becomes full and a collection cycle is triggered. At the start of the cycle, the "to space" becomes the "from space", and vice versa. The objects reachable from the root set are copied from the "from space" to the "to space"; these objects are scanned in turn, and all objects they point to are copied into "to space", until all reachable objects have been copied. When the program resumes execution, new objects are again allocated in the "to space" until it fills up and the process is repeated. This approach has the advantage of conceptual simplicity (the three object colour sets are implicitly constructed during the copying process), but the disadvantage that a (possibly) very large contiguous region of free memory is necessarily required on every collection cycle. This technique is also known as stop-and-copy. Cheney's algorithm is an improvement on the semi-space collector.
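
Cheney's improvement uses the "to space" itself as the queue of objects still to be scanned, so the copy proceeds breadth-first with only a single scan pointer. The sketch below applies the idea to a toy heap of object records rather than raw memory; Cell, forwardingAddress and refs are illustrative names, not a real runtime's layout.

import java.util.ArrayList;
import java.util.List;

// Toy object record; in a real collector these would be raw words in the heap.
class Cell {
    Cell forwardingAddress;              // set once the object has been copied
    List<Cell> refs = new ArrayList<>();
}

class SemiSpaceCollector {
    // Copies everything reachable from the roots into a fresh "to space",
    // updating the roots, and returns the new space; the old "from space"
    // can then be reused wholesale.
    List<Cell> collect(List<Cell> roots) {
        List<Cell> toSpace = new ArrayList<>();
        for (int i = 0; i < roots.size(); i++) {
            roots.set(i, copy(roots.get(i), toSpace));
        }
        // Cheney scan: the to-space list itself is the queue of grey objects.
        for (int scan = 0; scan < toSpace.size(); scan++) {
            Cell obj = toSpace.get(scan);
            for (int i = 0; i < obj.refs.size(); i++) {
                obj.refs.set(i, copy(obj.refs.get(i), toSpace));
            }
        }
        return toSpace;
    }

    private Cell copy(Cell obj, List<Cell> toSpace) {
        if (obj == null) return null;
        if (obj.forwardingAddress == null) {        // not copied yet
            Cell newCell = new Cell();
            newCell.refs = new ArrayList<>(obj.refs);
            obj.forwardingAddress = newCell;        // leave a forwarding pointer behind
            toSpace.add(newCell);                   // appending is "allocation" in to-space
        }
        return obj.forwardingAddress;               // every reference is redirected to the copy
    }
}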

A mark and sweep garbage collector maintains a bit (or two) with each object to record whether it is white or black; the grey set is either maintained as a separate list (such as the process stack) or using another bit. As the reference tree is traversed during a collection cycle (the "mark" phase), these bits are manipulated by the collector to reflect the current state. A final "sweep" of the memory areas then frees white objects. The mark and sweep strategy has the advantage that, once the unreachable set is determined, either a moving or non-moving collection strategy can be pursued; this choice of strategy can even be made at runtime, as available memory permits. It has the disadvantage of "bloating" objects by a small amount.

A mark and don't sweep garbage collector, like the mark-and-sweep, maintains a bit with each object to record whether it is white or black; the grey set is either maintained as a separate list (such as the process stack) or using another bit. There are two key differences here. First, black and white mean different things than they do in the mark and sweep collector. In a "mark and don't sweep" system, all reachable objects are always black. An object is marked black at the time it is allocated, and it will stay black even if it becomes unreachable. A white object is unused memory and may be allocated. Second, the interpretation of the black/white bit can change. Initially, the black/white bit may have the sense of (0=white, 1=black). If an allocation operation ever fails to find any available (white) memory, that means all objects are marked used (black). The sense of the black/white bit is then inverted (for example, 0=black, 1=white). Everything becomes white. This momentarily breaks the invariant that reachable objects are black, but a full marking phase follows immediately, to mark them black again. Once this is done, all unreachable memory is white. No "sweep" phase is necessary.
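
The sense inversion amounts to comparing each object's flag against a single global bit instead of a fixed value. A reduced sketch, assuming the marking traversal and the root set are available; all names here are illustrative.

import java.util.ArrayList;
import java.util.List;

// Sketch of the mark-and-don't-sweep colour convention (illustrative only).
class ToyObject {
    boolean flag;                        // interpreted relative to the heap's sense bit
    List<ToyObject> refs = new ArrayList<>();
}

class MarkDontSweepHeap {
    // An object is "black" (allocated and assumed reachable) when its flag
    // equals this sense bit, and "white" (free memory) when it differs.
    boolean blackSense = true;

    boolean isBlack(ToyObject o) { return o.flag == blackSense; }
    boolean isWhite(ToyObject o) { return o.flag != blackSense; }

    void allocate(ToyObject o) { o.flag = blackSense; }   // objects are born black

    // Called when allocation finds no white memory: flip the interpretation of
    // the bit (everything momentarily becomes white), then re-mark from the roots.
    void flipAndMark(Iterable<ToyObject> roots) {
        blackSense = !blackSense;
        for (ToyObject root : roots) {
            mark(root);                  // restores the invariant: reachable objects are black
        }
        // no sweep phase: whatever is still white is simply available for allocation
    }

    private void mark(ToyObject o) {
        if (isBlack(o)) return;
        o.flag = blackSense;
        for (ToyObject child : o.refs) mark(child);
    }
}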

Generational GC (aka Ephemeral GC)

It has been empirically observed that in many programs, the most recently created objects are also those most likely to quickly become unreachable (known as infant mortality or the generational hypothesis). A generational GC divides objects into generations and, on most cycles, will place only the objects of a subset of generations into the initial white (condemned) set. Furthermore, the runtime system maintains knowledge of when references cross generations by observing the creation and overwriting of references. When the garbage collector runs, it may be able to use this knowledge to prove that some objects in the initial white set are unreachable without having to traverse the entire reference tree. If the generational hypothesis holds, this results in much faster collection cycles while still reclaiming most unreachable objects.

In order to implement this concept, many[citation needed] generational garbage collectors use separate memory regions for different ages of objects. When a region becomes full, those few objects that are referenced from older memory regions are promoted (copied) up to the next highest region, and the entire region can then be overwritten with fresh objects. This technique permits very fast incremental garbage collection, since the garbage collection of only one region at a time is all that is typically required.
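
The "knowledge of when references cross generations" mentioned above is usually gathered by a write barrier that records old-to-young references in a remembered set, which a minor collection then treats as additional roots. A much-reduced sketch of such a barrier, with placeholder names and a simple boolean generation test:

import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

// Toy two-generation object (illustrative field layout).
class GenObject {
    boolean inOldGeneration;
    List<GenObject> fields = new ArrayList<>();
}

class GenerationalHeap {
    // Old objects that may hold references into the young generation.
    Set<GenObject> rememberedSet =
            Collections.newSetFromMap(new IdentityHashMap<>());

    // Every pointer store goes through this barrier instead of a raw assignment.
    void writeField(GenObject holder, int index, GenObject value) {
        holder.fields.set(index, value);
        if (holder.inOldGeneration && value != null && !value.inOldGeneration) {
            rememberedSet.add(holder);   // old-to-young edge: holder is an extra root
        }                                // for the next minor (young-generation) collection
    }
}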

Generational garbage collection is a heuristic approach, and some unreachable objects may not be reclaimed on each cycle. It may therefore occasionally be necessary to perform a full mark and sweep or copying garbage collection to reclaim all available space. In fact, runtime systems for modern programming languages (such as Java and the .NET Framework) usually use some hybrid of the various strategies that have been described thus far; for example, most collection cycles might look only at a few generations, while occasionally a mark-and-sweep is performed, and even more rarely a full copying is performed to combat fragmentation. The terms "minor cycle" and "major cycle" are sometimes used to describe these different levels of collector aggression.

Stop-the-world vs. incremental vs. concurrent

Simple stop-the-world garbage collectors completely halt execution of the program to run a collection cycle, thus guaranteeing that new objects are not allocated and objects do not suddenly become unreachable while the collector is running. This has the obvious disadvantage that the program can perform no useful work while a collection cycle is running (sometimes called the "embarrassing pause").

Incremental garbage collectors are designed to reduce this disruption by interleaving their work with activity from the main program. Careful design is necessary to ensure that the main program does not interfere with the garbage collector and vice versa; for example, when the program needs to allocate a new object, the runtime system may either need to suspend it until the collection cycle is complete, or somehow notify the garbage collector that there exists a new, reachable object.
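
One common way to "notify the garbage collector" of a newly stored reference is an insertion-style write barrier in the spirit of the one proposed by Dijkstra et al.: when the program stores a reference into an object the collector has already blackened, the referenced object is greyed so it cannot be missed. The sketch below is illustrative only and is not any specific runtime's barrier; it reuses the tri-colour sets introduced earlier.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Ref {
    List<Ref> fields = new ArrayList<>();
}

class IncrementalCollector {
    Set<Ref> white = new HashSet<>();
    Set<Ref> grey  = new HashSet<>();
    Set<Ref> black = new HashSet<>();

    // Insertion-style write barrier: the mutator calls this instead of a plain
    // pointer store while a collection cycle is in progress.
    void writeField(Ref holder, int index, Ref value) {
        holder.fields.set(index, value);
        if (black.contains(holder) && white.contains(value)) {
            white.remove(value);    // re-grey the target so that no black object
            grey.add(value);        // is left pointing directly at a white one
        }
    }

    // The collector interleaves bounded chunks of marking work with the mutator,
    // for example scanning at most 'budget' grey objects per call.
    void markSome(int budget) {
        while (budget-- > 0 && !grey.isEmpty()) {
            Ref obj = grey.iterator().next();
            for (Ref child : obj.fields) {
                if (white.remove(child)) grey.add(child);
            }
            grey.remove(obj);
            black.add(obj);
        }
    }
}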

Finally, a concurrent garbage collector can run concurrently in real time with the main program on a shared memory multiprocessing machine. Complex locking regimes may be necessary in order to guarantee correctness, and cache issues also make this less helpful than one might imagine. Nonetheless, concurrent GC may be desirable for multiprocessing applications with high performance requirements.

Precise vs. conservative and internal pointers

Some collectors running in a particular environment can correctly identify all pointers (references) in an object; these are called "precise" (or "exact" or "accurate") collectors, the opposite being a "conservative" or "partly conservative" collector. Conservative collectors have to assume that any bit pattern in memory could be a pointer if, interpreted as a pointer, it would point into an allocated object. Thus, conservative collectors may have some false negatives, where storage is not released because of accidental fake pointers, but in practice this is rarely a significant drawback. Whether a precise collector is practical usually depends on the type safety properties of the programming language. An example of a language for which a conservative garbage collector is needed is C, which allows pointers to be cast to integers and back to pointers.
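
In practice, a conservative collector answers the question "could this word be a pointer?" by checking whether the word, read as an address, falls within the memory it manages (and, usually, within an actually allocated block). A toy illustration of that test on simulated addresses:

// Toy model of the test a conservative collector applies to each word it scans.
class ConservativeScan {
    final long heapStart;   // lowest address managed by the collector
    final long heapEnd;     // one past the highest managed address

    ConservativeScan(long heapStart, long heapEnd) {
        this.heapStart = heapStart;
        this.heapEnd = heapEnd;
    }

    // A word is treated as a possible pointer if, read as an address, it lands
    // inside the managed heap. An integer that merely happens to look like a
    // heap address therefore keeps its accidental "target" alive.
    boolean mightBePointer(long word) {
        // A real collector would also consult the allocator's metadata to check
        // that the address falls on, or inside, an actually allocated block.
        return word >= heapStart && word < heapEnd;
    }
}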

A related issue concerns internal pointers, or pointers to fields within an object. If the semantics of a language allow internal pointers, then there may be many different addresses that can refer to the same object, which complicates determining whether an object is garbage or not.

Performance implications

Tracing garbage collectors require some implicit runtime overhead that may be beyond the control of the programmer, and can sometimes lead to performance problems. For example, commonly used stop-the-world garbage collectors, which pause program execution at arbitrary times, may make GC languages inappropriate for some embedded systems, high-performance server software, and applications with real-time needs.

A more fundamental issue is that garbage collectors violate locality of reference, since they deliberately go out of their way to find bits of memory that haven't been accessed recently. The performance of modern computer architectures is increasingly tied to caching, which depends on the assumption of locality of reference for its effectiveness. Some garbage collection methods result in better locality of reference than others: generational garbage collection is relatively cache-friendly, and copying collectors automatically defragment memory, helping to keep related data together. Nonetheless, poorly timed garbage collection cycles could have a severe performance impact on some computations, and for this reason many runtime systems provide mechanisms that allow the program to temporarily suspend, delay or activate garbage collection cycles.

Despite these issues, for many practical purposes, allocation/deallocation-intensive algorithms implemented in garbage collected languages can actually be faster than their equivalents using manual heap allocation.[citation needed] A major reason for this is that the garbage collector allows the runtime system to amortize allocation and deallocation operations in a potentially advantageous fashion.[citation needed] Each has different sorts of overheads:

Manual heap allocation:
  • search for best/first-fit block of sufficient size
  • free list maintenance
Garbage collection:
  • locate reachable objects
  • copy reachable objects for moving collectors
  • read/write barriers for incremental collectors
  • search for best/first-fit block and free list maintenance for non-moving collectors

It is difficult to compare the two cases directly, as their behavior depends on the situation. For example, in the best case for a garbage-collecting system, allocation just increments a pointer; but in the best case for manual heap allocation, the allocator maintains freelists of specific sizes and allocation only requires following a pointer. However, this size segregation usually causes a large degree of external fragmentation, which can adversely affect cache behaviour. Memory allocation in a garbage-collected language may be implemented using heap allocation behind the scenes (rather than simply incrementing a pointer), so the performance advantages listed above do not all apply in this case. In some situations, most notably embedded systems, it is possible to avoid both garbage collection and heap management overhead by preallocating pools of memory and using a custom, lightweight scheme for allocation/deallocation.[3]
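
The two best cases described above can be made concrete with two toy allocators over simulated addresses: a bump-pointer allocator, as a moving collector might use, and a size-segregated free-list allocator, as a fast manual allocator might use. Both are sketches, not real allocators.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Best case with a moving collector: allocation is a single pointer increment.
class BumpAllocator {
    long next;          // start of the free region
    final long limit;   // end of the free region

    BumpAllocator(long start, long limit) { this.next = start; this.limit = limit; }

    long allocate(long size) {
        if (next + size > limit) return -1;  // region exhausted: a collection is needed
        long address = next;
        next += size;                        // this increment is the entire "search"
        return address;
    }
}

// Best case for manual management: one free list per size class, allocation pops the head.
class SegregatedFreeLists {
    Map<Long, Deque<Long>> freeLists = new HashMap<>();

    long allocate(long size) {
        Deque<Long> list = freeLists.get(size);
        if (list == null || list.isEmpty()) return -1;  // fall back to a slower general path
        return list.pop();                              // just follow one pointer
    }

    void free(long address, long size) {
        freeLists.computeIfAbsent(size, s -> new ArrayDeque<>()).push(address);
    }
}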

The overhead of write barriers is more likely to be noticeable in an imperative-style program which frequently writes pointers into existing data structures than in a functional-style program which constructs data only once and never changes them.

Some advances in garbage collection can be understood as reactions to performance issues. Early collectors were stop-the-world collectors, but the pauses of this approach proved disruptive in interactive applications. Incremental collection avoided this disruption, but at the cost of decreased efficiency due to the need for barriers. Generational collection techniques are used with both stop-the-world and incremental collectors to increase performance; the trade-off is that some garbage is not detected as garbage for longer than normal.

Reference counting

Main article: Reference counting

Reference counting is a form of automatic memory management where each object has a count of the number of references to it. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. The object's memory is reclaimed when the count reaches zero.
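
A reduced sketch of this discipline, assuming every pointer store goes through an assign() helper and that reclaiming an object recursively releases the references it holds; all names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Toy reference-counted object (illustrative only).
class RcObject {
    int refCount;
    List<RcObject> fields = new ArrayList<>();
}

class RefCounting {
    // Replace the reference stored in holder.fields[index] with newValue,
    // keeping both objects' counts up to date.
    void assign(RcObject holder, int index, RcObject newValue) {
        if (newValue != null) newValue.refCount++;    // a reference was created
        RcObject old = holder.fields.set(index, newValue);
        release(old);                                 // a reference was destroyed
    }

    void release(RcObject obj) {
        if (obj == null) return;
        if (--obj.refCount == 0) {
            // No references remain: reclaim the object and drop the references it held.
            for (RcObject child : obj.fields) release(child);
            obj.fields.clear();
            // the object's memory would be returned to the allocator here
        }
    }
}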

There are two major disadvantages to reference counting:

  • If two or more objects refer to each other, they can create a cycle in which neither will be collected, because their mutual references never let their reference counts become zero. Some garbage collection systems that use reference counting (such as the one in CPython) employ specific cycle-detecting algorithms to deal with this issue.
  • In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters; optimizations that reduce this cost are described in the literature. When used in a multithreaded environment, these modifications (increment and decrement) may need to be interlocked (made atomic), which is usually an expensive operation.

Escape analysis

Main article: Escape analysis

Escape analysis can be used to convert heap allocations to stack allocations, thus reducing the amount of work that the garbage collector has to do.
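
For example, in the fragment below the temporary Point never escapes the method in which it is created, so a compiler performing escape analysis may allocate it on the stack (or eliminate the allocation entirely) rather than on the garbage-collected heap; whether this actually happens depends on the particular compiler and runtime.

// The Point created here is used only inside distanceFromOrigin; it is never
// stored in a field, returned, or passed where it could outlive the call, so
// escape analysis may let it be stack-allocated instead of heap-allocated.
class EscapeExample {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distanceFromOrigin(double x, double y) {
        Point p = new Point(x, y);   // candidate for stack allocation
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }
}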

Availability

Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In languages that do not have it built in, garbage collection can often be added as a library, as with the Boehm garbage collector for C and C++.

Functional programming languages, like ML, Haskell, and APL, have garbage collection built in. Lisp, which introduced functional programming, is especially notable for adopting garbage collection before the technique was in common use.

Other dynamic languages, such as Ruby (but not Perl or PHP, which use reference counting), also tend to use GC. Object-oriented programming languages like Smalltalk, Java and ECMAScript usually provide integrated garbage collection, a notable exception being C++.

Historically, languages intended for beginners, such as BASIC and the Lisp-derived Logo, have often used garbage collection for variable-length data types, such as strings and lists, so as to not burden programmers with manual memory management. On early microcomputers, with their limited memory and slow processors, garbage collection could often cause apparently random, inexplicable pauses in the midst of program operation.

Limited environments

Garbage collection is rarely used on embedded or real-time systems because of the perceived need for very tight control over the use of limited resources. However, garbage collectors compatible with such limited environments have been developed.[4] The Microsoft .NET Micro Framework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.

References

  1. ^ Note that there is an ambiguity of terms, as theory often uses the terms manual garbage-collection and automatic garbage-collection rather than manual memory management and garbage-collection, and does not restrict garbage-collection to memory management, rather considering that any logical or physical resource may be garbage-collected.
  2. ^ Maebe, Ronsse, and De Bosschere, "Precise detection of memory leaks." http://www.cs.virginia.edu/woda2004/papers/maebe.pdf
  3. ^ [cap-talk] Memory allocation in embedded systems
  4. ^ Wei Fu and Carl Hauser, "A Real-Time Garbage Collection Framework for Embedded Systems". ACM SCOPES '05, 2005. http://portal.acm.org/ft_gateway.cfm?id=1140392&type=pdf&coll=GUIDE&dl=GUIDE&CFID=15151515&CFTOKEN=6184618
