Stack-based memory allocation

Stacks in computing architectures are regions of memory where data is added or removed in a last-in, first-out (LIFO) manner.

In most modern computer systems, each thread has a reserved region of memory referred to as its stack. When a function executes, it may add some of its state data to the top of the stack; when the function exits, it is responsible for removing that data from the stack. If a region of memory lies on the thread's stack, that memory is said to have been allocated on the stack.
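
As a minimal C sketch of this behavior (the function and variable names are illustrative only), the locals below occupy space in the function's stack frame while it runs and are released automatically when it returns:

 #include <stdio.h>

 /* The local variables below are allocated in this function's stack frame
  * when it is called and are reclaimed automatically when it returns. */
 static int sum_of_squares(int n)
 {
     int total = 0;       /* allocated on the stack */
     int squares[16];     /* small fixed-size array, also on the stack */

     if (n > 16)
         n = 16;
     for (int i = 0; i < n; i++) {
         squares[i] = i * i;
         total += squares[i];
     }
     return total;        /* the stack space for total and squares is freed here */
 }

 int main(void)
 {
     printf("%d\n", sum_of_squares(5));
     return 0;
 }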

Because the data is added and removed in a last-in, first-out manner, stack allocation is very simple and typically faster than heap allocation. Another advantage is that memory on the stack is automatically reclaimed when the function exits, which frees the programmer from having to deallocate it explicitly.
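
A short C sketch of the contrast with heap allocation (names are illustrative and error handling is kept minimal):

 #include <stdlib.h>

 void stack_version(void)
 {
     char buffer[256];            /* reserved by adjusting the stack pointer */
     buffer[0] = '\0';
 }                                /* buffer is reclaimed automatically here */

 void heap_version(void)
 {
     char *buffer = malloc(256);  /* heap allocation: more bookkeeping, typically slower */
     if (buffer == NULL)
         return;
     buffer[0] = '\0';
     free(buffer);                /* must be released explicitly by the programmer */
 }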

A disadvantage of stack-based memory allocation is that a thread's stack size can be as small as a few dozen kilobytes. Allocating more memory on the stack than is available can result in a crash due to stack overflow. Another disadvantage is that memory allocated on the stack is automatically deallocated when the function that allocated it returns, so the function must copy the data elsewhere if it is to remain available to other parts of the program after the function returns.
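
The following C sketch illustrates both problems (names are illustrative): an oversized local array that can exhaust the stack, and a pointer to stack memory that becomes invalid once the function returns, together with one way to fix the latter by copying the data to the heap.

 #include <stdlib.h>
 #include <string.h>

 /* Likely to crash with a stack overflow on many systems: default stack
  * sizes are commonly a few megabytes at most, and can be much smaller. */
 void too_big(void)
 {
     char huge[64 * 1024 * 1024];   /* 64 MiB local array */
     huge[0] = '\0';
 }

 /* Buggy: greeting lives on the stack and is deallocated when the function
  * returns, so the caller is left with a dangling pointer. */
 const char *broken_greeting(void)
 {
     char greeting[] = "hello";
     return greeting;               /* undefined behavior if dereferenced later */
 }

 /* Correct: copy the data to the heap so it outlives the call.
  * The caller is responsible for calling free() on the result. */
 char *safe_greeting(void)
 {
     char greeting[] = "hello";
     char *copy = malloc(sizeof greeting);
     if (copy != NULL)
         memcpy(copy, greeting, sizeof greeting);
     return copy;
 }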

Some processor families, such as the x86, have special instructions for manipulating the stack of the currently executing thread. Other processor families, including PowerPC and MIPS, have no explicit hardware stack support and instead rely on convention, with stack management defined by the platform's application binary interface (ABI).
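
As a rough, compiler-dependent illustration (the exact instruction sequences vary with the compiler, optimization level, and ABI), the comments in the C sketch below show the kind of stack-manipulating prologue and epilogue typically generated for x86-64, and note how architectures without dedicated stack instructions achieve the same effect by convention:

 /* Rough illustration only: the comments describe the kind of code a
  * typical x86-64 compiler might emit at low optimization levels. */
 int frame_example(int x)
 {
     /* Prologue (x86-64, typical):
      *   push rbp          ; save the caller's frame pointer on the stack
      *   mov  rbp, rsp     ; establish this function's frame
      *   sub  rsp, 16      ; reserve space for locals by moving the stack pointer
      *
      * On MIPS or PowerPC there is no dedicated push instruction; the same
      * effect is achieved by convention, for example by subtracting from a
      * general-purpose register that the ABI designates as the stack pointer.
      */
     int local = x * 2;     /* lives in the space reserved above */

     /* Epilogue (x86-64, typical):
      *   mov rsp, rbp      ; release the locals
      *   pop rbp           ; restore the caller's frame pointer
      *   ret               ; pop the return address and jump to it
      */
     return local;
 }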
