Integer overflow

From Wikipedia, the free encyclopedia

In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is larger than can be represented within the available storage space, for instance by adding 1 to the largest representable value. The most common result in these cases is that only the least significant representable bits of the result are stored (the result is said to wrap). On some processors the result instead saturates: once the maximum value is reached, attempts to make it larger simply return the maximum result.


Origin

The register width of a processor determines the range of values that can be represented. Typical binary register widths include:

8 bits (maximum representable value 255),
16 bits (maximum representable value 65,535),
32 bits (the most common width for personal computers as of 2005, maximum representable value 4,294,967,295),
64 bits (maximum representable value 18,446,744,073,709,551,615),
128 bits.

Since an arithmetic operation may produce a result larger than the maximum representable value, a potential error condition may result. In the C programming language, for example, signed integer overflow causes undefined behavior, whereas arithmetic on unsigned integers is reduced modulo a power of two, meaning that unsigned integers "wrap around" on overflow.

Diagram illustrating the wrapping behavior of an integer representation.

In computer graphics or signal processing, it is typical to work on data that ranges from 0 to 1 or from -1 to 1. An example of this is a grayscale image where 0 represents black, 1 represents white, and values in between represent varying shades of gray. One operation commonly needed is brightening the image by multiplying every pixel by a constant. Saturated arithmetic allows every pixel to be multiplied by that constant without explicit overflow checks, by settling on a reasonable outcome: pixels larger than 1 (i.e. "brighter than white") simply become white, and values "darker than black" simply become black.

Security ramifications

In some cases a program may expect a variable always to hold a positive value. If the variable is a signed quantity, its value can wrap to become negative, and this unexpected sign change may cause unintended behavior.

Subtracting from a small unsigned value may cause it to wrap to a large positive value, which may also be unexpected.
