Atomic operation

From Wikipedia, the free encyclopedia

An atomic operation, in computer science, is a set of operations that appears to the rest of the system to be a single, indivisible operation.

Conditions

To accomplish this, two conditions must be met:

  1. Until the entire set of operations completes, no other process can know about the changes being made; and
  2. If any of the operations fails, the entire set of operations fails, and the system is restored to the state it was in before any of the operations began.

To the rest of the system, it appears that the set of operations either succeeds or fails all at once. No in-between state is accessible. This is an atomic operation.

Even without the complications of multiple processing units, this can be non-trivial to implement. As long as there is the possibility of a change in the flow of control, without atomicity the system can enter an invalid state (one that violates the program's invariants, the conditions the program requires to always hold).

Example

One process

For example, imagine a single process running on a computer, incrementing a value in a memory location. To increment that memory location:

  1. the process reads the value in the memory location;
  2. the process adds one to the value;
  3. the process writes the new value back into the memory location.
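The three steps above can be written out as a sketch; here a Python dictionary stands in for memory, and the address and function name are illustrative, not a real machine interface:

```python
# A sketch of the three-step increment, with each step written out
# explicitly. The dictionary stands in for memory; 0x10 is an
# illustrative address.

def increment(memory, address):
    value = memory[address]   # step 1: read the value from memory
    value = value + 1         # step 2: add one to the value
    memory[address] = value   # step 3: write the new value back

memory = {0x10: 41}
increment(memory, 0x10)
# memory[0x10] is now 42
```

Run by a single process with no interruptions, this sequence behaves exactly like one indivisible increment.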

Two processes

Now, imagine two processes are running incrementing a single, shared memory location:

  1. the first process reads the value in the memory location;
  2. the first process adds one to the value;

but before it can write the new value back to the memory location it is suspended, and the second process is allowed to run:

  1. the second process reads the value in the memory location, the same value that the first process read;
  2. the second process adds one to the value;
  3. the second process writes the new value into the memory location.

The second process is suspended and the first process is allowed to run again:

  1. the first process writes a now-incorrect value into the memory location, unaware that the other process has already updated it.
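The lost update described above can be replayed deterministically. A minimal sketch, with the scheduler's switch points written out by hand and each "process" keeping its own local copy of the value:

```python
# Deterministic replay of the interleaving described above.
# The dictionary stands in for shared memory.

memory = {0x10: 0}

# The first process reads and adds one...
p1_value = memory[0x10]   # reads 0
p1_value += 1             # its local copy is now 1

# ...but is suspended before writing; the second process runs to completion:
p2_value = memory[0x10]   # also reads 0 -- the same value the first read
p2_value += 1
memory[0x10] = p2_value   # memory is now 1

# The first process resumes and clobbers the second process's update:
memory[0x10] = p1_value   # memory is still 1, not 2

# Two increments ran, but memory[0x10] == 1: one increment was lost.
```

Because the read-add-write sequence was not atomic, one of the two increments vanishes without any error being reported.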

This is a trivial example. In a real system, the operations can be more complex and the errors introduced extremely subtle. For example, reading a 64-bit value from memory may actually be implemented as two sequential reads of two 32-bit memory locations. If a process has read only the first 32 bits, and before it reads the second 32 bits the value in memory changes, the result is neither the original value nor the new value but a mixed-up garbage value.
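A sketch of such a torn read, simulating a 64-bit value stored as two 32-bit halves (the helper names and values are illustrative):

```python
# A 64-bit value stored as two 32-bit halves. If the value changes
# between the reader's two 32-bit reads, the reader ends up with a
# mix of old and new bits.

MASK32 = (1 << 32) - 1

def store64(mem, value):
    mem["lo"] = value & MASK32
    mem["hi"] = value >> 32

mem = {}
store64(mem, 0x00000001_00000000)   # old value (high half 1, low half 0)
lo = mem["lo"]                      # reader's first read: old low half, 0
store64(mem, 0x00000000_FFFFFFFF)   # the writer updates the value mid-read
hi = mem["hi"]                      # reader's second read: new high half, 0
torn = (hi << 32) | lo              # 0: neither the old value nor the new one
```

The reader observes a value that was never actually stored, which is exactly the kind of subtle corruption described above.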

Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect and debug.

Locking

A clever programmer might suggest placing a lock around this "critical section". However, without hardware support in the processor, a lock is nothing more than a memory location, which must itself be read, inspected, and written. Algorithms such as spin locking have been devised to implement software-only locking, but these can be inefficient.
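A minimal spin-lock sketch, assuming a single atomic test-and-set primitive; since Python has no such instruction, the primitive is simulated here with an ordinary threading.Lock, purely to illustrate the algorithm:

```python
import threading

class SpinLock:
    """Spin lock built on a test-and-set primitive (simulated here)."""

    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically: read the old flag value and set the flag to True.
        with self._guard:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self._test_and_set():  # spin until the flag was False
            pass

    def release(self):
        self._flag = False

# Usage: four threads incrementing a shared counter under the lock.
lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1  # the critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4000; without the lock, increments could be lost
```

The inefficiency the article mentions is visible in acquire(): a waiting process burns CPU time spinning in a loop rather than sleeping until the lock is free.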

Most modern processors have some facility which can be used to implement locking, such as an atomic test-and-set or compare-and-swap operation, or the ability to temporarily turn off interrupts, ensuring that the currently running process cannot be suspended.
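As a sketch of how a compare-and-swap (CAS) facility is used, here is a lock-free increment loop. CAS is a hardware instruction; its semantics (atomically: if the location still holds the expected value, store the new one) are simulated below with a lock, so the names and addresses are illustrative:

```python
import threading

_guard = threading.Lock()  # simulates the atomicity the hardware provides

def compare_and_swap(mem, addr, expected, new):
    """Atomically store `new` at `addr` only if it still holds `expected`."""
    with _guard:
        if mem[addr] == expected:
            mem[addr] = new
            return True
        return False

def atomic_increment(mem, addr):
    while True:
        old = mem[addr]  # read the current value
        if compare_and_swap(mem, addr, old, old + 1):
            return       # nobody interfered between the read and the write
        # another process changed the value in between; retry

mem = {0x10: 0}
atomic_increment(mem, 0x10)
# mem[0x10] is now 1
```

The retry loop is what makes this safe: if another process sneaks in between the read and the write, the CAS fails and the increment simply starts over with the fresh value, so no update is ever lost.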
