Shared memory

In hardware

Diagram of a typical shared memory system: three processors are connected to the same memory module through a bus or crossbar switch.

In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system.

A shared memory system is relatively easy to program, since all processors share a single view of data and communication between processors can be as fast as memory accesses to the same location.
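
Concretely, on a shared memory machine two threads running on different processors can exchange data simply by writing and reading the same variable. The following is a minimal sketch, assuming a POSIX system with the Pthreads library; the variable name and values are arbitrary illustrations, not part of the article:

    /* Minimal sketch of communication through shared memory, assuming a
     * POSIX system with Pthreads. One thread stores a value into an
     * ordinary variable; the main thread reads it back. No message
     * passing is involved: the "communication" is just a store and a
     * load to the same address, guarded by a mutex for correctness. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_value;                  /* visible to all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        shared_value = 42;                    /* plain store to shared RAM */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        pthread_join(t, NULL);

        pthread_mutex_lock(&lock);
        printf("value seen by main thread: %d\n", shared_value);
        pthread_mutex_unlock(&lock);
        return 0;
    }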

The issue with shared memory systems is that many CPUs need fast access to memory and will therefore cache memory, which introduces two complications:

  • The CPU-to-memory connection becomes a bottleneck, so shared memory computers cannot scale very well; most of them have only around ten processors.
  • Cache coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be propagated to the other processors; otherwise the different processors will be working with incoherent data (see cache coherence and memory coherence). Such coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes become overloaded and turn into a performance bottleneck (the sketch after this list shows one way this can happen).
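
One well-known way coherence traffic can degrade performance is false sharing: logically independent variables that happen to sit on the same cache line force that line to bounce between processors' caches. The sketch below is illustrative only; it assumes Pthreads, a 64-byte cache line, and an arbitrary iteration count. With the padding in place each counter lives on its own cache line; removing the padding typically makes the same run noticeably slower on a multiprocessor, because every increment invalidates the other processor's copy of the line.

    /* Illustrative sketch of coherence traffic as a bottleneck.
     * Two threads increment separate counters. If both counters share
     * one cache line ("false sharing"), each increment forces the line
     * to migrate between the two CPUs' caches. Padding the counters to
     * separate lines avoids this. The 64-byte line size is an
     * assumption; real line sizes vary by processor. */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 100000000UL
    #define CACHE_LINE 64                 /* assumed line size in bytes */

    struct padded_counter {
        volatile unsigned long value;
        char pad[CACHE_LINE - sizeof(unsigned long)];  /* keeps counters
                                                          on separate lines */
    };

    static struct padded_counter counters[2];

    static void *worker(void *arg)
    {
        struct padded_counter *c = arg;
        for (unsigned long i = 0; i < ITERATIONS; i++)
            c->value++;                   /* may trigger coherence traffic
                                             if the line is shared */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &counters[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("%lu %lu\n", counters[0].value, counters[1].value);
        return 0;
    }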

The alternatives to shared memory are distributed memory and distributed shared memory, with another, similar set of issues. See also Non-Uniform Memory Access.
