Cost efficiency

Cost efficiency (or cost optimality), in the context of parallel computer algorithms, refers to a measure of how effectively parallel computing can be used to solve a particular problem. A parallel algorithm is considered cost efficient if its asymptotic running time multiplied by the number of processing units involved in the computation is comparable to the running time of the best sequential algorithm.
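
Equivalently, writing T_par(n) for the parallel running time on p processors and T_seq(n) for the running time of the best sequential algorithm (notation introduced here for illustration; it does not appear in the source text), the cost of a parallel algorithm and the cost-optimality condition can be sketched as:

    Cost(n) = p · T_par(n)
    Cost(n) = O(T_seq(n))   (cost efficient / cost optimal)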

For example, suppose a problem can be solved in O(n) time by the best known sequential algorithm. A parallel algorithm that solves the same problem in O(n/p) time on p processors is considered cost efficient.
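
Applying the sketch above to this example: the cost of the parallel algorithm is p · O(n/p) = O(n), which matches the O(n) running time of the best sequential algorithm, so the cost-optimality condition holds. By contrast, a hypothetical parallel algorithm solving the same problem in O(log n) time on n processors would have cost n · O(log n) = O(n log n), asymptotically more than O(n), and so would not be cost efficient.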
