Implicit parallelism

From Wikipedia, the free encyclopedia

In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler to automatically exploit the parallelism inherent to the computations expressed by some of the language's constructs. A pure implicitly parallel language does not need special directives, operators or functions to enable parallel execution.
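The contrast between the two styles can be sketched in Python (which is not itself implicitly parallel); the function `square` and both helpers below are hypothetical examples, not constructs from any of the languages named in this article. In the explicit version the programmer sets up workers and distributes tasks; in the implicit style the programmer writes only the computation, and a compiler for an implicitly parallel language could run the independent applications in parallel with no directives at all.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def explicit_parallel(xs):
    # Explicit style: the programmer creates the worker pool
    # and divides the work among the workers by hand.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(square, xs))

def implicit_style(xs):
    # Implicit style: a plain map over independent elements.
    # In an implicitly parallel language, the compiler could
    # execute these applications in parallel automatically.
    return [square(x) for x in xs]
```

Both functions compute the same result; the difference is only in who decides how the work is scheduled.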

Some parallel programming languages with implicit parallelism are: LabVIEW, HPF, ZPL, NESL and SISAL.

Advantages

A programmer who writes implicitly parallel code does not need to worry about task division or process communication, and can focus instead on the problem that the program is intended to solve. Implicit parallelism generally simplifies the design of parallel programs and can therefore substantially improve programmer productivity.

Disadvantages

Languages with implicit parallelism reduce the control that the programmer has over the parallel execution of the program, sometimes resulting in less-than-optimal parallel efficiency. The designers of the Oz programming language also note that their early experiments with implicit parallelism showed that it made debugging difficult and object models unnecessarily awkward.[1]

Notes

  1. ^ Seif Haridi (2006-06-14). "Introduction". Tutorial of Oz. Retrieved 2007-09-20.