Programming Complexity

Programming complexity is the complexity of programs, programming practice, and programming languages; managing it remains one of the unsolved problems of software engineering.

Software can become so complex that, when the programmers who wrote it resign or are terminated, a company may fail if no one else is capable of understanding their work[who?]. Because of this, researchers have established metrics that measure complexity and can be used to guide efforts to reduce it.

One measure of the complexity of a program is the complexity of its algorithm, that is, the number of steps the algorithm takes to solve a problem (see optimization problem).[1] A smaller complexity means fewer steps and a more efficient program. Efficiency and optimization are important to professional software developers, whose programs often consist of very large numbers of interconnected methods, each adding complexity and analytic and organizational difficulty.
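
To illustrate what "number of steps" means here, the following sketch compares two ways of summing the integers 1 through n: a loop that performs one addition per integer (linear in n), and Gauss's closed-form formula, which needs the same few operations regardless of n. The example is illustrative only; the function names are chosen for this sketch and are not standard terminology.

    # Two ways to compute 1 + 2 + ... + n, differing in algorithmic complexity.

    def sum_linear(n: int) -> int:
        """O(n): one addition per integer, so the step count grows with n."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_constant(n: int) -> int:
        """O(1): Gauss's closed-form formula uses the same few steps for any n."""
        return n * (n + 1) // 2

    if __name__ == "__main__":
        n = 1_000_000
        # Same result, very different step counts.
        assert sum_linear(n) == sum_constant(n)
        print(sum_constant(n))

The loop is the less efficient program in the sense used above: its complexity, measured in steps, grows with the size of the input, while the formula's does not.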

There are several metrics one can use to measure programming complexity:

  • data complexity (Chapin Metric)
  • data flow complexity (Elshof Metric)
  • data access complexity (Card Metric)
  • interface complexity (Henry Metric)
  • control flow complexity (McCabe Metric)
  • decisional complexity (McClure Metric)
  • branching complexity (Sneed Metric)
  • language complexity (Halstead Metric)
  • cyclomatic complexity (another name for the McCabe control flow metric; a simple illustration follows this list)

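As a rough illustration of one of these metrics, the sketch below estimates the cyclomatic (McCabe) complexity of a Python function by counting branching constructs in its abstract syntax tree and adding one. This is a simplified approximation written for this example, not a reference implementation; the function names and the chosen set of decision nodes are assumptions.

    import ast
    import inspect

    # Node types treated as decision points in this simplified model.
    _DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                       ast.ExceptHandler, ast.BoolOp, ast.comprehension)

    def cyclomatic_complexity(func) -> int:
        """Approximate McCabe complexity: number of decision points plus one."""
        tree = ast.parse(inspect.getsource(func))
        decisions = sum(isinstance(node, _DECISION_NODES)
                        for node in ast.walk(tree))
        return decisions + 1

    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"

    if __name__ == "__main__":
        # classify has two if/elif branches, so the estimate is 2 + 1 = 3.
        print(cyclomatic_complexity(classify))

Dedicated tools compute this and related metrics more rigorously; the point of the sketch is only that the number grows with the amount of branching in the code.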

References

See also