Software metric

A software metric is a measure of some property of a piece of software or its specifications.

Since quantitative methods have proved so powerful in other sciences, computer science practitioners and theoreticians have worked hard to bring similar approaches to software development. Tom DeMarco stated, “You can’t control what you can’t measure” (DeMarco, T. (1982). Controlling Software Projects: Management, Measurement & Estimation. Yourdon Press, New York, p. 3).

Common software metrics

Common software metrics include:

  • Source lines of code
  • Cyclomatic complexity
  • Function point analysis
  • Bugs per line of code
  • Code coverage
  • Cohesion
  • Coupling
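
As a minimal illustration of the simplest of these, source lines of code, the following Python sketch counts non-blank, non-comment lines in a file. It is deliberately naive (it assumes “#” comments and ignores block comments and strings), and the input file name is hypothetical:

    def count_sloc(path):
        """Count physical source lines, skipping blanks and '#' comments.

        A naive sketch: block comments, strings containing '#', and other
        comment syntaxes are not handled.
        """
        sloc = 0
        with open(path) as f:
            for line in f:
                stripped = line.strip()
                if stripped and not stripped.startswith("#"):
                    sloc += 1
        return sloc

    print(count_sloc("example.py"))  # "example.py" is a hypothetical input file

Even this trivial measure embeds judgment calls (whether to count blanks, comments, or generated code), which is part of why “how much software” is hard to pin down.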

Limitations

It is very difficult to satisfactorily define or measure “how much” software there is in a program, especially when such a prediction must be made prior to detailed design. The practical utility of software metrics has thus been limited to narrow domains where the measurement process can be stabilized.

Management methodologies such as the Capability Maturity Model or ISO 9000 have therefore focused more on process metrics, which assist in monitoring and controlling the processes that produce the software.

Examples of process metrics affecting software development (one is computed in the sketch after this list):

  • Number of times the program failed to rebuild overnight
  • Number of defects introduced per developer hour
  • Number of changes to requirements
  • Hours of programmer time available and spent per week
  • Number of patch releases required after first product ship
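
As a sketch of how one of these, defects introduced per developer hour, might be computed, assume a simple weekly record; the data and field layout below are invented for illustration:

    # Hypothetical weekly records: (developer, hours worked, defects introduced)
    week = [
        ("alice", 32, 2),
        ("bob",   40, 5),
        ("carol", 24, 1),
    ]

    total_hours = sum(hours for _, hours, _ in week)
    total_defects = sum(defects for _, _, defects in week)

    # Team-level process metric: defects introduced per developer-hour
    print(f"{total_defects / total_hours:.3f} defects per developer-hour")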

Criticisms

Potential weaknesses and criticism of the metrics approach:

  • Unethical: It is said to be unethical to reduce a person’s performance to a small number of numerical variables and then judge them by that measure. A supervisor may assign the most talented programmer to the hardest task on a project; that task may then take the longest to develop and generate the most defects, precisely because of its difficulty. Uninformed managers overseeing the project might then judge the programmer as performing poorly, without consulting the supervisor who has the full picture.
  • Demeaning: “Management by numbers” pays no regard to the quality of employees’ experience; it manages numbers instead of managing people.
  • Skewing: The act of measurement biases the process, because employees seek to maximize management’s perception of their performance. For example, if lines of code are used to judge performance, employees will write as many separate lines of code as possible, and if they find a way to shorten their code, they may not use it (see the sketch after this list).
  • Inaccurate: No known metrics are both meaningful and accurate. Lines of code measure exactly what is typed, but say nothing about the difficulty of the problem. Function points were developed to better measure the complexity of the code or specification, but they require personal judgment to use well: different estimators produce different counts, which makes function points hard to apply consistently and fairly.
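
A concrete illustration of the skewing criticism: the two Python fragments below have identical behaviour (both sum the squares of 0 through 9), but a naive lines-of-code measure scores the padded version several times higher. Both fragments are invented for this example:

    # Compact version: one logical line of code
    total = sum(x * x for x in range(10))

    # Padded version: identical result, many more "lines of code"
    total = 0
    values = []
    for x in range(10):
        values.append(x)
    for x in values:
        square = x * x
        total = total + square

A programmer judged by line count has no incentive to write the first version, and an active incentive to write the second.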

Gaming metrics

Industry experience suggests that the design of metrics will encourage certain kinds of behaviour from the people being measured. The common phrase applied is “you get what you measure” (or “be careful what you wish for”).

A simple and quite common example is the cost-per-function-point metric applied in some Software Process Improvement programs as an indicator of productivity. The simplest way to achieve a lower cost per FP is to make the function points arbitrarily smaller, so that the same delivered functionality counts as more of them. Since there is no standard way of measuring function points, the metric is wide open to gaming – that is, cheating.
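
The arithmetic of this gaming is straightforward. In the hypothetical example below, the same project is counted under two FP granularities; all figures are invented:

    cost = 100_000  # project cost in dollars (hypothetical)

    coarse_fp = 100  # function points under one counting convention
    fine_fp = 200    # the same system, counted at a finer granularity

    print(cost / coarse_fp)  # 1000.0 dollars per FP
    print(cost / fine_fp)    # 500.0 dollars per FP: measured "productivity"
                             # doubles with no change in the software delivered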

One school of thought on metrics design holds that a metric communicates the real intention behind the goal, and that people should do exactly what the metric tells them to do. This is a spin-off of test-driven development, where developers are encouraged to write code specifically to pass a test; if that is the wrong code, then they wrote the wrong test. In the metrics design process, gaming is therefore a useful tool for testing metrics and making them more robust, as well as for helping teams articulate their real goals more clearly and effectively.

Very few industry-standard metrics stand up to even moderate gaming.

Balancing metrics

One way to avoid the “be careful what you wish for” trap is to apply a suite of metrics that balance each other out. In software projects, it is advisable to have at least one metric for each of the following:

  • Schedule
  • Size/Complexity
  • Cost
  • Quality

Too much emphasis on any one of these aspects of performance is likely to create an imbalance in the team’s motivations, leading to a dysfunctional project.
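
A minimal sketch of such a suite, assuming one illustrative metric per perspective (the metric names and values are invented):

    # One metric per perspective; the check flags an unbalanced suite
    suite = {
        "schedule": ("milestone slip, days", 3),
        "size/complexity": ("function points delivered", 120),
        "cost": ("effort, person-hours", 950),
        "quality": ("defects found in test", 14),
    }

    required = {"schedule", "size/complexity", "cost", "quality"}
    missing = required - set(suite)
    if missing:
        print("Unbalanced metric suite; missing:", ", ".join(sorted(missing)))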

The Balanced scorecard is a useful tool for managing a suite of metrics that address multiple performance perspectives.
