Software development effort estimation
Software development effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.
State of practice
Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort [1].
Typically, effort estimates are over-optimistic and there is a strong over-confidence in their accuracy. The mean effort overrun seems to be about 30% and does not appear to be decreasing over time. For a review of effort estimation error surveys, see [2]. However, the measurement of estimation error is not unproblematic; see the section Assessing and interpreting the accuracy of effort estimates below. The strong over-confidence in the accuracy of effort estimates is illustrated by the finding that, on average, when software professionals are 90% confident or “almost sure” that a minimum-maximum interval will contain the actual effort, the observed frequency of the interval containing the actual effort is only 60-70% [3].
Currently the term “effort estimate” is used to denote concepts as different as the most likely use of effort (modal value), the effort that corresponds to a probability of 50% of not being exceeded (median), the planned effort, the budgeted effort, and the effort used to propose a bid or price to the client. This is believed to be unfortunate, because communication problems may occur and because the concepts serve different goals [4][5].
History
Software researchers and practitioners have been addressing the problems of effort estimation for software development projects since at least the 1960s; see, e.g., work by Farr [6] and Nelson [7].
Most of the research has focused on the construction of formal software effort estimation models. The early models were typically based on regression analysis or mathematically derived from theories in other domains. Since then, a large number of model-building approaches have been evaluated, such as approaches founded on case-based reasoning, classification and regression trees, simulation, neural networks, Bayesian statistics, lexical analysis of requirement specifications, genetic programming, linear programming, economic production models, soft computing, fuzzy logic modeling, statistical bootstrapping, and combinations of these. Perhaps the most common estimation products today, e.g., the formal estimation models COCOMO and SLIM, have their basis in estimation research conducted in the 1970s and 1980s. Estimation approaches based on functionality-based size measures, e.g., function points, are also based on research conducted in the 1970s and 1980s, but have re-appeared with modified size measures under different labels, such as “use case points” [8], in the 1990s and 2000s.
Estimation approaches
There are many ways of categorizing estimation approaches; see, for example, [9][10]. The top-level categories are the following:
- Expert estimation: The quantification step, i.e., the step where the estimate is produced, is based on judgmental processes.
- Formal estimation model: The quantification step is based on mechanical processes, e.g., the use of a formula derived from historical data.
- Combination-based estimation: The quantification step is based on a judgmental or mechanical combination of estimates from different sources.
Below are examples of estimation approaches within each category.
| Estimation approach | Category | Examples of supporting implementations |
| --- | --- | --- |
| Analogy-based estimation | Formal estimation model | ANGEL |
| WBS-based (bottom-up) estimation | Expert estimation | MS Project, company-specific activity templates |
| Parametric models | Formal estimation model | COCOMO, SLIM (see the sketch below the table) |
| Size-based estimation models | Formal estimation model | Function Point Analysis, Use Case Analysis, User Stories-based estimation in Agile software development |
| Group estimation | Expert estimation | Planning poker, Wideband Delphi |
| Mechanical combination | Combination-based estimation | Average of an analogy-based and a work breakdown structure-based effort estimate |
| Judgmental combination | Combination-based estimation | Expert judgment based on estimates from a parametric model and group estimation |
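As an illustration of the parametric models listed in the table, below is a minimal sketch of Basic COCOMO, assuming its published mode coefficients; a real application would use a model calibrated to the organization's own data (e.g., Intermediate COCOMO or COCOMO II with cost drivers).

```python
# A minimal sketch of a parametric estimation model: Basic COCOMO.
# Effort (person-months) = a * KLOC^b, with the published Basic COCOMO
# coefficients for the three project modes. Illustrative only; real use
# requires calibration to the organization's own historical data.

COCOMO_MODES = {
    "organic":       (2.4, 1.05),  # small teams, familiar environment
    "semi-detached": (3.0, 1.12),  # intermediate size and constraints
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for `kloc` thousand lines of code."""
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b

# E.g., a 32 KLOC organic-mode project:
print(f"{basic_cocomo_effort(32.0):.1f} person-months")  # ~91.3
```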
Selection of estimation approach
The evidence on differences in estimation accuracy between estimation approaches and models suggests that there is no “best approach” and that the relative accuracy of one approach or model compared to another depends strongly on the context [11]. This implies that different organizations benefit from different estimation approaches. Findings, summarized in [12], that may support the selection of an estimation approach based on its expected accuracy include:
- Expert estimation is on average at least as accurate as model-based effort estimation. In particular, situations with unstable relationships, or with important information not included in the model, suggest the use of expert estimation. This assumes, of course, that experts with relevant experience are available.
- Formal estimation models not tailored to a particular organization’s own context may be very inaccurate. Use of the organization’s own historical data is consequently crucial if one cannot be sure that the estimation model’s core relationships (e.g., formula parameters) are based on similar project contexts.
- Formal estimation models may be particularly useful in situations where the model is tailored to the organization’s context (either through use of the organization’s own historical data or because the model is derived from similar projects and contexts), and/or where it is likely that the experts’ estimates will be subject to a strong degree of wishful thinking.
The most robust finding, in many forecasting domains, is that a combination of estimates from independent sources, preferably applying different approaches, will on average improve estimation accuracy [13][14][15].
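This finding can be illustrated with a minimal sketch of mechanical combination, assuming point estimates (in work-hours) from three independent sources; the figures and weights are illustrative only.

```python
# A minimal sketch of mechanical combination: a (weighted) mean of
# independent effort estimates from different sources. Numbers are
# illustrative, not from any published data set.

def combine_estimates(estimates, weights=None):
    """Weighted mean of independent effort estimates (work-hours)."""
    if weights is None:
        weights = [1.0] * len(estimates)  # equal weights by default
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

expert, analogy, parametric = 900.0, 1100.0, 1250.0  # work-hours
print(combine_estimates([expert, analogy, parametric]))             # ~1083.3
print(combine_estimates([expert, analogy, parametric], [2, 1, 1]))  # 1037.5
```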
In addition, other factors, such as ease of understanding and communicating the results of an approach, ease of use of an approach, and cost of introduction of an approach, should be considered in a selection process.
Uncertainty assessment approaches
The uncertainty of an effort estimate can be described through a prediction interval (PI). An effort PI is based on a stated certainty level and contains a minimum and a maximum effort value. For example, a project leader may estimate that the most likely effort of a project is 1000 work-hours and that it is 90% certain that the actual effort will be between 500 and 2000 work-hours. Then, the interval [500, 2000] work-hours is the 90% PI of the effort estimate of 1000 work-hours. Frequently, other terms are used instead of PI, e.g., prediction bounds, prediction limits, interval prediction, prediction region and, unfortunately, confidence interval. An important difference between confidence interval and PI is that PI refers to the uncertainty of an estimate, while confidence interval usually refers to the uncertainty associated with the parameters of an estimation model or distribution, e.g., the uncertainty of the mean value of a distribution of effort values. The confidence level of a PI refers to the expected (or subjective) probability that the real value is within the predicted interval [16].
There are several possible approaches to calculating effort PIs, e.g., formal approaches based on regression or bootstrapping [17], formal or judgmental approaches based on the distribution of previous estimation error [18], and pure expert judgment of minimum-maximum effort for a given level of confidence. Approaches based on the distribution of previous estimation error have been found to systematically lead to more realistic uncertainty assessments than traditional minimum-maximum effort intervals in several studies; see for example [19].
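The historical-error approach mentioned above can be sketched as follows, assuming a record of past actual/estimated effort ratios is available; the nearest-rank percentile rule used here is a simplification of the empirical methods in the cited work.

```python
# A minimal sketch of a prediction interval (PI) derived from the
# empirical distribution of previous estimation error: past
# actual/estimated ratios are applied to a new point estimate.

def percentile(sorted_vals, p):
    """Nearest-rank percentile, with p in [0, 1]."""
    idx = min(int(p * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

def effort_pi(point_estimate, historical_ratios, confidence=0.90):
    """(min, max) effort PI at the given confidence level."""
    ratios = sorted(historical_ratios)
    tail = (1.0 - confidence) / 2.0
    return (point_estimate * percentile(ratios, tail),
            point_estimate * percentile(ratios, 1.0 - tail))

# E.g., hypothetical past ratios and a 1000 work-hour point estimate
# give a 90% PI resembling the [500, 2000] example above:
past_ratios = [0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8, 2.0]
print(effort_pi(1000, past_ratios))  # -> (600.0, 2000.0)
```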
Assessing and interpreting the accuracy of effort estimates
The most common measure of average estimation accuracy is the MMRE (Mean Magnitude of Relative Error), where the MRE of an individual estimate is defined as:

MRE = |actual effort − estimated effort| / actual effort

and the MMRE is the mean MRE over all estimates in a data set.
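A minimal sketch computing the MRE per project and the MMRE over a small, purely illustrative data set of (actual, estimated) effort pairs:

```python
# A minimal sketch of MRE and MMRE. The (actual, estimated) pairs
# below are illustrative work-hour figures, not real project data.

def mre(actual: float, estimated: float) -> float:
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def mmre(projects) -> float:
    """Mean Magnitude of Relative Error over (actual, estimated) pairs."""
    return sum(mre(a, e) for a, e in projects) / len(projects)

projects = [(1200, 1000), (800, 900), (1500, 1000)]
print(f"MMRE = {mmre(projects):.2f}")  # MMRE = 0.21
```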
This measure has been criticized [20][21][22], and there are several alternative measures, such as more symmetric measures [23], the Weighted Mean of Quartiles of relative errors (WMQ) [24], and the Mean Variation from Estimate (MVFE) [25].
A high estimation error cannot automatically be interpreted as an indicator of low estimation ability. Alternative, competing or complementary explanations include poor project cost control, high complexity of the development work, and more delivered functionality than originally estimated. A framework for improved use and interpretation of estimation error measurement is included in [26].
Psychological issues related to effort estimation
Many psychological factors potentially explain the strong tendency towards over-optimistic effort estimates; these need to be dealt with to increase the accuracy of effort estimates. The factors are essential even when using formal estimation models, because much of the input to these models is judgment-based. Factors that have been demonstrated to be important are wishful thinking, anchoring, the planning fallacy and cognitive dissonance. A discussion of these and other factors can be found in work by Jørgensen and Grimstad [27].
See also
- Parametric estimating
- Estimation in software engineering
- Wideband Delphi
- Project management
- Planning poker
- Cost overrun
- COCOMO
- SEER-SEM
- Function points
- PROBE
- PERT
External links
- Special Interest Group on Software Effort Estimation: http://www.forecastingprinciples.com/Software_Estimation/index.html
- General forecasting principles: http://www.forecastingprinciples.com
- Estimation resources: http://www.itprojectestimation.com/estrefs.htm
- Downloadable research papers on effort estimation: http://simula.no/research/engineering/projects/best
- Mike Cohn's article "Estimating With Use Case Points" from Methods & Tools: http://www.methodsandtools.com/archive/archive.php?id=25
- Resources on Software Estimation from Steve McConnell: http://www.construx.com/Page.aspx?nid=297
References
- ^ Jørgensen, M. A Review of Studies on Expert Estimation of Software Development Effort.
- ^ Molokken, K.; Jorgensen, M. A review of software surveys on software effort estimation.
- ^ Jørgensen, M.; Teigen, K.H.; Ribu, K. Better sure than safe? Over-confidence in judgement based software development effort prediction intervals.
- ^ Edwards, J.S.; Moores, T.T. (1994). "A conflict between the use of estimating and planning tools in the management of information systems." European Journal of Information Systems 3(2): 139-147.
- ^ Goodwin, P. (1998). "Enhancing judgmental sales forecasting: The role of laboratory research." In Wright, G.; Goodwin, P. (eds.), Forecasting with Judgment. New York: John Wiley & Sons: 91-112.
- ^ Farr, L.; Nanus, B. Factors that affect the cost of computer programming.
- ^ Nelson, E.A. (1966). Management Handbook for the Estimation of Computer Programming Costs. AD-A648750, Systems Development Corp.
- ^ Anda, B.; Angelvik, E.; Ribu, K. Improving Estimation Practices by Applying Use Case Models.
- ^ Briand, L.C.; Wieczorek, I. (2002). "Resource estimation in software engineering." In Marciniak, J.J. (ed.), Encyclopedia of Software Engineering. New York: John Wiley & Sons: 1160-1196.
- ^ Jørgensen, M.; Shepperd, M. A Systematic Review of Software Development Cost Estimation Studies.
- ^ Shepperd, M.; Kadoda, G. Comparing software prediction techniques using simulation.
- ^ Jørgensen, M. Estimation of Software Development Work Effort: Evidence on Expert Judgment and Formal Models.
- ^ Winkler, R.L. Combining forecasts: A philosophical basis and some current issues.
- ^ Blattberg, R.C.; Hoch, S.J. Database Models and Managerial Intuition: 50% Model + 50% Manager.
- ^ Jørgensen, M. Estimation of Software Development Work Effort: Evidence on Expert Judgment and Formal Models.
- ^ Armstrong, J.S. Principles of Forecasting: A Handbook for Researchers and Practitioners.
- ^ Angelis, L.; Stamelos, I. A simulation tool for efficient analogy based cost estimation.
- ^ Jørgensen, M.; Sjøberg, D.I.K. An effort prediction interval approach based on the empirical distribution of previous estimation accuracy.
- ^ Jørgensen, M. Realism in assessment of effort estimation uncertainty: It matters how you ask.
- ^ Shepperd, M.; Cartwright, M.; Kadoda, G. On Building Prediction Systems for Software Engineers.
- ^ Kitchenham, B.; Pickard, L.M.; MacDonell, S.G.; Shepperd, M. What accuracy statistics really measure.
- ^ Foss, T.; Stensrud, E.; Kitchenham, B.; Myrtveit, I. A Simulation Study of the Model Evaluation Criterion MMRE. IEEE.
- ^ Miyazaki, Y.; Terakado, M.; Ozaki, K.; Nozaki, H. Robust regression for developing software estimation models.
- ^ Lo, B.; Gao, X. Assessing Software Cost Estimation Models: criteria for accuracy, consistency and regression.
- ^ Hughes, R.T.; Cunliffe, A.; Young-Martos, F. Evaluating software development effort model-building techniques for application in a real-time telecommunications environment.
- ^ Grimstad, S.; Jørgensen, M. A Framework for the Analysis of Software Cost Estimation Accuracy.
- ^ Jørgensen, M.; Grimstad, S. How to Avoid Impact from Irrelevant and Misleading Information When Estimating Software Development Effort.