Forecasting

Forecasting is the process of making predictions of the future based on past and present data, most commonly by analysis of trends. A commonplace example might be estimation of some variable of interest at some specified future date. Prediction is a similar, but more general term. Both might refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.

Risk and uncertainty are central to forecasting and prediction; it is generally considered good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to be as accurate as possible.[1]

Categories of forecasting methods

Qualitative vs. quantitative methods

Qualitative forecasting techniques are subjective, based on the opinion and judgment of consumers and experts; they are appropriate when past data are not available. They are usually applied to intermediate- or long-range decisions. Examples of qualitative forecasting methods are informed opinion and judgment, the Delphi method, market research, and historical life-cycle analogy.

Quantitative forecasting models are used to forecast future data as a function of past data. They are appropriate to use when past numerical data are available and when it is reasonable to assume that some of the patterns in the data will continue into the future. These methods are usually applied to short- or intermediate-range decisions. Examples of quantitative forecasting methods are last period demand, simple and weighted N-period moving averages, simple exponential smoothing, Poisson process model-based forecasting,[2] and multiplicative seasonal indexes. Previous research shows that different methods may lead to different levels of forecasting accuracy. For example, the GMDH neural network was found to have better forecasting performance than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA, and the back-propagation neural network.[3]
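
As a rough illustration of two of the quantitative methods named above, the following Python sketch computes an N-period moving-average forecast and a simple exponential smoothing forecast; the demand figures and the smoothing constant are made up for the example.

```python
# Illustrative demand history (made-up figures).
demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]

def moving_average_forecast(series, n):
    """Forecast the next period as the mean of the last n observations."""
    window = series[-n:]
    return sum(window) / len(window)

def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast from simple exponential smoothing
    with smoothing constant alpha (0 < alpha <= 1)."""
    level = series[0]                      # initialise the level at the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

print(moving_average_forecast(demand, n=3))             # mean of the last 3 periods
print(simple_exponential_smoothing(demand, alpha=0.3))  # smoothed level as next forecast
```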

Average approach

In this approach, the predictions of all future values are equal to the mean of the past data. This approach can be used with any sort of data where past data is available. In time series notation:

$\hat{y}_{T+h|T} = \bar{y} = \frac{y_1 + \dots + y_T}{T}$ [4]

where $y_1, \dots, y_T$ is the past data.

Although the time series notation has been used here, the average approach can also be used for cross-sectional data (when we are predicting unobserved values; values that are not included in the data set). Then, the prediction for unobserved values is the average of the observed values.
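
A minimal Python sketch of the average approach, using an illustrative series; every forecast is simply the historical mean.

```python
# Illustrative past data.
history = [24.0, 26.5, 23.8, 25.1, 27.2, 26.0]

def average_forecast(series, horizon):
    """Return `horizon` forecasts, all equal to the mean of the past data."""
    mean = sum(series) / len(series)
    return [mean] * horizon

print(average_forecast(history, horizon=3))
```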

Naïve approach

Naïve forecasts are the most cost-effective forecasting model, and provide a benchmark against which more sophisticated models can be compared. This forecasting method is only suitable for time series data.[4] Using the naïve approach, forecasts are produced that are equal to the last observed value. This method works quite well for economic and financial time series, which often have patterns that are difficult to reliably and accurately predict.[4] If the time series is believed to have seasonality, the seasonal naïve approach may be more appropriate, where the forecasts are equal to the value from the last season. The naïve method may also use a drift, which takes the last observation plus the average change from the first observation to the last observation.[4] In time series notation:

$\hat{y}_{T+h|T} = y_T$
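
A minimal Python sketch of the naïve approach, using an illustrative series; every forecast equals the last observed value.

```python
# Illustrative past data.
history = [101.2, 99.8, 103.4, 102.7, 104.1]

def naive_forecast(series, horizon):
    """Return `horizon` forecasts, all equal to the last observed value."""
    return [series[-1]] * horizon

print(naive_forecast(history, horizon=4))   # [104.1, 104.1, 104.1, 104.1]
```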

Drift method

A variation on the naïve method is to allow the forecasts to increase or decrease over time, where the amount of change over time (called the drift) is set to be the average change seen in the historical data. So the forecast for time $T+h$ is given by

$\hat{y}_{T+h|T} = y_T + h\left(\frac{y_T - y_1}{T-1}\right)$ [4]

This is equivalent to drawing a line between the first and last observation, and extrapolating it into the future.
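
A minimal Python sketch of the drift method, using an illustrative series; each forecast extends the line joining the first and last observations.

```python
# Illustrative past data.
history = [50.0, 52.5, 51.0, 55.0, 57.5, 60.0]

def drift_forecast(series, horizon):
    """Forecast h steps ahead as the last value plus h times the average
    historical change, (y_T - y_1) / (T - 1)."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + h * slope for h in range(1, horizon + 1)]

print(drift_forecast(history, horizon=3))   # [62.0, 64.0, 66.0]
```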

Seasonal naïve approach

The seasonal naïve method accounts for seasonality by setting each prediction to be equal to the last observed value of the same season. For example, the prediction value for all subsequent months of April will be equal to the previous value observed for April. The forecast for time $T+h$ is:[4]

$\hat{y}_{T+h|T} = y_{T+h-km}$

where $m$ is the seasonal period and $k$ is the smallest integer greater than $(h-1)/m$.

The seasonal naïve method is particularly useful for data that has a very high level of seasonality.
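
A minimal Python sketch of the seasonal naïve approach for quarterly data (seasonal period m = 4); the figures are illustrative.

```python
# Two years of illustrative quarterly data.
history = [10, 25, 18, 40,    # year 1, quarters 1-4
           12, 27, 20, 44]    # year 2, quarters 1-4
m = 4                         # seasonal period

def seasonal_naive_forecast(series, m, horizon):
    """Forecast each future period with the most recent value
    observed in the same season."""
    return [series[len(series) - m + ((h - 1) % m)] for h in range(1, horizon + 1)]

print(seasonal_naive_forecast(history, m, horizon=4))   # [12, 27, 20, 44]
```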

Time series methods

Time series methods use historical data as the basis of estimating future outcomes.

Examples include the Box–Jenkins approach and seasonal ARIMA (SARIMA) models.

Causal / econometric forecasting methods

Some forecasting methods try to identify the underlying factors that might influence the variable that is being forecast. For example, including information about climate patterns might improve the ability of a model to predict umbrella sales. Forecasting models often take account of regular seasonal variations. In addition to climate, such variations can also be due to holidays and customs: for example, one might predict that sales of college football apparel will be higher during the football season than during the off season.[5]

Several informal methods used in causal forecasting do not employ strict algorithms, but instead use the judgment of the forecaster. Some forecasts take account of past relationships between variables: if one variable has, for example, been approximately linearly related to another for a long period of time, it may be appropriate to extrapolate such a relationship into the future, without necessarily understanding the reasons for the relationship.
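
As a rough illustration of extrapolating such a relationship, the following Python sketch fits a least-squares line relating an illustrative explanatory variable to the variable being forecast and extrapolates it to a planned future value of the driver; it assumes NumPy is available.

```python
import numpy as np

# Illustrative explanatory variable (e.g. advertising spend) and the
# variable being forecast (e.g. sales); figures are made up.
ad_spend = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
sales    = np.array([10.2, 12.1, 13.9, 16.2, 18.0])

# Fit a least-squares straight line relating the two variables.
slope, intercept = np.polyfit(ad_spend, sales, deg=1)

# Extrapolate the relationship to a planned future value of the driver.
planned_spend = 3.5
forecast_sales = slope * planned_spend + intercept
print(round(float(forecast_sales), 2))
```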

Causal methods include:

Quantitative forecasting models are often judged against each other by comparing their in-sample or out-of-sample mean square error, although some researchers have advised against this.[7] Different forecasting approaches have different levels of accuracy. For example, GMDH was found to have higher forecasting accuracy than traditional ARIMA.[8]

Judgmental methods

Judgmental forecasting methods incorporate intuitive judgement, opinions and subjective probability estimates. Judgmental forecasting is used in cases where there is lack of historical data or during completely new and unique market conditions.[9]

Judgmental methods include:

Artificial intelligence methods

Often these are done today by specialized programs loosely labeled as artificial intelligence.

Other methods

Forecasting accuracy

The forecast error (also known as a residual) is the difference between the actual value and the forecast value for the corresponding period:

$E_t = Y_t - F_t$

where $E_t$ is the forecast error at period $t$, $Y_t$ is the actual value at period $t$, and $F_t$ is the forecast for period $t$.

A good forecasting method will yield residuals that are uncorrelated and have zero mean. If there are correlations between residual values, then there is information left in the residuals which should be used in computing forecasts. If the residuals have a mean other than zero, then the forecasts are biased.
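
A minimal Python sketch of these residual checks, using illustrative actuals and forecasts: it computes the errors, their mean (which should be near zero for unbiased forecasts), and their lag-1 autocorrelation (which should be near zero if no information is left in the residuals).

```python
# Illustrative actual values and the forecasts made for them.
actuals   = [120, 125, 130, 128, 135, 140]
forecasts = [118, 126, 129, 131, 133, 138]

errors = [y - f for y, f in zip(actuals, forecasts)]   # E_t = Y_t - F_t

mean_error = sum(errors) / len(errors)                 # should be close to zero (unbiased)

# Lag-1 autocorrelation of the residuals; values far from zero suggest
# information is left in the residuals.
centred = [e - mean_error for e in errors]
lag1 = sum(a * b for a, b in zip(centred[1:], centred[:-1])) / sum(c * c for c in centred)

print(mean_error, lag1)
```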

Measures of aggregate error:

Scale-dependent errors: The forecast error, $E_t$, is on the same scale as the data; as such, these accuracy measures are scale-dependent and cannot be used to make comparisons between series on different scales.

Mean absolute error (MAE) or mean absolute deviation (MAD): $\mathrm{MAE} = \operatorname{mean}(|E_t|)$
Mean squared error (MSE) or mean squared prediction error (MSPE): $\mathrm{MSE} = \operatorname{mean}(E_t^2)$
Root mean squared error (RMSE): $\mathrm{RMSE} = \sqrt{\operatorname{mean}(E_t^2)}$
Average of errors: $\bar{E} = \operatorname{mean}(E_t)$

Percentage errors: These are more frequently used to compare forecast performance between different data sets because they are scale-independent. However, they have the disadvantage of being infinite or undefined if $Y_t$ is close to or equal to zero. The percentage error is $p_t = 100 E_t / Y_t$.

Mean absolute percentage error (MAPE) or mean absolute percentage deviation (MAPD): $\mathrm{MAPE} = \operatorname{mean}(|p_t|)$

Scaled errors: Hyndman and Koehler (2006) proposed using scaled errors as an alternative to percentage errors.

Mean absolute scaled error (MASE): $\mathrm{MASE} = \operatorname{mean}(|q_t|)$, where $q_t = E_t \Big/ \frac{1}{T-m}\sum_{i=m+1}^{T}|Y_i - Y_{i-m}|$ and $m$ is the seasonal period (or 1 if the series is non-seasonal).

Other measures:

Forecast skill (SS)
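
A minimal Python sketch computing several of the measures listed above (MAE, RMSE, MAPE and MASE) for an illustrative pair of actual and forecast series; for MASE the scaling term is the in-sample mean absolute naïve error with m = 1.

```python
from math import sqrt

# Illustrative actual and forecast values.
actuals   = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0]
forecasts = [110.0, 120.0, 128.0, 131.0, 123.0, 132.0]

errors = [y - f for y, f in zip(actuals, forecasts)]   # E_t = Y_t - F_t

mae  = sum(abs(e) for e in errors) / len(errors)
mse  = sum(e * e for e in errors) / len(errors)
rmse = sqrt(mse)
mape = 100 * sum(abs(e / y) for e, y in zip(errors, actuals)) / len(errors)

# MASE: scale the mean absolute error by the in-sample mean absolute
# one-step naive error of the actual series (m = 1, non-seasonal).
m = 1
naive_scale = sum(abs(actuals[i] - actuals[i - m]) for i in range(m, len(actuals))) / (len(actuals) - m)
mase = mae / naive_scale

print(mae, rmse, mape, mase)
```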

Business forecasters and practitioners sometimes use different terminology in the industry. They refer to the PMAD as the MAPE, although they compute this as a volume weighted MAPE.[10] For more information see Calculating demand forecast accuracy.

When comparing the accuracy of different forecasting methods on a specific data set, the measures of aggregate error are compared with each other and the method that yields the lowest error is preferred.

Training and test sets

It is important to evaluate forecast accuracy using genuine forecasts. That is, it is invalid to look at how well a model fits the historical data; the accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model. When choosing models, it is common to use a portion of the available data for fitting and to use the rest of the data for testing the model.[11]
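
A minimal Python sketch of such a split, using an illustrative series and the naïve method as the model fitted on the training portion.

```python
# Illustrative series; hold out the final portion as a test set.
series = [10, 12, 13, 12, 15, 16, 18, 17, 19, 21, 22, 24]

split = int(len(series) * 0.8)        # e.g. keep roughly the last 20% for testing
train, test = series[:split], series[split:]

# Use the naive method fitted on the training set as the "model".
forecasts = [train[-1]] * len(test)

mae = sum(abs(y - f) for y, f in zip(test, forecasts)) / len(test)
print(train, test, mae)
```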

Cross-validation

Cross-validation is a more sophisticated version of the training/test set approach.

For cross-sectional data, cross-validation works as follows:

  1. Select observation i for the test set, and use the remaining observations in the training set. Compute the error on the test observation.
  2. Repeat the above step for i = 1,2,..., N where N is the total number of observations.
  3. Compute the forecast accuracy measures based on the errors obtained.

This is a much more efficient use of the available data, as only one observation is omitted at each step.
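
A minimal Python sketch of the cross-sectional procedure above, using the mean of the training observations as a stand-in model and illustrative data.

```python
# Illustrative cross-sectional observations.
observations = [3.1, 2.8, 3.6, 3.0, 2.9, 3.4]

errors = []
for i in range(len(observations)):
    test = observations[i]                               # step 1: hold out observation i
    train = observations[:i] + observations[i + 1:]      # remaining observations
    prediction = sum(train) / len(train)                  # fit the simple model
    errors.append(test - prediction)                      # error on the held-out point

mae = sum(abs(e) for e in errors) / len(errors)           # step 3: accuracy measure
print(mae)
```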

For time series data, the training set can only include observations prior to the test set; therefore no future observations can be used in constructing the forecast. Suppose k observations are needed to produce a reliable forecast; then the process works as follows:

  1. Select the observation k + i for the test set, and use the observations at times 1, 2, ..., k + i − 1 to estimate the forecasting model. Compute the error on the forecast for k + i.
  2. Repeat the above step for i = 1, 2, ..., T − k, where T is the total number of observations.
  3. Compute the forecast accuracy measures over all errors.

This procedure is sometimes known as a "rolling forecasting origin" because the "origin" (k + i − 1) on which the forecast is based rolls forward in time.[12]
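
A minimal Python sketch of the rolling-forecasting-origin procedure above, with the naïve method standing in for the forecasting model and illustrative data.

```python
# Illustrative series; k observations are needed before forecasting.
series = [10, 12, 13, 12, 15, 16, 18, 17, 19, 21]
k = 4

errors = []
for i in range(1, len(series) - k + 1):
    train = series[:k + i - 1]               # observations 1 .. k+i-1
    actual = series[k + i - 1]               # observation k+i
    forecast = train[-1]                     # naive one-step forecast
    errors.append(actual - forecast)

mae = sum(abs(e) for e in errors) / len(errors)
print(mae)
```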

Limitations of Errors

The two most popular measures of accuracy that incorporate the forecast error are the mean absolute error (MAE) and the root mean squared error (RMSE). Both measures are scale-dependent, that is, they are on the same scale as the original data, so they cannot be used to compare models across series of differing scales.

Percentage errors are simply forecast errors converted into percentages and are given by $p_t = 100 E_t / Y_t$. A common accuracy measure that utilizes this is the mean absolute percentage error (MAPE). This allows for comparison between data on different scales. However, percentage errors are not meaningful when $Y_t$ is close to or equal to zero, which results in extreme values or the measure simply being undefined.[13] Scaled errors are a helpful alternative to percentage errors when comparing between different scales. They do not have the shortfall of giving unhelpful values if $Y_t$ is close to or equal to zero.
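
A minimal Python sketch of this limitation, using illustrative figures: an actual value near zero inflates the MAPE, while the corresponding scaled error (MASE) remains well behaved.

```python
# Illustrative data in which one actual value is close to zero.
actuals   = [0.02, 5.0, 4.0, 6.0]
forecasts = [1.00, 4.5, 4.2, 5.5]

errors = [y - f for y, f in zip(actuals, forecasts)]

mape = 100 * sum(abs(e / y) for e, y in zip(errors, actuals)) / len(errors)

# Scale the errors by the in-sample mean absolute naive error instead.
naive_scale = sum(abs(actuals[i] - actuals[i - 1]) for i in range(1, len(actuals))) / (len(actuals) - 1)
mase = sum(abs(e) for e in errors) / len(errors) / naive_scale

print(mape, mase)   # MAPE is dominated by the near-zero actual; MASE is not
```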


Seasonality and cyclic behaviour

Seasonality

Seasonality is a characteristic of a time series in which the data experiences regular and predictable changes which recur every calendar year. Any predictable change or pattern in a time series that recurs or repeats over a one-year period can be said to be seasonal. It is common in many situations, such as a grocery store[14] or even a medical examiner's office,[15] that the demand depends on the day of the week. In such situations, the forecasting procedure calculates the seasonal index of the "season" (seven seasons, one for each day), which is the ratio of the average demand of that season (calculated by moving average or exponential smoothing using historical data corresponding only to that season) to the average demand across all seasons. An index higher than 1 indicates that demand is higher than average; an index less than 1 indicates that demand is less than average.
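
A minimal Python sketch of the day-of-week seasonal index described above, using two weeks of illustrative daily demand; the index for each day is its average demand divided by the overall average.

```python
# Two weeks of illustrative daily demand, starting on a Monday.
demand = [80, 95, 90, 100, 130, 160, 70,     # week 1, Mon..Sun
          85, 90, 95, 105, 125, 170, 75]     # week 2, Mon..Sun

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

overall_average = sum(demand) / len(demand)

seasonal_index = {}
for d, day in enumerate(days):
    day_values = demand[d::7]                            # all observations for that day
    day_average = sum(day_values) / len(day_values)
    seasonal_index[day] = day_average / overall_average  # >1 means above-average demand

print(seasonal_index)
```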

Cyclic behaviour

The cyclic behaviour of data takes place when there are regular fluctuations in the data which usually last for an interval of at least two years, and when the length of the current cycle cannot be predetermined. Cyclic behaviour is not to be confused with seasonal behaviour. Seasonal fluctuations follow a consistent pattern each year, so the period is always known. As an example, during the Christmas period, inventories of stores tend to increase in order to prepare for Christmas shoppers. As an example of cyclic behaviour, the population of a particular natural ecosystem will exhibit cyclic behaviour when the population increases as its natural food source decreases, and once the population is low, the food source will recover and the population will start to increase again. Cyclic data cannot be accounted for using ordinary seasonal adjustment since it is not of fixed period.

Applications

Forecasting has applications in a wide range of fields where estimates of future conditions are useful. Not everything can be forecast reliably. If the factors that relate to what is being forecast are known and well understood, and there is a significant amount of data that can be used, very reliable forecasts can often be obtained. If this is not the case, or if the actual outcome is affected by the forecasts, the reliability of the forecasts can be significantly lower.[16]

Climate change and increasing energy prices have led to the use of Egain Forecasting for buildings. This attempts to reduce the energy needed to heat the building, thus reducing the emission of greenhouse gases. Forecasting is used in Customer Demand Planning in everyday business for manufacturing and distribution companies.

While the veracity of predictions for actual stock returns is disputed through reference to the efficient-market hypothesis, forecasting of broad economic trends is common. Such analysis is provided by both non-profit groups and for-profit private institutions (including brokerage houses[17] and consulting companies[18]).

Forecasting foreign exchange movements is typically achieved through a combination of chart and fundamental analysis. An essential difference between chart analysis and fundamental economic analysis is that chartists study only the price action of a market, whereas fundamentalists attempt to look to the reasons behind the action.[19] Financial institutions assimilate the evidence provided by their fundamental and chartist researchers into one note to provide a final projection on the currency in question.[20]

Forecasting has also been used to predict the development of conflict situations.[21] Forecasters perform research that uses empirical results to gauge the effectiveness of certain forecasting models.[22] However, research has shown that there is little difference between the accuracy of forecasts made by experts knowledgeable about the conflict situation and those made by individuals who knew much less.[23]

Similarly, experts in some studies argue that role thinking does not contribute to the accuracy of the forecast.[24] The discipline of demand planning, also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process. An important, albeit often ignored, aspect of forecasting is the relationship it holds with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future should look like.[25][26] There is no single right forecasting method to use. Selection of a method should be based on your objectives and your conditions (data, etc.).[27] A good place to find a method is a selection tree; an example can be found in the references.[28] Forecasting has application in many situations.

Limitations

Limitations pose barriers beyond which forecasting methods cannot reliably predict. There are many events and values that cannot be forecast reliably. Events such as the roll of a die or the results of the lottery cannot be forecast because they are random events and there is no significant relationship in the data. When the factors that lead to what is being forecast are not known or well understood, such as in stock and foreign exchange markets, forecasts are often inaccurate or wrong, as there is not enough data about everything that affects these markets for the forecasts to be reliable; in addition, the outcomes of the forecasts of these markets change the behavior of those involved in the market, further reducing forecast accuracy.[16]

Performance limits of fluid dynamics equations

As proposed by Edward Lorenz in 1963, long-range weather forecasts, those made at a range of two weeks or more, cannot definitively predict the state of the atmosphere, owing to the chaotic nature of the fluid dynamics equations involved. Extremely small errors in the initial input, such as temperatures and winds, within numerical models double every five days.[30]

Complexity introduced by the technological singularity

The technological singularity is the hypothetical emergence of superintelligence through technological means.[31] Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.

Ray Kurzweil predicts the singularity will occur around 2045 while Vernor Vinge predicts it will happen some time before 2030.


References

  1. Scott Armstrong; Fred Collopy; Andreas Graefe; Kesten C. Green. "Answers to Frequently Asked Questions". Retrieved May 15, 2013.
  2. Mahmud, Tahmida; Hasan, Mahmudul; Chakraborty, Anirban; Roy-Chowdhury, Amit (19 August 2016). A poisson process model for activity forecasting. 2016 IEEE International Conference on Image Processing (ICIP). IEEE.
  3. Li, Rita Yi Man, Fong, S., Chong, W.S. (2017) Forecasting the REITs and stock indices: Group Method of Data Handling Neural Network approach, Pacific Rim Property Research Journal, 23(2), 1-38
  4. https://www.otexts.org/fpp/2/3
  5. Nahmias, Steven (2009). Production and Operations Analysis.
  6. Ellis, Kimberly (2008). Production Planning and Inventory Control Virginia Tech. McGraw Hill. ISBN 978-0-390-87106-0.
  7. J. Scott Armstrong and Fred Collopy (1992). "Error Measures For Generalizing About Forecasting Methods: Empirical Comparisons" (PDF). International Journal of Forecasting. 8: 69–80. doi:10.1016/0169-2070(92)90008-w.
  8. Li, Rita Yi Man, Fong, S., Chong, W.S. (2017) Forecasting the REITs and stock indices: Group Method of Data Handling Neural Network approach, Pacific Rim Property Research Journal, 23(2), 1-38
  9. https://www.otexts.org/fpp/3/1
  10. http://www.forecastingblog.com/?p=134
  11. "2.5 Evaluating forecast accuracy | OTexts". www.otexts.org. Retrieved 2016-05-14.
  12. "2.5 Evaluating forecast accuracy | OTexts". www.otexts.org. Retrieved 2016-05-17.
  13. https://www.otexts.org/fpp/2/5
  14. Erhun, F.; Tayur, S. (2003). "Enterprise-Wide Optimization of Total Landed Cost at a Grocery Retailer". Operations Research. 51 (3): 343. doi:10.1287/opre.51.3.343.14953.
  15. Omalu, B. I.; Shakir, A. M.; Lindner, J. L.; Tayur, S. R. (2007). "Forecasting as an Operations Management Tool in a Medical Examiner's Office". Journal of Health Management. 9: 75. doi:10.1177/097206340700900105.
  16. https://www.otexts.org/fpp/1/1
  17. Fidelity. "2015 Stock Market Outlook", a sample outlook report by a brokerage house.
  18. McKinsey Insights & Publications. "Insights & Publications".
  19. Helen Allen; Mark P. Taylor (1990). "Charts, Noise and Fundamentals in the London Foreign Exchange Market". JSTOR 2234183.
  20. Pound Sterling Live. "Euro Forecast from Institutional Researchers", A list of collated exchange rate forecasts encompassing technical and fundamental analysis in the foreign exchange market.
  21. T. Chadefaux (2014). "Early warning signals for war in the news". Journal of Peace Research, 51(1), 5-18
  22. J. Scott Armstrong; Kesten C. Green; Andreas Graefe (2010). "Answers to Frequently Asked Questions" (PDF).
  23. Kesten C. Greene; J. Scott Armstrong (2007). "The Ombudsman: Value of Expertise for Forecasting Decisions in Conflicts" (PDF). Interfaces. INFORMS. 0: 1–12.
  24. Kesten C. Green; J. Scott Armstrong (1975). "Role thinking: Standing in other people's shoes to forecast decisions in conflicts" (PDF). 39: 111–116.
  25. "FAQ". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
  26. Greene, Kesten C.; Armstrong, J. Scott. "Structured analogies for forecasting" (PDF). University of Pennsylvania.
  27. "FAQ". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
  28. "Selection Tree". Forecastingprinciples.com. 1998-02-14. Retrieved 2012-08-28.
  29. J. Scott Armstrong (1983). "Relative Accuracy of Judgmental and Extrapolative Methods in Forecasting Annual Earnings" (PDF). Journal of Forecasting. 2: 437–447. doi:10.1002/for.3980020411.
  30. Cox, John D. (2002). Storm Watchers. John Wiley & Sons, Inc. pp. 222–224. ISBN 0-471-38108-X.
  31. Superintelligence. Answer to the 2009 EDGE QUESTION: "WHAT WILL CHANGE EVERYTHING?": http://www.nickbostrom.com/views/superintelligence.pdf