According to a later paper by the authors, the following were the main conclusions of the M-Competition:[1]

1. Statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.
2. The relative ranking of the performance of the various methods varies according to the accuracy measure being used.
3. The accuracy of a combination of various methods outperforms, on average, the individual methods being combined, and does well in comparison with other methods.
4. The accuracy of the various methods depends on the length of the forecasting horizon involved.

The findings of the study have since been verified and replicated by other researchers using new methods.[7]
Newbold (1983) was critical of the M-competition, and argued against the general idea of using a single competition to attempt to settle the complex issue.
The M2-Competition was organized in collaboration with four companies, included six macroeconomic series, and was conducted on a real-time basis.[13]
Fildes and Makridakis (1995) argue that despite the evidence produced by these competitions, the implications continued to be ignored by theoretical statisticians.
To obtain precise and compelling answers, the M4 Competition used 100,000 real-life series and incorporated all major forecasting methods, including those based on artificial intelligence and machine learning (ML) as well as traditional statistical ones.
In his blog, Rob J. Hyndman said about M4: "The 'M' competitions organized by Spyros Makridakis have had an enormous influence on the field of forecasting."[17]
"[17] Below is the number of time series based on the time interval and the domain: In order to ensure that enough data are available to develop an accurate forecasting model, minimum thresholds were set for the number of observations: 13 for yearly, 16 for quarterly, 42 for monthly, 80 for weekly, 93 for daily and 700 for hourly series.
M4 was an open competition, and its most important objective, the same as that of the previous three M Competitions, was "to learn to improve forecasting accuracy and advance the field as much as possible".
The data for the M5 Competition was provided by Walmart and consisted of around 42,000 hierarchical daily time series, starting at the level of individual SKUs and ending with the total demand of large geographical areas.
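To make the hierarchical structure concrete, the following sketch, using pandas with invented SKU and state labels rather than the actual Walmart data, aggregates bottom-level SKU series upward to a total, in the bottom-up fashion described above:

```python
import pandas as pd

# Toy daily unit-sales data at the SKU level; all labels and values are invented.
sales = pd.DataFrame({
    "date":  pd.to_datetime(["2016-01-01"] * 4 + ["2016-01-02"] * 4),
    "state": ["CA", "CA", "TX", "TX"] * 2,
    "sku":   ["A1", "A2", "B1", "B2"] * 2,
    "units": [3, 5, 2, 7, 4, 6, 1, 8],
})

# Bottom level: one daily series per SKU.
by_sku = sales.pivot_table(index="date", columns="sku", values="units", aggfunc="sum")

# Intermediate level: SKUs aggregated within each geographical area.
by_state = sales.groupby(["date", "state"])["units"].sum().unstack()

# Top level: total demand across the whole hierarchy.
total = sales.groupby("date")["units"].sum()

print(by_sku)
print(by_state)
print(total)
```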
"[19] The LightGBM model, as well as deep neural networks, featured prominently in top submissions.
It was also noted that many ANN-based techniques fared considerably worse than simple forecasting methods, despite their greater theoretical potential for good performance.[20]
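For reference, a "simple forecasting method" of the kind these comparisons favoured can be as short as the seasonal naive benchmark sketched below, scored here with symmetric MAPE, an accuracy measure used in the later M competitions (the toy series and values are illustrative only):

```python
import numpy as np

def seasonal_naive(y, season_length, horizon):
    """Simple benchmark: repeat the last observed seasonal cycle over the horizon."""
    last_cycle = y[-season_length:]
    return np.resize(last_cycle, horizon)  # tiles the cycle to the desired length

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    return 100 * np.mean(2 * np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))

# Toy monthly series with period 12; the pattern repeats exactly.
y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118] * 4, dtype=float)
history, actual = y[:-12], y[-12:]
forecast = seasonal_naive(history, season_length=12, horizon=12)
print(smape(actual, forecast))  # 0.0 here, since the toy series repeats exactly
```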
Nassim Nicholas Taleb, in his book The Black Swan, references the Makridakis Competitions as follows: "The most interesting test of how academic methods fare in the real world was provided by Spyros Makridakis, who spent part of his career managing competitions between forecasters who practice a 'scientific method' called econometrics—an approach that combines economic theory with statistical measurements.
Simply put, he made people forecast in real life and then he judged their accuracy.
Makridakis and Hibon reached the sad conclusion that 'statistically sophisticated and complex methods do not necessarily provide more accurate forecasts than simpler ones.'"