I know about some existing methods like Moving Average, Exponential Smoothing, Multiple Regression, etc. What about the most recent, advanced, and efficient forecasting technique (if such a thing exists)?
Dear Nabil, it is impossible to have a forecasting method that outperforms all other models in all cases. Usually we have methods designed for particular features of the problem; for example, SARIMA for seasonal integrated time series. You need to think about the particular features of your specific problem and then look for an adequate method. When no single method matches all the features of your prediction problem, you should design a hybrid forecasting methodology.
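For illustration, here is a minimal sketch of fitting a SARIMA model with Python's statsmodels; the synthetic monthly data and the (1,1,1)(1,1,1,12) orders are assumptions for the example, not recommendations for any particular problem:

```python
# Minimal SARIMA sketch with statsmodels; orders are illustrative assumptions.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
# Synthetic monthly series with a trend and annual seasonality.
t = np.arange(120)
y = 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=t.size)

# SARIMA(1,1,1)(1,1,1,12): one regular and one seasonal difference.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))  # forecast 12 months ahead
```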
I agree with Juan. Selecting a forecasting procedure is not an easy task. I would recommend having a look at the book "Forecasting: Principles and Practice" by Hyndman and Athanasopoulos,
http://otexts.com/fpp/
and also at the M-Competition results published by Makridakis and Hibon.
I want to bring to your attention the usefulness of Genetic Programming (GP). It has been used a lot recently in multiple fields (even in finance); check the work of Prof. Edward Tsang (e.g., http://repec.org/sce2006/up.13879.1141401469.pdf).
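As a hedged sketch of what GP-based forecasting can look like in practice, here is symbolic regression on lagged values of a series using the third-party gplearn library (my choice of library and of hyperparameters are assumptions; any GP symbolic-regression package would do):

```python
# GP symbolic regression on lagged values; library and settings are assumptions.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
y = np.sin(np.arange(200) / 5.0) + rng.normal(scale=0.1, size=200)  # toy series

lags = 4
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])  # lagged features
target = y[lags:]

gp = SymbolicRegressor(population_size=500, generations=20, random_state=0)
gp.fit(X[:-20], target[:-20])   # train on all but the last 20 points
print(gp.predict(X[-20:]))      # out-of-sample one-step predictions
print(gp._program)              # the evolved symbolic expression
```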
In general, a good answer to this problem needs to consider multiple models. In fact, every single model can be biased, and combining results from multiple models can be very useful; that is a well-established result in the forecasting literature. Various strategies can be applied to obtain the best combination. A relevant problem is the weight to assign to the forecast obtained from each method. One proposal is to estimate the weights using the Granger-Ramanathan regression (see the sketch after this post). In any case, on the "ForPrin" (Forecasting Principles) site,
I can suggest the survey by Armstrong, "Combining Forecasts", in the "Combining" section.
The complete reference: J. Scott Armstrong (ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners, Norwell, MA: Kluwer Academic Publishers, 2001.
This survey contains an updated list of works that use this approach, so there is relevant and interesting empirical evidence for the method.
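To make the Granger-Ramanathan idea concrete: the combination weights are simply the coefficients of an OLS regression of the realized values on the individual forecasts. A minimal sketch with synthetic data (the forecasts f1, f2 here are fabricated stand-ins for competing methods):

```python
# Granger-Ramanathan forecast combination: OLS of actuals on the forecasts.
import numpy as np

rng = np.random.default_rng(2)
actual = rng.normal(size=100)
f1 = actual + rng.normal(scale=0.5, size=100)  # forecast from method 1
f2 = actual + rng.normal(scale=0.8, size=100)  # forecast from method 2

# Regress the actuals on a constant and the two forecasts.
X = np.column_stack([np.ones_like(actual), f1, f2])
weights, *_ = np.linalg.lstsq(X, actual, rcond=None)
print("intercept and weights:", weights)

# The combined forecast is the weighted sum of the individual forecasts.
combined = weights[0] + weights[1] * f1 + weights[2] * f2
```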
The answer to this question may turn on the definition of advanced! If by advanced you mean best, then this question cannot be answered, because forecasting is very hard. First, what is being forecast and over what time horizon?

As an applied economist in the financial markets, I see essentially two types of forecasts. The first is the one-period-ahead forecast of the data: last Friday's employment report surprised with a mere 74,000 increase in payrolls, against an average forecast of 197,000 and a range of 100,000 to 250,000; no forecasting technique would have produced such a low number. We are trying to forecast numbers that haven't even been finalized, and we usually estimate our models on 'final' data. The second type is forecasting macro variables like GDP or inflation over two, three, or four years. This is being done in an environment where we have no experience with the current policy tools (QE, forward guidance) and their interaction with the economy.

So I wouldn't think technique, but forecasting process. The life of one-step-ahead forecasters of the money-market-economist type involves accumulating all kinds of knowledge. How are the data put together? What has been the pattern of revisions recently (when the economy is improving, data are often revised higher)? "Have I exploited the data efficiently?" is where technique comes in, but I would say that it accounts for 5% of the forecasting process.
As the range of forecasts for Friday's jobs number shows! My point is that there is a tendency to fall in love with sophisticated techniques when we might be better served by more spade work.
We all agree that there is no definitive answer to this question: the chosen method must be appropriate to the data and applied in accordance with its assumptions.
To explore this topic, do you know of reference papers comparing different techniques on the same types of data?
Forecasting is partly technical and partly an art. You need to read a lot of publications on the future movements of the variables that you want to forecast. For instance, if you want to forecast the GDP of an economy, you need to read a plethora of research articles as well as, say, World Bank publications discussing the future geopolitical changes in the variables that may affect that economy's GDP. From these publications you may get a benchmark future GDP value or growth rate, based on assumptions that may hold true during the forecast horizon, though things may change in ways you can never know.

ARMA, VAR, and VECM models are used in forecasting time series variables, but we need to remember that most macro variables are non-stationary, which makes straightforward estimation 'spurious'. Thus, we need to test whether the non-stationary variables in question are cointegrated. We also need to test whether there are variables that 'Granger cause' GDP. For instance, if the variables are found to be non-stationary but 'Granger cause' GDP and are also found to be cointegrated, we may estimate the VECM, normalize the long-run relationship on the GDP variable, check whether the other variables obey plausible sign restrictions, and perform the LR test to check whether the restrictions we impose on the long-run coefficients and short-run adjustment parameters are binding. If the parameter restrictions are binding, we may solve the VECM model and make out-of-sample forecasts for GDP, after checking the 'RMSE' and 'Theil inequality' criteria for the within-sample forecasts.
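A hedged sketch of the core of this workflow, using statsmodels on synthetic stand-ins for GDP and a driver variable (unit-root test, Johansen cointegration test, VECM fit, out-of-sample forecast; the lag choices are illustrative assumptions):

```python
# Unit-root test -> Johansen cointegration test -> VECM fit -> forecast.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(3)
# Two cointegrated random walks (synthetic stand-ins for GDP and a driver).
common = np.cumsum(rng.normal(size=200))
gdp = common + rng.normal(scale=0.3, size=200)
driver = 0.8 * common + rng.normal(scale=0.3, size=200)
data = np.column_stack([gdp, driver])

# 1. ADF test: large p-values indicate non-stationary levels.
for name, series in [("gdp", gdp), ("driver", driver)]:
    print(name, "ADF p-value:", adfuller(series)[1])

# 2. Johansen test for the cointegration rank.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1, "critical values:", jres.cvt)

# 3. Fit a VECM with cointegration rank 1 and forecast 8 steps ahead.
res = VECM(data, k_ar_diff=1, coint_rank=1).fit()
print(res.predict(steps=8))
```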
I would like to point you to the attached papers, in which a novel machine learning technique is used for time series forecasting, with quite competitive results. In particular, the proposed method (the Gamma classifier) outperforms some well-known models, such as the TDNN and NARX neural network architectures, as well as ARIMA.
Again, as some previous authors have mentioned, there is no such thing as an ideal or perfect predictor for all problems, as proved by the No-Free-Lunch theorems.
Great answers by all who have responded. I agree especially with John R. on the need to focus on the process and apply a range of techniques as appropriate. To throw one more thing out there: in my experience, one of the determinants of the choice of technique is the number of variables available for forecasting and the length of the data series. I do local government revenue forecasting, and because of limited data availability we have to resort to methods that would normally be considered elementary by people forecasting GDP, national payroll employment, etc. At the other end of the spectrum, you can read the work of Stock and Watson on using Bayes and semi-Bayes techniques for forecasting with many predictors and long time series. As the others have said, data availability, forecasting needs, the timeframe for forecasts, etc. should drive the choice of method, and not the other way around.
My opinion: the structural time series model of Andrew Harvey (and his colleagues) remains the most accurate and the most straightforward to interpret. While the approach is constantly being updated, the classic text is A. Harvey, Forecasting, Structural Time Series Models and the Kalman Filter (Cambridge University Press, 1989). Also see his website at the University of Cambridge.
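A minimal sketch of Harvey-style structural time series modelling is available in statsmodels via UnobservedComponents (a local linear trend plus a seasonal component, estimated with the Kalman filter); the component specification and synthetic data below are assumptions for illustration:

```python
# Structural time series model (local linear trend + seasonal) via Kalman filter.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(4)
t = np.arange(144)
y = 0.1 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.7, size=t.size)

model = UnobservedComponents(y, level="local linear trend", seasonal=12)
fit = model.fit(disp=False)
print(fit.summary())
print(fit.forecast(steps=12))  # forecast 12 periods ahead
```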
I think forecasting is a topic that will keep researchers pondering continuously. Even the most experienced forecaster may end up with a forecast that has little to do with the actual data once they are published. Whatever procedure we adopt, there are some a priori assumptions and information we must depend on. In the case of time series data, these happen to be the past behavior of the variables we forecast. But does past behavior always dictate the future value of a variable? In the contemporary world, geopolitical changes are so unpredictable that researchers may not have enough time to adjust their forecasts accordingly, while their forecasts are already published. In the case of policy changes, we may have a little more of a comfort zone, but that too is influenced by the volatile changes mentioned above. For instance, consider the recent Russian political move against a country and its effects on the stock market in the United States. Fortunately, the move has been recalled, but who knows what may happen in the near future. I conclude that forecasting is a risky business and will remain so for the foreseeable future.
There is no benchmark model that ensures good forecasting. Moreover, it also depends on what type of data you are trying to use. In general, ARMA/ARIMA models will produce good forecasts if your data do not have much nonlinearity. Otherwise, one can try nonlinear models, such as neural networks or fuzzy systems, for effective forecasting.
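As a hedged sketch of the nonlinear route, here is a small neural network forecast on lagged values using scikit-learn's MLPRegressor (the network size and other hyperparameters are illustrative assumptions, not tuned values):

```python
# Nonlinear one-step-ahead forecasting with a small neural network on lags.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
y = np.sin(np.arange(300) / 7.0) ** 3 + rng.normal(scale=0.05, size=300)  # nonlinear toy series

lags = 6
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])  # lagged features
target = y[lags:]

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:-30], target[:-30])   # hold out the last 30 points for testing
print(mlp.predict(X[-30:]))      # one-step-ahead test predictions
```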
There is no benchmark forecasting model; I agree with this statement. However, in the case of macroeconomic forecasting, forecasters usually keep in mind what the various forecast publications (IMF, ADB, IDB, JICA, etc.) have already predicted for the macro variables of the relevant economy, such as the GDP growth rate, inflation rate, and current account balance. Needless to say, forecasters have different forecasting models (ARMA, ARIMA, VECM, etc.) at their discretion for time series forecasting. But there is no guarantee that a particular forecasting model will produce reasonable forecasts for the variables in question.
While I have said above that I think the structural time series model of Harvey is generally to be preferred, I also agree with those who argue that forecasting economic data is very difficult and that economists are not very good at it.