I'm afraid you cannot increase the number of observations by this kind of interpolation. If you do not have sufficient historical data, one approach is to use judgment as an additional input. More details here:
Technical Report: A joint Bayesian forecasting model of judgment and observed data
A more comprehensive analysis of alternative approaches to using judgment:
Thesis: Integration of judgmental and statistical approaches for dem...
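For intuition only, here is a minimal sketch of the general idea of combining judgment with a short data history (a toy conjugate update I wrote for illustration, not the model from the report above); the prior values and the observations are made up:

```python
import numpy as np

# Hypothetical expert judgment about next year's growth rate (%):
prior_mean, prior_sd = 3.0, 1.5          # judgmental prior (made-up values)
y = np.array([2.1, 2.8, 3.4, 2.5, 3.1])  # short observed history (made-up data)

sigma = y.std(ddof=1)                    # noise sd treated as known, for simplicity
n = len(y)

# Conjugate normal-normal update of the mean
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mean = (prior_mean / prior_sd**2 + y.sum() / sigma**2) / post_prec
post_sd = post_prec ** -0.5

print(f"data mean      : {y.mean():.2f}")
print(f"posterior mean : {post_mean:.2f} (sd {post_sd:.2f})")
# The posterior shrinks the data mean toward the judgmental prior,
# with weights given by their relative precisions.
```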
Methods do exist to disaggregate or aggregate data, but they are not without cost (reliability of the results, ecological bias). So it's a bad idea in my opinion. Try instead to get more data.
I agree with Andrey Davydenko and Chiraz Karamti that it is not a good idea to convert annual data to quarterly data for that purpose.
What do you mean by "but the number of observations is less than the requirement"? Such requirements are purely artificial. In one case, I worked with a thousand observations, but the distribution of the average was not at all normal, whereas a dozen observations were enough in another case. Please tell us more about your data and your purpose.
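To illustrate that point with simulated (made-up) data rather than the actual cases I mentioned: with heavy-tailed data, even 1,000 observations may not give a normal-looking average, while a dozen draws from a well-behaved distribution can be enough.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps = 5_000

# Averages of 1,000 draws from a very heavy-tailed distribution (Pareto/Lomax, shape 1.1)
heavy_means = np.array([rng.pareto(1.1, size=1_000).mean() for _ in range(n_reps)])

# Averages of only 12 draws from a normal distribution
normal_means = np.array([rng.normal(0, 1, size=12).mean() for _ in range(n_reps)])

# Skewness as a rough symmetry check (a normal sampling distribution has skewness ~ 0)
def skew(x):
    return ((x - x.mean())**3).mean() / x.std()**3

print(f"skewness, mean of 1000 heavy-tailed draws : {skew(heavy_means):.2f}")   # far from 0
print(f"skewness, mean of 12 normal draws         : {skew(normal_means):.2f}")  # close to 0
```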
Guy Mélard sir, basically I want to apply the ARDL methodology to annual data for a single country; to my knowledge, it can be applied if there are 35 or more observations.
I would like to say that there is practically no difference between 30 and 35. Who proposed that number, 35? But I am not sure that 30 or 35 observations are enough; it depends on your data. I would recommend making simulations based on a model of your data.
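For example, a rough sketch of the kind of simulation I mean, assuming purely for illustration an ARDL(1,1)-type data-generating process with made-up coefficients: simulate many samples of your actual length, re-estimate, and look at how much the estimates spread.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ardl(n, phi=0.5, b0=0.3, b1=0.2, sigma=1.0):
    """Simulate y_t = phi*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t (illustrative DGP)."""
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t-1] + b0 * x[t] + b1 * x[t-1] + rng.normal(scale=sigma)
    return y, x

def estimate_phi(y, x):
    """OLS of y_t on a constant, y_{t-1}, x_t, x_{t-1}; return the estimate of phi."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]

for n in (30, 35, 100):
    est = np.array([estimate_phi(*simulate_ardl(n)) for _ in range(2_000)])
    print(f"n={n:3d}: mean phi_hat={est.mean():.2f}, sd={est.std():.2f}")
# If the spread at n = 30-35 is too wide for your purpose, the sample is too short
# for that model, regardless of any "35 observations" rule of thumb.
```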
My two cents from my personal experience (13+ years working with time series):
1) Interpolation methods will introduce more variance (noise) into the estimation, but it makes sense when one is interested in a low-frequency measure. E.g. say one is interested in quarterly forecasts, and four of the five time series are quarterly and only one is annual; then the annual one can reasonably be interpolated. I found that simple methods introduce less noise and are easier to communicate than methods such as the Chow-Lin method for temporal disaggregation (which is cool anyway); see the first sketch after this list.
2) Related to (1), every interpolation is an additional estimation, and hence will introduce more error into the models, making standard errors higher (inference and forecast intervals are compromised).
3) Sometimes it is better to go with the annual data, and even to aggregate the other higher-frequency time series (for example, for annual policy/budgeting decisions, stress testing at a yearly frequency is more than enough to produce informative results, so I tend to aggregate the quarterly results to annual even if four of the five time series are on a quarterly basis).
4) If there are really few observations (I sometimes work with models of only 8 observations: from the last 8 years), then one option is to reduce the complexity of the model (e.g. a bivariate VAR with a couple of lags instead of a multivariate SVAR, etc.) and/or to use Bayesian methods. Bayesian methods are extremely powerful here, and with sensible priors a small sample is much less of a worry (second sketch below).
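On point 1, here is a minimal pandas sketch of the kind of simple interpolation I have in mind, with a made-up annual series (the "Q" frequency alias may warn on newer pandas versions, where "QE" is preferred):

```python
import pandas as pd

# Made-up annual series, stamped at year end (illustrative numbers only)
annual = pd.Series(
    [100.0, 104.0, 109.0, 112.0],
    index=pd.to_datetime(["2019-12-31", "2020-12-31",
                          "2021-12-31", "2022-12-31"]),
)

# Upsample to quarter-end dates and interpolate linearly between the
# annual observations (treating the series as a level/stock variable).
quarterly_level = annual.resample("Q").interpolate(method="linear")

# For a flow variable (annual totals), a crude benchmark is an equal split
# of each year's total across its four quarters.
quarterly_flow = annual.resample("Q").bfill() / 4

print(quarterly_level)
print(quarterly_flow)
# Note: no information is added; the quarterly pattern is an artefact of the
# interpolation rule, which is why this helps only as a low-frequency filler
# series, not as a way to gain genuine observations.
```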
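And on point 4, a minimal sketch of why priors help with very short samples: a conjugate, ridge-type prior on a toy regression with 8 made-up observations (not a full Bayesian VAR, just the idea):

```python
import numpy as np

rng = np.random.default_rng(1)

# 8 made-up observations of y on two predictors (tiny sample on purpose)
n, true_beta = 8, np.array([0.5, -0.3])
X = rng.normal(size=(n, 2))
y = X @ true_beta + rng.normal(scale=1.0, size=n)

sigma2 = 1.0   # noise variance treated as known, to keep the sketch conjugate
tau2 = 0.25    # prior variance of each coefficient (prior mean 0, i.e. shrinkage)

# OLS (can be erratic with n = 8)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Posterior mean under a N(0, tau2*I) prior: a ridge-type shrinkage estimator
A = X.T @ X / sigma2 + np.eye(2) / tau2
beta_bayes = np.linalg.solve(A, X.T @ y / sigma2)

print("OLS      :", np.round(beta_ols, 2))
print("Bayesian :", np.round(beta_bayes, 2))
# The prior pulls the estimates toward zero and stabilises them; with so few
# observations the data alone cannot pin the coefficients down precisely.
```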
30 observations is OK, especially for yearly data. Prediction intervals will usually reflect the fact that you may have a limited sample. But this does not mean you cannot construct a model based on 30 cases.
It would be good to get more data or to use other data as a proxy; converting the information into quarterly data would not give you concrete information in the analysis.
But in the case of 30 data points on a yearly basis, I feel it's OK, sir.
@Rolando Gonzales, that's a great insight. However, most of the data we collect from central banks and other sources are at some point disaggregated. When working on macroeconomic policy, for instance fiscal multipliers or monetary-policy-related analysis, quarterly data most often fit better, since central banks or government parastatals do not react to shocks immediately because of policy lags and other considerations. So will using disaggregated data becloud the true dynamics of the model? I would love to have your view on this.
Hamid Muili, yes, in some cases disaggregated data can in fact obscure the dynamics of interest. Quarterly GDP series and monthly inflation series (both of interest to central banks, for example) tend to show seasonal patterns, which obscure other dynamics of interest for policy-making, i.e. the trends in inflation and GDP and the cycle component of GDP. While some type of filtering is one way to get rid of the unwanted seasonal dynamics, another option is to aggregate the time series to the annual level, for example, if yearly changes are all that matter to policy makers. Example: for annual fiscal policies and annual budgeting decisions, a forecast of next year's inflation and GDP is more than enough, and sometimes it is all that the policy maker wants from the data scientist working for him. But in other circumstances the policy maker needs shorter-term forecasts or nowcasting to take short-run policy decisions, and then the quarterly or monthly dynamics become more relevant, even with the seasonal pattern; in those cases I use high-frequency data, even interpolating quarterly time series to, e.g., monthly series (a quarter can be too long for a policy!).
Another example that comes to mind is when I work with stock prices: daily or intra-daily data are extremely noisy, so building a model with daily data will be harder than with weekly or monthly data of the same stock, and the trend of the stock will be more visible in the aggregated data. Again, it depends on the purpose of the model.
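The aggregation itself is the easy part; the real decision is the target frequency. A small pandas sketch with made-up series, one quarterly flow aggregated to annual totals and one noisy daily price aggregated to monthly (the "Q"/"Y"/"M" aliases may warn on newer pandas, where "QE"/"YE"/"ME" are preferred):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Made-up quarterly flow series (e.g. a budget item) -> annual totals
quarterly = pd.Series(
    rng.normal(100, 10, size=12),
    index=pd.date_range("2021-03-31", periods=12, freq="Q"),
)
annual = quarterly.resample("Y").sum()   # use .mean() for stock/level variables

# Made-up noisy daily price series -> month-end values and monthly returns
price = pd.Series(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=500))),
    index=pd.date_range("2023-01-02", periods=500, freq="B"),
)
monthly_price = price.resample("M").last()
monthly_return = monthly_price.pct_change()

print(annual)
print(monthly_return.dropna().head())
```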
If actual quarterly data are available, you can use them and thus increase the number of observations. Choosing the method according to the intended use is what gives the right results (e.g. interpolation). So I don't recommend converting the annual data to quarterly data and using it.