It depends heavily on the type of data you have and the methods being used. Most statistical methods rest on asymptotic results, or at least require some initialization, and they are only reliable over a short to moderate forecast horizon H. I would therefore leave H observations at the end of the series (where ex ante forecasting is of interest) and use the remaining observations to fit the models and compute ex post forecasts, which are then compared with those H observations. For example, with monthly data, leave one or two years at the end of the series and fit the models on the earlier years. The appropriateness of the models can then be judged first by the quality of fit, and second by the quality of the ex post forecasts for those one or two years.
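A minimal sketch of that holdout scheme, assuming a synthetic monthly series and an ARIMA model from statsmodels; the series, the model order (1, 1, 1), and the horizon H = 24 are illustrative choices, not recommendations:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Illustrative monthly series: trend plus seasonality plus noise.
idx = pd.date_range("2015-01", periods=120, freq="MS")
y = pd.Series(
    0.5 * np.arange(120)
    + 10 * np.sin(2 * np.pi * np.arange(120) / 12)
    + rng.normal(0, 2, 120),
    index=idx,
)

H = 24  # hold out the last two years for ex post evaluation
train, test = y.iloc[:-H], y.iloc[-H:]

# Fit on the training portion only, then forecast the held-out horizon.
res = ARIMA(train, order=(1, 1, 1)).fit()
forecast = res.forecast(steps=H)

# Compare the ex post forecasts with the held-out observations.
mae = np.mean(np.abs(forecast.values - test.values))
print(f"MAE over the {H}-month holdout: {mae:.2f}")
```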
I concur with both Guy Mélard and Georgi Kiranchev.
Recalling that the training set is the subset used to train a model, while the test set is the subset used to evaluate the trained model, there are two basic conditions the test set should meet. The test set should be:
(i). large enough to yield statistically meaningful results,
(ii). representative of the data set as a whole (a quick check for this is sketched after the list).
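One possible way to probe condition (ii) is to compare the distribution of the candidate test set against the training portion, for instance with a two-sample Kolmogorov-Smirnov test; this is only a rough screen, and the series and split below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0, 1, 200))  # illustrative series

H = 40  # candidate test-set length
train, test = y[:-H], y[-H:]

stat, p = ks_2samp(train, test)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
# A very small p-value suggests the test set differs markedly from the
# training data (e.g. because of trend or a regime change), so the
# holdout may not be representative of the series as a whole.
```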
The ratio of the training set to the test set can then be varied, comparing forecast accuracy across the candidate splits, until the analyst is convinced beyond reasonable doubt that the precision of the evaluation has been optimized.
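A rough way to make that comparison concrete: score a forecaster for several candidate test-set sizes and inspect how the error estimate behaves. The seasonal-naive forecaster and the candidate values of H below are illustrative assumptions, chosen so the sketch needs no model library:

```python
import numpy as np

def seasonal_naive(train, horizon, period=12):
    """Repeat the last observed seasonal cycle over the forecast horizon."""
    last_cycle = train[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(last_cycle, reps)[:horizon]

rng = np.random.default_rng(1)
# Illustrative monthly series with a yearly seasonal pattern.
y = 10 * np.sin(2 * np.pi * np.arange(144) / 12) + rng.normal(0, 1, 144)

for H in (12, 24, 36):  # candidate test-set sizes: 1, 2, 3 years
    train, test = y[:-H], y[-H:]
    mae = np.abs(seasonal_naive(train, H) - test).mean()
    print(f"H={H:3d} months  train={len(train):3d}  MAE={mae:.2f}")
```

If the error estimate is stable across the candidate splits, the smaller test set is usually preferable, since it leaves more observations for fitting.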