I'm using an SVM (and a neural network) for a time-series prediction task in MATLAB R2016a with 800 samples. Currently I'm using 10-fold cross-validation with a grid search to find the best SVM parameters. I then use the 90 samples that follow these 800 as an out-of-sample set to check the performance of the final model: I train the model on the whole first 800 samples with the best SVM (and neural network) parameters and evaluate it on the held-out 90.
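Roughly, the evaluation looks like the sketch below (simplified; `X`, `y`, `bestC` and `bestGamma` are placeholder names, `bestC`/`bestGamma` standing in for the grid-search result, and the labels are assumed numeric, e.g. +1/-1):

```matlab
% Sketch of my setup: X is 890-by-p features, y is 890-by-1 numeric labels;
% the first 800 rows are the training window, the last 90 the out-of-sample period.
rng(1);                                       % fix the random folds for reproducibility
Xtrain = X(1:800,:);    ytrain = y(1:800);
Xoos   = X(801:890,:);  yoos   = y(801:890);

% 10-fold CV estimate on the first 800 samples (folds are drawn at random in time)
mdl   = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'rbf', ...
                'BoxConstraint', bestC, 'KernelScale', bestGamma);
cvmdl = crossval(mdl, 'KFold', 10);
fprintf('10-fold CV accuracy:    %.3f\n', 1 - kfoldLoss(cvmdl));

% Out-of-sample estimate: train on all 800 samples, predict the next 90 days
yhat = predict(mdl, Xoos);
fprintf('Out-of-sample accuracy: %.3f\n', mean(yhat == yoos));
```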
The test accuracy of the final model under 10-fold cross-validation is about 98% (sensitivity and specificity also around 98%), but when I evaluate the model trained on the whole first 800 samples on the last 90 out-of-sample points, the accuracy drops to roughly 55-59% (total accuracy, sensitivity, and specificity). This is daily forecasting of a financial market. Why am I seeing this behavior? I tried both ordinary k-fold cross-validation and sliding-window validation, and I observed the same behavior (poor out-of-sample accuracy) with both methods. The sliding-window check I tried looks roughly like the sketch below.
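Again a simplified sketch with the same placeholder names and assumptions: I retrain on a fixed-length window of the most recent 800 days and forecast only the following day, then slide the window forward.

```matlab
% Sliding-window (walk-forward) check with the same placeholder names as above.
winLen = 800;
nStep  = size(X,1) - winLen;                  % here: 90 one-day-ahead forecasts
yhatWF = zeros(nStep, 1);
for t = 1:nStep
    idx = t : t + winLen - 1;                 % training rows strictly before the forecast day
    m   = fitcsvm(X(idx,:), y(idx), 'KernelFunction', 'rbf', ...
                  'BoxConstraint', bestC, 'KernelScale', bestGamma);
    yhatWF(t) = predict(m, X(t + winLen, :)); % predict only the next day
end
fprintf('Walk-forward accuracy: %.3f\n', mean(yhatWF == y(winLen+1:end)));
```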