I developed a predictive model using the k-fold cross-validation (CV) technique. The cross-validation gave me more accurate predictive performance. What would be the conclusion if the CV model is more accurate than the final model?
Well I guess I don't understand your question. In any case, I would suggest using adaptive lasso for predictive model building. See this site for full details: https://www4.stat.ncsu.edu/~boos/var.select/lasso.adaptive.html
R programs are included. Attached are 2 recent papers that you might find helpful as well. Best wishes, David Booth
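For anyone who prefers Python to the R programs linked above, here is a minimal sketch of the two-step adaptive lasso idea (an initial estimate, then a lasso with coefficient-specific penalty weights). The data, the weighting exponent gamma, and the penalty alpha are illustrative assumptions, not the settings used in the linked programs.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

# Illustrative data (placeholder, not from the linked programs)
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

# Step 1: initial coefficient estimate (ridge is one common choice)
init = Ridge(alpha=1.0).fit(X, y)
gamma = 1.0  # weighting exponent (illustrative)
w = 1.0 / (np.abs(init.coef_) ** gamma + 1e-8)  # adaptive penalty weights

# Step 2: weighted lasso, solved by rescaling each column of X by 1/w_j
ada = Lasso(alpha=0.1).fit(X / w, y)

# Map the coefficients back to the original scale of X
coef = ada.coef_ / w
print("selected variables:", np.flatnonzero(coef))
```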
Your question is not clear. Anyway, cross-validation doesn't generate a single model. When comparing several predictive models, use the same evaluation method, such as cross-validation, for all of them.
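For illustration, here is a minimal Python/scikit-learn sketch of that point (the models and data are placeholders, not a recommendation): the candidates are all scored on the same cross-validation splits so the comparison is fair.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Placeholder data
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

# Reuse the SAME splits for every candidate model
cv = KFold(n_splits=5, shuffle=True, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(name, scores.mean())
```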
I am not sure I understand what you call the final model or how you obtained its performance, but I will try to answer based on what I understood you did.
Cross-validation is a way to estimate the generalization performance of your final model, i.e. the performance you would expect if you trained the model on your entire dataset and then tested it on a new, arbitrarily large dataset.
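A minimal sketch of this idea in Python (scikit-learn assumed; the model and data are placeholders): the cross-validation score is an estimate of how the final model, trained on all the data, should perform on new data; it is not a separate, better model.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

# Placeholder data and model
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
model = Ridge(alpha=1.0)

# CV score = estimate of the final model's performance on unseen data
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print("estimated generalization R^2:", scores.mean())

# The final model itself is trained on the entire dataset
final_model = Ridge(alpha=1.0).fit(X, y)
```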
When you performed your cross-validation, you may have used the performance results to optimize the hyperparameters of your model, as many people do. That is fine if you do not have too many hyperparameters. If you have many, the cross-validation performance risks being optimistically biased, because the hyperparameter selection itself overfits the cross-validation folds. So when you then test your final model (trained on your entire dataset with the best hyperparameters from the cross-validation) on a new test dataset, its performance may be lower than the cross-validation estimate.
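As an illustration (Python/scikit-learn, with a placeholder model and hyperparameter grid): a grid search picks the hyperparameters that maximize the cross-validation score, so that same score tends to overstate how the chosen model will do on genuinely new data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Hyperparameter search driven by the cross-validation score
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

# best_score_ was used to PICK the hyperparameters, so it tends to be
# optimistically biased; performance on a truly new test set is often lower.
print(search.best_params_, search.best_score_)
```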
To avoid overfitting and optimistic performance estimates from your cross-validation, you should use nested cross-validation. Nested cross-validation splits the dataset into three groups for each run: one for training, one for optimizing the hyperparameters, and one for validation. The nested cross-validation performance is computed only from the validation group of each run. To choose the hyperparameters of your final model, you can take a majority vote or an average of the hyperparameters kept in each run.
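One common way to implement nested cross-validation (sketched here in Python/scikit-learn with a placeholder model and grid) is to wrap the hyperparameter search in an outer cross-validation loop: the inner loop tunes the hyperparameters, and only the outer validation folds contribute to the reported performance.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Placeholder data
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}

# Inner loop: hyperparameter tuning; outer loop: performance estimation
inner_cv = KFold(n_splits=3, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

tuned = GridSearchCV(SVC(), param_grid, cv=inner_cv, scoring="accuracy")
nested_scores = cross_val_score(tuned, X, y, cv=outer_cv, scoring="accuracy")
print("nested CV accuracy:", nested_scores.mean())
```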