This can happen. The prediction R2 depends on the particular choice of test set, which may simply happen to suit your calibration model well, so in general there is nothing to worry about. A more detailed answer would require a thorough inspection of the dataset, though. Hope this helps. Kind regards, DK
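P.S. If it helps, here is a minimal Python/scikit-learn sketch of that effect (synthetic data from make_regression stands in for your spectra, and the 5 latent variables are an arbitrary choice): refitting the same model on differently seeded splits shows how much the test-set R2 can move with the split alone.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real data; replace with your own X and y.
X, y = make_regression(n_samples=80, n_features=50, noise=5.0, random_state=0)

scores = []
for seed in range(20):
    # Reseed the calibration/test split each time.
    X_cal, X_test, y_cal, y_test = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    pls = PLSRegression(n_components=5).fit(X_cal, y_cal)  # 5 LVs chosen arbitrarily
    scores.append(r2_score(y_test, pls.predict(X_test)))

print(f"test R2 over 20 random splits: mean={np.mean(scores):.3f}, "
      f"min={min(scores):.3f}, max={max(scores):.3f}")
```

If the spread between min and max is wide, a single lucky split can easily explain a prediction R2 that beats calibration.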
Hi Mudassir, I think it might depend on which cross-validation method you used (full cross-validation, X-segment CV, leave-one-out CV) and on how you split the data into calibration and validation sets. Maybe your validation set covers a range of values that leads to "more optimistic" predictions, or contains samples that are particularly well predicted. How did you select your validation set? Also, if you share your LV vs. R2 (or RMSE) curve, we can guess further together. (Is the difference in R2 large, or is it a very subtle one? See the sketch below for one way to compute that curve.)
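For example, a rough sketch of that LV vs. R2 comparison could look like this (Python with scikit-learn; make_regression is a synthetic placeholder for your data, and the LV range of 1 to 10 is arbitrary). It contrasts leave-one-out CV with 5-segment CV, since the two schemes can give noticeably different pictures on small datasets:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_predict

# Synthetic placeholder; swap in your own spectra X and reference values y.
X, y = make_regression(n_samples=60, n_features=40, noise=10.0, random_state=1)

for n_lv in range(1, 11):
    pls = PLSRegression(n_components=n_lv)
    # Out-of-fold predictions under two CV schemes.
    y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut())
    y_seg = cross_val_predict(pls, X, y,
                              cv=KFold(n_splits=5, shuffle=True, random_state=1))
    print(f"LV={n_lv:2d}  R2(LOO)={r2_score(y, y_loo):.3f}  "
          f"R2(5-seg)={r2_score(y, y_seg):.3f}")
```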
Hello Mudassir Arif Chaudhry, I think you should use the Kennard-Stone algorithm to split your dataset into calibration and validation sets and test it again. This situation does sometimes happen, but you should try other types of split for your data to avoid an overly optimistic evaluation of your models.
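For reference, here is a compact Python sketch of a Kennard-Stone split, written from the published description (Kennard & Stone, 1969) rather than from any particular library: it greedily picks the candidate farthest from the samples already selected, so the calibration set spans the X-space. The random X at the bottom is just a placeholder for your data matrix.

```python
import numpy as np

def kennard_stone_split(X, n_cal):
    """Return (cal_idx, val_idx) for an n_cal-sample calibration set."""
    # Full pairwise Euclidean distance matrix.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_cal:
        # For each candidate, distance to its nearest selected sample...
        min_d = dist[np.ix_(remaining, selected)].min(axis=1)
        # ...then take the candidate that maximizes that distance.
        nxt = remaining[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected), np.array(remaining)

# Placeholder data; replace X with your spectra matrix.
X = np.random.default_rng(0).normal(size=(30, 10))
cal_idx, val_idx = kennard_stone_split(X, n_cal=20)
print(len(cal_idx), "calibration samples,", len(val_idx), "validation samples")
```

One caveat worth knowing: because Kennard-Stone assigns the extreme samples to calibration, the validation set tends to be interpolated, so it is still worth comparing against a plain random split as a sanity check.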