Cross validation can be used to assess the variogram model in terms of prediction accuracy and the accuracy of the corresponding prediction uncertainty, i.e., the kriging variances. From the cross-validation (kriged) residuals you can compute the frequency distribution of the z-score, defined as the residual divided by the kriging standard error, as well as the mean error, the mean square normalized error, and the root mean square error. Minimizing these indicators can improve the variogram and, in turn, the prediction accuracy.
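To make those indicators concrete, here is a minimal leave-one-out cross-validation sketch. It assumes the PyKrige library and synthetic coordinates and values purely for illustration; the spherical model and the parameter values are placeholders, not a recommendation, and the original posts do not name any particular software. Broadly, you look for a mean error near zero, a small RMSE, and a mean square normalized error close to 1 (which suggests the kriging variances are of the right magnitude).

```python
# Sketch: leave-one-out cross validation of a candidate variogram model.
# Assumes PyKrige; data and variogram parameters are synthetic placeholders.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 100), rng.uniform(0, 100, 100)
z = np.sin(x / 20.0) + 0.1 * rng.standard_normal(100)

variogram = {"sill": 0.6, "range": 30.0, "nugget": 0.05}  # model under test

errors, zscores = [], []
for i in range(len(z)):
    keep = np.arange(len(z)) != i                      # leave sample i out
    ok = OrdinaryKriging(x[keep], y[keep], z[keep],
                         variogram_model="spherical",
                         variogram_parameters=variogram)
    z_hat, ss = ok.execute("points", np.array([x[i]]), np.array([y[i]]))
    err = z_hat[0] - z[i]                              # kriged residual
    errors.append(err)
    zscores.append(err / np.sqrt(ss[0]))               # residual / kriging std error

errors, zscores = np.array(errors), np.array(zscores)
print("mean error             :", errors.mean())
print("root mean square error :", np.sqrt((errors ** 2).mean()))
print("mean square normalized :", (zscores ** 2).mean())  # ideally close to 1
```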
I suspect you meant "approve" rather than "prove" as it is impossible to "prove" you have the correct variogram. All you can do is show that the variogram you have adopted gives an adequate reproduction of the observations in a cross-validation test. The variogram may still have problems that are not captured by the test. Thus you can conditionally approve the variogram using cross-validation. How well it works at points with no data is unknown.
Any interpolation method basically assumes that the variations in the function being sampled occur at a scale that is adequately captured by the data. If the function varies at a higher frequency than the sampling can resolve, then you are limited by the sampling (see the literature on the Nyquist-Shannon sampling theorem).
Take home message: think about the spatial variations that might be present, and whether these will be captured by the data. Don't fall into the trap of assuming that the data adequately sample the variations.
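As a toy illustration of that limit (pure NumPy, synthetic 1-D transect, all numbers made up): variation with a 2 m wavelength sampled every 2.5 m shows up in the samples as an apparent wavelength of about 10 m, and no interpolator can recover the true variation from those samples.

```python
# Sketch: undersampling a short-wavelength signal produces an aliased,
# apparently longer wavelength. Purely illustrative numbers.
import numpy as np

wavelength = 2.0                               # true spatial wavelength (m)
spacing = 2.5                                  # sample spacing (m), coarser than wavelength/2
x_samp = np.arange(0.0, 200.0, spacing)
samples = np.sin(2 * np.pi * x_samp / wavelength)

# Dominant frequency visible in the samples (discrete Fourier transform)
spec = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=spacing)
apparent = 1.0 / freqs[np.argmax(spec[1:]) + 1]   # skip the zero-frequency bin
print("true wavelength    : %.1f m" % wavelength)
print("apparent wavelength: %.1f m" % apparent)   # aliased to ~10 m
```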
If you do need an indication (not proof!) of whether a variogram (or other parameters) is fit for estimation or simulation, by comparison with other settings, you might want to look into using blind data (and, in a more advanced fashion, bootstrapping or spatial bootstrapping*).
A possible example: say you have 100 samples. You remove 10 of them (hence "blind") and run the estimation with the remaining 90. Then you compare the 10 withheld samples with the estimated node values at their locations. This lets you use real data to assess the "local" quality of your estimation (see the sketch after the footnote below).
* Spatial bootstrapping can be used to understand how sensitive your estimation is to changes in parameters, namely the variogram ellipsoid.
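A minimal sketch of the blind-data check described above, again assuming PyKrige and synthetic data; the 90/10 split, the variogram model and the parameter values are illustrative only.

```python
# Sketch: withhold 10 of 100 samples, krige from the remaining 90, and
# compare at the withheld ("blind") locations. Assumes PyKrige; synthetic data.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(42)
x, y = rng.uniform(0, 100, 100), rng.uniform(0, 100, 100)
z = np.cos(y / 15.0) + 0.1 * rng.standard_normal(100)

blind = rng.choice(100, size=10, replace=False)   # 10 withheld samples
train = np.setdiff1d(np.arange(100), blind)       # 90 used for estimation

ok = OrdinaryKriging(x[train], y[train], z[train],
                     variogram_model="spherical",
                     variogram_parameters={"sill": 0.5, "range": 25.0, "nugget": 0.05})
z_hat, ss = ok.execute("points", x[blind], y[blind])

resid = z_hat - z[blind]
print("mean error at blind locations:", resid.mean())
print("RMSE at blind locations      :", np.sqrt((resid ** 2).mean()))
```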
Note that cross validation produces multiple statistics (mean error, mean square error, mean square normalized error, fraction of normalized errors greater than 2.5, histogram of errors, histogram of normalized errors, correlation of error vs data value, correlation of data value vs kriged value, coded plot of data locations, etc.). No single one of these is best or most important in every situation.
The behavior of the statistics also differs depending on whether you use the "blind" approach suggested above or the more common jackknifing approach. You have to specify not only the variogram model but also its parameters, as well as the kriging parameters (minimum and maximum number of data locations used for each interpolation, maximum search distance). The cross-validation statistics are not equally sensitive to changes in these parameters.
As noted above, cross validation does not identify the "correct" variogram; instead it allows you to compare different choices, as in the sketch below.
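For instance, a rough way to compare candidate variogram models with leave-one-out cross validation, reusing the same assumptions as the earlier sketch (PyKrige, synthetic data, illustrative model and parameter choices):

```python
# Sketch: compare cross-validation statistics for two candidate variogram
# models rather than trying to "prove" one correct. Assumes PyKrige.
import numpy as np
from pykrige.ok import OrdinaryKriging

def loo_stats(x, y, z, model, params):
    """Leave-one-out mean error, RMSE and mean square normalized error."""
    errors, zscores = [], []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        ok = OrdinaryKriging(x[keep], y[keep], z[keep],
                             variogram_model=model, variogram_parameters=params)
        z_hat, ss = ok.execute("points", np.array([x[i]]), np.array([y[i]]))
        errors.append(z_hat[0] - z[i])
        zscores.append((z_hat[0] - z[i]) / np.sqrt(ss[0]))
    e, s = np.array(errors), np.array(zscores)
    return e.mean(), np.sqrt((e ** 2).mean()), (s ** 2).mean()

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)
z = np.sin(x / 20.0) + 0.1 * rng.standard_normal(80)

candidates = {
    "spherical, range 30":   ("spherical",   {"sill": 0.6, "range": 30.0, "nugget": 0.05}),
    "exponential, range 15": ("exponential", {"sill": 0.6, "range": 15.0, "nugget": 0.05}),
}
for name, (model, params) in candidates.items():
    me, rmse, msne = loo_stats(x, y, z, model, params)
    print(f"{name:24s} ME={me:+.3f}  RMSE={rmse:.3f}  MSNE={msne:.3f}")
```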
See Myers, D.E. (1991). On Variogram Estimation. In Proceedings of the First Inter. Conf. Stat. Comp., Cesme, Turkey, 30 March - 2 April 1987, Vol. II, American Sciences Press, 261-281.