Is there a good way to measure the variance of a model experimentally? I mean, can we give a single real number stating the variance of the model? To be honest, what I really want to measure is the level of overfitting present in my model.
What do you mean by "variance"? Are you interested in measuring the goodness of fit of the model?
Or would you like to describe the effect of the effective parameters? I think sensitivity analysis, and especially analysis of variance (ANOVA), can be useful for both of the mentioned purposes, and ANOVA may be the better choice due to its simplicity.
By variance, I mean "how variable the model is with respect to the dataset". We generally have some noise in our dataset. If the model has a high VC dimension and learns even the noise present in the data, the model is highly variable with respect to the dataset; i.e., for different datasets, we get quite different parameters fitting them. Overfitting results in high variance and underfitting results in high bias.
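One practical way to put a single number on this is to refit the same model on many resampled versions of the training data and measure the spread of its predictions at a fixed set of test points; the average of those pointwise variances is then one real number estimating the model's variance. Here is a minimal sketch in Python (the noisy sine data, the unpruned decision tree, and the bootstrap scheme are all illustrative assumptions, not anything specific to your setup):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.RandomState(0)

# Toy data: a noisy sine curve, standing in for "a dataset with noise"
X = np.sort(rng.uniform(0, 2 * np.pi, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=X.shape[0])

# Fixed grid of test points at which we inspect the fitted functions
X_test = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)

n_boot = 100
preds = np.empty((n_boot, X_test.shape[0]))

for b in range(n_boot):
    # Each bootstrap resample plays the role of "a different dataset"
    Xb, yb = resample(X, y, random_state=b)
    model = DecisionTreeRegressor(max_depth=None)  # deep tree: high variance
    model.fit(Xb, yb)
    preds[b] = model.predict(X_test)

# Variance of the prediction at each test point, averaged over the grid:
# a single real number summarising the model's variance
variance = preds.var(axis=0).mean()
print(f"estimated model variance: {variance:.4f}")
```

Rerunning this with a constrained model (e.g. `max_depth=2`) should give a noticeably smaller number, which is the bias-variance trade-off made visible.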
If your question is related to the notion of "stability" versus "generalisation", the attached paper may be of interest.
To quote their introduction:
"In contrast to standard approaches to sensitivity analysis, we mainly focus on the sampling randomness and we thus are interested in how changes in the composition of the learning set influence the function produced by the algorithm."