I am a bit wary of asking this question given my ignorance, and I am also not too sure whether it might provoke some emotional responses.
Assume I want an estimate of the average R-squared and its variability from the literature for specific models y~x. I found 31 literature sources.
The questions are twofold:
1.) Can I shift the simulations of an ABC-rejection algorithm so that they act as if they indeed came from my target (see the first 4 figures)?
The parameter in this case is a draw from the prior that deviates from the target, which I then shift so that it fits.
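To make question 1 concrete, here is a toy sketch of what I mean, with placeholder data (a skewed Beta sample standing in for the 31 R² values) and an assumed Beta simulator with concentration `k` — none of these numbers are from the real dataset. The "shift" here is a plain location relocation of the accepted draws onto the observed summary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder for the 31 literature R-squared values (skewed, Beta-ish).
obs = rng.beta(2.0, 6.0, size=31)
obs_mean = obs.mean()

# ABC-rejection: draw the mean from a flat prior on (0, 1), simulate a
# same-size dataset from an assumed Beta model with concentration k,
# and accept the draw if the simulated mean lands close to the observed mean.
n_draws, eps, k = 20_000, 0.01, 8.0
theta = rng.uniform(0.0, 1.0, size=n_draws)            # flat prior on the mean
sim_means = rng.beta(theta * k, (1.0 - theta) * k,
                     size=(obs.size, n_draws)).mean(axis=0)
accepted = theta[np.abs(sim_means - obs_mean) < eps]

# The "shift" in question 1: relocate the accepted draws so that their
# mean coincides with the observed summary.
shifted = accepted - accepted.mean() + obs_mean
print(np.percentile(shifted, [2.5, 97.5]))
```

A pure location shift like this is a crude special case; the usual principled version of post-hoc adjustment in ABC is a regression adjustment of the accepted draws on the summary-statistic discrepancy.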
2.) I applied 4 methods in this case: ABC-rejection (flat prior, not really preferred), Bayesian bootstrap, classical bootstrap, and a one-sided t-test (lower 4 figures). From all methods I extracted the 2.5-97.5% intervals. Given the information below, is it reasonable to go for the Bayesian bootstrap in this case?
As sometimes suggested on RG, and hidden deep in some articles, the intervals from the different methods converge and are more-or-less-ish similar. However, I also have another, smaller dataset which is likewise skewed, so I would personally prefer the Bayesian bootstrap, as it smooths things out and extreme exactness does not matter too much to me in this case. Based on these results, my coarse guesstimate of the average variability would range from ~20-30% (to me it seems pot`a`to - pot`aa`to for either method, disregarding the philosophical meaning). I would also like to use the individual estimates returned by each bootstrap replicate; technically they are not normally distributed (Beta-ish), although this does not seem to matter much in this pragmatic case.
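For reference, this is the comparison I mean between the two bootstraps, again on placeholder data (a skewed Beta sample standing in for the real 31 values). The Bayesian bootstrap follows Rubin (1981): Dirichlet(1, …, 1) reweighting of the observed values instead of resampling, which is what gives the smoothing I refer to:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the 31 literature R-squared values (skewed, Beta-ish).
r2 = rng.beta(2.0, 6.0, size=31)

n_rep = 10_000

# Classical bootstrap: resample the 31 values with replacement.
classical = np.array([
    rng.choice(r2, size=r2.size, replace=True).mean()
    for _ in range(n_rep)
])

# Bayesian bootstrap (Rubin 1981): draw Dirichlet(1, ..., 1) weights and
# reweight the observed values; no observation ever gets exactly zero
# weight, which smooths the distribution of the mean.
weights = rng.dirichlet(np.ones(r2.size), size=n_rep)
bayesian = weights @ r2

for name, draws in (("classical", classical), ("bayesian", bayesian)):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: [{lo:.3f}, {hi:.3f}]")
```

With n = 31 the two 2.5-97.5% intervals come out nearly identical, which is the "pot`a`to - pot`aa`to" point; the Bayesian version just avoids the discreteness of resampling.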
Thank you in advance.