Dear all,
I am re-analyzing previously published findings on a specific phenomenon in the area of JDM for a meta-analysis (the phenomenon itself is not relevant here; this is a methodological question). Typically, researchers have used multiple regression or linear mixed-effects models to quantify the influence of one predictor on the criterion. Having read some papers on the advantages of Bayesian statistics and of meta-analysis, I thought I would apply my limited but growing knowledge of Bayesian stats to this problem and assess how much evidence actually backs the reported (and mostly significant) p-values.
However, I do not have the original raw data, only the parameters reported in the literature (b, SE, p, t, beta, etc.). The question is whether I can still compute a Bayes factor (BF) to assess the strength of the influence of one specific variable. I know that typically one would test different regression models against each other, i.e., compute the BF of the model including the predictor relative to the model without it (see the sketch just below). Sadly, I do not have this luxury and can only work with the reported statistics.
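For reference, this is roughly the comparison I would run if I had the raw data, using lmBF() from the BayesFactor package. Just a sketch; the data frame and variable names (dat, crit, pred1, pred2) are placeholders of my own:

library(BayesFactor)

# Compare the full model against the model without the predictor of interest
full    <- lmBF(crit ~ pred1 + pred2, data = dat)
reduced <- lmBF(crit ~ pred2, data = dat)

# Dividing two BayesFactor objects gives the BF of one model over the other,
# here the evidence for including pred1 on top of pred2
full / reduced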
Thus, I came up with the following idea: I could use the parameter's test statistic (t, which tests whether the effect differs from 0) together with the sample size to infer the BF for that specific parameter. However, I wonder whether this approach is valid, as I have never seen it in a published paper or blog, and the documentation of the R package BayesFactor (Morey & Rouder, 2014) does not mention such an application either. Yet the package does offer computing a BF from the t statistic of a one-sample t-test against 0, which seems analogous to the t values reported for the parameters in these models.
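To make the idea concrete, here is a sketch of what I have in mind, using ttest.tstat() from BayesFactor. Note the key assumption, which is mine and not from the package documentation: I treat the coefficient's t as if it came from a one-sample t-test with an effective sample size of df + 1 (since df = n - 1 in the one-sample case). The value t(118) = 2.75 is a made-up example:

library(BayesFactor)

# Approximate JZS Bayes factor for a regression coefficient from its
# reported t value and residual degrees of freedom
bf_from_t <- function(t, df, rscale = "medium") {
  res <- ttest.tstat(t = t, n1 = df + 1, rscale = rscale)
  exp(res[["bf"]])  # ttest.tstat() returns the Bayes factor on the log scale
}

# Example: a paper reports t(118) = 2.75 for the coefficient of interest
bf_from_t(t = 2.75, df = 118)

The rscale argument sets the width of the Cauchy prior on effect size; "medium" is the package default. What I am unsure about is whether the t of a coefficient that is conditioned on other predictors can be plugged in like this.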
Any Bayesians out there who could help me out? Advice is much appreciated!
Best,
Tom