If you have the 95% confidence intervals, you can impute the standard deviation and then calculate the pooled reduction in BP using simple generic inverse variance meta-analysis. This is available in meta or metafor for R and (I think) RevMan.
This is commonly known as a meta-analysis of change scores, which I'm sure you can find plenty of examples of.
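The inverse-variance pooling mentioned above is simple enough to sketch by hand. The thread's actual recommendation is the meta/metafor packages in R; the snippet below is only an illustration of the arithmetic, and the study estimates in it are invented:

```python
import math

# Hypothetical per-study mean BP reductions (mmHg) and their standard errors
estimates = [-5.2, -7.1, -4.0]
std_errors = [1.1, 0.9, 1.4]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% CI of the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 2), round(pooled_se, 2))
```

A random-effects model would additionally estimate a between-study variance (tau^2) and add it to each study's variance before weighting, which is what meta/metafor do under the hood.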
Benjamin Behers 95% confidence intervals are by definition 3.92*SE wide, i.e. mean ± 1.96*SE, where SE = standard error = SD/N^(1/2). Therefore, working backwards:
N^(1/2) x (CIupper - CIlower)/3.92 will give you the SD.
Most software will do this for you if you input the lower and upper bounds of the 95% CI.
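Worked through numerically, the back-calculation looks like this (a Python sketch; the sample size and CI values are invented for illustration):

```python
import math

# Hypothetical single-study data: N = 50 participants,
# 95% CI for the mean BP change of (-8.0, -3.0)
n = 50
ci_lower, ci_upper = -8.0, -3.0

# CI width = 3.92 * SE, and SE = SD / sqrt(N), so:
se = (ci_upper - ci_lower) / 3.92
sd = se * math.sqrt(n)
print(round(sd, 2))
```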
Jack Henry Also, the meta-analyses that I’m looking at give standard deviations for each individual study, but not the one that they use to calculate the CI. Therefore, I’d need to combine the SDs from the individual studies to get the overall SD that was used in development of the CI, correct?
Benjamin Behers Apologies, are you trying to combine the results of multiple meta-analyses? You can't really do this because you effectively add a variance term that isn't needed. It's also likely that they contain overlapping studies if they're on the same topic.
If you want to combine the results of the meta-analyses, then you would input the means and SDs from the studies they included into your own analysis.
Benjamin Behers In that case you need to extract the mean reduction and standard deviation for each individual study included in each meta-analysis. You cannot statistically compare the results of multiple meta-analyses directly. You definitely cannot impute the SD from the CIs of meta-analytic results, because the standard error they are derived from may incorporate between-study heterogeneity (under a random-effects model).
You need to extract the results of the individual studies from the meta-analyses, and you can then do your own meta-analysis and treat each treatment as a subgroup. You can statistically compare the sub-groups.
A more correct, but more complex, way to compare the multiple treatments would be to incorporate the results of the individual studies into a network meta-analysis (since they have all ostensibly been compared to placebo), which would give you the most definitive estimate of their relative efficacy.
You would also need to do your own search regardless of which you choose, because these analyses were likely published at different points in time, do not necessarily encompass the same body of literature, and may have differing inclusion criteria. In addition, if you opt for a network analysis you would also need to search for studies that compare treatment A vs treatment B rather than just treatments vs placebo, as these would also be eligible.
Probably the easiest option is to just write a qualitative review comparing the analyses (often called an umbrella review).
Jack Henry Please I need your help with a nagging problem. I did a meta-analysis of single arm studies (surgical intervention with no feasible control) with four subgroups (H, LU, LIV, K) for a research collaboration. In an attempt to compare the outcome for the subgroups, I used the CI to determine significance/non-significance in the absence of P-values (since there is no comparator). This I did by observing presence or absence of overlaps in the CIs.
Some members of the group are of the opinion of using ANOVA to compare the point estimates in the subgroups. I have never seen the result of a meta-analysis further subjected to ANOVA or other primary data analysis methods. Do you have any experience with this, and can you share your thoughts on the best way to go about this besides the approach I used above? I used a random-effects model, which accounts for heterogeneity that would ordinarily affect an ANOVA on each group's point estimate.
Victor Ejigah You cannot use ANOVA in the traditional sense to compare point estimates of proportions directly. However, if we consider that subgroup analyses are equivalent to a meta-regression model with a categorical moderator, we can use Wald-type tests (analogous to ANOVA) to compare the different levels of the categorical moderator, i.e. to compare the subgroups. You can calculate p-values for a difference between any two subgroups by reference to the null hypothesis that the difference between the group meta-regression coefficients is zero, i.e. H0: Beta(group1) = Beta(group2). The best way to go about doing this is to fit a mixed-effects meta-regression model, then test for the above differences between the coefficients. This is possible using metafor for R, for which there are examples here:
It is important to note that this does not really give you any information that the confidence intervals do not: if the 95% CIs do not overlap, the Wald test will give you a p value < 0.05, and (approximately) vice versa. Making multiple comparisons of this sort is prone to false positive error, so to me it makes more sense to interpret your results in terms of their relative effect sizes and precision when comparing subgroups of estimates of proportions, rather than relying entirely on the flawed concept of statistical significance.
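The Wald-type comparison described above boils down to a z-test on the difference between two subgroup coefficients. A minimal sketch follows (in Python for illustration; the thread's recommendation is metafor's mixed-effects meta-regression in R, and the coefficients and standard errors below are invented, standing in for values you would read off a fitted model):

```python
import math
from statistics import NormalDist

# Hypothetical subgroup coefficients (e.g. logit proportions) and their
# standard errors, as would be reported by a fitted meta-regression model
b1, se1 = 0.40, 0.10   # subgroup 1 coefficient
b2, se2 = 0.10, 0.12   # subgroup 2 coefficient

# Wald z statistic for H0: Beta(group1) = Beta(group2)
z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(z, 2), round(p, 4))
```

In metafor this same test is obtained by fitting rma() with the subgroup as a moderator and inspecting the coefficient contrasts, rather than computing z by hand.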
Jack Henry I tried using metafor in R but I've been hitting brick walls. I was able to install metafor and imported the xls file. It looked like this:
organ  AE estimate  SVR estimate  Composite estimate
L      1            2             3
K      3            5             8
H      2            0             4
These are the estimates I got from performing the meta-analysis. I assumed (as you advised) that these are beta coefficients akin to a regression model.
The idea is to compare the coefficient of AE for L vs K, K vs H, and L vs H, and then repeat the same for the SVR estimate and the other estimates.