Results
Method 1 produced pooled SMDs of 0.14 (95% CI 0.10 to 0.17) under the fixed-effects model and 0.31 (95% CI 0.14 to 0.51) under the random-effects model. The difference between the two is marked: the fixed-effects result has a smaller effect size and a narrower confidence interval because the fixed-effects analysis gives more weight to large trials, which tended to have more modest effect sizes (Table 3).
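As a reminder of why the two models diverge (these are the standard inverse-variance formulas, not a description of this review's specific software), the fixed-effect weight is the reciprocal of a trial's variance, so large, precise trials dominate, whereas the random-effects weight adds the estimated between-trial variance τ² to every trial, flattening the weights and widening the interval:

```latex
% Fixed-effect pooled estimate: inverse-variance weights, so large trials dominate.
\[
\hat\theta_F = \frac{\sum_i w_i \hat\theta_i}{\sum_i w_i},
\qquad w_i = \frac{1}{v_i},
\qquad \operatorname{SE}\!\left(\hat\theta_F\right) = \frac{1}{\sqrt{\sum_i w_i}}
\]
% Random-effects pooled estimate: the between-trial variance \tau^2 is added to
% each trial's variance, pulling the weights towards equality.
\[
\hat\theta_R = \frac{\sum_i w_i^{*} \hat\theta_i}{\sum_i w_i^{*}},
\qquad w_i^{*} = \frac{1}{v_i + \hat\tau^{2}}
\]
```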
Method 2 resulted in a pooled OR of 1.13 (95% CI 1.06 to 1.20) and a pooled SMD of 0.50 (95% CI 0.42 to 0.59) under fixed effects, and an OR of 1.13 (95% CI 1.06 to 1.20) and an SMD of 0.92 (95% CI 0.11 to 1.73) under random effects; that the fixed- and random-effects ORs coincide itself indicates negligible between-trial heterogeneity on the odds-ratio scale. One study contributed data to both the odds ratio and SMD estimates. The odds-ratio analysis produced a far less heterogeneous result than the SMD analysis in this case, but because the two draw on different sets of trials it is difficult to infer why.
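For a rough sense of scale only (a standard textbook approximation that assumes a logistic-shaped underlying outcome, not a calculation the review performed), an odds ratio can be mapped onto the SMD scale:

```latex
% Approximate OR-to-SMD conversion under a logistic latent-variable assumption:
\[
\mathrm{SMD} \approx \frac{\sqrt{3}}{\pi}\,\ln(\mathrm{OR})
= \frac{\sqrt{3}}{\pi}\,\ln(1.13) \approx 0.07
\]
```

On that scale the pooled OR corresponds to a much smaller effect than the pooled SMD of 0.50, which underlines that the two analyses summarise different trials.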
Method 3 resulted in an SMD of 0.57 (95% CI 0.50 to 0.64). This weighted average produced the narrowest confidence interval of the SMD estimates.
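A minimal sketch of this kind of weighted average, assuming Method 3's weighting is the usual inverse-variance kind; the numbers are invented for illustration, not the trial data behind Table 3:

```python
import numpy as np

# Hypothetical per-trial SMDs and standard errors (illustrative only).
smd = np.array([0.50, 0.62, 0.55, 0.70])
se = np.array([0.08, 0.10, 0.06, 0.15])

# Inverse-variance weights: precise (low-SE) trials count for more.
w = 1.0 / se**2

pooled = np.sum(w * smd) / np.sum(w)
pooled_se = 1.0 / np.sqrt(np.sum(w))

# 95% CI from a normal approximation.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled SMD = {pooled:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```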
For Method 4, Figure 1 shows that all studies reported a positive effect, so on average credible source interventions appear effective. That the points do not cluster around any one contour line indicates a high level of heterogeneity. Both large and small studies are associated with very small p-values and large effect sizes, so there is little evidence of publication bias, which would typically appear as an absence of small studies in the non-significant regions of the plot.
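For illustration, a contour-enhanced funnel plot of the kind shown in Figure 1 can be sketched as follows; the study points are invented, and the contour lines mark where an effect would be exactly significant at the stated p-value:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented (SMD, SE) pairs; not the studies plotted in Figure 1.
smd = np.array([0.15, 0.30, 0.55, 0.50, 0.90, 0.60])
se = np.array([0.04, 0.08, 0.15, 0.10, 0.30, 0.20])

se_grid = np.linspace(1e-3, 0.35, 200)
fig, ax = plt.subplots()

# Significance contours: a study lying on |SMD| = z * SE has exactly that p-value.
for z, style, label in [(1.96, "--", "p = 0.05"), (2.58, ":", "p = 0.01")]:
    ax.plot(z * se_grid, se_grid, style, color="grey", label=label)
    ax.plot(-z * se_grid, se_grid, style, color="grey")

ax.scatter(smd, se, zorder=3)
ax.axvline(0, color="black", linewidth=0.5)
ax.invert_yaxis()  # most precise studies at the top
ax.set_xlabel("SMD")
ax.set_ylabel("Standard error")
ax.legend()
plt.show()
```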
Three of the methods produced an SMD, ranging from 0.14 to 0.57. All were statistically significant, so we can be reasonably confident that a positive effect exists, but less confident about its size, which is sensitive to the method chosen.