Publication bias
Following Nakagawa et al. (2022), we relied on two complementary
approaches to assess small-study effects, which may result from
publication bias. First, we visualized the relationship between effect
sizes and their precision (SE) using funnel plots. To do this, we
re-fitted the selected models as random-effects models and computed the
residual effect sizes conditional on the experiment, the observation,
and the levels of those factors included as moderators in the main
analyses. These conditional residuals have the advantage of accounting
for some of the within-experiment non-independence, but they still rest
on assumptions about the sampling variances that are unlikely to hold
(Nakagawa et al. 2022).
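For illustration only, a funnel plot of this kind can be drawn directly
from the conditional residuals and their standard errors. The Python
sketch below assumes hypothetical arrays resid and se; it is not the
code used for the analyses reported here.

```python
# Minimal funnel-plot sketch, assuming `resid` holds conditional residual
# effect sizes and `se` their standard errors (hypothetical names).
import matplotlib.pyplot as plt
import numpy as np

def funnel_plot(resid, se):
    fig, ax = plt.subplots()
    ax.scatter(resid, se, s=15, alpha=0.6)        # one point per effect size
    ax.axvline(0.0, linestyle="--", linewidth=1)  # reference line at zero
    ax.invert_yaxis()                             # most precise estimates on top
    ax.set_xlabel("Residual effect size")
    ax.set_ylabel("Standard error (SE)")
    return fig

# Simulated example: a symmetric scatter around zero is the pattern
# expected in the absence of small-study effects.
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.5, 200)
resid = rng.normal(0.0, se)
funnel_plot(resid, se)
plt.show()
```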
We, therefore, complemented the funnel plots with a two-step, modified
Egger’s test for multilevel meta-analysis (Nakagawa et al. 2022).
In the first step of this test, the SE of the effect sizes is included
as the only moderator in a meta-regression with the same random-effects
structure as in our main MLMA analyses. A significant slope for this
moderator indicates that studies with low precision tended to report
either more negative or more positive effects than studies with higher
precision. If the SE slope differs from zero, the second step of the
test is to fit a meta-regression with the sampling variance of the
effect sizes (SE²) as the only moderator. The intercept of this second
meta-regression is then a more appropriate estimate of the overall
meta-analytic effect (Stanley & Doucouliagos 2014).
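In model form (with notation assumed here rather than taken from the
original analyses), the two steps correspond to the regressions

\[
\text{Step 1: } z_{ij} = \beta_0 + \beta_1\,\mathrm{SE}_{ij} + u_j + o_{ij} + e_{ij},
\qquad
\text{Step 2: } z_{ij} = \beta_0 + \beta_1\,\mathrm{SE}_{ij}^{2} + u_j + o_{ij} + e_{ij},
\]

where $z_{ij}$ denotes effect size $i$ from experiment $j$, $u_j$ and
$o_{ij}$ are the experiment- and observation-level random effects of the
main MLMA models, and $e_{ij}$ is the sampling error. A slope $\beta_1$
different from zero in step 1 signals small-study effects, and the
intercept $\beta_0$ of step 2 serves as the bias-adjusted estimate of
the overall effect.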
Because we uncovered evidence consistent with publication bias in Q1
and Q2, we tested the robustness of the meta-analytic moderator effects
by fitting, for each question, a multilevel meta-regression (MLMR) that
included the sampling variance in addition to the moderators of
interest (see Supporting Material).
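As a sketch, under the same assumed notation as above, this robustness
model for a given question takes the form

\[
z_{ij} = \beta_0 + \beta_1\,\mathrm{SE}_{ij}^{2} + \sum_{k} \gamma_k\, x_{kij} + u_j + o_{ij} + e_{ij},
\]

where $x_{kij}$ are the moderators of interest for that question.
Moderator estimates $\gamma_k$ that change little once
$\mathrm{SE}^{2}$ is included suggest that the corresponding effects are
robust to small-study effects.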