[R] Metafor multilevel metaregression: total variance increases when moderator added?
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Feb 28 13:54:19 CET 2017
Very difficult to diagnose what is going on without actually seeing the data. But as I said on CV: Depending on the data, the variance components may not be estimated precisely, so negative values for those kinds of pseudo-R^2 statistics are quite possible. In fact, if a particular moderator is actually unrelated to the outcomes, then in roughly 50% of the cases, the pseudo-R^2 statistic will be negative.
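To make this concrete, here is a minimal sketch of the kind of computation involved, using the `dat.konstantopoulos2011` example data shipped with metafor (not the poster's data; the model and moderator here are purely illustrative):

```r
# Sketch: pseudo-R^2 as the proportional reduction in the summed variance
# components when a moderator is added to a multilevel model.
# With a moderator that is (nearly) unrelated to the outcomes, this
# quantity can easily come out negative.
library(metafor)

dat <- dat.konstantopoulos2011

# Model1: intercept-only model with schools nested within districts
Model1 <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

# Model2: same random-effects structure, adding 'year' as a moderator
Model2 <- rma.mv(yi, vi, mods = ~ year,
                 random = ~ 1 | district/school, data = dat)

# pseudo-R^2: proportional reduction in total (summed) variance components
(sum(Model1$sigma2) - sum(Model2$sigma2)) / sum(Model1$sigma2)
```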
Lopez-Lopez, J. A., Marin-Martinez, F., Sanchez-Meca, J., Van den Noortgate, W., & Viechtbauer, W. (2014). Estimation of the predictive power of the model in mixed-effects meta-regression: A simulation study. British Journal of Mathematical and Statistical Psychology, 67(1), 30-48.
We only examined the standard mixed-effects meta-regression model with a single moderator, but found that the pseudo-R^2 statistic can be all over the place unless k is quite large.
Now you seem to have a larger number of estimates (170), but these are nested in 'only' 26 studies. So, I suspect that the estimate-level variance component is estimated fairly precisely, but not the study-level variance component. You may want to examine the profile plots (with the profile() function) and/or get (profile-likelihood) CIs of the variance components (using the confint() function). Probably the CI for the study-level variance component is quite wide.
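For reference, the diagnostics mentioned above would look roughly like this (again sketched with the `dat.konstantopoulos2011` example data in place of the poster's dataset; `sigma2 = 1` is assumed to index the outer, study-level component in the fitted model):

```r
# Sketch: examining how precisely the variance components are estimated.
library(metafor)

dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

# Profile likelihood plot for the first (outer-level) variance component;
# a flat profile indicates the component is poorly identified.
profile(res, sigma2 = 1)

# Profile-likelihood CIs for each variance component; a very wide CI for
# the study-level component would explain unstable pseudo-R^2 values.
confint(res, sigma2 = 1)
confint(res, sigma2 = 2)
```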
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com
>From: R-help [mailto:r-help-bounces at r-project.org] On Behalf Of Duncan,
>Sent: Monday, February 27, 2017 20:05
>To: r-help at r-project.org
>Subject: [R] Metafor multilevel metaregression: total variance increases
>when moderator added?
>I am running a two-level multilevel meta-regression of 170 estimates
>nested within 3 informants nested within 26 studies. I run the null model
>to get a pooled estimate with random effects at the informant and study
>levels. Then I test a series of potential moderators (one at a time, given
>the small number of studies, and adjust p-values for multiple testing). I use:
>(sum(Model1$sigma2) - sum(Model2$sigma2)) / sum(Model1$sigma2)
>to compute the proportional reduction in the total variance from here:
>For one moderator, I get a negative value for reduced total variance and
>an unexpected negative coefficient. Based on Wolfgang's response in the
>link above this is possible "depending on the size of your dataset, those
>variance components may not be estimated very precisely and that can lead
>to such counter-intuitive results".
>I am trying to diagnose why this model is not being estimated properly and
>why I am getting an unexpected negative result. When I remove the second
>level from the model and run a single-level random-effects model of 170
>estimates nested within 26 studies, the coefficient is positive, as we
>expected.
>Does anyone have any suggestions for what might be going on or how I might
>diagnose the problem with this model?
>Laura Duncan, M.A.
>Offord Centre for Child Studies
>Tel: 905 525 9140 x21504
>Fax: 905 574 6665
>duncanlj at mcmaster.ca
>Mailing Address: 1280 Main St. W. MIP 201A, Hamilton, Ontario L8S 4K1
>Courier: 175 Longwood Rd. S., Hamilton, Ontario