[R] bootstrapping in regression
Greg.Snow at imail.org
Mon Feb 2 18:29:14 CET 2009
Others have confirmed that you use the predicted values plus the permuted residuals as the new y variable, and they have also referred you to some other articles.
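For concreteness, here is a minimal sketch of that procedure in R (the data and variable names are purely illustrative; here the reduced model drops x1, the term being tested):

```r
# Permutation test for the slope of x1, given x2 in the model.
# Simulated example data (illustrative only).
set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.5 * x2 + rnorm(n)        # null is true: x1 has no effect

reduced <- lm(y ~ x2)                 # reduced model (without x1)
fit.red <- fitted(reduced)
res.red <- resid(reduced)

obs <- coef(lm(y ~ x1 + x2))["x1"]    # observed slope of x1 in the full model

nperm <- 1000
perm.slopes <- replicate(nperm, {
  ystar <- fit.red + sample(res.red)  # fitted values + permuted residuals = new y
  coef(lm(ystar ~ x1 + x2))["x1"]     # refit the full model to the permuted data
})

# Two-sided permutation p-value
pval <- mean(abs(perm.slopes) >= abs(obs))
```

The slopes collected in perm.slopes approximate the sampling distribution of the x1 coefficient under the null that it is 0 given the other variables.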
On the question of does this work for mixed effects models: That is a good question, and it depends on what question you are trying to answer and what assumptions you are trying to make. The mixed effects model is more complicated in that you not only have residuals that you are permuting, but possibly also random effects, depending on your question(s) of interest. Then to further complicate things, you need to take into account any correlations between the different residuals/effects.
If you can work out a reduced model of interest under your null hypothesis, and see how to permute the other pieces in a way that preserves the correlation or works with assumed orthogonality, then it should work for you (but it will not be simple).
I would suggest that you try doing a bunch of simulations where you first create data sets that follow your null hypothesis (reduced model), then do the permutation test on them. If everything is working correctly, then the p-values should follow a roughly uniform distribution (if not, then the permutation test is not working for your situation, your assumptions are not holding, or something else is messed up). Doing the simulations will force you to think about all the pieces that go into the analysis and how reasonable your assumptions are. If this works, then try simulating under the alternative (full model) to see what type of power you have to see the difference and compare that to other approaches.
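A sketch of that calibration check in R (again with made-up data; the reduced model here is y ~ x2, so the null of no x1 effect holds by construction):

```r
# Simulate many data sets under the null (reduced model), run the
# permutation test on each, and inspect the distribution of p-values.
set.seed(1)
perm.p <- function(n = 50, nperm = 199) {
  x1 <- rnorm(n); x2 <- rnorm(n)
  y  <- 1 + 0.5 * x2 + rnorm(n)             # data generated from the reduced model
  red <- lm(y ~ x2)
  obs <- coef(lm(y ~ x1 + x2))["x1"]
  perm <- replicate(nperm, {
    ystar <- fitted(red) + sample(resid(red))
    coef(lm(ystar ~ x1 + x2))["x1"]
  })
  mean(abs(perm) >= abs(obs))               # two-sided p-value
}
pvals <- replicate(200, perm.p())
hist(pvals)   # should look roughly uniform on [0, 1] if the test is calibrated
```

If the histogram is far from uniform, something in the assumptions or the permutation scheme is off for your situation.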
Hope this helps,
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
greg.snow at imail.org
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Thomas Mang
> Sent: Thursday, January 29, 2009 3:52 PM
> To: r-help at stat.math.ethz.ch
> Subject: Re: [R] bootstrapping in regression
> Greg Snow wrote:
> > What you are describing is actually a permutation test rather than a
> bootstrap (related concepts, but with a subtle but important
> difference).
> > The way to do a permutation test with multiple x's is to fit the
> reduced model (use all x's other than x1 if you want to test x1) on the
> original data and store the fitted values and the residuals.
> > Permute the residuals (randomize their order) and add them back to
> the fitted values and fit the full model (including x1 this time) to
> the permuted data set. Do this a bunch of times and it will give you
> the sampling distribution for the slope on x1 (or whatever your set of
> interest is) when the null hypothesis that it is 0 given the other
> variables in the model is true.
> Thanks to you and Tom for the correction regarding bootstrapping vs
> permutation, and to Chuck for the cool link. Yes, of course, what I
> described was a permutation test.
> I have a question here: I am not sure I understand your 'fit the
> model ... to the permuted data set'. Am I correct that once the
> residuals of the reduced-model fit have been permuted and added to
> the fitted values, the values obtained this way (fitted + permuted
> residuals) now constitute the new y-values to which the full model
> is fitted?
> Do you know if this procedure is also valid for a mixed-effects model?
> thanks a lot,
> > Permuting just x1 only works if x1 is orthogonal to all the other
> predictors, otherwise the permuting destroys the relationship with the
> other predictors and does not do the test you want.
> > Bootstrapping depends on sampling with replacement, not permuting,
> and is used more for confidence intervals than for tests (the reference
> by John Fox given to you in another reply can help if that is the
> approach you want to take).
> > Hope this helps,