[R] How to extract coefficients from sequential (Type I) ANOVAs using lmer and lme
phillip.alday at mpi.nl
Wed Nov 29 11:21:23 CET 2017
(This time with the r-help in the recipients...)
Be careful when mixing lme4 and lmerTest together -- lmerTest extends
and changes the behavior of various lme4 functions.
From the help page for lme4's anova method (?lme4::anova.merMod):
> ‘anova’: returns the sequential decomposition of the contributions
> of fixed-effects terms or, for multiple arguments, model
> comparison statistics. For objects of class ‘lmerMod’ the
> default behavior is to refit the models with ML if fitted
> with ‘REML = TRUE’, this can be controlled via the ‘refit’
> argument. See also ‘anova’.
So lme4's anova will give you sequential tests; note, however, that lme4
won't calculate the denominator degrees of freedom for you and thus
won't give p-values. See the FAQ.
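As a minimal sketch of what that looks like (using lme4's built-in
sleepstudy dataset for illustration, not the original poster's data):

```r
library(lme4)

# a simple mixed model on the built-in sleepstudy data
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# sequential (Type I) decomposition of the fixed effects;
# note there is no 'Pr(>F)' column -- plain lme4 does not compute
# denominator degrees of freedom, hence no p-values
anova(m)
```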
From the help page for lmerTest's anova method (?lmerTest::anova.merModLmerTest):
> ## S4 method for signature 'merModLmerTest'
> anova(object, ..., ddf = "Satterthwaite", type = 3)
>
> type: type of hypothesis to be tested. Could be type = 3 or type = 2
> or type = 1 (the definition comes from SAS theory).
So lmerTest's anova by default gives you Type III ('marginal') tests,
although Type II is what actually gives you tests that respect the
Principle of Marginality; see John Fox's Applied Regression Analysis
(book) or Venables' "Exegeses on Linear Models"
(https://www.stats.ox.ac.uk/pub/MASS3/Exegeses.pdf) for more information
on that. Type I tests are the sequential tests, so with anova(model,
type = 1) you will get the sequential tests you want. lmerTest will also
approximate the denominator degrees of freedom for you (using the
Satterthwaite method by default, or the more computationally intensive
Kenward-Roger method), so you'll get p-values if that's what you want.
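Concretely, a sketch with lmerTest (again on the sleepstudy data; the
exact argument spelling may vary slightly across lmerTest versions):

```r
library(lmerTest)  # masks lme4::lmer with a version that tracks df

m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# sequential (Type I) tests with Satterthwaite denominator df (the default ddf)
anova(m, type = 1)

# Kenward-Roger is more computationally intensive (and needs pbkrtest):
# anova(m, type = 1, ddf = "Kenward-Roger")
```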
Finally, it's important to note two things:
1. The 'type' argument for nlme's summary method doesn't actually do
anything (see ?nlme::summary.lme): it's just passed on to the 'print'
method, where it's silently ignored. The 'type' of sums of squares is an
ANOVA thing; the closest correspondence in terms of model coefficients
is the coding of your categorical contrasts. See the literature
mentioned above for more details, as well as Dale Barr's discussion of
simple vs. main effects in regression models.
(?nlme::anova.lme, by contrast, does indeed have a 'type' argument.)
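To make the contrast-coding point concrete, here is a base-R sketch
(using the built-in warpbreaks data): changing the contrasts changes the
coefficients and their tests, even though the model fit is identical --
which is the regression-side analogue of choosing a 'type' of sums of
squares.

```r
# default treatment (dummy) coding: coefficients are 'simple' effects
# at the reference levels of the other factors
m_treat <- lm(breaks ~ wool * tension, data = warpbreaks)

# sum-to-zero coding: coefficients are closer to 'main' effects,
# averaged over the levels of the other factors
m_sum <- lm(breaks ~ wool * tension, data = warpbreaks,
            contrasts = list(wool = contr.sum, tension = contr.sum))

coef(m_treat)           # different coefficients...
coef(m_sum)
all.equal(fitted(m_treat), fitted(m_sum))  # ...but the same fitted model
```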
2. It is possible for the sequential tests and the marginal tests to
yield the same results. Again, see the literature above. You have no
interactions in your model and only continuous (i.e. non-categorical)
predictors, so if they're orthogonal, the sequential and marginal tests
will be numerically the same, even though they test different
hypotheses. (See section 5.2 of the Exegeses, starting on page 14; the
sequential tests are the "eliminating" tests, while the marginal tests
are the "ignoring" tests in that explanation.)
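A toy demonstration of that point with simulated data and plain lm():
when the continuous predictors are constructed to be orthogonal, the
sequential F tests from anova() coincide numerically with the marginal
("each term last") F tests from drop1().

```r
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- residuals(lm(rnorm(n) ~ x1))   # x2 constructed orthogonal to x1
y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)

m <- lm(y ~ x1 + x2)

anova(m)              # sequential (Type I) F tests
drop1(m, test = "F")  # marginal F tests -- same F values here

# with correlated predictors the two would differ, because the
# sequential test for x1 'ignores' x2 while the marginal test
# 'eliminates' it
```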
On 28/11/17 12:00, r-help-request at r-project.org wrote:
> I want to run sequential ANOVAs (i.e. Type I sums of squares), and am trying to get results including ANOVA tables and associated coefficients for predictor variables (I am using R 3.4.2). I think the ANOVA tables look right, but believe the coefficients are wrong. Specifically, it looks like the coefficients are from an ANOVA with "marginal" (Type III) sums of squares. I have tried both lme (nlme package) and lmer (lme4 + lmerTest packages). Examples of the results are below:
> I believe the results from summary() are for "marginal" instead of "sequential" ANOVA because the p-value (i.e., 0.237 for narea) in summary is identical to those in the tables from "marginal". I also used lmer in the lme4 package and found the same results (the summary() results look like they are from "marginal").
> Can anybody tell me how to get coefficients for "sequential" ANOVAs? Thank you.