[R] lme mildly blowing up

Jim Battista jsb0027 at unt.edu
Tue Oct 21 21:05:53 CEST 2003


I'm running a hierarchical linear model of legislative committee
representativeness (committees nested within chambers) using lme.  It's a
simple random-intercept-as-outcome model.
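For reference, the call is roughly the following ('repr' and 'comms' are
placeholders for my actual response variable and data frame, and I've
trimmed any committee-level terms):

library(nlme)

## committees as observations, with a random intercept for each
## chamber (stateid); 'repr' and 'comms' are stand-in names
fit <- lme(repr ~ effpty + pay + totalsta + days,
           random = ~ 1 | stateid, data = comms)
summary(fit)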

When I run it, everything converges and I get results like the following,
trimmed for brevity to the group- (i.e., chamber-) level variables.  The
dependent variable is bounded between zero and one.

                  Value Std.Error  DF    t-value p-value
effpty       0.29241814 0.1709523   5  1.7105246  0.1479
pay         -0.00395368 0.0051280   5 -0.7710054  0.4755
totalsta    -0.10386395 0.1466623   5 -0.7081842  0.5105
days         0.24975346 0.1626935   5  1.5351167  0.1853

Random effects:
 Formula: ~1 | stateid
        (Intercept)  Residual
StdDev:  0.01543434 0.3163471

BUT the intervals around the random effects are

                       lower       est.    upper
sd((Intercept)) 2.440728e-08 0.01543434 9760.155

Which is obviously nonsense.
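
If I'm reading intervals() right, those bounds come from a normal
approximation on the log(sd) scale, and they are indeed symmetric about
the estimate there:

## the interval is symmetric about the estimate on the log scale,
## i.e. the standard error of log(sd((Intercept))) is enormous:
log(0.01543434) - log(2.440728e-08)   # 13.357
log(9760.155)   - log(0.01543434)     # 13.357

So I take it the (restricted) likelihood is essentially flat in the
random-intercept sd.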

Now, I know some of what's going on here.  The model is overparameterized,
and I should be dropping some group-level variables.  And if I do that,
everything is kosher, and none of these variables matter there either.
OTOH, I can also get everything to come out apparently-kosher if I
estimate on a theoretically-relevant reduced dataset -- that is, if I drop
some observations (for committees nobody would ever care about), it
behaves again.
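
For concreteness, the reduced fits look like this (same placeholder names
as above):

## dropping most of the chamber-level terms, everything behaves:
fit.red <- lme(repr ~ effpty, random = ~ 1 | stateid, data = comms)
intervals(fit.red)   # sensible bounds on sd((Intercept)) here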

What I'm wondering is:

(1)  Is the model basically running home to OLS-or-very-close-to-it?  If
     I estimate the same model using Stata's xtreg, it returns a sigma_u
     of zero, and if I estimate it with HLM, it generates a bad tau and
     tries again with one that is positive but weensy.  Are the algorithms
     in lme doing the same thing here, or something closely analogous?
     Generating an impermissible negative that gets truncated to zero,
     substituting a very small positive number for it, or generating that
     very small number directly?  (One check I can think of is sketched
     after these questions.)
(2)  Assuming that the model is degenerating into OLS or something an
     epsilon away from OLS, can I still make the following inferences?
     (a)  The std errors on group-level variables are underestimated,
          since I'm running OLS(-like) on grouped data.
     (b)  Therefore if they're not significant here, I can treat them
          as not significant (assuming I've checked for / dealt with
          collinearity problems, etc.).
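
The check I have in mind for (1), again with the placeholder names from
above: fit the same fixed effects with gls(), i.e. with no random
intercept at all, and see whether the likelihoods coincide.

## both fits use REML by default, so this is a likelihood-ratio test of
## sd((Intercept)) = 0 (a boundary test, so the p-value is conservative)
fit.gls <- gls(repr ~ effpty + pay + totalsta + days, data = comms)
anova(fit, fit.gls)   # LR statistic near zero => lme has collapsed to OLS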

Thanks much,

Jim

James S. Coleman Battista
Dept. of Political Science, Univ. of North Texas
battista at unt.edu    (940)565-4960
-----
Pinky, are you pondering what I'm pondering?
I think so, but where will we find an open tattoo parlor at this time
of night?



