[R] conservative robust estimation in (nonlinear) mixed models

Spencer Graves spencer.graves at pdf.com
Thu Mar 23 21:33:51 CET 2006


	  I know of two fairly common models for robust methods.  One is the 
contaminated normal that you mentioned.  The other is Student's t.  A 
normal plot of the data or of residuals will often indicate whether the 
assumption of normality is plausible or not;  when the plot indicates 
problems, it will often also indicate whether a contaminated normal or 
Student's t would be better.
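
	  For example, a quick R sketch of that kind of check, using 
simulated data (the sample size and object names are just illustrative):

## Compare a normal Q-Q plot of clean data with one of data containing
## 5% contamination with 3 times the standard deviation.
set.seed(1)
n <- 200
y.clean  <- rnorm(n)
contam   <- rbinom(n, 1, 0.05)                  # 5% contamination indicator
y.contam <- rnorm(n, sd = ifelse(contam, 3, 1)) # inflated sd for contaminated points

op <- par(mfrow = c(1, 2))
qqnorm(y.clean,  main = "Normal");          qqline(y.clean)
qqnorm(y.contam, main = "5% contaminated"); qqline(y.contam)
par(op)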

	  Using Student's t introduces one additional parameter.  A 
contaminated normal would introduce two; however, in many applications 
the contamination proportion (or its logit) will be highly correlated 
with the ratio of the contamination standard deviation to that of the 
central portion of the distribution.  Thus, it is often wise to fix the 
ratio of the standard deviations and estimate only the contamination 
proportion.
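
	  A rough R sketch of that parameterization (the function name and 
the fixed ratio of 3 are just illustrative, not a recommendation):

## Contaminated-normal negative log-likelihood with the ratio of the
## standard deviations fixed, so that only mu, sigma and the
## contamination proportion (on the logit scale) are estimated.
negll.contam <- function(par, y, ratio = 3) {
  mu    <- par[1]
  sigma <- exp(par[2])     # log scale keeps sigma positive
  p     <- plogis(par[3])  # logit scale keeps the proportion in (0, 1)
  dens  <- (1 - p) * dnorm(y, mu, sigma) +
                p  * dnorm(y, mu, ratio * sigma)
  -sum(log(dens))
}
## e.g.  optim(c(mean(y), log(sd(y)), qlogis(0.05)), negll.contam, y = y)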

	  hope this helps.
	  spencer graves

dave fournier wrote:

> Conservative robust estimation methods do not appear to be
> currently available in the standard mixed model methods for R,
> where by conservative robust estimation I mean methods which
> work almost as well as the methods based on assumptions of
> normality when the assumption of normality *IS* satisfied.
> 
> We are considering adding such a conservative robust estimation option
> for the random effects to our AD Model Builder mixed model package,
> glmmADMB, for R, and perhaps extending it to do robust estimation for 
> linear mixed models at the same time.
> 
> An obvious candidate is to assume something like a mixture of
> normals. I have tested this in a simple linear mixed model
> using 5% contamination by a normal with 3 times the standard
> deviation, which seems to be
> a common assumption. Simulation results indicate that when the
> random effects are normally distributed this estimator is about
> 3% less efficient, while when the random effects are contaminated with
> 5% outliers  the estimator is about 23% more efficient, where by 23%
> more efficient I mean that one would have to use a sample size about
> 23% larger to obtain the same size confidence limits for the
> parameters.
> 
> Question:
> 
> I wonder if there are other distributions besides a mixture of normals
> which might be preferable. Three things to keep in mind are:
> 
>     1.)  It should be likelihood based so that the standard likelihood
>           based tests are applicable.
> 
>     2.)  It should work well when the random effects are normally
>          distributed, so that things that already work don't get
>          broken.
> 
>     3.)  In order to implement the method efficiently it is necessary to
>          be able to produce code for calculating the inverse of the
>          cumulative distribution function. This enables one to extend
>          methods based on the Laplace approximation for the random
>          effects (i.e. the Laplace approximation itself, adaptive
>          Gaussian integration, adaptive importance sampling) to the new
>          distribution.
> 
>       Dave
>
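
	  Regarding point 3.), the inverse CDF of such a normal mixture has 
no closed form, but it can be obtained by numerical inversion.  A rough 
R sketch for the 5% contamination / 3 standard deviations case (the 
names and search bounds are just illustrative):

## CDF of the contaminated normal: (1 - p)*N(0, 1) + p*N(0, ratio^2).
pmix <- function(q, p = 0.05, ratio = 3)
  (1 - p) * pnorm(q) + p * pnorm(q, sd = ratio)

## Quantile function by root-finding on the CDF.
qmix <- function(u, p = 0.05, ratio = 3)
  sapply(u, function(ui)
    uniroot(function(q) pmix(q, p, ratio) - ui,
            lower = -10 * ratio, upper = 10 * ratio)$root)

## check: qmix(pmix(1.5)) should be close to 1.5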



