[R] Comparing model fits for NLME when models are not nested

Robert A LaBudde ral at lcfltd.com
Fri Jun 12 18:05:54 CEST 2009


At 05:42 AM 6/12/2009, Lindsay Banin wrote:
>Hi there,
>
>I am looking to compare nonlinear mixed effects models that have 
>different nonlinear functions (different types of growth 
>curve) embedded. Most of the literature I can find focuses on 
>comparing nested models with likelihood ratios and AIC. Is there a 
>way to compare model fits when models are not nested, i.e. when the 
>nonlinear functions are not the same?

Transform back into the original units, if necessary, and compare 
the distributions and summary statistics of the residuals (observed 
minus fitted values) in the original units.

This is not a significance test, but rather a measure of which model 
gives the better approximation to the observed data.

Types of measures: 1) RMS residual, 2) maximum absolute residual, 
3) mean absolute residual.
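
For example, a rough sketch in R of what I mean (here 'fit1' and 
'fit2' are hypothetical nlme/nls fits of two different growth curves 
to the same response 'y'; if one model was fitted on a transformed 
scale, back-transform its fitted values first):

## residual summaries in original units
resid_stats <- function(observed, fitted) {
  r <- observed - fitted
  c(rms     = sqrt(mean(r^2)),   # root-mean-square residual
    maxabs  = max(abs(r)),       # maximum absolute residual
    meanabs = mean(abs(r)))      # mean absolute residual
}

## e.g., if fit2 was fitted to log(y):
## rbind(model1 = resid_stats(y, fitted(fit1)),
##       model2 = resid_stats(y, exp(fitted(fit2))))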

In my opinion, models should be chosen based on the principles of 
causality (theory), degree of approximation and parsimony. None of 
these involve significance testing.

Choosing models based on significance testing (which merely tells 
you whether the experiment is large enough to distinguish an effect 
clearly) amounts to admitting that you have no subject-matter 
expertise, and must therefore fall back on the crumbs of 
significance testing for glimmers of understanding of what's going 
on. (Much like stepwise regression techniques.)

As an example, suppose you have two models, one with 5 parameters 
and one with only 1. The RMS residual errors for the two models are 
0.50 and 0.53, respectively. You have a very large study, and all 4 
additional parameters are significant at p = 0.01 or less. What 
should you do? What I would do is select the 1-parameter model as my 
baseline model. It will be easy to interpret physically, will 
generalize to other studies much better (it is stable), and is 
almost identical in degree of approximation to the 5-parameter 
model. I would be excited that a one-parameter model could do this. 
The fact that the other 4 parameters have detectable effects at a 
very low significance level is not important for modeling this 
study, but may conceivably have some special significance on their 
own for future investigations.
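
A toy sketch of that kind of comparison in R (simulated data and 
simple exponential-rise curves of my own invention, not your growth 
models), just to show how the RMS residuals line up against the 
extra parameter:

set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- 5 * (1 - exp(-0.4 * x)) + rnorm(200, sd = 0.5)

## 1-parameter curve (asymptote fixed) vs. 2-parameter curve
fit1 <- nls(y ~ 5 * (1 - exp(-k * x)), start = list(k = 0.3))
fit2 <- nls(y ~ A * (1 - exp(-k * x)), start = list(A = 4, k = 0.3))

rms <- function(fit) sqrt(mean(resid(fit)^2))
c(one.par = rms(fit1), two.par = rms(fit2))
## the two RMS values will typically be very close; that small
## difference in approximation, weighed against the cost of the
## extra parameter, is what I would judge the models on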

So not being able to do significance testing on non-nested models is 
not that big a loss, in my opinion; such tests encourage wrong 
thinking anyway.

What I've expressed as an opinion here (which I am sure some will 
disagree with) is similar to the philosophy of choosing the number 
of principal components to use, or the number of latent factors in 
factor analysis. What investigation do people ever do on the 
small-eigenvalue principal components, even when their contributions 
are "statistically significant"?

================================================================
Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: ral at lcfltd.com
Least Cost Formulations, Ltd.            URL: http://lcfltd.com/
824 Timberlake Drive                     Tel: 757-467-0954
Virginia Beach, VA 23464-3239            Fax: 757-467-2947

"Vere scire est per causas scire"



