[R] OT: model diagnostics in the published literature

Greg Snow Greg.Snow at imail.org
Fri Sep 10 21:01:56 CEST 2010


My experience is that most medical journals (and probably others as well, but I am most familiar with the medical journals) have word or page limits on articles.  Diagnostic plots and tests that merely confirm what you expected and say that it is OK to use your model are not exciting enough to include.  And some of those plots/tests tend to confuse non-statisticians rather than help.  Have you ever given a QQ-plot of the residuals to a client to show that the normal approximation is OK?  I made this mistake a few times and ended up having to explain it over and over again.
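For anyone who has not run into this, here is a minimal sketch in R of the kind of plot I mean; the data and the model below are simulated purely for illustration, not from any real analysis:

# Simulated data and a hypothetical linear model, for illustration only
set.seed(42)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)
fit <- lm(y ~ x)

# QQ-plot of the residuals -- the plot I would hand a client (and regret)
qqnorm(resid(fit))
qqline(resid(fit))

# The standard set of four diagnostic plots for an lm fit
par(mfrow = c(2, 2))
plot(fit)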

Most papers that I have been involved with end up including less than half of the analyses that I actually did; only the most interesting results make it in.  Sometimes a reviewer will ask about the checks on the model assumptions, and we will send them the results so they can see the model is reasonable, but rarely does any of that make it into the paper itself, though I think it would be better if more reviewers asked.
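What we send back is often just a line or two of output, something like the following (continuing the hypothetical fit above; note that lmtest is a CRAN package, not base R):

# Formal checks one might report back to a reviewer
shapiro.test(resid(fit))   # Shapiro-Wilk test of residual normality
# lmtest::bptest(fit)      # Breusch-Pagan test for heteroscedasticity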

The drawback is that when you read a paper it is difficult (or impossible) to tell whether the authors did all the checks and the results were as expected, or whether they did not do them at all and there could be major problems.

One bright spot for the future is that more journals now allow online supplements, where all the details that don't make it into the main paper can be provided for the few who are interested.

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111


> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On Behalf Of Christopher W. Ryan
> Sent: Thursday, September 09, 2010 8:34 PM
> To: r-help at r-project.org
> Subject: [R] OT: model diagnostics in the published literature
> 
> This is a more general statistical question, not specific to R:
> 
> As I move through my masters curriculum in statistics, I am becoming
> more and more attuned to issues of model fit and diagnostics (graphical
> methods, AIC, BIC, deviance, etc.). As my regression professor always
> likes to say, only draw substantive conclusions from valid models.
> 
> Yet in published articles in my field (medicine), I rarely see any
> explicit description of whether, and if so how, model fit was assessed
> and assumptions checked. Mostly the results sections are all about
> hypothesis testing on model coefficients.
> 
> Is this common in other disciplines? Are there fields of study in which
> it is customary to provide a discussion of model adequacy, either in
> the text or perhaps in an online appendix?
> 
> And if that discussion is not provided, what, if anything, can one
> conclude about whether, and how well, it was done? Is it sort of taken
> as a given that those diagnostic checks were carried out? Do journal
> editors often ask?
> 
> Thanks for your thoughts.
> 
> --Chris Ryan
> Clinical Associate Professor of Family Medicine
> SUNY Upstate Medical University Clinical Campus at Binghamton
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
