[R] impute missing values in correlated variables: transcan?

Frank E Harrell Jr f.harrell at vanderbilt.edu
Tue Nov 30 20:21:26 CET 2004


Jonathan Baron wrote:
> I would like to impute missing data in a set of correlated
> variables (columns of a matrix).  It looks like transcan() from
> Hmisc is roughly what I want.  It says, "transcan automatically
> transforms continuous and categorical variables to have maximum
> correlation with the best linear combination of the other
> variables." And, "By default, transcan imputes NAs with "best
> guess" expected values of transformed variables, back transformed
> to the original scale."
> 
> But I can't get it to work.  I say
> 
> m1 <- matrix(1:20+rnorm(20),5,)  # four correlated variables
> colnames(m1) <- paste("R",1:4,sep="")
> m1[c(2,19)] <- NA                # simulate some missing data
> library(Hmisc)
> transcan(m1,data=m1)
> 
> and I get
> 
> Error in rcspline.eval(y, nk = nk, inclx = TRUE) : 
>       fewer than 6 non-missing observations with knots omitted
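For comparison, a minimal sketch of a call that does run, using transcan's
formula interface on a larger, purely illustrative simulated sample (the
variable names follow the example above; the sample size of 100, the seed,
and which cells are set to NA are arbitrary):

  library(Hmisc)
  set.seed(1)
  n <- 100
  d <- data.frame(R1 = 1:n + rnorm(n))
  d$R2 <- d$R1 + rnorm(n)        # correlated columns
  d$R3 <- d$R1 + rnorm(n)
  d$R4 <- d$R1 + rnorm(n)
  d$R2[c(3, 17)] <- NA           # simulate some missing data
  t1 <- transcan(~ R1 + R2 + R3 + R4, data = d, imputed = TRUE)
  t1$imputed$R2                  # "best guess" imputations on the original scale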

Jonathan - you would need many more observations to fit the flexible 
additive models that transcan uses.  Also note that single imputation 
has problems; if you had more data, you might want to consider multiple 
imputation, as done by the Hmisc aregImpute function.
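A minimal sketch of that multiple-imputation route, again on illustrative
simulated data (the number of imputations and all other settings here are
just examples, not recommendations):

  library(Hmisc)
  set.seed(2)
  n <- 100
  d <- data.frame(R1 = 1:n + rnorm(n))
  d$R2 <- d$R1 + rnorm(n)
  d$R3 <- d$R1 + rnorm(n)
  d$R4 <- d$R1 + rnorm(n)
  d$R2[c(3, 17)] <- NA                      # simulate some missing data
  a <- aregImpute(~ R1 + R2 + R3 + R4, data = d, n.impute = 5)
  a$imputed$R2                              # 5 imputed draws per missing value
  # analyses can then be combined across the imputations, e.g.
  f <- fit.mult.impute(R1 ~ R2 + R3 + R4, lm, a, data = d)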

Frank

> 
> I've tried a few other things, but I think it is time to ask for
> help.
> 
> The specific problem is a real one.  Our graduate admissions
> committee (4 members) rates applications, and we average the
> ratings to get an overall rating for each applicant.  Sometimes
> one of the committee members is absent, or late; hence the
> missing data.  The members differ in the way they use the rating
> scale, in both slope and intercept (if you regress each on the
> mean).  Many decisions end up depending on the second decimal
> place of the averages, so we want to do better than just averaging
> the non-missing ratings.
> 
> Maybe I'm just not seeing something really simple.  In fact, the
> problem is simpler than transcan assumes, since we are willing to
> assume linearity of the regression of each variable on the other
> variables.  Other members proposed solutions that assumed this,
> but those solutions did not take into account that missing data at
> the high or low end of each variable (each member's ratings)
> would change its mean.
> 
> Jon
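
A minimal sketch of the linear, regression-based single imputation the
quoted message describes: regress the absent member's ratings on the other
members' ratings (complete cases only) and predict the missing ones.  The
rater names, the rating scale, and all numbers below are made up; with
enough applicants, aregImpute as sketched above would be the multiple-
imputation version of the same idea.

  set.seed(3)
  n <- 60                                    # illustrative number of applicants
  A <- round(runif(n, 1, 5), 1)              # rater A
  B <- 0.8 * A + 0.5 + rnorm(n, sd = 0.3)    # raters differ in slope/intercept
  C <- 1.1 * A - 0.2 + rnorm(n, sd = 0.3)
  D <- 0.9 * A + 0.3 + rnorm(n, sd = 0.3)
  ratings <- data.frame(A, B, C, D)
  ratings$D[c(4, 25)] <- NA                  # rater D missed two applications

  miss <- is.na(ratings$D)
  fit  <- lm(D ~ A + B + C, data = ratings)  # fitted on complete cases
  ratings$D[miss] <- predict(fit, newdata = ratings[miss, ])
  overall <- rowMeans(ratings)               # overall rating per applicant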


-- 
Frank E Harrell Jr   Professor and Chair           School of Medicine
                      Department of Biostatistics   Vanderbilt University



