[R] Error in predict.lm() for models without intercept?

isabella at ghement.ca
Wed Sep 17 00:51:39 CEST 2014


Hi everyone, 

It appears my R code didn't come through the first time (thanks for letting me know, Ista). Here is my message again: 

Could there be an error in the predict() function in R for models that omit the intercept but include one or more predictors?  
When predict() is used to obtain the standard errors of the fitted values from a linear model (lm) fitted without an 
intercept, the standard errors seem to behave like those from a model that includes an intercept, which should not happen.  

Here is an attempt to produce standard errors (and hence confidence bands) for the fitted values in a linear model with a 
single predictor and no intercept using R code:

## simulate a response y and two predictors x and z 

set.seed(1)  ## fix the RNG seed so the simulation is reproducible

x <- rnorm(100,mean=0, sd=1)

z <- runif(100,min=-1,max=1)

y <- 1*x + 2*z + rnorm(100,mean=0, sd=1)


## fit a linear model with no intercept but with one predictor 

mod <- lm(y ~ 0 + x)

## compute confidence bands (i.e., fitted values +/- 1.96 standard errors of fitted values)

x.grid <- seq(from=ceiling(min(x)),to=floor(max(x)),by=0.01)

conf.band.x <- predict(mod,newdata=data.frame(x = x.grid), 
                           interval="confidence")

## display confidence bands

conf.band.x <- data.frame(lwr=conf.band.x[,"lwr"],
                              fit=conf.band.x[,"fit"],
                              upr=conf.band.x[,"upr"])

matplot(x=x.grid, y=conf.band.x, type="l", xlab="x", ylab="y")
abline(v=mean(x),lty=3,col="magenta")
title("Effect of x on y")

According to statistical theory, in a model with no intercept and a single predictor, the standard error of a fitted value 
should be directly proportional to |x|, the absolute value of the predictor at which it is evaluated: at x = 0 the standard 
error should be exactly zero, and it should grow as |x| grows.  The plot produced by matplot suggests this is not the case - 
the standard errors appear to increase as one moves away from the average value of x.  We would expect this behaviour if the 
model included an intercept, which is not the case here.  
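One way to test this directly is to compare predict()'s se.fit against the closed-form standard error that the theory above 
implies.  This is a sketch, not part of the original post: the seed, the choice of evaluation points x0, and the comparison 
itself are my own assumptions.

```r
## Hypothetical check: for y ~ 0 + x, the fitted value at x0 is
## beta.hat * x0, so its standard error should be |x0| * se(beta.hat).
set.seed(1)                        # assumed seed, for reproducibility only
x <- rnorm(100, mean = 0, sd = 1)
y <- 1 * x + rnorm(100, mean = 0, sd = 1)
mod <- lm(y ~ 0 + x)

x0 <- c(-2, -1, 0, 1, 2)           # arbitrary evaluation points
se.predict <- predict(mod, newdata = data.frame(x = x0), se.fit = TRUE)$se.fit
se.theory  <- abs(x0) * summary(mod)$coefficients["x", "Std. Error"]

## if predict() follows the theory, the two columns agree, with se = 0 at x0 = 0
cbind(x0, se.predict, se.theory)
```

If the two columns agree, predict() is computing |x0| * se(beta.hat) as the theory dictates, and the bands should pinch to a 
point at x = 0 rather than at mean(x).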

Here is some R code for looking at standard errors of fitted values when the model includes no intercept and two predictors x 
and z.  In this code, the value of the predictor z is set to its average level. 

## linear model with no intercept but with two predictors 

mod <- lm(y ~ 0 + x + z)

x.grid <- seq(from=ceiling(min(x)),to=floor(max(x)),by=0.01)

conf.band.x <- predict(mod,newdata=data.frame(x = x.grid, z = mean(z)), 
                           interval="confidence")

conf.band.x <- data.frame(lwr=conf.band.x[,"lwr"],
                              fit=conf.band.x[,"fit"],
                              upr=conf.band.x[,"upr"])

matplot(x=x.grid, y=conf.band.x, type="l", xlab="x", ylab="y")
abline(v=mean(x),lty=3,col="magenta")
title("Partial Effect of x on y (obtained by setting z to its average level)")

Again, the standard errors behave as though they came from a model with an intercept: they flare out as one moves away from 
the average value of the predictor x.
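For the two-predictor case the same kind of check is possible via the general formula: the variance of a fitted value at a 
new design row X0 is X0 %*% vcov(mod) %*% t(X0).  Again a sketch, with the seed and evaluation points assumed by me:

```r
## Hypothetical check: reproduce predict()'s se.fit from vcov(mod)
set.seed(1)                        # assumed seed, for reproducibility only
x <- rnorm(100, mean = 0, sd = 1)
z <- runif(100, min = -1, max = 1)
y <- 1 * x + 2 * z + rnorm(100, mean = 0, sd = 1)
mod <- lm(y ~ 0 + x + z)

new <- data.frame(x = c(-2, 0, 2), z = mean(z))
X0  <- cbind(x = new$x, z = new$z)  # model matrix for the new data (no intercept)
se.manual  <- sqrt(diag(X0 %*% vcov(mod) %*% t(X0)))
se.predict <- predict(mod, newdata = new, se.fit = TRUE)$se.fit
cbind(new, se.manual, se.predict)
```

Note that because z is held at mean(z), which is not exactly zero, the variance at x = 0 is not zero, so the bands for the 
two-predictor model need not pinch to a point there.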

I would very much appreciate any clarifications or suggestions for how to fix this problem. 

If the problem is confirmed, it appears to carry over to the effects package in R as well, since that package relies on the 
predict() function to construct plots similar to the ones produced by matplot above.  

Many thanks, 

Isabella 


Isabella R. Ghement, Ph.D. 
Ghement Statistical Consulting Company Ltd. 
301-7031 Blundell Road, Richmond, B.C., Canada, V6Y 1J5 
Tel: 604-767-1250 
Fax: 604-270-3922 
E-mail: isabella at ghement.ca 
Web: www.ghement.ca 


