[R] Structural Equation Models (SEM)

John Fox jfox at mcmaster.ca
Wed Dec 16 15:25:36 CET 2009


Dear Jarrett et al.,

I apologize for missing Ralf Finne's original posting -- I was out of town
at the time.

A few comments on this code:

(1) You should be able to get standard errors from the GLS solution, but
note that a factor of 0.5 is missing from the definition of objective.2;
this doubles the objective (and hence the Hessian), which deflates the
standard errors by a factor of sqrt(2), and unfortunately objective.2 is
what is used by default. Other output from summary.sem() won't be
meaningful.
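
A sketch of the fix, using the variable names from the code quoted below:
restore the factor of 0.5 in the GLS objective.2 so that it matches
objective.1:

    f <- 0.5 * sum(diag(((S - C) %*% Sinv) %*% ((S - C) %*% Sinv)))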

(2) The difference between objective.1 and objective.2 in the original code
for sem.default() is that the latter supplies an analytic gradient. The
implementation of objective.2 here for the GLS and ULS fits doesn't supply
a gradient, so it would be clearer simply to set objective.2 <- objective.1
in these cases; better still would be to set analytic.gradient to FALSE, as
sketched below. Of course, one could work out the gradient for the GLS and
ULS estimators, but that's probably not worth the effort.
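
A minimal sketch of that suggestion, to be placed where the patch below
defines the objectives (analytic.gradient is the flag that sem.default()
already uses to decide whether an analytic gradient is available):

    if (fit == "gls" || fit == "uls") {
        objective.2 <- objective.1    # no analytic gradient is supplied
        analytic.gradient <- FALSE    # fall back on numerical derivatives
    }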

(3) Most of the output from summary(), etc., for the ULS fit, including the
standard errors, won't be sensible. The ULS "fitting function" never made
sense to me, since it isn't invariant with respect to rescaling of the
observed variables, but perhaps I'm missing something.
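
To illustrate the lack of scale invariance, here is a small self-contained
example (the matrices are made up): rescaling one observed variable changes
the ULS criterion by a factor that depends on S and C, so the minimizer
shifts with the units of measurement:

    uls <- function(S, C) 0.5 * sum(diag((S - C) %*% (S - C)))
    S <- matrix(c(1, 0.5, 0.5, 1), 2, 2)      # "sample" covariance matrix
    C <- matrix(c(1.1, 0.4, 0.4, 0.9), 2, 2)  # "implied" covariance matrix
    D <- diag(c(10, 1))                       # rescale variable 1 by a factor of 10
    uls(S, C)                                 # 0.02
    uls(D %*% S %*% D, D %*% C %*% D)         # 51.005, not simply a rescaled 0.02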

(4) I think that it would make most sense to save the fitting function in
the object returned by sem(), and then to tailor the output of other
functions such as summary.sem() and vcov.sem() to each case.
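
A hypothetical sketch of the idea (the field name criterion and the test
below are my inventions, not the package's actual API): record which
fitting function was used in the object that sem() returns, and have the
methods consult it:

    result$criterion <- fit    # "ml", "gls", or "uls"

    summary.sem <- function(object, ...) {
        if (object$criterion == "uls")
            warning("standard errors and fit statistics are not meaningful for ULS")
        # ... existing summary computations ...
    }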

Regards,
 John


> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
> On Behalf Of Jarrett Byrnes
> Sent: December-15-09 5:07 PM
> To: r-help at r-project.org
> Subject: Re: [R] Structural Equation Models (SEM)
> 
> Joerg Everman has a great solution to this.  He changed the middle of
> the sem.mod code to include a variable, fit, and then used the
> following approach around where you define the objectives:
> 
> 	if (fit=="ml") {
>     objective.1 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Cinv <- solve(C)
>         f <- sum(diag(S %*% Cinv)) + log(det(C))
>         attributes(f) <- list(C=C, A=A, P=P)
>         f
>         }
>     objective.2 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Cinv <- solve(C)
>         f <- sum(diag(S %*% Cinv)) + log(det(C))
>         grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
>         grad.A <- grad.P %*% P %*% t(I.Ainv)
>         gradient <- rep(0, t)
>         gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
>         gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
>         attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
>         f
>         }
> 		}
> 	if (fit=="gls") {
>     objective.1 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Sinv <- solve(S)
>         f <- 0.5 * sum(diag(((S - C) %*% Sinv) %*% ((S - C) %*% Sinv)))
>         attributes(f) <- list(C=C, A=A, P=P)
>         f
>         }
>     objective.2 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Sinv <- solve(S)
>         f <- sum(diag(((S - C) %*% Sinv) %*% ((S - C) %*% Sinv)))
> #        grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
> #        grad.A <- grad.P %*% P %*% t(I.Ainv)
> #        gradient <- rep(0, t)
> #        gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
> #        gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
> #        attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
>         attributes(f) <- list(C=C, A=A, P=P)
>         f
>         }
> 		}
> 	if (fit=="uls") {
>     objective.1 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Sinv <- solve(S)
>         f <- 0.5 * sum(diag((S - C) %*% (S - C)))
>         attributes(f) <- list(C=C, A=A, P=P)
>         f
>         }
>     objective.2 <- function(par){
>         A <- P <- matrix(0, m, m)
>         val <- ifelse (fixed, ram[,5], par[sel.free])
>         A[arrows.1] <- val[one.head]
>         P[arrows.2t] <- P[arrows.2] <- val[!one.head]
>         I.Ainv <- solve(diag(m) - A)
>         C <- J %*% I.Ainv %*% P %*% t(I.Ainv) %*% t(J)
>         Sinv <- solve(S)
>         f <- 0.5 * sum(diag((S - C) %*% (S - C)))
> #        grad.P <- correct * t(I.Ainv) %*% t(J) %*% Cinv %*% (C - S) %*% Cinv %*% J %*% I.Ainv
> #        grad.A <- grad.P %*% P %*% t(I.Ainv)
> #        gradient <- rep(0, t)
> #        gradient[unique.free.1] <- tapply(grad.A[arrows.1.free], ram[ram[,1]==1 & ram[,4]!=0, 4], sum)
> #        gradient[unique.free.2] <- tapply(grad.P[arrows.2.free], ram[ram[,1]==2 & ram[,4]!=0, 4], sum)
> #        attributes(f) <- list(C=C, A=A, P=P, gradient=gradient)
>         attributes(f) <- list(C=C, A=A, P=P)
>         f
>         }
> 		}
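> 
> A hypothetical call to the modified function (the fit argument follows the
> patch above and is not part of the released sem package; model, S, and N
> stand for a model specification, covariance matrix, and sample size, as
> usual for sem()):
> 
> 	sem.gls <- sem.mod(model, S, N, fit="gls")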
> 
> 
> On Dec 2, 2009, at 10:22 AM, Jeremy Miles wrote:
> 
> > In the world of SEM, GLS has pretty much fallen by the wayside - I
> > can't recall anything I've seen arguing for its use in the past 10
> > years, and I also can't recall anyone using it over ML.  The
> > recommendations for non-normal distributions tend to be robust ML or
> > robust weighted least squares.  These are more computationally
> > intensive, and I *think* that John Fox (author of sem) has written
> > somewhere that it wouldn't be possible to implement them within R
> > without using a lower-level language - or rather that it might be
> > possible, but it would be really, really slow.
> >
> > However, ML and GLS are pretty similar; if you dug around in the
> > source code, you could probably make the change (see
> > http://www2.gsu.edu/~mkteer/discrep.html for the equations; in fact
> > GLS is somewhat computationally simpler, since you don't need to
> > invert the implied covariance matrix at each iteration). That said,
> > the fact that the change is easy to make, yet no one has made it, is
> > another argument that it's not a change that needs to be made.
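> >
> > For reference, a sketch of the two discrepancy functions in R (following
> > the equations at the link above, with S the sample and C the implied
> > covariance matrix; the code elsewhere in this thread drops the constant
> > terms -log(det(S)) - nrow(S) from the ML criterion):
> >
> > F.ml <- function(S, C) {
> >     log(det(C)) + sum(diag(S %*% solve(C))) - log(det(S)) - nrow(S)
> > }
> > F.gls <- function(S, C) {
> >     0.5 * sum(diag(((S - C) %*% solve(S)) %*% ((S - C) %*% solve(S))))
> > }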
> >
> > Jeremy
> >
> >
> >
> > 2009/12/2 Ralf Finne <Ralf.Finne at novia.fi>:
> >> Hi R-colleagues.
> >>
> >> I have been using the sem() function from the sem package.  It uses
> >> maximum likelihood as its optimizing method.  According to a simulation
> >> study from Umeå, Sweden
> >> (http://www.stat.umu.se/kursweb/vt07/stad04mom3/?download=UlfHolmberg.pdf
> >> -- sorry, it is in Swedish, except for the abstract), maximum likelihood
> >> is OK for large samples and normal distributions; otherwise the SEM
> >> problem should be optimized by GLS (Generalized Least Squares).
> >>
> >>
> >> So to the question:
> >>
> >> Is there any R-function that solves SEM with GLS?
> >>
> >>
> >> Ralf Finne
> >> Novia University of Applied Science
> >> Vasa  Finland
> >>
> >
> >
> >
> > --
> > Jeremy Miles
> > Psychology Research Methods Wiki: www.researchmethodsinpsychology.com
> >
> 