[R] ML optimization question--unidimensional unfolding scaling

Peter Muhlberger pmuhl830 at gmail.com
Mon Oct 3 20:33:20 CEST 2005


I'm trying to put together an R routine to conduct unidimensional unfolding
scaling analysis using maximum likelihood.  My problem is that the ML
optimization gets stuck at latent scale points that are far from optimal.
Such a point fits one of the observed variables well but not the others, and
for the optimizer to move away from this 'local optimum' it would have to
move in a direction in which the likelihood decreases, which it won't do.
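
To make the setup concrete, here is a stripped-down sketch of the kind of
model I mean (made-up item locations, ratings, and response curve -- not my
actual routine).  A single latent point theta is fit to ratings that are
assumed to peak at known item locations:

item.loc <- c(-2, 0, 3)              # hypothetical item locations
y        <- c(1.5, 4.0, 0.5)         # hypothetical observed ratings

negll <- function(theta, peak = 5, sigma = 1) {
  mu <- peak - (theta - item.loc)^2  # single-peaked response curves
  -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))
}

# With real data a likelihood like this is typically multimodal in theta,
# so a gradient-based run can settle on the wrong side of a curve's peak.
fit <- optim(par = 3, fn = negll, method = "BFGS")
fit$par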

It's not hard to know where to look for a better value--it will be just on
the other side of the mean of a response curve.  So I can find better values,
but they need to be fed back into the ML routine for continued optimization.
The problem is that optim and nlm don't let me hand them new parameter values
mid-run, and in any event the optimizer would likely choke with parameters
jumping around.
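
Roughly what I have in mind is something like the following untested sketch,
where negll() is the placeholder objective from above and fix.stuck() stands
in for whatever rule finds the point on the other side of the curve's mean:

fix.stuck <- function(par) {
  -par                               # hypothetical: reflect across a curve mean at 0
}

par.cur <- 3
repeat {
  fit <- optim(par = par.cur, fn = negll, method = "BFGS")
  better <- fix.stuck(fit$par)       # candidate point on the other side
  if (negll(better) >= negll(fit$par)) break   # no improvement: accept fit
  par.cur <- better                  # otherwise jump and restart optim
}
fit$par

Each restart strictly improves the likelihood, so it can't cycle, but it feels
clumsy to keep cold-starting the optimizer this way.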

One solution I've thought of is to restart optim or nlm with the new values
whenever a point jumps.  Is there a good way to get optim or nlm to terminate
prematurely and return control to the calling program while retaining a copy
of the current estimates?
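
One hack I can imagine (but haven't tested) is to have the objective function
stash its latest trial point in an enclosing environment and then signal an
error that tryCatch() intercepts; some.jump.condition() below is a hypothetical
placeholder for a real test of whether the point is stuck:

some.jump.condition <- function(theta) FALSE   # placeholder: replace with a real test
last.par <- NULL

negll.traced <- function(theta) {
  last.par <<- theta                 # keep a copy of the current point
  if (some.jump.condition(theta))    # hypothetical test for a stuck point
    stop("bailing out of optim")     # caught by tryCatch() below
  negll(theta)                       # negll() from the first sketch
}

fit <- tryCatch(optim(par = 3, fn = negll.traced, method = "BFGS"),
                error = function(e) list(par = last.par, bailed = TRUE))
fit$par                              # estimates at (or near) the bail-out

Alternatively, optim() can be forced to return early via control =
list(maxit = some small number), which also hands back the current estimates
in $par.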

Perhaps ML isn't the best approach for this kind of problem.  Suggestions
welcome!

Cheers,
Peter



