R-beta: SEs for one-param MLE in R?
Douglas Bates
bates@stat.wisc.edu
21 Apr 1998 08:34:23 -0500
Martin Maechler <maechler@stat.math.ethz.ch> writes:
> With nls(.), this is quite different, I think.
> nls(.) has several nice features, (your "nls" did too!).
> Most notably, the calling syntax of nls(.) using model notation,
> is something I would want before I'd call a function "nls".
I have to admit to some bias here, being one of the people who wrote
nls for S. I find a couple of other features of nls to be very
helpful. The "plinear" algorithm, which profiles over any
conditionally linear parameters in the model, can be very effective.
It doesn't always converge in fewer iterations, but it starts at
parameter values that are much closer to the optimum and it requires
fewer starting estimates to be calculated.
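As a concrete sketch (not from the original post; the data and the
model below are invented purely for demonstration), a conditionally
linear model can be fit with the "plinear" algorithm by giving nls()
only the nonlinear part of the formula:

```r
# Hypothetical data from an exponential-decay model y = a * exp(-b * x);
# the scale parameter 'a' enters the model linearly.
set.seed(1)
x <- seq(0, 5, length.out = 20)
y <- 10 * exp(-0.7 * x) + rnorm(20, sd = 0.2)

# With algorithm = "plinear" the right-hand side gives only the
# nonlinear part of the model; the linear coefficient is profiled out,
# so a single starting estimate (for the rate b) suffices.
fit <- nls(y ~ exp(-b * x), start = list(b = 1), algorithm = "plinear")
coef(fit)  # components "b" and ".lin" (the profiled linear coefficient)
```

The fitted coefficients come back as the nonlinear parameter(s) plus a
".lin" component for each profiled linear coefficient.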
I have just finished preparing a short course on nonlinear regression
to be presented this weekend. The slides are available at
http://franz.stat.wisc.edu/pub/NLME/Kansas/
Section 4 shows the use of some of the current and new facilities in
S for nonlinear regression.
I refer in those slides to the nonlinear regression test cases at the
U.S. National Institute of Standards and Technology (NIST) web site.
It is interesting to compare how numerical analysts view nonlinear
least squares with how statisticians approach it. The NIST folks make
a big deal of using 128-bit arithmetic to produce "certified" values
of the parameter estimates. To their credit, they also give the
approximate standard errors for the estimates, but they don't seem to
notice that the estimates are things like 2.3 +/- 0.7 while they worry
about whether the 8th decimal place should be a 6 or a 7.
Also, their idea of a "difficult" problem is to take a relatively easy
model fit and multiply all the starting estimates by 100. In one
case, the BoxBOD data, where the response increases to an asymptote
and all the responses lie between 100 and 225, they start the
asymptote at 1 and find it is difficult to converge. With the
"plinear" algorithm you don't even need a starting estimate for the
asymptote.
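For concreteness, here is a sketch of that fit in R. The BoxBOD model
is y = Asym * (1 - exp(-k * x)); the data values below are transcribed
from the NIST StRD site and should be checked against the original
before being relied on:

```r
# BoxBOD data (six observations, transcribed from the NIST StRD site;
# verify against the original source before relying on them)
x <- c(1, 2, 3, 5, 7, 10)
y <- c(109, 149, 149, 191, 213, 224)

# Model: y = Asym * (1 - exp(-k * x)).  The asymptote Asym is
# conditionally linear, so with algorithm = "plinear" only the rate
# constant k needs a starting value.
fit <- nls(y ~ 1 - exp(-k * x), start = list(k = 0.5), algorithm = "plinear")
coef(fit)  # rate "k" and the profiled asymptote ".lin"
```

No starting value for the asymptote is ever supplied; it is profiled
out at each iteration.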
Those slides show the new approach to selfStart models in S. We have
found that with minimal effort it is possible to get convergence on
any of the examples on the NIST site simply by considering what the
parameters represent and by exploiting conditional (or "partial")
linearity.
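In R as it later developed, that selfStart approach is available
through model functions such as SSasympOrig, which compute their own
starting estimates. A minimal sketch (the data are BoxBOD-style
values used purely for illustration):

```r
# Self-starting asymptotic regression through the origin:
# SSasympOrig(x, Asym, lrc) models Asym * (1 - exp(-exp(lrc) * x))
# and supplies its own starting estimates, so no 'start' argument
# is needed at all.
x <- c(1, 2, 3, 5, 7, 10)
y <- c(109, 149, 149, 191, 213, 224)  # illustrative BoxBOD-style data
fit <- nls(y ~ SSasympOrig(x, Asym, lrc))
coef(fit)  # Asym is the asymptote; exp(lrc) is the rate constant
```

The self-starting function encodes exactly the kind of reasoning
described above: it uses what the parameters represent (an asymptote
and a rate) to compute its own initial values.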
For R it is still not clear to me how to manipulate the expressions
and functions. I saw some of the discussion about function closures
and why it is not easy to coerce an expression to a function, but
that is still black magic to me. I regret that I am not likely to be
able to take the time to understand it better in the near future.
-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !) To: r-devel-request@stat.math.ethz.ch
_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._