[Rd] do_optimhess vs. fdHess ...

Ben Bolker bolker at zoo.ufl.edu
Sat Jul 3 15:42:37 CEST 2004


   Quick question:
poking around and comparing the performance of do_optimhess
(C code within optim.c) and fdHess (in the nlme package), it looks
like do_optimhess evaluates an n-parameter objective function
(2*n)^2 times to get a numerical estimate of the Hessian, while
fdHess evaluates it only (n+1)*(n+2)/2 times -- and only (n^2+1)
of do_optimhess's evaluations are at unique parameter values.
Is there something I'm missing?
(My guess is that the redundancy comes from estimating the Hessian
by finite-differencing the gradient: the gradient evaluations
overlap in the parameter values at which they call the objective
function. That design would make sense if the gradient were
computed analytically, but not when it is itself obtained by
finite differencing.)
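
   For what it's worth, the counts should be checkable with an
instrumented toy objective -- e.g. this (untested) sketch, which
just records every parameter vector the function is called with:

  n <- 3
  count <- 0; pts <- list()
  fn <- function(p) {
      count <<- count + 1
      pts[[count]] <<- p
      sum((p - 1:n)^2)             ## trivial quadratic test objective
  }
  fit <- optim(rep(0, n), fn, hessian = TRUE)
  with.hess <- count; all.pts <- pts
  count <- 0
  junk <- optim(rep(0, n), fn)     ## same Nelder-Mead path, no Hessian
  with.hess - count                ## evaluations spent on the Hessian
  hess.pts <- all.pts[(count + 1):with.hess]
  length(unique(sapply(hess.pts, paste, collapse = " ")))
  library(nlme)
  count <- 0
  junk <- fdHess(fit$par, fn)
  count                            ## fdHess's evaluation count

(Running optim() a second time without hessian=TRUE isolates the
Hessian's share of the evaluations, since Nelder-Mead follows the
same deterministic path either way.)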

   The reason I ask is that I'm working with some objective
functions that are moderately slow, and the Hessian calculation
at the end can take nearly as much time (and as many function
evaluations) as finding the optimum in the first place: 64 vs. 79
function evaluations in one recent example.

    I can hack my own optim() (sketched below) that substitutes
fdHess for do_optimhess at the end when no gradient function is
defined, but I just wanted to know whether that would be a bad
idea ... (?fdHess cites a reason for its design, but I don't see
any explanation lurking in the do_optimhess code ...)
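
   The wrapper I have in mind would be something like this
(untested sketch; "optim2" is only a placeholder name):

  optim2 <- function(par, fn, gr = NULL, method = "Nelder-Mead",
                     lower = -Inf, upper = Inf, control = list(),
                     hessian = FALSE, ...) {
      ## let optim compute the Hessian itself if an analytic
      ## gradient is available; otherwise use fdHess afterwards
      res <- optim(par, fn, gr, method = method, lower = lower,
                   upper = upper, control = control,
                   hessian = hessian && !is.null(gr), ...)
      if (hessian && is.null(gr)) {
          require(nlme)
          res$hessian <- fdHess(res$par, fn, ...)$Hessian
      }
      res
  }

That leaves the analytic-gradient case on optim's own code path and
only swaps in fdHess when the gradient would otherwise have to be
finite-differenced itself.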

   [This is in R 1.9.0, but I don't see anything in the NEWS
file for 1.9.1 indicating that this has changed ...]

  thanks,
    Ben Bolker


