[R] maximum likelihood, 1st and 2nd derivative

Ben Bolker bolker at zoo.ufl.edu
Thu Jan 11 23:03:40 CET 2007


francogrex <francogrex <at> mail.com> writes:

> [SNIP]
> This maximisation involves a search in five-dimensional
> parameter space {θ: α1, α2, β1, β2, P} for the vector that maximises
> the likelihood, as evidenced by the first and second derivatives of
> the function being zero. The likelihood is
>
>   L(θ) = Π_ij { P f(N_ij; α1, β1, E_ij) + (1-P) f(N_ij; α2, β2, E_ij) }
>
> This involves millions of calculations. The computational procedures
> required for these calculations are based on the Newton-Raphson
> method. This is an old calculus-based technique that was devised to
> find the roots of an equation (e.g. the values of the independent
> variable (e.g. x) for which the value of the function (e.g. f(x))
> equals zero)."
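
  For concreteness, that likelihood is straightforward to code in R
and hand to optim().  A sketch only -- the data below are made up,
and f() is a placeholder since the excerpt doesn't say what the
component density is (I've assumed a negative binomial); substitute
your own:

set.seed(1)
E <- runif(200, 0.5, 5)        ## made-up "expected counts"
N <- rpois(200, lambda = E)    ## made-up observed counts

## placeholder component density -- an assumption, not from the excerpt
f <- function(N, alpha, beta, E)
  dnbinom(N, size = alpha, prob = beta / (beta + E))

negloglik <- function(theta, N, E) {
  a1 <- theta[1]; b1 <- theta[2]
  a2 <- theta[3]; b2 <- theta[4]; P <- theta[5]
  ## pmax() guards against taking log(0) if a term underflows
  -sum(log(pmax(P * f(N, a1, b1, E) + (1 - P) * f(N, a2, b2, E),
                .Machine$double.xmin)))
}

## minimizing the negative log-likelihood maximizes L(theta);
## L-BFGS-B keeps the alphas and betas positive and P in (0,1)
fit <- optim(c(1, 1, 2, 2, 0.5), negloglik, N = N, E = E,
             method = "L-BFGS-B",
             lower = rep(1e-6, 5), upper = c(rep(Inf, 4), 1 - 1e-6))
fit$par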

  I'm sure someone will correct me if I'm wrong, but the statement
about the derivatives seems wrong to me.  We only want the first
derivatives to be zero; at a maximum the second derivatives should be
negative (a negative definite Hessian), not zero.  It wouldn't be
impossible for second and higher derivatives to be zero as well, but
it would be somewhat pathological (a degenerate, flat maximum).
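
  A quick toy illustration (a Poisson rate, where the MLE is known in
closed form: the sample mean):

set.seed(1)
x <- rpois(100, lambda = 3)
loglik <- function(lambda) sum(dpois(x, lambda, log = TRUE))
lambda.hat <- mean(x)    ## closed-form MLE
h <- 1e-3                ## central finite differences
(loglik(lambda.hat + h) - loglik(lambda.hat - h)) / (2 * h)  ## ~ 0
(loglik(lambda.hat + h) - 2 * loglik(lambda.hat) +
   loglik(lambda.hat - h)) / h^2    ## negative (about -n/lambda.hat)
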
  While optimization will be faster and more stable if you can
compute the first derivatives (the _gradient_) analytically (R has
the D() and deriv() functions for doing so), R will compute the
gradient numerically by finite differences if you don't supply one.
See ?optim.
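
  For example (an exponential likelihood purely as a stand-in; the
same recipe applies to your mixture, though deriv() only handles
fairly simple expressions):

set.seed(1)
y <- rexp(50, rate = 2)
nll <- function(p) -sum(dexp(y, rate = p, log = TRUE))

## analytic gradient of one term of the negative log-likelihood,
## via deriv(); D() works similarly on a single expression
d1 <- deriv(~ -(log(rate) - rate * y), "rate",
            function.arg = c("rate", "y"))
grad.nll <- function(p) sum(attr(d1(p, y), "gradient"))

fit1 <- optim(1, nll, method = "BFGS")                 ## finite differences
fit2 <- optim(1, nll, gr = grad.nll, method = "BFGS")  ## analytic gradient
c(fit1$par, fit2$par)   ## both should be close to 1/mean(y)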

  Blatant plug:  www.zoo.ufl.edu/bolker/emdbook/chap7A.pdf  pp. 3-4
might be helpful too.

  Ben Bolker


