# [R] numericDeriv

Jean Eid jeaneid at chass.utoronto.ca
Wed Apr 28 19:36:54 CEST 2004

True, true. However, I am not estimating via MLE. The objective function is
a bunch of moment conditions weighted according to the uncertainty of the
moments (i.e. an estimate of the asymptotic Var-Cov matrix of the moments,
not of the estimates). Technically it looks more like a weighted nonlinear
least squares problem. I have a bunch of moments that look like this:
E(e_{ik} z_i) = 0
where e_{ik} is the error term and is a nonlinear function
of the parameters at observation i, and z_i is an instrument (the model
has endogenous covariates). The subscript k indicates that there is more
than one functional form for the residuals (a simultaneous equation system
that is nonlinear). One of them looks like
e_{ik}=\ln(p-{1\over \alpha} \Delta^{-1})-W\theta
There are two more.
I am interested in estimating \alpha and \theta (\theta \in R^{k}), in
addition to other parameters in the other equations.
I only want to use these moment conditions rather than assuming knowledge
of the distribution of the error term.

At the end of the day, I need to use the delta method to get an
estimate of the standard errors.
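As a rough sketch of that last step (not the poster's actual model): the delta method takes the Var-Cov matrix V of the parameter estimates and a Jacobian H of the transformation of interest, giving H V H'. The function h() and all numbers below are purely illustrative.

```r
## Delta method: Var(h(theta_hat)) ~= H %*% V %*% t(H), where H = dh/dtheta.
## h() and the numbers below are made up for illustration only.
h <- function(theta) c(exp(theta[1]), theta[1] * theta[2])

theta_hat <- c(0.5, 2)                       # pretend point estimates
V <- matrix(c(0.04, 0.01, 0.01, 0.09), 2, 2) # pretend Var-Cov of theta_hat

## Numerical Jacobian of h at theta_hat via central differences
num_jac <- function(f, x, eps = 1e-6) {
  p <- length(x); fx <- f(x)
  J <- matrix(0, length(fx), p)
  for (j in seq_len(p)) {
    e <- rep(0, p); e[j] <- eps
    J[, j] <- (f(x + e) - f(x - e)) / (2 * eps)
  }
  J
}

H   <- num_jac(h, theta_hat)
V_h <- H %*% V %*% t(H)   # delta-method Var-Cov of h(theta_hat)
se  <- sqrt(diag(V_h))    # standard errors
```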

Hope this clarifies things a bit more.

On Wed, 28 Apr 2004, Spencer Graves wrote:

>       optim(..., hessian=TRUE, ...) outputs a list with a component
> hessian, which is the second derivative of the log(likelihood) at the
> minimum.  If your objective function is (-log(likelihood)), then
> optim(..., hessian=TRUE)$hessian is the observed information matrix.  If
> eigen(...$hessian)$values are all positive with at most a few orders of
> magnitude between the largest and smallest, then it is invertible, and
> the square roots of the diagonal elements of the inverse give standard
> errors for the normal approximation to the distribution of parameter
> estimates.  With objective functions that may not always be well
> behaved, I find that optim sometimes stops short of the optimum.  I run
> it with method = "Nelder-Mead", "BFGS", and "CG", then restart the
> algorithm giving the best answer to one of the other algorithms.  Doug
> Bates and Brian Ripley could probably suggest something better, but this
> has produced acceptable answers for me in several cases, and I did not
> push it beyond that.
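[The recipe quoted above can be sketched on a toy problem; the normal negative log-likelihood, the simulated data, and the log-sd parametrization below are all illustrative choices, not part of the original thread.]

```r
set.seed(1)
x <- rnorm(200, mean = 3, sd = 2)

## Negative log-likelihood; p = (mean, log-sd) so the sd stays positive.
nll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))

## Run one method, then restart the other from its answer, as suggested.
fit1 <- optim(c(0, 0), nll, method = "Nelder-Mead")
fit2 <- optim(fit1$par, nll, method = "BFGS", hessian = TRUE)

## Invert the hessian only if it is safely positive definite.
ev <- eigen(fit2$hessian, symmetric = TRUE)$values
if (all(ev > 0) && max(ev) / min(ev) < 1e6) {
  se <- sqrt(diag(solve(fit2$hessian)))  # normal-approximation std. errors
}
```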
>
>       hope this helps.
>
> Jean Eid wrote:
>
> >Dear All,
> >I am trying to solve a Generalized Method of Moments problem which
> >necessitate the gradient of moments computation to get the
> >standard  errors of estimates.
> >I know optim does not output the gradient, but I can use numericDeriv to
> >get that. My question is: is this the best function to do this?
> >
> >Thank you
> >Jean,
> >
> >______________________________________________
> >R-help at stat.math.ethz.ch mailing list
> >https://www.stat.math.ethz.ch/mailman/listinfo/r-help
> >PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
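[For reference, numericDeriv — the function asked about at the top of the thread — evaluates an expression in an environment and attaches the numerical Jacobian as a "gradient" attribute. The model a * exp(-b * x) below is just a placeholder, not one of the poster's moment conditions.]

```r
a <- 2; b <- 0.5
x <- 1:5

## Jacobian of a*exp(-b*x) with respect to a and b, at each observation.
val  <- numericDeriv(quote(a * exp(-b * x)), theta = c("a", "b"))
grad <- attr(val, "gradient")   # 5 x 2 matrix: columns d/da and d/db
```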