[R] Problem with Newton_Raphson

Berend Hasselman bhh at xs4all.nl
Thu Sep 20 16:06:30 CEST 2012


On 20-09-2012, at 15:17, Christopher Kelvin wrote:

> By your suggestion, if I understand you correctly, I have changed p[1] and p[2] and have also corrected the error in sum(t), but when I run it the parameter estimates are very large.
> Can you please run it and help me out? The code is given below.
> 
> 
> p1<-0.6;b<-2
> n=20;rr=5000
> U<-runif(n,0,1)
> for (i in 1:rr){
> 
> x<-(-log(1-U^(1/p1))/b)
> 
>  meantrue<-gamma(1+(1/p1))*b
>   meantrue
>   d<-meantrue/0.30
>   cen<- runif(n,min=0,max=d)
>   s<-ifelse(x<=cen,1,0)
>   q<-c(x,cen)
> }
>     z<-function(data, p){ 
>     shape<-p[1]
>     scale<-p[2]
>     log1<-n*sum(s)*log(shape)+ n*sum(s)*log(scale)+(shape-1)*sum(s)*log(1-((exp(-(scale)*sum(x)))))
> -(scale)*sum(x) + (shape)*log((exp(-(scale)*sum(x))))-
> (shape)*sum(s)*log((exp(-(scale)*sum(x))))
>   return(-log1)
>   }
>   start <- c(1,1)
>   zz<-optim(start,fn=z,data=q,hessian=T)
>   zz


1. You think you are using a (quasi) Newton method, but optim's default method is "Nelder-Mead". You should specify the method explicitly, for example method="BFGS"; when you do that you will see that the results are completely different. You should also carefully inspect the return value of optim: in your case zz$convergence is 1, which the help page describes as "indicates that the iteration limit maxit had been reached".
When you use method="BFGS" you will get zz$convergence equal to 0.

This may do what you want

zz<-optim(start, fn=z, data=q, method="BFGS", hessian=T)
zz
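
As a quick sanity check after the call you can look at the components of the returned list (the component names below are the ones documented in ?optim); a minimal sketch:

zz$convergence   # 0 means successful convergence; 1 means the iteration limit maxit was reached
zz$message       # character string with extra information from the optimizer, often NULL
zz$par           # the parameter estimates
zz$counts        # number of calls to the function and to the gradient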

2. When providing examples such as yours, it is a good idea to issue set.seed(<number>) at the start of your script to ensure reproducibility.
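
For example (a minimal sketch; the seed value 1 is arbitrary, any fixed integer will do):

set.seed(1)           # fixes the state of the random number generator
U <- runif(n, 0, 1)   # the same draws are now produced on every run of the script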

3. The function z does not calculate what you want: since a syntactically complete expression is terminated by a newline, and since the part of the line after log1<- is itself a complete expression, log1 does not include the values of the two following lines.
See the difference between 

a <- 1
b <- 11
a      # a complete expression on its own: prints 1
-b     # a second, separate expression: prints -11

and

a-     # incomplete expression, so R continues reading on the next line
b      # the two lines together evaluate to a - b, i.e. -10

So you could write this

    log1 <- n*sum(s)*log(shape)+ n*sum(s)*log(scale)+
            (shape-1)*sum(s)*log(1-((exp(-(scale)*sum(x))))) -(scale)*sum(x) + 
            (shape)*log((exp(-(scale)*sum(x)))) - (shape)*sum(s)*log((exp(-(scale)*sum(x))))

and you will see rather different results.
You will have to determine if they are what you expect/desire.

A final remark on function z:

- do not calculate things like n*sum(s) repeatedly; computing it once, e.g. A <- n*sum(s), and reusing A is more efficient
- the same goes for log((exp(-(scale)*sum(x)))) (recalculated four times)
- and likewise for sum(x) (a sketch of this kind of precomputation follows this list)
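
One way the precomputation could look (just a sketch; the intermediate names ns, ss, sx and ex are mine, and the body is otherwise the corrected log1 from above):

z <- function(p, data){
    shape <- p[1]
    scale <- p[2]
    ns <- n*sum(s)            # used twice below
    ss <- sum(s)
    sx <- sum(x)              # used in several terms
    ex <- exp(-scale*sx)      # computed once instead of in every term
    log1 <- ns*log(shape) + ns*log(scale) +
            (shape-1)*ss*log(1 - ex) - scale*sx +
            shape*log(ex) - shape*ss*log(ex)   # note: log(ex) is mathematically just -scale*sx
    return(-log1)
}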

See below for a partly rewritten function z and results with method="BFGS".
I have put a set.seed(1) at the start of your code.

good luck,

Berend

z <- function(p, data){   # optim() supplies the parameter vector as the first argument of fn
    shape <- p[1]
    scale <- p[2]
    # n, s and x are taken from the surrounding workspace, as in your original function
    log1 <- n*sum(s)*log(shape)+ n*sum(s)*log(scale)+
            (shape-1)*sum(s)*log(1-((exp(-(scale)*sum(x))))) -(scale)*sum(x) + 
            (shape)*log((exp(-(scale)*sum(x)))) - (shape)*sum(s)*log((exp(-(scale)*sum(x))))
    return(-log1)
}

start <- c(1,1)
> z(start, data=q)
[1] -138.7918

> zz<-optim(start, fn=z, data=q, method="BFGS", hessian=T)
There were 34 warnings (use warnings() to see them)
> zz
$par
[1] 1.009614e+11 8.568498e+01

$value
[1] -1.27583e+15

$counts
function gradient 
     270       87 

$convergence
[1] 0

$message
NULL

$hessian
       [,1] [,2]
[1,] -62500    0
[2,]      0    0
