[R] Optim function returning always initial value for parameter to be optimized

Paul Gilbert pgilbert902 at gmail.com
Sat Feb 10 16:58:26 CET 2018



On 02/10/2018 06:00 AM, r-help-request at r-project.org wrote:
> Did you check the gradient? I don't think so. It's zero, so of course
> you end up where you start.
> 
> Try
> 
> data.input= data.frame(state1 = (1:500), state2 = (201:700) )
> err.th.scalar <- function(threshold, data){
> 
>      state1 <- data$state1
>      state2 <- data$state2
> 
>      op1l <- length(state1)
>      op2l <- length(state2)
> 
>      op1.err <- sum(state1 <= threshold)/op1l
>      op2.err <- sum(state2 >= threshold)/op2l

I think this function is not smooth, and not even continuous: it is a 
step function of the threshold, constant everywhere except for jumps at 
the data values. Gradient methods such as BFGS require a differentiable 
(smooth) objective. A numerical gradient approximation will be zero 
unless the evaluation points happen to straddle a jump, so you are 
unlikely to move from your initial guess.
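You can see this directly by evaluating the objective on a fine grid 
around the starting value (a quick sketch re-using the data and function 
from the thread):

```r
# Same data and objective as in the thread: the objective is a step
# function, flat almost everywhere, with jumps only at the data values,
# so a finite-difference gradient is zero away from the jumps.
data.input <- data.frame(state1 = 1:500, state2 = 201:700)

err.th.scalar <- function(threshold, data) {
  sum(data$state1 <= threshold) / length(data$state1) +
    sum(data$state2 >= threshold) / length(data$state2)
}

th   <- seq(299, 301, by = 0.25)
vals <- sapply(th, err.th.scalar, data = data.input)
rbind(th, vals)  # flat at 1.400, with small blips (1.402) at the integers
```

Between the data values the function is exactly flat, so any 
finite-difference gradient computed there is exactly zero, which is why 
optim never leaves par = 300.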

Paul
> 
>      total.err <- (op1.err + op2.err)
> 
>      return(total.err)
> }
> 
> soln <- optim(par = 300, fn=err.th.scalar, data = data.input, method =
> "BFGS")
> soln
> require("numDeriv")
> gtest <- grad(err.th.scalar, x=300, data = data.input)
> gtest
> 
> 
> On 2018-02-09 09:05 AM, BARLAS Marios 247554 wrote:
>> data.input= data.frame(state1 = (1:500), state2 = (201:700) )
>>
>> with values that partially overlap between the two states.
>>
>> I want to minimize the assessment error of each state by using this function:
>>
>> err.th.scalar <- function(threshold, data){
>>    
>>    state1 <- data$state1
>>    state2 <- data$state2
>>    
>>    op1l <- length(state1)
>>    op2l <- length(state2)
>>    
>>    op1.err <- sum(state1 <= threshold)/op1l
>>    op2.err <- sum(state2 >= threshold)/op2l
>>    
>>    total.err <- (op1.err + op2.err)
>>
>>    return(total.err)
>> }
>>
>>
>> So I'm trying to minimize the total error, which should be essentially U-shaped.
>>
>>
>> I'm using optim as follows:
>>
>> optim(par = 300, fn=err.th.scalar, data = data.input, method = "BFGS")
> 
> If the true gradient is very small, it may help to develop an analytic
> gradient (supplied via the gr argument of optim), since the numeric
> approximation can round to zero even when the true gradient is not.
> 
> JN
> 
>
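P.S. For a one-dimensional step objective like this, a derivative-free 
exhaustive search is simpler than any gradient fix: the function can 
only change value at the data points, so it is enough to evaluate every 
candidate threshold and take the smallest (a sketch, not from the 
thread):

```r
# Exhaustive search over candidate thresholds: since the objective only
# changes value at the data points, the data values themselves are the
# only candidates that need to be checked.
data.input <- data.frame(state1 = 1:500, state2 = 201:700)

err.th.scalar <- function(threshold, data) {
  sum(data$state1 <= threshold) / length(data$state1) +
    sum(data$state2 >= threshold) / length(data$state2)
}

cand <- sort(unique(c(data.input$state1, data.input$state2)))
errs <- sapply(cand, err.th.scalar, data = data.input)
best <- cand[which.min(errs)]
best  # first threshold attaining the minimum total error
```

Note that on this data the search ends at the edge of the range 
(threshold 1, total error 1.002) rather than in the overlap region, 
which may indicate that the direction of one of the two inequalities in 
the objective should be double-checked before optimizing.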


