[R] Problem in log

R. Michael Weylandt michael.weylandt at gmail.com
Wed Nov 30 19:49:53 CET 2011


You still haven't provided anything reproducible since we can't get to
your data file (and you also used attach(), a function you really
shouldn't use), but here's what I'd guess:

You say the problem occurs when

exp(-alpha*d^(-beta)) > 1

Of course, since exp(u) > 1 exactly when u > 0, this happens when

alpha*d^(-beta) < 0.

It seems like alpha, beta > 0, so you need to ask yourself why d < 0;
I can't immediately see why this would happen in your code. The
easiest way to get at this is just to insert a print(d) statement or,
even better, a stopifnot(d > 0) with options(error = recover). This
will open an interactive debugger so you can do a post-mortem to see
exactly what happened.
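
In case it helps, here's a rough sketch of that workflow (untested, and
the variable names just follow your loglikelihood() below):

## Rough sketch (untested): trip an error where d or p goes bad and
## inspect the call stack post-mortem.
options(error = recover)   # on any error, offer to browse the call frames

## inside loglikelihood(), right after d and p are computed, add e.g.
##     stopifnot(all(d > 0))            # stop (and enter recover) if any d <= 0
##     stopifnot(all(p > 0 & p < 1))    # log(p) and log(1 - p) both need this

## and switch the debugger off again once you're done:
options(error = NULL)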

This is really all I can see without minimal reproducible code, which
you aren't providing.

You may also want to work on vectorizing your code to make it faster
for long simulations (and arguably easier to read, though that's a
matter of taste).

E.g.,

for (i in 1:100) {
    if (i != k) {
        if (inftime[i] == 0) {
            loglh <- loglh + log(1 - p[i])
        }
        if (inftime[i] == 2) {
            loglh <- loglh + log(p[i])
        }
    }
}

could become something like (very, very untested):

loglh <- loglh + sum((ifelse(inftime == 0, log(1 - p), 0) +
                      ifelse(inftime == 2, log(p), 0))[-k])

which will be much faster: it takes all the elements, vectorizes the
two if statements, throws out the k-th case that you don't want, then
sums the result and adds it to loglh.
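
If you want to convince yourself the two agree before swapping them in,
a quick check on made-up data (again untested) might look like:

## Toy comparison on fake data -- loop vs. vectorized log-likelihood sum
set.seed(1)
inftime <- sample(0:2, 100, replace = TRUE)
p       <- runif(100, 0.01, 0.99)
k       <- which(inftime == 1)[1]        # single index, as in your loop

loop_sum <- 0
for (i in 1:100) {
    if (i != k) {
        if (inftime[i] == 0) loop_sum <- loop_sum + log(1 - p[i])
        if (inftime[i] == 2) loop_sum <- loop_sum + log(p[i])
    }
}

vec_sum <- sum((ifelse(inftime == 0, log(1 - p), 0) +
                ifelse(inftime == 2, log(p), 0))[-k])

all.equal(loop_sum, vec_sum)   # should be TRUE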

Finally, within the loglikelihood() function, you don't need to
pre-define d, p, and k: it suffices to assign them the values you
calculate.
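
Putting those pieces together, a cleaned-up loglikelihood() might look
something like this (untested; it assumes epidemic.txt has columns x, y
and inftime as in your code, passes the data in as an argument instead
of re-reading and attach()ing it on every call, and assumes a single
initial infective):

## Untested sketch: no attach(), no pre-defined d/p/k, loop replaced by
## the vectorized sum above.
loglikelihood <- function(alpha, beta = 0.1, data) {
    k <- which(data$inftime == 1)                 # index of the initial infective
    d <- (sqrt((data$x - data$x[k])^2 +
               (data$y - data$y[k])^2))^(-beta)   # distance^(-beta) to case k
    p <- 1 - exp(-alpha * d)                      # infection probabilities
    sum((ifelse(data$inftime == 0, log(1 - p), 0) +
         ifelse(data$inftime == 2, log(p), 0))[-k])
}

## read the data once, outside the likelihood:
## epi <- read.table("epidemic.txt", header = TRUE)
## loglikelihood(0.5, 0.1, data = epi)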

Hope something in here helps, but my bet is that your problem is
coming from the unprovided data and/or something with attach.

Michael

On Wed, Nov 30, 2011 at 7:43 AM, Gyanendra Pokharel
<gyanendra.pokharel at gmail.com> wrote:
> The loglikelihood() looks OK and gives some value. But I am using this
> function for simulated annealing, generating the random samples from a
> uniform proposal density. The code is as follows:
>
>
> epiannea <- function(T0 = 1, N = 500, beta = 0.1, x0 = 0.1, rho = 0.90,
>                      eps = 0.1, loglikelihood, data) {
>     moving <- 1
>     count <- 0
>     Temp <- T0
>     alpha <- x0
>     while (moving > 0) {
>         mmoving <- 0
>         for (i in 1:N) {
>             r <- alpha + runif(1, -eps, eps)
>             if (r > 0) {
>                 a <- min(1, exp((loglikelihood(alpha, beta) -
>                                  loglikelihood(r, beta)) / Temp))
>             }
>             if (runif(1) < a) {
>                 moving <- moving + 1
>                 alpha <- r
>             }
>         }
>         Temp <- Temp * rho
>         count <- count + 1
>     }
>     # plot(a, loglikelihood(alpha, beta), type = "l")
>     fmin <- loglikelihood(alpha, beta)
>     return(c(fmin, alpha, count))
> }
>
> epiannea(loglikelihood = loglikelihood)
>
> Sometimes it shows warnings that log(p[i]) produces NaNs, which means p[i]
> is negative, but it should not be, because p[i] is a probability and
> can't be negative. Sometimes the code runs but never stops.
> On Wed, Nov 30, 2011 at 7:34 AM, R. Michael Weylandt
> <michael.weylandt at gmail.com> wrote:
>>
>> I'd suggest you do some leg-work and figure out why you are getting values
>> >1. If your algorithm is motivated by some approximation then a min() or
>> pmin() *might* be the right fix, but if there are no approximations you may
>> need to start debugging properly to see why you are getting an out of bounds
>> value.
>>
>> Since there's no random number generation involved, I'd hesitate to just
>> throw out the result without knowing its source. Also keep in mind the
>> limitations of floating point arithmetic if you expect alpha*d^beta to be
>> small.
>>
>> Michael
>>
>> On Nov 29, 2011, at 6:58 PM, Sarah Goslee <sarah.goslee at gmail.com> wrote:
>>
>> > On Tue, Nov 29, 2011 at 6:55 PM, Gyanendra Pokharel
>> > <gyanendra.pokharel at gmail.com> wrote:
>> >> Yes, the log of a negative number is undefined and R does the same,
>> >> producing NaNs. Here I want to reject the value of exp(-alpha*d^(-beta))
>> >> when it is greater than 1, and to run the loop otherwise.
>> >> Thanks
>> >
>> > Then just add another if() statement checking for that condition.
>> >
>> >> On Tue, Nov 29, 2011 at 6:48 PM, Sarah Goslee <sarah.goslee at gmail.com>
>> >> wrote:
>> >>>
>> >>>> Here p[i] <- 1 - exp(-alpha*d^(-beta))
>> >>>> so, log(p[i]) produces NaNs when exp(-alpha*d^(-beta)) is greater than 1.
>> >>>> How can I remove it? After generating the output we can omit it, but the
>> >>>> problem is different.
>> >>>
>> >>> Wait... you're complaining that you can't take the natural log of a
>> >>> negative number in R?
>> >>>
>> >>> You can't do that anywhere. What do you expect to happen? The log of a
>> >>> negative number IS NaN.
>> >>>
>> >>> Sarah
>> >>> On Tue, Nov 29, 2011 at 6:28 PM, Gyanendra Pokharel
>> >>> <gyanendra.pokharel at gmail.com> wrote:
>> >>>> I have following code:
>> >>>> loglikelihood <- function(alpha,beta= 0.1){
>> >>>>    loglh<-0
>> >>>>    d<-0
>> >>>>    p<-0
>> >>>>    k<-NULL
>> >>>>    data<-read.table("epidemic.txt",header = TRUE)
>> >>>>    attach(data, warn.conflicts = F)
>> >>>>    k <-which(inftime==1)
>> >>>>    d <- (sqrt((x-x[k])^2+(y-y[k])^2))^(-beta)
>> >>>>    p<-1 - exp(-alpha*d)
>> >>>>    for(i in 1:100){
>> >>>>        if(i!=k){
>> >>>>            if(inftime[i]==0){
>> >>>>                loglh<-loglh +log(1-p[i])
>> >>>>            }
>> >>>>            if(inftime[i]==2){
>> >>>>                loglh<-loglh + log(p[i])
>> >>>>            }
>> >>>>        }
>> >>>>    }
>> >>>>    return(loglh)
>> >>>> }
>> >>>> Here p[i] <- 1 - exp(-alpha*d^(-beta))
>> >>>> so, log(p[i]) produces NaNs when exp(-alpha*d^(-beta)) is greater than 1.
>> >>>> How can I remove it? After generating the output we can omit it, but the
>> >>>> problem is different.
>> >>>>
>> >>>> On Tue, Nov 29, 2011 at 5:22 PM, Gyanendra Pokharel <
>> >>>> gyanendra.pokharel at gmail.com> wrote:
>> >>>>
>> >>>>> No, that's not a problem Michael,
>> >>>>> I have following code:
>> >>>>> loglikelihood <- function(alpha,beta= 0.1){
>> >>>>>     loglh<-0
>> >>>>>     d<-0
>> >>>>>     p<-0
>> >>>>>     k<-NULL
>> >>>>>     data<-read.table("epidemic.txt",header = TRUE)
>> >>>>>     attach(data, warn.conflicts = F)
>> >>>>>     k <-which(inftime==1)
>> >>>>>     d <- (sqrt((x-x[k])^2+(y-y[k])^2))^(-beta)
>> >>>>>     p<-1 - exp(-alpha*d)
>> >>>>>     for(i in 1:100){
>> >>>>>         if(i!=k){
>> >>>>>             if(inftime[i]==0){
>> >>>>>                 loglh<-loglh +log(1-p[i])
>> >>>>>             }
>> >>>>>             if(inftime[i]==2){
>> >>>>>                 loglh<-loglh + log(p[i])
>> >>>>>             }
>> >>>>>         }
>> >>>>>     }
>> >>>>>     return(loglh)
>> >>>>> }
>> >>>>> Here p[i] <- 1 - exp(-alpha*d^(-beta))
>> >>>>> so, log(p[i]) produces NaNs when exp(-alpha*d^(-beta)) is greater than 1.
>> >>>>> How can I remove it? After generating the output we can omit it, but the
>> >>>>> problem is different.
>> >>>>>
>> >>>>>
>> >>>
>> >>> --
>> >>> Sarah Goslee
>> >>> http://www.functionaldiversity.org
>> >>
>> >>
>> >
>> > ______________________________________________
>> > R-help at r-project.org mailing list
>> > https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide
>> > http://www.R-project.org/posting-guide.html
>> > and provide commented, minimal, self-contained, reproducible code.
>
>


