[R] Best practices for handling very small numbers?

Ben Bolker bbolker at gmail.com
Tue Oct 18 15:28:04 CEST 2011


Duncan Murdoch <murdoch.duncan at gmail.com> writes:

> 
> On 11-10-18 4:30 AM, Seref Arikan wrote:
> > Hi Dan,
> > I've tried the log likelihood, but it reaches zero again if I work
> > with, say, 1000 samples.
> > I need an approach that will scale to quite large sample sizes.
> > Surely I can't be the first one to encounter this problem, and I'm
> > sure I'm missing an option that is embarrassingly obvious.
> 
> I think you are calculating the log likelihood incorrectly.  Don't 
> calculate the likelihood and take the log; work out the formula for the 
> log of the likelihood, and calculate that.  (If the likelihood contains 
> a sum of terms, as in a mixture model, this takes some thinking, but it 
> is still worthwhile.)
> 
> With most models, it is just about impossible to cause the log 
> likelihood to underflow if it is calculated carefully.
> 
> Duncan Murdoch
> 
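
Duncan's suggestion, in a minimal sketch (assuming, purely for
illustration, i.i.d. normal data with made-up parameter values):

set.seed(1)
x <- rnorm(1000, mean = 2, sd = 1)

## naive: the product of 1000 densities underflows to 0, so log() is -Inf
log(prod(dnorm(x, mean = 2, sd = 1)))        # -Inf

## careful: stay on the log scale throughout
sum(dnorm(x, mean = 2, sd = 1, log = TRUE))  # finite, roughly -1400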

  I haven't followed this thread carefully, but there is a special
problem in Bayesian settings where at some point you may have to
*integrate* the likelihoods, which means dealing with them on the
likelihood (not log-likelihood) scale.  There are various ways of
handling this; one is to factor a constant (itself a very small
value, e.g. the largest likelihood involved) out of the elements of
the integrand, as in the sketch below.
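
  Concretely, one common form of this factoring is the "log-sum-exp"
trick.  A minimal sketch (the vector logL and the helper
log_mean_lik() are made up for illustration; averaging likelihoods
over a set of posterior draws is one place this comes up):

## hypothetical log-likelihoods whose exponentials all underflow
logL <- c(-1000, -1001, -1005)

mean(exp(logL))        # 0: exp(-1000) underflows in double precision

## factor out the largest value, average on a safe scale, add it back
log_mean_lik <- function(logL) {
  m <- max(logL)
  m + log(mean(exp(logL - m)))
}
log_mean_lik(logL)     # about -1000.78: a usable log mean likelihood

The subtraction of max(logL) guarantees that the largest term inside
exp() is exactly 0, so at least one term survives on the raw scale.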

  Ben Bolker


