[R] problem with nls starting values

Ben Bolker bbolker at gmail.com
Wed Sep 26 14:47:35 CEST 2012


On 12-09-27 05:34 PM, Bert Gunter wrote:
> Good point, Ben.
> 
> I followed up my earlier reply offline with a brief note to Benedikt
> pointing out that "No" was the wrong answer: "maybe, maybe not" would
> have been better.
> 
> Nevertheless, the important point here is that even if you do get
> convergence, the over-parameterization means that the estimators don't
> mean anything: they are poorly determined/imprecise. This is a
> tautology, of course, but it is an important one. My experience is, as
> here, the poster wants to fit the over-parameterized model because
> "theory" demands it. That is, he wants to interpret the parameters
> mechanistically. But the message of the data is: "Sorry about that
> guys. Your theory may be fine, but the data do not contain the
> information to tell you what the parameters are in any useful way."
> We gloss over this distinction at our peril, as well as that of the
> science.
> 
> Cheers,
> Bert

  I absolutely agree that overparameterization can lead to nonsense
results, either because one quotes point estimates without noting that
the confidence intervals are effectively infinite, or because the
optimizer does something weird (mis-converging, or mis-diagnosing the
confidence intervals) without warning. I've certainly seen lots of bad
examples.
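
  To make the first failure mode concrete, here's a sketch with
made-up data in which 'a' and 'b' enter the model only through
a*exp(b), so the fit is structurally unidentifiable:

  ## 'a' and 'b' are confounded (only a*exp(b) is estimable), so
  ## nls() typically stops with a "singular gradient" error
  set.seed(1)
  x <- 1:10
  y <- 2 * exp(0.3 * x) + rnorm(10, sd = 0.1)
  try(nls(y ~ a * exp(b + c * x), start = list(a = 1, b = 1, c = 0.2)))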

  On the other hand: there's an important difference between 'true'
overparameterization (strong unidentifiability) and milder, weak
overparameterization (parameters that are poorly, but not hopelessly,
determined). I claim there are times when it's useful to be able to
fit, say, a 3-parameter model to 4 or 5 data points. In addition to the
bad overparameterization examples cited above, I've also seen lots of
examples where people (albeit lacking in numerical sophistication) had
trouble fitting with nls and were told "well, you're just trying to do
something silly" -- when they weren't necessarily.
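
  As a sketch of the "sometimes it works" case, here's a
three-parameter asymptotic model fit to five made-up, well-behaved
data points (noisier data may of course still defeat it):

  ## five points following an asymptotic curve
  x <- c(1, 2, 4, 8, 16)
  y <- c(2.57, 3.54, 4.46, 4.92, 5.00)
  ## SSasymp computes its own starting values for the model
  ##   Asym + (R0 - Asym) * exp(-exp(lrc) * x)
  fit <- nls(y ~ SSasymp(x, Asym, R0, lrc))
  summary(fit)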


> 
> On Thu, Sep 27, 2012 at 2:17 PM, Ben Bolker <bbolker at gmail.com> wrote:
>> Bert Gunter <gunter.berton <at> gene.com> writes:
>>
>>>
>>> On Thu, Sep 27, 2012 at 12:43 PM, Benedikt Gehr
>>> <benedikt.gehr <at> ieu.uzh.ch> wrote:
>>>> Now I feel very silly! I swear I was trying this for a long time and it
>>>> didn't work. Now that I've closed R and restarted it, it works on my
>>>> machine too.
>>>>
>>>> So is the only problem that my model is overparameterized with the data I
>>>> have?
>>> Probably.
>>>
>>>> However, shouldn't it be possible to fit an nls model to these data?
>>> (Obviously) no.
>>>
>>> I suggest you do a little reading up on optimization.
>>> Over-parameterization creates high dimensional ridges.
>>
>>   However, I will also point out that (from my experience and
>> others') nls is not the most robust optimizer ... you might consider
>> nlsLM (in the minpack.lm package), the nls2 package, and/or doing nonlinear
>> least-squares by brute force using bbmle::mle2 as a convenient wrapper
>> for optim() or optimx().
>>
>>   cheers
>>     Ben Bolker
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
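
  PS: for concreteness, an untested sketch (made-up data and starting
values) of the alternatives mentioned above:

  ## logistic-type curve with made-up data
  set.seed(1)
  x <- 1:10
  y <- 5 / (1 + exp(-(x - 5))) + rnorm(10, sd = 0.1)
  ## nlsLM() (minpack.lm) uses Levenberg-Marquardt and is often more
  ## forgiving of poor starting values than plain nls()
  library(minpack.lm)
  fit1 <- nlsLM(y ~ a / (1 + exp(-(x - b))), start = list(a = 1, b = 1))
  ## brute-force least squares: minimize the residual sum of squares
  ## directly with optim()
  rss <- function(p) sum((y - p[1] / (1 + exp(-(x - p[2]))))^2)
  fit2 <- optim(c(a = 1, b = 1), rss)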