[R] Complicated nls formula giving singular gradient message

Bert Gunter gunter.berton at gene.com
Mon Dec 20 20:45:58 CET 2010


Jared:

You realize, of course, that the mere fact that the software gives you
parameter estimates is no guarantee that the estimates mean anything?
Nor does it mean that they mean nothing, I hasten to add.
If, as one might suspect, the model is overparameterized, the
estimates may be so imprecise that they are effectively useless -- but
the fitted values may nevertheless (*especially* if overparameterized)
fit your data very well. The model just won't fit future data. In
other words, you may have a well-fitting but scientifically
meaningless model.
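
To make that concrete, here is a toy sketch (not Jared's model, and the
data are made up) of how an overparameterized nls fit can track the data
closely while the individual estimates are nearly worthless; depending on
the random draw, nls may even stop with the familiar singular gradient
message, which is rather the point:

set.seed(1)
x <- seq(0, 5, length.out = 50)
y <- 10 * exp(-0.7 * x) + rnorm(50, sd = 0.2)  # truly a single exponential

## Deliberately overparameterized: two exponential terms for these data.
fit <- try(nls(y ~ a * exp(-b * x) + c * exp(-d * x),
               start = list(a = 5, b = 0.5, c = 5, d = 1)))
if (!inherits(fit, "try-error")) {
  print(summary(fit))                  # standard errors can be enormous
  plot(x, y); lines(x, fitted(fit))    # yet the fitted curve looks fine
}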

Cheers,
Bert

On Mon, Dec 20, 2010 at 10:26 AM, Jared Blashka <evilamarant7x at gmail.com> wrote:
> Though this topic is a little old now, I feel it is worth posting an
> update on my situation.
> I ended up being able to estimate the parameters for this problem,
> without having to worry as much about initial parameter estimates, using
> AD Model Builder.
> It calculates the exact gradient using automatic differentiation, so it
> is able to avoid the singular-gradient problem that nls can give.
>
> I also used the R package PBSadmb, which allowed me to run AD Model Builder
> and retrieve the results from within R. Then I could do what I liked with
> the results: generate graphs, run further analyses, and so on.
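>
> For anyone who lands here later, here is a rough base-R sketch of the
> retrieval step. It assumes only the whitespace-separated layout of the
> ADMB .std report file shown further down the thread (the file name
> jared.std comes from Dave's example); PBSadmb wraps this sort of thing
> more conveniently.
>
> ## Read ADMB's estimates and standard deviations back into R.
> ## Column names follow the jared.std excerpt quoted below.
> est <- read.table("jared.std", skip = 1,
>                   col.names = c("index", "name", "value", "std_dev"))
> est$value                          # the parameter estimates
> est$value + 1.96 * est$std_dev     # e.g. rough upper Wald limits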
>
> Thanks to everyone who helped,
>
> Jared
>
> On Wed, Dec 15, 2010 at 8:47 AM, dave fournier <davef at otter-rsch.com> wrote:
>
>> Jared Blashka wrote:
>>
>> Hi,
>>
>> Can you write a little note to the R list saying something like
>>
>>   Re: SOLVED [R] Complicated nls formula giving singular gradient message
>>
>>  I was able to estimate the parameters for this problem using AD Model
>>  Builder, which calculates the exact gradient for you using automatic
>>  differentiation and is thus able to avoid the singular-gradient problem
>>  I encountered in nls.
>>
>> That way other R users who might be able to take advantage of the software
>> will hear about it.
>>
>>      Cheers,
>>
>>       Dave
>>
>>
>>
>>
>>> Dave,
>>>
>>> That's exactly what I was looking for!
>>>
>>> Thanks for all your help!
>>> Jared
>>>
>>> On Tue, Dec 14, 2010 at 7:13 AM, dave fournier <davef at otter-rsch.com> wrote:
>>>
>>>    Jared Blashka wrote:
>>>
>>>    The source code for that is in jared.tpl
>>>
>>>    I changed from least squares to a concentrated likelihood so that you
>>>    could get estimated std devs via the delta method.  They are in
>>>    jared.std.
>>>    I rescaled the parameters so that the condition number of the
>>>    Hessian is close to 1.
>>>    You can see the eigenvalues of the Hessian in jared.eva.
>>>    Your data are in jared.dat and the initial parameter values are in
>>>    jared.pin.
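>>>
>>>    (As an aside, here is a minimal R sketch of that scaling check,
>>>    assuming you have a numerically obtained Hessian, e.g. from
>>>    optim(..., hessian = TRUE):
>>>
>>>    cond_number <- function(hess) {
>>>      ev <- eigen(hess, symmetric = TRUE, only.values = TRUE)$values
>>>      max(abs(ev)) / min(abs(ev))  # >> 1 means poor scaling / near-singular
>>>    }
>>>
>>>    Rescaling the parameters, as done here, pushes that ratio toward 1.)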
>>>
>>>    The parameter estimates with their estimated std devs are:
>>>
>>>    index   name   value      std dev
>>>       1   NS     1.1254e-02 7.1128e-03
>>>       2   LogKi -8.8933e+00 8.2411e-02
>>>       3   LogKi -5.2005e+00 9.2179e-02
>>>       4   LogKi -7.2677e+00 7.7047e-02
>>>       5   BMax   2.1226e+05 5.1699e+03
>>>    How does it look?
>>>
>>>     Cheers,
>>>       Dave
>>>
>>>
>>>
>>>
>>>        Dave - AD Model Builder looks like a great tool that I can
>>>        use, but I'm curious whether it can also perform global
>>>        parameter estimation across multiple data sets.
>>>
>>>        Regarding the example I have provided, I have two similar
>>>        data sets that also need to be analyzed, but the values for NS
>>>        and BMax should be the same across the three data sets. Each
>>>        data set has its own LogKi value, however. In R, I
>>>        accomplished this by merging the three data sets and adding an
>>>        extra field to each data point that identifies which set
>>>        it originally came from. Then, in the regression formula, I
>>>        specified the LogKi term as a vector: LogKi[dset]. The results
>>>        of the regression gave me one value each for NS and BMax, but
>>>        three LogKi values (a sketch of that call follows the data
>>>        below). I haven't had much time to look through the AD Model
>>>        Builder documentation yet, but are you aware whether such an
>>>        analysis is possible there?
>>>
>>>        Here's one such example of a data set
>>>
>>>        all <-structure(list(X = c(-13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5, -13, -11, -10, -9.5, -9, -8.5, -8,
>>>        -7.5, -7, -6.5, -6, -5), Y = c(3146L, 3321L, 2773L, 2415L,
>>>        2183L, 1091L, 514L, 191L, 109L, 65L, 54L, 50L, 3288L, 3243L,
>>>        2826L, 2532L, 2060L, 896L, 517L, 275L, 164L, 106L, 202L, 53L,
>>>        3146L, 3502L, 2658L, 3038L, 3351L, 3238L, 2935L, 3212L, 3004L,
>>>        3088L, 2809L, 1535L, 3288L, 2914L, 2875L, 2489L, 3104L, 2771L,
>>>        2861L, 3309L, 2997L, 2361L, 2687L, 1215L, 3224L, 3131L, 3126L,
>>>        2894L, 2495L, 2935L, 2516L, 2994L, 3074L, 3008L, 2780L, 1454L,
>>>        3146L, 2612L, 2852L, 2774L, 2663L, 3097L, 2591L, 2295L, 1271L,
>>>        1142L, 646L, 68L, 3288L, 2606L, 2838L, 1320L, 2890L, 2583L,
>>>        2251L, 2155L, 1164L, 695L, 394L, 71L, 3224L, 2896L, 2660L,
>>>        2804L, 2762L, 2525L, 2615L, 1904L, 1364L, 682L, 334L, 64L),
>>>        dset = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
>>>        1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>>>        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>>>        3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
>>>        3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3)), .Names = c("X",
>>>        "Y", "dset"
>>>        ), class = "data.frame", row.names = c(NA, 96L))
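>>>
>>>        A rough sketch of the kind of nls call described above, using a
>>>        placeholder one-site curve (not the formula from the original
>>>        post); the point is only the indexed parameter LogKi[dset] with
>>>        a vector of starting values, while NS and BMax are shared:
>>>
>>>        ## Placeholder model: lower plateau NS, upper plateau BMax, and
>>>        ## one LogKi per data set via the dset index.
>>>        fit <- nls(Y ~ NS + (BMax - NS) / (1 + 10^(X - LogKi[dset])),
>>>                   data = all,
>>>                   start = list(NS = 60, BMax = 3200,
>>>                                LogKi = c(-9, -5, -7)))
>>>        coef(fit)   # one NS, one BMax, and three LogKi values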
>>>
>>>        Thanks for your time,
>>>        Jared
>>>
>>>        On Mon, Dec 13, 2010 at 7:37 PM, dave fournier
>>>        <davef at otter-rsch.com> wrote:
>>>
>>>           I always enjoy these direct comparisons between different
>>>           software packages.
>>>           I coded this up in AD Model Builder, which is freely available
>>>           at http://admb-project.org. ADMB calculates exact derivatives
>>>           via automatic differentiation, so it tends to be more stable
>>>           for these difficult problems.
>>>
>>>           The parameter estimates are:
>>>           # Number of parameters = 3
>>>           Objective function value = 307873.  Maximum gradient component = 1.45914e-06
>>>           # NS:
>>>           0.00865232633386
>>>           # LogKi:
>>>           -8.98700621813
>>>           # BMax:
>>>           237135.365156
>>>           The objective function is just least squares.
>>>           So it looks like SAS did pretty well before dying.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Bert Gunter
Genentech Nonclinical Biostatistics
467-7374
http://devo.gene.com/groups/devo/depts/ncb/home.shtml


