[R] Fitting Mixture distributions
bgunter.4567 at gmail.com
Thu Sep 8 08:47:40 CEST 2016
"please suggest what can I do to resolve this"
Fitting normal mixtures can be difficult, and sometimes the
optimization algorithm (EM) will get stuck with very slow convergence.
Presumably there are options in the package to either increase the max
number of steps before giving up or make the convergence criterion less
sensitive. The former will increase the run time and the latter will
reduce the optimality (possibly leaving you farther from the true
optimum). So you should look into changing these as you think appropriate.
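In mixtools, those two knobs are the maxit and epsilon arguments of normalmixEM (and maxrestarts controls how often it retries after a degenerate step). A minimal sketch, assuming mixtools is installed and your.data is a placeholder for a numeric data vector:

```r
library(mixtools)

# Sketch only: trade run time for a better chance of convergence by
# raising maxit, and/or stop earlier (farther from the optimum) by
# loosening epsilon relative to the 1e-08 default.
fit <- normalmixEM(your.data, k = 2,
                   maxit = 50000,      # more EM iterations before giving up
                   maxrestarts = 200,  # more retries after degenerate steps
                   epsilon = 1e-06)    # looser convergence criterion
```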
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip)
On Wed, Sep 7, 2016 at 3:51 PM, Aanchal Sharma
<aanchalsharma833 at gmail.com> wrote:
> Hi Simon
> I am facing the same problem as described above. I am trying to fit a
> Gaussian mixture model to my data using normalmixEM. I am running an R
> script which has this function as part of it for about 17000 datasets (in
> a loop). The script runs fine for some datasets, but it terminates when it
> encounters one dataset, with the following error:
> Error in normalmixEM(expr_glm_residuals, lambda = c(0.75, 0.25), k = 2, :
> Too many tries!
> (command used: expr_mix_gau <- normalmixEM(expr_glm_residuals, lambda =
> c(0.75,0.25), k = 2, epsilon = 1e-08, maxit = 10000, maxrestarts=200, verb
> = TRUE))
> (expr_glm_residuals is my dataset, which has residual values for different …)
> It is suggested that one should define the mu and sigma in the command by
> looking at the dataset, but in my case there are many datasets and they
> keep changing every time. Please suggest what I can do to resolve this.
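One way to keep a long loop alive when a single dataset fails is to wrap the call in tryCatch, so the error is recorded instead of terminating the script. A sketch, assuming the 17000 residual vectors are collected in a list named datasets (a hypothetical name) and mixtools is loaded:

```r
library(mixtools)

fits <- vector("list", length(datasets))
for (i in seq_along(datasets)) {
  fits[[i]] <- tryCatch(
    normalmixEM(datasets[[i]], lambda = c(0.75, 0.25), k = 2,
                epsilon = 1e-08, maxit = 10000, maxrestarts = 200),
    error = function(e) {
      message("dataset ", i, " skipped: ", conditionMessage(e))
      NULL  # failed fits stay NULL and can be refit later with other settings
    }
  )
}
```

The NULL entries can then be revisited afterwards with different starting values or a looser epsilon.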
> On Tuesday, 16 July 2013 17:53:09 UTC-4, Simon Zehnder wrote:
>> Hi Tjun Kiat Teo,
>> You are trying to fit a Normal mixture to some data. The Normal mixture is
>> very delicate when it comes to parameter search: if a component variance gets
>> closer and closer to zero, the log-likelihood becomes larger and larger for
>> any values of the remaining parameters. Furthermore, the EM algorithm is
>> known to sometimes take very long to reach convergence.
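That degeneracy is easy to see numerically: center one component on a single observation and keep shrinking its standard deviation, and the mixture log-likelihood grows without bound. A small base-R illustration (not part of the original mail):

```r
set.seed(1)
x <- rnorm(50)

# Equal-weight two-component mixture: one component glued to the first
# observation with standard deviation s, the other fixed at N(0, 1).
loglik <- function(s) {
  sum(log(0.5 * dnorm(x, mean = x[1], sd = s) + 0.5 * dnorm(x, 0, 1)))
}

loglik(1e-2); loglik(1e-4); loglik(1e-6)  # keeps growing as s -> 0
```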
>> Try the following:
>> Use as starting values for the component parameters:
>> start.par <- mean(your.data, na.rm = TRUE) + sd(your.data, na.rm = TRUE) * …
>> For the weights, just use either 1/K or the R cluster function with K
>> clusters. Here K is the number of components. Further, enlarge the maximum
>> number of iterations. What you could also try is to randomize the start
>> parameters and run an SEM (Stochastic EM). In my opinion, the better method
>> in this case is a Bayesian one: MCMC.
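The weight and starting-value advice above can be made concrete with stats::kmeans; a sketch on simulated stand-in data (your.data is a placeholder for the real vector):

```r
set.seed(123)
your.data <- c(rnorm(100, mean = -2), rnorm(100, mean = 3))  # stand-in data
K <- 2  # number of components

cl <- kmeans(your.data, centers = K, nstart = 10)
mu0     <- as.vector(cl$centers)              # starting component means
sigma0  <- tapply(your.data, cl$cluster, sd)  # starting within-cluster sds
lambda0 <- cl$size / length(your.data)        # starting weights, sum to 1

# These could then be passed to normalmixEM as mu, sigma and lambda.
```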
>> On Jul 16, 2013, at 10:59 PM, Tjun Kiat Teo <teot... at gmail.com> wrote:
>> > I was trying to use normalmixEM in mixtools and I got this error
>> > message:
>> > One of the variances is going to zero; trying new starting values.
>> > Error in normalmixEM(as.matrix(temp[[gc]][, -(f + 1)])) : Too many tries!
>> > Are there any other packages for fitting mixture distributions ?
>> > Tjun Kiat Teo
>> > ______________________________________________
>> > R-help at r-project.org mailing list -- https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> > and provide commented, minimal, self-contained, reproducible code.
More information about the R-help mailing list