[R] On Corrections for Chi-Sq Goodness of Fit Test

Michael Fuller mmfuller at unm.edu
Sun Dec 25 20:31:04 CET 2011

Hi Rolf,
Thank you for clarifying. After reading the help page more carefully, I realize that I misunderstood what chisq.test was doing. I thought it used simulation whenever the expected frequencies were given via the "p" argument, and the asymptotic chi-squared distribution of the test statistic when "p" was not given. I now see from the help page that the asymptotic distribution is always used unless simulate.p.value is TRUE.
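In code, the difference is just the simulate.p.value switch (the counts and probabilities below are invented for illustration; simulate.p.value and B are the documented arguments):

```r
x <- c(12, 5, 3)          # observed counts (toy data)
p <- c(0.5, 0.3, 0.2)     # hypothesised probabilities

# Default: p-value from the asymptotic chi-squared distribution,
# even though p was supplied
chisq.test(x, p = p)

# Monte Carlo p-value: B data sets of size n = sum(x) drawn from p
chisq.test(x, p = p, simulate.p.value = TRUE, B = 2000)
```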


On Dec 25, 2011, at 3:06 AM, Rolf Turner wrote:

> When the p-value is calculated via simulation the test is not, strictly
> speaking, a ``chi-squared test''.    That is, it is *not* assumed that the
> distribution of the test statistic, under the null hypothesis, is a chi-squared
> distribution.  Instead an empirical distribution derived from simulating
> samples under the null hypothesis is used.  The *same statistic* is calculated,
> but the chi-squared distribution is never invoked.  The simulated p-value
> facility is provided so that tests can be conducted even when the requirements
> for the validity of the chi-squared (null) distribution are not met.
> The test, whether you want to call it a chi-squared test or not, is a perfectly
> valid goodness of fit test.  (Modulo the caveat that I previously mentioned,
> i.e. that it is required that there be no ties amongst the values of the test
> statistics, which is a mild worry but has nothing to do with continuity
> corrections, which are meaningless in this context.)
>    cheers,
>        Rolf Turner
> On 25/12/11 08:26, Michael Fuller wrote:
>> On Dec 22, 2011, at 8:56 PM, Rolf Turner wrote:
>>> On 20/12/11 10:24, Michael Fuller wrote:
>>>> TOPIC
>>>> My question regards the philosophy behind how R implements corrections to chi-square statistical tests. At least in recent versions (I'm using 2.13.1 (2011-07-08) on OSX 10.6.8.), the chisq.test function applies the Yates continuity correction for 2 by 2 contingency tables. But when used as a goodness-of-fit (GoF) test, chisq.test does not appear to implement any corrections for widely recognized problems, such as small sample size, non-uniform expected frequencies, and one d.f.
>>>> From the help page:
>>>> "In the goodness-of-fit case simulation is done by random sampling from the discrete distribution specified by p, each sample being of size n = sum(x)."
>>>> Is the thinking that random sampling completely obviates the need for corrections?
>>>    Yes.
>>>> Wouldn't the same statistical issues still apply
>>>    No.
>>>> (e.g. poor continuity approximation with one D.F.,
>>>    There are no degrees of freedom involved.  There is no continuity involved.
>>>    The observed test statistic (say "Stat") is compared with a number of
>>>    test statistics, Stat_1, ..., Stat_N, calculated from data sets simulated under
>>>    the null hypothesis.  If the null is true, then Stat and Stat_1, ..., Stat_N are
>>>    all of ``equal status''.  If there are m values of the Stat_i which are greater
>>>    than Stat, then the ``probability of observing, under the null hypothesis,
>>>    data as extreme as, or more extreme than, what you actually observed''
>>>    is the probability of randomly selecting one of a specified set of m+1 ``slots''
>>>    out of a total of N+1 slots (where each slot has probability 1/(N+1)).
>> But if the test is truly a chi-square Goodness of Fit (GoF) test, then:
>> (1) the test compares the Stat to a chi-square distribution with (k-1) degrees of
>> freedom, for k frequency categories.
>> (2) the shape of the distribution depends on the degrees of freedom
>> (3) continuity is an issue, because the values for Stat are discrete, whereas
>> the chi-square distribution is continuous. Therefore the p-value is not exact.
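In R notation, the standard calculation described in (1)-(3) is the following (a sketch only; the counts are invented for illustration):

```r
# Classical asymptotic GoF test: k categories, so k - 1 degrees of freedom
x <- c(12, 5, 3)                     # observed counts (toy data)
p <- c(0.5, 0.3, 0.2)                # hypothesised probabilities
e <- sum(x) * p                      # expected frequencies from p
stat <- sum((x - e)^2 / e)           # Pearson chi-squared statistic
pchisq(stat, df = length(x) - 1, lower.tail = FALSE)   # asymptotic p-value
```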
>> If I understand your description, the function generates a probability
>> distribution by sampling a set of values with equal probability. The probability of
>> any value in the final distribution depends on its frequency in the sample set. This
>> is a standard Monte Carlo method, but the PDF it generates is not chi-square (the
>> resulting distribution depends upon the distribution of the sample set). Clearly,
>> using a permutation method such as this to determine the statistical significance of
>> a given test statistic is non-parametric and in particular, not chi-square.
>> If what you say is true, then the chisq.test function in R does not implement a true GoF
>> test. I don't understand why the GoF test is not computed the standard way. It should
>> compute the chi-square statistic, using the values of p to generate expected
>> frequencies. And it should use a continuity correction when degrees of freedom = 1.
>> Thank you for taking the time to respond to my message.
>> Cheers,
>> Mike
>>>    Thus the p-value is (exactly) equal to (m+1)/(N+1).
>>>    The only restriction is that there be no ties amongst the values of Stat
>>>    and Stat_1, ..., Stat_N.  There being ties is of fairly low probability, but is
>>>    not of zero probability --- since there is a finite number of possible samples
>>>    and hence of statistic values.  So this restriction is a mild worry.
>>>    However a ``continuity correction'' would be of no help whatsoever.
>>>> problems with non-uniform expected frequencies, etc) with random sampling?
>>>    Don't understand what you mean by this.
>>>        cheers,
>>>            Rolf Turner
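Rolf's ``slots'' argument can be spelled out directly (a sketch only, with invented counts; here ties count as ``at least as extreme'', and rmultinom() mirrors the help page's description of sampling from the discrete distribution p):

```r
# Pearson statistic for observed counts x and null probabilities p
stat <- function(x, p) { e <- sum(x) * p; sum((x - e)^2 / e) }

x <- c(12, 5, 3)           # observed counts (toy data)
p <- c(0.5, 0.3, 0.2)      # null probabilities
N <- 1999                  # number of simulated data sets

# Draw N samples of size n = sum(x) from the discrete distribution p
sims <- rmultinom(N, size = sum(x), prob = p)

# One simulated statistic per sample: Stat_1, ..., Stat_N
stats <- apply(sims, 2, stat, p = p)

# m simulated values at least as extreme as the observed Stat;
# each of the N + 1 "slots" has probability 1/(N + 1)
m <- sum(stats >= stat(x, p))
(m + 1) / (N + 1)          # the Monte Carlo p-value
```

Note that no chi-squared distribution, and hence no degrees-of-freedom or continuity question, appears anywhere in this calculation.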

Michael M. Fuller, Ph.D.
Department of Biology
University of New Mexico
Albuquerque, NM
EMAIL: mmfuller at unm.edu
WEB: biology.unm.edu/mmfuller
