[R] Multiple comparisons in a nonparametric case

Marco Chiarandini machud at intellektik.informatik.tu-darmstadt.de
Wed Sep 8 11:18:48 CEST 2004


Thanks Rolf and Thomas,


> It looks to me like what you are doing is trying to judge
> significance of differences by non-overlap of single-sample
> confidence intervals.  While this is appealing, it's not quite
> right.


Yes, this is what I am trying to do. Apparently, when the number of
replicates is the same for every experimental unit and the experiment is
balanced, the CIs should have the same width for all pairs of samples,
so it is somewhat like having single-sample CIs.
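
(To make explicit what I mean by the overlap check, this is roughly what
I do in R, with made-up mean ranks and a common half-width:)

## made-up example: mean ranks of k = 4 treatments and a common
## confidence half-width hw (the same for all, since the design is balanced)
meanrank <- c(A = 2.1, B = 2.4, C = 3.2, D = 2.3)
hw       <- 0.35
## two treatments are declared different when the intervals
## [meanrank - hw, meanrank + hw] do not overlap, i.e. when the
## difference of the mean ranks exceeds 2 * hw
dif <- abs(outer(meanrank, meanrank, "-"))
dif > 2 * hw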


> I just looked into my copy of Applied Nonparametric Statistics
> (second ed.) by Wayne W. Daniel (Duxbury, 1990) but that
> only deals with the situation where there is a single replicate
> per block-treatment combination (whereas you have 10 reps)
> and block-treatment interaction is assumed to be non-existent.


The problems (or, rather, the problem instances) are my blocking factor,
but this factor has a significant interaction with the treatments in the
ANOVA model.


> The method that Daniel prescribes in this simple setting seems to be
> no more than applying the Bonferroni method of multiple comparisons.
> (Daniel does not say; his book is very much a cook-book.)  So you
> might simply try Bonferroni --- i.e. do all k-choose-2 pairwise
> comparisons between treatments (using the appropriate 2 sample method
> for each comparison) doing each comparison at the alpha/k-choose-2
> significance level.  Where k = the number of treatments = 4 in your
> case.  This method is not going to be super-powerful but it is
> sometimes surprizing how well Bonferroni stacks up against more
> ``sophisticated'' methods.


I knew about Bonferroni (and doing the k-choose-2 pairwise tests
themselves is not the problem, see the sketch below), but I am confused.
I actually have two references: Conover, "Practical Nonparametric
Statistics" (page 371), and Sheskin, "Handbook of Parametric and
Nonparametric Statistical Procedures" (page 675). Both books deal with
multiple comparisons in the setting where the Friedman test would be
appropriate, but the formulae they give are different, and the CIs I
obtain from them are also different.
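
(For the pairwise comparisons I would do something like the following in
R; the names are made up for the example: dat$y is the response, dat$alg
the treatment with k = 4 algorithms, dat$inst the blocking factor, with
10 replicates per cell collapsed to their median:)

## rough sketch of the k-choose-2 Bonferroni comparisons
## collapse the replicates to one value per instance x algorithm
m <- tapply(dat$y, list(dat$inst, dat$alg), median)

## all pairwise paired Wilcoxon tests, Bonferroni-adjusted
algs <- colnames(m)
prs  <- combn(algs, 2)
p <- apply(prs, 2, function(ij)
       wilcox.test(m[, ij[1]], m[, ij[2]], paired = TRUE)$p.value)
names(p) <- paste(prs[1, ], prs[2, ], sep = "-")
p.adjust(p, method = "bonferroni")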

Sheskin, citing various sources (among them Daniel 1990), uses a formula
based on the normal distribution (z) and adjusts the alpha value
according to Bonferroni (strangely, no sample statistic appears in the
formula). Conover (which is also a good reference) uses a formula based
on Student's t distribution, but does not adjust alpha, not even in the
example he provides where 4 treatments are compared pairwise.
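
(This is how I read the two procedures, sketched in R for the standard
Friedman setting with one observation per block-treatment cell, ranks
taken within each block and R_j the rank sum of treatment j; please
correct me if I have extracted the constants from the books incorrectly:)

## critical differences for |R_i - R_j| (rank sums), as I read the two books:
## b blocks, k treatments, A1 = sum of all squared within-block ranks
friedman.cd <- function(R, b, k, A1, alpha = 0.05) {
  ncomp <- choose(k, 2)
  ## Sheskin (after Daniel): normal quantile with Bonferroni-adjusted alpha;
  ## no sample statistic enters, the rank variance is fixed under the null
  cd.z <- qnorm(1 - alpha / (2 * ncomp)) * sqrt(b * k * (k + 1) / 6)
  ## Conover: Student's t with (b-1)(k-1) df, alpha not adjusted,
  ## and the observed squared ranks enter through A1
  cd.t <- qt(1 - alpha / 2, (b - 1) * (k - 1)) *
          sqrt(2 * (b * A1 - sum(R^2)) / ((b - 1) * (k - 1)))
  c(sheskin = cd.z, conover = cd.t)
}

With k = 4 and alpha = 0.05 the Bonferroni-adjusted quantile is
qnorm(1 - 0.05/12), about 2.64, against roughly 2.0 for the t quantile,
so the quantiles alone account for a factor of only about 1.3; the rest
of the discrepancy must come from the square-root terms.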

The CIs I obtain with the Conover procedure are much smaller than with
Sheskin's, and this happens in spite of the alpha adjustment in Sheskin.
Smaller CIs are nicer for me, because I can distinguish differences
better, but a factor of about 3 between the two makes me doubt that I
can really use Conover.

What is your opinion?


Thanks again for the help,

Regards,

	Marco




--
Marco Chiarandini, Fachgebiet Intellektik, Fachbereich Informatik,
Technische Universität Darmstadt, Hochschulstraße 10,
D-64289 Darmstadt - Germany, Office: S2/02 Raum E317
Tel: +49 (0)6151 16-6802 Fax: +49 (0)6151 16-5326
email: machud at intellektik.informatik.tu-darmstadt.de
web page: http://www.intellektik.informatik.tu-darmstadt.de/~machud



