[R] Power analysis for MANOVA?

Adam D. I. Kramer adik at ilovebacon.org
Mon Feb 2 20:26:21 CET 2009


Hi Rick,

 	I understand the authors' point, and I agree that a post-hoc power
analysis tells me essentially nothing beyond the p-value and the observed
statistic for the test I want to compute power for.

 	Beta (the Type II error rate) is a simple function of alpha, the
p-value, and the observed statistic.
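
 	For concreteness, here is a minimal base-R sketch of that point for
an ordinary F test: the "observed power" (and hence beta) is completely
determined by alpha, the observed statistic, and the degrees of freedom, so
it adds nothing beyond the p-value. The numbers below are made up purely
for illustration.

alpha <- 0.05
df1   <- 3     # hypothesis df (hypothetical)
df2   <- 96    # error df (hypothetical)
Fobs  <- 2.1   # observed F statistic (hypothetical)

p     <- pf(Fobs, df1, df2, lower.tail = FALSE)  # the usual p-value
crit  <- qf(1 - alpha, df1, df2)                 # critical value at alpha
# One common convention: estimate the noncentrality as lambda = F * df1
power <- pf(crit, df1, df2, ncp = Fobs * df1, lower.tail = FALSE)
beta  <- 1 - power
c(p = p, observed.power = power, beta = beta)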

 	My intention is, as I mentioned in my response to Stephan Kolassa,
to transform my p-value and statistic into a form of effect size: the sample
size necessary to attain significance at alpha = .05. This communicates no
additional information; it is simply a mathematical re-representation of my
data in a form I believe my readers will find more informative and useful.
In other words, no more information is *encoded*, but more information is
*communicated*, just as with any effect size measure.
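
 	As a sketch of that re-expression (under the simplifying assumption
that the MANOVA is summarized by its approximating F test, with Cohen's f^2
held fixed at its observed value; the f2 and u values below are
hypothetical):

f2 <- 0.04  # observed Cohen's f^2 = R^2 / (1 - R^2) (hypothetical)
u  <- 3     # numerator (hypothesis) df (hypothetical)

n.for.significance <- function(f2, u, alpha = 0.05, n.max = 1e5) {
  for (n in (u + 2):n.max) {
    v <- n - u - 1                 # error df, assuming one intercept
    F.implied <- f2 * v / u        # F implied by holding f^2 fixed
    if (F.implied >= qf(1 - alpha, u, v)) return(n)
  }
  NA                               # no n up to n.max reaches significance
}
n.for.significance(f2, u)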

 	If you have any suggestions for a more reliable effect size for
MANOVA that is *also* commonly known in the social psychology community
(e.g., a correlation or a Cohen's d analogue), I'm interested--but the
multivariate nature of the beast makes these more or less impossible to
translate.

 	The poster I was asking about has now been printed; we reported the
multivariate R-squared using the techniques in Cohen (1988), though I expect
to spend a lot of time explaining what that statistic means in a
multivariate context rather than describing the results of the study.
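
 	For anyone following along, a minimal sketch of that computation in
R, using the built-in iris data as a stand-in for the actual poster data.
The conversion eta^2 = V / s, with s = min(number of DVs, hypothesis df), is
one common multivariate R-squared analogue in the spirit of Cohen (1988).

fit <- manova(cbind(Sepal.Length, Sepal.Width) ~ Species, data = iris)
tab  <- summary(fit, test = "Pillai")$stats
V    <- tab[1, "Pillai"]      # Pillai's trace for the effect of interest
s    <- min(2, tab[1, "Df"])  # s = min(# of DVs, hypothesis df); 2 DVs here
eta2 <- V / s                 # multivariate R-squared analogue
f2   <- eta2 / (1 - eta2)     # Cohen's f^2, if a further conversion helps
c(Pillai = V, eta.squared = eta2, f.squared = f2)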

Cordially,
Adam D. I. Kramer

On Sun, 1 Feb 2009, Rick Bilonick wrote:

> On Wed, 2009-01-28 at 21:21 +0100, Stephan Kolassa wrote:
>> Hi Adam,
>>
>> first: I really don't know much about MANOVA, so I sadly can't help you
>> without learning about it and Pillai's V... which I would be glad to do,
>> but I really don't have the time right now. Sorry!
>>
>> Second: you seem to be doing a kind of "post-hoc power analysis": "my
>> result isn't significant; perhaps that's due to low power? Let's look at
>> the power of my experiment!" My impression is that "post-hoc power
>> analysis" and its interpretation are, shall we say, not entirely accepted
>> within the statistical community; see:
>>
>> Hoenig, J. M., & Heisey, D. M. (2001). The abuse of power: The pervasive
>> fallacy of power calculations for data analysis. The American
>> Statistician, 55(1), 1-6.
>>
>> And this:
>> http://staff.pubhealth.ku.dk/~bxc/SDC-courses/power.pdf
>>
>> However, I am sure that lots of people can discuss this more competently
>> than me...
>>
>> Best wishes
>> Stephan
>>
>
> The point of the article was that doing a so-called "retrospective"
> power analysis leads to logical contradictions with the confidence
> intervals and p-values from the analysis of the data. In other words,
> DON'T DO IT! All the information is contained in the confidence
> intervals, which are based on the observed data; an after-the-fact
> "power analysis" cannot provide any insight -- it's not data analysis.
>
> Rick B.
>
>



