[R] stratified kappa (measure agreement or interrater reliability)?

David Winsemius dwinsemius at comcast.net
Fri Oct 31 15:30:30 CET 2008


On Oct 30, 2008, at 11:38 PM, FD wrote:

> Hi All:
> Could anyone point me to a package that can calculate stratified
> kappa? My design is like this: 4 raters, 30 types of diagnosis
> scores, 20 patients. Each rater will rate each patient on each type
> of diagnosis score. The raters' values are nominal.
>
> I know I can measure the agreement between raters for each type of
> diagnosis score, e.g., calculate 30 kappa values. My problem is that
> I want an overall agreement measure (a single value and its
> significance over chance). Could anyone help me with this?

I am not a statistician or a psychometrician, so I have no experience
with any of the packages mentioned below. A Google search produced this
link:
http://www.mail-archive.com/r-help@stat.math.ethz.ch/msg89858.html
... and looking at the psy package on CRAN, I see lkappa(), Light's
kappa for n raters, which seems to meet your specifications.
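
For example, a minimal sketch, assuming only that psy is installed; the
data are simulated purely to illustrate the call, and the category codes
1:5 are placeholders:

  library(psy)

  set.seed(42)
  n.patients <- 20; n.scores <- 30; n.raters <- 4

  ## one row per rated unit (patient x diagnosis-score combination),
  ## one column per rater, nominal ratings
  ratings <- matrix(sample(1:5, n.patients * n.scores * n.raters,
                           replace = TRUE),
                    ncol = n.raters)

  ## Light's kappa: the mean of the pairwise Cohen's kappas over the raters
  lkappa(ratings, type = "Cohen")

Note that, as far as I can tell, lkappa() returns only the point
estimate and no significance test, so the "over chance" part of your
question might require bootstrapping or another package.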

The concord package may also have the facilities, but I am not able to
tell from its documentation. Perhaps Jim Lemon could be asked.

-- 
David Winsemius

