Five Ways to Look at Cohen's Kappa

Matthijs J Warrens

Abstract

The kappa statistic is commonly used to quantify inter-rater agreement on a nominal scale. In this review article we discuss five interpretations of this popular coefficient. Kappa is a function of the proportions of observed and expected agreement, and it may be interpreted as the proportion of agreement corrected for chance. Furthermore, kappa may be interpreted as the average category reliability and as an intraclass correlation.
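
As a point of reference, the chance-corrected interpretation mentioned above can be written as a single formula. The notation below (p_o for observed agreement, p_e for chance-expected agreement) is assumed for illustration and is not taken from the article itself:

% Cohen's kappa as chance-corrected agreement:
% p_o = observed proportion of agreement between the two raters,
% p_e = proportion of agreement expected by chance alone.
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]

Read this way, kappa rescales the observed agreement so that 0 corresponds to purely chance-level agreement and 1 to perfect agreement.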
