Article

An exact bootstrap confidence interval for kappa in small samples

Journal of the Royal Statistical Society: Series D (The Statistician), 51 (4): 467–478 (2002)

Abstract

Agreement between a pair of raters for binary outcome data is typically assessed by using the κ-coefficient. When the total sample size is small to moderate and the proportion of agreement is high, standard methods of calculating confidence intervals for κ perform poorly. To improve the coverage of confidence intervals for κ, Lee and Tu formed an interval based on the profile variance of the estimate of the κ-coefficient, which requires the solution to a cubic polynomial. They showed in simulations that their method gave the best coverage probability among available methods, performing well except when the proportion of agreement is high and the sample size is small. Here, we propose a method that picks up where Lee and Tu's method leaves off, namely when the proportion of agreement is high and the sample size is small. In particular, we propose the use of the bootstrap to form a confidence interval for κ. With a 2×2 table and sample sizes less than 200, instead of a Monte Carlo bootstrap, one can easily calculate the 'exact' bootstrap distribution of the estimate of κ and use this distribution to calculate confidence intervals. We perform a simulation and show that the bootstrap gives slightly better coverage than Lee and Tu's method.
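The exact bootstrap the abstract describes can be sketched as follows: since each bootstrap resample of the n subjects is a multinomial draw over the four cells of the 2×2 table, one can enumerate every possible resampled table, weight it by its multinomial probability under the observed cell proportions, and read a percentile interval off the resulting exact distribution of κ̂. The sketch below is illustrative only and is not the authors' code; the function names, the percentile-interval choice, and the convention of setting κ = 1 for degenerate tables with perfect agreement are all assumptions.

```python
from math import comb

def kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    if pe == 1.0:
        # Degenerate resample: all mass in one agreement cell.
        # Returning 1 here is a convention (an assumption), not from the paper.
        return 1.0
    return (po - pe) / (1 - pe)

def exact_bootstrap_ci(a, b, c, d, alpha=0.05):
    """Exact (enumerated) bootstrap percentile CI for kappa.

    Enumerates every 2x2 table (a*, b*, c*, d*) with a*+b*+c*+d* = n,
    i.e. every possible bootstrap resample, weighting each table by its
    multinomial probability under the observed cell proportions.
    """
    n = a + b + c + d
    p = [a / n, b / n, c / n, d / n]
    dist = []  # (kappa value, probability) pairs
    for a2 in range(n + 1):
        for b2 in range(n - a2 + 1):
            for c2 in range(n - a2 - b2 + 1):
                d2 = n - a2 - b2 - c2
                # Multinomial coefficient n! / (a2! b2! c2! d2!)
                coef = comb(n, a2) * comb(n - a2, b2) * comb(n - a2 - b2, c2)
                prob = coef * p[0]**a2 * p[1]**b2 * p[2]**c2 * p[3]**d2
                if prob > 0:
                    dist.append((kappa(a2, b2, c2, d2), prob))
    dist.sort()
    # Walk the sorted exact distribution to find the weighted percentiles.
    lo = hi = dist[-1][0]
    lo_found = False
    cum = 0.0
    for k, pr in dist:
        cum += pr
        if not lo_found and cum >= alpha / 2:
            lo, lo_found = k, True
        if cum >= 1 - alpha / 2:
            hi = k
            break
    return lo, hi
```

For n up to a few hundred the enumeration is cheap (the number of tables grows as roughly n³/6), which is why the abstract notes the exact calculation is feasible for sample sizes below 200.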

Users

  • @stefano
  • @seandalai
