Chapter 19

Intrarater Reliability

Kilem L. Gwet

19.1 Introduction

The notion of intrarater reliability will be of interest to researchers concerned about the reproducibility of clinical measurements. A rater in this context refers to any data-generating system, including individuals and laboratories; intrarater reliability is a metric of a rater's self-consistency in the scoring of subjects. The importance of data reproducibility stems from the need for scientific inquiries to be based on solid evidence: reproducible clinical measurements are recognized as representing a well-defined characteristic of interest. Reproducibility is a source of concern because of the extensive manipulation of medical equipment in test laboratories and the complexity of the judgmental processes involved in clinical data gathering. Grundy [1] stresses the importance of choosing a good laboratory when measuring cholesterol levels to ensure their validity and reliability.

This chapter discusses some basic methodological aspects of intrarater reliability estimation. For continuous data, the intraclass correlation (ICC) is the measure of choice and is discussed in Section 19.2, "Intrarater Reliability for Continuous Scores." For nominal data, the kappa coefficient of Cohen [2] and its many variants are the preferred statistics; they are discussed in Section 19.3, "Nominal Scale Score Data." Section 19.4 is devoted to some extensions of kappa-like statistics aimed at intrarater ...
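To make the nominal-data case concrete, the following is a minimal sketch of Cohen's kappa [2] applied to intrarater reliability: the "two raters" are the same rater scoring the same subjects on two occasions. The function name and the illustrative data are assumptions for this sketch, not part of the chapter.

```python
from collections import Counter

def cohen_kappa(trial1, trial2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance, computed
    from each trial's marginal category frequencies."""
    if len(trial1) != len(trial2):
        raise ValueError("both trials must score the same subjects")
    n = len(trial1)
    # Observed agreement: fraction of subjects scored identically on both occasions.
    p_o = sum(a == b for a, b in zip(trial1, trial2)) / n
    # Chance agreement: sum over categories of the product of marginal proportions.
    m1, m2 = Counter(trial1), Counter(trial2)
    p_e = sum(m1[c] * m2[c] for c in set(m1) | set(m2)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: one rater classifies 4 subjects as "+" or "-" twice.
kappa = cohen_kappa(["+", "+", "-", "-"], ["+", "+", "-", "+"])
```

Here the rater agrees with herself on 3 of 4 subjects (p_o = 0.75) while chance alone would yield p_e = 0.5, giving kappa = 0.5; a kappa of 1 indicates perfect self-consistency, while 0 indicates agreement no better than chance.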
