RATERS3

The program Raters3 is primarily meant to measure inter-rater reliability in the situation where three raters classify a sample of stimuli, here called cases, into a limited number of nominal categories. The categories must be mutually exclusive and exhaustive. The raters are assumed to make their classifications independently: they exchange no information and receive no common information, except that they are presented with the same cases and may have received the same instructions.
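The page does not say which agreement statistic Raters3 reports. For this design (a fixed number of raters, nominal, mutually exclusive categories) a common chance-corrected measure is Fleiss' kappa; the Python sketch below illustrates that statistic under the stated assumptions and is not necessarily what Raters3 computes. The function name and the toy counts are invented for the example.

    import numpy as np

    def fleiss_kappa(ratings):
        """Fleiss' kappa for N cases, each rated by the same number of raters.

        ratings: (N, k) array; ratings[i, j] counts the raters who put
        case i into category j. Every row must sum to the rater count r.
        """
        ratings = np.asarray(ratings, dtype=float)
        n_cases = ratings.shape[0]
        r = ratings.sum(axis=1)[0]                 # raters per case (3 here)
        p_j = ratings.sum(axis=0) / (n_cases * r)  # marginal category proportions
        p_i = (np.sum(ratings ** 2, axis=1) - r) / (r * (r - 1))  # per-case agreement
        p_bar, p_e = p_i.mean(), np.sum(p_j ** 2)  # observed vs. chance agreement
        return (p_bar - p_e) / (1 - p_e)

    # Five cases, three raters, three categories: each row counts how many
    # of the three raters chose each category for that case.
    counts = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1], [0, 0, 3]]
    print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")

With these toy counts the statistic comes out near 0.49, i.e. moderate agreement beyond chance.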

The program also estimates

  • the reliability of the three individual raters
  • the 'true' distribution of the categories (see the sketch after this list)
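How these two estimates are obtained is not stated on this page; the details are in the pdf documentation below. A classical way to recover per-rater reliability and a latent 'true' category distribution from independent ratings is a latent class model in the style of Dawid and Skene (1979), fitted with EM. The sketch below is an illustration under that assumption, not the algorithm Raters3 implements; the function name and the toy labels are hypothetical.

    import numpy as np

    def dawid_skene(labels, k, n_iter=100, seed=0):
        """Latent class EM (Dawid-Skene style) for a fixed set of raters.

        labels: (N, m) int array; labels[i, r] in 0..k-1 is rater r's
        category for case i. Returns (pi, theta): pi is the estimated
        'true' category distribution, and theta[r] is the k-by-k matrix
        whose entry [c, j] estimates P(rater r says j | true category c).
        """
        labels = np.asarray(labels)
        n, m = labels.shape
        rng = np.random.default_rng(seed)
        # Initialize posteriors with soft vote counts plus noise to break ties.
        q = np.zeros((n, k))
        for r in range(m):
            q[np.arange(n), labels[:, r]] += 1.0
        q += rng.uniform(0.0, 0.1, size=q.shape)
        q /= q.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            # M-step: class prior and one confusion matrix per rater.
            pi = q.mean(axis=0)
            theta = np.zeros((m, k, k))
            for r in range(m):
                for j in range(k):
                    theta[r, :, j] = q[labels[:, r] == j].sum(axis=0)
                theta[r] = (theta[r] + 1e-6) / (theta[r] + 1e-6).sum(axis=1, keepdims=True)
            # E-step: posterior over each case's latent true category.
            log_q = np.log(pi + 1e-12) + sum(
                np.log(theta[r][:, labels[:, r]]).T for r in range(m)
            )
            log_q -= log_q.max(axis=1, keepdims=True)
            q = np.exp(log_q)
            q /= q.sum(axis=1, keepdims=True)
        return pi, theta

    # Toy data: five cases, three raters, three categories.
    labels = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1], [2, 2, 1], [2, 2, 2]])
    pi, theta = dawid_skene(labels, k=3)
    print("estimated 'true' distribution:", np.round(pi, 3))
    print("rater 0 confusion matrix:\n", np.round(theta[0], 3))

The diagonal entry theta[r][c, c] can be read as a reliability for rater r: the estimated probability that the rater labels a true-category-c case correctly.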

Downloadable files:

Documentation: raters3.pdf (pdf, 492 kB)
Program: raters3.zip (zip, 765 kB)

