RATERS2
This program is primarily intended to measure interrater reliability in the situation where two raters classify a sample of stimuli into a restricted number of nominal categories. The categories must be mutually exclusive and exhaustive. The raters are assumed to make their classifications independently, in the sense that they exchange no information and receive no common information beyond being presented with the same cases and, possibly, the same instructions.
The program also gives information:
- on the reliability of the two individual raters
- on the 'true' distribution of the categories
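As a concrete illustration of this setup, the most widely used agreement statistic for two raters and nominal categories is Cohen's kappa. The Python sketch below computes it from a hypothetical classification table; the table, category counts, and function name are illustrative assumptions, not taken from RATERS2, and the program's own estimates may be based on a different model.

# Illustration only: Cohen's kappa for two raters and nominal categories.
# The counts and the function name are hypothetical, not from RATERS2.

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table
    (rows = rater A's categories, columns = rater B's categories)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed proportion of agreement: the diagonal cells.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Expected agreement under independence: products of the marginals.
    row_marg = [sum(table[i]) / n for i in range(k)]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row_marg[i] * col_marg[i] for i in range(k))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 3 mutually exclusive, exhaustive categories.
table = [
    [20,  5,  1],
    [ 4, 30,  3],
    [ 2,  6, 29],
]
print(f"Cohen's kappa = {cohens_kappa(table):.3f}")  # about 0.68 here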
Downloadable files:
Documentation: raters2.pdf (pdf, 470 kB)
Program: raters2.zip (zip, 763 kB)
Example 1: RatersRaw6Ratings.zip (zip, 7.4 kB)
Example 2: RatersMatrix.zip (zip, 13 kB)