Comparison of Inter-Rater Reliability Techniques in Performance-Based Assessment



Mancar S. A., GÜLLEROĞLU H. D.

INTERNATIONAL JOURNAL OF ASSESSMENT TOOLS IN EDUCATION, vol.9, no.2, pp.515-533, 2022 (ESCI)

  • Publication Type: Article / Full Article
  • Volume: 9 Issue: 2
  • Publication Date: 2022
  • DOI: 10.21449/ijate.993805
  • Journal Name: INTERNATIONAL JOURNAL OF ASSESSMENT TOOLS IN EDUCATION
  • Journal Indexes: Emerging Sources Citation Index (ESCI), ERIC (Education Resources Information Center), TR DİZİN (ULAKBİM)
  • Page Numbers: pp.515-533
  • Keywords: Inter-rater reliability, Performance-based assessment, Generalizability theory, International baccalaureate diploma programme, Scientific literacy, AGREEMENT, INFORMATION, LITERACY
  • Ankara University Affiliated: Yes

Abstract

The aim of this study is to analyse the importance of the number of raters and to compare the results obtained with techniques based on Classical Test Theory (CTT) and Generalizability (G) Theory. The Kappa and Krippendorff's alpha techniques, based on CTT, were used to determine inter-rater reliability. The data for this descriptive research consist of twenty individual investigation performance reports prepared by learners in the International Baccalaureate Diploma Programme (IBDP), together with the scores of the five raters who rated these reports. The raters used an analytical rubric developed by the International Baccalaureate Organization (IBO) as the scoring tool. The CTT-based results show that the Kappa and Krippendorff's alpha statistics failed to provide information about the sources of error causing disagreement on the criteria. The analyses based on G Theory, by contrast, provided comprehensive information about the sources of error and showed that increasing the number of raters would also increase reliability. The raters nevertheless raised the point that it is important to refine the descriptors of the criteria in the rubric.
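As an illustration of the kind of CTT-based agreement statistic the study compares, the sketch below computes Cohen's kappa for two raters from scratch. The rubric scores shown are hypothetical examples, not data from the study, and this is a minimal two-rater version rather than the multi-rater or Krippendorff's alpha variants actually used.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assigned a nominal category to the same set of items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (0-3) two raters gave the same six reports.
scores_a = [2, 3, 1, 2, 0, 3]
scores_b = [2, 3, 2, 2, 0, 3]
print(round(cohens_kappa(scores_a, scores_b), 3))  # → 0.76
```

A value of 1 indicates perfect agreement and 0 indicates agreement no better than chance; as the abstract notes, such a coefficient summarizes overall agreement but cannot separate out the individual sources of error the way a G-Theory decomposition can.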