Improving course evaluation processes in higher education institutions: a modular system approach


Kocaoglu I., KARATAŞ E.

PEERJ COMPUTER SCIENCE, vol. 11, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 11
  • Publication Date: 2025
  • DOI: 10.7717/peerj-cs.3110
  • Journal Name: PEERJ COMPUTER SCIENCE
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, Directory of Open Access Journals
  • Affiliated with Ankara University: Yes

Abstract

Course and instructor evaluations (CIE) are essential tools for assessing educational quality in higher education. However, traditional CIE systems often suffer from inconsistencies between structured responses and open-ended feedback, leading to unreliable insights and increased administrative workload. The purpose of this study is to design and evaluate, using the Design Science Research (DSR) approach, a modular system that leverages sentiment analysis and inconsistency detection to enhance the reliability and efficiency of CIE processes in higher education institutions.

Background: Improving the reliability of CIE data is crucial for informed decision-making in higher education. Existing methods fail to address discrepancies between numerical scores and textual feedback, resulting in misleading evaluations. This study proposes a system that identifies and excludes inconsistent data, providing more reliable insights.

Methods: Following the Design Science Research Methodology (DSRM), a system architecture was developed with five modules: data collection, preprocessing, sentiment analysis, inconsistency detection, and reporting. A dataset of 13,651 anonymized Turkish CIE records was used to train and evaluate machine learning algorithms, including support vector machines, naive Bayes, random forest, decision trees, k-nearest neighbors, and OpenAI's GPT-4 Turbo Preview model. Sentiment analysis results from open-ended responses were compared with structured responses to identify inconsistencies.

Results: The GPT-4 Turbo Preview model outperformed the traditional algorithms, achieving 85% accuracy, 88% precision, and 95% recall. Applying a prototype of the system to 431 CIEs revealed a 37% inconsistency rate. By excluding inconsistent data, the system generated reliable reports with actionable insights into course and instructor performance.

Conclusions: The modular system effectively addresses inconsistencies in CIE processes, offering a scalable and adaptable solution for higher education institutions. By integrating advanced machine learning techniques, it improves the accuracy and reliability of evaluation reports and supports data-driven decision-making. Future work will focus on refining sentiment analysis for neutral comments and broadening the system's applicability to diverse educational contexts. This approach represents a significant advancement in leveraging technology to improve educational quality.

Subjects: Algorithms and Analysis of Algorithms, Artificial Intelligence, Data Mining and Machine Learning, Natural Language and Speech, Sentiment Analysis
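
The abstract describes inconsistency detection as a comparison between the sentiment predicted from an open-ended comment and the polarity implied by the structured responses, but the implementation itself is not published here. The following is a minimal Python sketch of that comparison under stated assumptions: the `CieRecord` shape, the 5-point Likert thresholds, and `is_inconsistent` are all illustrative names, not the authors' code.

```python
from dataclasses import dataclass

# Hypothetical record shape; the paper's actual schema is not published.
@dataclass
class CieRecord:
    likert_mean: float    # mean of the structured 5-point responses
    text_sentiment: str   # "positive" | "negative" | "neutral", predicted from the comment

def likert_polarity(mean: float) -> str:
    """Map a 5-point Likert mean to a coarse polarity (illustrative thresholds)."""
    if mean >= 3.5:
        return "positive"
    if mean <= 2.5:
        return "negative"
    return "neutral"

def is_inconsistent(rec: CieRecord) -> bool:
    """Flag records whose numeric polarity contradicts the text sentiment.

    Neutral on either side is treated as compatible here; the abstract notes
    that neutral comments remain a target for future refinement.
    """
    a, b = likert_polarity(rec.likert_mean), rec.text_sentiment
    if "neutral" in (a, b):
        return False
    return a != b

records = [
    CieRecord(4.6, "positive"),  # consistent
    CieRecord(4.8, "negative"),  # high score, negative comment -> flagged
    CieRecord(1.9, "positive"),  # low score, positive comment -> flagged
]
flagged = [r for r in records if is_inconsistent(r)]
print(f"inconsistency rate: {len(flagged) / len(records):.0%}")  # 67% on this toy sample
```

Records flagged this way would be excluded before reporting, which is how the prototype arrived at its reports on the consistent subset; the real system's rule for neutral cases may differ from the permissive one sketched above.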
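
The reported comparison of classical learners on accuracy, precision, and recall can likewise be sketched with scikit-learn. This is a toy stand-in, not the study's pipeline: the TF-IDF features, the tiny synthetic Turkish corpus, and the estimator settings are assumptions, and neither the 13,651-record dataset nor the GPT-4 Turbo Preview evaluation is reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Toy stand-in corpus; the study's Turkish CIE records are not public.
texts = ["ders çok faydalıydı", "anlatım yetersizdi", "harika bir ders",
         "hiçbir şey öğrenmedim", "çok iyi", "kötüydü"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20  # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

# The five classical algorithms named in the abstract, with default-ish settings.
models = {
    "SVM": LinearSVC(),
    "Naive Bayes": MultinomialNB(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)  # TF-IDF is an assumed featurizer
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)
    print(f"{name:13s} acc={accuracy_score(y_test, pred):.2f} "
          f"prec={precision_score(y_test, pred):.2f} "
          f"rec={recall_score(y_test, pred):.2f}")
```

On real data, the same three metrics are what the study uses to rank models, with the GPT-4 Turbo Preview classifier (queried via API rather than trained locally) reported as the best performer at 85% accuracy, 88% precision, and 95% recall.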