A comprehensive evaluation of oversampling techniques for enhancing text classification performance


Taskiran S. F., Türkoğlu B., Kaya E., Asuroglu T.

Scientific Reports, vol. 15, no. 1, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 15 Issue: 1
  • Publication Date: 2025
  • DOI: 10.1038/s41598-025-05791-7
  • Journal Name: Scientific Reports
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, BIOSIS, Chemical Abstracts Core, MEDLINE, Veterinary Science Database, Directory of Open Access Journals
  • Keywords: Imbalanced datasets, Synthetic minority over-sampling technique (SMOTE), Text classification
  • Ankara University Affiliated: Yes

Abstract

Class imbalance is a common and critical challenge in text classification tasks, where the underrepresentation of certain classes often impairs the ability of classifiers to learn minority-class patterns effectively. In line with the "garbage in, garbage out" principle, even high-performing models may fail when trained on skewed distributions. To address this issue, this study investigates the impact of oversampling techniques, specifically the Synthetic Minority Over-sampling Technique (SMOTE) and thirty of its variants, on two benchmark text classification datasets: TREC and Emotions. Each dataset was vectorized using the MiniLMv2 transformer model to obtain semantically rich representations, and classification was performed using six machine learning algorithms. Balanced and imbalanced scenarios were then compared in terms of F1-score and balanced accuracy. To the best of our knowledge, this work constitutes the first large-scale, systematic benchmarking of SMOTE-based oversampling methods in the context of transformer-embedded text classification. Furthermore, the statistical significance of the observed performance differences was validated using the Friedman test. The results provide practical insights into the selection of oversampling techniques tailored to dataset characteristics and classifier sensitivity, supporting more robust and fair learning in imbalanced natural language processing tasks.
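For orientation, the embed-oversample-classify-evaluate pipeline described in the abstract can be sketched roughly as below. This is a minimal illustration, not the paper's actual experimental code: the "all-MiniLM-L6-v2" checkpoint (standing in for the paper's MiniLMv2 encoder), the logistic-regression classifier, the toy corpus, and the Friedman-test inputs are all illustrative assumptions.

```python
# Minimal sketch of the pipeline from the abstract. All concrete choices
# below (checkpoint, classifier, toy data, placeholder scores) are
# assumptions for illustration, not details taken from the paper.
from collections import Counter

from sentence_transformers import SentenceTransformer
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, balanced_accuracy_score
from scipy.stats import friedmanchisquare

# Toy imbalanced corpus standing in for TREC / Emotions.
texts = ["I am thrilled about this result"] * 40 + ["This is terrible news"] * 8
labels = [0] * 40 + [1] * 8

# 1) Embed texts with a MiniLM-family sentence encoder
#    ("all-MiniLM-L6-v2" is an assumed stand-in for the paper's MiniLMv2).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=42
)

# 2) Oversample the training split only; plain SMOTE here stands in for
#    any of the thirty-plus variants benchmarked in the paper.
X_bal, y_bal = SMOTE(random_state=42, k_neighbors=3).fit_resample(X_tr, y_tr)
print("before:", Counter(y_tr), "after:", Counter(y_bal))

# 3) Train one classifier (the paper uses six) and score both metrics.
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
pred = clf.predict(X_te)
print("macro F1:", f1_score(y_te, pred, average="macro"))
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))

# 4) Compare oversamplers across settings with the Friedman test
#    (placeholder scores for three hypothetical methods).
stat, p = friedmanchisquare(
    [0.81, 0.78, 0.83], [0.79, 0.77, 0.80], [0.84, 0.80, 0.85]
)
print("Friedman p-value:", p)
```

Note that oversampling is applied after the train/test split so that synthetic minority samples never leak into the evaluation data, which is the standard practice this kind of benchmark relies on.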