Evaluating and Classifying the Gentleness of Surgeons in VR-Based Surgical Simulation


Keleş H. O.

2. ULUSAL NÖROGÖRÜNTÜLEME KONGRESİ (2nd National Neuroimaging Congress), Ankara, Türkiye, 11-13 September 2025, pp. 97-98 (abstract)

  • Publication Type: Conference Paper / Abstract
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Page Numbers: pp. 97-98
  • Ankara University Affiliated: Yes

Abstract

Objectives: This study evaluates and classifies participants’ gentleness and mental workload during VR-based surgical simulations using fNIRS-derived hemodynamic features. We trained and compared several machine-learning models to assess performance.

Methods: Twenty-three volunteers performed a laparoscopic double-grasper task in a virtual-reality environment, manipulating a balloon with two grasper tools simultaneously, while cortical activity over frontal and motor areas was recorded with eighteen fNIRS channels. Alongside fNIRS, we collected subjective workload ratings (NASA-TLX), task error rates, and a VR performance metric (gentleness score). fNIRS preprocessing was conducted in MATLAB (HOMER3), and task-related features, including mean HbO/HbR changes and temporal dynamics, were extracted. Labels were binarized as low vs. high via median splits of the gentleness score and the NASA-TLX ratings. Four classifiers (Random Forest, Support Vector Classifier, AdaBoost, and Gaussian Naïve Bayes) were trained, evaluated with stratified 5-fold cross-validation, and compared by accuracy.
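The classification pipeline described above (median-split labels, four classifiers, stratified 5-fold cross-validation scored by accuracy) can be sketched as follows. This is a minimal illustration with randomly generated placeholder data, not the study's actual features or code: the array shapes (23 participants × 18 channels), the synthetic gentleness scores, and the specific classifier hyperparameters are assumptions; in the study, the feature matrix would come from the HOMER3-preprocessed fNIRS recordings.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features: e.g. one mean-HbO value per fNIRS channel
# for each of the 23 participants (real features come from HOMER3 output).
X = rng.normal(size=(23, 18))
gentleness = rng.normal(size=23)  # placeholder VR gentleness scores

# Median split binarizes the continuous score into low (0) vs. high (1).
y = (gentleness > np.median(gentleness)).astype(int)

# The four classifier families compared in the study
# (hyperparameters here are illustrative defaults, not the study's).
classifiers = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVC": SVC(kernel="rbf"),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Gaussian NB": GaussianNB(),
}

# Stratified 5-fold cross-validation, summarized by mean accuracy.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
results = {}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    results[name] = scores.mean()
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

The same loop would be run once per label type (gentleness score, NASA-TLX) and per feature set (mean HbO, HbR, HbT) to produce the comparisons reported in the Results.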

Results: Classification of gentleness (VR score, low vs. high) achieved an accuracy of 0.80 with AdaBoost using mean HbO or HbT features and 0.90 with Random Forest using mean HbR features. For mental workload (NASA-TLX, low vs. high), AdaBoost (mean HbO/HbT) and Gaussian Naïve Bayes (mean HbO) reached the highest accuracies, while HbR-based models peaked at 0.80 with the Support Vector Classifier (SVC) and Gaussian Naïve Bayes (GNB).

Conclusion: This multimodal approach offers potential for real-time, neurocognitively informed feedback in VR-based surgical training, supporting the validation of VR simulators for skill transfer to the operating room.