MetaXAI: Metahuman-assisted audio and visual explainability framework for Internet of Medical Things


Kök İ.

BIOMEDICAL SIGNAL PROCESSING AND CONTROL, vol.100, no.107034, pp.1-11, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 100 Issue: 107034
  • Publication Date: 2025
  • DOI: 10.1016/j.bspc.2024.107034
  • Journal Name: BIOMEDICAL SIGNAL PROCESSING AND CONTROL
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, EMBASE, INSPEC
  • Page Numbers: pp.1-11
  • Ankara University Affiliated: Yes

Abstract

The next-generation Internet of Things (NGIoT) is expected to incorporate emerging technologies such as Artificial Intelligence (AI), edge computing, augmented reality, the tactile Internet, digital twins, 5G, and distributed ledgers. These technologies have the potential to revolutionize a broad spectrum of IoT applications, ranging from environmental monitoring to smart healthcare. In particular, they are critical for advanced monitoring, processing, management, and medical service delivery in healthcare-centric IoT networks such as the Internet of Medical Things (IoMT), the Internet of Nano Things (IoNT), and the Internet of Bio-Nano Things (IoBNT). Today, AI is widely used in decision support systems deployed in the IoMT; however, these systems are predominantly results-oriented and lack interpretability, which makes it difficult for medical professionals, patients, and the wider user community to trust their decisions. In this paper, we propose an AI- and virtual reality-supported audio and visual explainability framework, called MetaXAI, for intrusion detection in the IoMT. MetaXAI presents the decisions of the developed ML model, explained with the ELI5, SHAP, and LIME methods, to experts and end users audibly and visually in a simulated 3D environment. Experimental results show that the MetaXAI framework detects intrusions with a success rate of about 94% and effectively presents its decisions through rich and innovative 3D explanation interfaces.
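
As a rough illustration of the explanation step named in the abstract, the sketch below generates ELI5, SHAP, and LIME explanations for a placeholder intrusion-detection classifier. The model (a random forest), the synthetic data, and the feature and class names are assumptions for illustration only; the abstract does not specify them, and MetaXAI's 3D metahuman presentation layer is not reproduced here.

# Minimal sketch of the ELI5 / SHAP / LIME explanation step.
# Model, data, and names are assumptions standing in for the paper's
# IoMT intrusion-detection setup, which the abstract does not detail.
import eli5
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for IoMT network-traffic features (hypothetical names).
feature_names = [f"flow_feat_{i}" for i in range(10)]
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ELI5: global feature weights of the trained model, rendered as text.
print(eli5.format_as_text(
    eli5.explain_weights(model, feature_names=feature_names)))

# SHAP: per-sample additive feature attributions for a few test flows.
shap_values = shap.TreeExplainer(model).shap_values(X_test[:5])

# LIME: local surrogate explanation for a single flow.
lime_exp = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification",
).explain_instance(X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())

In the framework described, such attributions would then be voiced and visualized by a metahuman avatar in the simulated 3D environment rather than printed to a console.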