Computer Science Review, vol. 60, 2026 (SCI-Expanded, Scopus)
Natural Language Processing and Sign Language Processing share common goals and challenges, as both fields focus on enabling computers to understand and generate modes of communication that enhance interaction between humans and computers. The interaction happens differently, however: one relies on spoken or written text, the other on visual-gestural input. Although sign languages possess significant linguistic complexity and expressiveness, they have traditionally received little attention in computational linguistics and natural language processing research. The two fields share methodological similarities (sequence modeling, contextual understanding, representation learning) and face similar challenges: annotated data sparsity, ambiguity resolution, and multilingual understanding. In this paper, the key tasks that can be addressed in sign language processing, particularly from a natural language processing perspective, are identified and examined in depth. These include sign language translation and production, machine translation, part-of-speech tagging, named entity recognition, coreference resolution, sentiment analysis, and language models for sign languages. An overview of these sign language tasks is provided, together with previously unexplored tasks that are well established in natural language processing but not yet in sign language processing. Moreover, possible reuses of already available sign language data from a linguistic perspective are also shared. Limitations and open challenges are identified to direct future research toward the linguistic aspects of sign languages, recognizing that more language-based methodologies may be necessary for improved understanding and communication in them.