Enhancing Real-time Arabic Sign Language Translation
DOI: https://doi.org/10.71335/6fk98t41

Keywords: Arabic Sign Language, Gesture Recognition, Deep Learning.

Abstract
Objectives: This research aims to recognize subtle signs in Arabic Sign Language in order to reduce the communication barrier faced by the deaf and hard-of-hearing community. The study focuses on Arabic-speaking communities, particularly in Saudi Arabia, and examines dialectal variation in sign language, the appropriateness of the system for its users, and its ability to generalize.
Methodology: A convergent mixed-methods design is used, combining quantitative evaluation of deep learning models with assessment by deaf and hard-of-hearing individuals. This design also demonstrates the practical value of an accessible system for translating Arabic Sign Language.
Results: The results show a strong correlation between gesture recognition accuracy and visual generalization, with a 98% improvement in recognizing dynamic gestures.
Conclusion: The paper outlines directions for future research on assistive communication systems, focusing on the analysis of movement and body language. The main considerations include precision mechanisms, potential applications, and the feasibility of real-time adaptation. The paper also suggests that the software could be developed for mobile or desktop platforms to facilitate communication for deaf users across the Arabic-speaking world.