Gesture recognition (GR) is a rapidly developing field with diverse applications such as sign language interpretation, immersive gaming technologies, and various computer interfaces.
Visually impaired people often struggle with daily tasks including navigation, technology use, and social interaction. They also face the challenge of maintaining their independence while staying safe in their everyday activities.
Communication from visually challenged and deaf individuals can be recognized by capturing their gestures and comparing them against modern datasets to infer their intent.
Traditional machine learning (ML) models rely on handcrafted features but frequently perform poorly in real-time environments. Deep learning (DL) models have gained popularity recently, surpassing conventional ML methods in effectiveness.
This study introduces Enhancing Gesture Recognition for the Visually Impaired using Deep Learning and an Improved Snake Optimization Algorithm (EGRVI-DLISOA), an advanced gesture recognition system implemented within an Internet of Things (IoT) framework that offers real-time gesture interpretation to support visually impaired users.
The method begins with a preprocessing phase that applies the Sobel filter (SF) technique for noise reduction and edge enhancement.
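To illustrate this preprocessing step, the sketch below applies the standard 3x3 Sobel kernels to a grayscale frame and returns the gradient magnitude, which emphasizes gesture contours. This is a minimal NumPy illustration of the general Sobel operator, not the EGRVI-DLISOA implementation; the function name and array shapes are assumptions for the example.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Return the Sobel gradient magnitude of a 2-D grayscale image.

    Illustrative sketch only; assumes `img` is a 2-D float-convertible
    array such as a single camera frame.
    """
    # Standard horizontal Sobel kernel; the vertical kernel is its transpose.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    # Edge-replicate padding keeps the output the same size as the input.
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the weighted shifted patches (cross-correlation form).
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    # Gradient magnitude: strong where intensity changes sharply (edges).
    return np.hypot(gx, gy)
```

On a uniform region the response is zero, while a sharp intensity step produces a strong response along the edge, which is why the operator suppresses flat background while highlighting hand contours.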
This approach represents a meaningful step forward in real-time gesture recognition technologies for accessibility enhancement.
Author's summary: This study presents a deep learning-based gesture recognition system integrated with improved optimization and IoT, designed to deliver efficient real-time assistance to visually impaired individuals.