Application of Image Processing for Sign Language Interpreter

Keywords: LSTM, GRU, American Sign Language, Bi-LSTM, Signal Processing


May 1, 2024


The aim of this research is to develop a sign language interpreting system that interprets and translates American Sign Language (ASL) into English words and sentences through machine vision and machine learning. In the proposed methodology, algorithms for data augmentation, data preprocessing, and model training and evaluation are developed, along with the system's Graphical User Interface (GUI). Three models are trained while developing the system — LSTM, Bi-LSTM, and GRU — and among them, GRU achieves the highest training accuracy of 95.14% and evaluation accuracy of 95.56%, so it is implemented into the system. In real-time testing, the system makes predictions in 0.143 seconds with 98.79% confidence. Further tests show that the system produces equally accurate predictions in real time regardless of the signer's position, distance, and hands used. The system also translates ASL sentences into grammatically sound English sentences through the OpenAI API.
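To illustrate the recurrent model family the abstract compares, the following is a minimal NumPy sketch of a single GRU cell processing a sequence of per-frame feature vectors. The input dimension (126, e.g. two hands' worth of 3-D landmarks) and hidden size (64) are assumptions for illustration — the paper does not specify its architecture here — and this untrained cell only demonstrates the gate arithmetic, not the trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell using the Cho et al. (2014) update rule.

    Dimensions below are illustrative assumptions, not the paper's.
    """
    def __init__(self, input_dim, hidden_dim):
        s = 1.0 / np.sqrt(hidden_dim)
        # Stacked weights for the update (z), reset (r), and candidate gates.
        self.W = rng.uniform(-s, s, (3, hidden_dim, input_dim))
        self.U = rng.uniform(-s, s, (3, hidden_dim, hidden_dim))
        self.b = np.zeros((3, hidden_dim))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])  # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])  # reset gate
        h_cand = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        # Blend previous state and candidate state by the update gate.
        return (1.0 - z) * h + z * h_cand

def run_sequence(cell, frames, hidden_dim):
    """Fold a (time, features) sequence into a final hidden state."""
    h = np.zeros(hidden_dim)
    for x in frames:
        h = cell.step(x, h)
    return h

# Hypothetical input: 30 video frames of 126-dim hand-landmark features.
cell = GRUCell(input_dim=126, hidden_dim=64)
frames = rng.normal(size=(30, 126))
state = run_sequence(cell, frames, hidden_dim=64)
```

In a full pipeline, the final hidden state would feed a softmax layer over the sign vocabulary; an LSTM or Bi-LSTM variant differs only in the recurrent cell and the direction(s) of the scan.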