Abstract
This paper presents an accessible means of communication for people with speech and hearing impairments. It focuses on neural network techniques, specifically MediaPipe Holistic and a Long Short-Term Memory (LSTM) network, to recognize sign language. MediaPipe Holistic, which integrates pose, hand, and face keypoints, was chosen for its low latency and high tracking accuracy in real-world scenarios. The paper deals with commonly used words of American Sign Language (ASL). The results show that the proposed model detects these ASL signs with an accuracy of 98.50 percent.
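The pipeline described above (MediaPipe Holistic keypoints fed to an LSTM) can be sketched as follows. This is a minimal illustration, not the authors' code: the dictionary-style `results` input, the helper names, and the 30-frame sequence length are assumptions; MediaPipe Holistic yields 33 pose landmarks (x, y, z, visibility), 468 face landmarks, and 21 landmarks per hand (x, y, z each), for 1662 values per frame.

```python
import numpy as np

# Hypothetical sketch of per-frame feature extraction from MediaPipe
# Holistic output, based on the paper's description (not the authors' code).
# Pose: 33 landmarks x (x, y, z, visibility) = 132 values
# Face: 468 landmarks x (x, y, z)            = 1404 values
# Each hand: 21 landmarks x (x, y, z)        = 63 values
POSE, FACE, HAND = 33 * 4, 468 * 3, 21 * 3  # 132 + 1404 + 63 + 63 = 1662

def extract_keypoints(results):
    """Flatten Holistic results into one fixed-length vector per frame.

    `results` is assumed to mimic mediapipe's Holistic output; missing
    parts (e.g. an occluded hand) are zero-filled so the vector length
    stays constant at 1662.
    """
    def flat(landmarks, width, size):
        if landmarks is None:
            return np.zeros(size)
        attrs = ("x", "y", "z", "visibility")[:width]
        return np.array([[getattr(lm, a) for a in attrs]
                         for lm in landmarks]).flatten()

    pose = flat(results.get("pose"), 4, POSE)
    face = flat(results.get("face"), 3, FACE)
    lh = flat(results.get("left_hand"), 3, HAND)
    rh = flat(results.get("right_hand"), 3, HAND)
    return np.concatenate([pose, face, lh, rh])

# A fixed-length window of such vectors is what an LSTM classifier would
# consume; a 30-frame window is assumed here for illustration.
frame = extract_keypoints({"pose": None, "face": None,
                           "left_hand": None, "right_hand": None})
sequence = np.stack([frame] * 30)  # shape (30, 1662)
```

An LSTM layer would then take inputs of shape `(30, 1662)` and output a probability over the sign vocabulary; the zero-filling keeps the input dimension constant even when a hand or face is not detected.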