
Android Based American Sign Language Recognition System with Skin Segmentation and SVM

2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pages 1-6, Aurangabad, Maharashtra, India: IEEE, July 2018.
DOI: 10.1109/ICCCNT.2018.8493838

Abstract

People with hearing impairments use sign language for communication. They use hand gestures to represent numbers, letters, words, and sentences, which allows them to communicate among themselves. The problem arises when they need to interact with other people. An automated system that converts sign language to text makes this interaction easier. Many such sign language recognition systems have been developed recently, but most of them run on laptops or desktop computers, which are impractical to carry because of their weight and size. This article presents the design and implementation of an Android application that converts American Sign Language to text, so that it can be used anywhere and at any time. An image is captured by the smartphone camera and skin segmentation is performed in the YCbCr color space. Features are extracted from the segmented image using the Histogram of Oriented Gradients (HOG) and classified to recognize the sign. The classification is done with a Support Vector Machine (SVM).
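The abstract outlines a three-stage pipeline: YCbCr skin segmentation, HOG feature extraction, and SVM classification. The following is a minimal sketch of such a pipeline, not the authors' implementation; it assumes OpenCV and scikit-learn, and the skin-color thresholds, HOG window parameters, and image size are illustrative assumptions, since the paper's exact parameters are not given here.

# Illustrative sketch of the pipeline described in the abstract (not the
# authors' code): YCbCr skin segmentation, HOG features, SVM classifier.
# Thresholds, window sizes, and the linear kernel are assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_skin(bgr_image):
    """Keep only skin-colored pixels using fixed Cr/Cb bounds (assumed)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8) # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

def extract_hog(bgr_image, size=(64, 64)):
    """Compute a HOG descriptor on the resized, grayscaled hand region."""
    gray = cv2.cvtColor(cv2.resize(bgr_image, size), cv2.COLOR_BGR2GRAY)
    hog = cv2.HOGDescriptor(_winSize=size,
                            _blockSize=(16, 16),
                            _blockStride=(8, 8),
                            _cellSize=(8, 8),
                            _nbins=9)
    return hog.compute(gray).flatten()

def train_classifier(images, labels):
    """Fit an SVM on HOG features of skin-segmented training images."""
    features = [extract_hog(segment_skin(img)) for img in images]
    clf = SVC(kernel='linear')
    clf.fit(features, labels)
    return clf

def recognize_sign(clf, bgr_image):
    """Predict the sign label for a single captured frame."""
    return clf.predict([extract_hog(segment_skin(bgr_image))])[0]

On Android the same steps would typically be implemented with the OpenCV Java/Kotlin bindings; the Python version above is only meant to make the processing stages concrete.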


Tags

community