Real-Time Sign Language Gesture Recognition

Authors

  • Vartika Verma, Manish Pandey

Abstract

People with speech disabilities communicate in sign language and therefore have difficulty interacting with the hearing population. An interpretation system is needed to act as a bridge between them and those who do not know sign language. Gesture recognition is the mathematical interpretation of human motion by a computing device. Uninterrupted communication between the impaired and unimpaired groups of society would otherwise be possible only if the unimpaired population were trained to interpret sign language. Since there is no standard mode of communication between deaf and mute people and the rest of society, they face many difficulties. This paper aims to recognize sign language alphabets in a real-time environment using Python, OpenCV, and TensorFlow, with a convolutional neural network model for classification. In Python, we use the Inception V3 model pre-trained on ImageNet. Inception V3 is a generic image classification architecture that uses Google's machine learning library, TensorFlow, and a pre-trained deep convolutional neural network model.
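The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual code: it builds the Inception V3 architecture via TensorFlow/Keras and classifies a stand-in frame into 26 alphabet classes. The weights are left uninitialized here (`weights=None`) to keep the sketch self-contained; the paper's approach uses the pre-trained ImageNet weights (`weights="imagenet"`), and in a real-time setting OpenCV's `cv2.VideoCapture` would supply the frames.

```python
import numpy as np
import tensorflow as tf

# Inception V3 with a 26-way output head (one class per alphabet sign).
# weights=None avoids the ImageNet download in this sketch; the paper's
# pipeline starts from weights="imagenet" (which fixes the head at 1000
# classes unless it is replaced for fine-tuning).
model = tf.keras.applications.InceptionV3(weights=None, classes=26)

# Stand-in for a preprocessed webcam frame; with OpenCV this would be a
# captured frame resized to 299x299 and scaled to [0, 1].
frame = np.random.rand(1, 299, 299, 3).astype("float32")

# Softmax probabilities over the 26 gesture classes.
probs = model.predict(frame, verbose=0)
print(probs.shape)  # (1, 26)
```

The predicted letter would then be `np.argmax(probs)` mapped back to the alphabet, displayed on the live video feed.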

Published

2020-11-01

Issue

Section

Articles