A multimodal framework for sensor based sign language recognition

dc.contributor.author: Kumar P.
dc.contributor.author: Gauba H.
dc.contributor.author: Pratim Roy P.
dc.contributor.author: Prosad Dogra D.
dc.date.accessioned: 2025-02-17T05:58:24Z
dc.date.issued: 2017
dc.description.abstract: In this paper, we propose a novel multimodal framework for isolated Sign Language Recognition (SLR) using sensor devices. Microsoft Kinect and Leap Motion sensors are used in our framework to capture finger and palm positions from two different views during gestures. One sensor (Leap Motion) is kept below the hand(s) while the other (Kinect) is placed in front of the signer, capturing horizontal and vertical finger movement during sign gestures. A set of features is then extracted from the raw data captured by both sensors. Recognition is performed separately by Hidden Markov Model (HMM) and Bidirectional Long Short-Term Memory Neural Network (BLSTM-NN) based sequential classifiers. In the next phase, the results are combined to boost recognition performance. The framework has been tested on a dataset of 7500 Indian Sign Language (ISL) gestures comprising 50 different sign-words. Our dataset includes single- as well as double-handed gestures. It has been observed that accuracies improve when data from both sensors are fused, compared to single-sensor recognition. We have recorded improvements of 2.26% (single hand) and 0.91% (both hands) using HMM, and 2.88% (single hand) and 1.67% (both hands) using BLSTM-NN classifiers. Overall accuracies of 97.85% and 94.55% have been recorded by combining HMM and BLSTM-NN for single-handed and double-handed signs, respectively. © 2017 Elsevier B.V.
dc.identifier.citation: 28
dc.identifier.uri: http://dx.doi.org/10.1016/j.neucom.2016.08.132
dc.identifier.uri: https://idr.iitbbs.ac.in/handle/2008/1321
dc.language.iso: en
dc.subject: Gesture recognition
dc.subject: Multimodal framework
dc.subject: Sensor fusion
dc.subject: Sign language recognition
dc.title: A multimodal framework for sensor based sign language recognition
dc.type: Article
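
The abstract describes combining HMM and BLSTM-NN outputs to boost recognition. Below is a minimal, hypothetical sketch of one common late-fusion rule (a weighted sum of per-class log-scores followed by an argmax); the function name, weighting scheme, and example scores are assumptions for illustration only, not the authors' implementation.

# Hypothetical late-fusion sketch: combine per-class log-scores from an HMM
# and a BLSTM-NN sequence classifier by a weighted sum and pick the
# highest-scoring sign-word. Not the paper's actual code.
import numpy as np

def fuse_scores(hmm_log_scores, blstm_log_scores, weight=0.5):
    """Return the index of the sign-word with the highest fused score.

    `weight` balances the two classifiers; 0.5 treats them equally.
    The combination rule used in the paper may differ.
    """
    hmm = np.asarray(hmm_log_scores, dtype=float)
    blstm = np.asarray(blstm_log_scores, dtype=float)
    fused = weight * hmm + (1.0 - weight) * blstm
    return int(np.argmax(fused))

# Illustrative scores for three candidate sign-words (made-up numbers).
print(fuse_scores([-12.4, -9.8, -15.1], [-10.2, -11.5, -14.0]))  # -> 1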
