Coupled HMM-based multi-sensor data fusion for sign language recognition

dc.contributor.author: Kumar P.
dc.contributor.author: Gauba H.
dc.contributor.author: Roy P.P.
dc.contributor.author: Dogra D.P.
dc.date.accessioned: 2025-02-17T06:17:16Z
dc.date.issued: 2017
dc.description.abstract: The recent development of low-cost depth sensors such as the Leap Motion controller and the Microsoft Kinect has opened up new opportunities for Human-Computer Interaction (HCI). In this paper, we propose a novel multi-sensor fusion framework for Sign Language Recognition (SLR) using a Coupled Hidden Markov Model (CHMM). Unlike the classical HMM, which fails to model correlations between modalities, the CHMM couples the sensor streams in state space rather than at the observation level. The framework has been used to recognize dynamic isolated sign gestures performed by hearing-impaired persons. The dataset has also been evaluated with existing data fusion approaches; the best recognition accuracy, 90.80%, was achieved with the CHMM, an improvement over popular existing data fusion techniques. © 2016 Elsevier B.V.
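The abstract's central idea, coupling the two sensor streams in state space so that each chain's transition depends on the previous states of both chains, can be illustrated with a toy two-chain CHMM forward pass. This is a minimal sketch with made-up parameters, not the authors' implementation or dataset; state counts, transition tables, and emission tables are all illustrative assumptions.

```python
# Minimal illustrative sketch of a two-chain Coupled HMM forward pass.
# All parameters are toy values, NOT taken from the paper.
import itertools

N = 2  # hidden states per chain (toy)

# Coupled transitions: trans[c][(i, j)][k] = P(chain c enters state k |
# chain 1 was in state i AND chain 2 was in state j).  This cross-chain
# conditioning is what distinguishes a CHMM from two independent HMMs.
trans = [
    {(i, j): ([0.7, 0.3] if (i + j) % 2 == 0 else [0.4, 0.6])
     for i in range(N) for j in range(N)}
    for _ in range(2)
]

# Per-chain emission likelihoods over a toy discrete observation {0, 1}
emit = [
    [[0.9, 0.1], [0.2, 0.8]],   # chain 1 (e.g. one sensor's features)
    [[0.6, 0.4], [0.3, 0.7]],   # chain 2 (e.g. the other sensor's features)
]

init = [0.5, 0.5]  # uniform initial state distribution per chain


def chmm_likelihood(obs1, obs2):
    """Joint likelihood P(obs1, obs2) via a forward pass over the
    product state space (i, j) of the two coupled chains."""
    # Initialisation at t = 0
    alpha = {(i, j): init[i] * init[j] * emit[0][i][obs1[0]] * emit[1][j][obs2[0]]
             for i in range(N) for j in range(N)}
    for t in range(1, len(obs1)):
        new = {}
        for i, j in itertools.product(range(N), repeat=2):
            # Each chain's transition conditions on BOTH previous states
            s = sum(alpha[(pi, pj)] * trans[0][(pi, pj)][i] * trans[1][(pi, pj)][j]
                    for pi, pj in alpha)
            new[(i, j)] = s * emit[0][i][obs1[t]] * emit[1][j][obs2[t]]
        alpha = new
    return sum(alpha.values())


print(chmm_likelihood([0, 0, 1], [0, 1, 1]))
```

In a recognition setting, one such model would be trained per sign class and a test sequence assigned to the class whose CHMM gives the highest likelihood; observation-level fusion, by contrast, would concatenate the two feature streams and lose the cross-modal state dependency modeled here.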
dc.identifier.citation: 54
dc.identifier.uri: http://dx.doi.org/10.1016/j.patrec.2016.12.004
dc.identifier.uri: https://idr.iitbbs.ac.in/handle/2008/1538
dc.language.iso: en
dc.subject: Bayesian classification
dc.subject: Depth sensors
dc.subject: Hidden Markov model (Coupled HMM, HMM)
dc.subject: Sign language recognition
dc.title: Coupled HMM-based multi-sensor data fusion for sign language recognition
dc.type: Article
