Using Multiple Sensors for Mobile Sign Language Recognition

Author(s)
Brashear, Helene
Lukowicz, Paul
Junker, Holger
Abstract
We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary on noisy vision data, on accelerometer data, and on both data sets combined.
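The abstract does not specify how the vision and accelerometer streams are combined, so the sketch below shows one common approach, feature-level fusion: per-frame feature vectors from each sensor are concatenated into a single vector before being passed to a recognizer. All names and dimensions here are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical per-frame features (dimensions are made up for illustration):
# 8 vision features (e.g. hand position/shape) and 3 accelerometer axes.
rng = np.random.default_rng(0)
n_frames = 6
vision_feats = rng.normal(size=(n_frames, 8))
accel_feats = rng.normal(size=(n_frames, 3))

# Feature-level fusion: concatenate the two streams frame by frame,
# yielding one 11-dimensional vector per frame for the recognizer.
fused_feats = np.concatenate([vision_feats, accel_feats], axis=1)
print(fused_feats.shape)  # (6, 11)
```

The alternative compared in the experiment would simply train the same recognizer on `vision_feats` or `accel_feats` alone, so accuracy differences can be attributed to the added sensor.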
Date
2003-10
Resource Type
Text
Resource Subtype
Proceedings