Using Multiple Sensors for Mobile Sign Language Recognition
Abstract
We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our experiment compares the results of training on a small gesture vocabulary using noisy vision data, accelerometer data, and both data sets combined.
Date
2003-10
Resource Type
Text
Resource Subtype
Proceedings