GVU Technical Report Series
Now showing 1 - 10 of 16
  • Item
    Recognizing Sign Language from Brain Imaging
    (Georgia Institute of Technology, 2009) Mehta, Nishant A. ; Starner, Thad ; Jackson, Melody Moore ; Babalola, Karolyn O. ; James, George Andrew
The problem of classifying complex motor activities from brain imaging is relatively new territory within the fields of neuroscience and brain-computer interfaces. We report positive sign language classification results using a tournament of pairwise support vector machine classifiers for a set of 6 executed signs and for a set of 6 imagined signs. For a set of 3 contrasted pairs of signs, executed sign and imagined sign classification accuracies were highly significant at 96.7% and 73.3% respectively. Multiclass classification results also were highly significant at 66.7% for executed sign and 50% for imagined sign. These results lay the groundwork for a brain-computer interface based on imagined sign language, with the potential to enable communication in the nearly 200,000 individuals who develop progressive muscular diseases each year.
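The tournament of pairwise classifiers described above is essentially a one-vs-one voting scheme: one SVM per pair of classes, with the most-voted class winning. A minimal sketch on synthetic data (the class count, features, and parameters here are placeholders, not the paper's fMRI pipeline):

```python
# Hypothetical sketch of a "tournament of pairwise classifiers": each pair of
# classes gets its own SVM, and the class winning the most pairwise votes is
# the prediction. All data below is synthetic; the paper's fMRI features and
# parameters are not reproduced here.
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_features = 6, 10          # e.g. 6 signs, toy feature vectors
X = rng.normal(size=(120, n_features))
y = np.repeat(np.arange(n_classes), 20)
X += y[:, None] * 0.8                  # separate the classes a little

# Train one SVM per pair of classes.
pair_models = {}
for a, b in combinations(range(n_classes), 2):
    mask = (y == a) | (y == b)
    pair_models[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])

def predict_tournament(x):
    """Each pairwise SVM casts one vote; the most-voted class wins."""
    votes = Counter()
    for model in pair_models.values():
        votes[int(model.predict(x.reshape(1, -1))[0])] += 1
    return votes.most_common(1)[0][0]

preds = np.array([predict_tournament(x) for x in X])
print("training accuracy:", (preds == y).mean())
```

With 6 classes this builds 15 pairwise models; ties in the vote are possible and would need a tie-breaking rule in a real system.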
  • Item
    The Use of Different Technologies During a Medical Interview: Effects on Perceived Quality of Care
    (Georgia Institute of Technology, 2007-10) Caldwell, Britt ; DeBlasio, Julia M. ; Jacko, Julie A. ; Kintz, Erin ; Lyons, Kent ; Mauney, Lisa M. ; Starner, Thad ; Walker, Bruce N.
    This two-phase study examines a physician’s use of one of five different types of technology to note a patient’s symptoms during the medical interview. In this between-subjects design, 342 undergraduates viewed one of several videos that demonstrated one condition of the doctor/patient interaction. After viewing the interaction, each participant completed a series of questionnaires that evaluated their general satisfaction with the quality of care demonstrated in the medical interview. A main effect of technology condition was present in both phases. Further, in Phase 2 we found that drawing the participant’s attention to the type of technology used has a divergent effect on their general satisfaction with the doctor/patient interaction depending on the technology condition. These findings have implications for healthcare providers such as how to address technology and which type of technology to use.
  • Item
    Reading on the Go: An Evaluation of Three Mobile Display Technologies
    (Georgia Institute of Technology, 2006) Vadas, Kristin ; Lyons, Kenton Michael ; Ashbrook, Daniel ; Yi, Ji Soo ; Starner, Thad ; Jacko, Julie A.
As mobile technology becomes a more integral part of our everyday lives, understanding the impact of different displays on perceived ease of use and overall performance is becoming increasingly important. In this paper, we evaluate three mobile displays: the MicroOptical SV-3, the Sony Librie, and the OQO Model 01. These displays each use different underlying technologies and offer unique features which could impact mobile use. The OQO is a hand-held device that utilizes a traditional transflective liquid crystal display (LCD). The MicroOptical SV-3 is a head-mounted display that uses a miniature LCD and offers hands-free use. Finally, the Librie uses a novel, low-power reflective electronic ink technology. We present a controlled 15-participant evaluation to assess the effectiveness of using these displays for reading while in motion.
  • Item
    Revisiting and Validating a Model of Two-Thumb Text Entry
    (Georgia Institute of Technology, 2006) Clarkson, Edward C. ; Lyons, Kenton Michael ; Clawson, James ; Starner, Thad
MacKenzie and Soukoreff have previously introduced a Fitts' Law-based performance model of expert two-thumb text entry on mini-QWERTY keyboards. In this work we validate the original model and update it to account for observed behavior. We conclude by corroborating our updated version of the model with our empirical data. The result is a validated model of two-thumb text entry that can inform the design of mobile computing devices.
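The Fitts' Law component of such a model predicts the time for a thumb to move between two keys as MT = a + b log2(D/W + 1). A minimal sketch with hypothetical coefficients and key geometry, not the values fitted by MacKenzie and Soukoreff or by this paper:

```python
# Minimal sketch of the Fitts' Law term used in keyboard performance models:
# MT = a + b * log2(D / W + 1), where D is the distance between two keys and
# W is the key width. The coefficients and layout here are hypothetical.
import math

def movement_time(d, w, a=0.08, b=0.13):
    """Predicted movement time in seconds for one thumb movement."""
    return a + b * math.log2(d / w + 1)

# Two hypothetical keys on a mini-QWERTY layout: 4 mm wide, 12 mm apart.
mt = movement_time(d=12.0, w=4.0)
print(f"predicted movement time: {mt * 1000:.0f} ms")  # → 340 ms
```

A full two-thumb model additionally accounts for which thumb handles which key and for the alternation between thumbs, which is what the updated model in the paper addresses.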
  • Item
    Electronic Communication by Deaf Teenagers
    (Georgia Institute of Technology, 2005) Henderson, Valerie ; Grinter, Rebecca E. ; Starner, Thad
We present a qualitative, exploratory study examining the space of electronic communication (e.g., instant messaging, short message service, email) by Deaf teenagers in the greater Atlanta metro area. We answer the basic questions of who, what, where, when, and how to understand Deaf teenage use of electronic, mobile communication technologies. Our findings reveal that both Deaf and hearing teens share similar communication goals such as communicating quickly, effectively, and with a variety of people. Distinctions between the two populations emerge from language differences. The teenagers' perspectives allow us to view electronic communication not from a technologist's point of view, but from the use-centric view of teenagers who are indifferent to the underlying infrastructure supporting this communication. This study suggests several unique features of the Deaf teens' communication as well as further research questions and directions for study.
  • Item
    Expert Chording Text Entry on the Twiddler One-Handed Keyboard
    (Georgia Institute of Technology, 2004) Lyons, Kenton Michael ; Plaisted, Daniel ; Starner, Thad
Previously, we demonstrated that after 400 minutes of practice, ten novices averaged over 26 words per minute (wpm) for text entry on the Twiddler one-handed chording keyboard, outperforming the multi-tap mobile text entry standard. Here we present an extension of this study that examines expert chording performance. Five subjects continued the study and achieved an average rate of 47 wpm after approximately 25 hours of practice in varying conditions. One subject achieved a rate of 67 wpm, equivalent to the typing rate of the last author, who has been a Twiddler user for ten years. We provide evidence that lack of visual feedback does not hinder expert typing speed and examine the potential use of multiple character chords (MCCs) to increase text entry speed. We demonstrate the effects of learning on various aspects of chording and analyze how subjects adopt a simultaneous or sequential method of pushing the individual keys during a chord.
  • Item
    Twiddler Typing: One-Handed Chording Text Entry for Mobile Phones
    (Georgia Institute of Technology, 2003) Lyons, Kenton Michael ; Starner, Thad ; Plaisted, Daniel ; Fusia, James Gibson ; Lyons, Amanda ; Drew, Aaron ; Looney, E. W.
An experienced user of the Twiddler, a one-handed chording keyboard, averages speeds of 60 words per minute with letter-by-letter typing of standard test phrases. This fast typing rate coupled with the Twiddler's 3x4 button design, similar to that of a standard mobile telephone, makes it a potential alternative to multi-tap for text entry on mobile phones. Despite this similarity, there is very little data on the Twiddler's performance and learnability. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using two different methods. Each session is composed of 20 minutes of typing with multi-tap and 20 minutes of one-handed chording on the Twiddler. We found that users initially have a faster average typing rate with multi-tap; however, after four sessions the difference becomes negligible, and by the eighth session participants type faster with chording on the Twiddler. Furthermore, after 20 sessions typing rates for the Twiddler are still increasing.
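The multi-tap baseline used in the comparison above requires repeated presses of a key to cycle to the desired letter. A toy sketch of the per-word key-press cost, assuming the standard ITU phone keypad layout and ignoring timeouts and same-key pauses (this is an illustration, not the study's measurement method):

```python
# Toy illustration of multi-tap text entry cost on a standard phone keypad:
# each letter needs as many taps as its position on its key. Timeouts and
# the pause/"next" mechanism for consecutive same-key letters are ignored.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
TAPS = {ch: i + 1 for letters in KEYPAD.values()
        for i, ch in enumerate(letters)}

def multitap_count(word):
    """Total key presses needed to enter `word` with multi-tap."""
    return sum(TAPS[ch] for ch in word.lower())

print(multitap_count("hello"))  # h=2, e=2, l=3, l=3, o=3 → 13
```

A chording keyboard, by contrast, enters each letter with a single simultaneous press of one or more keys, which is one reason chording can eventually overtake multi-tap as the study found.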
  • Item
    Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers
    (Georgia Institute of Technology, 2003) Atrash, Amin ; Starner, Thad
Most gesture recognition systems analyze gestures intended for communication (e.g. sign language) or for command (e.g. navigation in a virtual world). We attempt instead to recognize gestures made in the course of performing everyday work activities. Specifically, we examine activities in a wood shop, both in isolation and in the context of a simulated assembly task. We apply linear discriminant analysis (LDA) and hidden Markov model (HMM) techniques to features derived from body-worn accelerometers and microphones. The resulting system can successfully segment and identify most shop activities with zero false positives and 83.5% accuracy.
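The LDA stage of such a pipeline can be sketched as fitting a discriminative projection to per-window feature vectors and classifying in that space. Synthetic data stands in for the accelerometer and microphone features below; the activity set and feature dimensions are placeholders:

```python
# Hypothetical sketch of the LDA step: classify simple per-window feature
# vectors with a linear discriminant model. The data is synthetic; the
# paper's actual features, windowing, and activity classes differ.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_activities = 4                       # e.g. saw, drill, sand, hammer
windows, features = 200, 6             # toy per-window feature vectors
X = rng.normal(size=(windows, features))
y = rng.integers(0, n_activities, size=windows)
X += y[:, None] * 1.5                  # give each activity its own mean

# Fit on the first 150 windows, evaluate on the held-out remainder.
lda = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", lda.score(X[150:], y[150:]))
```

In the paper's setting the HMMs additionally handle segmentation of a continuous sensor stream into activity episodes, which a frame-by-frame classifier like this sketch does not address.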
  • Item
    Technology Trends Favor Thick Clients for User-Carried Wireless Devices
    (Georgia Institute of Technology, 2002) Starner, Thad
A thin client approach to mobile computing pushes as many services as possible onto a remote server. However, as will be shown, technology trends indicate that an easy route to improving thin client functionality is to "thicken" the client through the addition of disk storage, CPU, and RAM. Thus, thin clients will rapidly become multi-purpose thick clients. With time, users may come to consider their mobile system as their primary general-purpose computing device, with their most used files maintained on the mobile system and with desktop systems used primarily for larger displays, keyboards, and other non-mobile interfaces.
  • Item
    Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
    (Georgia Institute of Technology, 2002) Krum, David Michael ; Omoteso, Olugbenga ; Ribarsky, William ; Starner, Thad ; Hodges, Larry F.
A growing body of research shows several advantages of multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or stands at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole earth 3D visualization environment, which presents navigation interface challenges due to the large magnitude of scale and the extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.