GVU Technical Report Series

Publication Search Results

Now showing 1 - 10 of 15

Reading on the Go: An Evaluation of Three Mobile Display Technologies

2006 , Vadas, Kristin , Lyons, Kenton Michael , Ashbrook, Daniel , Yi, Ji Soo , Starner, Thad , Jacko, Julie A.

As mobile technology becomes a more integral part of our everyday lives, understanding the impact of different displays on perceived ease of use and overall performance is becoming increasingly important. In this paper, we evaluate three mobile displays: the MicroOptical SV-3, the Sony Librie, and the OQO Model 01. These displays each use different underlying technologies and offer unique features that could impact mobile use. The OQO is a hand-held device that utilizes a traditional transflective liquid crystal display (LCD). The MicroOptical SV-3 is a head-mounted display that uses a miniature LCD and offers hands-free use. Finally, the Librie uses a novel, low-power reflective electronic ink technology. We present a controlled 15-participant evaluation to assess the effectiveness of using these displays for reading while in motion.


Expert Chording Text Entry on the Twiddler One-Handed Keyboard

2004 , Lyons, Kenton Michael , Plaisted, Daniel , Starner, Thad

Previously, we demonstrated that after 400 minutes of practice, ten novices averaged over 26 words per minute (wpm) for text entry on the Twiddler one-handed chording keyboard, outperforming the multi-tap mobile text entry standard. Here we present an extension of this study that examines expert chording performance. Five subjects continued the study and achieved an average rate of 47 wpm after approximately 25 hours of practice in varying conditions. One subject achieved a rate of 67 wpm, equivalent to the typing rate of the last author, who has been a Twiddler user for ten years. We provide evidence that lack of visual feedback does not hinder expert typing speed and examine the potential use of multiple character chords (MCCs) to increase text entry speed. We demonstrate the effects of learning on various aspects of chording and analyze how subjects adopt a simultaneous or sequential method of pushing the individual keys during a chord.
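For context, wpm figures in text-entry studies like this one conventionally treat a "word" as five characters, including spaces. A minimal sketch of that convention (the numbers below are illustrative, not the study's data):

```python
def words_per_minute(chars_transcribed, seconds):
    """Conventional text-entry WPM: one word = 5 characters
    (including spaces), the standard in the text-entry literature."""
    minutes = seconds / 60.0
    return (chars_transcribed / 5.0) / minutes

# e.g. 470 characters transcribed in 2 minutes -> 47 wpm
rate = words_per_minute(470, 120)
assert abs(rate - 47.0) < 1e-9
```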


Technology Trends Favor Thick Clients for User-Carried Wireless Devices

2002 , Starner, Thad

A thin client approach to mobile computing pushes as many services as possible onto a remote server. However, as will be shown, technology trends indicate that an easy route to improving thin client functionality is to "thicken" the client through the addition of disk storage, CPU, and RAM. Thus, thin clients will rapidly become multi-purpose thick clients. With time, users may come to consider their mobile system as their primary general-purpose computing device, with their most used files maintained on the mobile system and with desktop systems used primarily for larger displays, keyboards, and other non-mobile interfaces.


Evaluation of a Multimodal Interface for 3D Terrain Visualization

2002 , Krum, David Michael , Omoteso, Olugbenga , Ribarsky, William , Starner, Thad , Hodges, Larry F.

This paper describes an evaluation of various interfaces for visual navigation of a whole Earth 3D terrain model. A mouse-driven interface, a speech interface, a gesture interface, and a multimodal speech and gesture interface were used to navigate to targets placed at various points on the Earth. Novel speech and/or gesture interfaces are candidates for use in future mobile or ubiquitous applications. This study measured each participant's recall of target identity, order, and location as a measure of cognitive load. Timing information as well as a variety of subjective measures, including discomfort and user preferences, were taken. While the familiar and mature mouse interface scored best by most measures, the speech interface also performed well. The gesture and multimodal interfaces suffered from weaknesses in the gesture modality. Weaknesses in the speech and multimodal modalities are identified and areas for improvement are discussed.


Revisiting and Validating a Model of Two-Thumb Text Entry

2006 , Clarkson, Edward C. , Lyons, Kenton Michael , Clawson, James , Starner, Thad

MacKenzie and Soukoreff have previously introduced a Fitts' Law-based performance model of expert two-thumb text entry on mini-QWERTY keyboards. In this work we validate the original model and update it to account for observed behavior. We conclude by corroborating our updated version of the model with our empirical data. The result is a validated model of two-thumb text entry that can inform the design of mobile computing devices.
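The model referenced above builds on Fitts' law, which predicts movement time from the distance to a target and its width. As a hedged illustration only (the coefficients below are placeholders, not the fitted values from the paper or from MacKenzie and Soukoreff's model):

```python
import math

def fitts_mt(d, w, a=0.0, b=0.2):
    """Fitts' law movement time (seconds), Shannon formulation:
    MT = a + b * log2(d/w + 1).
    a and b are device-specific constants fit from empirical data;
    the defaults here are illustrative placeholders only."""
    return a + b * math.log2(d / w + 1)

# A farther target of the same width has a higher index of
# difficulty, and thus a longer predicted movement time.
near = fitts_mt(d=10, w=5)
far = fitts_mt(d=40, w=5)
assert far > near
```

In a two-thumb model, one such term is typically fit per key-to-key movement, with extra terms for alternation between thumbs.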


Twiddler Typing: One-Handed Chording Text Entry for Mobile Phones

2003 , Lyons, Kenton Michael , Starner, Thad , Plaisted, Daniel , Fusia, James Gibson , Lyons, Amanda , Drew, Aaron , Looney, E. W.

An experienced user of the Twiddler, a one-handed chording keyboard, averages speeds of 60 words per minute with letter-by-letter typing of standard test phrases. This fast typing rate coupled with the Twiddler's 3x4 button design, similar to that of a standard mobile telephone, makes it a potential alternative to multi-tap for text entry on mobile phones. Despite this similarity, there is very little data on the Twiddler's performance and learnability. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using two different methods. Each session is composed of 20 minutes of typing with multi-tap and 20 minutes of one-handed chording on the Twiddler. We found that users initially have a faster average typing rate with multi-tap; however, after four sessions the difference becomes negligible, and by the eighth session participants type faster with chording on the Twiddler. Furthermore, after 20 sessions typing rates for the Twiddler are still increasing.


Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment

2002 , Krum, David Michael , Omoteso, Olugbenga , Ribarsky, William , Starner, Thad , Hodges, Larry F.

A growing body of research shows several advantages to multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or stands at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization environment, which presents navigation interface challenges due to the large range of scales and extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces.


Electronic Communication by Deaf Teenagers

2005 , Henderson, Valerie , Grinter, Rebecca E. , Starner, Thad

We present a qualitative, exploratory study to examine the space of electronic-based communication (e.g., instant messaging, short message service, email) by Deaf teenagers in the greater Atlanta metro area. We answer the basic questions of who, what, where, when, and how to understand Deaf teenage use of electronic, mobile communication technologies. Our findings reveal that both Deaf and hearing teens share similar communication goals, such as communicating quickly, effectively, and with a variety of people. Distinctions between the two populations emerge from language differences. The teenagers' perspectives allow us to view electronic communication not from a technologist's point of view, but from the use-centric view of teenagers who are indifferent to the underlying infrastructure supporting this communication. This study suggests several unique features of the Deaf teens' communication as well as further research questions and directions for study.


Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers

2003 , Atrash, Amin , Starner, Thad

Most gesture recognition systems analyze gestures intended for communication (e.g., sign language) or for command (e.g., navigation in a virtual world). We attempt instead to recognize gestures made in the course of performing everyday work activities. Specifically, we examine activities in a wood shop, both in isolation and in the context of a simulated assembly task. We apply linear discriminant analysis (LDA) and hidden Markov model (HMM) techniques to features derived from body-worn accelerometers and microphones. The resulting system can successfully segment and identify most shop activities with zero false positives and 83.5% accuracy.
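As a toy sketch of the two-class Fisher LDA step described above (the synthetic feature clusters, class names, and thresholding rule here are illustrative assumptions, not the paper's data or pipeline):

```python
import numpy as np

# Two synthetic clusters standing in for sensor-derived features
# of two shop activities (e.g., "sawing" vs. "drilling").
rng = np.random.default_rng(0)
saw = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(50, 2))
drill = rng.normal(loc=[0.0, 2.0], scale=0.5, size=(50, 2))

mu_s, mu_d = saw.mean(axis=0), drill.mean(axis=0)
# Within-class scatter matrix (sum of per-class scatter)
sw = np.cov(saw.T) * (len(saw) - 1) + np.cov(drill.T) * (len(drill) - 1)
# Fisher discriminant direction: w ∝ Sw^{-1} (mu_s - mu_d)
w = np.linalg.solve(sw, mu_s - mu_d)

# Classify by projecting onto w and thresholding at the midpoint
# between the projected class means.
threshold = (w @ (mu_s + mu_d)) / 2.0
correct = (saw @ w > threshold).sum() + (drill @ w <= threshold).sum()
accuracy = correct / 100.0
```

In the paper's setting, the LDA projection feeds per-frame features into HMMs for segmentation; this sketch covers only the discriminant step.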


Towards Conversational Speech Recognition for a Wearable Computer Based Appointment Scheduling Agent

2002 , Wong, Benjamin A. , Starner, Thad , McGuire, R. Martin

We present an original study of current mobile appointment scheduling devices. Our intention is to create a conversational wearable computing interface for the task of appointment scheduling. We employ both survey questionnaires and timing tests of mock scheduling tasks. The study includes over 150 participants and times each person using his or her own scheduling device (e.g., a paper planner or personal digital assistant). Our tests show that current scheduling devices take a surprisingly long time to access and that our subjects often do not use the primary scheduling device claimed on the questionnaire. Slower devices (e.g., PDAs) are disproportionately abandoned in favor of devices with faster access times (e.g., scrap paper). Many subjects indicate that they use a faster device when mobile as a buffer until they can reconcile the data with their primary scheduling device. The findings of this study motivated the design of two conversational speech systems for everyday-use wearable computers. The Calendar Navigator Agent provides extremely fast access to the user's calendar through a wearable computer with a head-up display. The user's verbal negotiation for a meeting time is monitored by the wearable, which provides an appropriate calendar display based on the current conversation. The second system, now under development, attempts to minimize cognitive load by buffering and indexing appointment conversations for later processing by the user. Both systems use extreme restrictions to decrease speech recognition error rates, yet are designed to be socially graceful.