Person:
Mynatt, Elizabeth D.

Publication Search Results

  • Item
    GVU Center Overview and Funded Research Projects
    (Georgia Institute of Technology, 2019-08-22) Edwards, W. Keith ; Mynatt, Elizabeth D. ; Trent, Tim ; Morshed, Mehrab Bin ; Sherman, Jihan ; Glass, Lelia ; Partridge, Andrew ; Swarts, Matthew E.
    In the first GVU Brown Bag Seminar of the academic year, Keith Edwards, GVU Center Director and Professor of Interactive Computing, will kick off our talk series with an overview of the GVU Center detailing its unique resources and opportunities, and previewing some of the events coming up this semester. Come, enjoy lunch, and learn about some of the ways you can connect with GVU. Also, each year, the GVU Center and IPaT announce funding for the Research and Engagement Grants, which support early-stage work by Georgia Tech researchers. This year’s winners will give brief overviews of the work they will be doing over the coming academic year.
  • Item
    inSpace: Co-Designing the Physical and Digital Environment to Support Workplace Collaboration
    (Georgia Institute of Technology, 2008) Voida, Stephen ; McKeon, Matt ; Le Dantec, Christopher A. ; Forslund, C. ; Verma, Puja ; McMillan, B. ; Bunde-Pedersen, J. ; Edwards, W. Keith ; Mynatt, Elizabeth D. ; Mazalek, Ali
    In this paper, we unpack three themes for the multidisciplinary co-design of a physical and digital meeting space environment to support collaboration: that social practices should dictate design, the importance of supporting fluidity, and the need for technological artifacts to have a social voice. We describe a prototype meeting space named inSpace that explores how design grounded in these themes can create a user-driven, information-rich environment supporting a variety of meeting types. Our current space includes a table with integrated sensing and ambient feedback, a shared wall display that supports multiple concurrent users, and a collection of storage and infrastructure services that support communication and can automatically capture traces of how artifacts are used in the space.
  • Item
    An Architecture for Transforming Graphical Interfaces
    (Georgia Institute of Technology, 1995) Edwards, W. Keith ; Mynatt, Elizabeth D.
    While graphical user interfaces have gained much popularity in recent years, there are situations when the need to use existing applications in a nonvisual modality is clear. Examples of such situations include the use of applications on hand-held devices with limited screen space (or even no screen space, as in the case of telephones), or use by people with visual impairments. We have developed an architecture capable of transforming the graphical interfaces of existing applications into powerful and intuitive nonvisual interfaces. Our system, called Mercator, provides new input and output techniques for working in the nonvisual domain. Navigation is accomplished by traversing a hierarchical tree representation of the interface structure. Output is primarily auditory, although other output modalities (such as tactile) can be used as well. The mouse, an inherently visually oriented device, is replaced by keyboard and voice interaction. Our system is currently in its third major revision. We have gained insight into both the nonvisual interfaces presented by our system and the architecture necessary to construct such interfaces. This architecture uses several novel techniques to efficiently and flexibly map graphical interfaces into new modalities.
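    As a rough illustration of the navigation model this abstract describes (a hierarchical tree of interface objects traversed from the keyboard, with each visited node rendered audibly), here is a minimal Python sketch. The names Node, speak, and navigate, and the key bindings, are hypothetical conveniences for this example and are not taken from the Mercator system itself.

# Hypothetical sketch: model the interface as a tree of objects and let the
# user walk it with keystrokes, voicing each node on arrival.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    role: str                                   # e.g. "window", "menu", "button"
    label: str
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

def attach(parent: Node, child: Node) -> Node:
    child.parent = parent
    parent.children.append(child)
    return child

def speak(node: Node) -> None:
    # Stand-in for auditory output (speech plus non-speech cues).
    print(f"[audio] {node.role}: {node.label}")

def navigate(root: Node, keys: str) -> None:
    """'d' = first child, 'u' = parent, 'n'/'p' = next/previous sibling."""
    current = root
    speak(current)
    for key in keys:
        if key == "d" and current.children:
            current = current.children[0]
        elif key == "u" and current.parent:
            current = current.parent
        elif key in "np" and current.parent:
            siblings = current.parent.children
            i = siblings.index(current)
            current = siblings[(i + (1 if key == "n" else -1)) % len(siblings)]
        speak(current)

if __name__ == "__main__":
    root = Node("window", "Text Editor")
    menubar = attach(root, Node("menubar", "Menus"))
    attach(menubar, Node("menu", "File"))
    attach(menubar, Node("menu", "Edit"))
    attach(root, Node("text area", "document.txt"))
    navigate(root, "ddnu")   # into menubar, into File, over to Edit, back up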
  • Item
    Providing Access to Graphical User Interfaces - Not Graphical Screens
    (Georgia Institute of Technology, 1995) Edwards, W. Keith ; Mynatt, Elizabeth D. ; Stockton, Kathryn
    The 1990 paper "The Graphical User Interface: Crisis, Danger and Opportunity" summarized an overwhelming concern expressed by the blind community: a new type of visual interface threatened to erase the progress made by the innovators of screen reader software. Such software (as the name implies) could read the contents of a computer screen, allowing blind computer users equal access to the tools used by their sighted colleagues. Whereas ASCII-based screens were easily accessible, new graphical interfaces presented a host of technological challenges. The contents of the screen were mere pixel values, the on or off "dots" which form the basis of any bit-mapped display. The goal for screen reader providers was to develop new methods for bringing the meaning of these picture-based interfaces to users who could not see them. The crisis was imminent. Graphical user interfaces were quickly adopted by the sighted community as more intuitive interfaces. Ironically, these interfaces were deemed more accessible by the sighted population because they seemed approachable for novice computer users. The danger was tangible in the forms of lost jobs, barriers to education, and the simple frustration of being left behind as the computer industry charged ahead. Much has changed since that article was published. Commercial screen reader interfaces now exist for two of the three main graphical environments. Some feel that the crisis has been averted, that the danger is now diminished. But what about the opportunity? Have graphical user interfaces improved the lives of blind computer users? The simple answer is not very much. This opportunity has not been realized because current screen reader technology provides access to graphical screens, not graphical interfaces. In this paper, we discuss the historical reasons for this mismatch as well as analyze the contents of graphical user interfaces. Next, we describe one possible way for a blind user to interact with a graphical user interface, independent of its presentation on the screen. We conclude by describing the components of a software architecture that can capture and model a graphical user interface for presentation to a blind computer user.
  • Item
    The Mercator Environment: A Nonvisual Interface to X Windows and Unix Workstations
    (Georgia Institute of Technology, 1992) Mynatt, Elizabeth D. ; Edwards, W. Keith
    User interfaces to computer workstations are heavily dependent on visual information. These Graphical User Interfaces, commonly found on powerful desktop computers, are almost completely inaccessible to blind and visually impaired individuals. In order to make these types of computers accessible to non-sighted users, it will be necessary to develop a new interface which replaces the visual communication with audio and tactile communication. This paper describes the Mercator Environment--an auditory and tactile interface to X Windows and Unix workstations designed for the visually impaired.
  • Item
    Metaphors for Nonvisual Computing
    (Georgia Institute of Technology, 1992) Mynatt, Elizabeth D. ; Edwards, W. Keith
    Many of the systems in this book exemplify negotiating a technological barrier in order to provide access to a computer or other device. The necessity of providing access of any kind to existing devices has often outweighed the desire to design systems specifically for a small, although important, group of users. For example, computer access has been almost completely driven by the goal of overcoming more and more technological barriers. Currently, a significant problem in computer access is providing access to Graphical User Interfaces (GUIs) for computer users who are blind.
  • Item
    Mapping GUIs to Auditory Interfaces
    (Georgia Institute of Technology, 1992) Mynatt, Elizabeth D. ; Edwards, W. Keith
    This paper describes work to provide mappings between X-based graphical interfaces and auditory interfaces. In our system, dubbed Mercator, this mapping is transparent to applications. The primary motivation for this work is to provide accessibility to graphical applications for users who are blind or visually impaired. We describe the design of an auditory interface which simulates many of the features of graphical interfaces. We then describe the architecture we have built to model and transform graphical interfaces. Finally, we conclude with some indications of future research for improving our translation mechanisms and for creating an auditory "desktop" environment.
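    To make the idea of a screen-independent mapping concrete, here is a small, hypothetical Python sketch that translates widget roles into auditory presentations (a non-speech cue plus a spoken label). The cue table is illustrative only and is not the mapping published in the paper.

# Hypothetical sketch: map widget classes, not pixels, to auditory output,
# so the presentation is independent of how the GUI is drawn on screen.
WIDGET_TO_CUE = {
    "button":    "short click",
    "menu":      "rising chime",
    "text":      "typewriter tick",
    "scrollbar": "sliding whoosh",
}

def auditory_presentation(role: str, label: str) -> str:
    """Return a description of the sound rendered for one widget."""
    cue = WIDGET_TO_CUE.get(role, "neutral beep")
    return f"play '{cue}', then speak '{label}'"

for role, label in [("menu", "File"), ("button", "OK"), ("text", "Search")]:
    print(auditory_presentation(role, label))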