Organizational Unit:
School of Interactive Computing

Publication Search Results

  • Item
    The internet of living things: Enabling increased information flow in dog—human interactions
    (Georgia Institute of Technology, 2017-04-25) Alcaidinho, Joelle Marie
    The human–canine relationship is one of the oldest relationships between a human and an animal. Even with this longevity and unique living arrangement, there is still a great deal that we don’t know about our dogs. What do we want to know, and how can computing help provide avenues for dogs to tell us more? To address the question of what people wish their dogs could tell them, an unpublished survey of UK dog owners found that the most frequent request was to know about the dog’s emotional state, and the most frequent response about what owners wish their dogs would tell them concerned what the dogs love and what they are thinking. These responses dominated the survey, outnumbering even the responses regarding the dog’s physical needs, such as toileting. This hunger for more and better information from dogs has created a boom in devices targeting these desires with unverified claims, which have appeared on the market within the past five years. Clearly there is a need for more research, particularly in computing, in this space. While my dissertation unfortunately does not provide a love detector or dog-thought decoder, it does lay out the space for what wearables on dogs could provide today and in the near future. My focus is on addressing the information asymmetry between dogs and people, specifically by using wearable computing to provide more and richer information from the dog to more people. To do this, I break down the space into three categories of interactions. Within each of these categories I present research that explores how these interactions can work in the field through prototype systems. This area of research, Animal–Human–Computer Interaction, is new, and the area of Canine-Centered Computing is younger still. With the state of these fields in mind, my goal with this dissertation is to help frame this space as it pertains to dogs and wearable computing.
  • Item
    Enabling In situ & context-based motion gesture design
    (Georgia Institute of Technology, 2017-04-05) Parnami, Aman
    Motion gestures, detected through body-worn inertial sensors, are an expressive, fast-to-access input technique that is ubiquitously supported by mobile and wearable devices. Recent work on gesture authoring tools has shown that interaction designers can create and evaluate gesture recognizers in stationary, controlled environments. However, we still lack a generalized understanding of their design process and of how to enable in situ and context-based motion gesture design. This dissertation advances our understanding of these problems in two ways: first, by characterizing the factors impacting a gesture designer's process, as well as their gesture designs and tools; second, by demonstrating rapid motion gesture design in a variety of new contexts. Specifically, this dissertation presents: (1) a novel triadic framework that enhances our understanding of motion gestures, their designers, and the factors influencing the design of authoring tools; (2) the first explorations of in situ and context-based prototyping of motion gestures, through the development of two generations of a smartphone-based tool, Mogeste, followed by Comoge; and (3) a description of the challenges and advantages of designing motion gestures in situ, based on the first user study with both professional and student interaction designers.
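
The abstract does not describe the recognizer internals, and the sketch below is not Mogeste's or Comoge's actual pipeline. It is only a minimal Python illustration of how a single designer-recorded demonstration per gesture can drive a motion gesture recognizer: template matching over accelerometer traces with dynamic time warping. The function names, synthetic templates, and distance threshold are all hypothetical.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, 3) accelerometer traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-sample distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def classify(sample: np.ndarray, templates: dict, threshold: float = 25.0) -> str:
    """Label a recorded motion by its nearest demonstration template."""
    name, best = min(((name, dtw_distance(sample, t)) for name, t in templates.items()),
                     key=lambda x: x[1])
    return name if best < threshold else "no_gesture"

# Hypothetical use: one demonstration per gesture, then match a noisy new recording.
rng = np.random.default_rng(1)
templates = {"shake": rng.normal(0, 1, (40, 3)), "flick": rng.normal(0, 1, (30, 3))}
print(classify(templates["shake"] + rng.normal(0, 0.1, (40, 3)), templates))
```

Template matching is shown here only because it needs a single demonstration and no training infrastructure, which is one plausible fit for in situ authoring; the dissertation's tools may work quite differently.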
  • Item
    Understanding visual analysis processes from user interactions using visual analytics
    (Georgia Institute of Technology, 2016-11-15) Han, Yi
    Understanding the visual analysis process that people follow when using a visualization application can help its designers improve the application. This goal is typically achieved by observing usage sessions. Unfortunately, many visualization applications are now deployed online, so their use occurs remotely, which makes it very difficult for designers to observe usage sessions directly in person. A solution to this problem is to analyze interaction logs. While interaction logs are easy to collect remotely and at scale, they can be difficult to analyze because they require an analyst to make many difficult decisions about event organization and pattern discovery. For example, which events are irrelevant to the analysis and should be removed? Which events should be grouped because they are related to the same feature? Which events lead to meaningful patterns that help to understand user behaviors? An analyst needs to make these decisions to identify different types of patterns and insights based on an analysis goal. If the analysis goal changes during the process, these decisions need to be revisited to obtain the best analysis results. Because of the subjective nature of the analysis process and these decisions, the process cannot be fully automated, and flexibility is required. At the same time, every decision demands additional effort from the analyst, which can reduce the practicality of the analysis process. Therefore, an effective interaction analysis method needs to balance the tradeoff between flexibility and practicality to best support analysts. Visual analytics provides a promising solution to this problem because it combines humans' visual analysis abilities with the support of computational methods. For flexibility, interactive visualizations ensure that an analyst can dynamically adjust decisions at every step of the process, maximizing the variety of patterns that can be identified. For practicality, visualizations help speed up data inspection and decision making, while computational methods reduce the labor of extracting potentially useful patterns. Therefore, in this thesis I employ visual analytics in a visual interaction analysis framework to achieve flexibility and practicality in the visual analysis process for identifying patterns in interaction logs. I evaluate the framework by applying it to multiple visualization applications to assess the effectiveness of the analysis process and the usefulness of the patterns discovered.
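
As a rough illustration of the kinds of decisions the abstract lists (which events to drop, how to group events by feature, which sequential patterns to count), and not the framework developed in the thesis, a minimal Python sketch over a hypothetical interaction log might look like this. The event names, feature mapping, and log contents are invented for the example.

```python
from collections import Counter
from itertools import groupby

# Hypothetical interaction log: (timestamp, event_name) pairs.
log = [(0.0, "mousemove"), (1.2, "zoom_in"), (1.9, "zoom_in"),
       (3.4, "filter_set"), (5.0, "mousemove"), (6.1, "zoom_out")]

IRRELEVANT = {"mousemove"}                         # decision 1: events dropped from the analysis
FEATURE = {"zoom_in": "zoom", "zoom_out": "zoom",  # decision 2: events grouped by the
           "filter_set": "filter"}                 #   application feature they relate to

# Remove irrelevant events, map each event to its feature,
# collapse consecutive repeats, then count bigram patterns.
events = [FEATURE[e] for _, e in log if e not in IRRELEVANT]
collapsed = [k for k, _ in groupby(events)]
bigrams = Counter(zip(collapsed, collapsed[1:]))
print(bigrams)   # Counter({('zoom', 'filter'): 1, ('filter', 'zoom'): 1})
```

In the thesis's framing, each of these hard-coded choices is exactly what an analyst would want to revisit interactively when the analysis goal changes, which is the motivation for a visual analytics approach rather than a fixed script.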
  • Item
    Automatic eating detection in real-world settings with commodity sensing
    (Georgia Institute of Technology, 2016-01-07) Thomaz, Edison
    Motivated by challenges and opportunities in nutritional epidemiology and food journaling, ubiquitous computing researchers have proposed numerous techniques for automated dietary monitoring (ADM) over the years. Although progress has been made, a truly practical system that can automatically recognize what people eat in real-world settings remains elusive. This dissertation addresses the problem of ADM by focusing on practical eating moment detection. Eating detection is a foundational element of ADM, since automatically recognizing when a person is eating is required before identifying what and how much is being consumed. Additionally, eating detection can serve as the basis for new types of dietary self-monitoring practices, such as semi-automated food journaling. In this thesis, I show that everyday eating moments such as breakfast, lunch, and dinner can be automatically detected in real-world settings by opportunistically leveraging sensors in practical, off-the-shelf wearable devices. I refer to this instrumentation approach as "commodity sensing". The work covered by this thesis encompasses a series of experiments I conducted with a total of 106 participants, in which I explored a variety of sensing modalities for automatic eating moment detection. The modalities studied include first-person images taken with wearable cameras, ambient sounds, and on-body inertial sensors. I discuss the extent to which first-person images reflecting everyday experiences can be used to identify eating moments using two approaches: human computation, and a combination of state-of-the-art machine learning and computer vision techniques. I also describe privacy challenges that arise with first-person photographs. Next, I present results showing how certain sounds associated with eating can be recognized and used to infer eating activities. Finally, I elaborate on findings from three studies focused on the use of on-body inertial sensors (head and wrists) to recognize eating moments, both in a semi-controlled laboratory setting and in real-world conditions. I conclude by relating findings and insights to practical applications and highlighting opportunities for future work.
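
The dissertation's own models and parameters are not specified in this abstract. As a hedged illustration of the general commodity-sensing recipe it describes (fixed-length windows over wrist accelerometer data, simple statistical features, a standard classifier), a minimal Python sketch with synthetic stand-in data might look like the following. The sampling rate, window length, feature set, and classifier are assumptions made for the example, not the study's actual choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, fs=25, win_s=20):
    """Slice a (T, 3) accelerometer stream into non-overlapping windows and
    compute simple per-axis statistics (mean, std, range) for each window."""
    win = fs * win_s
    feats = []
    for start in range(0, len(accel) - win + 1, win):
        w = accel[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     w.max(axis=0) - w.min(axis=0)]))
    return np.array(feats)

# Synthetic stand-ins for labeled wrist recordings (1 = eating, 0 = everything else).
rng = np.random.default_rng(0)
eating = rng.normal(0, 1.0, (5000, 3))   # placeholder for hand-to-mouth motion
other = rng.normal(0, 0.2, (5000, 3))    # placeholder for non-eating activity
feats_eat, feats_other = window_features(eating), window_features(other)
X = np.vstack([feats_eat, feats_other])
y = np.array([1] * len(feats_eat) + [0] * len(feats_other))

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(window_features(rng.normal(0, 1.0, (1000, 3)))))  # per-window labels
```

A real system would of course train and evaluate on labeled recordings from participants rather than synthetic noise; the point of the sketch is only the window-features-classifier structure that off-the-shelf wearables make practical.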
  • Item
    Supporting remote synchronous communication between parents and young children
    (Georgia Institute of Technology, 2012-04-04) Yarosh, Svetlana
    Parents and children increasingly spend time living apart due to marital separation and work travel. I investigated parent–child separation in both of these contexts and found that current technologies frequently do not meet the needs of families. The telephone is easy to use and ubiquitous but does not provide an engaging way of communicating with children. Videochat is more emotionally expressive and has a greater potential for engagement but is difficult to set up and cannot be used by a child without the help of an adult. Both telephone and videochat fail to meet the needs of remote parenting because they focus on conversation rather than care and play activities, which are the mechanism by which parents and children build closeness. I also saw that in both types of separation the motivation to connect at times conflicted with the desire to reduce disruption of the remote household. To address some of these issues, I designed a system called the ShareTable, which provides easy-to-initiate videochat with a shared tabletop activity space. After an initial lab-based evaluation confirmed the promise of this approach, I deployed the ShareTable to four households (two sets of divorced families). I collected data about the families' remote interactions before and during the deployment. Remote communication more than doubled for each of these families while using the ShareTable, and I saw a marked increase in the number of communication sessions initiated by the child. The ShareTable provided benefits over previous communication systems and supported activities that are impossible with other currently available technologies. One of the biggest successes of the system was in providing an overlapped video space that families appropriated to communicate metaphorical touch and a sense of closeness. However, the ShareTable also introduced a new source of conflict for parents and challenged the families as they tried to develop practices of using the system that would be acceptable to all involved. The families' approach to these challenges, as well as explicit feedback about the system, informs future directions for synchronous communication systems for separated families.
  • Item
    Child's play: activity recognition for monitoring children's developmental progress with augmented toys
    (Georgia Institute of Technology, 2010-05-20) Westeyn, Tracy Lee
    The way in which infants play with objects can be indicative of their developmental progress and may serve as an early indicator for developmental delays. However, observing children interacting with toys for the purpose of quantitative analysis can be a difficult task. To better quantify how play may serve as an early indicator, researchers have conducted retrospective studies examining the differences in object play behaviors among infants. However, such studies require that researchers repeatedly inspect videos of play, often at speeds much slower than real time, to identify points of interest. The research presented in this dissertation examines whether a combination of sensors embedded within toys and automatic pattern recognition of object play behaviors can help expedite this process. For my dissertation, I developed the Child'sPlay system, which uses augmented toys and statistical models to automatically provide quantitative measures of object play interactions, as well as the PlayView interface to view annotated play data for later analysis. In this dissertation, I examine the hypothesis that sensors embedded in objects can provide sufficient data for automatic recognition of certain exploratory, relational, and functional object play behaviors in semi-naturalistic environments, and that a continuum of recognition accuracy exists which allows automatic indexing to be useful for retrospective review. I designed several augmented toys and used them to collect object play data from more than fifty play sessions. I conducted pattern recognition experiments over this data to produce statistical models that automatically classify children's object play behaviors. In addition, I conducted a user study with twenty participants to determine whether annotations automatically generated from these models help improve performance in retrospective review tasks. My results indicate that these statistical models increase user performance and decrease perceived effort when combined with the PlayView interface during retrospective review. High-quality annotations were preferred by users and increased the effective retrieval rate of object play behaviors.
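
One practical piece of such a system, turning per-frame recognizer output into the kind of annotations a reviewer can jump to, can be sketched briefly. The Python snippet below is an illustration rather than the Child'sPlay or PlayView implementation; the behavior labels, frame rate, and per-frame predictions are invented for the example.

```python
from itertools import groupby

def to_annotations(labels, fs=10.0):
    """Collapse per-frame classifier output into (start_s, end_s, label)
    segments that could index a play-session video for review."""
    annotations, frame = [], 0
    for label, run in groupby(labels):
        n = len(list(run))
        if label != "none":                  # skip unlabeled stretches
            annotations.append((frame / fs, (frame + n) / fs, label))
        frame += n
    return annotations

# Hypothetical per-frame output from an object-play recognizer sampled at 10 Hz.
frames = ["none"] * 20 + ["shake"] * 35 + ["none"] * 10 + ["stack"] * 25
print(to_annotations(frames))
# [(2.0, 5.5, 'shake'), (6.5, 9.0, 'stack')]
```

Even with imperfect recognition, an index of this form lets a reviewer seek directly to candidate behaviors instead of scrubbing through video slower than real time, which is the efficiency argument the dissertation evaluates.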