Person:
Abowd, Gregory D.

Publication Search Results

  • Item
    Leveraging Context to Support Automated Food Recognition in Restaurants
    (Georgia Institute of Technology, 2015-01) Bettadapura, Vinay ; Thomaz, Edison ; Parnami, Aman ; Abowd, Gregory D. ; Essa, Irfan
    The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant, available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai).
  • Item
    Predicting Daily Activities From Egocentric Images Using Deep Learning
    (Georgia Institute of Technology, 2015) Castro, Daniel ; Hickson, Steven ; Bettadapura, Vinay ; Thomaz, Edison ; Abowd, Gregory D. ; Christensen, Henrik I. ; Essa, Irfan
    We present a method to analyze images taken from a passive egocentric wearable camera, along with contextual information such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.
  • Item
    Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing
    (Georgia Institute of Technology, 2012-09) Thomaz, Edison ; Bettadapura, Vinay ; Reyes, Gabriel ; Sandesh, Megha ; Schindler, Grant ; Plötz, Thomas ; Abowd, Gregory D. ; Essa, Irfan
    Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with an overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. As far as we know, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.