Person: Abowd, Gregory D.

Publication Search Results

Leveraging Context to Support Automated Food Recognition in Restaurants

2015-01 , Bettadapura, Vinay , Thomaz, Edison , Parnami, Aman , Abowd, Gregory D. , Essa, Irfan

The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant available online, coupled with state-of-the-art computer vision techniques, to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants’ online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai).
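
The central idea is easy to sketch: location context narrows the label space to the dishes on one restaurant’s menu before any visual classification happens. The following is a minimal illustration only, not the authors’ pipeline; the menu table, feature extractor, and nearest-centroid classifier are all hypothetical stand-ins (the paper trains on images drawn from online menu databases and uses stronger vision features).

```python
import numpy as np

# Hypothetical menu database: restaurant -> dishes. Stands in for the
# online menu databases the paper draws training images from.
MENUS = {
    "thai_house": ["pad thai", "green curry", "tom yum"],
    "trattoria": ["margherita pizza", "lasagna", "tiramisu"],
}

def extract_features(image_id, dim=64):
    """Stand-in for a learned visual descriptor (deterministic per image)."""
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    return rng.standard_normal(dim)

def fit_centroids(training_images):
    """One centroid per dish, averaged over that dish's training images."""
    return {dish: np.mean([extract_features(i) for i in imgs], axis=0)
            for dish, imgs in training_images.items()}

def classify(photo_id, restaurant, centroids):
    # Context step: only dishes on this restaurant's menu are candidates.
    candidates = MENUS[restaurant]
    feats = extract_features(photo_id)
    return min(candidates,
               key=lambda dish: np.linalg.norm(feats - centroids[dish]))

train = {dish: [f"{dish}_{i}" for i in range(5)]
         for menu in MENUS.values() for dish in menu}
centroids = fit_centroids(train)
print(classify("lunch_photo_001", "thai_house", centroids))
```

Restricting the candidate set to one menu is what makes the problem tractable: the classifier discriminates among a handful of dishes rather than every food in existence.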

Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing

2012-09 , Thomaz, Edison , Bettadapura, Vinay , Reyes, Gabriel , Sandesh, Megha , Schindler, Grant , Plötz, Thomas , Abowd, Gregory D. , Essa, Irfan

Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation, and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with an overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. To the best of our knowledge, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
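
A toy sketch of the vector-space-model idea: each activity instance becomes a term-frequency vector over a vocabulary of fixture-level water events, and recognition is nearest-prototype matching under cosine similarity. The event vocabulary, training data, and classifier below are illustrative assumptions, not the paper’s actual event set or learning setup.

```python
import numpy as np

# Assumed vocabulary of fixture-level water events decoded from a single
# pressure sensor; the paper's actual event set may differ.
EVENTS = ["sink_on", "sink_off", "shower_on", "shower_off", "toilet_flush"]

def to_vector(sequence):
    """Bag-of-events term-frequency vector for one activity instance."""
    vec = np.zeros(len(EVENTS))
    for event in sequence:
        vec[EVENTS.index(event)] += 1
    return vec

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Toy labeled instances: activity -> example event sequences.
train = {
    "washing_hands": [["sink_on", "sink_off"]],
    "showering": [["shower_on", "shower_off"]],
    "toileting": [["toilet_flush", "sink_on", "sink_off"]],
}
prototypes = {act: np.mean([to_vector(s) for s in seqs], axis=0)
              for act, seqs in train.items()}

def recognize(sequence):
    vec = to_vector(sequence)
    return max(prototypes, key=lambda act: cosine(vec, prototypes[act]))

print(recognize(["toilet_flush", "sink_on", "sink_off"]))  # -> toileting
```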

Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation

2013-11 , Thomaz, Edison , Parnami, Aman , Essa, Irfan , Abowd, Gregory D.

There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual’s eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.
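
The human-computation step can be pictured as simple label aggregation: several workers judge each first-person image, and a majority vote decides whether it depicts an eating moment. The image identifiers, vote counts, and labels below are hypothetical; this sketch shows only the aggregation logic, not the authors’ study design.

```python
from collections import Counter

# Hypothetical worker judgments: image_id -> labels from several
# independent annotators ("eating" / "not_eating").
crowd_labels = {
    "img_0001": ["eating", "eating", "not_eating"],
    "img_0002": ["not_eating", "not_eating", "not_eating"],
    "img_0003": ["eating", "not_eating", "eating"],
}

def majority_vote(votes):
    """Collapse independent judgments into a single label per image."""
    label, _count = Counter(votes).most_common(1)[0]
    return label

def eating_moments(labels):
    return [img for img, votes in sorted(labels.items())
            if majority_vote(votes) == "eating"]

print(eating_moments(crowd_labels))  # -> ['img_0001', 'img_0003']
```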

Decoding Children’s Social Behavior

2013-06 , Rehg, James M. , Abowd, Gregory D. , Rozga, Agata , Romero, Mario , Clements, Mark A. , Sclaroff, Stan , Essa, Irfan , Ousley, Opal Y. , Li, Yin , Kim, Chanho , Rao, Hrishikesh , Kim, Jonathan C. , Lo Presti, Liliana , Zhang, Jianming , Lantsman, Denis , Bidwell, Jonathan , Ye, Zhefan

We introduce a new problem domain for activity recognition: the analysis of children’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1–2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly available dataset containing over 160 sessions of a 3–5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.
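
As a rough illustration of what multi-modal activity recognition over such sessions could look like, the sketch below late-fuses per-modality classifier scores from audio and video features. Everything here (the features, the nearest-centroid scorer, the fusion weight) is an assumed toy setup, not the methods evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-session descriptors for two modalities; stand-ins for whatever
# audio and video features one extracts from the recorded sessions.
n_sessions, n_classes = 40, 3
audio = rng.standard_normal((n_sessions, 8))
video = rng.standard_normal((n_sessions, 16))
labels = rng.integers(0, n_classes, n_sessions)

def class_scores(feats, query):
    """Nearest-centroid scores: negative distance to each class centroid."""
    return np.array([-np.linalg.norm(query - feats[labels == c].mean(axis=0))
                     for c in range(n_classes)])

def late_fusion_predict(audio_query, video_query, w=0.5):
    # Late fusion: combine per-modality scores rather than raw features.
    scores = (w * class_scores(audio, audio_query)
              + (1 - w) * class_scores(video, video_query))
    return int(np.argmax(scores))

print(late_fusion_predict(audio[0], video[0]))
```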