Title:
Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation

dc.contributor.author Thomaz, Edison
dc.contributor.author Parnami, Aman
dc.contributor.author Essa, Irfan
dc.contributor.author Abowd, Gregory D.
dc.contributor.corporatename Georgia Institute of Technology. School of Interactive Computing en_US
dc.contributor.corporatename Georgia Institute of Technology. Institute for Robotics and Intelligent Machines en_US
dc.date.accessioned 2014-03-03T18:02:52Z
dc.date.available 2014-03-03T18:02:52Z
dc.date.issued 2013-11
dc.description Copyright ©2013 ACM
dc.description Presented at the 2013 4th International SenseCam and Pervasive Imaging Conference (SenseCam ’13), November 18-19, 2013, La Jolla, CA.
dc.description DOI: 10.1145/2526667.2526672
dc.description.abstract There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual’s eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy. en_US
dc.identifier.citation E. Thomaz, A. Parnami, I. Essa, and G. D. Abowd, “Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation,” in Proceedings of the 4th International SenseCam and Pervasive Imaging Conference (SenseCam ’13), 2013. en_US
dc.identifier.doi 10.1145/2526667.2526672
dc.identifier.uri http://hdl.handle.net/1853/51304
dc.language.iso en_US en_US
dc.publisher Georgia Institute of Technology en_US
dc.publisher.original Association for Computing Machinery
dc.subject Crowdsourcing en_US
dc.subject Diet en_US
dc.subject Egocentric photos en_US
dc.subject Health en_US
dc.subject Human computation en_US
dc.subject Lifestyle en_US
dc.subject Mechanical Turk en_US
dc.subject Wearable en_US
dc.title Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation en_US
dc.type Text
dc.type.genre Post-print
dc.type.genre Proceedings
dspace.entity.type Publication
local.contributor.author Essa, Irfan
local.contributor.author Abowd, Gregory D.
local.contributor.corporatename Institute for Robotics and Intelligent Machines (IRIM)
local.contributor.corporatename College of Computing
relation.isAuthorOfPublication 84ae0044-6f5b-4733-8388-4f6427a0f817
relation.isAuthorOfPublication a9e4f620-85d6-4fb9-8851-8b0c3a0e66b4
relation.isOrgUnitOfPublication 66259949-abfd-45c2-9dcc-5a6f2c013bcf
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
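
Note: The abstract above describes the human-computation approach only at a high level. As a rough, illustrative sketch (not the authors' actual pipeline), the Python snippet below shows one way crowd-worker labels for first-person images could be aggregated by majority vote and scored against ground truth. The data layout, label names, function names, and the majority-vote rule are assumptions made purely for illustration.

# Minimal sketch (assumed workflow, not the paper's method): aggregate
# crowd-worker labels per image by majority vote and compute accuracy
# against a ground-truth annotation.
from collections import Counter
from typing import Dict, List

def majority_vote(labels: List[str]) -> str:
    """Return the most frequent label among worker responses for one image."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predictions: Dict[str, str], truth: Dict[str, str]) -> float:
    """Fraction of images whose aggregated label matches the ground truth."""
    correct = sum(predictions[img] == truth[img] for img in truth)
    return correct / len(truth)

if __name__ == "__main__":
    # Toy example: three hypothetical workers label each image as
    # "eating" or "not_eating".
    worker_labels = {
        "img_0001.jpg": ["eating", "eating", "not_eating"],
        "img_0002.jpg": ["not_eating", "not_eating", "not_eating"],
    }
    ground_truth = {"img_0001.jpg": "eating", "img_0002.jpg": "not_eating"}
    preds = {img: majority_vote(votes) for img, votes in worker_labels.items()}
    print(f"Accuracy: {accuracy(preds, ground_truth):.2%}")

Majority voting is only one plausible aggregation rule; the 89.68% accuracy reported in the abstract comes from the authors' own method and study data, not from this sketch.
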
Files
Original bundle
Name: 2013-Thomaz-FIEMFFILHC.pdf
Size: 1.39 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 3.13 KB
Format: Item-specific license agreed upon to submission