Title:
The role of representations in human activity recognition

dc.contributor.advisor Plötz, Thomas
dc.contributor.advisor Anderson, David V.
dc.contributor.author Haresamudram, Harish
dc.contributor.committeeMember Essa, Irfan
dc.contributor.committeeMember Vela, Patricio
dc.contributor.department Electrical and Computer Engineering
dc.date.accessioned 2020-05-20T16:57:59Z
dc.date.available 2020-05-20T16:57:59Z
dc.date.created 2019-05
dc.date.issued 2019-05-01
dc.date.submitted May 2019
dc.date.updated 2020-05-20T16:57:59Z
dc.description.abstract We investigate the role of representations in sensor-based human activity recognition (HAR). In particular, we develop convolutional and recurrent autoencoder architectures for feature learning and compare their performance to a distribution-based representation as well as a supervised deep learning representation based on the DeepConvLSTM architecture. This is motivated by the promise of deep learning methods: they learn end-to-end, eliminate the need for hand-crafted features, and generalize well across tasks and datasets. We study unsupervised learning methods because they afford the possibility of learning meaningful representations without labeled data. Such representations allow large, unlabeled datasets to be leveraged for feature and transfer learning. The study is performed on five datasets that are diverse in the number of subjects, activities, and settings. The analysis is carried out from a wearables standpoint, considering factors such as memory footprint, the effect of dimensionality, and computation time. We find that the convolutional and recurrent autoencoder-based representations outperform the distribution-based representation on all datasets. Additionally, we conclude that autoencoder-based representations offer performance comparable to the supervised DeepConvLSTM-based representation. On the larger datasets with multiple sensors, such as Opportunity and PAMAP2, the convolutional and recurrent autoencoder-based representations are highly effective. Resource-constrained scenarios justify the use of the distribution-based representation, which has low computational cost and memory requirements. Finally, when the number of sensors is low, we observe that the vanilla autoencoder-based representations perform well.
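To illustrate the convolutional autoencoder approach described in the abstract, the following is a minimal sketch in PyTorch, not the architecture reported in the thesis: all layer sizes, the number of sensor channels (n_channels=9), the window length (128 samples), and the latent dimensionality (latent_dim=64) are assumptions chosen for this example. The encoder's bottleneck output is the learned representation that a downstream activity classifier would consume.

# Minimal illustrative sketch (assumed hyperparameters, not the thesis's exact model):
# a 1D convolutional autoencoder whose bottleneck serves as the learned
# representation for windows of wearable sensor data.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, n_channels: int = 9, latent_dim: int = 64):
        super().__init__()
        # Encoder: compress a (batch, channels, time) window into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Decoder: reconstruct the original window from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 32),
            nn.ReLU(),
            nn.Unflatten(1, (64, 32)),
            nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

    def represent(self, x: torch.Tensor) -> torch.Tensor:
        # Bottleneck activations used as features for a downstream activity classifier.
        return self.encoder(x)

if __name__ == "__main__":
    model = ConvAutoencoder()
    window = torch.randn(8, 9, 128)                        # batch of unlabeled sensor windows
    loss = nn.functional.mse_loss(model(window), window)   # reconstruction objective
    features = model.represent(window)                      # (8, 64) learned representation
    print(loss.item(), features.shape)

In a sketch like this, the autoencoder is first trained with a reconstruction loss on unlabeled sensor windows; represent() is then used to extract features on which a separate, supervised activity classifier is trained with the available labeled data.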
dc.description.degree M.S.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/1853/62706
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Unsupervised learning
dc.subject Human activity recognition
dc.subject Autoencoder models
dc.title The role of representations in human activity recognition
dc.type Text
dc.type.genre Thesis
dspace.entity.type Publication
local.contributor.advisor Anderson, David V.
local.contributor.corporatename School of Electrical and Computer Engineering
local.contributor.corporatename College of Engineering
relation.isAdvisorOfPublication eefeec08-2c7a-4e05-9f4b-7d25059e20a0
relation.isOrgUnitOfPublication 5b7adef2-447c-4270-b9fc-846bd76f80f2
relation.isOrgUnitOfPublication 7c022d60-21d5-497c-b552-95e489a06569
thesis.degree.level Masters
Files
Original bundle
Name: HARESAMUDRAM-THESIS-2019.pdf
Size: 638 KB
Format: Adobe Portable Document Format
License bundle
Name: LICENSE.txt
Size: 3.87 KB
Format: Plain Text