Title:
The role of representations in human activity recognition

Author(s)
Haresamudram, Harish
Advisor(s)
Plötz, Thomas
Anderson, David V.
Abstract
We investigate the role of representations in sensor-based human activity recognition (HAR). In particular, we develop convolutional and recurrent autoencoder architectures for feature learning and compare their performance to a distribution-based representation as well as to a supervised deep learning representation based on the DeepConvLSTM architecture. This is motivated by the promise of deep learning methods: they learn end-to-end, eliminate the need for hand-crafted features, and generalize well across tasks and datasets. We focus on unsupervised learning methods because they can learn meaningful representations without labeled data, which makes it possible to leverage large, unlabeled datasets for feature and transfer learning. The study is performed on five datasets that are diverse in terms of the number of subjects, activities, and settings. The analysis takes a wearables standpoint, considering factors such as memory footprint, the effect of dimensionality, and computation time. We find that the convolutional and recurrent autoencoder-based representations outperform the distribution-based representation on all datasets. Additionally, the autoencoder-based representations offer performance comparable to the supervised DeepConvLSTM-based representation. On the larger datasets with multiple sensors, such as Opportunity and PAMAP2, the convolutional and recurrent autoencoder-based representations are highly effective. Resource-constrained scenarios still justify the distribution-based representation, which has low computational cost and memory requirements. Finally, when the number of sensors is low, the vanilla autoencoder-based representations perform well.
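To illustrate the kind of unsupervised feature learner the abstract refers to, the sketch below shows a minimal 1D convolutional autoencoder for windowed inertial sensor data. It is an illustrative assumption, not the thesis's exact architecture: the window length (128 samples), channel counts, and code dimension are placeholders, and the recurrent variant discussed in the work is not shown. The encoder is trained to reconstruct unlabeled sensor windows; its output then serves as the learned representation for a downstream activity classifier.

# Minimal sketch (illustrative assumptions, not the thesis's architecture):
# a 1D convolutional autoencoder compressing a window of tri-axial
# accelerometer samples into a fixed-length representation.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    def __init__(self, in_channels: int = 3, window_len: int = 128, code_dim: int = 64):
        super().__init__()
        # Encoder: two strided convolutions halve the temporal resolution twice,
        # then a linear layer produces a fixed-length code.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (window_len // 4), code_dim),
        )
        # Decoder mirrors the encoder to reconstruct the input window.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64 * (window_len // 4)),
            nn.Unflatten(1, (64, window_len // 4)),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 32, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, in_channels, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


# Unsupervised training step: reconstruct unlabeled windows; the encoder output
# is later used as the feature vector for activity classification.
model = ConvAutoencoder()
x = torch.randn(16, 3, 128)  # batch of 16 windows, 3 axes, 128 samples each
loss = nn.functional.mse_loss(model(x), x)
loss.backward()

In this setup the reconstruction objective requires no activity labels, which is what allows large unlabeled wearable datasets to be used for feature and transfer learning as described above.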
Date Issued
2019-05-01
Resource Type
Text
Resource Subtype
Thesis