Document Type
Article
Abstract
This article proposes a novel approach for augmenting a generative adversarial network (GAN) with a self-supervised task in order to improve its ability to encode video representations that are useful in downstream tasks such as human activity recognition. In the proposed method, input video frames are randomly transformed by different spatial transformations, such as rotation, translation, and shearing, or by temporal transformations, such as shuffling the temporal order of frames. The discriminator is then encouraged to predict the applied transformation through an auxiliary loss. The results demonstrate the superiority of the proposed method over baseline methods in providing useful video representations for human activity recognition, evaluated on datasets such as KTH, UCF101, and Ball-Drop. The Ball-Drop dataset was specifically designed to measure executive functions in children through physically and cognitively demanding tasks. Using features from the proposed method instead of baseline methods increased top-1 classification accuracy by more than 4%. Moreover, an ablation study was performed to investigate the contribution of the different transformations to the downstream task.
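To make the auxiliary-loss idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: a toy discriminator with a shared backbone, a real/fake head, and an auxiliary head that predicts which transformation was applied to a real clip. The transformation set, the network shapes, and the names `apply_transform`, `NUM_TRANSFORMS`, and `lambda_aux` are all assumptions for illustration; the paper's actual architecture and transformation list may differ.

```python
# Hedged sketch of a GAN discriminator with a self-supervised
# transformation-prediction head (NOT the paper's code; all names
# and hyperparameters below are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TRANSFORMS = 4  # assumed set: identity, rotation, flip, frame shuffle


def apply_transform(clip: torch.Tensor, t: int) -> torch.Tensor:
    """Apply one self-supervised transformation to a video clip of
    shape (C, T, H, W). Simplified stand-ins; assumes square frames."""
    if t == 0:                                   # identity
        return clip
    if t == 1:                                   # 90-degree spatial rotation
        return torch.rot90(clip, 1, dims=(2, 3))
    if t == 2:                                   # horizontal flip (stand-in for shear)
        return torch.flip(clip, dims=(3,))
    perm = torch.randperm(clip.size(1))          # t == 3: shuffle temporal order
    return clip[:, perm]


class Discriminator(nn.Module):
    """Shared 3D-conv backbone with two heads: real/fake and
    auxiliary transformation classification."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.real_fake = nn.Linear(16, 1)
        self.aux = nn.Linear(16, NUM_TRANSFORMS)

    def forward(self, x):
        h = self.backbone(x)
        return self.real_fake(h), self.aux(h)


def discriminator_loss(disc, real_clips, fake_clips, lambda_aux=1.0):
    """Adversarial loss plus the auxiliary self-supervised loss:
    the discriminator must recover which transformation was applied."""
    labels = torch.randint(0, NUM_TRANSFORMS, (real_clips.size(0),))
    transformed = torch.stack(
        [apply_transform(c, int(t)) for c, t in zip(real_clips, labels)])

    rf_real, aux_logits = disc(transformed)
    rf_fake, _ = disc(fake_clips.detach())

    adv = (F.binary_cross_entropy_with_logits(rf_real, torch.ones_like(rf_real))
           + F.binary_cross_entropy_with_logits(rf_fake, torch.zeros_like(rf_fake)))
    aux = F.cross_entropy(aux_logits, labels)    # transformation prediction
    return adv + lambda_aux * aux                # lambda_aux is an assumed weight
```

In this sketch, the backbone trained with the extra objective would serve as the video feature extractor for the downstream activity-recognition classifier.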
Publication Date
7-2-2021
Language
English
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Zahed, Mohammad Zaki; Jaiswal, Ashish; Ashwin, Ramesh Babu; Kyrarini, Maria; and Makedon, Fillia, "Self-Supervised Human Activity Recognition by Augmenting Generative Adversarial Networks" (2021). Association of Computing Machinery Open Access Agreement Publications. 21.
https://mavmatrix.uta.edu/utalibraries_acmoapubs/21