Author

Chunhai Feng

Graduation Semester and Year

2019

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Engineering

Department

Computer Science and Engineering

First Advisor

Yonghe Liu

Abstract

Channel state information (CSI) based movement recognition has attracted considerable attention in recent years. Unlike traditional systems, which usually require wearable sensors or surveillance cameras, many existing works achieve desirable performance using only wireless signals, applying various machine learning algorithms in applications including healthcare, security, and the Internet of Things. However, many challenges remain to be solved. In particular, the location dependent nature of channel state information is one of the most significant. First, many previous works deploy and evaluate their systems using machine learning or deep neural networks. Because of the aforementioned challenge, these models must be retrained with data collected from each new location, yet such works rarely consider whether enough training samples are available. Training a robust model generally requires a large number of samples, which is difficult to obtain, especially at the early stage of system deployment. It is therefore important to develop a system that can adapt to the number of samples available in the profile. Second, because location dependent features are interleaved with movement dependent features, separating them effectively is the main challenge in correctly identifying activities at different locations without training new models.

To address the first challenge, we propose Wi-multi, a three-phase system that recognizes multiple human movements in a wireless environment. Different system phases are applied according to the number of collected samples available. Specifically, distance-based classification using Dynamic Time Warping (DTW) is applied when there are few samples in the profile; Support Vector Machines are employed when representative features can be extracted from the training samples; and recurrent neural networks are exploited when a large number of samples are available. In addition, an effective movement sample extraction algorithm is proposed to identify the start and end points of multiple subjects' movements. A diverse dataset of multiple human activities is also built to evaluate the performance of the system. Extensive experimental results show that Wi-multi achieves an accuracy of 96.1% on average and a desirable tradeoff between accuracy and efficiency across its phases.

To address the second challenge, we propose a deep neural network system consisting of feature extraction, feature separation, gesture recognition, and location identification modules. The key idea in designing this system is to separate movement dependent features from location dependent features. Specifically, a feature extraction module consisting of three long short-term memory (LSTM) layers selects representative features. The first half of the features is then fed into the gesture recognition module while the second half is passed to the location identification module. During training, the network learns to cluster the first half of the features as gesture dependent and the second half as location dependent by minimizing the total loss of the gesture recognition and location identification modules. The system is evaluated with a dataset collected from various subjects performing 4 different gestures in 2 rooms and 6 locations. The results show that the proposed location independent gesture recognition system achieves 85.42% accuracy on average in new locations.
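The phase-1 idea of distance-based classification with Dynamic Time Warping can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: each movement sample is assumed here to be a 1-D CSI amplitude sequence, and the function and variable names are invented for the example.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = DTW distance between a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sample, profile):
    """Return the label of the profile sequence nearest in DTW distance."""
    return min(profile, key=lambda item: dtw_distance(sample, item[1]))[0]

# Tiny usage example with synthetic sequences (illustrative labels).
profile = [
    ("wave", [0, 1, 2, 1, 0, 1, 2, 1, 0]),
    ("push", [0, 0, 1, 2, 3, 3, 3, 2, 1]),
]
print(classify([0, 1, 2, 2, 1, 0, 1, 2, 0], profile))  # → wave
```

Because DTW compares a new sample directly against stored profile samples, it needs no model training, which is what makes it suitable when only a few samples are available.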
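The feature-separation idea of the second system — split a shared feature vector in half, attach a gesture head to one half and a location head to the other, and minimize the sum of both losses — can be sketched numerically. Everything below is an assumption for illustration: the random vector stands in for the LSTM extractor's output, and the linear softmax heads, dimensions, and names are not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label])

feature_dim, n_gestures, n_locations = 8, 4, 6
features = rng.normal(size=feature_dim)   # stand-in for LSTM output
W_gesture = rng.normal(size=(n_gestures, feature_dim // 2))
W_location = rng.normal(size=(n_locations, feature_dim // 2))

# Split: first half -> gesture head, second half -> location head.
half = feature_dim // 2
gesture_probs = softmax(W_gesture @ features[:half])
location_probs = softmax(W_location @ features[half:])

# Joint objective: minimizing the sum pushes the network to concentrate
# gesture information in one half and location information in the other.
total_loss = cross_entropy(gesture_probs, 0) + cross_entropy(location_probs, 0)
print(round(float(total_loss), 3))
```

At test time in a new location, only the gesture half of the features is needed, which is what makes the recognizer location independent.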

Keywords

Channel state information, Activity recognition, Multiple subjects, Location independent

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington
