ORCID Identifier(s)

0000-0002-2794-9115

Graduation Semester and Year

2019

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science

Department

Computer Science and Engineering

First Advisor

Fillia Makedon

Second Advisor

Vangelis Karkaletsis

Abstract

Artificial Intelligence has arguably been the most rapidly evolving field of science during the last decade. Its numerous real-life applications have radically altered the way we experience daily living, with great impact on some of the most basic aspects of human life, including but not limited to health and well-being, communication and interaction, education, driving, and entertainment. Human-Computer Interaction (HCI) is the field of Computer Science lying at the epicenter of this evolution, responsible for transforming rudimentary research findings and theoretical principles into intuitive tools that enhance human performance, increase productivity, and ensure safety. Two of the core questions that HCI research tries to address are a) what does the user want? and b) what can the user do? Multi-modal user monitoring has shown great potential towards answering these questions. Modeling and tracking different parameters of a user's behavior has provided groundbreaking solutions in several fields such as smart rehabilitation, smart driving, and workplace safety. Two of the dominant modalities extensively deployed in such systems are speech- and vision-based approaches, with a special focus on activity and emotion recognition. Despite the great amount of research done in these domains, there are numerous other implicit and explicit types of user feedback produced during an HCI scenario that are very informative but have attracted very limited research interest. This is usually due to the high levels of inherent noise that such signals tend to carry, or due to the highly invasive equipment required to capture this kind of information; these factors make most real-life applications nearly impossible to implement. This research investigates the potential of multi-modal user monitoring towards designing personalized scenarios and interactive interfaces, focusing on two research axes. First, we explore the advantages of reusing existing knowledge across different information domains, application areas, and individual users, in an effort to create predictive models that can extend their functionality across distinct HCI scenarios. Second, we try to enhance multi-modal interaction by accessing information that stems from more sophisticated and less explored sources, such as Electroencephalogram (EEG) and Electromyogram (EMG) analysis using minimally invasive sensors. We achieve this by designing a series of end-to-end experiments (from data collection to analysis and application) and by extensively evaluating various Machine Learning (ML) and Deep Learning (DL) approaches on their ability to model diverse signals of interaction. As an outcome of this in-depth investigation and experimentation, we propose CogBeacon, a multi-modal dataset and data-collection platform, to our knowledge the first of its kind, for predicting events of cognitive fatigue and understanding its impact on human performance.
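To make the kind of evaluation described above concrete, the minimal sketch below shows how a baseline ML classifier might be cross-validated on multi-modal features (for example, EEG band powers combined with task-performance statistics) to predict a binary fatigue label. This is an illustrative example only: the feature layout, labels, and model choice are assumptions for demonstration and do not reflect the CogBeacon schema or the methods used in the dissertation.

```python
# Illustrative sketch only: cross-validating a simple classifier on
# hypothetical multi-modal features for a binary fatigue / no-fatigue label.
# All shapes, feature meanings, and labels are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 analysis windows x 12 features
# (e.g., per-channel EEG band powers plus reaction-time statistics).
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)  # 0 = rested, 1 = fatigued (hypothetical labels)

# Standardize features, then fit a random-forest baseline.
model = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(n_estimators=200, random_state=0),
)

# 5-fold cross-validated accuracy as a first sanity check of the pipeline.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

In practice such a baseline would be compared against deep-learning models and evaluated per user and per task to study how well learned representations transfer across HCI scenarios.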

Keywords

User modeling and monitoring, Machine learning, Deep learning, Cognitive and behavioral modeling, HCI

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Comments

Degree granted by The University of Texas at Arlington

28131-2.zip (21066 kB)
