Document Type
Article
Abstract
As virtual reality (VR) offers an immersive experience beyond that of any existing multimedia technology, VR videos, also known as 360-degree videos, have attracted considerable attention from academia and industry. How to quantify and model end users' perceived quality when watching 360-degree videos, known as quality of experience (QoE), lies at the center of high-quality provisioning of these multimedia services. In this work, we present EyeQoE, a novel QoE assessment model for 360-degree videos using ocular behaviors. Unlike prior approaches, which mostly rely on objective factors, EyeQoE leverages the new ocular sensing modality to comprehensively capture both subjective and objective impact factors for QoE modeling. We propose a novel method that models eye-based cues as graphs and develop a graph convolutional network (GCN)-based classifier that produces QoE assessments by extracting intrinsic features from the graph-structured data. We further exploit a Siamese network to eliminate the impact of heterogeneity across subjects and visual stimuli. A domain adaptation scheme named MADA is also devised to generalize our model to a wide range of unseen 360-degree videos. Extensive tests are carried out on our collected dataset. Results show that EyeQoE achieves a prediction accuracy of 92.9%, outperforming state-of-the-art approaches. As another contribution of this work, we have made our dataset publicly available at https://github.com/MobiSec-CSE-UTA/EyeQoE_Dataset.git.
Publication Date
3-29-2022
Language
English
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Zhu, Huadi; Li, Tianhao; Wang, Chaowei; Jin, Wenqiang; Murali, Srinivasan; Xiao, Mingyan; Ye, Dongqing; and Li, Ming, "EyeQoE: A Novel QoE Assessment Model for 360-degree Videos Using Ocular Behaviors" (2022). Association for Computing Machinery Open Access Agreement Publications. 39.
https://mavmatrix.uta.edu/utalibraries_acmoapubs/39