Head Movement Prediction using FCNN

Rabia Shafi, Wan Shuai, Hao Gong, Muhammad Usman Younus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Viewport-adaptive streaming of 360-degree videos relies on accurate prediction of the viewport, yet users generally suffer significant quality degradation under long-delay settings. To address this issue, advanced methods for long-term viewport prediction are highly desired to improve prediction accuracy. To more accurately capture the non-linear relationship between future and past viewpoints, this paper proposes a Fully Connected Neural Network (FCNN) model for future prediction that is computationally lightweight. The input data, namely yaw values, pitch values, the Estimated Weighted Moving Average (EWMA) of the yaw values, and the EWMA of the pitch values, are transformed into sine and cosine components before being fed into the encoding layer of the FCNN model, with the roll angle assumed to be zero. With this transformed input, a long-term prediction length of up to 4 seconds is explored, so that the non-linear and long-term dependence between past and future viewport positions can be captured more accurately. Experimental results show that the proposed scheme performs well for large prediction windows.
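A minimal sketch of the pipeline described in the abstract is given below, assuming Python with NumPy and PyTorch: the yaw/pitch traces are smoothed with an EWMA, every channel is encoded as (sin, cos) pairs with roll fixed at zero, and a small fully connected network maps a window of past samples to the orientation at the prediction horizon. The smoothing factor, history length, layer widths, and all function and class names are illustrative assumptions, not values taken from the paper.

# Minimal sketch (NumPy + PyTorch) of the input pipeline and FCNN predictor
# described in the abstract. Hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def ewma(angles, alpha=0.3):
    # Recursive exponential smoothing of an angle trace (radians);
    # stands in for the paper's EWMA of yaw/pitch values.
    out = np.empty(len(angles), dtype=float)
    out[0] = angles[0]
    for t in range(1, len(angles)):
        out[t] = alpha * angles[t] + (1.0 - alpha) * out[t - 1]
    return out


def encode_features(yaw, pitch, alpha=0.3):
    # Stack [yaw, pitch, EWMA(yaw), EWMA(pitch)] and map each channel to
    # (sin, cos) pairs; roll is taken as zero and therefore omitted.
    channels = [yaw, pitch, ewma(yaw, alpha), ewma(pitch, alpha)]
    feats = []
    for c in channels:
        feats.append(np.sin(c))
        feats.append(np.cos(c))
    return np.stack(feats, axis=-1)           # shape: (T, 8)


class ViewportFCNN(nn.Module):
    # Fully connected network: a window of past encoded samples in,
    # (sin, cos) of yaw/pitch at the future prediction horizon out.
    def __init__(self, history=30, feat_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),             # sin/cos of predicted yaw, pitch
        )

    def forward(self, x):                     # x: (batch, history, feat_dim)
        return self.net(x.flatten(start_dim=1))


# Usage: 30 past head-orientation samples -> orientation at the chosen horizon.
yaw = np.random.uniform(-np.pi, np.pi, 30)
pitch = np.random.uniform(-np.pi / 2, np.pi / 2, 30)
x = torch.tensor(encode_features(yaw, pitch), dtype=torch.float32).unsqueeze(0)
pred = ViewportFCNN(history=30)(x)            # shape: (1, 4)
yaw_hat = torch.atan2(pred[0, 0], pred[0, 1])
pitch_hat = torch.atan2(pred[0, 2], pred[0, 3])

Encoding each angle as a (sin, cos) pair avoids the wrap-around discontinuity of yaw at ±180°, which is a common motivation for this kind of transformation before feeding angles to a neural network.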

Original language: English
Title of host publication: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1458-1464
Number of pages: 7
ISBN (Electronic): 9789881476890
Publication status: Published - 2021
Event: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021 - Tokyo, Japan
Duration: 14 Dec 2021 - 17 Dec 2021

Publication series

Name: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021 - Proceedings

Conference

Conference: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2021
Country/Territory: Japan
City: Tokyo
Period: 14/12/21 - 17/12/21
