To achieve optimal quality of experience for users, two fundamental challenges need to be tackled: first, how to accurately predict a user's future viewport based on their past motion traces; and second, how to adaptively select bitrates in both the spatial dimension (viewport areas receive higher bitrates than non-viewport areas) and the temporal dimension (bitrates should not change abruptly between adjacent time steps) under varying network conditions.
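The spatial/temporal trade-off described above can be sketched as a simple per-step scoring function. This is an illustrative example, not the project's actual objective: the function name, weights, and tile layout are all assumptions made for the sketch.

```python
# Hypothetical sketch of the spatial/temporal bitrate trade-off;
# weights and names are illustrative, not taken from the project.

def qoe_reward(tile_rates, in_viewport, prev_avg_rate,
               w_quality=1.0, w_smooth=0.5):
    """Score one time step of a tiled 360-degree stream.

    tile_rates   : list of bitrates (Mbps), one per tile
    in_viewport  : matching list of booleans (is the tile visible?)
    prev_avg_rate: average bitrate chosen at the previous time step
    """
    # Spatial dimension: only bitrate spent inside the viewport counts.
    viewport_quality = sum(r for r, v in zip(tile_rates, in_viewport) if v)
    # Temporal dimension: penalise large jumps between adjacent steps.
    avg_rate = sum(tile_rates) / len(tile_rates)
    smoothness_penalty = abs(avg_rate - prev_avg_rate)
    return w_quality * viewport_quality - w_smooth * smoothness_penalty

# Concentrating bits on the visible tiles scores higher than
# spreading the same total bitrate evenly across all tiles.
print(qoe_reward([4, 4, 1, 1], [True, True, False, False],
                 prev_avg_rate=2.5))  # → 8.0
```

A real adaptive-streaming controller would maximise a discounted sum of such rewards over time, subject to the measured network throughput.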
Xiaolan Jiang, a PhD student at the National Institute of Informatics (NII) in Tokyo, Japan, is currently visiting the Big Data Research Group at Vestlandsforsking, Norway, under the student mobility programme of the BDEM project. His institute is a formal partner in the BDEM project.
At NII, Xiaolan is working on 360-degree Virtual Reality (VR) video streaming under the supervision of Professor Yusheng Ji.
He designed a Long Short-Term Memory (LSTM)-based model to predict users' future viewport, and applied deep reinforcement learning to train a neural network that adaptively selects bitrates for 360-degree video chunks under dynamic network conditions.
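The viewport-prediction idea can be illustrated with a minimal LSTM sketch: feed a past head-orientation trace through a recurrent cell and map the final hidden state to the next orientation. This is not the author's actual model; the dimensions, random weights, and (yaw, pitch) encoding are assumptions for illustration, and a real model would be trained on recorded motion traces.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 2, 16   # (yaw, pitch) -> hidden state

# One stacked weight matrix for the four gates: input, forget, cell, output.
W = rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1
b = np.zeros(4 * hidden_dim)
W_out = rng.standard_normal((input_dim, hidden_dim)) * 0.1  # hidden -> (yaw, pitch)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM cell update for input x with state (h, c)."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def predict_next_viewport(trace):
    """Run a past motion trace through the LSTM and map the final
    hidden state to a predicted (yaw, pitch) for the next step."""
    h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
    for x in trace:
        h, c = lstm_step(np.asarray(x, dtype=float), h, c)
    return W_out @ h

trace = [(0.00, 0.10), (0.05, 0.12), (0.10, 0.15)]  # past head orientations
pred = predict_next_viewport(trace)
print(pred.shape)  # (2,): predicted yaw and pitch
```

In a streaming pipeline, the predicted orientation would then drive which tiles are treated as "viewport" when the bitrate controller allocates bandwidth.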
At Vestlandsforsking, Xiaolan is collaborating with Professor Akerkar and researchers Dr. Minsung Hong and Dr. Hoang Long Nguyen on the challenges of capturing and processing video streams from geographically distributed cameras.