
Showing papers by "Shaoshi Yang published in 2020"


Proceedings ArticleDOI
01 Jul 2020
TL;DR: A deep learning-based viewport Prediction scheme, namely HOP, is proposed, in which the Historical viewport trajectory of the viewers and Object tracking are jointly exploited by long short-term memory (LSTM) networks.
Abstract: Panoramic video is considered an attractive video format, since it provides viewers with an immersive experience, such as in virtual reality (VR) gaming. However, a viewer only focuses on a part of the panoramic video, which is referred to as the viewport. Hence, the resources consumed for distributing the remaining part of the panoramic video are wasted. It is therefore intuitive to deliver only the video data within this viewport in order to reduce the distribution cost. Empirically, the viewports within a time interval are highly correlated, hence the historical trajectory may be used to predict future viewports. On the other hand, a viewer tends to sustain attention on a specific object in a panoramic video. Motivated by these findings, we propose a deep learning-based viewport Prediction scheme, namely HOP, in which the Historical viewport trajectory of the viewers and Object tracking are jointly exploited by long short-term memory (LSTM) networks. Additionally, our solution is capable of predicting multiple future viewports, whereas the state-of-the-art contributions only support single-viewport prediction. Simulation results show that the proposed HOP scheme outperforms the benchmark schemes by up to 33.5% in terms of prediction error.
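To make the pipeline concrete, the following is a minimal sketch of such a predictor: an LSTM consumes the recent viewport-centre trajectory concatenated with tracked-object positions and emits several future viewport centres in one shot. The feature layout, window lengths and network size below are illustrative assumptions, not the authors' HOP implementation.

```python
# Illustrative sketch (not the authors' code): an LSTM that maps the recent
# viewport trajectory plus tracked-object positions to several future viewports.
import torch
import torch.nn as nn

class ViewportLSTM(nn.Module):
    def __init__(self, hist_len=10, pred_len=5, hidden=128):
        super().__init__()
        # Per-frame feature: viewport centre (yaw, pitch) + tracked-object centre (yaw, pitch).
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        # A single linear head emits all future viewport centres jointly.
        self.head = nn.Linear(hidden, pred_len * 2)
        self.pred_len = pred_len

    def forward(self, viewports, objects):
        # viewports, objects: (batch, hist_len, 2) angles in radians.
        x = torch.cat([viewports, objects], dim=-1)   # (batch, hist_len, 4)
        _, (h, _) = self.lstm(x)                      # h: (1, batch, hidden)
        out = self.head(h[-1])                        # (batch, pred_len * 2)
        return out.view(-1, self.pred_len, 2)         # future (yaw, pitch) per step

# Example: predict 5 future viewports from 10 past frames.
model = ViewportLSTM()
past_vp = torch.randn(1, 10, 2)
past_obj = torch.randn(1, 10, 2)
future_vp = model(past_vp, past_obj)                  # shape (1, 5, 2)
```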

5 citations


Journal ArticleDOI
TL;DR: A predictive fallback mechanism for Voice over Internet Protocol (VoIP) calls is described, wherein critical channel conditions are predicted so as to anticipate the fallback to, e.g., a traditional voice call, thus ensuring service continuity for the end user.
Abstract: 5G cellular networks are characterized by a service-based architecture (SBA) in which physical and virtual network functions (NFs) interact with each other. In conjunction with multi-access edge computing (MEC), 5G systems are expected to enable a wide range of advanced applications for vertical industries as well as over-the-top (OTT) service providers. Although MEC typically processes user-plane data, in this article we exploit it to process control-plane data via the 5G network exposure function (NEF), enabling new context-aware applications. Based on cell-specific radio access network (RAN) signaling, we envision a machine learning (ML) solution that learns the user-context evolution, where the ML engine runs on an MEC host and its prediction is used to change the network setup for a given application. As an example, to address the challenging, fast-changing vehicular channel, we describe a predictive fallback mechanism for Voice over Internet Protocol (VoIP) calls, wherein critical channel conditions are predicted so as to anticipate the fallback to, e.g., a traditional voice call, thus ensuring service continuity for the end user.
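As a rough illustration of this fallback logic, the sketch below assumes a MEC-hosted predictor fed with cell-level RAN measurements; the type and field names (RanSample, rsrp_dbm, the SINR threshold) are hypothetical, and a trained ML model would replace the linear extrapolation used here.

```python
# Illustrative sketch (assumed interfaces): a MEC-hosted predictor consumes
# cell-level RAN measurements exposed via the NEF and decides whether an
# ongoing VoIP call should fall back to a traditional voice call.
from dataclasses import dataclass

@dataclass
class RanSample:
    rsrp_dbm: float      # reference signal received power
    sinr_db: float       # signal-to-interference-plus-noise ratio
    speed_kmh: float     # UE speed estimated from RAN signaling

def predict_critical(history: list[RanSample], horizon_s: float = 2.0) -> bool:
    """Toy predictor: extrapolate the SINR trend and flag a fallback if the
    predicted SINR drops below the level at which VoIP quality collapses.
    A trained ML model would replace this linear extrapolation."""
    if len(history) < 2:
        return False
    slope = (history[-1].sinr_db - history[0].sinr_db) / max(len(history) - 1, 1)
    predicted_sinr = history[-1].sinr_db + slope * horizon_s
    return predicted_sinr < 0.0   # assumed VoIP outage threshold

def maybe_trigger_fallback(history: list[RanSample]) -> str:
    # In a real deployment this decision would be delivered to the application
    # (or to the network) through the NEF northbound interface.
    return "fallback_to_voice" if predict_critical(history) else "keep_voip"
```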

2 citations


Journal ArticleDOI
TL;DR: In this paper, joint beam selection at both ends of the communication link is investigated, together with the strong interference imposed by mobile stations (MSs) that share beams with other MSs.
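The abstract is not reproduced here, but the problem can be illustrated loosely: the sketch below shows a generic greedy beam selection that avoids assigning a base-station beam to more than one MS, one naive way to suppress the beam-sharing interference mentioned above; it is not the algorithm proposed in the paper, and the gain matrix is an assumed input.

```python
# Illustrative sketch (not the paper's algorithm): greedy downlink beam selection
# that gives each mobile station (MS) its strongest still-unused beam, so that
# no two MSs share a beam and the associated interference is avoided.
import numpy as np

def greedy_beam_selection(gain: np.ndarray) -> dict[int, int]:
    """gain[m, b]: beamspace channel gain of MS m on base-station beam b."""
    assignment: dict[int, int] = {}
    taken: set[int] = set()
    # Serve the MSs in descending order of their best achievable gain.
    for m in np.argsort(-gain.max(axis=1)):
        for b in np.argsort(-gain[m]):       # strongest beams first
            if b not in taken:               # skip beams already assigned
                assignment[int(m)] = int(b)
                taken.add(int(b))
                break
    return assignment

# Example: 4 MSs, 8 candidate beams, random beamspace gains.
rng = np.random.default_rng(0)
print(greedy_beam_selection(rng.rayleigh(size=(4, 8))))
```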

1 citation


Proceedings ArticleDOI
01 Nov 2020
TL;DR: In this paper, the authors propose a novel channel-quality reporting approach for cellular communication systems, featuring a differential coding scheme for stationary propagation conditions and a detector of non-stationary propagation conditions, the latter triggering a machine learning-based channel-quality predictor for the non-stationary environment.
Abstract: In this paper, we propose a novel channel quality reporting approach for cellular communication systems. The proposed approach features a differential coding scheme for stationary propagation conditions and a detector of non-stationary propagation conditions, which in turn triggers a machine learning-based channel-quality predictor for the non-stationary environment. In particular, the machine learning engine learns the characteristic large variations of the channel quality by collecting signaling information from mobile terminals in a given region. Our simulations in a controlled urban environment with vehicular users show that the proposed solution can effectively replace the 4-bit channel-quality reporting scheme of the LTE and NR standards with a 2-bit one, providing correct channel-quality indication in non-stationary conditions with high probability.
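To fix ideas, here is a minimal sketch of such a reporting loop, assuming a specific 2-bit differential alphabet and a simple variance-based non-stationarity test; the paper's actual codebook, detector and ML predictor are not detailed in the abstract.

```python
# Illustrative sketch (assumed design): a 2-bit differential CQI report for
# stationary conditions, plus a toy variance test that flags non-stationary
# conditions, where a learned predictor would take over.
import numpy as np

# Assumed 2-bit differential alphabet: the step applied to the previously
# reported CQI value.
DELTA = {0b00: 0, 0b01: +1, 0b10: -1, 0b11: +2}

def encode_diff(cqi: int, prev_reported: int) -> int:
    """Pick the 2-bit codeword whose step best matches the true CQI change."""
    target = cqi - prev_reported
    return min(DELTA, key=lambda code: abs(DELTA[code] - target))

def decode_diff(code: int, prev_reported: int) -> int:
    return int(np.clip(prev_reported + DELTA[code], 0, 15))  # 4-bit CQI range

def non_stationary(recent_cqi: list[int], threshold: float = 2.0) -> bool:
    """Toy detector: large short-term CQI variance suggests a non-stationary
    channel, triggering the ML-based predictor instead of differential coding."""
    return len(recent_cqi) >= 4 and float(np.var(recent_cqi[-4:])) > threshold

# Example: track a CQI trace with the 2-bit scheme.
trace = [9, 9, 10, 11, 11, 7, 5, 4]
reported = trace[0]
history = [trace[0]]
for cqi in trace[1:]:
    history.append(cqi)
    if non_stationary(history):
        pass  # hand over to the learned channel-quality predictor (not shown)
    code = encode_diff(cqi, reported)
    reported = decode_diff(code, reported)
    print(f"true CQI {cqi:2d} -> reported {reported:2d} via code {code:02b}")
```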