scispace - formally typeset
Institution

Huawei

Company, Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for research contributions in the topics of Terminal (electronics) and Signal. The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.


Papers
Journal ArticleDOI
TL;DR: This article reviews recent advanced CB schemes as alternatives that require less overhead than JT CoMP while achieving good performance in realistic conditions, and assesses the resilience of state-of-the-art CB to uncoordinated interference.
Abstract: Modern cellular networks in traditional frequency bands are notoriously interference-limited, especially in urban areas, where base stations are deployed in close proximity to one another. The latest releases of LTE incorporate features for coordinating downlink transmissions as an efficient means of managing interference. Recent field trial results and theoretical studies of the performance of JT CoMP schemes revealed, however, that their gains are not as high as initially expected, despite the large coordination overhead. These schemes are known to be very sensitive to defects in synchronization or information exchange between coordinating base stations, as well as to uncoordinated interference. In this article, we review recent advanced CB schemes as alternatives that require less overhead than JT CoMP while achieving good performance in realistic conditions. Observing that, in certain LTE scenarios of increasing interest, uncoordinated interference constitutes a major factor in the performance of CoMP techniques at large, we assess the resilience of state-of-the-art CB to uncoordinated interference. We also describe how these techniques can leverage the latest specifications of current cellular networks, and how they may perform when we consider standardized feedback and coordination. This allows us to identify some key roadblocks and research directions to address as LTE evolves toward the future of mobile communications.
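The coordinated-beamforming idea the abstract describes can be illustrated with a minimal zero-forcing sketch: the serving cell steers toward its own user while placing a spatial null toward a user scheduled by a coordinated neighbouring cell. The channels, antenna count, and function names below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = 4  # transmit antennas at the base station (illustrative)

# Hypothetical flat-fading channels: h_own to the served user,
# h_neigh to a user served by a coordinated neighbouring cell.
h_own = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
h_neigh = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)

def zf_beamformer(h_serve, h_null):
    """Unit-norm beamformer serving h_serve while placing a spatial
    null toward h_null (one simple coordinated-beamforming strategy)."""
    # Project h_serve onto the orthogonal complement of h_null.
    p = h_serve - (np.vdot(h_null, h_serve) / np.vdot(h_null, h_null)) * h_null
    return p / np.linalg.norm(p)

w = zf_beamformer(h_own, h_neigh)
gain = abs(np.vdot(h_own, w))       # useful signal gain at the served user
leakage = abs(np.vdot(h_neigh, w))  # interference caused at the neighbour's user
```

Only the channel of the coordinated cell's user needs to be exchanged, which hints at why CB requires far less overhead than JT CoMP, where user data itself must be shared between base stations.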

69 citations

Proceedings ArticleDOI
Ze Cui, Jing Wang, Shangyin Gao, Tiansheng Guo, Yihui Feng, Bo Bai
20 Jun 2021
TL;DR: In this article, the authors propose a continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE), which utilizes a pair of gain units to achieve discrete rate adaptation in a single model with negligible additional computation.
Abstract: With the development of deep learning techniques, the combination of deep learning with image compression has drawn lots of attention. Recently, learned image compression methods have exceeded their classical counterparts in terms of rate-distortion performance. However, continuous rate adaptation remains an open question. Some learned image compression methods use multiple networks for multiple rates, while others use a single model at the expense of increased computational complexity and degraded performance. In this paper, we propose a continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE). AG-VAE utilizes a pair of gain units to achieve discrete rate adaptation in a single model with negligible additional computation. Then, by using exponential interpolation, continuous rate adaptation is achieved without compromising performance. In addition, we propose the asymmetric Gaussian entropy model for more accurate entropy estimation. Exhaustive experiments show that our method achieves quantitative performance comparable to SOTA learned image compression methods and better qualitative performance than classical image codecs. In the ablation study, we confirm the usefulness and superiority of the gain units and the asymmetric Gaussian entropy model.
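The exponential-interpolation step can be sketched in a few lines: each trained discrete rate point has a gain vector that scales the latent channels before quantization, and exponentially interpolating between two such vectors yields intermediate rates. The gain values and channel count below are invented for illustration; real AG-VAE gains are learned per channel.

```python
import numpy as np

# Hypothetical trained gain vectors for two adjacent discrete rate points
# (in AG-VAE these scale the latent channels before quantization).
C = 8                          # number of latent channels (illustrative)
gain_lo = np.full(C, 0.5)      # low-rate gain vector (assumed values)
gain_hi = np.full(C, 2.0)      # high-rate gain vector (assumed values)

def interp_gain(g1, g2, l):
    """Exponential interpolation between two gain vectors, l in [0, 1]:
    g1**l * g2**(1-l) sweeps continuously from g2 (l=0) to g1 (l=1)."""
    return g1 ** l * g2 ** (1.0 - l)

latent = np.ones(C)            # stand-in for an encoder output
for l in (0.0, 0.5, 1.0):
    g = interp_gain(gain_lo, gain_hi, l)
    scaled = np.round(latent * g)   # apply gain, then scalar quantization
```

Because the interpolation is elementwise and multiplicative, a single trained model exposes a continuum of rate points at essentially no extra cost, which is the property the abstract emphasizes.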

69 citations

Proceedings ArticleDOI
20 Jun 2021
TL;DR: This paper proposes ManiDP, a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks, aligning the manifold relationship between instances and the pruned sub-networks during training.
Abstract: Neural network pruning is an essential approach for reducing the computational complexity of deep models so that they can be deployed on resource-limited devices. Compared with conventional methods, the recently developed dynamic pruning methods determine the redundant filters for each input instance individually, which achieves higher acceleration. Most of the existing methods discover effective sub-networks for each instance independently and do not utilize the relationship between different inputs. To maximally excavate redundancy in the given network architecture, this paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks (dubbed ManiDP). We first investigate the recognition complexity and feature similarity between images in the training set. Then, the manifold relationship between instances and the pruned sub-networks is aligned in the training procedure. The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost compared to the state-of-the-art methods. For example, our method can reduce the FLOPs of ResNet-34 by 55.3% with only 0.57% top-1 accuracy degradation on ImageNet. The code will be available at https://github.com/huawei-noah/Pruning/tree/master/ManiDP.
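The core mechanic of dynamic pruning, choosing which filters to drop per input rather than once for all inputs, can be sketched with a simple activation-magnitude criterion. This is a simplified stand-in for ManiDP's learned, manifold-regularized decisions; the saliency score, keep ratio, and tensor shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamic_channel_mask(feature_map, keep_ratio=0.5):
    """Instance-dependent filter pruning: score each channel by the mean
    magnitude of its activations for THIS input, and keep only the top
    fraction. Different inputs therefore get different sub-networks."""
    c = feature_map.shape[0]
    scores = np.abs(feature_map).mean(axis=(1, 2))   # per-channel saliency
    k = max(1, int(round(keep_ratio * c)))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(c, dtype=bool)
    mask[keep] = True
    return mask

x = rng.standard_normal((16, 8, 8))    # one instance's C x H x W features
mask = dynamic_channel_mask(x, keep_ratio=0.25)
pruned = x * mask[:, None, None]       # zero out the dropped channels
```

In an actual implementation the masked convolutions are skipped rather than zeroed, which is where the FLOP reduction comes from; ManiDP additionally regularizes these per-instance decisions so that similar inputs receive similar sub-networks.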

69 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigate the energy efficiency of optical OFDM-based networks and propose a mixed integer linear programming model to minimize the total power consumption of rate- and modulation-adaptive OFDM networks.
Abstract: Orthogonal frequency-division multiplexing (OFDM) has been proposed as an enabling technique for elastic optical networks to support heterogeneous traffic demands by enabling rate- and modulation-adaptive bandwidth allocation. The authors investigate the energy efficiency of optical OFDM-based networks. A mixed integer linear programming model is developed to minimise the total power consumption of rate- and modulation-adaptive optical OFDM networks. Considering symmetric traffic, the results show that optical OFDM-based networks can save up to 31% of the total network power consumption compared to conventional Internet protocol over wavelength division multiplexing (WDM) networks. Considering the power consumption of the optical layer alone, the optical OFDM-based network saves up to 55%. The results also show that under an asymmetric traffic scenario, where more traffic is destined to or originates from popular nodes, for example data centres, the power savings achieved by the optical OFDM-based networks are limited, as the higher traffic demands to and from data centres reduce the bandwidth wastage associated with conventional WDM networks. Furthermore, the achievable power savings through data compression have been investigated for an optical OFDM-based network.
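The flavour of the optimisation can be conveyed with a toy brute-force stand-in for the paper's MILP: for each demand, pick a modulation format so that the total power over all allocated subcarriers is minimal. All numbers (bits per symbol, per-subcarrier power, subcarrier capacity) are invented for illustration and do not come from the article.

```python
from itertools import product

# Illustrative numbers only: bits per symbol and relative transponder
# power per subcarrier for each modulation format.
MODS = {"BPSK": (1, 1.0), "QPSK": (2, 1.3), "16QAM": (4, 2.0)}
SLOT_GBPS = 12.5   # assumed capacity of one subcarrier at 1 bit/symbol

def subcarriers(demand_gbps, bits_per_symbol):
    """Subcarriers needed to carry a demand at the given modulation."""
    cap = SLOT_GBPS * bits_per_symbol
    return -(-demand_gbps // cap)  # ceiling division

def min_power(demands):
    """Brute-force stand-in for the paper's MILP: choose one modulation
    per demand so that total power over all subcarriers is minimal."""
    best = None
    for choice in product(MODS, repeat=len(demands)):
        p = sum(subcarriers(d, MODS[m][0]) * MODS[m][1]
                for d, m in zip(demands, choice))
        if best is None or p < best[0]:
            best = (p, choice)
    return best

power, assignment = min_power([40, 100])  # two demands in Gb/s
```

The real model additionally decides routing and handles the IP-over-WDM comparison, which is why it needs integer programming rather than per-demand enumeration; the sketch only shows the rate-and-modulation-adaptation trade-off.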

69 citations

Proceedings ArticleDOI
19 Oct 2017
TL;DR: A generic theoretical model is proposed to find the optimal set of quality-variable video versions based on traces of the head positions of users watching a 360-degree video; a simplified version of the model, with two quality levels and restricted shapes for the QER, is solved.
Abstract: With the decreasing price of Head-Mounted Displays (HMDs), 360-degree videos are becoming popular. Streaming such videos through the Internet with state-of-the-art streaming architectures requires, to provide a high feeling of immersion, much more bandwidth than the median user's access bandwidth. To decrease bandwidth consumption while providing high immersion to users, scientists and specialists proposed to prepare and encode 360-degree videos into quality-variable video versions and to implement viewport-adaptive streaming. Quality-variable versions are different versions of the same video with non-uniformly spread quality: there exist so-called Quality Emphasized Regions (QERs). With viewport-adaptive streaming, the client, based on head-movement prediction, downloads the video version whose high-quality region is closest to where the user will watch. In this paper we propose a generic theoretical model to find the optimal set of quality-variable video versions based on traces of the head positions of users watching a 360-degree video. We propose extensions to adapt the model to popular quality-variable version implementations such as tiling and offset projection. We then solve a simplified version of the model with two quality levels and restricted shapes for the QER. With this simplified model, we show that an optimal set of four quality-variable video versions prepared by a streaming server, together with perfect head-movement prediction, allows 45% bandwidth savings while displaying video at the same average quality as state-of-the-art solutions, or a 102% increase in displayed quality for the same bandwidth budget.
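The client-side decision the abstract describes can be sketched directly: given a predicted viewing direction, pick the version whose QER centre is closest, then measure whether the actual viewport landed inside the high-quality region. The four QER centres echo the paper's four-version finding, but the quality levels and QER width are invented for illustration.

```python
# Hypothetical quality-variable versions: each has one QER centred at a
# yaw angle (degrees) where quality is high; elsewhere quality is low.
VERSIONS = [0.0, 90.0, 180.0, 270.0]   # four QER centres (illustrative)
Q_HIGH, Q_LOW = 1.0, 0.4               # assumed quality levels

def angular_dist(a, b):
    """Shortest angular distance between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def pick_version(predicted_yaw):
    """Viewport-adaptive choice: download the version whose QER centre
    is closest to the predicted head position."""
    return min(VERSIONS, key=lambda c: angular_dist(c, predicted_yaw))

def displayed_quality(actual_yaw, version_centre, qer_halfwidth=60.0):
    """High quality if the actual viewport falls inside the QER."""
    inside = angular_dist(actual_yaw, version_centre) <= qer_halfwidth
    return Q_HIGH if inside else Q_LOW
```

Averaging `displayed_quality` over a trace of actual head positions is, in spirit, the objective the paper's model optimizes when it selects the set of versions and QER shapes.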

69 citations


Authors

Showing all 41483 results

Name | H-index | Papers | Citations
Yu Huang | 136 | 1492 | 89209
Xiaoou Tang | 132 | 553 | 94555
Xiaogang Wang | 128 | 452 | 73740
Shaobin Wang | 126 | 872 | 52463
Qiang Yang | 112 | 1117 | 71540
Wei Lu | 111 | 1973 | 61911
Xuemin Shen | 106 | 1221 | 44959
Li Chen | 105 | 1732 | 55996
Lajos Hanzo | 101 | 2040 | 54380
Luca Benini | 101 | 1453 | 47862
Lei Liu | 98 | 2041 | 51163
Tao Wang | 97 | 2720 | 55280
Mohamed-Slim Alouini | 96 | 1788 | 62290
Qi Tian | 96 | 1030 | 41010
Merouane Debbah | 96 | 652 | 41140
Network Information
Related Institutions (5)
Alcatel-Lucent: 53.3K papers, 1.4M citations (90% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (87% related)
Microsoft: 86.9K papers, 4.1M citations (87% related)
Intel: 68.8K papers, 1.6M citations (87% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 19
2022 | 66
2021 | 2,069
2020 | 3,277
2019 | 4,570
2018 | 4,476