Institution
Huawei
Company • Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for its research contributions on the topics of Terminal (electronics) and Signal. The organization has 41417 authors who have published 44698 publications receiving 343496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the authors describe two methods that implement this strategy to optimize wireless communication networks and provide numerical results to assess the performance of the proposed approaches compared with purely data-driven implementations.
Abstract: Deep learning based on artificial neural networks (ANNs) is a powerful machine-learning method that, in recent years, has been successfully used to realize tasks such as image classification, speech recognition, and language translation, among others, that are usually simple for human beings but extremely difficult for machines. This is one reason deep learning is considered one of the main enablers for realizing artificial intelligence (AI). The current methodology in deep learning consists of employing a data-driven approach to identify the best architecture of an ANN that allows input-output data pairs to be fitted. Once the ANN is trained, it is capable of responding to never-observed inputs by providing the optimum output based on past acquired knowledge. In this context, a recent trend in the deep-learning community complements purely data-driven approaches with prior information based on expert knowledge. In this article, we describe two methods that implement this strategy to optimize wireless communication networks. In addition, we provide numerical results to assess the performance of the proposed approaches compared with purely data-driven implementations.
155 citations
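The general idea in the abstract above, complementing a purely data-driven fit with an expert-knowledge prior, can be sketched on a toy regression; the task, the prior value, and all names below are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Hypothetical sketch: fit a slope from data while regularizing toward a
# value suggested by an expert model. The prior term plays the role of the
# "prior information based on expert knowledge" described in the article.

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 100)
y = 3.0 * x + rng.normal(0.0, 0.5, 100)   # unknown true slope is 3.0

a_prior = 2.5        # slope suggested by the (hypothetical) expert model
lam = 0.1            # weight of the prior term

a = 0.0              # trainable parameter
lr = 1e-3
for _ in range(2000):
    # gradient of  mean((a*x - y)^2) + lam * (a - a_prior)^2
    grad = 2.0 * np.mean((a * x - y) * x) + 2.0 * lam * (a - a_prior)
    a -= lr * grad
# a now sits near the data-optimal slope, gently pulled toward a_prior
```

With a small `lam` the data dominates; increasing `lam` trades data fidelity for agreement with the expert model, which is the knob such hybrid approaches expose.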
02 Aug 2019
TL;DR: IndexNet, as discussed by the authors, is an index-guided encoder-decoder framework where indices are self-learned adaptively from data and used to guide the pooling and upsampling operators without extra training supervision.
Abstract: We show that existing upsampling operators can be unified using the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting where indices-guided unpooling can often recover boundary details considerably better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of 'learning to index', and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the pooling and upsampling operators, without extra training supervision. At the core of this framework is a flexible network module, termed IndexNet, which dynamically generates indices conditioned on the feature map. Due to its flexibility, IndexNet can be used as a plug-in applying to almost all off-the-shelf convolutional networks that have coupled downsampling and upsampling stages. We demonstrate the effectiveness of IndexNet on the task of natural image matting where the quality of learned indices can be visually observed from predicted alpha mattes. Results on the Composition-1k matting dataset show that our model built on MobileNetv2 exhibits at least 16.1% improvement over the seminal VGG-16 based deep matting baseline, with less training data and lower model capacity. Code and models have been made available at: https://tinyurl.com/IndexNetV1.
155 citations
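The indices-guided unpooling that motivates IndexNet can be sketched in a few lines; this is a hand-rolled max-pool/unpool pair for illustration, not the learned index function from the paper:

```python
import numpy as np

# Sketch of indices-guided unpooling: a 2x2 max pool records where each
# maximum came from, and the decoder uses those indices to put values back
# in place, which preserves boundaries better than plain bilinear/nearest
# upsampling. Shapes and names are illustrative.

def max_pool_with_indices(x):
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)   # flat index into each 2x2 window
    for i in range(h // 2):
        for j in range(w // 2):
            win = x[2*i:2*i+2, 2*j:2*j+2]
            idx[i, j] = int(np.argmax(win))
            pooled[i, j] = win.flat[idx[i, j]]
    return pooled, idx

def index_unpool(pooled, idx):
    h, w = pooled.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            di, dj = divmod(idx[i, j], 2)
            out[2*i + di, 2*j + dj] = pooled[i, j]   # value returns to its origin
    return out

x = np.array([[1., 9., 2., 3.],
              [4., 5., 8., 6.],
              [7., 0., 1., 2.],
              [3., 6., 5., 4.]])
pooled, idx = max_pool_with_indices(x)
restored = index_unpool(pooled, idx)
```

IndexNet generalizes this by making the index map itself a small network conditioned on the feature map, so the "where to place values" decision is learned rather than fixed by argmax.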
14 Jun 2020
TL;DR: This paper develops a special back-propagation approach for AdderNets by investigating the full-precision gradient, and proposes an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient.
Abstract: Compared with the cheap addition operation, multiplication is of much higher computational complexity. The widely used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values. In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the L1-norm distance between filters and the input feature as the output response. The influence of this new similarity measure on the optimization of the neural network has been thoroughly analyzed. To achieve a better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets can achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplication in the convolutional layers. The code is publicly available at: https://github.com/huaweinoah/AdderNet.
155 citations
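The core substitution described above, replacing the cross-correlation sum(x * w) with the negated L1 distance -sum(|x - w|), can be sketched for a single filter; this toy single-channel version is for illustration and is not the authors' implementation:

```python
import numpy as np

# Toy sketch of the AdderNet similarity measure: the response at each
# position is the negated L1 distance between the filter and the input
# patch, computed with only additions, subtractions, and absolute values.

def adder_conv2d(x, w):
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i+kh, j:j+kw]
            out[i, j] = -np.abs(patch - w).sum()   # multiplication-free response
    return out

w = np.ones((3, 3))
x = np.ones((5, 5))
x[4, 4] = 0.0                 # one pixel disagrees with the filter
resp = adder_conv2d(x, w)     # 0 where patch matches w exactly, negative otherwise
```

A perfect match gives the maximum response 0, and every mismatching pixel subtracts its absolute difference, so larger (less negative) outputs still mean higher similarity, mirroring what cross-correlation provides.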
14 Jun 2020
TL;DR: The authors propose the Adaptive Bezier-Curve Network (ABCNet), which adaptively fits oriented or curved text with a parameterized Bezier curve.
Abstract: Scene text detection and recognition have received increasing research attention. Existing methods can be roughly categorized into two groups: character-based and segmentation-based. These methods either are costly for character annotation or need to maintain a complex pipeline, which is often not suitable for real-time applications. Here we address the problem by proposing the Adaptive Bezier-Curve Network (ABCNet). Our contributions are three-fold: 1) For the first time, we adaptively fit oriented or curved text with a parameterized Bezier curve. 2) We design a novel BezierAlign layer for extracting accurate convolutional features of a text instance with arbitrary shapes, significantly improving the precision compared with previous methods. 3) Compared with standard bounding box detection, our Bezier curve detection introduces negligible computation overhead, resulting in the superiority of our method in both efficiency and accuracy. Experiments on oriented or curved benchmark datasets, namely Total-Text and CTW1500, demonstrate that ABCNet achieves state-of-the-art accuracy while significantly improving the speed. In particular, on Total-Text, our real-time version is over 10 times faster than recent state-of-the-art methods with a competitive recognition accuracy. Code is available at https://git.io/AdelaiDet.
154 citations
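The parameterization behind the abstract above is an ordinary cubic Bezier curve evaluated with the Bernstein basis; the control points below are invented for illustration, and in the paper each curved text boundary is represented by a pair of such curves:

```python
import numpy as np

# Sketch of cubic Bezier evaluation: 4 control points, Bernstein basis.
# This illustrates the curve model only, not the detection network.

def cubic_bezier(ctrl, t):
    """ctrl: (4, 2) control points; t: (n,) parameters in [0, 1]."""
    t = np.asarray(t)[:, None]
    basis = np.hstack([(1 - t) ** 3,
                       3 * t * (1 - t) ** 2,
                       3 * t ** 2 * (1 - t),
                       t ** 3])            # Bernstein coefficients B_{i,3}(t)
    return basis @ ctrl                    # (n, 2) points on the curve

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
pts = cubic_bezier(ctrl, np.linspace(0.0, 1.0, 5))   # 5 samples along the curve
```

Because the curve is fully determined by 8 numbers (4 points in 2D), predicting a curved text boundary reduces to regressing a small fixed-size vector, which is why the overhead over box detection is negligible.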
23 Feb 2017
TL;DR: In this paper, the authors present a data transmission method, user equipment, and a base station, so as to resolve conflicts in a data transmission process where different processing-delay scenarios coexist; the control message is used to determine an RTT length corresponding to data transmitted between the user equipment and the base station.
Abstract: Embodiments of the present disclosure disclose a data transmission method, user equipment, and a base station, so as to resolve a conflict problem in a data transmission process with coexistence of different processing delay scenarios. The method in the embodiments of the present disclosure includes: receiving, by user equipment, a control message sent by a base station, where the control message is used to determine an RTT length corresponding to data transmitted between the user equipment and the base station; determining, by the user equipment according to the control message, the round-trip time RTT length corresponding to the data transmitted between the user equipment and the base station; and performing, by the user equipment, transmission of the data with the base station according to the RTT length.
153 citations
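The signaling flow in the patent abstract above can be sketched as a lookup on the user-equipment side; the field names and the RTT table below are invented for illustration and are not defined in the patent:

```python
# Toy sketch of the UE-side logic: the base station's control message
# selects one of several round-trip-time (RTT) configurations, and the UE
# schedules its next transmission accordingly. All values are hypothetical.

RTT_TABLE_MS = {0: 8, 1: 4, 2: 2}   # index -> RTT length in ms (assumed)

def rtt_from_control_message(msg):
    """UE side: determine the RTT length signaled by the control message."""
    return RTT_TABLE_MS[msg["rtt_index"]]

def next_tx_slot(msg, tx_slot):
    """Earliest slot at which the UE transmits again, given the signaled RTT."""
    return tx_slot + rtt_from_control_message(msg)

msg = {"rtt_index": 1}              # control message received from the base station
slot = next_tx_slot(msg, tx_slot=10)
```

Making the RTT explicit in the control message is what lets short-delay and long-delay configurations coexist without the UE and base station disagreeing about transmission timing.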
Authors
Showing all 41483 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yu Huang | 136 | 1492 | 89209 |
Xiaoou Tang | 132 | 553 | 94555 |
Xiaogang Wang | 128 | 452 | 73740 |
Shaobin Wang | 126 | 872 | 52463 |
Qiang Yang | 112 | 1117 | 71540 |
Wei Lu | 111 | 1973 | 61911 |
Xuemin Shen | 106 | 1221 | 44959 |
Li Chen | 105 | 1732 | 55996 |
Lajos Hanzo | 101 | 2040 | 54380 |
Luca Benini | 101 | 1453 | 47862 |
Lei Liu | 98 | 2041 | 51163 |
Tao Wang | 97 | 2720 | 55280 |
Mohamed-Slim Alouini | 96 | 1788 | 62290 |
Qi Tian | 96 | 1030 | 41010 |
Merouane Debbah | 96 | 652 | 41140 |