Institution

Huawei

Company · Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for research contributions in the topics Terminal (electronics) and Node (networking). The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. It is also known as Huawei Technologies and Huawei Technologies Co., Ltd.


Papers
Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes Sequential Grouping Networks (SGN), a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity, to gradually compose objects out of pixels and thereby tackle object instance segmentation.
Abstract: In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGN employs a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. In particular, the first network aims to group pixels along each image row and column by predicting horizontal and vertical object breakpoints. These breakpoints are then used to create line segments. By exploiting two-directional information, the second network groups horizontal and vertical lines into connected components. Finally, the third network groups the connected components into object instances. Our experiments show that our SGN significantly outperforms state-of-the-art approaches on both the Cityscapes dataset and PASCAL VOC.

302 citations
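To make the first grouping stage concrete, here is a minimal sketch, assuming an (H, W) map of predicted horizontal breakpoint probabilities, of how per-row breakpoints can be turned into line segments. The function name, threshold, and toy input below are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of SGN's first stage: splitting each image row into line
# segments at predicted horizontal breakpoints. Names and threshold are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def row_segments_from_breakpoints(breakpoint_probs, threshold=0.5):
    """breakpoint_probs: (H, W) array of predicted horizontal breakpoint
    probabilities. Returns a list of (row, start_col, end_col) segments."""
    segments = []
    H, W = breakpoint_probs.shape
    for y in range(H):
        is_break = breakpoint_probs[y] > threshold
        start = 0
        for x in range(W):
            if is_break[x]:                      # a breakpoint closes the current segment
                if x > start:
                    segments.append((y, start, x))
                start = x + 1
        if start < W:                            # trailing segment up to the image border
            segments.append((y, start, W))
    return segments

# Toy example: one image row with a predicted breakpoint at column 3
probs = np.zeros((1, 6))
probs[0, 3] = 0.9
print(row_segments_from_breakpoints(probs))      # [(0, 0, 3), (0, 4, 6)]
```

The later networks would then take such horizontal and vertical segments and group them into connected components and, finally, object instances.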

Journal ArticleDOI
03 Apr 2020
TL;DR: This paper presents adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to source and target domains.
Abstract: Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain-invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake with the guidance of a hard label, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is conducted jointly at the pixel and feature levels to improve the robustness of the models. Extensive experiments show that the proposed approach achieves superior performance on tasks with various degrees of domain shift and data complexity.

297 citations
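The central mixup step can be illustrated with a short sketch; the tensor shapes, the Beta mixing distribution, and the function name below are assumptions for illustration, not the paper's implementation.

```python
# A rough sketch of pixel-level domain mixup with a soft domain label for the
# discriminator; shapes and the Beta parameter are illustrative assumptions.
import torch

def domain_mixup(x_src, x_tgt, alpha=2.0):
    """Mix a batch of source and target images and return the mixed batch
    together with its soft domain label (1 = source, 0 = target)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing ratio in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt                # pixel-level interpolation
    return x_mix, lam                                        # lam serves as the soft score

# Usage: the discriminator is trained to regress the soft score on mixed
# batches, while pure source and target batches keep labels 1 and 0.
x_src = torch.randn(8, 3, 32, 32)
x_tgt = torch.randn(8, 3, 32, 32)
x_mix, score = domain_mixup(x_src, x_tgt)
```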

Proceedings ArticleDOI
15 Jun 2019
TL;DR: A variational technique is introduced to estimate the distribution of a newly proposed parameter, called channel saliency, based on which redundant channels can be removed from the model via a simple criterion, resulting in significant size reduction and computation savings.
Abstract: We propose a variational Bayesian scheme for pruning convolutional neural networks at the channel level. This idea is motivated by the fact that deterministic value-based pruning methods are inherently improper and unstable. In a nutshell, a variational technique is introduced to estimate the distribution of a newly proposed parameter, called channel saliency; based on this, redundant channels can be removed from the model via a simple criterion. The advantages are two-fold: 1) Our method conducts channel pruning without the need for a re-training stage, thus improving computational efficiency. 2) Our method is implemented as a stand-alone module, called a variational pruning layer, which can be straightforwardly inserted into off-the-shelf deep learning packages without any special network design. Extensive experimental results demonstrate the effectiveness of our method: for CIFAR-10, we perform channel removal on different CNN models with up to 74% reduction, which results in significant size reduction and computation savings. For ImageNet, about 40% of the channels of ResNet-50 are removed without compromising accuracy.

295 citations
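A hedged sketch of the variational pruning layer idea: each channel is scaled by a stochastic saliency drawn from a learned Gaussian, and channels whose saliency concentrates near zero are treated as redundant. The parameter names and the thresholding criterion are assumptions, not the paper's exact formulation.

```python
# Sketch of a stand-alone variational pruning layer; the Gaussian
# parameterization and the pruning threshold are illustrative assumptions.
import torch
import torch.nn as nn

class VariationalPruningLayer(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.mu = nn.Parameter(torch.ones(num_channels))                 # saliency mean
        self.log_sigma = nn.Parameter(torch.full((num_channels,), -3.0))

    def forward(self, x):                                                # x: (N, C, H, W)
        if self.training:
            eps = torch.randn_like(self.mu)
            saliency = self.mu + eps * self.log_sigma.exp()              # reparameterization trick
        else:
            saliency = self.mu
        return x * saliency.view(1, -1, 1, 1)

    def redundant_channels(self, threshold=0.01):
        """Indices of channels whose saliency mean is close to zero; these
        can be removed from the preceding convolution."""
        return (self.mu.abs() < threshold).nonzero(as_tuple=True)[0]

# Usage: insert after a convolution and prune the channels it flags as redundant
layer = VariationalPruningLayer(64)
y = layer(torch.randn(2, 64, 8, 8))
```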

Journal ArticleDOI
TL;DR: In this paper, the authors proposed and analyzed cache-based content delivery in a three-tier heterogeneous network (HetNet), where base stations (BSs), relays, and device-to-device (D2D) pairs are included.
Abstract: Caching popular multimedia content is a promising way to unleash the ultimate potential of wireless networks. In this paper, we propose and analyze cache-based content delivery in a three-tier heterogeneous network (HetNet), where base stations (BSs), relays, and device-to-device (D2D) pairs are included. We advocate proactively caching popular content at the relays and at a subset of users with caching ability when the network is off-peak. The cached content can be reused for frequent access to offload the cellular network traffic. The node locations are first modeled as mutually independent Poisson point processes (PPPs) and the corresponding content access protocol is developed. The average ergodic rate and outage probability in the downlink are then analyzed theoretically. We further derive the throughput and the delay based on the multiclass processor-sharing queue model and the continuous-time Markov process. According to the critical condition of the steady state in the HetNet, the maximum traffic load and the global throughput gain are investigated. Moreover, the impacts of key network characteristics, e.g., the heterogeneity of multimedia contents, node densities, and the limited caching capacities, on the system performance are elaborated to provide valuable insight.

293 citations
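The modeling setup can be sketched with a short Monte-Carlo snippet: node locations drawn from independent homogeneous PPPs on a square region, and a cache hit probability under a Zipf popularity profile. The densities, cache size, and Zipf exponent are arbitrary assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch of the network model: PPP-distributed nodes and a cache
# that stores the most popular items under a Zipf request distribution.
import numpy as np

rng = np.random.default_rng(0)

def ppp(density, area_side):
    """Draw node locations from a homogeneous PPP on a square region."""
    n = rng.poisson(density * area_side ** 2)
    return rng.uniform(0, area_side, size=(n, 2))

def cache_hit_probability(num_contents=1000, cache_size=50, zipf_exp=0.8):
    """Probability that a request falls on one of the cache_size most popular
    contents under a Zipf(zipf_exp) popularity profile."""
    ranks = np.arange(1, num_contents + 1)
    popularity = ranks ** (-zipf_exp)
    popularity /= popularity.sum()
    return popularity[:cache_size].sum()

relays = ppp(density=1e-4, area_side=1000)       # relay tier
d2d_users = ppp(density=5e-4, area_side=1000)    # D2D-capable users
print(len(relays), len(d2d_users), cache_hit_probability())
```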

Posted Content
Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, Yi Yang
TL;DR: Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with "relatively less" importance, and when applied to two image classification benchmarks, the method validates its usefulness and strengths.
Abstract: Previous works utilized the "smaller-norm-less-important" criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with "relatively less" importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% FLOPs on ResNet-110 with even 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: this https URL

293 citations
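The FPGM criterion can be sketched as follows, under the common approximation that filters with the smallest total distance to all other filters in a layer are the ones closest to their geometric median and hence the most redundant; the pruning ratio and function name are assumptions, not the released code.

```python
# Simplified sketch of Filter Pruning via Geometric Median: rank filters by
# their total distance to the other filters and prune the most central ones.
import torch

def fpgm_prune_indices(conv_weight, prune_ratio=0.3):
    """conv_weight: (out_channels, in_channels, k, k) tensor. Returns the
    indices of filters nearest the geometric median (candidates to prune)."""
    filters = conv_weight.flatten(start_dim=1)         # (out_channels, d)
    dist = torch.cdist(filters, filters)               # pairwise Euclidean distances
    total_dist = dist.sum(dim=1)                       # closeness to all other filters
    num_prune = int(prune_ratio * filters.size(0))
    return torch.argsort(total_dist)[:num_prune]       # smallest total distance = most redundant

# Example on a random convolution with 64 filters of shape 3x3x3
w = torch.randn(64, 3, 3, 3)
print(fpgm_prune_indices(w))
```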


Authors


Name | H-index | Papers | Citations
Yu Huang136149289209
Xiaoou Tang13255394555
Xiaogang Wang12845273740
Shaobin Wang12687252463
Qiang Yang112111771540
Wei Lu111197361911
Xuemin Shen106122144959
Li Chen105173255996
Lajos Hanzo101204054380
Luca Benini101145347862
Lei Liu98204151163
Tao Wang97272055280
Mohamed-Slim Alouini96178862290
Qi Tian96103041010
Merouane Debbah9665241140
Network Information
Related Institutions (5)
Alcatel-Lucent
53.3K papers, 1.4M citations

90% related

Bell Labs
59.8K papers, 3.1M citations

88% related

Hewlett-Packard
59.8K papers, 1.4M citations

87% related

Microsoft
86.9K papers, 4.1M citations

87% related

Intel
68.8K papers, 1.6M citations

87% related

Performance Metrics
No. of papers from the Institution in previous years

Year | Papers
2023 | 19
2022 | 66
2021 | 2,069
2020 | 3,277
2019 | 4,570
2018 | 4,476