Institution

Huawei

Company · Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for research contributions in the topics of Terminal (electronics) and Node (networking). The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.


Papers
Journal Article
Mingsheng Long, Jianmin Wang, Guiguang Ding, Dou Shen, Qiang Yang
TL;DR: This paper proposes Graph Co-Regularized Transfer Learning (GTL), a general framework into which various matrix factorization models can be incorporated, extracting common latent factors across domains while preserving each domain's geometric structure.
Abstract: Transfer learning is established as an effective technology to leverage rich labeled data from some source domain to build an accurate classifier for the target domain. The basic assumption is that the input domains may share certain knowledge structure, which can be encoded into common latent factors and extracted by preserving important property of original data, e.g., statistical property and geometric structure. In this paper, we show that different properties of input data can be complementary to each other and exploring them simultaneously can make the learning model robust to the domain difference. We propose a general framework, referred to as Graph Co-Regularized Transfer Learning (GTL), where various matrix factorization models can be incorporated. Specifically, GTL aims to extract common latent factors for knowledge transfer by preserving the statistical property across domains, and simultaneously, refine the latent factors to alleviate negative transfer by preserving the geometric structure in each domain. Based on the framework, we propose two novel methods using NMF and NMTF, respectively. Extensive experiments verify that GTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
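To make the co-regularization idea concrete, below is a minimal sketch of graph-regularized NMF for a single domain, the building block the abstract describes; a GTL-style method would additionally share the factor U across source and target domains. The function name, constants, and update schedule are illustrative, not taken from the paper.

```python
import numpy as np

def graph_regularized_nmf(X, W, k, lam=1.0, n_iter=200, eps=1e-9):
    """Graph-regularized NMF for one domain (illustrative sketch).

    Minimizes ||X - U @ V.T||_F^2 + lam * tr(V.T @ L @ V), L = D - W,
    where W is a sample-affinity graph encoding geometric structure,
    via the standard multiplicative updates.
    X : (n_features, n_samples) non-negative data matrix
    """
    n_feat, n_samp = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((n_feat, k))   # latent factors (shared across domains in GTL)
    V = rng.random((n_samp, k))   # per-sample codes (refined by the graph term)
    D = np.diag(W.sum(axis=1))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

The multiplicative updates keep both factors non-negative, and the tr(VᵀLV) penalty is what pulls samples that are neighbors in the affinity graph W toward similar latent codes, which is the geometric-structure preservation the abstract refers to.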

161 citations

Proceedings Article
Jianyuan Guo, Yuhui Yuan, Lang Huang, Chao Zhang, Jin-Ge Yao, Kai Han
22 Oct 2019
TL;DR: P2Net applies a human parsing model to extract binary human part masks and a self-attention mechanism to capture soft latent (non-human) part masks, achieving state-of-the-art performance on three challenging benchmarks.
Abstract: Person re-identification is a challenging task due to various complex factors. Recent studies have attempted to integrate human parsing results or externally defined attributes to help capture human parts or important object regions. On the other hand, there still exist many useful contextual cues that do not fall into the scope of predefined human parts or attributes. In this paper, we address the missed contextual cues by exploiting both the accurate human parts and the coarse non-human parts. In our implementation, we apply a human parsing model to extract the binary human part masks and a self-attention mechanism to capture the soft latent (non-human) part masks. We verify the effectiveness of our approach with new state-of-the-art performance on three challenging benchmarks: Market-1501, DukeMTMC-reID and CUHK03. Our implementation is available at https://github.com/ggjy/P2Net.pytorch.
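The way such part masks typically guide feature extraction can be sketched as masked average pooling over a backbone feature map. This is a generic illustration of the idea, not P2Net's exact architecture; all names are hypothetical.

```python
import torch

def masked_part_pooling(feat, masks, eps=1e-6):
    """Pool a backbone feature map into one descriptor per part.

    feat  : (B, C, H, W) convolutional features
    masks : (B, P, H, W) binary part masks or soft attention maps
    returns (B, P, C) part-specific descriptors
    """
    B, C, H, W = feat.shape
    P = masks.shape[1]
    m = masks.reshape(B, P, 1, H * W)              # (B, P, 1, HW)
    f = feat.reshape(B, 1, C, H * W)               # (B, 1, C, HW)
    return (m * f).sum(-1) / (m.sum(-1) + eps)     # masked average over pixels
```

Each row of the result is a descriptor for one part; binary masks from a parsing model and soft latent masks from self-attention can be fed through the same pooling.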

160 citations

Journal Article
TL;DR: For flat Rayleigh-fading channels, strategies with only one-bit feedback per user are shown to capture the double-logarithmic capacity growth (with the number of users) of full-CSI systems, and proportional fairness of scheduling can be achieved in this regime with no loss of throughput.
Abstract: Opportunistic scheduling provides attractive sum-rate capacities in a multiuser network when the base-station has transmit-side channel state information (CSI), which is often estimated at the mobiles and provided to the base station via a feedback channel. This correspondence investigates opportunistic methods in the presence of limited feedback. For flat Rayleigh-fading channels, strategies with only one-bit feedback per user are demonstrated that capture the double-logarithmic capacity growth (with number of users) of full-CSI systems. Furthermore, for a given system configuration, it is shown that if the one-bit feedback is chosen judiciously, there is little to be gained by increasing the feedback rate. Our results provide optimal methods of calculating the one-bit feedback, as well as expressions for the sum-rate capacity in the one-bit feedback regime. It is shown that one may achieve proportional fairness of scheduling in this regime with no loss of throughput. For OFDM multiuser systems, the motivation for limited feedback is even more pronounced. An extension of the one-bit technique is presented for subchannel/user selection under both correlated and uncorrelated subchannel conditions, and optimal growth in capacity is demonstrated.
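A quick Monte-Carlo sketch of the one-bit scheme described above: each user compares its channel power to a threshold and feeds back a single bit, and the base station schedules one reporting user at random. The log(n) threshold scaling below is an illustrative choice consistent with the double-logarithmic growth claim, not the paper's optimized threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_bit_sum_rate(n_users, snr=1.0, n_trials=20000):
    """Average scheduled rate under one-bit feedback, flat Rayleigh fading.
    Channel powers are Exp(1); a user reports 1 iff its power exceeds
    the threshold; the base station picks a reporting user at random
    (falling back to a random user if nobody reports)."""
    thresh = np.log(n_users)
    gains = rng.exponential(1.0, size=(n_trials, n_users))
    rates = np.empty(n_trials)
    for t in range(n_trials):
        above = np.flatnonzero(gains[t] > thresh)
        g = gains[t, rng.choice(above)] if above.size else gains[t, rng.integers(n_users)]
        rates[t] = np.log2(1.0 + snr * g)
    return rates.mean()

# rate rises slowly (double-logarithmically) with the number of users
for n in (2, 8, 32, 128):
    print(n, "users:", round(one_bit_sum_rate(n), 3), "bit/s/Hz")
```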

159 citations

Journal Article
TL;DR: Discusses fundamentals and key technical issues in developing and realizing 3D multi-input multi-output (MIMO) technology for next-generation mobile communications.
Abstract: Spectrum efficiency has long been at the center of mobile communication research, development, and operation. Today it is even more so with the explosive popularity of the mobile Internet, social networks, and smart phones that are more powerful than our desktops used to be not long ago. The discovery of spatial multiplexing via multiple antennas in the mid-1990s has brought new hope to boosting data rates regardless of the limited bandwidth. To further realize the potential of spatial multiplexing, the next leap will be accounting for the three-dimensional real world in which electromagnetic waves propagate. In this article we discuss fundamentals and key technical issues in developing and realizing 3D multi-input multi-output technology for next generation mobile communications.
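The spatial-multiplexing gain the article builds on can be made concrete with the standard ergodic capacity of an i.i.d. Rayleigh MIMO channel, C = E[log2 det(I + (ρ/Nt) H Hᴴ)]. The sketch below (illustrative parameters, not from the article) shows capacity scaling roughly linearly with the antenna count at a fixed bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_ergodic_capacity(nt, nr, snr=10.0, n_trials=5000):
    """C = E[log2 det(I + (snr/nt) H H^H)] for i.i.d. Rayleigh H, in bit/s/Hz."""
    caps = np.empty(n_trials)
    for t in range(n_trials):
        # complex Gaussian channel matrix, unit average power per entry
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * (H @ H.conj().T))
        caps[t] = logdet / np.log(2)
    return caps.mean()

# capacity grows roughly linearly with min(nt, nr), not with bandwidth
for n in (1, 2, 4, 8):
    print(n, "antennas:", round(mimo_ergodic_capacity(n, n), 2), "bit/s/Hz")
```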

159 citations

Journal Article
Hang Li
TL;DR: A survey of deep learning for natural language processing across five tasks (classification, matching, translation, structured prediction, and sequential decision processes), summarizing its strengths in end-to-end training and representation learning as well as its remaining challenges.
Abstract: Natural language processing (NLP) is an important direction in artificial intelligence that studies theories and methods for enabling computers to use human language, i.e., natural language. Deep learning refers to machine learning techniques based on deep neural networks. Deep learning has been successfully applied to NLP and has achieved major progress. This article summarizes the achievements of deep learning in NLP and discusses the advantages of the technology and the challenges it faces. It holds that NLP has five main tasks: classification, matching, translation, structured prediction, and sequential decision processes. On the first four of these tasks, deep learning methods perform better or significantly better than traditional methods and represent the current state of the art; for the fifth task, sequential decision processes such as multi-turn dialogue, the contribution of deep learning has not yet been fully verified. Among the applications of deep learning to NLP, progress in machine translation is particularly notable and is becoming the representative technology of the field. Moreover, deep learning has made certain applications possible for the first time; for example, it has been successfully applied to image retrieval and to generative natural-language dialogue. The main advantages of deep learning for NLP lie in end-to-end training and representation learning, which distinguish it from traditional machine learning methods and make it a powerful tool for NLP. Deep learning also faces challenges, such as the lack of theoretical foundations and of model interpretability, and the large amounts of data and computing power required for training. It further faces challenges unique to NLP, such as the long-tail problem, integration with symbolic processing, and inference and decision making. It is foreseeable that combining deep learning with other technologies (reinforcement learning, inference, knowledge) will take NLP to the next level.
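The "end-to-end training and representation learning" advantage the abstract highlights can be illustrated with a minimal text classifier in which the token representations and the decision layer are learned jointly from raw token ids, with no hand-engineered features. The model and toy data below are hypothetical, purely for illustration.

```python
import torch
import torch.nn as nn

class BagOfEmbeddings(nn.Module):
    """Minimal end-to-end classifier: embeddings and classifier are
    trained jointly from raw token ids, rather than fitting a model
    on separately engineered features."""
    def __init__(self, vocab_size, dim=64, n_classes=2):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.emb(token_ids, offsets))

# toy usage: two "sentences" packed into one flat id tensor
model = BagOfEmbeddings(vocab_size=100)
ids = torch.tensor([3, 7, 7, 42, 1, 5])
offsets = torch.tensor([0, 3])       # sentence boundaries
logits = model(ids, offsets)         # shape (2, n_classes)
```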

159 citations


Authors


Name                  H-index  Papers  Citations
Yu Huang              136      1,492   89,209
Xiaoou Tang           132      553     94,555
Xiaogang Wang         128      452     73,740
Shaobin Wang          126      872     52,463
Qiang Yang            112      1,117   71,540
Wei Lu                111      1,973   61,911
Xuemin Shen           106      1,221   44,959
Li Chen               105      1,732   55,996
Lajos Hanzo           101      2,040   54,380
Luca Benini           101      1,453   47,862
Lei Liu               98       2,041   51,163
Tao Wang              97       2,720   55,280
Mohamed-Slim Alouini  96       1,788   62,290
Qi Tian               96       1,030   41,010
Merouane Debbah       96       652     41,140
Network Information

Related Institutions (5)
Alcatel-Lucent: 53.3K papers, 1.4M citations (90% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (87% related)
Microsoft: 86.9K papers, 4.1M citations (87% related)
Intel: 68.8K papers, 1.6M citations (87% related)

Performance Metrics

No. of papers from the Institution in previous years:

Year  Papers
2023  19
2022  66
2021  2,069
2020  3,277
2019  4,570
2018  4,476