Institution

Huawei

Company, Shenzhen, China
About: Huawei is a company organization based in Shenzhen, China. It is known for research contributions in the topics of Terminal (electronics) & Signal. The organization has 41417 authors who have published 44698 publications receiving 343496 citations. The organization is also known as: Huawei Technologies & Huawei Technologies Co., Ltd.


Papers
Journal Article (DOI)
TL;DR: This work investigates the problem of distributed representation learning from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting, and produces representations that collectively preserve as much information as possible about the ground truth $Y$.
Abstract: The problem of distributed representation learning is one in which multiple sources of information $X_1,\ldots,X_K$ are processed separately so as to learn as much information as possible about some ground truth $Y$. We investigate this problem from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting. Specifically, $K$ encoders, $K \geq 2$, compress their observations $X_1,\ldots,X_K$ separately in a manner such that, collectively, the produced representations preserve as much information as possible about $Y$. We study both discrete memoryless (DM) and memoryless vector Gaussian data models. For the discrete model, we establish a single-letter characterization of the optimal tradeoff between complexity (or rate) and relevance (or information) for a class of memoryless sources (the observations $X_1,\ldots,X_K$ being conditionally independent given $Y$). For the vector Gaussian model, we provide an explicit characterization of the optimal complexity-relevance tradeoff. Furthermore, we develop a variational bound on the complexity-relevance tradeoff which generalizes the evidence lower bound (ELBO) to the distributed setting. We also provide two algorithms for computing this bound: i) a Blahut-Arimoto type iterative algorithm, which computes optimal complexity-relevance encoding mappings by iterating over a set of self-consistent equations, and ii) a variational inference type algorithm, in which the encoding mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on synthetic and real datasets support the efficiency of the approaches and algorithms developed in this paper.
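The variational algorithm described in the abstract (encoding mappings parametrized by neural networks, bound approximated by sampling and optimized with SGD) can be sketched roughly as follows; the two-encoder setup, layer sizes, Gaussian prior on the representations, and the tradeoff weight beta are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code) of a distributed variational IB
# objective with K = 2 neural encoders and a shared decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Stochastic encoder q(U_k | X_k) with a diagonal Gaussian output."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)  # KL to N(0, I)
        return z, kl

enc1, enc2 = GaussianEncoder(10, 8), GaussianEncoder(10, 8)
decoder = nn.Linear(16, 3)  # predicts Y from the concatenated representations
params = list(enc1.parameters()) + list(enc2.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
beta = 1e-2                 # complexity/relevance tradeoff weight (assumed value)

x1, x2 = torch.randn(32, 10), torch.randn(32, 10)  # toy observations X_1, X_2
y = torch.randint(0, 3, (32,))                     # toy ground truth Y

opt.zero_grad()
z1, kl1 = enc1(x1)
z2, kl2 = enc2(x2)
logits = decoder(torch.cat([z1, z2], dim=1))
# Relevance term (log-loss on Y) plus beta-weighted complexity (rate) terms:
loss = F.cross_entropy(logits, y) + beta * (kl1.mean() + kl2.mean())
loss.backward()
opt.step()
```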

65 citations

Posted Content
TL;DR: An adaptive margin principle is proposed to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems by developing a class-relevant additive margin loss, where semantic similarity between each pair of classes is considered to separate samples in the feature embedding space from similar classes.
Abstract: Few-shot learning (FSL) has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in learning to generalize from a few examples. This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems. Specifically, we first develop a class-relevant additive margin loss, where semantic similarity between each pair of classes is considered to separate samples in the feature embedding space from similar classes. Further, we incorporate the semantic context among all classes in a sampled training task and develop a task-relevant additive margin loss to better distinguish samples from different classes. Our adaptive margin method can be easily extended to a more realistic generalized FSL setting. Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches, under both the standard FSL and generalized FSL settings.
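A class-relevant additive margin can be sketched as below for a prototype-style metric classifier; the cosine-similarity scorer, the scale factor, and the margin matrix derived from semantic similarity are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumptions noted above): additive margins that grow with
# semantic similarity between classes, applied to prototype-based logits.
import torch
import torch.nn.functional as F

def additive_margin_loss(query_emb, prototypes, labels, margin):
    """query_emb: (B, D); prototypes: (C, D); labels: (B,); margin: (C, C).
    margin[i, j] is larger for semantically similar class pairs, so their
    samples are pushed further apart in the embedding space."""
    sim = F.cosine_similarity(query_emb.unsqueeze(1), prototypes.unsqueeze(0), dim=2)  # (B, C)
    m = margin[labels]                        # margins for each query's true class vs. the others
    m = m.scatter(1, labels.unsqueeze(1), 0)  # no margin on the true class itself
    logits = 10.0 * (sim + m)                 # scale factor is an assumed hyperparameter
    return F.cross_entropy(logits, labels)

# Toy usage: 5-way task, 16 queries, 64-d embeddings.
emb = torch.randn(16, 64)
protos = torch.randn(5, 64)
labels = torch.randint(0, 5, (16,))
sem_sim = torch.rand(5, 5)     # stand-in for semantic similarity between class names
margin = 0.2 * sem_sim         # more similar classes get a larger margin
loss = additive_margin_loss(emb, protos, labels, margin)
```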

65 citations

Journal Article (DOI)
TL;DR: A probabilistic resource allocation approach to further exploit the flexibility of OSA, which, based on the probabilities of channel availability obtained from spectrum sensing, optimizes channel and power allocation in a multi-channel environment.
Abstract: Opportunistic spectrum access (OSA) in cognitive radio (CR) networks significantly improves spectrum efficiency by allowing secondary usage of licensed spectrum. In this paper, we propose a probabilistic resource allocation approach to further exploit the flexibility of OSA. Based on the probabilities of channel availability obtained from spectrum sensing, the proposed approach optimizes channel and power allocation in a multi-channel environment. The given algorithm maximizes the overall utility of a CR network and ensures sufficient protection of licensed users from unacceptable interference, which also supports diverse quality-of-service requirements and enables a distributed implementation in multi-user networks. Both analytical and simulation results demonstrate the effectiveness of this approach as well as its advantage over conventional approaches that rely upon the hard decisions on channel availability.
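A minimal sketch of the underlying idea follows: weight each channel's rate by its sensed availability probability and allocate power under a total-power budget via weighted water-filling. The gains, probabilities, and the omission of the paper's interference-protection and QoS constraints are simplifying assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): probability-weighted
# power allocation across sensed channels under a total power budget.
import numpy as np

p_avail = np.array([0.9, 0.6, 0.3])   # probability each licensed channel is free (assumed)
gain    = np.array([1.2, 2.0, 0.8])   # channel power-to-noise gains (assumed)
P_total = 2.0                         # secondary user's power budget (assumed)

def weighted_waterfill(p, g, P, iters=60):
    """Maximize sum_k p_k * log2(1 + g_k * x_k) s.t. sum_k x_k = P, x_k >= 0.
    KKT conditions give x_k = max(0, p_k / lam - 1 / g_k); find lam by bisection."""
    lo, hi = 1e-9, p.max() * g.max() + 1.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        x = np.maximum(0.0, p / lam - 1.0 / g)
        if x.sum() > P:
            lo = lam   # too much power used: raise the water-level price
        else:
            hi = lam
    return x

power = weighted_waterfill(p_avail, gain, P_total)
expected_rate = np.sum(p_avail * np.log2(1.0 + gain * power))
print(power, expected_rate)
```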

64 citations

Posted Content
TL;DR: Deep Match Tree (DeepMatch$_{tree}$) as discussed by the authors consists of a mining algorithm to discover patterns for matching two short texts, defined in the product space of dependency trees, and a deep neural network for matching short texts using the mined patterns, together with a learning algorithm that builds the network with a sparse structure.
Abstract: Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or, more generally, two short texts. We propose a new approach to the problem, called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The approach consists of two components: 1) a mining algorithm to discover patterns for matching two short texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network having a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [Wang et al., 2013], and show that DeepMatch$_{tree}$ can outperform a number of competitor models, including one without using dependency trees and one based on word embeddings, all with large margins.
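The pipeline can be caricatured as below: co-occurring dependency-tree patterns become sparse binary features for a small neural scorer. The toy pattern list and the scorer architecture are stand-ins; the paper's actual pattern mining over the product space of dependency trees is far richer.

```python
# Toy sketch (not the authors' system): binary pattern-pair features for a
# (text, response) pair feed a small neural matching scorer.
import torch
import torch.nn as nn

# Toy "mined" pattern vocabulary: each entry pairs a dependency pattern from
# text A with one from text B (hypothetical examples).
patterns = [("nsubj(love, i)", "nsubj(glad, i)"),
            ("dobj(love, movie)", "dobj(watch, it)"),
            ("amod(movie, great)", "advmod(agree, totally)")]

def featurize(deps_a, deps_b):
    """Binary vector: 1 if both halves of a mined pattern appear in the pair."""
    return torch.tensor([float(pa in deps_a and pb in deps_b) for pa, pb in patterns])

class MatchScorer(nn.Module):
    def __init__(self, n_feats):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_feats, 8), nn.ReLU(), nn.Linear(8, 1))
    def forward(self, feats):
        return torch.sigmoid(self.net(feats))  # matching probability

deps_tweet = {"nsubj(love, i)", "dobj(love, movie)"}
deps_reply = {"nsubj(glad, i)", "dobj(watch, it)"}
scorer = MatchScorer(len(patterns))
score = scorer(featurize(deps_tweet, deps_reply))
```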

64 citations

Journal Article (DOI)
TL;DR: In this paper, the authors proposed a multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device.
Abstract: Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but they suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves near-optimal rate performance and at the same time guarantees the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in the Long Term Evolution-Advanced (LTE-A) standard in terms of access delay.
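A toy simulation of the delay-aware probabilistic access idea is sketched below; the transmit-probability rule, the number of simultaneously decodable transmissions, and the delay budgets are assumptions, and the sketch does not model the analog fountain coding itself.

```python
# Crude slotted-access model (illustration only): each device transmits with a
# probability tied to its delay budget; up to `decodable` simultaneous
# transmissions succeed, loosely standing in for multi-user decoding.
import random

random.seed(0)
delay_budget = [2, 5, 10, 20, 50]                    # tolerable access delay in slots (assumed)
tx_prob = [min(1.0, 1.5 / d) for d in delay_budget]  # tighter budget -> higher transmit probability

def simulate(slots=20000, decodable=2):
    waiting = [0] * len(tx_prob)
    delays = [[] for _ in tx_prob]
    for _ in range(slots):
        waiting = [w + 1 for w in waiting]
        senders = [i for i, p in enumerate(tx_prob) if random.random() < p]
        if 1 <= len(senders) <= decodable:           # all concurrent senders decoded
            for i in senders:
                delays[i].append(waiting[i])
                waiting[i] = 0
    return [sum(d) / len(d) if d else None for d in delays]

print(simulate())  # average access delay per device, lowest for delay-critical devices
```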

64 citations


Authors

Showing all 41483 results

Name | H-index | Papers | Citations
Yu Huang | 136 | 1492 | 89209
Xiaoou Tang | 132 | 553 | 94555
Xiaogang Wang | 128 | 452 | 73740
Shaobin Wang | 126 | 872 | 52463
Qiang Yang | 112 | 1117 | 71540
Wei Lu | 111 | 1973 | 61911
Xuemin Shen | 106 | 1221 | 44959
Li Chen | 105 | 1732 | 55996
Lajos Hanzo | 101 | 2040 | 54380
Luca Benini | 101 | 1453 | 47862
Lei Liu | 98 | 2041 | 51163
Tao Wang | 97 | 2720 | 55280
Mohamed-Slim Alouini | 96 | 1788 | 62290
Qi Tian | 96 | 1030 | 41010
Merouane Debbah | 96 | 652 | 41140
Network Information
Related Institutions (5)
Alcatel-Lucent: 53.3K papers, 1.4M citations, 90% related
Bell Labs: 59.8K papers, 3.1M citations, 88% related
Hewlett-Packard: 59.8K papers, 1.4M citations, 87% related
Microsoft: 86.9K papers, 4.1M citations, 87% related
Intel: 68.8K papers, 1.6M citations, 87% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 19
2022 | 66
2021 | 2,069
2020 | 3,277
2019 | 4,570
2018 | 4,476