Institution
Huawei
Company · Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for its research contributions in the topics: Terminal (electronics) & Signal. The organization has 41417 authors who have published 44698 publications receiving 343496 citations. The organization is also known as: Huawei Technologies & Huawei Technologies Co., Ltd.
Papers published on a yearly basis
Papers
TL;DR: This work investigates the problem of distributed representation learning from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting, and produces representations that collectively preserve as much information as possible about the ground truth $Y$.
Abstract: The problem of distributed representation learning is one in which multiple sources of information $X_1,\ldots,X_K$ are processed separately so as to learn as much information as possible about some ground truth $Y$. We investigate this problem from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting. Specifically, $K$ encoders, $K \geq 2$, compress their observations $X_1,\ldots,X_K$ separately in a manner such that, collectively, the produced representations preserve as much information as possible about $Y$. We study both discrete memoryless (DM) and memoryless vector Gaussian data models. For the discrete model, we establish a single-letter characterization of the optimal tradeoff between complexity (or rate) and relevance (or information) for a class of memoryless sources (the observations $X_1,\ldots,X_K$ being conditionally independent given $Y$). For the vector Gaussian model, we provide an explicit characterization of the optimal complexity-relevance tradeoff. Furthermore, we develop a variational bound on the complexity-relevance tradeoff which generalizes the evidence lower bound (ELBO) to the distributed setting. We also provide two algorithms that allow this bound to be computed: i) a Blahut-Arimoto type iterative algorithm which computes optimal complexity-relevance encoding mappings by iterating over a set of self-consistent equations, and ii) a variational inference type algorithm in which the encoding mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on synthetic and real datasets support the efficiency of the approaches and algorithms developed in this paper.
65 citations
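The Blahut-Arimoto type algorithm in the abstract iterates a set of self-consistent equations. As a rough, simplified sketch of that idea, the following toy implementation runs the classical *centralized* IB self-consistent updates on a small discrete joint distribution (the paper's distributed, multi-encoder version is more involved; variable names and the stopping heuristic here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def ib_iterate(p_xy, n_t, beta, n_iter=200, seed=0):
    """Toy Blahut-Arimoto-style iteration for the centralized
    Information Bottleneck: alternately update the encoder p(t|x),
    the marginal p(t), and the decoder p(y|t) until the
    self-consistent equations are approximately satisfied."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                     # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]          # conditional p(y|x)

    # random stochastic initial encoder p(t|x)
    q = rng.random((n_x, n_t))
    q /= q.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_x @ q                          # p(t) = sum_x p(x) p(t|x)
        # decoder p(y|t) = sum_x p(t,x) p(y|x) / p(t)
        p_y_given_t = (q * p_x[:, None]).T @ p_y_given_x
        p_y_given_t /= p_t[:, None]
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        log_ratio = np.log(p_y_given_x[:, None, :] + 1e-12) \
                  - np.log(p_y_given_t[None, :, :] + 1e-12)
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # self-consistent encoder update: p(t|x) ∝ p(t) exp(-beta * KL)
        q = p_t[None, :] * np.exp(-beta * kl)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

Larger `beta` weights relevance over compression, so the learned encoder becomes more deterministic; the paper's distributed variant runs coupled updates of this form across the $K$ encoders.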
TL;DR: An adaptive margin principle is proposed to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems by developing a class-relevant additive margin loss, where semantic similarity between each pair of classes is considered to separate samples in the feature embedding space from similar classes.
Abstract: Few-shot learning (FSL) has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in learning to generalize from a few examples. This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems. Specifically, we first develop a class-relevant additive margin loss, where semantic similarity between each pair of classes is considered to separate samples in the feature embedding space from similar classes. Further, we incorporate the semantic context among all classes in a sampled training task and develop a task-relevant additive margin loss to better distinguish samples from different classes. Our adaptive margin method can be easily extended to a more realistic generalized FSL setting. Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches, under both the standard FSL and generalized FSL settings.
65 citations
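The class-relevant additive margin idea can be sketched numerically: competitor logits receive an additive margin that grows with their semantic similarity to the true class, forcing embeddings of similar classes further apart. The similarity matrix, scale, and margin values below are illustrative assumptions, not the paper's trained quantities:

```python
import numpy as np

def adaptive_margin_loss(query, prototypes, label, sem_sim,
                         scale=10.0, base_margin=0.3):
    """Sketch of a class-relevant additive margin softmax loss.
    `sem_sim` is a hypothetical [0,1] class-by-class semantic
    similarity matrix; competitors semantically closer to the true
    class get larger margins, so they must be pushed further away."""
    q = query / np.linalg.norm(query)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    cos = p @ q                              # cosine sim to each class prototype
    margins = base_margin * sem_sim[label]   # per-competitor margin
    logits = scale * (cos + margins)
    logits[label] = scale * cos[label]       # no margin on the true class
    # cross-entropy on the margin-adjusted logits
    logits -= logits.max()
    return -np.log(np.exp(logits[label]) / np.exp(logits).sum())
```

During training, minimizing this loss requires the true-class similarity to exceed each competitor's similarity by its class-dependent margin; at test time the margins are dropped and plain nearest-prototype matching is used.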
TL;DR: A probabilistic resource allocation approach to further exploit the flexibility of OSA based on the probabilities of channel availability obtained from spectrum sensing, which maximizes channel and power allocation in a multi-channel environment.
Abstract: Opportunistic spectrum access (OSA) in cognitive radio (CR) networks significantly improves spectrum efficiency by allowing secondary usage of licensed spectrum. In this paper, we propose a probabilistic resource allocation approach to further exploit the flexibility of OSA. Based on the probabilities of channel availability obtained from spectrum sensing, the proposed approach optimizes channel and power allocation in a multi-channel environment. The given algorithm maximizes the overall utility of a CR network and ensures sufficient protection of licensed users from unacceptable interference, which also supports diverse quality-of-service requirements and enables a distributed implementation in multi-user networks. Both analytical and simulation results demonstrate the effectiveness of this approach as well as its advantage over conventional approaches that rely upon the hard decisions on channel availability.
64 citations
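The soft-decision flavor of this allocation can be illustrated with a toy water-filling routine that allocates power over the *expected* channel quality, availability probability times gain, rather than first making a hard busy/idle decision per channel. This is a minimal sketch under assumed Shannon-capacity utilities; the paper's utility model and interference constraints are richer:

```python
import numpy as np

def expected_waterfill(avail_prob, gain, total_power):
    """Toy probabilistic allocation: water-fill transmit power over the
    expected channel quality avail_prob * gain. A hard-decision scheme
    would instead zero out any channel judged busy by spectrum sensing."""
    eff = np.asarray(avail_prob, dtype=float) * np.asarray(gain, dtype=float)
    inv = 1.0 / eff                # "floor height" of each channel
    order = np.argsort(inv)        # best (lowest-floor) channels first
    for k in range(len(eff), 0, -1):
        active = order[:k]
        level = (total_power + inv[active].sum()) / k   # candidate water level
        if level >= inv[active].max():                  # all powers nonnegative
            power = np.zeros_like(eff)
            power[active] = level - inv[active]
            return power
    raise ValueError("no feasible allocation")
```

Channels that are both strong and likely to be free attract the most power, while marginal channels are still given a small share instead of being discarded outright.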
TL;DR: Deep Match Tree (DeepMatch$_{tree}$) as discussed by the authors combines a mining algorithm that discovers patterns for matching two short texts, defined in the product space of dependency trees, with a deep neural network for matching short texts using the mined patterns, together with a learning algorithm to build the network with a sparse structure.
Abstract: Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts. We propose a new approach to the problem, called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The approach consists of two components, 1) a mining algorithm to discover patterns for matching two short texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network having a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [Wang et al., 2013], and show that DeepMatch$_{tree}$ can outperform a number of competitor models including one without using dependency trees and one based on word-embedding, all with large margins.
64 citations
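The phrase "patterns in the product space of dependency trees" can be made concrete with a toy sketch: each text is reduced to a set of dependency edges, every cross-product pair of edges (one from each text) is a candidate pattern, and patterns that fire more often in matched pairs than unmatched ones are kept. The edge strings and the lift-style filter below are hypothetical stand-ins for a real dependency parser and the paper's mining criterion:

```python
from itertools import product

def mine_pair_patterns(pos_pairs, neg_pairs, min_lift=0.5):
    """Toy mining in the product space of dependency trees: a pattern
    is a pair (edge from text A, edge from text B); keep patterns that
    occur in matched (positive) pairs more than in unmatched ones."""
    def count(pairs):
        counts = {}
        for edges_a, edges_b in pairs:
            for pat in product(edges_a, edges_b):
                counts[pat] = counts.get(pat, 0) + 1
        return counts
    pos, neg = count(pos_pairs), count(neg_pairs)
    return {p for p, n in pos.items() if n / max(neg.get(p, 0), 1) > min_lift}

def match_score(edges_a, edges_b, patterns):
    """Crude match signal: how many mined patterns fire on this pair.
    The paper instead feeds pattern activations into a sparse deep net."""
    return len(set(product(edges_a, edges_b)) & patterns)
```

In the actual model the fired patterns form a sparse binary input layer to the neural network, which learns how to weight and combine them rather than simply counting them.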
TL;DR: In this paper, the authors proposed a multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device.
Abstract: Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves near-optimal rate performance while guaranteeing the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in the Long Term Evolution-Advanced (LTE-A) standard in terms of access delay.
64 citations
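The probabilistic-transmission idea can be illustrated with a minimal slotted-access simulation: each backlogged device transmits in a slot with its own probability (in the paper, chosen optimally from its delay requirement), and we record each device's first-success slot. This toy model resolves only collision-free slots, so it deliberately ignores the analog fountain coding that lets the actual scheme decode simultaneous transmissions; probabilities and slot counts are illustrative assumptions:

```python
import numpy as np

def simulate_access(tx_prob, n_slots=2000, seed=1):
    """Toy slotted random access: device i transmits with probability
    tx_prob[i] while still backlogged; a slot succeeds for a device
    only when it is the sole transmitter. Returns each device's
    first-success slot index (np.inf if it never succeeds)."""
    rng = np.random.default_rng(seed)
    n = len(tx_prob)
    first = np.full(n, np.inf)
    done = np.zeros(n, dtype=bool)
    for t in range(n_slots):
        tx = (~done) & (rng.random(n) < np.asarray(tx_prob))
        if tx.sum() == 1:                      # collision-free slot
            i = np.flatnonzero(tx)[0]
            first[i], done[i] = t, True
    return first
```

Tightening a device's delay requirement would map to a higher transmit probability in this model, trading more collisions for a shorter expected wait; the fountain-coded scheme avoids most of that tradeoff because colliding transmissions still contribute useful coded symbols.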
Authors
Showing all 41483 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yu Huang | 136 | 1492 | 89209 |
Xiaoou Tang | 132 | 553 | 94555 |
Xiaogang Wang | 128 | 452 | 73740 |
Shaobin Wang | 126 | 872 | 52463 |
Qiang Yang | 112 | 1117 | 71540 |
Wei Lu | 111 | 1973 | 61911 |
Xuemin Shen | 106 | 1221 | 44959 |
Li Chen | 105 | 1732 | 55996 |
Lajos Hanzo | 101 | 2040 | 54380 |
Luca Benini | 101 | 1453 | 47862 |
Lei Liu | 98 | 2041 | 51163 |
Tao Wang | 97 | 2720 | 55280 |
Mohamed-Slim Alouini | 96 | 1788 | 62290 |
Qi Tian | 96 | 1030 | 41010 |
Merouane Debbah | 96 | 652 | 41140 |