Institution

Huawei

Company · Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for its research contributions in the topics of Terminal (electronics) and Node (networking). The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.


Papers
Journal Article
TL;DR: Demonstrates the feasibility of on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management, and presents a number of implementation options, each with different characteristics, advantages, and disadvantages.
Abstract: The objective of this article is to demonstrate the feasibility of on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management. For this purpose, the article describes the key elements needed to realize the architectural vision of EPC as a Service, an implementation option of the Evolved Packet Core, as specified by 3GPP, that can be deployed in cloud environments. To meet the challenging requirements associated with implementing EPC over a cloud infrastructure and providing it "as a Service," this article presents a number of different options, each with different characteristics, advantages, and disadvantages. A thorough analysis comparing the different implementation options is also presented.
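
To make the lifecycle-management idea concrete, here is a minimal Python sketch of how an "EPC as a Service" controller might instantiate, scale, and tear down virtualized core-network components. All names here (EpcAsAService, EpcInstance, scale_out, and the per-component replica model) are illustrative assumptions, not an API from the paper.

```python
# Minimal, illustrative sketch of on-demand EPC lifecycle management.
# All names are hypothetical; the paper describes the architecture,
# not this API.
from dataclasses import dataclass, field

VNF_TYPES = ("MME", "S-GW", "P-GW", "HSS")  # 3GPP EPC network functions

@dataclass
class EpcInstance:
    """One tenant's virtualized EPC, deployed as a set of VNF replicas."""
    tenant: str
    replicas: dict = field(default_factory=lambda: {v: 1 for v in VNF_TYPES})

    def scale_out(self, vnf: str, count: int = 1) -> None:
        """Elastically add replicas of one component, e.g. under load."""
        self.replicas[vnf] += count

    def scale_in(self, vnf: str, count: int = 1) -> None:
        """Remove replicas, keeping at least one per component."""
        self.replicas[vnf] = max(1, self.replicas[vnf] - count)

class EpcAsAService:
    """Lifecycle manager: create, scale, and tear down EPC instances."""
    def __init__(self):
        self._instances: dict[str, EpcInstance] = {}

    def instantiate(self, tenant: str) -> EpcInstance:
        inst = EpcInstance(tenant)
        self._instances[tenant] = inst
        return inst

    def terminate(self, tenant: str) -> None:
        self._instances.pop(tenant, None)

# Usage: an operator spins up a core for a tenant, then scales the
# control plane (MME) independently of the user plane (S-GW/P-GW).
core = EpcAsAService()
epc = core.instantiate("operator-A")
epc.scale_out("MME", 2)
print(epc.replicas)  # {'MME': 3, 'S-GW': 1, 'P-GW': 1, 'HSS': 1}
```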

189 citations

Proceedings Article
Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu
01 Jan 2020
TL;DR: Proposes DynaBERT, a novel dynamic BERT model that can run at adaptive width and depth; at its largest size it performs comparably to BERT (or RoBERTa), while at smaller widths and depths it consistently outperforms existing BERT compression methods.
Abstract: Pre-trained language models like BERT, though powerful in many natural language processing tasks, are expensive in both computation and memory. To alleviate this problem, one approach is to compress them for specific tasks before deployment. However, recent works on BERT compression usually compress the large BERT model to a fixed smaller size, and thus cannot fully satisfy the requirements of edge devices with varying hardware capabilities. In this paper, we propose a novel dynamic BERT model (abbreviated as DynaBERT), which can flexibly adjust its size and latency by selecting an adaptive width and depth. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth, by distilling knowledge from the full-sized model to small sub-networks. Network rewiring is also used so that the more important attention heads and neurons are shared by more sub-networks. Comprehensive experiments under various efficiency constraints demonstrate that our proposed dynamic BERT (or RoBERTa) at its largest size performs comparably to BERT-base (or RoBERTa-base), while at smaller widths and depths it consistently outperforms existing BERT compression methods. Code is available at this https URL.
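
The width/depth mechanism can be illustrated with a small Python sketch: given multipliers in (0, 1], keep a subset of layers and, within each layer, the highest-scoring attention heads. The importance scores and the "keep the shallowest layers" depth rule below are assumptions for exposition; the paper's network rewiring and distillation procedure are not reproduced here.

```python
# Illustrative sketch of DynaBERT-style sub-network selection.
import random

FULL_LAYERS, FULL_HEADS = 12, 12  # BERT-base configuration

def select_subnetwork(width_mult, depth_mult, head_importance):
    """Return (kept_layer_ids, kept_head_ids_per_layer).

    width_mult and depth_mult lie in (0, 1]; head_importance[l][h] is a
    score so that more important heads survive at smaller widths.
    """
    n_layers = max(1, round(depth_mult * FULL_LAYERS))
    n_heads = max(1, round(width_mult * FULL_HEADS))
    kept_layers = list(range(n_layers))  # one simple depth choice (assumed)
    kept_heads = []
    for layer in kept_layers:
        ranked = sorted(range(FULL_HEADS),
                        key=lambda h: head_importance[layer][h],
                        reverse=True)
        kept_heads.append(sorted(ranked[:n_heads]))
    return kept_layers, kept_heads

# A small edge device could request a quarter-width, half-depth model:
scores = [[random.random() for _ in range(FULL_HEADS)]
          for _ in range(FULL_LAYERS)]
layers, heads = select_subnetwork(0.25, 0.5, scores)
print(len(layers), [len(h) for h in heads])  # 6 layers, 3 heads per layer
```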

187 citations

Journal Article
TL;DR: Introduces the global spectrum situation for 5G, both below and above 6 GHz, covering regulatory status and technical aspects, and discusses the technical challenges of supporting 5G in millimeter-wave spectrum, such as coverage limitation and implementation aspects.
Abstract: The availability of widely harmonized mobile spectrum is crucial for the success of 5G mobile communications, for fulfilling its visions and requirements and delivering the full range of potential 5G capabilities. This article introduces the global spectrum situation for 5G, both below and above 6 GHz, covering both regulatory status and technical aspects. In particular, the technical challenges of supporting 5G in millimeter-wave spectrum, such as coverage limitation and implementation aspects, are discussed.
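
The coverage limitation mentioned above follows directly from free-space path loss, which grows with the square of the carrier frequency. A short worked example using the standard FSPL formula; the band choices of 3.5 GHz and 28 GHz are illustrative, not taken from the article:

```python
# Worked example: free-space path loss (FSPL) shows why millimeter-wave
# coverage is limited relative to sub-6 GHz spectrum. Standard formula:
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

for f in (3.5, 28.0):  # an illustrative sub-6 GHz band vs. a mmWave band
    print(f"{f:5.1f} GHz, 200 m: {fspl_db(0.2, f):6.1f} dB")
# 28 GHz loses 20*log10(28/3.5) ~= 18 dB more than 3.5 GHz at any given
# distance, which is one reason mmWave cells need dense deployment and
# high-gain beamforming.
```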

187 citations

Proceedings Article
01 Jul 2019
TL;DR: DocRED is a document-level relation extraction dataset constructed from Wikipedia and Wikidata with three features: (1) DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text; (2) DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document; and (3) along with the human-annotated data, DocRED also offers large-scale distantly supervised data, which enables it to be adopted for both supervised and weakly supervised scenarios.
Abstract: Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features: (1) DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text; (2) DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document; (3) along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios. In order to verify the challenges of document-level RE, we implement recent state-of-the-art RE methods and conduct a thorough evaluation of them on DocRED. Empirical results show that DocRED is challenging for existing RE methods, which indicates that document-level RE remains an open problem and requires further effort. Based on a detailed analysis of the experiments, we discuss multiple promising directions for future research. We make DocRED and the code for our baselines publicly available at https://github.com/thunlp/DocRED.
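
To illustrate what "inter-sentence" means in practice, here is a hypothetical document-level RE example in Python. The field names (sents, entities, labels, evidence) are assumptions for exposition and not necessarily DocRED's exact schema; see the repository above for the real format.

```python
# Hypothetical sketch of a document-level RE example, illustrating why
# inter-sentence relations need evidence from multiple sentences.
doc = {
    "sents": [
        "Huawei was founded in 1987.",
        "The company is headquartered in Shenzhen.",
    ],
    "entities": [
        {"name": "Huawei", "mentions": [{"sent_id": 0}, {"sent_id": 1}]},
        {"name": "Shenzhen", "mentions": [{"sent_id": 1}]},
    ],
    # head entity index, tail entity index, relation, evidence sentences
    "labels": [{"h": 0, "t": 1, "r": "headquarters location",
                "evidence": [0, 1]}],
}

def is_inter_sentence(label: dict) -> bool:
    """A relation is inter-sentence if its evidence spans >1 sentence."""
    return len(set(label["evidence"])) > 1

# The relation above requires resolving "The company" back to Huawei,
# i.e. synthesizing information across sentences.
print([is_inter_sentence(lab) for lab in doc["labels"]])  # [True]
```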

187 citations

Proceedings Article
Zichao Li, Xin Jiang, Lifeng Shang, Hang Li
01 Jan 2018
TL;DR: Experimental results on two datasets demonstrate that the proposed models can produce more accurate paraphrases and outperform the state-of-the-art methods in paraphrase generation in both automatic and human evaluation.
Abstract: Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP). In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning, in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of training data available. Experimental results on two datasets demonstrate that the proposed models (the generators) can produce more accurate paraphrases and outperform the state-of-the-art methods in paraphrase generation in both automatic and human evaluation.
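
A minimal sketch of the generator-evaluator control flow described above, assuming stand-in callables where the paper uses a seq2seq generator and a deep matching evaluator. The function names, the sampling scheme, and the baseline for variance reduction are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of an evaluator-as-reward fine-tuning step.
import random
from typing import Callable, List

def rl_finetune_step(
    generate: Callable[[str], List[str]],       # samples paraphrases
    evaluate: Callable[[str, str], float],      # paraphrase reward in [0, 1]
    update: Callable[[str, str, float], None],  # policy-gradient update
    sentence: str,
) -> None:
    """One step: sample candidates, score them with the evaluator as the
    reward, and push the generator toward high-reward paraphrases."""
    candidates = generate(sentence)
    baseline = sum(evaluate(sentence, c) for c in candidates) / len(candidates)
    for cand in candidates:
        advantage = evaluate(sentence, cand) - baseline  # variance reduction
        update(sentence, cand, advantage)

# Toy usage with stub models, just to show the control flow.
gen = lambda s: [s, s.lower(), s.upper()]
ev = lambda s, c: random.random()
rl_finetune_step(gen, ev, lambda s, c, a: None, "How old are you?")
```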

186 citations


Authors

Showing all 41483 results

Name | H-index | Papers | Citations
Yu Huang | 136 | 1,492 | 89,209
Xiaoou Tang | 132 | 553 | 94,555
Xiaogang Wang | 128 | 452 | 73,740
Shaobin Wang | 126 | 872 | 52,463
Qiang Yang | 112 | 1,117 | 71,540
Wei Lu | 111 | 1,973 | 61,911
Xuemin Shen | 106 | 1,221 | 44,959
Li Chen | 105 | 1,732 | 55,996
Lajos Hanzo | 101 | 2,040 | 54,380
Luca Benini | 101 | 1,453 | 47,862
Lei Liu | 98 | 2,041 | 51,163
Tao Wang | 97 | 2,720 | 55,280
Mohamed-Slim Alouini | 96 | 1,788 | 62,290
Qi Tian | 96 | 1,030 | 41,010
Merouane Debbah | 96 | 652 | 41,140
Network Information
Network Information
Related Institutions (5)

Alcatel-Lucent: 53.3K papers, 1.4M citations, 90% related
Bell Labs: 59.8K papers, 3.1M citations, 88% related
Hewlett-Packard: 59.8K papers, 1.4M citations, 87% related
Microsoft: 86.9K papers, 4.1M citations, 87% related
Intel: 68.8K papers, 1.6M citations, 87% related

Performance Metrics

No. of papers from the institution in previous years:

Year | Papers
2023 | 19
2022 | 66
2021 | 2,069
2020 | 3,277
2019 | 4,570
2018 | 4,476