Institution
Huawei
Company • Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for its research contributions in the topics Terminal (electronics) and Signal. The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.
Papers published on a yearly basis
Papers
01 Nov 2014
TL;DR: Several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies are discussed, and solutions for initial access for reliable network access and satisfactory coverage are presented.
Abstract: Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used, but this role of beamforming is different from that in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies.
151 citations
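The initial-access idea above rests on a simple mechanism: before the base station knows where a user is, it sweeps a codebook of directional beams and keeps the one with the strongest measured power. A minimal sketch, assuming a toy Gaussian-mainlobe gain model (the function names and beamwidth are illustrative, not from the paper):

```python
import math

def beam_gain(beam_angle, user_angle, beamwidth=0.2):
    """Toy gain model: Gaussian mainlobe centered on the beam pointing angle."""
    return math.exp(-((beam_angle - user_angle) ** 2) / (2 * beamwidth ** 2))

def sweep(codebook, user_angle):
    """Exhaustive beam sweep: measure every beam, keep the strongest."""
    return max(codebook, key=lambda b: beam_gain(b, user_angle))

# 16-beam codebook covering [-pi/2, pi/2]
codebook = [-math.pi / 2 + i * math.pi / 15 for i in range(16)]
best = sweep(codebook, user_angle=0.3)
```

With a monotone gain model the sweep always returns the codebook beam nearest the user's angle, which is why the paper's design question is not whether sweeping works but how to make it fast enough during cell discovery.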
01 Aug 2019
TL;DR: A query-aware database tuning system, QTune, with a deep reinforcement learning (DRL) model, which can efficiently and effectively tune database configurations based on both the query vector and database states, and outperforms state-of-the-art tuning methods.
Abstract: Database knob tuning is important to achieve high performance (e.g., high throughput and low latency). However, knob tuning is an NP-hard problem and existing methods have several limitations. First, DBAs cannot tune many database instances on different environments (e.g., different database vendors). Second, traditional machine-learning methods either cannot find good configurations or rely on a large number of high-quality training examples, which are rather hard to obtain. Third, they only support coarse-grained tuning (e.g., workload-level tuning) but cannot provide fine-grained tuning (e.g., query-level tuning). To address these problems, we propose a query-aware database tuning system QTune with a deep reinforcement learning (DRL) model, which can efficiently and effectively tune the database configurations. QTune first featurizes the SQL queries by considering rich features of the SQL queries. Then QTune feeds the query features into the DRL model to choose suitable configurations. We propose a Double-State Deep Deterministic Policy Gradient (DS-DDPG) model to enable query-aware database configuration tuning, which utilizes the actor-critic networks to tune the database configurations based on both the query vector and database states. QTune provides three database tuning granularities: query-level, workload-level, and cluster-level tuning. We deployed our techniques onto three real database systems, and experimental results show that QTune achieves high performance and outperforms the state-of-the-art tuning methods.
150 citations
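The core loop the abstract describes is: featurize the query, then search knob space for a low-latency configuration. A minimal sketch of that query-aware loop, with simple random search standing in for QTune's DS-DDPG actor-critic and a synthetic latency model standing in for a real database (the knob names and cost formula are illustrative assumptions, not from the paper):

```python
import random

# Hypothetical knobs with (min, max) ranges; not QTune's actual knob set.
KNOBS = {"work_mem_mb": (4, 512), "shared_buffers_mb": (64, 4096)}

def featurize(query):
    """Crude query vector: does the query join / sort / scan?"""
    q = query.lower()
    return [int("join" in q), int("order by" in q), int("select" in q)]

def synthetic_latency(config, features):
    """Toy cost model: sorts benefit from work_mem, scans from shared_buffers."""
    joins, sorts, scans = features
    cost = 100.0
    cost -= sorts * 20 * min(config["work_mem_mb"] / 512, 1.0)
    cost -= scans * 30 * min(config["shared_buffers_mb"] / 4096, 1.0)
    cost += joins * 5  # joins add fixed overhead in this toy model
    return cost

def tune(query, trials=200, seed=0):
    """Query-aware tuning: search for knob settings that minimise latency."""
    rng = random.Random(seed)
    feats = featurize(query)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.randint(lo, hi) for k, (lo, hi) in KNOBS.items()}
        cost = synthetic_latency(cfg, feats)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

cfg, cost = tune("SELECT * FROM t ORDER BY x")
```

The DRL contribution is precisely in replacing this blind search with a learned policy that generalises across queries, which is what makes query-level granularity tractable.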
04 Mar 2012
TL;DR: The first layered decoder for LDPC convolutional codes designed for application in high-speed optical transmission systems was successfully realized.
Abstract: We successfully realized layered decoding for LDPC convolutional codes designed for application in high-speed optical transmission systems. A relatively short code with 20% redundancy was FPGA-emulated with a Q-factor of 5.7 dB at a BER of 10^-15.
150 citations
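Layered decoding differs from standard flooding belief propagation in that check rows are processed sequentially, with posterior LLRs updated in place after each row, which roughly halves the iterations needed. A minimal sketch of layered min-sum decoding on a small (7,4) Hamming parity-check matrix, as an illustration of the schedule only (the paper's codes are much longer LDPC convolutional codes and the hardware realization is FPGA-based):

```python
def layered_min_sum(H, llr, iters=5):
    """Layered min-sum decoding: process check rows one at a time,
    updating the posterior LLRs in place after each row (layer)."""
    m, n = len(H), len(llr)
    L = list(llr)                      # posterior LLRs, updated per layer
    R = [[0.0] * n for _ in range(m)]  # stored check-to-variable messages
    for _ in range(iters):
        for row in range(m):
            idx = [j for j in range(n) if H[row][j]]
            Q = {j: L[j] - R[row][j] for j in idx}  # strip old message
            for j in idx:
                others = [Q[k] for k in idx if k != j]
                sign = 1.0
                for v in others:
                    sign = -sign if v < 0 else sign
                R[row][j] = sign * min(abs(v) for v in others)
                L[j] = Q[j] + R[row][j]  # in-place posterior update
    return [0 if v >= 0 else 1 for v in L]

# Parity-check matrix of the (7,4) Hamming code (toy stand-in)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
# All-zeros codeword with one channel error on bit 0 (negative LLR)
llr = [-1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
decoded = layered_min_sum(H, llr)
```

The in-place update of `L` after each row is what distinguishes the layered schedule and is also what makes it attractive for pipelined hardware.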
TL;DR: It is found that code with bugs tends to be more entropic (i.e. unnatural), becoming less so as bugs are fixed, suggesting that entropy may be a valid, simple way to complement the effectiveness of PMD or FindBugs, and that search-based bug-fixing methods may benefit from using entropy both for fault-localization and searching for fixes.
Abstract: Real software, the kind working programmers produce by the kLOC to solve real-world problems, tends to be "natural", like speech or natural language; it tends to be highly repetitive and predictable. Researchers have captured this naturalness of software through statistical models and used them to good effect in suggestion engines, porting tools, coding standards checkers, and idiom miners. This suggests that code that appears improbable, or surprising, to a good statistical language model is "unnatural" in some sense, and thus possibly suspicious. In this paper, we investigate this hypothesis. We consider a large corpus of bug fix commits (ca. 8,296), from 10 different Java projects, and we focus on its language statistics, evaluating the naturalness of buggy code and the corresponding fixes. We find that code with bugs tends to be more entropic (i.e., unnatural), becoming less so as bugs are fixed. Focusing on highly entropic lines is similar in cost-effectiveness to some well-known static bug finders (PMD, FindBugs) and ordering warnings from these bug finders using an entropy measure improves the cost-effectiveness of inspecting code implicated in warnings. This suggests that entropy may be a valid language-independent and simple way to complement the effectiveness of PMD or FindBugs, and that search-based bug-fixing methods may benefit from using entropy both for fault-localization and searching for fixes.
150 citations
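The "entropy of a line" idea can be made concrete with a tiny token-bigram language model: train on a corpus of code lines, then score each line by its average negative log-probability, so lines containing unusual token sequences (like a typo) come out more entropic. A minimal sketch, assuming a toy corpus (the paper uses much larger n-gram models over real Java projects):

```python
import math
from collections import Counter

def train_bigram(corpus_lines):
    """Token-bigram counts with add-one smoothing over a small vocabulary."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for line in corpus_lines:
        toks = ["<s>"] + line.split()
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams, vocab

def line_entropy(line, model):
    """Average negative log2 probability of the line's token bigrams."""
    bigrams, unigrams, vocab = model
    toks = ["<s>"] + line.split()
    V = len(vocab) + 1  # add-one smoothing denominator
    logps = []
    for a, b in zip(toks, toks[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + V)
        logps.append(-math.log2(p))
    return sum(logps) / len(logps)

corpus = ["for i in range ( n ) :"] * 50 + ["if x is None :"] * 50
model = train_bigram(corpus)
common = line_entropy("for i in range ( n ) :", model)
buggy = line_entropy("for i in rnage ( n ) :", model)  # typo token
```

Because `rnage` never appears in the training corpus, its bigrams fall back to the smoothed floor probability, so the typo line scores strictly higher entropy, which is exactly the signal the paper exploits for fault localization.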
22 Aug 2016
TL;DR: This paper presents CODA, a first attempt at automatically identifying and scheduling coflows without any application-level modifications; it employs an incremental clustering algorithm to perform fast, application-transparent coflow identification and proposes an error-tolerant coflow scheduler to mitigate occasional identification errors.
Abstract: Leveraging application-level requirements using coflows has recently been shown to improve application-level communication performance in data-parallel clusters. However, existing coflow-based solutions rely on modifying applications to extract coflows, making them inapplicable to many practical scenarios. In this paper, we present CODA, a first attempt at automatically identifying and scheduling coflows without any application-level modifications. We employ an incremental clustering algorithm to perform fast, application-transparent coflow identification and complement it by proposing an error-tolerant coflow scheduler to mitigate occasional identification errors. Testbed experiments and large-scale simulations with production workloads show that CODA can identify coflows with over 90% accuracy, and its scheduler is robust to inaccuracies, enabling communication stages to complete 2.4x (5.1x) faster on average (95th percentile) compared to per-flow mechanisms. Overall, CODA's performance is comparable to that of solutions requiring application modifications.
150 citations
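The identification step above amounts to incremental clustering: as flows arrive, each one either joins an existing cluster of related flows or starts a new one. A minimal sketch, assuming flows are grouped by destination port and start-time proximity (CODA's real algorithm uses richer traffic attributes and learned distance metrics; the `gap` threshold and attributes here are illustrative assumptions):

```python
def identify_coflows(flows, gap=1.0):
    """Incremental clustering: a new flow joins an existing coflow if it
    starts within `gap` seconds of that cluster's latest start time and
    shares an application attribute (here: the same destination port)."""
    coflows = []  # each: {"port": p, "last_start": t, "flows": [...]}
    for flow in sorted(flows, key=lambda f: f["start"]):
        placed = False
        for c in coflows:
            if c["port"] == flow["port"] and flow["start"] - c["last_start"] <= gap:
                c["flows"].append(flow)
                c["last_start"] = flow["start"]
                placed = True
                break
        if not placed:
            coflows.append({"port": flow["port"],
                            "last_start": flow["start"],
                            "flows": [flow]})
    return coflows

# Three flows from one shuffle stage, then a stray flow much later
flows = [{"start": 0.0, "port": 5000}, {"start": 0.4, "port": 5000},
         {"start": 0.9, "port": 5000}, {"start": 5.0, "port": 5000}]
clusters = identify_coflows(flows)
```

Misclassifications are inevitable with any such heuristic grouping, which is why the paper pairs identification with an error-tolerant scheduler rather than assuming the clusters are exact.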
Authors
Name | H-index | Papers | Citations
--- | --- | --- | ---
Yu Huang | 136 | 1492 | 89209 |
Xiaoou Tang | 132 | 553 | 94555 |
Xiaogang Wang | 128 | 452 | 73740 |
Shaobin Wang | 126 | 872 | 52463 |
Qiang Yang | 112 | 1117 | 71540 |
Wei Lu | 111 | 1973 | 61911 |
Xuemin Shen | 106 | 1221 | 44959 |
Li Chen | 105 | 1732 | 55996 |
Lajos Hanzo | 101 | 2040 | 54380 |
Luca Benini | 101 | 1453 | 47862 |
Lei Liu | 98 | 2041 | 51163 |
Tao Wang | 97 | 2720 | 55280 |
Mohamed-Slim Alouini | 96 | 1788 | 62290 |
Qi Tian | 96 | 1030 | 41010 |
Merouane Debbah | 96 | 652 | 41140 |