
Huawei

Company · Shenzhen, China
About: Huawei is a company based in Shenzhen, China. It is known for research contributions in the topics of Terminal (electronics) and Signal. The organization has 41,417 authors who have published 44,698 publications receiving 343,496 citations. The organization is also known as Huawei Technologies and Huawei Technologies Co., Ltd.


Papers
Patent
Fang Chen, Chengzhen Sun, Xiaoqin Duan
15 Nov 2007
TL;DR: In this article, a method and system for interworking between SIP messages and conventional short messages are presented, using an entity SMI AS for message interworking that authorizes the interworking service between SIP messages and conventional network messages, transforms the message format, and stores and forwards messages.
Abstract: A message interworking method and system for SIP messages and conventional short messages use an entity, SMI AS, for message interworking to authorize the interworking service between SIP messages and conventional network messages, to transform the message format, and to store and forward messages. Herein, SMI AS can be a newly added network entity, or a newly added function module in an existing network entity, and it provides service for users through a third registration of the users. Applying the present invention enables an IMS-only terminal, which does not support the conventional short message service, to achieve interworking of the message service with a conventional terminal, enriching the variety of services. The entity for message interworking is also provided. Additionally, a message delivery report processing method and system are also provided.
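To make the interworking flow concrete, here is a minimal, illustrative Python sketch of an SMI-AS-style entity that authorizes a sender, converts a SIP message into a short-message record, and queues it for store-and-forward delivery. The class names, fields, and the 160-character truncation are assumptions for illustration only, not details taken from the patent.

```python
# Toy SMI-AS-style gateway: authorize, transform, store-and-forward.
# Illustrative only; all names and fields are assumptions, not the patented design.
from dataclasses import dataclass
from collections import deque

@dataclass
class SipMessage:
    sender: str      # e.g. "sip:alice@ims.example.com"
    recipient: str   # e.g. "tel:+491701234567"
    body: str

@dataclass
class ShortMessage:
    originator: str
    destination: str
    text: str        # truncated to an assumed 160-character limit

class SmiAsGateway:
    """Message-interworking entity sketch: authorization, format transform, store and forward."""

    def __init__(self, subscribed_users):
        self.subscribed_users = set(subscribed_users)  # users registered for the interworking service
        self.store = deque()                           # store-and-forward queue

    def authorize(self, msg: SipMessage) -> bool:
        return msg.sender in self.subscribed_users

    def transform(self, msg: SipMessage) -> ShortMessage:
        return ShortMessage(msg.sender, msg.recipient, msg.body[:160])

    def submit(self, msg: SipMessage) -> bool:
        if not self.authorize(msg):
            return False
        self.store.append(self.transform(msg))         # store ...
        return True

    def forward_all(self, deliver) -> None:
        while self.store:                              # ... and forward
            deliver(self.store.popleft())

# Usage: an IMS-only terminal's SIP message delivered as a legacy short message.
gw = SmiAsGateway(["sip:alice@ims.example.com"])
gw.submit(SipMessage("sip:alice@ims.example.com", "tel:+491701234567", "Hello from IMS"))
gw.forward_all(lambda sm: print(f"SMS to {sm.destination}: {sm.text}"))
```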

102 citations

Posted Content
TL;DR: This paper equips adversarial domain adaptation with a Gradually Vanishing Bridge (GVB) mechanism on both the generator and the discriminator, and shows that the GVB methods outperform strong competitors and cooperate well with other adversarial methods.
Abstract: In unsupervised domain adaptation, rich domain-specific characteristics make it challenging to learn domain-invariant representations. However, existing solutions attempt to minimize the domain discrepancy directly, which is difficult to achieve in practice. Some methods alleviate the difficulty by explicitly modeling domain-invariant and domain-specific parts in the representations, but the adverse influence of the explicit construction lies in the residual domain-specific characteristics left in the constructed domain-invariant representations. In this paper, we equip adversarial domain adaptation with a Gradually Vanishing Bridge (GVB) mechanism on both the generator and the discriminator. On the generator, GVB not only reduces the overall transfer difficulty, but also reduces the influence of the residual domain-specific characteristics in the domain-invariant representations. On the discriminator, GVB contributes to enhancing the discriminating ability and balancing the adversarial training process. Experiments on three challenging datasets show that our GVB methods outperform strong competitors and cooperate well with other adversarial methods. The code is available at this https URL.
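As a rough illustration of the bridge idea described in the abstract, the sketch below adds a bridge head whose output models the residual domain-specific part and is scaled by a coefficient that decays over training. The layer sizes, the subtraction of the bridge output from the logits, and the linear decay schedule are assumptions drawn from a plain reading of the abstract, not the authors' released GVB code.

```python
# Minimal PyTorch sketch of a gradually vanishing bridge on the generator side.
# Illustrative reading of the abstract; dimensions and schedule are assumptions.
import torch
import torch.nn as nn

class GVBGenerator(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256, num_classes=31):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)  # main (domain-invariant) head
        self.bridge = nn.Linear(feat_dim, num_classes)      # models the domain-specific residual

    def forward(self, x, bridge_weight):
        feat = self.backbone(x)
        logits = self.classifier(feat)
        residual = self.bridge(feat)
        # Subtract the (gradually vanishing) residual from the prediction.
        return logits - bridge_weight * residual, residual

def bridge_schedule(step, total_steps):
    """Assumed linear decay from 1 to 0 so the bridge's influence vanishes."""
    return max(0.0, 1.0 - step / total_steps)

# Usage sketch: the residual magnitude can also be penalised so the bridge truly vanishes.
model = GVBGenerator()
x = torch.randn(8, 2048)
logits, residual = model(x, bridge_weight=bridge_schedule(step=100, total_steps=1000))
bridge_penalty = residual.abs().mean()
```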

102 citations

Journal ArticleDOI
TL;DR: There is a strong need to evaluate massive MIMO base station (BS) performance with over-the-air (OTA) methods if the technology is to deliver the dramatic improvements in spectral efficiency expected for fifth-generation (5G) deployment in 2020.
Abstract: Massive multiple-input, multiple-output (MIMO) is seen as an enabling technology to fulfill dramatic improvements in spectral efficiency for fifth-generation (5G) deployment in 2020. For massive MIMO systems, the learning loop from early-stage prototype design to final-stage performance validation is expected to be slow and ineffective. There is a strong need to evaluate massive MIMO base station (BS) performance with over-the-air (OTA) methods. Until now, such OTA solutions have not been discussed for massive MIMO BS systems.

102 citations

Proceedings ArticleDOI
Lingyang Chu, Xia Hu, Juhua Hu, Lanjun Wang, Jian Pei
19 Jul 2018
TL;DR: In this paper, the authors propose an elegant closed-form solution named OpenBox to compute exact and consistent interpretations for the family of piecewise linear neural networks (PLNN).
Abstract: Strong intelligent machines powered by deep neural networks are increasingly deployed as black boxes to make decisions in risk-sensitive domains, such as finance and medicine. To reduce potential risk and build trust with users, it is critical to interpret how such machines make their decisions. Existing works interpret a pre-trained neural network by analyzing hidden neurons, mimicking pre-trained models or approximating local predictions. However, these methods do not provide a guarantee on the exactness and consistency of their interpretations. In this paper, we propose an elegant closed-form solution named OpenBox to compute exact and consistent interpretations for the family of Piecewise Linear Neural Networks (PLNN). The major idea is to first transform a PLNN into a mathematically equivalent set of linear classifiers, then interpret each linear classifier by the features that dominate its prediction. We further apply OpenBox to demonstrate the effectiveness of non-negative and sparse constraints on improving the interpretability of PLNNs. Extensive experiments on both synthetic and real-world data sets clearly demonstrate the exactness and consistency of our interpretation.
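The observation underlying this kind of interpretation is that a piecewise linear network is exactly affine inside each of its linear regions, so the locally equivalent linear classifier can be recovered in closed form. The sketch below illustrates that transformation for a small ReLU network with assumed layer shapes; it is a minimal illustration of the idea, not the OpenBox implementation.

```python
# Recover the linear classifier that a ReLU network is exactly equal to
# on the linear region containing a given input. Illustrative shapes only.
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-hidden-layer ReLU network: 4 -> 8 -> 8 -> 3
weights = [rng.standard_normal((8, 4)), rng.standard_normal((8, 8)), rng.standard_normal((3, 8))]
biases  = [rng.standard_normal(8), rng.standard_normal(8), rng.standard_normal(3)]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)       # ReLU hidden layers
    return weights[-1] @ h + biases[-1]      # linear output layer

def local_linear_classifier(x):
    """Return (W_eq, b_eq) with network(x') == W_eq @ x' + b_eq on x's linear region."""
    W_eq, b_eq = np.eye(len(x)), np.zeros(len(x))
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        pre = W @ h + b
        mask = (pre > 0).astype(float)       # fixed ReLU on/off pattern for this region
        W_eq, b_eq = (mask[:, None] * W) @ W_eq, mask * (W @ b_eq + b)
        h = np.maximum(pre, 0.0)
    return weights[-1] @ W_eq, weights[-1] @ b_eq + biases[-1]

x = rng.standard_normal(4)
W_eq, b_eq = local_linear_classifier(x)
assert np.allclose(forward(x), W_eq @ x + b_eq)  # exact agreement on this region
# The rows of W_eq can then be read as the features dominating each class's score.
```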

101 citations

Posted Content
TL;DR: Transformer iN Transformer (TNT), as discussed by the authors, is a visual-transformer architecture that divides the input image into local patches ("visual sentences"), further splits each patch into smaller patches ("visual words"), and computes attention both among the words inside a sentence and among the sentences, encoding the input as powerful features.
Abstract: Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, visual transformers first divide the input images into several local patches and then calculate both their representations and their relationships. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch division is not fine enough for excavating features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with other words in the given visual sentence with negligible computational costs. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve 81.5% top-1 accuracy on ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost. The PyTorch code is available at this https URL, and the MindSpore code is at this https URL.
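A minimal PyTorch sketch of the sentence/word structure described in the abstract: an inner encoder layer attends over the visual words of each sentence, the word features are projected and added back to the sentence embedding, and an outer encoder layer attends over the visual sentences. The dimensions, the single-layer encoders, and the linear merge are illustrative assumptions, not the released TNT code.

```python
# Sketch of one Transformer-iN-Transformer style block (assumed dimensions).
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    def __init__(self, word_dim=24, sentence_dim=192, num_words=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(word_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(sentence_dim, nhead=4, batch_first=True)
        # Merge the words of a sentence back into its sentence embedding.
        self.word_to_sentence = nn.Linear(num_words * word_dim, sentence_dim)

    def forward(self, words, sentences):
        # words: (batch * num_sentences, num_words, word_dim)
        # sentences: (batch, num_sentences, sentence_dim)
        b, s, _ = sentences.shape
        words = self.inner(words)                           # attention among visual words
        merged = self.word_to_sentence(words.flatten(1)).view(b, s, -1)
        sentences = self.outer(sentences + merged)          # attention among visual sentences
        return words, sentences

# Usage sketch: 196 sentences (14x14 patches of 16x16 pixels), 16 words (4x4 sub-patches) each.
words = torch.randn(2 * 196, 16, 24)
sentences = torch.randn(2, 196, 192)
words, sentences = TNTBlock()(words, sentences)
```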

101 citations


Authors


Name | H-index | Papers | Citations
Yu Huang | 136 | 1492 | 89209
Xiaoou Tang | 132 | 553 | 94555
Xiaogang Wang | 128 | 452 | 73740
Shaobin Wang | 126 | 872 | 52463
Qiang Yang | 112 | 1117 | 71540
Wei Lu | 111 | 1973 | 61911
Xuemin Shen | 106 | 1221 | 44959
Li Chen | 105 | 1732 | 55996
Lajos Hanzo | 101 | 2040 | 54380
Luca Benini | 101 | 1453 | 47862
Lei Liu | 98 | 2041 | 51163
Tao Wang | 97 | 2720 | 55280
Mohamed-Slim Alouini | 96 | 1788 | 62290
Qi Tian | 96 | 1030 | 41010
Merouane Debbah | 96 | 652 | 41140
Network Information
Related Institutions (5)
Alcatel-Lucent: 53.3K papers, 1.4M citations (90% related)
Bell Labs: 59.8K papers, 3.1M citations (88% related)
Hewlett-Packard: 59.8K papers, 1.4M citations (87% related)
Microsoft: 86.9K papers, 4.1M citations (87% related)
Intel: 68.8K papers, 1.6M citations (87% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 19
2022 | 66
2021 | 2,069
2020 | 3,277
2019 | 4,570
2018 | 4,476