Institution

Beihang University

Education · Beijing, China
About: Beihang University is an education organization based in Beijing, China. It is known for research contributions in the topics of Control theory and Microstructure. The organization has 67,002 authors who have published 73,507 publications receiving 975,691 citations. The organization is also known as: Beijing University of Aeronautics and Astronautics.


Papers
Posted Content
TL;DR: An efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics, including a ProbSparse self-attention mechanism that achieves $O(L \log L)$ time complexity and memory usage while retaining comparable performance on sequences' dependency alignment.
Abstract: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
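The ProbSparse idea above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: it computes the full score matrix to rank queries (the paper samples keys so that even the ranking step stays at O(L log L)), and the function and variable names are mine.

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Simplified sketch of Informer-style ProbSparse self-attention.

    Only the top-u "active" queries receive full attention; the
    remaining queries fall back to the mean of V, so the expensive
    softmax rows are computed for u = O(log L) queries instead of L.
    """
    L, d = Q.shape
    u = min(L, int(np.ceil(factor * np.log(L))))   # u = c * ln(L)
    scores = Q @ K.T / np.sqrt(d)                  # full (L, L) for clarity;
                                                   # the paper samples keys here
    # Sparsity measurement M(q): max minus mean of each query's score row.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-M)[:u]                       # most "active" queries
    out = np.repeat(V.mean(axis=0, keepdims=True), L, axis=0)
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V                               # full attention for top-u only
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 16))
out = probsparse_attention(x, x, x)
```

The max-minus-mean measurement favors queries whose attention distribution is far from uniform, i.e. the ones that actually carry dependency information; lazy queries are cheaply approximated by the mean of V.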

832 citations

Proceedings ArticleDOI
13 Dec 2010
TL;DR: A detailed study of 11 widely used internal clustering validation measures for crisp clustering, showing that S_Dbw is the only internal validation measure that performs well in all five aspects, while other measures have certain limitations in different application scenarios.
Abstract: Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes: external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. We investigate their validation properties from five conventional aspects of clustering. Experimental results show that S_Dbw is the only internal validation measure that performs well in all five aspects, while other measures have certain limitations in different application scenarios.
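Internal validation scores a clustering from the data alone, trading cluster compactness against separation. S_Dbw itself combines intra-cluster scattering with inter-cluster density; as a minimal stand-in illustrating the same compactness-vs-separation idea (not the S_Dbw formula), here is the classic between/within dispersion ratio (Calinski–Harabasz) in plain NumPy:

```python
import numpy as np

def calinski_harabasz(X, labels):
    # Internal validation: between-cluster vs within-cluster dispersion.
    # Higher values indicate compact, well-separated clusters.
    n = len(X)
    centroid_all = X.mean(axis=0)
    clusters = np.unique(labels)
    k = len(clusters)
    between = within = 0.0
    for c in clusters:
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        between += len(pts) * np.sum((centroid - centroid_all) ** 2)
        within += np.sum((pts - centroid) ** 2)
    return (between / (k - 1)) / (within / (n - k))

# Two well-separated blobs: the true partition should beat a shuffled one.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
true_labels = np.repeat([0, 1], 50)
good = calinski_harabasz(X, true_labels)
bad = calinski_harabasz(X, rng.permutation(true_labels))
```

No external ground truth enters the score; the shuffled labeling scores lower only because its clusters are neither compact nor separated, which is exactly what an internal measure is supposed to detect.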

830 citations

Proceedings ArticleDOI
01 Jun 2014
TL;DR: AdaRNN adaptively propagates the sentiments of words to the target depending on the context and syntactic relationships between them, and it is shown that AdaRNN improves over the baseline methods.
Abstract: We propose the Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification. AdaRNN adaptively propagates the sentiments of words to the target depending on the context and syntactic relationships between them. It consists of multiple composition functions, and we model the adaptive sentiment propagation as distributions over these composition functions. The experimental studies illustrate that AdaRNN improves over the baseline methods. Furthermore, we introduce a manually annotated dataset for target-dependent Twitter sentiment analysis.
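The adaptive composition idea can be sketched numerically: each merge of two child vectors is a softmax-weighted mixture of several candidate composition functions, with the weights predicted from the children themselves. The names, dimensions, and parameterization below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_compose(left, right, Ws, S):
    """AdaRNN-style sketch: combine two child vectors using a
    distribution over composition functions. S predicts the mixture
    weights from the children; each W is one candidate composition."""
    x = np.concatenate([left, right])        # (2d,) stacked children
    probs = softmax(S @ x)                   # distribution over functions
    return sum(p * np.tanh(W @ x) for p, W in zip(probs, Ws))

rng = np.random.default_rng(2)
d, n_funcs = 4, 3
Ws = [rng.normal(scale=0.1, size=(d, 2 * d)) for _ in range(n_funcs)]
S = rng.normal(scale=0.1, size=(n_funcs, 2 * d))
h = adaptive_compose(rng.normal(size=d), rng.normal(size=d), Ws, S)
```

Because the mixture weights depend on the inputs, different syntactic contexts can effectively select different composition behaviors, which is the "adaptive" part of the propagation.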

809 citations

Journal ArticleDOI
01 Mar 2019
TL;DR: Shui et al. report a class of concave Fe–N–C single-atom catalysts possessing an enhanced external surface area and mesoporosity that meets the US Department of Energy's 2018 PGM-free catalyst activity target.
Abstract: To achieve the US Department of Energy 2018 target set for platinum-group metal-free catalysts (PGM-free catalysts) in proton exchange membrane fuel cells, the low density of active sites must be overcome. Here, we report a class of concave Fe–N–C single-atom catalysts possessing an enhanced external surface area and mesoporosity that meets the 2018 PGM-free catalyst activity target, and a current density of 0.047 A cm⁻² at 0.88 V (iR-free) under 1.0 bar H2–O2. This performance stems from the high density of active sites, which is realized through exposing inaccessible Fe–N4 moieties (that is, increasing their utilization) and enhancing the mass transport of the catalyst layer. Further, we establish structure–property correlations that provide a route for designing highly efficient PGM-free catalysts for practical application, achieving a power density of 1.18 W cm⁻² under 2.5 bar H2–O2, and an activity of 129 mA cm⁻² at 0.8 V (iR-free) under 1.0 bar H2–air. Iron single-atom catalysts are among the most promising fuel cell cathode materials in acid electrolyte solution. Now, Shui, Xu and co-workers report concave-shaped Fe–N–C nanoparticles with increased availability of active sites and improved mass transport, meeting the US Department of Energy 2018 target for platinum-group metal-free fuel cell catalysts.

803 citations

Posted Content
TL;DR: This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019), and makes an in-depth analysis of the challenges as well as the technical improvements of recent years.
Abstract: Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetic under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered in this paper, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods. This paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years.

802 citations


Authors


Name               H-index  Papers  Citations
Yi Chen            217      4,342   293,080
H. S. Chen         179      2,401   178,529
Alan J. Heeger     171      913     147,492
Lei Jiang          170      2,244   135,205
Wei Li             158      1,855   124,748
Shu-Hong Yu        144      799     70,853
Jian Zhou          128      3,007   91,402
Chao Zhang         127      3,119   84,711
Igor Katkov        125      972     71,845
Tao Zhang          123      2,772   83,866
Nicholas A. Kotov  123      574     55,210
Shi Xue Dou        122      2,028   74,031
Li Yuan            121      948     67,074
Robert O. Ritchie  120      659     54,692
Haiyan Wang        119      1,674   86,091
Network Information
Related Institutions (5)
Harbin Institute of Technology: 109.2K papers, 1.6M citations (96% related)
Tsinghua University: 200.5K papers, 4.5M citations (92% related)
University of Science and Technology of China: 101K papers, 2.4M citations (92% related)
Nanyang Technological University: 112.8K papers, 3.2M citations (92% related)
City University of Hong Kong: 60.1K papers, 1.7M citations (91% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2024  1
2023  205
2022  1,178
2021  6,767
2020  6,916
2019  7,080