
Harbin Institute of Technology

Education · Harbin, China
About: Harbin Institute of Technology is an education organization based in Harbin, China. It is known for research contributions in the topics of Microstructure & Control theory. The organization has 88,259 authors who have published 109,297 publications, receiving 1,603,393 citations. The organization is also known as HIT.


Papers
Journal ArticleDOI
TL;DR: In this paper, a facile two-step method was adopted to develop an in-situ heterostructure with NiCo-LDH nanowires as the core and NiOOH nanosheets as the shell on carbon fiber cloth.

309 citations

Journal ArticleDOI
TL;DR: In this paper, the resistance to electrochemical oxidation of carbon black (Vulcan XC-72) and chemical vapor deposited multiwalled carbon nanotubes (CVD-MWNTs), both widely used as catalyst supports for low-temperature fuel cells, was investigated by potentiostatic oxidation in 0.5 mol L−1 H2SO4.

308 citations

Proceedings ArticleDOI
TL;DR: DeepGauge as discussed by the authors proposes a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed.
Abstract: Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy on test data. Considering the limited availability of high-quality test data, good accuracy on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.

307 citations
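
The criteria DeepGauge proposes generalize the idea of covering a network's neurons at multiple granularities. Below is a minimal sketch of that idea in NumPy; the function names, thresholds, and toy activations are illustrative assumptions, not the paper's implementation (the original tooling instruments real DL models).

```python
# Hedged sketch of neuron-coverage-style criteria in the spirit of
# DeepGauge. All names, thresholds, and the toy data are illustrative.
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated above `threshold` by at least one
    test input. `activations` has shape (num_inputs, num_neurons)."""
    covered = (activations > threshold).any(axis=0)
    return covered.sum() / covered.size

def k_multisection_coverage(activations, low, high, k=10):
    """k-multisection-style coverage: split each neuron's training-time
    activation range [low, high] into k sections and count the sections
    hit by at least one test input. Out-of-range values are clipped here
    for simplicity (DeepGauge tracks them via boundary coverage)."""
    bins = np.clip(((activations - low) / (high - low) * k).astype(int), 0, k - 1)
    hit = np.zeros((activations.shape[1], k), dtype=bool)
    for j in range(activations.shape[1]):
        hit[j, bins[:, j]] = True
    return hit.sum() / hit.size

# Toy example: 100 test inputs against one 32-neuron layer.
rng = np.random.default_rng(0)
acts = rng.random((100, 32))
print(f"neuron coverage:         {neuron_coverage(acts):.2%}")
print(f"k-multisection coverage: {k_multisection_coverage(acts, 0.0, 1.0):.2%}")
```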

Journal ArticleDOI
TL;DR: By combining the universal approximation ability of radial basis function neural networks and the adaptive backstepping technique with the common stochastic Lyapunov function method, an adaptive control algorithm is proposed for the considered system, and it is shown that the target signal can be almost surely tracked by the system output.

307 citations
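
The scheme rests on the universal approximation property of RBF networks: an unknown nonlinearity f(x) is modeled as W^T S(x) with a Gaussian basis vector S(x), and the weights W are updated online by an adaptive law derived from the Lyapunov analysis. The sketch below only illustrates the approximation building block with an offline least-squares fit; the centers, width, and target function are assumed for illustration, and the online adaptive law is not reproduced.

```python
# Illustrative sketch of the RBF approximation f(x) ~ W^T S(x) that
# underlies such adaptive neural backstepping controllers. Centers,
# width, and the target nonlinearity are assumptions for the demo.
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian basis vector S(x) for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(-3, 3, 11)   # fixed grid of basis centers
xs = np.linspace(-3, 3, 200)
target = np.sin(xs)                # stand-in for the unknown nonlinearity

# Offline least-squares fit of W; the actual controller would update W
# online through a Lyapunov-based adaptive law instead.
Phi = np.stack([rbf_features(x, centers) for x in xs])
W, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("max approximation error:", np.abs(Phi @ W - target).max())
```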

Proceedings ArticleDOI
19 Feb 2020
TL;DR: CodeBERT as mentioned in this paper is a pre-trained model for natural language code search and code documentation generation with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators.
Abstract: We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search and code documentation generation. We develop CodeBERT with a Transformer-based neural architecture and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both “bimodal” data of NL-PL pairs and “unimodal” data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing and evaluate it in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.

307 citations
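
The released checkpoint can be loaded through the HuggingFace Transformers library as `microsoft/codebert-base`. The sketch below embeds an NL query together with a code snippet; pooling by the first token is one common convention for such encoders, assumed here rather than taken from the paper.

```python
# Hedged sketch: embedding an NL-PL pair with the public CodeBERT
# checkpoint. First-token pooling is an assumption for the demo.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl_query = "return the maximum value in a list"
code_snippet = "def max_value(xs): return max(xs)"

# CodeBERT is bimodal: the NL and PL segments are fed as one pair.
inputs = tokenizer(nl_query, code_snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# First-token vector as a joint NL-PL representation, e.g. for ranking
# code snippets against a natural language search query.
embedding = outputs.last_hidden_state[:, 0, :]
print(embedding.shape)  # torch.Size([1, 768])
```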


Authors

Showing all 89,023 results; the 15 highest-H-index authors:

Name                    H-index  Papers  Citations
Jiaguo Yu                   178     730    113,300
Lei Jiang                   170   2,244    135,205
Gang Chen                   167   3,372    149,819
Xiang Zhang                 154   1,733    117,576
Hui-Ming Cheng              147     880    111,921
Yi Yang                     143   2,456     92,268
Bruce E. Logan              140     591     77,351
Bin Liu                     138   2,181     87,085
Peng Shi                    137   1,371     65,195
Hui Li                      135   2,982    105,903
Lei Zhang                   135   2,240     99,365
Jie Liu                     131   1,531     68,891
Lei Zhang                   130   2,312     86,950
Zhen Li                     127   1,712     71,351
Kurunthachalam Kannan       126     820     59,886
Network Information
Related Institutions (5)
South China University of Technology: 69.4K papers, 1.2M citations, 95% related
Tianjin University: 79.9K papers, 1.2M citations, 95% related
Tsinghua University: 200.5K papers, 4.5M citations, 94% related
University of Science and Technology of China: 101K papers, 2.4M citations, 94% related
Nanyang Technological University: 112.8K papers, 3.2M citations, 93% related

Performance Metrics

No. of papers from the institution in previous years:

Year    Papers
2023       383
2022     1,896
2021    10,085
2020     9,817
2019     9,659
2018     8,215