Institution

Harbin Institute of Technology

Education | Harbin, China
About: Harbin Institute of Technology is an education organization based in Harbin, China. It is known for its research contributions in the topics of Microstructure and Control theory. The organization has 88,259 authors who have published 109,297 publications, receiving 1,603,393 citations. The organization is also known as HIT.


Papers
Proceedings Article
03 Sep 2018
TL;DR: DeepGauge, a set of multi-granularity testing criteria for DL systems, is proposed; it aims to render a multi-faceted portrayal of the testbed and sheds light on the construction of more generic and robust DL systems.
Abstract: Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.

380 citations
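The coverage-style testing criteria described in the DeepGauge abstract can be pictured with a short sketch. The function below computes a basic activation-coverage ratio over a test suite; the fixed threshold and the single-granularity definition are simplifying assumptions for illustration, not DeepGauge's actual multi-granularity criteria.

```python
# Illustrative activation-coverage sketch for a DL test suite.
# NOTE: the threshold and single-granularity definition are assumptions;
# DeepGauge defines several finer-grained criteria than this.
import numpy as np

def neuron_activation_coverage(activations: np.ndarray, threshold: float = 0.0) -> float:
    """Fraction of neurons activated above `threshold` by at least one test input.

    activations: array of shape (num_test_inputs, num_neurons), e.g. hidden-layer
    outputs collected while running the test suite through the network.
    """
    covered = (activations > threshold).any(axis=0)  # per neuron: ever activated?
    return float(covered.mean())                     # coverage ratio in [0, 1]

# Stand-in activations for 100 test inputs and 512 neurons.
acts = np.random.randn(100, 512)
print(f"coverage: {neuron_activation_coverage(acts, threshold=1.5):.2%}")
```

A low ratio suggests the test data exercises only a small part of the network's internal states, which is the kind of signal such criteria are meant to surface.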

Journal Article
TL;DR: A pilot-scale study of bio-hydrogen production was performed in a continuous-flow anaerobic fermentation reactor (available volume of 1.48 m³) for over 200 days.

377 citations

Posted Content
TL;DR: Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks; the model is also shown to prefer structure-level attention over token-level attention in the code search task.
Abstract: Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, code summarization, etc. However, existing pre-trained models regard a code snippet as a sequence of tokens, while ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code. Instead of taking syntactic-level structure of code like abstract syntax tree (AST), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables. Such a semantic-level structure is neat and does not bring an unnecessarily deep hierarchy of AST, the property of which makes the model more efficient. We develop GraphCodeBERT based on Transformer. In addition to using the task of masked language modeling, we introduce two structure-aware pre-training tasks. One is to predict code structure edges, and the other is to align representations between source code and code structure. We implement the model in an efficient way with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and newly introduced pre-training tasks can improve GraphCodeBERT and achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search.

377 citations
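The graph-guided masked attention mentioned in the GraphCodeBERT abstract can be thought of as a boolean mask over positions: code tokens attend to each other as usual, while data-flow (variable) nodes attend only along data-flow edges and to the code tokens they were extracted from. The sketch below builds such a mask; the index layout and the symmetric-edge rule are illustrative assumptions, not GraphCodeBERT's exact implementation.

```python
# Sketch of a graph-guided attention mask: code tokens attend freely, variable
# nodes attend only along data-flow edges and to their aligned source tokens.
# Index layout and edge handling here are assumptions for illustration.
import numpy as np

def build_attention_mask(num_code_tokens, num_flow_nodes, flow_edges, alignments):
    """Return a boolean mask M where M[i, j] = True means position i may attend to j.

    flow_edges:  (src_node, dst_node) pairs among data-flow nodes
                 ("where-the-value-comes-from" relations).
    alignments:  (node, code_token) pairs linking a variable node to the code
                 token it was extracted from.
    """
    n = num_code_tokens + num_flow_nodes
    mask = np.zeros((n, n), dtype=bool)

    # Code tokens attend to all code tokens (ordinary sequence attention).
    mask[:num_code_tokens, :num_code_tokens] = True

    # Data-flow edges between variable nodes.
    for src, dst in flow_edges:
        i, j = num_code_tokens + src, num_code_tokens + dst
        mask[i, j] = mask[j, i] = True

    # Node-token alignment: a variable node and its source token see each other.
    for node, tok in alignments:
        i = num_code_tokens + node
        mask[i, tok] = mask[tok, i] = True

    np.fill_diagonal(mask, True)  # every position attends to itself
    return mask

# Tiny example: 4 code tokens, 2 variable nodes, one data-flow edge 0 -> 1.
m = build_attention_mask(4, 2, flow_edges=[(0, 1)], alignments=[(0, 1), (1, 3)])
print(m.astype(int))
```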

Journal Article
TL;DR: Qualitative and quantitative experimental results on three typical image fusion tasks validate the effectiveness and universality of U2Fusion, a unified model that is applicable to multiple fusion tasks.
Abstract: This study proposes a novel unified and unsupervised end-to-end image fusion network, termed as U2Fusion, which is capable of solving different fusion problems, including multi-modal, multi-exposure, and multi-focus cases. Using feature extraction and information measurement, U2Fusion automatically estimates the importance of corresponding source images and comes up with adaptive information preservation degrees. Hence, different fusion tasks are unified in the same framework. Based on the adaptive degrees, a network is trained to preserve the adaptive similarity between the fusion result and source images. Therefore, the stumbling blocks in applying deep learning for image fusion, e.g., the requirement of ground-truth and specifically designed metrics, are greatly mitigated. By avoiding the loss of previous fusion capabilities when training a single model for different tasks sequentially, we obtain a unified model that is applicable to multiple fusion tasks. Moreover, a new aligned infrared and visible image dataset, RoadScene (available at https://github.com/hanna-xu/RoadScene), is released to provide a new option for benchmark evaluation. Qualitative and quantitative experimental results on three typical image fusion tasks validate the effectiveness and universality of U2Fusion. Our code is publicly available at https://github.com/hanna-xu/U2Fusion.

377 citations
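The adaptive information preservation idea in the U2Fusion abstract can be sketched as a weighted similarity loss, where each source image's weight comes from an information measurement on that image. The gradient-magnitude measure and softmax weighting below are illustrative assumptions, not the measurement U2Fusion actually uses.

```python
# Minimal sketch of adaptive weighting for image fusion: estimate how
# informative each source is, turn that into preservation weights, and use the
# weights in a similarity loss between the fused result and each source.
# The gradient-magnitude measure and softmax are assumptions, not U2Fusion's.
import numpy as np

def information_measure(img: np.ndarray) -> float:
    # Mean gradient magnitude as a crude proxy for information content.
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def adaptive_fusion_loss(fused, src1, src2, temperature=1.0):
    # Information-based preservation degrees (normalized via softmax).
    c = np.array([information_measure(src1), information_measure(src2)]) / temperature
    w = np.exp(c - c.max())
    w /= w.sum()
    # Weighted similarity between the fusion result and each source image.
    mse1 = np.mean((fused - src1) ** 2)
    mse2 = np.mean((fused - src2) ** 2)
    return w[0] * mse1 + w[1] * mse2

# Example with random stand-in images.
a, b = np.random.rand(64, 64), np.random.rand(64, 64)
print(adaptive_fusion_loss((a + b) / 2, a, b))
```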

Journal Article
TL;DR: In this paper, a Co(III) rich-Co3O4 nanorod material with improved electrochemical kinetics was reported, achieving a high voltage of 2.2 V, a capacity of 205 mA h g−1 (Co3O4) and an extreme cycling stability of 92% capacity retention even after 5000 cycles.
Abstract: The Zn/Co3O4 battery is one of the few aqueous electrolyte batteries with a potential >2 V voltage. Unfortunately, so far, all reported Zn/Co3O4 batteries are using an alkaline electrolyte, resulting in poor cycling stability and environmental problems. Here, we report a Co(III) rich-Co3O4 nanorod material with vastly improved electrochemical kinetics. Zn/Co(III) rich-Co3O4 batteries can work well in ZnSO4 with a CoSO4 additive aqueous solution as a mild electrolyte, delivering a high voltage of 2.2 V, a capacity of 205 mA h g−1 (Co3O4) and an extreme cycling stability of 92% capacity retention even after 5000 cycles. Further mechanistic study reveals a conversion reaction between in situ formed CoO and Co3O4, which has never been observed in an alkaline Zn/Co3O4 battery. Subsequently, a flexible solid-state battery is constructed and reveals high flexibility and a high energy density of 360.8 W h kg−1 at a current density of 0.5 A g−1. Our research initiates the first Zn/Co3O4 battery working in a mild electrolyte, resulting in excellent electrochemical performance. It also indicates that the electrochemical kinetics can be effectively enhanced by fine tuning the atomic structure of electrode materials, opening a new door to improve the performance of aqueous electrolyte batteries.

376 citations
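As a rough consistency check on the numbers above, gravimetric energy density scales as specific capacity times average discharge voltage (mA h g−1 × V = W h kg−1). The snippet below does that arithmetic on a cathode-mass basis; the reported 360.8 W h kg−1 refers to the flexible solid-state device at 0.5 A g−1, so its mass basis and average voltage presumably differ from this simple estimate.

```python
# Back-of-the-envelope: energy density ~= specific capacity x average voltage.
# Assumption: Co3O4 (cathode) mass basis and the full 2.2 V as average voltage;
# the paper's 360.8 W h kg-1 figure is for the solid-state device at 0.5 A g-1.
capacity_mAh_per_g = 205            # reported capacity (per gram of Co3O4)
avg_voltage_V = 2.2                 # reported cell voltage
energy_Wh_per_kg = capacity_mAh_per_g * avg_voltage_V   # mAh/g * V = Wh/kg
print(f"{energy_Wh_per_kg:.0f} W h kg-1 on a cathode-mass basis")  # ~451
```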


Authors


Name | H-index | Papers | Citations
Jiaguo Yu | 178 | 730 | 113,300
Lei Jiang | 170 | 2,244 | 135,205
Gang Chen | 167 | 3,372 | 149,819
Xiang Zhang | 154 | 1,733 | 117,576
Hui-Ming Cheng | 147 | 880 | 111,921
Yi Yang | 143 | 2,456 | 92,268
Bruce E. Logan | 140 | 591 | 77,351
Bin Liu | 138 | 2,181 | 87,085
Peng Shi | 137 | 1,371 | 65,195
Hui Li | 135 | 2,982 | 105,903
Lei Zhang | 135 | 2,240 | 99,365
Jie Liu | 131 | 1,531 | 68,891
Lei Zhang | 130 | 2,312 | 86,950
Zhen Li | 127 | 1,712 | 71,351
Kurunthachalam Kannan | 126 | 820 | 59,886
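For readers unfamiliar with the H-index column above, it can be recomputed from an author's per-paper citation counts: an author has index h when h of their papers each have at least h citations. A minimal sketch follows (the citation list is invented for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```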
Network Information
Related Institutions (5)
South China University of Technology
69.4K papers, 1.2M citations

95% related

Tianjin University
79.9K papers, 1.2M citations

95% related

Tsinghua University
200.5K papers, 4.5M citations

94% related

University of Science and Technology of China
101K papers, 2.4M citations

94% related

Nanyang Technological University
112.8K papers, 3.2M citations

93% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 383
2022 | 1,896
2021 | 10,085
2020 | 9,817
2019 | 9,659
2018 | 8,215