
Showing papers by "Beihang University" published in 2020


Posted Content
TL;DR: An efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics, among them a $ProbSparse$ self-attention mechanism that achieves $O(L \log L)$ in time complexity and memory usage and has comparable performance on sequences' dependency alignment.
Abstract: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
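
The O(L log L) claim rests on letting only a few "dominant" queries attend over the full sequence while the remaining queries fall back to a trivial output. The NumPy sketch below is only an illustration of that query-sparsification idea, not the authors' released implementation; the sampling sizes and the max-minus-mean sparsity score are assumptions chosen for brevity.

import numpy as np

def probsparse_attention(Q, K, V, c=5):
    """Illustrative query-sparse attention in the O(L log L) spirit (not the paper's code).

    Q, K, V: (L, d) arrays. Only the top-u "dominant" queries (u ~ c*ln L) get full
    attention; the other queries simply output the mean of V.
    """
    L, d = Q.shape
    u = min(L, max(1, int(c * np.log(L))))        # number of active queries (assumed budget)
    m = min(L, max(1, int(c * np.log(L))))        # number of sampled keys per query
    idx = np.random.choice(L, m, replace=False)
    scores_sample = Q @ K[idx].T / np.sqrt(d)     # (L, m) scores on a key subset
    sparsity = scores_sample.max(axis=1) - scores_sample.mean(axis=1)
    top = np.argsort(-sparsity)[:u]               # queries with the most peaked attention
    out = np.tile(V.mean(axis=0), (L, 1))         # lazy queries: uniform-attention surrogate
    s = Q[top] @ K.T / np.sqrt(d)                 # full attention only for dominant queries
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out

out = probsparse_attention(*np.random.randn(3, 512, 64))  # toy 512-step sequence, d=64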

832 citations


Journal ArticleDOI
TL;DR: The authors investigate the relationship between the transmissibility of COVID-19 and temperature/humidity, controlling for various demographic, socioeconomic, geographic, healthcare and policy factors and correcting for cross-sectional correlation.
Abstract: With the ongoing global pandemic of COVID-19, a question is whether the coming summer in the northern hemisphere will reduce the transmission intensity of COVID-19 with increased humidity and temperature. In this paper, we investigate this problem using data from cases with symptom-onset dates from January 19 to February 10, 2020 for 100 Chinese cities, and cases with confirmed dates from March 15 to April 25 for 1,005 U.S. counties. Statistical analysis is performed to assess the relationship between the transmissibility of COVID-19 and temperature/humidity, controlling for various demographic, socioeconomic, geographic, healthcare and policy factors and correcting for cross-sectional correlation. We find a similar influence of temperature and relative humidity on the effective reproductive number (R value) of COVID-19 for both China and the U.S. before lockdown in both countries: a one-degree Celsius increase in temperature reduces the R value by about 0.023 (0.026 (95% CI [-0.0395, -0.0125]) in China and 0.020 (95% CI [-0.0311, -0.0096]) in the U.S.), and a one percent rise in relative humidity reduces the R value by 0.0078 (0.0076 (95% CI [-0.0108, -0.0045]) in China and 0.0080 (95% CI [-0.0150, -0.0010]) in the U.S.). Assuming a 30-degree Celsius increase in temperature and a 25 percent increase in relative humidity from winter to summer in the northern hemisphere, we expect the R values to decline by about 0.89 (0.69 by temperature and 0.20 by humidity). Moreover, after the lockdowns in China and the U.S., temperature and relative humidity still play an important role in reducing the R values, but to a lesser extent. Given that non-intervened R values are around 2.5 to 3, weather factors alone cannot bring the R values below the critical threshold of R<1, under which the epidemic diminishes gradually. Therefore, public health interventions such as social distancing are crucial to block the transmission of COVID-19 even in summer.
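
The projected summer decline in R quoted above is just the per-unit effects scaled by the assumed seasonal changes; as a quick arithmetic check using the pooled coefficients: $\Delta R \approx 30 \times 0.023 + 25 \times 0.0078 = 0.69 + 0.195 \approx 0.89$.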

556 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper proposes a novel filter pruning method by exploring the High Rank of feature maps (HRank), inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive.
Abstract: Neural network pruning offers a promising prospect to facilitate deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost in pruning designs, due to missing theoretical guidance of non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage would be done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over state-of-the-art methods in terms of FLOPs and parameter reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2%-FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8%-FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in the top-1 accuracy on ImageNet. The code is available at https://github.com/lmbxmu/HRank.
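
A minimal sketch of the selection criterion described above, assuming the activations of one convolutional layer have already been collected over a batch of images; per-layer pruning ratios, the actual filter removal and fine-tuning are omitted, and this is not the released HRank code.

import numpy as np

def filters_to_prune(feature_maps, prune_ratio=0.5):
    """Rank filters by the average matrix rank of their feature maps (HRank-style criterion).

    feature_maps: (N, C, H, W) activations of one conv layer over N images.
    Returns the indices of the prune_ratio * C filters whose feature maps have the
    lowest average rank, i.e. the ones assumed to carry the least information.
    """
    N, C, H, W = feature_maps.shape
    avg_rank = np.zeros(C)
    for c in range(C):
        avg_rank[c] = np.mean([np.linalg.matrix_rank(feature_maps[n, c]) for n in range(N)])
    n_prune = int(prune_ratio * C)
    return np.argsort(avg_rank)[:n_prune]

acts = np.random.randn(8, 16, 32, 32)            # fake activations: 8 images, 16 filters
print(filters_to_prune(acts, prune_ratio=0.25))  # indices of the 4 lowest-rank filters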

527 citations


Journal ArticleDOI
TL;DR: Wannier90 is an open-source computer program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch states; it is interfaced to many widely used electronic-structure codes thanks to its independence from the basis sets representing these Bloch states.
Abstract: Wannier90 is an open-source computer program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch states. It is interfaced to many widely used electronic-structure codes thanks to its independence from the basis sets representing these Bloch states. In the past few years the development of Wannier90 has transitioned to a community-driven model; this has resulted in a number of new developments that have been recently released in Wannier90 v3.0. In this article we describe these new functionalities, that include the implementation of new features for wannierisation and disentanglement (symmetry-adapted Wannier functions, selectively-localised Wannier functions, selected columns of the density matrix) and the ability to calculate new properties (shift currents and Berry-curvature dipole, and a new interface to many-body perturbation theory); performance improvements, including parallelisation of the core code; enhancements in functionality (support for spinor-valued Wannier functions, more accurate methods to interpolate quantities in the Brillouin zone); improved usability (improved plotting routines, integration with high-throughput automation frameworks), as well as the implementation of modern software engineering practices (unit testing, continuous integration, and automatic source-code documentation). These new features, capabilities, and code development model aim to further sustain and expand the community uptake and range of applicability, that nowadays spans complex and accurate dielectric, electronic, magnetic, optical, topological and transport properties of materials.

476 citations


Journal ArticleDOI
03 Feb 2020
TL;DR: In this paper, the synthesis and properties of two-dimensional transition metal carbides, carbonitrides, and nitrides (MXenes) are summarized, their recent advances in environment-related applications are highlighted, and challenges and perspectives for future research are outlined.
Abstract: In recent years, a new large family of two dimensional transition metal carbides, carbonitrides, and nitrides, so-called MXenes, have grabbed considerable attention, owing to their many fascinating physical and chemical properties that are closely related to the rich diversity of their elemental compositions and surface terminations. In particular, it is easy for MXenes to form composites with other materials such as polymers, oxides, and carbon nanotubes, which further provides an effective way to tune the properties of MXenes for various applications. Not only have MXenes and MXene-based composites come into prominence as electrode materials in the energy storage field as is widely known, but they have also shown great potential in environment-related applications including electro/photocatalytic water splitting, photocatalytic reduction of carbon dioxide, water purification and sensors, thanks to their high conductivity, reducibility and biocompatibility. In this review, we summarize the synthesis and properties of MXenes and MXene-based composites and highlight their recent advances in environment-related applications. Challenges and perspectives for future research are also outlined.

432 citations


Proceedings Article
14 Dec 2020
TL;DR: Informer proposes a ProbSparse self-attention mechanism, which achieves O(L log L) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment.
Abstract: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L log L) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.

429 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, the authors propose a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$.
Abstract: This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning paradigms, i.e., learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
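
A compact sketch of the re-weighting idea described above: each similarity score gets its own weight proportional to how far it sits from its optimum, so under-optimized scores dominate the loss. The form below follows the commonly cited unified formula with margin m and scale gamma; the constants are illustrative defaults, not values dictated by this abstract.

import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=80.0):
    """Circle-loss-style objective for one anchor (sketch).

    sp: within-class similarity scores (pushed toward 1).
    sn: between-class similarity scores (pushed away from 1).
    Each score is re-weighted by its distance from an optimum (1+m for sp, -m for sn),
    giving a circular decision boundary in the (sn, sp) plane.
    """
    ap = np.clip(1 + m - sp, 0, None)   # larger weight for poorly optimized positives
    an = np.clip(sn + m, 0, None)       # larger weight for poorly optimized negatives
    dp, dn = 1 - m, m                   # relaxed decision margins
    logit_p = -gamma * ap * (sp - dp)
    logit_n = gamma * an * (sn - dn)
    return np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum())

# two positive and five negative cosine similarities for one anchor
print(circle_loss(np.array([0.7, 0.9]), np.array([0.4, 0.3, 0.2, 0.5, 0.1])))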

400 citations


Journal ArticleDOI
TL;DR: It is reported that room-temperature nitrate electroreduction catalyzed by strained ruthenium nanoclusters generates ammonia at a higher rate than the Haber-Bosch process, highlighting the potential of nitrate electroreduction in real-world, low-temperature ammonia synthesis.
Abstract: The limitations of the Haber–Bosch reaction, particularly high-temperature operation, have ignited new interests in low-temperature ammonia-synthesis scenarios. Ambient N2 electroreduction is a com...

393 citations


Proceedings ArticleDOI
23 Aug 2020
TL;DR: The LayoutLM is proposed to jointly model interactions between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents.
Abstract: Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pre-training models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly available at https://aka.ms/layoutlm.
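
The abstract does not spell out how layout enters the model; one natural reading, sketched below as an assumption, is that each token's 2-D bounding box (from OCR of the scanned page) is embedded and summed with the usual token and position embeddings before a standard Transformer encoder. Layer sizes and the 0..1000 coordinate grid are illustrative, not taken from the paper.

import torch
import torch.nn as nn

class TextLayoutEmbedding(nn.Module):
    """Sketch: fuse token and 2-D layout information by summing embeddings."""
    def __init__(self, vocab_size=30522, hidden=768, max_pos=512, max_coord=1001):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_pos, hidden)
        self.x_emb = nn.Embedding(max_coord, hidden)  # shared for x0 and x1
        self.y_emb = nn.Embedding(max_coord, hidden)  # shared for y0 and y1

    def forward(self, token_ids, boxes):
        # token_ids: (B, L); boxes: (B, L, 4) integer (x0, y0, x1, y1) per token
        pos_ids = torch.arange(token_ids.size(1), device=token_ids.device)
        e = self.tok(token_ids) + self.pos(pos_ids)
        e = e + self.x_emb(boxes[..., 0]) + self.y_emb(boxes[..., 1])
        e = e + self.x_emb(boxes[..., 2]) + self.y_emb(boxes[..., 3])
        return e  # fed to a standard Transformer encoder

emb = TextLayoutEmbedding()
tokens = torch.randint(0, 30522, (2, 16))
boxes = torch.randint(0, 1001, (2, 16, 4))
print(emb(tokens, boxes).shape)  # torch.Size([2, 16, 768])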

388 citations


Journal ArticleDOI
Xu Qin, Zhilin Wang, Yuanchao Bai, Xiaodong Xie, Huizhu Jia
03 Apr 2020
TL;DR: In this paper, the authors propose an end-to-end feature fusion attention network (FFA-Net) to directly restore the haze-free image; its Feature Attention (FA) module combines a Channel Attention with a Pixel Attention mechanism, considering that different channel-wise features carry entirely different weighted information and that haze is unevenly distributed across image pixels.
Abstract: In this paper, we propose an end-to-end feature fusion attention network (FFA-Net) to directly restore the haze-free image. The FFA-Net architecture consists of three key components: 1) A novel Feature Attention (FA) module combines Channel Attention with Pixel Attention mechanism, considering that different channel-wise features contain totally different weighted information and haze distribution is uneven on the different image pixels. FA treats different features and pixels unequally, which provides additional flexibility in dealing with different types of information, expanding the representational ability of CNNs. 2) A basic block structure consists of Local Residual Learning and Feature Attention; Local Residual Learning allows the less important information such as thin haze regions or low-frequency components to be bypassed through multiple local residual connections, letting the main network architecture focus on more effective information. 3) An Attention-based different-levels Feature Fusion (FFA) structure, in which the feature weights are adaptively learned from the Feature Attention (FA) module, giving more weight to important features. This structure can also retain the information of shallow layers and pass it into deep layers. The experimental results demonstrate that our proposed FFA-Net surpasses previous state-of-the-art single image dehazing methods by a very large margin both quantitatively and qualitatively, boosting the best published PSNR metric from 30.23 dB to 36.39 dB on the SOTS indoor test dataset. Code has been made available at GitHub.
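
A minimal PyTorch sketch of the Feature Attention idea described above: per-channel weights computed from globally pooled statistics, followed by per-pixel weights for the unevenly distributed haze. The layer widths and reduction factor are illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Channel attention followed by pixel attention (sketch of an FA-style module)."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(                    # per-channel weights from global context
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.pixel = nn.Sequential(                      # per-pixel weights for uneven haze
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)   # treat channels unequally
        return x * self.pixel(x)  # treat pixels unequally

feat = torch.randn(1, 64, 32, 32)
print(FeatureAttention()(feat).shape)  # torch.Size([1, 64, 32, 32])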

382 citations


Posted Content
TL;DR: Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks; the model is also shown to prefer structure-level attention over token-level attention in the code search task.
Abstract: Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, code summarization, etc. However, existing pre-trained models regard a code snippet as a sequence of tokens, while ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code. Instead of taking syntactic-level structure of code like abstract syntax tree (AST), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables. Such a semantic-level structure is neat and does not bring an unnecessarily deep hierarchy of AST, the property of which makes the model more efficient. We develop GraphCodeBERT based on Transformer. In addition to using the task of masked language modeling, we introduce two structure-aware pre-training tasks. One is to predict code structure edges, and the other is to align representations between source code and code structure. We implement the model in an efficient way with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and newly introduced pre-training tasks can improve GraphCodeBERT and achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search.
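
The "graph-guided masked attention" is described only at a high level here, so the sketch below is an illustrative reading rather than the released implementation: code tokens attend freely, while each data-flow variable node may only attend to the code token it was identified from, to itself, and to the variables its value comes from.

import numpy as np

def graph_guided_mask(n_code, var_edges, var_to_code):
    """Build a boolean attention mask over code tokens plus data-flow variable nodes.

    n_code:      number of source-code tokens (rows 0..n_code-1 attend to everything).
    var_edges:   list of (src_var, dst_var) "where-the-value-comes-from" edges.
    var_to_code: dict {var_index: code_token_index} aligning each variable node with
                 the code token it was extracted from.
    """
    n_var = len(var_to_code)
    n = n_code + n_var
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_code, :] = True                      # code tokens see all nodes
    for v, tok in var_to_code.items():
        mask[n_code + v, tok] = True             # variable sees its aligned code token
        mask[n_code + v, n_code + v] = True      # and itself
    for src, dst in var_edges:
        mask[n_code + dst, n_code + src] = True  # value flows src -> dst
    return mask

# toy snippet "x = a ; y = x": variable nodes a, x, y with data flow a -> x -> y
mask = graph_guided_mask(n_code=7, var_edges=[(0, 1), (1, 2)], var_to_code={0: 2, 1: 0, 2: 4})
print(mask.shape, int(mask.sum()))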

Journal ArticleDOI
30 Oct 2020-Carbon
TL;DR: In this paper, a review of recent achievements in manufacturing EM microwave absorption materials is presented, focusing particularly on the unique and key factors in the design and control of structures and components, and current challenges and prospects for future development in this rapidly blossoming field are discussed.

Journal ArticleDOI
TL;DR: A fully automatic deep learning system is proposed for COVID-19 diagnostic and prognostic analysis by routinely used computed tomography that automatically focused on abnormal areas that showed consistent characteristics with reported radiological findings.
Abstract: Confounding variation, such as batch effects, are a pervasive issue in single-cell RNA sequencing experiments. While methods exist for aligning cells across batches, it is yet unclear how to correct for other types of confounding variation which may be observed at the subject level, such as age and sex, and at the cell level, such as library size and other measures of cell quality. On the specific problem of batch alignment, many questions still persist despite recent advances: Existing methods can effectively align batches in low-dimensional representations of cells, yet their effectiveness in aligning the original gene expression matrices is unclear. Nor is it clear how batch correction can be performed alongside data denoising, the former treating technical biases due to experimental stratification while the latter treating technical variation due inherently to the random sampling that occurs during library construction and sequencing. Here, we propose SAVERCAT, a method for dimension reduction and denoising of single-cell gene expression data that can flexibly adjust for arbitrary observed covariates. We benchmark SAVERCAT against existing single-cell batch correction methods and show that while it matches the best of the field in low-dimensional cell alignment, it significantly improves upon existing methods on the task of batch correction in the high-dimensional expression matrix. We also demonstrate the ability of SAVERCAT to effectively integrate batch correction and denoising through a data down-sampling experiment. Finally, we apply SAVERCAT to a single cell study of Alzheimer’s disease where batch is confounded with the contrast of interest, and demonstrate how adjusting for covariates other than batch allows for more interpretable analysis.

Journal ArticleDOI
TL;DR: A comprehensive survey of algorithms proposed for binary neural networks is presented, mainly categorized into native solutions that directly conduct binarization and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error.

Journal ArticleDOI
Binghe Liu, Yikai Jia, Chunhao Yuan, Lubing Wang, Xiang Gao, Sha Yin, Jun Xu
TL;DR: In this article, the authors present a review of experimental, theoretical, and modeling studies in each battery evolution phase under mechanical abuse loading, and summarize a state-of-the-art modeling framework to describe the multiphysical behavior of batteries.

Posted Content
TL;DR: The Circle loss is demonstrated, which has a unified formula for two elemental deep feature learning paradigms, learning with class-level labels and pair-wise labels, and the superiority of the Circle loss on a variety ofDeep feature learning tasks.
Abstract: This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax plus cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning approaches, i.e. learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.

Journal ArticleDOI
TL;DR: The ability of single metal atoms to effectively trap dissolved lithium polysulfides (LiPSs) and catalytically convert LiPSs/Li2S during cycling significantly improves sulfur utilization, rate capability and cycling life.
Abstract: Lithium–sulfur (Li–S) batteries are promising next-generation energy storage technologies due to their high theoretical energy density, environmental friendliness, and low cost. However, low conduc...

Journal ArticleDOI
TL;DR: A smartphone inertial accelerometer-based architecture for human activity recognition (HAR) is designed, and a real-time human activity classification method based on a convolutional neural network (CNN) is proposed, which uses the CNN for local feature extraction and is evaluated on the UCI and Pamap2 datasets.
Abstract: With the widespread application of mobile edge computing (MEC), MEC is serving as a bridge to narrow the gaps between medical staff and patients. Relatedly, MEC is also moving toward supervising in ...
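
The abstract is truncated here, so the sketch below only illustrates the general shape of such a model: a small 1-D CNN extracting local features from windows of tri-axial accelerometer samples, followed by a linear classifier. The window length, layer sizes and the six-class output are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class AccelCNN(nn.Module):
    """Illustrative 1-D CNN for human activity recognition from accelerometer windows."""
    def __init__(self, n_classes=6, window=128):
        super().__init__()
        self.features = nn.Sequential(   # local feature extraction over the time axis
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2))
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x):                # x: (batch, 3 axes, window samples)
        return self.classifier(self.features(x).flatten(1))

model = AccelCNN()
windows = torch.randn(8, 3, 128)         # 8 windows of 128 tri-axial samples
print(model(windows).shape)              # torch.Size([8, 6])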

Journal ArticleDOI
TL;DR: An intermittent fault-tolerance scheme is taken fully into account in designing a reliable asynchronous sampled-data controller, which ensures that the resultant neural network is asymptotically stable.

Journal ArticleDOI
TL;DR: The field of magnetic skyrmions has been actively investigated across a wide range of topics during the last decades; this review covers potential applications including information storage, logic computing gates and non-conventional devices such as neuromorphic computing devices.
Abstract: The field of magnetic skyrmions has been actively investigated across a wide range of topics during the last decades. In this topical review, we mainly review and discuss key results and findings in skyrmion research since the first experimental observation of magnetic skyrmions in 2009. We particularly focus on the theoretical, computational and experimental findings and advances that are directly relevant to the spintronic applications based on magnetic skyrmions, i.e. their writing, deleting, reading and processing driven by magnetic field, electric current and thermal energy. We then review several potential applications including information storage, logic computing gates and non-conventional devices such as neuromorphic computing devices. Finally, we discuss possible future research directions on magnetic skyrmions, which also cover rich topics on other topological textures such as antiskyrmions and bimerons in antiferromagnets and frustrated magnets.

Journal ArticleDOI
TL;DR: High energy efficiency and a high evaporation rate under high salinity are achieved through an energy reutilizing strategy based on interfacial water film inhomogeneity on a biomimetic structure, indicating the potential for sustainable and practical applications.
Abstract: Solar-driven water evaporation represents an environmentally benign method of water purification/desalination. However, the efficiency is limited by increased salt concentration and accumulation. Here, we propose an energy reutilizing strategy based on a bio-mimetic 3D structure. The spontaneously formed water film, with thickness inhomogeneity and temperature gradient, fully utilizes the input energy through Marangoni effect and results in localized salt crystallization. Solar-driven water evaporation rate of 2.63 kg m−2 h−1, with energy efficiency of >96% under one sun illumination and under high salinity (25 wt% NaCl), and water collecting rate of 1.72 kg m−2 h−1 are achieved in purifying natural seawater in a closed system. The crystalized salt freely stands on the 3D evaporator and can be easily removed. Additionally, energy efficiency and water evaporation are not influenced by salt accumulation thanks to an expanded water film inside the salt, indicating the potential for sustainable and practical applications. Solar-driven water evaporation technology still faces main challenges of limited efficiency and salt fouling. Here the authors achieve high energy efficiency and evaporation rate under high salinity through an energy reutilizing strategy based on interfacial water film inhomogeneity on a biomimetic structure.

Journal ArticleDOI
TL;DR: In this article, the effects of processing techniques on the microstructure and hysteresis of permanent magnets are largely understood, new methods of increasing magnet stability at elevated temperature are developed, and integrated multifunctionality of hard magnets with other useful properties is now envisaged.

Journal ArticleDOI
M. Ablikim, M. N. Achasov, Patrik Adlarson, +500 more authors (73 institutions)
Abstract: There has recently been a dramatic renewal of interest in hadron spectroscopy and charm physics. This renaissance has been driven in part by the discovery of a plethora of charmonium-like XYZ states at BESIII and B factories, and the observation of an intriguing proton-antiproton threshold enhancement and the possibly related X(1835) meson state at BESIII, as well as the threshold measurements of charm mesons and charm baryons. We present a detailed survey of the important topics in tau-charm physics and hadron physics that can be further explored at BESIII during the remaining operation period of BEPCII. This survey will help in the optimization of the data-taking plan over the coming years, and provides physics motivation for the possible upgrade of BEPCII to higher luminosity.

Book ChapterDOI
23 Aug 2020
TL;DR: Refool, a new type of backdoor attack inspired by an important natural phenomenon, reflection, is proposed to plant reflections as a backdoor into a victim model; it can attack state-of-the-art DNNs with a high success rate and is resistant to state-of-the-art backdoor defenses.
Abstract: Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at training time. A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data. At test time, the victim model behaves normally on clean test data, yet consistently predicts a specific (likely incorrect) target class whenever the backdoor pattern is present in a test example. While existing backdoor attacks are effective, they are not stealthy. The modifications made on training data or labels are often suspicious and can be easily detected by simple data filtering or human inspection. In this paper, we present a new type of backdoor attack inspired by an important natural phenomenon: reflection. Using mathematical modeling of physical reflection models, we propose reflection backdoor (Refool) to plant reflections as backdoor into a victim model. We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with high success rate, and is resistant to state-of-the-art backdoor defenses.
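
The poisoning step can be pictured as blending a "reflection" image into a small fraction of the training set. The toy function below is only a stand-in: it uses a plain alpha blend instead of the paper's physical reflection models (blur, ghosting), and it flips labels to the target class, which is the classic dirty-label setting and not necessarily the threat model used in the paper.

import numpy as np

def poison_with_reflection(images, labels, reflection, target_class,
                           poison_rate=0.05, alpha=0.35, rng=None):
    """Toy reflection-style data poisoning (illustrative, not the Refool pipeline).

    images:     (N, H, W, C) floats in [0, 1]; reflection: (H, W, C) image.
    A random poison_rate fraction of images receives the blended reflection and
    has its label switched to target_class.
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), n_poison, replace=False)
    images[idx] = np.clip((1 - alpha) * images[idx] + alpha * reflection, 0, 1)
    labels[idx] = target_class
    return images, labels, idx

imgs, lbls = np.random.rand(200, 32, 32, 3), np.random.randint(0, 10, 200)
_, _, poisoned = poison_with_reflection(imgs, lbls, np.random.rand(32, 32, 3), target_class=0)
print(len(poisoned), "examples poisoned")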

Journal ArticleDOI
TL;DR: In this article, the authors reviewed the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19, including image acquisition, segmentation, diagnosis, and follow-up.
Abstract: (This paper was submitted as an invited paper to IEEE Reviews in Biomedical Engineering on April 6, 2020.) The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, whereas the recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians. Also, AI can improve work efficiency by accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, the computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals, in order to depict the latest progress of medical imaging and radiology fighting against COVID-19.

Journal ArticleDOI
TL;DR: A dual protection strategy has been developed by nanocasting SiO2 into metal–organic frameworks to prepare high-loading SACs with excellent catalytic performance toward oxygen reduction, and it provides a general synthetic methodology toward high-content SACs (such as FeSA, CoSA, NiSA).
Abstract: Single-atom catalysts (SACs) have sparked broad interest recently while the low metal loading poses a big challenge for further applications. Herein, a dual protection strategy has been developed to give high-content SACs by nanocasting SiO2 into porphyrinic metal–organic frameworks (MOFs). The pyrolysis of SiO2@MOF composite affords single-atom Fe implanted N-doped porous carbon (FeSA–N–C) with high Fe loading (3.46 wt%). The spatial isolation of Fe atoms centered in porphyrin linkers of MOF sets the first protective barrier to inhibit the Fe agglomeration during pyrolysis. The SiO2 in MOF provides additional protection by creating thermally stable FeN4/SiO2 interfaces. Thanks to the high-density FeSA sites, FeSA–N–C demonstrates excellent oxygen reduction performance in both alkaline and acidic media. Meanwhile, FeSA–N–C also exhibits encouraging performance in a proton exchange membrane fuel cell, demonstrating great potential for practical application. More far-reaching, this work grants a general synthetic methodology toward high-content SACs (such as FeSA, CoSA, NiSA). Single-atom catalysts (SACs) with high metal loading are highly desired to improve catalytic performance. Here, the authors report a dual protection strategy by nanocasting SiO2 into metal–organic frameworks to prepare high-loading SACs with excellent catalytic performance toward oxygen reduction.

Journal ArticleDOI
01 Mar 2020
TL;DR: In this article, it is shown that the accumulation and dissipation of magnetic skyrmions in ferrimagnetic multilayers can be controlled with electrical pulses to represent the variations in synaptic weights.
Abstract: Magnetic skyrmions are topologically protected spin textures that have nanoscale dimensions and can be manipulated by an electric current. These properties make the structures potential information carriers in data storage, processing and transmission devices. However, the development of functional all-electrical electronic devices based on skyrmions remains challenging. Here we show that the current-induced creation, motion, detection and deletion of skyrmions at room temperature can be used to mimic the potentiation and depression behaviours of biological synapses. In particular, the accumulation and dissipation of magnetic skyrmions in ferrimagnetic multilayers can be controlled with electrical pulses to represent the variations in the synaptic weights. Using chip-level simulations, we demonstrate that such artificial synapses based on magnetic skyrmions could be used for neuromorphic computing tasks such as pattern recognition. For a handwritten pattern dataset, our system achieves a recognition accuracy of ~89%, which is comparable to the accuracy achieved with software-based ideal training (~93%). The electrical current-induced creation, motion, detection and deletion of skyrmions in ferrimagnetic multilayers can be used to mimic the behaviour of biological synapses, providing devices that could be used for neuromorphic computing tasks such as pattern recognition.

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2248 more authors (155 institutions)
TL;DR: For the first time, predictions from pythia8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.
Abstract: New sets of CMS underlying-event parameters (“tunes”) are presented for the pythia8 event generator. These tunes use the NNPDF3.1 parton distribution functions (PDFs) at leading (LO), next-to-leading (NLO), or next-to-next-to-leading (NNLO) orders in perturbative quantum chromodynamics, and the strong coupling evolution at LO or NLO. Measurements of charged-particle multiplicity and transverse momentum densities at various hadron collision energies are fit simultaneously to determine the parameters of the tunes. Comparisons of the predictions of the new tunes are provided for observables sensitive to the event shapes at LEP, global underlying event, soft multiparton interactions, and double-parton scattering contributions. In addition, comparisons are made for observables measured in various specific processes, such as multijet, Drell–Yan, and top quark-antiquark pair production including jet substructure observables. The simulation of the underlying event provided by the new tunes is interfaced to a higher-order matrix-element calculation. For the first time, predictions from pythia8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.

Journal ArticleDOI
TL;DR: In this paper, a flexible reduced graphene oxide (rGO) sheet was crosslinked by a conjugated molecule (1-aminopyrene-disuccinimidyl suberate, AD), which reduced the voids within the graphene sheet and improved the alignment of graphene platelets, resulting in much higher compactness and high toughness.
Abstract: Flexible reduced graphene oxide (rGO) sheets are being considered for applications in portable electrical devices and flexible energy storage systems. However, the poor mechanical properties and electrical conductivities of rGO sheets are limiting factors for the development of such devices. Here we use MXene (M) nanosheets to functionalize graphene oxide platelets through Ti-O-C covalent bonding to obtain MrGO sheets. A MrGO sheet was crosslinked by a conjugated molecule (1-aminopyrene-disuccinimidyl suberate, AD). The incorporation of MXene nanosheets and AD molecules reduces the voids within the graphene sheet and improves the alignment of graphene platelets, resulting in much higher compactness and high toughness. In situ Raman spectroscopy and molecular dynamics simulations reveal the synergistic interfacial interaction mechanisms of Ti-O-C covalent bonding, sliding of MXene nanosheets, and π-π bridging. Furthermore, a supercapacitor based on our super-tough MXene-functionalized graphene sheets provides a combination of energy and power densities that are high for flexible supercapacitors.

Journal ArticleDOI
08 Jan 2020-Small
TL;DR: The recent advances in the design and synthesis of UOR catalysts for urea electrolysis, photoelectrochemical urea splitting, and direct urea fuel cells are reviewed here and particular attention is paid to those design concepts, which specifically target the characteristics of urea molecules.
Abstract: Urea oxidation reaction (UOR) is the underlying reaction that determines the performance of modern urea-based energy conversion technologies. These technologies include electrocatalytic and photoelectrochemical urea splitting for hydrogen production and direct urea fuel cells as power engines. They have demonstrated great potentials as alternatives to current water splitting and hydrogen fuel cell systems with more favorable operating conditions and cost effectiveness. At the moment, UOR performance is mainly limited by the 6-electron transfer process. In this case, various material design and synthesis strategies have recently been reported to produce highly efficient UOR catalysts. The performance of these advanced catalysts is optimized by the modification of their structural and chemical properties, including porosity development, heterostructure construction, defect engineering, surface functionalization, and electronic structure modulation. Considering the rich progress in this field, the recent advances in the design and synthesis of UOR catalysts for urea electrolysis, photoelectrochemical urea splitting, and direct urea fuel cells are reviewed here. Particular attention is paid to those design concepts, which specifically target the characteristics of urea molecules. Moreover, challenges and prospects for the future development of urea-based energy conversion technologies and corresponding catalysts are also discussed.