
Showing papers by "National University of Defense Technology" published in 2018


Journal ArticleDOI
TL;DR: The blockchain taxonomy is given, the typical blockchain consensus algorithms are introduced, typical blockchain applications are reviewed, and the future directions in the blockchain technology are pointed out.
Abstract: Blockchain has numerous benefits such as decentralisation, persistency, anonymity and auditability. There is a wide spectrum of blockchain applications ranging from cryptocurrency, financial services, risk management, internet of things (IoT) to public and social services. Although a number of studies focus on using the blockchain technology in various application aspects, there is no comprehensive survey on the blockchain technology in both technological and application perspectives. To fill this gap, we conduct a comprehensive survey on the blockchain technology. In particular, this paper gives the blockchain taxonomy, introduces typical blockchain consensus algorithms, reviews blockchain applications and discusses technical challenges as well as recent advances in tackling the challenges. Moreover, this paper also points out the future directions in the blockchain technology.

1,928 citations


Posted ContentDOI
Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, +435 more (111 institutions)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

1,165 citations


Journal ArticleDOI
TL;DR: SOAPnuke is demonstrated as a tool with abundant functions for a “QC-Preprocess-QC” workflow and a MapReduce acceleration framework that enables large scalability by distributing all the processing work across an entire compute cluster.
Abstract: Quality control (QC) and preprocessing are essential steps for sequencing data analysis to ensure the accuracy of results. However, existing tools cannot provide a satisfying solution with integrated comprehensive functions, proper architectures, and highly scalable acceleration. In this article, we demonstrate SOAPnuke as a tool with abundant functions for a "QC-Preprocess-QC" workflow and MapReduce acceleration framework. Four modules with different preprocessing functions are designed for processing datasets from genomic, small RNA, Digital Gene Expression, and metagenomic experiments, respectively. As a workflow-like tool, SOAPnuke centralizes processing functions into 1 executable and predefines their order to avoid the necessity of reformatting different files when switching tools. Furthermore, the MapReduce framework enables large scalability to distribute all the processing work to an entire compute cluster. We conducted a benchmarking study in which SOAPnuke and other tools were used to preprocess a ∼30× NA12878 dataset published by GIAB. The standalone operation of SOAPnuke struck a balance between resource occupancy and performance. When accelerated on 16 working nodes with MapReduce, SOAPnuke achieved ∼5.7 times the speed of the fastest alternative tool.

1,043 citations


Journal ArticleDOI
TL;DR: The Micius satellite is confirmed as a robust platform for quantum key distribution with different ground stations on Earth, and points towards an efficient solution for an ultralong-distance global quantum network.
Abstract: We perform decoy-state quantum key distribution between a low-Earth-orbit satellite and multiple ground stations located in Xinglong, Nanshan, and Graz, which establish satellite-to-ground secure keys at a ∼kHz rate per passage of the satellite Micius over a ground station. The satellite thus establishes a secure key between itself and, say, Xinglong, and another key between itself and, say, Graz. Then, upon request from the ground command, Micius acts as a trusted relay. It performs bitwise exclusive OR operations between the two keys and relays the result to one of the ground stations. That way, a secret key is created between China and Europe at locations separated by 7600 km on Earth. These keys are then used for intercontinental quantum-secured communication. This was, on the one hand, the transmission of images in a one-time pad configuration from China to Austria as well as from Austria to China. Also, a video conference was performed between the Austrian Academy of Sciences and the Chinese Academy of Sciences, which also included a 280 km optical ground connection between Xinglong and Beijing. Our work clearly confirms the Micius satellite as a robust platform for quantum key distribution with different ground stations on Earth, and points towards an efficient solution for an ultralong-distance global quantum network.
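
The trusted-relay operation described above reduces to a simple bitwise combination of two independently established keys. Below is a minimal sketch, assuming 256-bit keys and using locally generated random bytes as stand-ins for the QKD-derived keys; the variable names are illustrative, not from the paper.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-ins for the secret keys Micius shares with each ground station.
key_sat_xinglong = secrets.token_bytes(32)
key_sat_graz = secrets.token_bytes(32)

# The satellite publicly relays the XOR of the two keys (trusted-relay step).
relayed = xor_bytes(key_sat_xinglong, key_sat_graz)

# Xinglong XORs the relayed value with its own key and recovers Graz's key,
# so the two ground stations now share a secret key for one-time-pad use.
key_xinglong_graz = xor_bytes(relayed, key_sat_xinglong)
assert key_xinglong_graz == key_sat_graz
```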

575 citations


Journal ArticleDOI
TL;DR: All-organic, flexible superhydrophobic nanocomposite coatings that demonstrate strong mechanical robustness under cyclic tape peels and Taber abrasion, sustain exposure to highly corrosive media, namely aqua regia and sodium hydroxide solutions, and can be applied to surfaces through scalable techniques such as spraying and brushing are described.
Abstract: Superhydrophobicity is a remarkable evolutionary adaption manifested by several natural surfaces. Artificial superhydrophobic coatings with good mechanical robustness, substrate adhesion and chemical robustness have been achieved separately. However, a simultaneous demonstration of these features along with resistance to liquid impalement via high-speed drop/jet impact is challenging. Here, we describe all-organic, flexible superhydrophobic nanocomposite coatings that demonstrate strong mechanical robustness under cyclic tape peels and Taber abrasion, sustain exposure to highly corrosive media, namely aqua regia and sodium hydroxide solutions, and can be applied to surfaces through scalable techniques such as spraying and brushing. In addition, the mechanical flexibility of our coatings enables impalement resistance to high-speed drops and turbulent jets at least up to ~35 m s−1 and a Weber number of ~43,000. With multifaceted robustness and scalability, these coatings should find potential usage in harsh chemical engineering as well as infrastructure, transport vehicles and communication equipment.

486 citations


Journal ArticleDOI
TL;DR: An energy-aware offloading scheme is presented that jointly optimizes communication and computation resource allocation under limited energy and sensitive latency, together with an iterative search algorithm combining the interior penalty function method with D.C. (the difference of two convex functions/sets) programming to find the optimal solution.
Abstract: Mobile edge computing (MEC) brings computation capacity to the edge of mobile networks in close proximity to smart mobile devices (SMDs) and contributes to energy saving compared with local computing, but resulting in increased network load and transmission latency. To investigate the tradeoff between energy consumption and latency, we present an energy-aware offloading scheme, which jointly optimizes communication and computation resource allocation under limited energy and sensitive latency. In this paper, single and multicell MEC network scenarios are considered at the same time. The residual energy of smart devices’ battery is introduced into the definition of the weighting factor of energy consumption and latency. In terms of the mixed integer nonlinear problem for computation offloading and resource allocation, we propose an iterative search algorithm combining the interior penalty function method with D.C. (the difference of two convex functions/sets) programming to find the optimal solution. Numerical results show that the proposed algorithm obtains a lower total cost (i.e., the weighted sum of energy consumption and execution latency) compared with the baseline algorithms, and the energy-aware weighting factor is of great significance for maintaining the lifetime of SMDs.
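
The weighted energy-latency cost at the heart of this scheme can be illustrated with a toy sketch. The linear battery-dependent weighting and the per-task comparison below are simplifying assumptions for illustration only; the paper defines its own energy-aware weighting factor and solves a joint communication/computation resource-allocation problem.

```python
def offloading_cost(energy_j: float, latency_s: float,
                    residual_battery: float) -> float:
    """Weighted sum of energy consumption and execution latency for one SMD.

    residual_battery is the remaining battery fraction in [0, 1]; a lower
    level shifts the weight toward saving energy (illustrative rule).
    """
    w_energy = 1.0 - residual_battery   # emphasize energy when battery is low
    w_latency = residual_battery        # emphasize latency when battery is full
    return w_energy * energy_j + w_latency * latency_s

# Example: a nearly drained device compares local execution with offloading.
local = offloading_cost(energy_j=2.0, latency_s=0.5, residual_battery=0.2)
offload = offloading_cost(energy_j=0.6, latency_s=1.2, residual_battery=0.2)
print("prefer offloading" if offload < local else "prefer local")
```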

467 citations


Journal ArticleDOI
TL;DR: In this paper, a pyridinic-N-dominated doped graphene with abundant vacancy defects was constructed; the optimized sample with an ultrahigh pore volume (3.43 cm3 g-1) exhibits unprecedented ORR activity with a half-wave potential of 0.85 V in alkaline, and the quadri-pyridinic N-doped carbon site synergized with a vacancy defect is identified as the active site with the lowest overpotential of 0.28 V for ORR.
Abstract: Identification of catalytic sites for oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) in carbon materials remains a great challenge. Here, we construct a pyridinic-N-dominated doped graphene with abundant vacancy defects. The optimized sample with an ultrahigh pore volume (3.43 cm3 g–1) exhibits unprecedented ORR activity with a half-wave potential of 0.85 V in alkaline. For the first time, density functional theory results indicate that the quadri-pyridinic N-doped carbon site synergized with a vacancy defect is the active site, which presents the lowest overpotential of 0.28 V for ORR and 0.28 V for OER. The primary Zn–air batteries display a maximum power density of 115.2 mW cm–2 and an energy density as high as 872.3 Wh kg–1. The rechargeable Zn–air batteries illustrate a low discharge–charge overpotential and high stability (>78 h). This work provides new insight into the correlation between the N configuration synergized with a vacancy defect in electrocatalysis.

427 citations


Journal ArticleDOI
TL;DR: In this article, a fully programmable two-qubit quantum processor is presented, which enables universal quantum information processing in optics, using large-scale silicon photonic circuits to implement an extension of the linear combination of quantum operators scheme.
Abstract: Photonics is a promising platform for implementing universal quantum information processing. Its main challenges include precise control of massive circuits of linear optical components and effective implementation of entangling operations on photons. By using large-scale silicon photonic circuits to implement an extension of the linear combination of quantum operators scheme, we realize a fully programmable two-qubit quantum processor, enabling universal two-qubit quantum information processing in optics. The quantum processor is fabricated with mature CMOS-compatible processing and comprises more than 200 photonic components. We programmed the device to implement 98 different two-qubit unitary operations (with an average quantum process fidelity of 93.2 ± 4.5%), a two-qubit quantum approximate optimization algorithm, and efficient simulation of Szegedy directed quantum walks. This fosters further use of the linear-combination architecture with silicon photonics for future photonic quantum processors.

403 citations


Proceedings ArticleDOI
21 May 2018
TL;DR: In this article, the authors proposed a novel monocular visual odometry (VO) system called UnDeepVO, which is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks.
Abstract: We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVO: one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVO by using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on the KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.

399 citations


Journal ArticleDOI
TL;DR: This paper gives a systematic survey of clustering with deep learning from the perspective of network architecture and introduces the preliminary knowledge for better understanding of this field.
Abstract: Clustering is a fundamental problem in many data-driven application domains, and clustering performance highly depends on the quality of data representation. Hence, linear or non-linear feature transformations have been extensively used to learn a better data representation for clustering. In recent years, many works have focused on using deep neural networks to learn a clustering-friendly representation, resulting in a significant increase of clustering performance. In this paper, we give a systematic survey of clustering with deep learning from the perspective of network architecture. Specifically, we first introduce the preliminary knowledge for better understanding of this field. Then, a taxonomy of clustering with deep learning is proposed and some representative methods are introduced. Finally, we propose some interesting future opportunities of clustering with deep learning and give some concluding remarks.
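
As a concrete anchor for the representation-learning idea surveyed here, the sketch below shows one simple member of this family: an autoencoder learns a low-dimensional embedding, which is then clustered with k-means. The two-stage scheme, layer sizes, and toy data are illustrative assumptions; the survey covers joint and far more elaborate variants.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=50, latent_dim=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 20), nn.ReLU(),
                                     nn.Linear(20, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 20), nn.ReLU(),
                                     nn.Linear(20, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.randn(256, 50)                       # toy data
model, loss_fn = AutoEncoder(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                           # learn a clustering-friendly representation
    recon, _ = model(x)
    loss = loss_fn(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, z = model(x)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(z.numpy())
```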

386 citations


Journal ArticleDOI
TL;DR: In this article, a bifunctional oxygen electrocatalyst with a "framework-active sites" structure is described, namely Fe/Fe3C@C (Fe@C) nanoparticles encapsulated in 3D N-doped graphene and bamboo-like CNTs (Fe@C-NG/NCNTs).
Abstract: 3d transition metals or their derivatives encapsulated in nitrogen-doped nanocarbon show promising potential in non-precious metal oxygen electrocatalysts. Herein, we describe the simple construction of a bifunctional oxygen electrocatalyst with a “framework-active sites” structure, namely Fe/Fe3C@C (Fe@C) nanoparticles encapsulated in 3D N-doped graphene and bamboo-like CNTs (Fe@C–NG/NCNTs). The Fe@C structure provides additional electrons on the carbon surface, promoting the oxygen reduction reaction (ORR) on adjacent Fe–Nx active sites. The 3D NG hybrid with a bamboo-like CNTs framework facilitates fast reactant diffusion and rapid electron transfer. The optimized sample displays excellent ORR and oxygen evolution reaction (OER) activity, with a potential difference of only 0.84 V; this places it among the best bifunctional ORR/OER electrocatalysts. Most importantly, Zn–air batteries using Fe@C–NG/NCNTs as the cathode catalyst deliver a peak power density of 101.2 mW cm−2 and a specific capacity of 682.6 mA h g−1 (energy density of 764.5 W h kg−1). After 297 continuous cycle tests (99 h), the rechargeable batteries using Fe@C–NG/NCNTs show a voltage gap increase of only 0.13 V, almost half that of Pt/C + Ir/C (0.22 V) under the same conditions. This work provides new insight into advanced electrocatalysts utilizing the structural features of host nanocarbon materials and guest active species toward energy conversion.

Journal ArticleDOI
TL;DR: This paper proposes a unified and effective method for simultaneously detecting multi-class objects in remote sensing images with large scale variability, and shows that the method is more accurate than existing algorithms and is effective for multi-modal remote sensing images.
Abstract: Automatic detection of multi-class objects in remote sensing images is a fundamental but challenging problem for remote sensing image analysis. Traditional methods are based on hand-crafted or shallow-learning-based features with limited representation power. Recently, deep learning algorithms, especially Faster region-based convolutional neural networks (FRCN), have shown much stronger detection power in the computer vision field. However, several challenges limit the applications of FRCN in multi-class object detection from remote sensing images: (1) Objects often appear at very different scales in remote sensing images, and FRCN with a fixed receptive field cannot match the scale variability of different objects; (2) Objects in large-scale remote sensing images are relatively small in size and densely packed, and FRCN has poor localization performance with small objects; (3) Manual annotation is generally expensive and the available manual annotations of objects for training FRCN are not sufficient in number. To address these problems, this paper proposes a unified and effective method for simultaneously detecting multi-class objects in remote sensing images with large scale variability. Firstly, we redesign the feature extractor by adopting Concatenated ReLU and Inception modules, which can increase the variety of receptive field sizes. Then, the detection is performed by two sub-networks: a multi-scale object proposal network (MS-OPN) for object-like region generation from several intermediate layers, whose receptive fields match different object scales, and an accurate object detection network (AODN) for object detection based on fused feature maps, which combines several feature maps so that small and densely packed objects produce a stronger response. For large-scale remote sensing images with limited manual annotations, we use cropped image blocks for training and augment them with re-scalings and rotations. The quantitative comparison results on the challenging NWPU VHR-10 data set, aircraft data set, Aerial-Vehicle data set and SAR-Ship data set show that our method is more accurate than existing algorithms and is effective for multi-modal remote sensing images.

Journal ArticleDOI
TL;DR: This paper revisits existing security threats and gives a systematic survey on them from two aspects, the training phase and the testing/inferring phase, and categorizes current defensive techniques of machine learning into four groups: security assessment mechanisms, countermeasures in the training phase, those in the testing or inferring phase, and data security and privacy.
Abstract: Machine learning is one of the most prevailing techniques in computer science, and it has been widely applied in image processing, natural language processing, pattern recognition, cybersecurity, and other fields. Regardless of successful applications of machine learning algorithms in many scenarios, e.g., facial recognition, malware detection, automatic driving, and intrusion detection, these algorithms and corresponding training data are vulnerable to a variety of security threats, inducing a significant performance decrease. Hence, it is vital to call for further attention regarding security threats and corresponding defensive techniques of machine learning, which motivates a comprehensive survey in this paper. Until now, researchers from academia and industry have identified many security threats against a variety of learning algorithms, including naive Bayes, logistic regression, decision tree, support vector machine (SVM), principal component analysis, clustering, and prevailing deep neural networks. Thus, we revisit existing security threats and give a systematic survey on them from two aspects, the training phase and the testing/inferring phase. After that, we categorize current defensive techniques of machine learning into four groups: security assessment mechanisms, countermeasures in the training phase, those in the testing or inferring phase, and data security and privacy. Finally, we provide five notable trends in the research on security threats and defensive techniques of machine learning, which are worth in-depth study in the future.

Journal ArticleDOI
TL;DR: A framework of deep neural networks is introduced to address the DOA estimation problem, so as to obtain good adaptation to array imperfections and enhanced generalization to unseen scenarios; simulations show that the proposed method performs satisfactorily in both generalization and imperfection adaptation.
Abstract: Lack of adaptation to various array imperfections is an open problem for most high-precision direction-of-arrival (DOA) estimation methods. Machine learning-based methods are data-driven; they do not rely on prior assumptions about array geometries and are expected to adapt better to array imperfections than model-based counterparts. This paper introduces a framework of deep neural networks to address the DOA estimation problem, so as to obtain good adaptation to array imperfections and enhanced generalization to unseen scenarios. The framework consists of a multitask autoencoder and a series of parallel multilayer classifiers. The autoencoder acts like a group of spatial filters: it decomposes the input into multiple components in different spatial subregions. These components thus have more concentrated distributions than the original input, which helps to reduce the burden of generalization for the subsequent DOA estimation classifiers. The classifiers follow a one-versus-all classification guideline to determine whether there are signal components near preset directional grids, and the classification results are concatenated to reconstruct a spatial spectrum and estimate signal directions. Simulations are carried out to show that the proposed method performs satisfactorily in both generalization and imperfection adaptation.
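
The decompose-then-classify structure described above can be sketched as follows. All layer sizes, the number of spatial subregions, and the grid counts are assumptions for illustration; only the overall arrangement (a multitask autoencoder feeding parallel one-versus-all classifiers whose outputs are concatenated into a spatial spectrum) follows the description.

```python
import torch
import torch.nn as nn

class DOANet(nn.Module):
    def __init__(self, in_dim=120, hidden=64, subregions=6, grids_per_region=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        # One decoder head per spatial subregion (multitask autoencoder).
        self.decoders = nn.ModuleList(
            [nn.Linear(hidden, in_dim) for _ in range(subregions)])
        # One multilayer one-vs-all classifier per subregion.
        self.classifiers = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, grids_per_region), nn.Sigmoid())
            for _ in range(subregions)])

    def forward(self, x):
        h = self.encoder(x)
        components = [dec(h) for dec in self.decoders]      # spatial-subregion components
        spectra = [clf(c) for clf, c in zip(self.classifiers, components)]
        return torch.cat(spectra, dim=-1)                   # reconstructed spatial spectrum

net = DOANet()
spectrum = net(torch.randn(1, 120))   # peaks indicate candidate signal directions
```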

Journal ArticleDOI
TL;DR: The success of EDEV reveals that, through an appropriate ensemble framework, different DE variants of different merits can support one another to cooperatively solve optimization problems.

Journal ArticleDOI
TL;DR: In this article, the authors review the physics and various manifestations of the generalized Kerker effect, including the progress in the emerging field of meta-optics that focuses on interferences of electromagnetic multipoles of different orders and origins.
Abstract: The original Kerker effect was introduced for a hypothetical magnetic sphere, and initially it did not attract much attention due to a lack of magnetic materials required. Rejuvenated by the recent explosive development of the field of metamaterials and especially its core concept of optically-induced artificial magnetism, the Kerker effect has gained an unprecedented impetus and rapidly pervaded different branches of nanophotonics. At the same time, the concept behind the effect itself has also been significantly expanded and generalized. Here we review the physics and various manifestations of the generalized Kerker effects, including the progress in the emerging field of meta-optics that focuses on interferences of electromagnetic multipoles of different orders and origins. We discuss not only the scattering by individual particles and particle clusters, but also the manipulation of reflection, transmission, diffraction, and absorption for metalattices and metasurfaces, revealing how various optical phenomena observed recently are all ubiquitously related to the Kerker’s concept.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, the authors propose a network architecture to incorporate all steps of stereo matching, including matching cost calculation, matching cost aggregation, disparity calculation, and disparity refinement, which achieves the state-of-the-art performance on the KITTI 2012 and KITTI 2015 benchmarks while maintaining a very fast running time.
Abstract: Stereo matching algorithms usually consist of four steps, including matching cost calculation, matching cost aggregation, disparity calculation, and disparity refinement. Existing CNN-based methods only adopt CNN to solve parts of the four steps, or use different networks to deal with different steps, making them difficult to obtain the overall optimal solution. In this paper, we propose a network architecture to incorporate all steps of stereo matching. The network consists of three parts. The first part calculates the multi-scale shared features. The second part performs matching cost calculation, matching cost aggregation and disparity calculation to estimate the initial disparity using shared features. The initial disparity and the shared features are used to calculate the feature constancy that measures correctness of the correspondence between two input images. The initial disparity and the feature constancy are then fed into a sub-network to refine the initial disparity. The proposed method has been evaluated on the Scene Flow and KITTI datasets. It achieves the state-of-the-art performance on the KITTI 2012 and KITTI 2015 benchmarks while maintaining a very fast running time. Source code is available at http://github.com/leonzfa/iResNet.
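
The feature-constancy idea can be illustrated by warping right-view features to the left view with the initial disparity and measuring the per-pixel reconstruction error that the refinement sub-network then consumes. This is a simplified sketch under assumed tensor shapes; the exact feature-constancy measure and the network details in the paper differ.

```python
import torch
import torch.nn.functional as F

def warp_by_disparity(feat_right, disparity):
    """feat_right: (B, C, H, W); disparity: (B, 1, H, W) in pixels (left view)."""
    b, c, h, w = feat_right.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.unsqueeze(0).float() - disparity[:, 0]             # shift columns by disparity
    grid_x = 2.0 * xs / (w - 1) - 1.0                          # normalize to [-1, 1]
    grid_y = (2.0 * ys.float() / (h - 1) - 1.0).unsqueeze(0).expand(b, -1, -1)
    grid = torch.stack([grid_x, grid_y], dim=-1)
    return F.grid_sample(feat_right, grid, align_corners=True)

feat_left = torch.randn(1, 8, 16, 32)
feat_right = torch.randn(1, 8, 16, 32)
init_disp = torch.rand(1, 1, 16, 32) * 4                       # initial disparity estimate
reconstructed = warp_by_disparity(feat_right, init_disp)
# Per-pixel reconstruction error as a simple feature-constancy map.
feature_constancy = (feat_left - reconstructed).abs().mean(dim=1, keepdim=True)
```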

Journal ArticleDOI
D. Adey, F. P. An, A. B. Balantekin, H. R. Band, +204 more (39 institutions)
TL;DR: A measurement of electron antineutrino oscillation from the Daya Bay Reactor Neutrino Experiment is reported, with nearly 4 million reactor ν̄e inverse β decay candidates observed over 1958 days of data collection.
Abstract: We report a measurement of electron antineutrino oscillation from the Daya Bay Reactor Neutrino Experiment with nearly 4 million reactor ν̄e inverse β decay candidates observed over 1958 days of data collection. The installation of a flash analog-to-digital converter readout system and a special calibration campaign using different source enclosures reduce uncertainties in the absolute energy calibration to less than 0.5% for visible energies larger than 2 MeV. The uncertainty in the cosmogenic ⁹Li and ⁸He background is reduced from 45% to 30% in the near detectors. A detailed investigation of the spent nuclear fuel history improves its uncertainty from 100% to 30%. Analysis of the relative ν̄e rates and energy spectra among detectors yields sin²2θ₁₃ = 0.0856 ± 0.0029 and Δm²₃₂ = (2.471 +0.068/−0.070) × 10⁻³ eV² assuming the normal hierarchy, and Δm²₃₂ = −(2.575 +0.068/−0.070) × 10⁻³ eV² assuming the inverted hierarchy.

Journal ArticleDOI
TL;DR: A novel decomposition-based EMO algorithm called multiobjective evolutionary algorithm based on decomposition LWS (MOEA/D-LWS) is proposed in which the WS method is applied in a local manner, and is a competitive algorithm for many-objective optimization.
Abstract: Decomposition via scalarization is a basic concept for multiobjective optimization. The weighted sum (WS) method, a frequently used scalarizing method in decomposition-based evolutionary multiobjective (EMO) algorithms, has good features such as being computationally easy and having high search efficiency, compared to other scalarizing methods. However, it is often criticized for its loss of effectiveness on nonconvex problems. This paper seeks to utilize the advantages of the WS method, without suffering from its disadvantage, to solve many-objective problems. A novel decomposition-based EMO algorithm called multiobjective evolutionary algorithm based on decomposition with local WS (MOEA/D-LWS) is proposed, in which the WS method is applied in a local manner. That is, for each search direction, the optimal solution is selected only amongst its neighboring solutions. The neighborhood is defined using a hypercone. The apex angle of a hypercone is determined automatically a priori. The effectiveness of MOEA/D-LWS is demonstrated by comparing it against three variants of MOEA/D, i.e., MOEA/D using the Chebyshev method, MOEA/D with an adaptive use of the WS and Chebyshev methods, and MOEA/D with a simultaneous use of the WS and Chebyshev methods, and four state-of-the-art many-objective EMO algorithms, i.e., the preference-inspired co-evolutionary algorithm, the hypervolume-based evolutionary algorithm, the θ-dominance-based algorithm, and SPEA2+SDE, for the WFG benchmark problems with up to seven conflicting objectives. Experimental results show that MOEA/D-LWS outperforms the comparison algorithms for most of the test problems, and is a competitive algorithm for many-objective optimization.
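
The local weighted-sum selection can be sketched roughly as below: for a given search direction, only candidates whose objective vectors fall inside a hypercone around that direction compete, and the one with the smallest weighted sum wins. The angle test and fallback rule here are simplifications; the paper's neighborhood definition and automatic apex-angle determination are more involved.

```python
import numpy as np

def local_weighted_sum_select(F, w, theta):
    """F: (n, m) objective matrix (minimization); w: search direction; theta: apex angle (rad)."""
    w = w / np.linalg.norm(w)
    norms = np.maximum(np.linalg.norm(F, axis=1), 1e-12)
    # Angle between each objective vector and the search direction.
    angles = np.arccos(np.clip(F @ w / norms, -1.0, 1.0))
    neighbors = np.where(angles <= theta)[0]
    if neighbors.size == 0:                     # fall back to the closest solution
        neighbors = np.array([np.argmin(angles)])
    ws_values = F[neighbors] @ w                # weighted-sum scalarization
    return neighbors[np.argmin(ws_values)]

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
best = local_weighted_sum_select(F, w=np.array([0.5, 0.5]), theta=np.pi / 8)
print(F[best])   # the locally best solution along the direction (0.5, 0.5)
```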

Book ChapterDOI
08 Sep 2018
TL;DR: An efficient single-stage pedestrian detection architecture (denoted as ALFNet) is designed, achieving state-of-the-art performance on CityPersons and Caltech, two of the largest pedestrian detection benchmarks, and hence resulting in an attractive pedestrian detector in both accuracy and speed.
Abstract: Though Faster R-CNN based two-stage detectors have witnessed a significant boost in pedestrian detection accuracy, they are still slow for practical applications. One solution is to simplify this working flow as a single-stage detector. However, current single-stage detectors (e.g. SSD) have not presented competitive accuracy on common pedestrian detection benchmarks. This paper is towards a successful pedestrian detector enjoying the speed of SSD while maintaining the accuracy of Faster R-CNN. Specifically, a structurally simple but effective module called Asymptotic Localization Fitting (ALF) is proposed, which stacks a series of predictors to directly evolve the default anchor boxes of SSD step by step into improving detection results. As a result, during training the latter predictors enjoy more and better-quality positive samples, meanwhile harder negatives could be mined with increasing IoU thresholds. On top of this, an efficient single-stage pedestrian detection architecture (denoted as ALFNet) is designed, achieving state-of-the-art performance on CityPersons and Caltech, two of the largest pedestrian detection benchmarks, and hence resulting in an attractive pedestrian detector in both accuracy and speed. Code is available at https://github.com/VideoObjectSearch/ALFNet.

Journal ArticleDOI
TL;DR: In this article, a novel frequency-selective rasorber (FSR) is proposed, which has a nearly transparent window between two absorption bands, and the insertion loss of FSR at the resonant frequency of lossless bandpass FSS is proven to be only related to the equivalent impedance of the resistive sheet.
Abstract: A novel frequency-selective rasorber (FSR) is proposed in this paper, which has a nearly transparent window between two absorption bands. The FSR consists of a resistive sheet and a bandpass frequency-selective surface (FSS). The impedance conditions of absorption/transmission for both the resistive sheet and the bandpass FSS are theoretically derived based on equivalent circuit analysis. The insertion loss of the FSR at the resonant frequency of the lossless bandpass FSS is proven to be related only to the equivalent impedance of the resistive sheet. When the resistive sheet is in parallel resonance at the passband, a nearly transparent window can be achieved regardless of lossy properties. An interdigital resonator (IR) is designed to realize parallel resonance in the resistive element by extending one finger of a strip-type interdigital capacitor to connect the two separate parts of the capacitor. The IR is equivalent to a parallel LC circuit. Lumped resistors are loaded around the IR to absorb the incident wave at the lower and upper absorption bands. With the bandpass FSS as the ground plane, the absorption performance at both the lower and upper bands around the resonant frequency is improved compared to a metal-plane-backed absorber structure. The FSR passband is designed at 10 GHz with an insertion loss of 0.2 dB. The band with a reflection coefficient below −10 dB extends from 4.8 to 15.5 GHz. A further extension to a dual-polarized FSR is designed, fabricated, and measured to validate the proposed design.
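
The role of the parallel-resonant resistive sheet can be illustrated with an idealized model of a single shunt sheet across a matched line: at resonance the sheet impedance peaks, the line is barely loaded, and the insertion loss collapses toward zero. The sketch below ignores the bandpass FSS ground plane and uses illustrative lumped values and an assumed series-R topology, not the circuit parameters extracted in the paper.

```python
import numpy as np

Z0 = 377.0                      # free-space wave impedance, ohms
R, L, C = 200.0, 3e-9, 84e-15   # illustrative values giving resonance near 10 GHz

f = np.linspace(2e9, 18e9, 1601)
w = 2 * np.pi * f
# Resistive sheet modeled as R in series with a parallel LC resonator (assumed topology).
Z_lc = 1.0 / (1.0 / (1j * w * L) + 1j * w * C)
Z_sheet = R + Z_lc

# Insertion loss of a shunt impedance on a matched line: |S21| = 1 / |1 + Z0 / (2 Z)|.
IL_dB = 20 * np.log10(np.abs(1 + Z0 / (2 * Z_sheet)))
f_res = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"resonance ~ {f_res / 1e9:.2f} GHz, minimum insertion loss ~ {IL_dB.min():.2f} dB")
```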

Journal ArticleDOI
TL;DR: This paper focuses on the security and privacy requirements related to data flow in MIoT and makes an in-depth study of the existing solutions to security and privacy issues, together with the open challenges and research issues for future work.
Abstract: The Medical Internet of Things, also well known as MIoT, has been playing an increasingly important role in improving the health, safety, and care of billions of people since its emergence. Instead of patients going to the hospital for help, their health-related parameters can be monitored remotely, continuously, and in real time, then processed and transferred to a medical data center, such as cloud storage, which greatly increases the efficiency, convenience, and cost performance of healthcare. The amount of data handled by MIoT devices grows exponentially, which means higher exposure of sensitive data. The security and privacy of the data collected from MIoT devices, either during their transmission to a cloud or while stored in a cloud, are major unsolved concerns. This paper focuses on the security and privacy requirements related to data flow in MIoT. In addition, we make an in-depth study of the existing solutions to security and privacy issues, together with the open challenges and research issues for future work.

Journal ArticleDOI
TL;DR: The finite-time multivariable terminal sliding mode control and composite-loop design are pursued to enable integration into the FTC, which can ensure the safety of the postfault vehicle in a timely manner.
Abstract: This paper proposes a fault-tolerant control (FTC) scheme for a hypersonic gliding vehicle to counteract actuator faults and model uncertainties. Starting from the kinematic and aerodynamic models of the hypersonic vehicle, the control-oriented model subject to actuator faults is built. Observers are designed to estimate the information of actuator faults and model uncertainties, and to guarantee that the estimation errors converge to zero within a fixed settling time. Subsequently, finite-time multivariable terminal sliding mode control and a composite-loop design are pursued to enable integration into the FTC, which can ensure the safety of the postfault vehicle in a timely manner. Simulation studies of a six degree-of-freedom nonlinear model of the hypersonic gliding vehicle are carried out to demonstrate the effectiveness of the investigated FTC system.

Journal ArticleDOI
TL;DR: The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the “disconnectivity” model underlying the pathophysiology of schizophrenia.

Journal ArticleDOI
TL;DR: Two algorithms are proposed, a centralized deployment algorithm and a distributed motion control algorithm that enables each UAV to autonomously control its motion, find the UEs, and converge to on-demand coverage, while the connectivity of the UAV network is maintained.
Abstract: Due to the flying nature of unmanned aerial vehicles (UAVs), it is very attractive to deploy UAVs as aerial base stations and construct airborne networks to provide service for on-ground users at temporary events (such as disaster relief, military operations, and so on). In constructing UAV airborne networks, a challenging problem is how to deploy multiple UAVs for on-demand coverage while at the same time maintaining the connectivity among UAVs. To solve this problem, we propose two algorithms: a centralized deployment algorithm and a distributed motion control algorithm. The first algorithm requires the positions of user equipments (UEs) on the ground and provides the optimal deployment result (i.e., the minimal number of UAVs and their respective positions) after a global computation. This algorithm is applicable to the scenario that requires a minimum number of UAVs to provide desirable service for already known on-ground UEs. In contrast, the second algorithm requires no global information or computation; instead, it enables each UAV to autonomously control its motion, find the UEs, and converge to on-demand coverage. This distributed algorithm is applicable to the scenario of using a given number of UAVs to cover UEs without knowing their specific positions. In both algorithms, the connectivity of the UAV network is maintained. Extensive simulations validate our proposed algorithms.

Journal ArticleDOI
TL;DR: A hybrid structure is introduced that includes a Convolutional Neural Network and an Extreme Learning Machine and integrates the synergy of the two classifiers to deal with age and gender classification.

Journal ArticleDOI
27 Apr 2018-PLOS ONE
TL;DR: Hoaxy, as discussed by the authors, is an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter, and quantifies how effectively the network can be disrupted by penalizing the most central nodes.
Abstract: Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
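
The k-core analysis used here can be reproduced on a toy graph with networkx, whose core_number and k_core functions implement the standard decomposition; the tiny retweet network below is a stand-in for the Hoaxy diffusion data.

```python
import networkx as nx

# Toy undirected retweet network: an edge links a retweeter and the retweeted account.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"),   # a dense, core-like cluster
    ("c", "d"), ("d", "e"),               # peripheral accounts
])

core_number = nx.core_number(G)           # k-shell index per account
k_max = max(core_number.values())
main_core = nx.k_core(G, k=k_max)         # subgraph of the most central accounts
print(sorted(main_core.nodes()))          # -> ['a', 'b', 'c']
```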

Journal ArticleDOI
TL;DR: The core idea is to incorporate expert knowledge of target scattering mechanism interpretation and polarimetric feature mining to assist deep CNN classifier training and improve the final classification performance.
Abstract: Polarimetric synthetic aperture radar (PolSAR) image classification is an important application. Advanced deep learning techniques represented by the deep convolutional neural network (CNN) have been utilized to enhance the classification performance. One current challenge is how to adapt a deep CNN classifier for PolSAR classification with limited training samples, while keeping good generalization performance. This letter attempts to contribute to this problem. The core idea is to incorporate expert knowledge of target scattering mechanism interpretation and polarimetric feature mining to assist deep CNN classifier training and improve the final classification performance. A polarimetric-feature-driven deep CNN classification scheme is established. Both classical roll-invariant polarimetric features and hidden polarimetric features in the rotation domain are used to drive the proposed deep CNN model. Comparison studies validate the efficiency and superiority of the proposal. For the benchmark AIRSAR data, the proposed method achieves the state-of-the-art classification accuracy. Meanwhile, the convergence speed of the proposed polarimetric-feature-driven CNN approach is about 2.3 times faster than that of the normal CNN method. For multitemporal UAVSAR data sets, the proposed scheme achieves comparably high classification accuracy as the normal CNN method for temporal data used in training, while for data not used in training it obtains an average of 4.86% higher overall accuracy than the normal CNN method. Furthermore, the proposed strategy can also produce very promising classification accuracy even with very limited training samples.

Proceedings ArticleDOI
01 Jul 2018
TL;DR: The Reinforced Mnemonic Reader for machine reading comprehension tasks is introduced, which enhances previous attentive readers in two aspects: a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture.
Abstract: In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, so as to address the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.

Journal ArticleDOI
TL;DR: This paper focuses on the process of EMR processing, emphatically analyzes the key techniques, and makes an in-depth study of the applications developed based on text mining, together with the open challenges and research issues for future work.
Abstract: Currently, medical institutes generally use electronic medical records (EMR) to record patients’ conditions, including diagnostic information, procedures performed, and treatment results. EMR has been recognized as a valuable resource for large-scale analysis. However, EMR has the characteristics of diversity, incompleteness, redundancy, and privacy, which make it difficult to carry out data mining and analysis directly. Therefore, it is necessary to preprocess the source data in order to improve data quality and thus the data mining results. Different types of data require different processing technologies. Most structured data commonly needs classic preprocessing technologies, including data cleansing, data integration, data transformation, and data reduction. Semistructured or unstructured data, such as medical text containing richer health information, require more complex and challenging processing methods. The task of information extraction for medical texts mainly includes NER (named-entity recognition) and RE (relation extraction). This paper focuses on the process of EMR processing and emphatically analyzes the key techniques. In addition, we make an in-depth study of the applications developed based on text mining, together with the open challenges and research issues for future work.
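
To make the two extraction tasks named above concrete, here is a toy, rule-based sketch of NER and relation extraction on a synthetic sentence. Real EMR pipelines rely on trained clinical models; the entity types, patterns, and the "treats" relation rule below are purely illustrative.

```python
import re

TEXT = "Patient diagnosed with type 2 diabetes; started on metformin 500 mg daily."

ENTITY_PATTERNS = {
    "DISEASE": r"type 2 diabetes|hypertension",
    "DRUG": r"metformin|insulin",
    "DOSAGE": r"\d+\s?mg",
}

# Named-entity recognition: locate entity mentions and their types.
entities = [(m.group(), label)
            for label, pattern in ENTITY_PATTERNS.items()
            for m in re.finditer(pattern, TEXT, flags=re.IGNORECASE)]

# Relation extraction (rule-based): pair each drug with each disease found in
# the same sentence as a candidate "treats" relation.
drugs = [text for text, label in entities if label == "DRUG"]
diseases = [text for text, label in entities if label == "DISEASE"]
relations = [(d, "treats", dis) for d in drugs for dis in diseases]

print(entities)    # [('type 2 diabetes', 'DISEASE'), ('metformin', 'DRUG'), ('500 mg', 'DOSAGE')]
print(relations)   # [('metformin', 'treats', 'type 2 diabetes')]
```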