
Showing papers by "Xidian University" published in 2016


Proceedings Article
12 Feb 2016
TL;DR: A novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information directly, and which outperforms other state-of-the-art models in such tasks.
Abstract: In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theoretical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach, which involves the use of the SVD for finding the low-dimensional projections from the PMI matrix, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other state-of-the-art models in such tasks.
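The two preprocessing stages described in the abstract are easy to prototype. Below is a minimal numpy sketch, with alpha and steps as assumed hyperparameters; the paper's stacked denoising autoencoder would then compress the PPMI rows into low-dimensional vertex embeddings:

import numpy as np

def random_surfing(A, steps=10, alpha=0.98):
    # Row-stochastic transition matrix of the (weighted) adjacency matrix A.
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    n = A.shape[0]
    Pk = np.eye(n)                      # p_0: each surf starts at its own vertex
    M = np.zeros((n, n))
    for _ in range(steps):
        # Continue surfing with probability alpha, restart at p_0 otherwise.
        Pk = alpha * (Pk @ P) + (1 - alpha) * np.eye(n)
        M += Pk                         # accumulate probabilistic co-occurrences
    return M

def ppmi(M):
    # Positive pointwise mutual information of the co-occurrence matrix.
    total = M.sum()
    row, col = M.sum(axis=1, keepdims=True), M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(M * total / (row * col))
    return np.maximum(np.nan_to_num(pmi, neginf=0.0), 0.0)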

919 citations


Journal ArticleDOI
TL;DR: This paper investigates partial computation offloading by jointly optimizing the computational speed of the smart mobile device (SMD), the transmit power of the SMD, and the offloading ratio, with two system design objectives: minimization of the SMD's energy consumption (ECM) and minimization of the application execution latency (LM).
Abstract: The incorporation of dynamic voltage scaling technology into computation offloading offers more flexibilities for mobile edge computing. In this paper, we investigate partial computation offloading by jointly optimizing the computational speed of smart mobile device (SMD), transmit power of SMD, and offloading ratio with two system design objectives: energy consumption of SMD minimization (ECM) and latency of application execution minimization (LM). Considering the case that the SMD is served by a single cloud server, we formulate both the ECM problem and the LM problem as nonconvex problems. To tackle the ECM problem, we recast it as a convex one with the variable substitution technique and obtain its optimal solution. To address the nonconvex and nonsmooth LM problem, we propose a locally optimal algorithm with the univariate search technique. Furthermore, we extend the scenario to a multiple cloud servers system, where the SMD could offload its computation to a set of cloud servers. In this scenario, we obtain the optimal computation distribution among cloud servers in closed form for the ECM and LM problems. Finally, extensive simulations demonstrate that our proposed algorithms can significantly reduce the energy consumption and shorten the latency with respect to the existing offloading schemes.
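As an illustration of the univariate search used for the LM problem, here is a hedged sketch: the toy latency model and all constants are assumptions for demonstration only, and the real scheme jointly optimizes computational speed and transmit power as well. A golden-section search applies because the maximum of a decreasing and an increasing term is unimodal in the offloading ratio:

import numpy as np

def latency(lam, C=1e9, f=1e9, D=1e6, r=5e6, f_cloud=1e10):
    # Toy latency model (all constants assumed): the local share (1 - lam)
    # of the C CPU cycles runs in parallel with offloading lam of the data.
    t_local = (1 - lam) * C / f
    t_offload = lam * D / r + lam * C / f_cloud
    return max(t_local, t_offload)

def univariate_search(obj, lo=0.0, hi=1.0, tol=1e-6):
    # Golden-section search over a single variable (the offloading ratio).
    g = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        x1, x2 = b - g * (b - a), a + g * (b - a)
        if obj(x1) < obj(x2):
            b = x2
        else:
            a = x1
    return (a + b) / 2

lam_star = univariate_search(latency)   # optimal offloading ratio for the toy model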

819 citations


Journal ArticleDOI
TL;DR: The capability of a deep convolutional neural network (CNN) combined with three types of data augmentation operations in SAR target recognition is investigated, showing that it is a practical approach for target recognition in challenging conditions of target translation, random speckle noise, and missing pose.
Abstract: Many methods have been proposed to improve the performance of synthetic aperture radar (SAR) target recognition but seldom consider the issues in real-world recognition systems, such as the invariance under target translation, the invariance under speckle variation in different observations, and the tolerance of pose missing in training data. In this letter, we investigate the capability of a deep convolutional neural network (CNN) combined with three types of data augmentation operations in SAR target recognition. Experimental results demonstrate the effectiveness and efficiency of the proposed method. The best performance is obtained by using the CNN trained by all types of augmentation operations, showing that it is a practical approach for target recognition in challenging conditions of target translation, random speckle noise, and missing pose.
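Two of the three augmentation operations (translation and speckle variation) can be sketched directly; pose synthesis requires aspect-angle interpolation and is omitted here. A minimal numpy sketch with assumed parameters:

import numpy as np

def random_translate(img, max_shift=8):
    # Circular shift as a simple stand-in for target translation; a real
    # pipeline would zero-pad and crop instead of wrapping around.
    dy, dx = np.random.randint(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def add_speckle(intensity_img, looks=4):
    # Multiplicative speckle: unit-mean gamma noise, as in L-look SAR imagery.
    noise = np.random.gamma(shape=looks, scale=1.0 / looks, size=intensity_img.shape)
    return intensity_img * noise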

582 citations


Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes an End-to-End learning approach to address ordinal regression problems using a deep Convolutional Neural Network, which could simultaneously conduct feature learning and regression modeling, and achieves state-of-the-art performance on both the MORPH and AFAD datasets.
Abstract: To address the non-stationary property of aging patterns, age estimation can be cast as an ordinal regression problem. However, the processes of extracting features and learning a regression model are often separated and optimized independently in previous work. In this paper, we propose an End-to-End learning approach to address ordinal regression problems using a deep Convolutional Neural Network (CNN), which could simultaneously conduct feature learning and regression modeling. In particular, an ordinal regression problem is transformed into a series of binary classification sub-problems. We then propose a multiple-output CNN learning algorithm to collectively solve these classification sub-problems, so that the correlation between these tasks could be explored. In addition, we publish an Asian Face Age Dataset (AFAD) containing more than 160K facial images with precise age ground-truths, which is the largest public age dataset to date. To the best of our knowledge, this is the first work to address ordinal regression problems by using a CNN, and it achieves state-of-the-art performance on both the MORPH and AFAD datasets.
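The core transformation, from an ordinal label to a series of binary classification targets and back, is compact enough to sketch (K and the 0.5 decoding threshold are assumed values here):

import numpy as np

def age_to_binary_targets(age, K=100):
    # Task k (k = 1 .. K-1) asks "is the age greater than k?"; a
    # multiple-output CNN is trained with one binary head per task.
    return (age > np.arange(1, K)).astype(np.float32)

def binary_outputs_to_age(probs, threshold=0.5):
    # Decode: predicted age = 1 + number of tasks answering "yes".
    return 1 + int((probs > threshold).sum())

targets = age_to_binary_targets(25)   # 24 ones followed by 75 zeros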

562 citations


Proceedings ArticleDOI
31 Oct 2016
TL;DR: A Deep-learning-based prediction model for Spatio-Temporal data (DeepST), which is comprised of two components, spatio-temporal and global, and on which a real-time crowd flow forecasting system called UrbanFlow is built.
Abstract: Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a Deep-learning-based prediction model for Spatio-Temporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow. Experimental results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST over four baseline methods.
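A hedged PyTorch sketch of a DeepST-style architecture follows; the layer sizes, fusion-by-summation, and the tanh output are assumptions for illustration, not the paper's exact configuration:

import torch
import torch.nn as nn

class DeepSTLike(nn.Module):
    # Three convolutional branches for temporal closeness / period / trend
    # frames, plus a global branch for calendar features (e.g. day-of-week).
    def __init__(self, n_frames=3, channels=2, grid=(32, 32), n_global=8):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(n_frames * channels, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, channels, 3, padding=1))
        self.closeness, self.period, self.trend = branch(), branch(), branch()
        self.global_fc = nn.Sequential(
            nn.Linear(n_global, 10), nn.ReLU(),
            nn.Linear(10, channels * grid[0] * grid[1]))
        self.grid, self.channels = grid, channels

    def forward(self, xc, xp, xt, g):
        st = self.closeness(xc) + self.period(xp) + self.trend(xt)
        glob = self.global_fc(g).view(-1, self.channels, *self.grid)
        return torch.tanh(st + glob)   # predicted crowd in/out flow grid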

544 citations


Journal ArticleDOI
Maoguo Gong, Jiaojiao Zhao, Jia Liu, Qiguang Miao, Licheng Jiao
TL;DR: This paper presents a novel change detection approach for synthetic aperture radar images based on deep learning that accomplishes the detection of the changed and unchanged areas by designing a deep neural network.
Abstract: This paper presents a novel change detection approach for synthetic aperture radar images based on deep learning. The approach accomplishes the detection of the changed and unchanged areas by designing a deep neural network. The main guideline is to produce a change detection map directly from two images with the trained deep neural network. The method can omit the process of generating a difference image (DI) that shows difference degrees between multitemporal synthetic aperture radar images. Thus, it can avoid the effect of the DI on the change detection results. The learning algorithm for deep architectures includes unsupervised feature learning and supervised fine-tuning to complete classification. The unsupervised feature learning aims at learning the representation of the relationships between the two images. In addition, the supervised fine-tuning aims at learning the concepts of the changed and unchanged pixels. Experiments on real data sets and theoretical analysis indicate the advantages, feasibility, and potential of the proposed method. Moreover, building on the results achieved by various traditional algorithms, deep learning can further improve the detection performance.

513 citations


Journal ArticleDOI
TL;DR: A new hyperspectral image super-resolution method from a low-resolution (LR) image and a high-resolution (HR) reference image of the same scene, which improves the accuracy of non-negative sparse coding and exploits the spatial correlation among the learned sparse codes.
Abstract: Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and an HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.
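The non-negative dictionary learning step can be sketched with standard multiplicative updates, used here as a simple stand-in for the paper's block-coordinate descent (the dictionary size k and iteration count are assumed):

import numpy as np

def nonneg_dictionary(X, k=80, iters=200, eps=1e-9, seed=0):
    # Learn non-negative spectra D and codes S with X ~= D @ S via
    # Lee-Seung multiplicative updates; columns of X are pixel spectra.
    m, n = X.shape
    rng = np.random.default_rng(seed)
    D = rng.random((m, k)) + eps
    S = rng.random((k, n)) + eps
    for _ in range(iters):
        S *= (D.T @ X) / (D.T @ D @ S + eps)    # update the sparse codes
        D *= (X @ S.T) / (D @ S @ S.T + eps)    # update the dictionary atoms
    norms = np.linalg.norm(D, axis=0) + eps
    return D / norms, S * norms[:, None]        # unit-norm atoms, rescaled codes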

404 citations


Journal ArticleDOI
TL;DR: It is proved that no Zeno behavior is exhibited under the proposed ETCC, and a self-triggered consensus controller (STCC) is proposed to relax the requirement of continuous monitoring of each agent's own states.

377 citations


Journal ArticleDOI
TL;DR: An MOEA based on decision variable analyses (DVAs) is proposed, in which control variable analysis is used to recognize the conflicts among objective functions.
Abstract: State-of-the-art multiobjective evolutionary algorithms (MOEAs) treat all the decision variables as a whole to optimize performance. Inspired by the cooperative coevolution and linkage learning methods in the field of single-objective optimization, it is interesting to decompose a difficult high-dimensional problem into a set of simpler and low-dimensional subproblems that are easier to solve. However, with no prior knowledge about the objective function, it is not clear how to decompose the objective function. Moreover, it is difficult to use such a decomposition method to solve multiobjective optimization problems (MOPs) because their objective functions commonly conflict with one another. That is to say, changing decision variables will generate incomparable solutions. This paper introduces interdependence variable analysis and control variable analysis to deal with the above two difficulties. Thereby, an MOEA based on decision variable analyses (DVAs) is proposed in this paper. Control variable analysis is used to recognize the conflicts among objective functions; more specifically, it identifies which variables affect the diversity of generated solutions and which variables play an important role in the convergence of the population. Based on learned variable linkages, interdependence variable analysis decomposes decision variables into a set of low-dimensional subcomponents. The empirical studies show that DVA can improve the solution quality on most difficult MOPs. The code and supplementary material of the proposed algorithm are available at http://web.xidian.edu.cn/fliu/paper.html .
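The control variable analysis can be illustrated with a small sketch: perturb one decision variable at a time and measure how many of the resulting objective vectors are mutually nondominated. A fraction near 1 suggests a diversity-related variable, near 0 a convergence-related one; the sample count is an assumed parameter:

import numpy as np

def dominates(a, b):
    # Pareto dominance for minimization.
    return np.all(a <= b) and np.any(a < b)

def control_variable_analysis(f, x, i, lo, hi, n_samples=20):
    # Perturb decision variable i only and evaluate the objective vectors.
    F = [f(np.where(np.arange(len(x)) == i, v, x))
         for v in np.linspace(lo, hi, n_samples)]
    nondom = sum(not dominates(a, b) and not dominates(b, a)
                 for j, a in enumerate(F) for b in F[j + 1:])
    return nondom / (n_samples * (n_samples - 1) / 2)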

301 citations


Journal ArticleDOI
TL;DR: This paper presents the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization, formalizes the security definition, and proves the proposed ABKS-UR scheme selectively secure against chosen-keyword attack.
Abstract: Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have been focusing on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset is encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed by multiple owners and is searchable by multiple users, i.e., the multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. To build the confidence of data users in the proposed secure search system, we also design a search result verification scheme. Finally, performance evaluation shows the efficiency of our scheme.

279 citations


Journal ArticleDOI
08 Apr 2016-ACS Nano
TL;DR: This review focuses on recent developments of nanoscintillators with high energy transfer efficiency, their rational designs, as well as potential applications in next-generation PDT.
Abstract: Achieving effective treatment of deep-seated tumors is a major challenge for traditional photodynamic therapy (PDT) due to difficulties in delivering light into the subsurface. Thanks to their great tissue penetration, X-rays hold the potential to become an ideal excitation source for activating photosensitizers (PS) that accumulate in deep tumor tissue. Recently, a wide variety of nanoparticles have been developed for this purpose. The nanoparticles are designed as carriers for loading various kinds of PSs and can facilitate the activation process by transferring energy harvested from X-ray irradiation to the loaded PS. In this review, we focus on recent developments of nanoscintillators with high energy transfer efficiency, their rational designs, as well as potential applications in next-generation PDT. Treatment of deep-seated tumors by using radioisotopes as an internal light source will also be discussed.

Proceedings ArticleDOI
27 Jun 2016
TL;DR: This paper presents a probabilistic collaborative representation based classifier (ProCRC), which jointly maximizes the likelihood that a test sample belongs to each of the multiple classes, and shows superior performance to many popular classifiers, including SRC, CRC and SVM.
Abstract: Conventional representation based classifiers, ranging from the classical nearest neighbor classifier and nearest subspace classifier to the recently developed sparse representation based classifier (SRC) and collaborative representation based classifier (CRC), are essentially distance based classifiers. Though SRC and CRC have shown interesting classification results, their intrinsic classification mechanism remains unclear. In this paper we propose a probabilistic collaborative representation framework, where the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and computed. Consequently, we present a probabilistic collaborative representation based classifier (ProCRC), which jointly maximizes the likelihood that a test sample belongs to each of the multiple classes. The final classification is performed by checking which class has the maximum likelihood. The proposed ProCRC has a clear probabilistic interpretation, and it shows superior performance to many popular classifiers, including SRC, CRC and SVM. Coupled with the CNN features, it also leads to state-of-the-art classification results on a variety of challenging visual datasets.
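A hedged sketch of the collaborative-representation core follows; the published ProCRC likelihood is more refined than the plain softmax-over-residuals used here, so treat this only as the flavor of the classifier (columns of X are training samples, labels is a per-column class array, and lam is an assumed regularization weight):

import numpy as np

def procrc_like(X, labels, y, lam=1e-3):
    # Collaborative ridge coding of the test sample y over all classes jointly.
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    classes = np.unique(labels)
    # Softmax over negative class-wise reconstruction errors.
    scores = np.array([-np.sum((y - X[:, labels == c] @ alpha[labels == c]) ** 2)
                       for c in classes])
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return classes[int(np.argmax(p))], p   # predicted class and class "probabilities"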

Journal ArticleDOI
TL;DR: A general Inc-VDB framework is proposed by incorporating the primitive of vector commitment and the encrypt-then-incremental MAC mode of encryption, and it is proved that the construction can achieve the desired security properties.
Abstract: The notion of verifiable database (VDB) enables a resource-constrained client to securely outsource a very large database to an untrusted server so that it could later retrieve a database record and update a record by assigning a new value. Also, any attempt by the server to tamper with the data will be detected by the client. When the database undergoes frequent but small modifications, the client must re-compute and update the encrypted version (ciphertext) on the server at all times. For very large data, it is extremely expensive for the resource-constrained client to perform both operations from scratch. In this paper, we formalize the notion of verifiable database with incremental updates (Inc-VDB). Besides, we propose a general Inc-VDB framework by incorporating the primitive of vector commitment and the encrypt-then-incremental MAC mode of encryption. We also present a concrete Inc-VDB scheme based on the computational Diffie-Hellman (CDH) assumption. Furthermore, we prove that our construction can achieve the desired security properties.

Journal ArticleDOI
01 Aug 2016-Brain
TL;DR: This work draws attention to the identification of diametrically opposing patterns of variability changes between schizophrenia and attention deficit hyperactivity disorder/autism and provides insights into the dynamic organization of the resting brain and how it changes in brain disorders.
Abstract: See Mattar et al. (DOI 10.1093/aww151) for a scientific commentary on this article. Functional brain networks demonstrate significant temporal variability and dynamic reconfiguration even in the resting state. Currently, most studies investigate temporal variability of brain networks at the scale of single (micro) or whole-brain (macro) connectivity. However, the mechanism underlying time-varying properties remains unclear, as the coupling between brain network variability and neural activity is not readily apparent when analysed at either micro or macroscales. We propose an intermediate (meso) scale analysis and characterize temporal variability of the functional architecture associated with a particular region. This yields a topography of variability that reflects the whole-brain and, most importantly, creates an analytical framework to establish the fundamental relationship between variability of regional functional architecture and its neural activity or structural connectivity. We find that temporal variability reflects the dynamical reconfiguration of a brain region into distinct functional modules at different times and may be indicative of brain flexibility and adaptability. Primary and unimodal sensory-motor cortices demonstrate low temporal variability, while transmodal areas, including heteromodal association areas and limbic system, demonstrate high variability. In particular, regions with highest variability such as hippocampus/parahippocampus, inferior and middle temporal gyrus, olfactory gyrus and caudate are all related to learning, suggesting that the temporal variability may indicate the level of brain adaptability. With simultaneously recorded electroencephalography/functional magnetic resonance imaging and functional magnetic resonance imaging/diffusion tensor imaging data, we also find that variability of regional functional architecture is modulated by local blood oxygen level-dependent activity and α-band oscillation, and is governed by the ratio of intra- to inter-community structural connectivity. Application of the mesoscale variability measure to multicentre datasets of three mental disorders and matched controls involving 1180 subjects reveals that those regions demonstrating extreme, i.e. highest/lowest variability in controls are most liable to change in mental disorders. Specifically, we draw attention to the identification of diametrically opposing patterns of variability changes between schizophrenia and attention deficit hyperactivity disorder/autism. Regions of the default-mode network demonstrate lower variability in patients with schizophrenia, but high variability in patients with autism/attention deficit hyperactivity disorder, compared with respective controls. In contrast, subcortical regions, especially the thalamus, show higher variability in schizophrenia patients, but lower variability in patients with attention deficit hyperactivity disorder. The changes in variability of these regions are also closely related to symptom scores. Our work provides insights into the dynamic organization of the resting brain and how it changes in brain disorders. The nodal variability measure may also be potentially useful as a predictor for learning and neural rehabilitation.
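The mesoscale variability measure lends itself to a short sketch: compute each region's whole-brain connectivity profile in sliding windows, then define variability as one minus the mean similarity of those profiles across windows (window and step sizes are assumed values):

import numpy as np

def regional_variability(ts, win=50, step=10):
    # ts: T x N matrix of regional time series (T time points, N regions).
    T, N = ts.shape
    profiles = np.stack([np.corrcoef(ts[s:s + win].T)
                         for s in range(0, T - win + 1, step)])   # W x N x N
    var = np.empty(N)
    for i in range(N):
        P = np.delete(profiles[:, i, :], i, axis=1)   # region i's profile per window
        C = np.corrcoef(P)                            # W x W profile similarity
        w = C.shape[0]
        var[i] = 1 - (C.sum() - w) / (w * (w - 1))    # 1 - mean off-diagonal corr
    return var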

Journal ArticleDOI
Shixing Yu, Long Li, Guangming Shi, Cheng Zhu, Xiaoxiao Zhou, Yan Shi
TL;DR: In this paper, a reflective metasurface is designed, fabricated, and experimentally demonstrated to generate an orbital angular momentum (OAM) vortex wave in the radio frequency domain.
Abstract: In this paper, a reflective metasurface is designed, fabricated, and experimentally demonstrated to generate an orbital angular momentum (OAM) vortex wave in the radio frequency domain. A theoretical formula for the phase-shift distribution is deduced and used to design the metasurface producing vortex radio waves. The prototype of a practical configuration is designed, fabricated, and measured to validate the theoretical analysis at 5.8 GHz. The simulated and experimental results verify that vortex waves with different OAM mode numbers can be flexibly generated by using sub-wavelength reflective metasurfaces. The proposed method and metasurface pave the way to generating OAM vortex waves for radio and microwave wireless communication applications.
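The phase-shift distribution for such a reflective metasurface follows the textbook rule of adding a spiral phase term to a feed-compensation term; a sketch with assumed geometry values (feed height, cell size):

import numpy as np

def oam_phase_map(nx, ny, cell, mode=1, freq=5.8e9, feed_height=0.3):
    # Required reflection phase per unit cell: a spiral term carrying the
    # OAM mode plus compensation of the feed's spherical path length.
    k = 2 * np.pi * freq / 3e8
    xs = (np.arange(nx) - (nx - 1) / 2) * cell
    ys = (np.arange(ny) - (ny - 1) / 2) * cell
    X, Y = np.meshgrid(xs, ys)
    spiral = mode * np.arctan2(Y, X)                     # azimuthal OAM term
    comp = k * np.sqrt(X ** 2 + Y ** 2 + feed_height ** 2)
    return np.mod(spiral + comp, 2 * np.pi)

phase = oam_phase_map(20, 20, cell=0.5 * 3e8 / 5.8e9)   # half-wavelength cells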

Journal ArticleDOI
TL;DR: A new approach to reducing the monostatic radar cross section (RCS) of a slot array antenna while preserving its radiation characteristics, by employing polarization conversion metasurfaces (PCMs), is presented in this communication.
Abstract: A new approach to reducing the monostatic radar cross section (RCS) and preserving the radiation characteristics of a slot array antenna by employing polarization conversion metasurfaces (PCMs) is presented in this communication. The PCM is arranged in a chessboard configuration consisting of fishbone-shaped elements. It is placed on the surface of the slot array antenna. The characteristics and mechanism of the RCS reduction are analyzed. Simulated and experimental results show that the monostatic RCS reduction band of the antenna with PCM ranges between 6.0 and 18.0 GHz for both normally impinging $x$- and $y$-polarized waves. The radiation characteristics of the antenna are well preserved simultaneously in terms of the impedance bandwidth, radiation patterns, and realized boresight gains.

Journal ArticleDOI
TL;DR: A deep feature based framework for the breast mass classification task that mainly contains a convolutional neural network (CNN) and a decision mechanism, to better simulate the diagnostic procedure performed by doctors; it achieves state-of-the-art performance.

Journal ArticleDOI
Shixing Yu, Long Li, Guangming Shi, Cheng Zhu, Yan Shi
TL;DR: In this paper, an electromagnetic metasurface is designed, fabricated, and experimentally demonstrated to generate multiple orbital angular momentum (OAM) vortex beams in the radio frequency domain.
Abstract: In this paper, an electromagnetic metasurface is designed, fabricated, and experimentally demonstrated to generate multiple orbital angular momentum (OAM) vortex beams in the radio frequency domain. A theoretical formula for the compensated phase-shift distribution is deduced and used to design the metasurface to produce multiple vortex radio waves in different directions with different OAM modes. The prototype of a practical square-patch metasurface configuration is designed, fabricated, and measured to validate the theoretical analysis at 5.8 GHz. The simulated and experimental results verify that multiple OAM vortex waves can be simultaneously generated by using a single electromagnetic metasurface. The proposed method paves an effective way to generate multiple OAM vortex waves for radio and microwave wireless communication applications.

Journal ArticleDOI
Puzhao Zhang, Maoguo Gong, Linzhi Su, Jia Liu, Li Zhizhou
TL;DR: This paper presents a novel multi-spatial-resolution change detection framework, which incorporates deep-architecture-based unsupervised feature learning and mapping-based feature change analysis, and tries to explore the inner relationships between them by building a mapping neural network.
Abstract: Multi-spatial-resolution change detection is a newly proposed issue and it is of great significance in remote sensing, environmental and land use monitoring, etc. Though a multi-spatial-resolution image pair consists of two representations of the same reality, the two images are often superficially incommensurable due to their different modalities and properties. In this paper, we present a novel multi-spatial-resolution change detection framework, which incorporates deep-architecture-based unsupervised feature learning and mapping-based feature change analysis. Firstly, we transform the multi-resolution image pair into the same pixel resolution through co-registration, followed by details recovery, which is designed to remedy the spatial details lost in the registration. Secondly, the denoising autoencoder is stacked to learn local and high-level representations/features from the local neighborhood of the given pixel, in an unsupervised fashion. Thirdly, motivated by the fact that the multi-resolution image pair shares the same reality in the unchanged regions, we try to explore the inner relationships between them by building a mapping neural network, which can be used to learn a mapping function based on the most-unlikely-changed feature pairs, selected from all the feature pairs via a coarse initial change map generated in advance. The learned mapping function can bridge the different representations and highlight changes. Finally, we can build a robust and contractive change map through feature similarity analysis, and the change detection result is obtained through the segmentation of the final change map. Experiments are carried out on four real datasets, and the results confirm the effectiveness and superiority of the proposed method.

Journal ArticleDOI
TL;DR: A new privacy-preserving patient-centric clinical decision support system, which helps clinicians diagnose the risk of patients' disease in a privacy-preserving way and can efficiently calculate a patient's disease risk with high accuracy.
Abstract: Clinical decision support systems, which use advanced data mining techniques to help clinicians make proper decisions, have received considerable attention recently. The advantages of a clinical decision support system include not only improving diagnosis accuracy but also reducing diagnosis time. Specifically, with large amounts of clinical data generated every day, naive Bayesian classification can be utilized to excavate valuable information to improve a clinical decision support system. Although the clinical decision support system is quite promising, its flourishing still faces many challenges, including information security and privacy concerns. In this paper, we propose a new privacy-preserving patient-centric clinical decision support system, which helps clinicians diagnose the risk of patients' disease in a privacy-preserving way. In the proposed system, past patients' historical data are stored in the cloud and can be used to train the naive Bayesian classifier without leaking any individual patient's medical data, and the trained classifier can then be applied to compute the disease risk for newly coming patients and also allow these patients to retrieve the top-$k$ disease names according to their own preferences. Specifically, to protect the privacy of past patients' historical data, a new cryptographic tool called the additive homomorphic proxy aggregation scheme is designed. Moreover, to alleviate the leakage of the naive Bayesian classifier, we introduce a privacy-preserving top-$k$ disease names retrieval protocol in our system. Detailed privacy analysis ensures that a patient's information is private and will not be leaked out during the disease diagnosis phase. In addition, performance evaluation via extensive simulations also demonstrates that our system can efficiently calculate a patient's disease risk with high accuracy in a privacy-preserving way.

Journal ArticleDOI
TL;DR: It is demonstrated that the replacement neighborhood size is critical for population diversity and convergence, and an approach for adjusting this size dynamically is developed.
Abstract: Multiobjective evolutionary algorithms based on decomposition (MOEA/D) decompose a multiobjective optimization problem into a set of simple optimization subproblems and solve them in a collaborative manner. A replacement scheme, which assigns a new solution to a subproblem, plays a key role in balancing diversity and convergence in MOEA/D. This paper proposes a global replacement scheme which assigns a new solution to its most suitable subproblems. We demonstrate that the replacement neighborhood size is critical for population diversity and convergence, and develop an approach for adjusting this size dynamically. A steady-state algorithm and a generational one with this approach have been designed and experimentally studied. The experimental results on a number of test problems have shown that the proposed algorithms have some advantages.
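A minimal sketch of the global replacement idea, using the Tchebycheff aggregation; in the paper the replacement neighborhood size is adjusted dynamically, while here Tr is a fixed assumed value:

import numpy as np

def tchebycheff(F, w, z):
    # Aggregated objective of solution F on the subproblem with weights w.
    return np.max(w * np.abs(F - z), axis=-1)

def global_replacement(F_new, W, pop_F, z, Tr=10):
    g_new = tchebycheff(F_new, W, z)        # F_new's value on every subproblem
    k = int(np.argmin(g_new))               # most suitable subproblem for F_new
    neigh = np.argsort(np.linalg.norm(W - W[k], axis=1))[:Tr]
    for j in neigh:                         # replace within the Tr nearest subproblems
        if g_new[j] < tchebycheff(pop_F[j], W[j], z):
            pop_F[j] = F_new
    return k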

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes a propagation step with a constrained random search radius between adjacent levels of the hierarchical architecture; the resulting method outperforms the state of the art on MPI-Sintel and KITTI, and runs much faster than competing methods.
Abstract: As a key component in many computer vision systems, optical flow estimation, especially with large displacements, remains an open problem. In this paper we present a simple but powerful matching method that works in a coarse-to-fine scheme for optical flow estimation. Inspired by nearest neighbor field (NNF) algorithms, our approach, called CPM (Coarse-to-fine PatchMatch), blends an efficient random search strategy with the coarse-to-fine scheme for the optical flow problem. Unlike existing NNF techniques, which are efficient but often too noisy for optical flow due to the lack of global regularization, we propose a propagation step with a constrained random search radius between adjacent levels of the hierarchical architecture. The resulting correspondences enjoy a built-in smoothing effect, which makes them better suited for optical flow estimation than NNF techniques. Furthermore, our approach can also capture tiny structures with large motions, which are a problem for traditional coarse-to-fine optical flow algorithms. Interpolated by an edge-preserving interpolation method (EpicFlow), our method outperforms the state of the art on MPI-Sintel and KITTI, and runs much faster than the competing methods.
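One CPM pass at a single pyramid level can be sketched as follows; cost is an assumed patch-dissimilarity callable, and the constrained, exponentially shrinking search radius is the key difference from unconstrained NNF search:

import numpy as np

def cpm_level(flow, cost, radius_max):
    # flow: H x W x 2 match field at the current pyramid level;
    # cost(x, y, offset) is an assumed patch-dissimilarity callable.
    H, W = flow.shape[:2]
    for y in range(H):
        for x in range(W):
            best = flow[y, x]
            # Propagation: adopt the left/top neighbour's match if it is better.
            for ny, nx in ((y, x - 1), (y - 1, x)):
                if ny >= 0 and nx >= 0 and cost(x, y, flow[ny, nx]) < cost(x, y, best):
                    best = flow[ny, nx]
            # Constrained random search: radius capped at radius_max and halved
            # each try, giving the field its built-in smoothing effect.
            r = radius_max
            while r >= 1:
                cand = best + np.random.uniform(-r, r, size=2)
                if cost(x, y, cand) < cost(x, y, best):
                    best = cand
                r /= 2
            flow[y, x] = best
    return flow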

Journal ArticleDOI
TL;DR: It is shown that increasing the number of channels may result in an increase of outage probability in the D2D-enabled cellular network, and a unified framework is provided to analyze the downlink outage probabilities in a multichannel environment with Rayleigh fading.
Abstract: In this paper, we study the outage probability of device-to-device (D2D)-communication-enabled cellular networks from a general threshold-based perspective. Specifically, a mobile user equipment (UE) transmits in D2D mode if the received signal strength (RSS) from the nearest base station (BS) is less than a specified threshold $\beta \ge 0$ ; otherwise, it connects to the nearest BS and transmits in cellular mode. The RSS-threshold-based setting is general in the sense that by varying $\beta$ from $\beta = 0$ to $\beta = \infty$ , the network accordingly evolves from a traditional cellular network (including only cellular mode) toward a wireless ad hoc network (including only D2D mode). We provide a unified framework to analyze the downlink outage probability in a multichannel environment with Rayleigh fading, where the spatial distributions of BSs and UEs are well explicitly accounted for by utilizing stochastic geometry. We derive closed-form expressions for the outage probability of a generic UE and that in both cellular mode and D2D mode and quantify the performance gains in outage probability that can be obtained by allowing such RSS-threshold-based D2D communications. We show that increasing the number of channels, although able to support more cellular UEs, may result in an increase of outage probability in the D2D-enabled cellular network. The corresponding condition and reason are also identified by applying our framework.
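The threshold-based mode selection rule is simple to state in code; this path-loss-only sketch (assumed transmit power and path-loss exponent) ignores the Rayleigh fading that the paper's stochastic-geometry analysis accounts for:

import numpy as np

def select_mode(ue_xy, bs_xy, beta, p_tx=1.0, alpha=4.0):
    # RSS from the nearest BS under a pure path-loss model (no fading).
    d_nearest = np.linalg.norm(ue_xy[:, None, :] - bs_xy[None, :, :], axis=2).min(axis=1)
    rss = p_tx * d_nearest ** (-alpha)
    # beta = 0 recovers a pure cellular network; beta -> infinity a pure D2D one.
    return np.where(rss < beta, "D2D", "cellular")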

Journal ArticleDOI
TL;DR: This paper proposes an efficient anonymous batch authentication scheme (ABAH) that replaces the CRL checking process by calculating a hash message authentication code (HMAC), using HMAC to avoid time-consuming CRL checking and to ensure the integrity of messages that may get lost in previous batch authentications.
Abstract: In vehicular ad hoc networks (VANETs), when a vehicle receives a message, the certificate revocation list (CRL) checking process runs before certificate and signature verification. However, CRLs require considerable communication resources, storage space, and checking time, and they raise a privacy disclosure issue as well. To address these issues, in this paper, we propose an efficient anonymous batch authentication scheme (ABAH) to replace the CRL checking process by calculating the hash message authentication code (HMAC). In our scheme, we first divide the precinct into several domains, in which road-side units (RSUs) manage vehicles in a localized manner. Then, we adopt pseudonyms to achieve privacy preservation and realize batch authentication by using an identity-based signature (IBS). Finally, we use HMAC to avoid the time-consuming CRL checking and to ensure the integrity of messages that may get lost in previous batch authentications. The security and performance analysis are carried out to demonstrate that ABAH is more efficient in terms of verification delay than conventional authentication methods employing CRLs. Meanwhile, our solution can keep conditional privacy in VANETs.
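The HMAC step itself is standard; a sketch using Python's hmac module, with the domain-key distribution (handled by RSUs per domain in the scheme) assumed to have happened already:

import hmac
import hashlib

def make_tag(domain_key: bytes, message: bytes) -> bytes:
    # Sender side: tag the (pseudonymous, IBS-signed) message with the
    # shared domain key so receivers can skip the CRL lookup.
    return hmac.new(domain_key, message, hashlib.sha256).digest()

def verify_batch(domain_key: bytes, msgs_and_tags) -> list:
    # Receiver side: per-message HMAC check with constant-time comparison.
    return [hmac.compare_digest(make_tag(domain_key, m), t) for m, t in msgs_and_tags]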

Journal ArticleDOI
TL;DR: In this article, a broadband polarization rotation reflective surface (PRRS) with a high polarization conversion ratio (PCR) is proposed, which can reflect the linearly polarized incident wave with 90° PR.
Abstract: A novel broadband polarization rotation (PR) reflective surface (PRRS) with a high polarization conversion ratio (PCR) is proposed, which can reflect the linearly polarized incident wave with 90° PR. The proposed PRRS consists of a periodic array of square patches printed on a substrate, which is backed by a metallic ground. By connecting the square patch with the ground using two nonsymmetric vias, a 49% PR bandwidth is achieved with a high PCR of 96%, which is a significant improvement over the state-of-the-art 29% PR bandwidth. Moreover, the frequency responses within the operation frequency band are consistent under obliquely incident waves. Furthermore, another ultra-wideband PRRS with a periodic array of quasi-L-shaped patches is proposed, which increases the PR bandwidth further to 103%. In addition, the designed PRRS is applied to wideband radar cross section (RCS) reduction. Different arrangements of the unit cells of the PRRS are proposed and their effects on RCS reduction are investigated. To validate the simulation results, prototypes of the PRRSs are fabricated and measured. The measured results are in good agreement with the simulated ones.

Journal ArticleDOI
TL;DR: In this article, a piezoelectric-composite slurry with BaTiO3 nanoparticles (100 nm) was 3D printed using Mask-Image-Projection-based Stereolithography (MIP-SL) technology.

Journal ArticleDOI
TL;DR: The necessary and sufficient conditions used to construct three-way concepts on the basis of classical concepts are proved, and algorithms for building three-way concept lattices on the basis of classical concept lattices are presented.
Abstract: The model of three-way concept lattices, a novel model for widely used three-way decisions, is an extension of classical concept lattices in formal concept analysis. This paper systematically analyses the connections between two types of three-way concept lattices (object-induced and attribute-induced three-way concept lattices) and classical concept lattices. The relationships are discussed from the viewpoints of elements, sets and orders, respectively. Furthermore, the necessary and sufficient conditions used to construct three-way concepts on the basis of classical concepts are proved, and the algorithms building three-way concept lattices on the basis of classical concept lattices are presented. The obtained results are finally demonstrated and verified by examples.

Book ChapterDOI
Yupu Hu, Huiwen Jia
08 May 2016
TL;DR: In this paper, the authors present several efficient attacks on the GGH map, aiming at multipartite key exchange (MKE) and the instance of witness encryption (WE) based on the hardness of the exact-3-cover (X3C) problem.
Abstract: The multilinear map is a novel primitive which has many cryptographic applications, and the GGH map is a major candidate of K-linear maps for $K>2$. The GGH map has two classes of applications: applications with public tools for encoding and with hidden tools for encoding. In this paper, we show that applications of the GGH map with public tools for encoding are not secure, and that one application of the GGH map with hidden tools for encoding is not secure. On the basis of the weak-DL attack presented by the authors themselves, we present several efficient attacks on the GGH map, aiming at multipartite key exchange (MKE) and the instance of witness encryption (WE) based on the hardness of the exact-3-cover (X3C) problem. First, we use special modular operations, which we call modified Encoding/zero-testing, to drastically reduce the noise. Such reduction is enough to break MKE. Moreover, such reduction negates the K-GMDDH assumption, which is a basic security assumption. The procedure involves mostly simple algebraic manipulations, and rarely needs to use any lattice-reduction tools. The key point is our special tools for modular operations. Second, under the condition of public tools for encoding, we break the instance of WE based on the hardness of the X3C problem. To do so, we not only use modified Encoding/zero-testing, but also introduce and solve the "combined X3C problem", which is a problem that is not difficult to solve. In contrast with the assumption that the multilinear map cannot be divided back, this attack includes a division operation, that is, solving an equivalent secret from a linear equation modulo some principal ideal. The quotient, the equivalent secret, is not small, so that modified Encoding/zero-testing is needed to reduce its size. This attack rests on the assumption that two certain vectors are co-prime, which seems to be plausible. Third, for hidden tools for encoding, we break the instance of WE based on the hardness of the X3C problem. To do so, we construct level-2 encodings of 0, which are used as alternative tools for encoding. Then, we break the scheme by applying modified Encoding/zero-testing and combined X3C, where the modified Encoding/zero-testing is an extended version. This attack rests on two assumptions, which seem to be plausible. Finally, we present cryptanalysis of two simple revisions of the GGH map, aiming at MKE. We show that MKE on these two revisions can be broken under the assumption that $2^K$ is polynomially large. To do so, we further extend our modified Encoding/zero-testing.

Proceedings ArticleDOI
06 Jun 2016
TL;DR: The aim is to provide a systematic and compact framework regarding recent developments and the current state of the art in graph matching.
Abstract: Graph matching, which refers to a class of computational problems of finding an optimal correspondence between the vertices of graphs to minimize (maximize) their node and edge disagreements (affinities), is a fundamental problem in computer science and relates to many areas such as combinatorics, pattern recognition, multimedia and computer vision. Compared with the exact graph (sub)isomorphism often considered in a theoretical setting, inexact weighted graph matching receives more attention due to its flexibility and practical utility. A short review of the recent research activity concerning (inexact) weighted graph matching is presented, detailing the methodologies, formulations, and algorithms. It highlights the methods along several key questions, e.g., how many graphs are involved, how the affinity is modeled, how the problem order is explored, and how the matching procedure is conducted. Moreover, the research activity at the forefront of graph matching applications, especially in computer vision, multimedia and machine learning, is reported. The aim is to provide a systematic and compact framework regarding recent developments and the current state of the art in graph matching.

Journal ArticleDOI
Maoguo Gong, Jianan Yan, Bo Shen, Lijia Ma, Qing Cai
TL;DR: In this study, an optimization model based on a local influence criterion is established for the influence maximization problem, and a discrete particle swarm optimization algorithm is proposed to optimize the local influence criterion.