
Papers published by Xidian University in 2014


Proceedings ArticleDOI
23 Jun 2014
TL;DR: Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.
Abstract: As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.

1,876 citations
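
The core computation in WNNM admits a closed-form step: shrink each singular value of a noisy patch-group matrix by a weight inversely proportional to its estimated signal strength, so strong components are preserved and weak (noisy) ones are suppressed. Below is a minimal NumPy sketch of that step; the weight rule and the constant C mirror the paper's heuristic, but the exact settings here are illustrative assumptions.

```python
import numpy as np

def wnnm_shrink(Y, sigma_n, C=2.8, eps=1e-8):
    """Weighted singular-value thresholding for one noisy patch-group
    matrix Y (columns are similar patches). Larger singular values carry
    more signal, receive smaller weights, and are shrunk less."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = Y.shape[1]                                   # patches in the group
    # Estimate the clean singular values by removing the noise energy.
    s_clean = np.sqrt(np.maximum(s**2 - n * sigma_n**2, 0.0))
    w = C * np.sqrt(n) / (s_clean + eps)             # inverse-strength weights
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt  # weighted soft-thresholding

Y = np.random.default_rng(0).standard_normal((49, 60))  # 7x7 patches, 60 neighbors
X_hat = wnnm_shrink(Y, sigma_n=0.5)
```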


Journal ArticleDOI
TL;DR: This paper investigates the properties of trust, proposes objectives for IoT trust management, surveys current advances towards trustworthy IoT, and proposes a research model for holistic trust management in IoT.

1,001 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of axial strain on the electronic band structure of phosphorene was studied using first-principles methods including density functional theory (DFT) and hybrid functionals.
Abstract: Recently fabricated two-dimensional phosphorene crystal structures have demonstrated great potential in applications of electronics. In this paper, strain effect on the electronic band structure of phosphorene was studied using first-principles methods including density functional theory (DFT) and hybrid functionals. It was found that phosphorene can withstand a tensile stress and strain up to 10 N/m and 30%, respectively. The band gap of phosphorene experiences a direct-indirect-direct transition when axial strain is applied. A moderate −2% compression in the zigzag direction can trigger this gap transition. With sufficient expansion (+11.3%) or compression (−10.2%) strain, the gap can be tuned from indirect to direct again. Five strain zones with distinct electronic band structure were identified, and the critical strains at the zone boundaries were determined. Although the DFT method is known to underestimate the band gaps of semiconductors, it was proven to correctly predict the strain effect on the electronic properties, with validation from a hybrid functional method in this work. The origin of the gap transition was revealed, and a general mechanism was developed to explain energy shifts with strain according to the bond nature of near-band-edge electronic orbitals. Effective masses of carriers in the armchair direction are an order of magnitude smaller than those along the zigzag direction, indicating that the armchair direction is favored for carrier transport. In addition, the effective masses can be dramatically tuned by strain, with sharp jumps/drops occurring at the zone boundaries of the direct-indirect gap transition.

822 citations


Journal ArticleDOI
TL;DR: This work proposes a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian response.
Abstract: Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without information regarding its reference image. Existing BIQA models usually predict the image quality by analyzing the image statistics in some transformed domain, e.g., in the discrete cosine transform domain or wavelet domain. Though great progress has been made in recent years, BIQA is still a very challenging task due to the lack of a reference image. Considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we propose a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of normalized GM and LOG features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases, and shown to deliver highly competitive performance with state-of-the-art BIQA models, as well as with some well-known full reference image quality assessment models.

535 citations
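
For readers who want the two feature maps concretely: the sketch below computes the GM and LOG responses with SciPy and normalizes them jointly by a shared local energy field, in the spirit of the paper's joint adaptive normalization. The filter scales and the normalization rule are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def gm_log_features(img, sigma=0.5, norm_sigma=2.0, eps=1e-8):
    """Gradient-magnitude (GM) and Laplacian-of-Gaussian (LOG) maps with a
    joint normalization by a shared local contrast-energy field."""
    img = img.astype(np.float64)
    gx = gaussian_filter(img, sigma, order=(0, 1))   # derivative along x
    gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative along y
    gm = np.hypot(gx, gy)                            # gradient magnitude map
    log = gaussian_laplace(img, sigma)               # LOG response map
    # Joint normalization: both maps divided by the same local energy.
    energy = np.sqrt(gaussian_filter(gm**2 + log**2, norm_sigma)) + eps
    return gm / energy, log / energy
```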


Journal ArticleDOI
TL;DR: A nonlocal low-rank regularization approach for exploiting structured sparsity is proposed and applied to CS of both photographic and MRI images, together with the use of the nonconvex log det(X) as a smooth surrogate for the rank in place of the convex nuclear norm.
Abstract: Sparsity has been widely exploited for exact reconstruction of a signal from a small number of random measurements. Recent advances have suggested that structured or group sparsity often leads to more powerful signal reconstruction techniques in various compressed sensing (CS) studies. In this paper, we propose a nonlocal low-rank regularization (NLR) approach toward exploiting structured sparsity and explore its application to CS of both photographic and MRI images. We also propose the use of the nonconvex log det(X) as a smooth surrogate function for the rank instead of the convex nuclear norm, and justify the benefit of such a strategy through extensive experiments. To further improve the computational efficiency of the proposed algorithm, we have developed a fast implementation using the alternating direction method of multipliers (ADMM). Experimental results show that the proposed NLR-CS algorithm can significantly outperform existing state-of-the-art CS techniques for image recovery.

523 citations
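
The log det surrogate is easy to state: for singular values σ_i of X, it equals Σ_i log(σ_i + ε), which grows far more slowly than the nuclear norm Σ_i σ_i for large σ_i and therefore biases strong components less. A small sketch of the surrogate and of the reweighted singular-value thresholding that majorizing it induces; the step size, ε, and iteration count are illustrative assumptions.

```python
import numpy as np

def logdet_surrogate(X, eps=1e-3):
    """Smooth rank surrogate used by NLR-CS: sum_i log(sigma_i + eps)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(np.log(s + eps)))

def logdet_svt_step(Y, tau=0.1, eps=1e-3, iters=3):
    """One proximal-style update: majorizing the log-det surrogate yields
    reweighted thresholding with weights 1/(sigma_i + eps), so small
    (noise-dominated) singular values are penalized most."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_hat = s.copy()
    for _ in range(iters):                       # iterative reweighting
        w = 1.0 / (s_hat + eps)
        s_hat = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_hat) @ Vt
```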


Journal ArticleDOI
TL;DR: Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
Abstract: Recently, MOEA/D (multi-objective evolutionary algorithm based on decomposition) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework for evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto optimal solutions; however, it does not work as well when the target MOP has a complex Pareto front (PF), e.g., a discontinuous PF or a PF with a sharp peak or low tail. To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). Based on an analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, the computational effort devoted to subproblems with duplicate optimal solutions can be saved. Moreover, an external elite population is introduced to help add new subproblems in real sparse regions rather than pseudo sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state-of-the-art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, paλ-MOEA/D, and NSGA-II, on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.

514 citations
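
The Chebyshev (Tchebycheff) decomposition that the geometric analysis relies on turns the MOP into one scalar subproblem per weight vector. A tiny sketch, with z* denoting the ideal point; the example values are ours:

```python
import numpy as np

def tchebycheff(f, lam, z_star):
    """Chebyshev scalarization used by MOEA/D: weight vector lam defines the
    subproblem g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|."""
    return float(np.max(lam * np.abs(np.asarray(f) - np.asarray(z_star))))

# Two subproblems judge the same objective vector very differently.
f = [0.6, 0.2]                                   # objectives of one candidate
z = [0.0, 0.0]                                   # ideal point
print(tchebycheff(f, np.array([0.9, 0.1]), z))   # ~0.54: emphasizes f1
print(tchebycheff(f, np.array([0.1, 0.9]), z))   # ~0.18: emphasizes f2
```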


Journal ArticleDOI
TL;DR: This paper proposes Dekey, a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers and demonstrates that Dekey incurs limited overhead in realistic environments.
Abstract: Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. Promising as it is, an arising challenge is to perform secure deduplication in cloud storage. Although convergent encryption has been extensively adopted for secure deduplication, a critical issue of making convergent encryption practical is to efficiently and reliably manage a huge number of convergent keys. This paper makes the first attempt to formally address the problem of achieving efficient and reliable key management in secure deduplication. We first introduce a baseline approach in which each user holds an independent master key for encrypting the convergent keys and outsourcing them to the cloud. However, such a baseline key management scheme generates an enormous number of keys with the increasing number of users and requires users to dedicatedly protect the master keys. To this end, we propose Dekey, a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers. Security analysis demonstrates that Dekey is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement Dekey using the Ramp secret sharing scheme and demonstrate that Dekey incurs limited overhead in realistic environments.

511 citations
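
The convergent-encryption property that Dekey builds on fits in a few lines: because the key is derived deterministically from the content, identical blocks encrypt to identical ciphertexts and can be deduplicated. The sketch below shows only that key-derivation idea; Dekey's actual contribution, distributing the convergent key shares across servers with Ramp secret sharing, is omitted, and the helper names are ours.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """Convergent encryption derives the key from the data itself, so
    identical plaintexts yield identical keys and ciphertexts, which is
    what makes deduplication of encrypted data possible."""
    return hashlib.sha256(data).digest()

def dedup_tag(data: bytes) -> bytes:
    # A second, domain-separated hash to index the ciphertext: the storage
    # server learns equality of blocks but not the encryption key.
    return hashlib.sha256(b"tag|" + data).digest()

block = b"some file block"
k1, k2 = convergent_key(block), convergent_key(b"some file block")
assert k1 == k2        # same content, same key: the ciphertexts dedupe
```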


Journal ArticleDOI
TL;DR: PLEK is an efficient alignment-free computational tool to distinguish lncRNAs from mRNAs in RNA-seq transcriptomes of species lacking reference genomes and is especially suitable for PacBio or 454 sequencing data and large-scale transcriptome data.
Abstract: High-throughput transcriptome sequencing (RNA-seq) technology promises to discover novel protein-coding and non-coding transcripts, particularly the identification of long non-coding RNAs (lncRNAs) from de novo sequencing data. This requires tools that are not restricted by prior gene annotations, genomic sequences, or high-quality sequencing. We present an alignment-free tool called PLEK (predictor of long non-coding RNAs and messenger RNAs based on an improved k-mer scheme), which uses a computational pipeline based on an improved k-mer scheme and a support vector machine (SVM) algorithm to distinguish lncRNAs from messenger RNAs (mRNAs) in the absence of genomic sequences or annotations. The performance of PLEK was evaluated on well-annotated mRNA and lncRNA transcripts. 10-fold cross-validation tests on human RefSeq mRNAs and GENCODE lncRNAs indicated that our tool can achieve accuracy of up to 95.6%. We demonstrated the utility of PLEK on transcripts from other vertebrates using the model built from human datasets; PLEK attained >90% accuracy on most of these datasets. PLEK also performed well on a simulated dataset and two real de novo assembled transcriptome datasets (sequenced by the PacBio and 454 platforms) with relatively high indel sequencing errors. In addition, when run single-threaded, PLEK is approximately eightfold faster than a newly developed alignment-free tool, the Coding-Non-Coding Index (CNCI), and 244 times faster than the most popular alignment-based tool, the Coding Potential Calculator (CPC). PLEK is an efficient alignment-free computational tool to distinguish lncRNAs from mRNAs in RNA-seq transcriptomes of species lacking reference genomes. It is especially suitable for PacBio or 454 sequencing data and large-scale transcriptome data. Its open-source software can be freely downloaded from https://sourceforge.net/projects/plek/files/.

509 citations
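
The k-mer scheme boils down to representing each transcript as usage frequencies of all 1- to 5-mers (4 + 16 + 64 + 256 + 1024 = 1364 features) fed to an SVM. A simplified sketch; PLEK additionally applies per-k calibration factors and a sliding window, which we replace here with plain length normalization for illustration.

```python
from itertools import product

def kmer_features(seq: str, k_max: int = 5):
    """Length-normalized k-mer frequencies (k = 1..k_max) of one transcript,
    the kind of alignment-free feature vector PLEK feeds to its SVM."""
    seq = seq.upper()
    feats = []
    for k in range(1, k_max + 1):
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:           # skip k-mers containing N, etc.
                counts[kmer] += 1
        total = max(len(seq) - k + 1, 1)
        feats.extend(c / total for c in counts.values())
    return feats                          # 4 + 16 + ... + 4^k_max values

print(len(kmer_features("ATGGCGTACGTTAGC")))  # 1364 features for k_max = 5
```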


Journal ArticleDOI
TL;DR: The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.
Abstract: The employed dictionary plays an important role in sparse representation or sparse coding based image reconstruction and classification, and learning dictionaries from the training data has led to state-of-the-art results in image classification tasks. However, many dictionary learning models exploit only the discriminative information in either the representation coefficients or the representation residual, which limits their performance. In this paper, we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms have correspondences to the subject class labels, is learned, with which not only can the representation residual be used to distinguish different classes, but the representation coefficients also have small within-class scatter and large between-class scatter. The classification scheme associated with the proposed Fisher discrimination dictionary learning (FDDL) model is consequently presented by exploiting the discriminative information in both the representation residual and the representation coefficients. The proposed FDDL model is extensively evaluated on various image datasets, and it shows superior performance to many state-of-the-art dictionary learning methods in a variety of classification tasks.

474 citations
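
The Fisher criterion on the coding coefficients is the piece that is easy to show in isolation: FDDL penalizes the within-class scatter of the codes and rewards between-class scatter (plus a stabilizing Frobenius term). A NumPy sketch of those two traces, assuming a coefficient matrix A with one column per training sample:

```python
import numpy as np

def fisher_terms(A, labels):
    """Within-class and between-class scatter traces of coding coefficients:
    the Fisher criterion added to dictionary learning so that codes of the
    same class cluster while class means separate."""
    mean_all = A.mean(axis=1, keepdims=True)
    sw = sb = 0.0
    for c in np.unique(labels):
        Ac = A[:, labels == c]
        mc = Ac.mean(axis=1, keepdims=True)
        sw += ((Ac - mc) ** 2).sum()                       # tr(S_W)
        sb += Ac.shape[1] * ((mc - mean_all) ** 2).sum()   # tr(S_B)
    return sw, sb   # the model penalizes sw - sb (plus a ||A||_F^2 term)
```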


Journal ArticleDOI
TL;DR: This work proposes a new secure outsourced ABE system, which supports both secure outsourced key-issuing and decryption, and an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way.
Abstract: Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are getting prohibitively high. Although existing outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of the results returned from the third party has yet to be addressed. Aiming at tackling this challenge, we propose a new secure outsourced ABE system which supports both secure outsourced key-issuing and decryption. Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis shows that the proposed schemes are proven secure and practical.

403 citations


Proceedings ArticleDOI
08 Jul 2014
TL;DR: Evaluation results show that the proposed DLS algorithm can significantly improve the privacy level in terms of entropy, and that the enhanced-DLS algorithm can enlarge the cloaking region while maintaining a similar privacy level to the DLS algorithm.
Abstract: Location-Based Service (LBS) has become a vital part of our daily life. While enjoying the convenience provided by LBS, users may lose privacy since the untrusted LBS server has all the information about users in LBS and may track them in various ways or release their personal data to third parties. To address the privacy issue, we propose a Dummy-Location Selection (DLS) algorithm to achieve k-anonymity for users in LBS. Different from existing approaches, the DLS algorithm carefully selects dummy locations considering that side information may be exploited by adversaries. We first choose these dummy locations based on the entropy metric, and then propose an enhanced-DLS algorithm to make sure that the selected dummy locations are spread as far apart as possible. Evaluation results show that the proposed DLS algorithm can significantly improve the privacy level in terms of entropy. The enhanced-DLS algorithm can enlarge the cloaking region while keeping a similar privacy level to the DLS algorithm.
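
The entropy criterion is straightforward to make concrete: if the adversary knows the historical query probability of each cell, the best dummy set makes the k reported cells look equally likely. A toy sketch with an exhaustive search (the paper's algorithm is a more efficient selection); the probability table is invented for illustration.

```python
import math
from itertools import combinations

def entropy(probs):
    s = sum(probs)
    return -sum((p / s) * math.log2(p / s) for p in probs if p > 0)

def select_dummies(real_cell, candidates, query_prob, k):
    """Pick k-1 dummy cells so that the k reported cells have near-uniform
    historical query probabilities, maximizing the adversary's uncertainty."""
    best, best_h = None, -1.0
    for combo in combinations([c for c in candidates if c != real_cell], k - 1):
        cells = (real_cell,) + combo
        h = entropy([query_prob[c] for c in cells])
        if h > best_h:
            best, best_h = cells, h
    return best, best_h

prob = {"A": 0.30, "B": 0.28, "C": 0.29, "D": 0.03, "E": 0.10}
print(select_dummies("A", list(prob), prob, k=3))  # prefers B, C over D, E
```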

Journal ArticleDOI
TL;DR: This paper comprehensively surveys the development of face hallucination, including both face super-resolution and face sketch-photo synthesis techniques, and presents a comparative analysis of representative methods and promising future directions.
Abstract: This paper comprehensively surveys the development of face hallucination (FH), including both face super-resolution and face sketch-photo synthesis techniques. Indeed, these two techniques share the same objective of inferring a target face image (e.g., a high-resolution face image, face sketch, or face photo) from a corresponding source input (e.g., a low-resolution face image, face photo, or face sketch). Considering the critical role of image interpretation in modern intelligent systems for authentication, surveillance, law enforcement, security control, and entertainment, FH has attracted growing attention in recent years. Existing FH methods can be grouped into four categories: Bayesian inference approaches, subspace learning approaches, combined Bayesian inference and subspace learning approaches, and sparse representation-based approaches. Despite this progress, the success of FH is limited by complex application conditions such as varying illumination, poses, and views. This paper provides a holistic understanding of and deep insight into FH, and presents a comparative analysis of representative methods and promising future directions.

Journal ArticleDOI
TL;DR: Based on a proposed discrete framework of particle swarm optimization, a multiobjective discrete particle swarm optimization algorithm that adopts the decomposition mechanism is proposed to solve the network clustering problem.
Abstract: The field of complex network clustering has been very active in the past several years. In this paper, a discrete framework of the particle swarm optimization algorithm is proposed. Based on this discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem, and the decomposition mechanism is adopted. A problem-specific population initialization method based on label propagation and a turbulence operator are introduced. In the proposed method, two evaluation objectives, termed kernel k-means and ratio cut, are minimized. However, these two objectives can only handle unsigned networks, so they have been extended to signed versions in order to deal with signed networks. The clustering performance of the proposed algorithm has been validated on both signed and unsigned networks. Extensive experimental studies compared with ten state-of-the-art approaches prove that the proposed algorithm is effective and promising.

Journal ArticleDOI
TL;DR: Using a new Nussbaum-type function to design adaptive control laws, it is proved that first-order and second-order multi-agent systems can achieve consensus by choosing proper design parameters.
Abstract: This note addresses the adaptive consensus problem of first-order and second-order linearly parameterized multi-agent systems with unknown identical control directions. First, we propose a new Nussbaum-type function, based on which a key lemma is established. The lemma plays an important role in analyzing the consensus of the closed-loop multi-agent systems. Second, the Nussbaum-type function is used to design adaptive control laws for first-order and second-order linearly parameterized multi-agent systems so that each agent seeks the unknown control direction adaptively and cooperatively. Then, under the assumption that the interconnection topology is undirected and connected, it is proved that the first-order and second-order multi-agent systems can achieve consensus by choosing proper design parameters. Two simulation examples are given to illustrate the effectiveness of the proposed control laws.
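
The role of a Nussbaum-type function can be seen in the simplest possible setting: a scalar plant whose gain sign is unknown. The textbook construction below (u = N(k)x with N(k) = k²cos(k) and adaptation k̇ = x²) regulates the state without knowing sign(b); it illustrates only the mechanism, not the note's new Nussbaum-type function or multi-agent laws.

```python
import numpy as np

def nussbaum(k):
    """A Nussbaum-type gain N(k) = k^2 cos(k): along increasing k it swings
    between large positive and large negative values, letting the adaptive
    law probe both possible control directions."""
    return k * k * np.cos(k)

# Scalar plant xdot = b*u with sign(b) unknown (here negative).
b, x, k, dt = -1.5, 1.0, 0.0, 1e-3
for _ in range(200_000):               # 200 s of forward-Euler simulation
    u = nussbaum(k) * x
    x += dt * b * u
    k += dt * x * x                    # adaptation: kdot = x^2
print(abs(x) < 1e-2, round(k, 3))      # True: regulated despite unknown sign
```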

Journal ArticleDOI
TL;DR: This paper proposes a new secure outsourcing algorithm for (variable-exponent, variable-base) exponentiation modulo a prime in the two untrusted program model and proposes the first efficient outsource-secure algorithm for simultaneous modular exponentiations.
Abstract: With the rapid development of cloud services, the techniques for securely outsourcing the prohibitively expensive computations to untrusted servers are getting more and more attention in the scientific community. Exponentiations modulo a large prime have been considered the most expensive operations in discrete-logarithm-based cryptographic protocols, and they may be burdensome for the resource-limited devices such as RFID tags or smartcards. Therefore, it is important to present an efficient method to securely outsource such operations to (untrusted) cloud servers. In this paper, we propose a new secure outsourcing algorithm for (variable-exponent, variable-base) exponentiation modulo a prime in the two untrusted program model. Compared with the state-of-the-art algorithm, the proposed algorithm is superior in both efficiency and checkability. Based on this algorithm, we show how to achieve outsource-secure Cramer-Shoup encryptions and Schnorr signatures. We then propose the first efficient outsource-secure algorithm for simultaneous modular exponentiations. Finally, we provide the experimental evaluation that demonstrates the efficiency and effectiveness of the proposed outsourcing algorithms and schemes.

Journal ArticleDOI
TL;DR: The general architecture of big data analytics is formalized, the corresponding privacy requirements are identified, and an efficient and privacy-preserving cosine similarity computing protocol is introduced as an example in response to data mining's efficiency and privacy requirements in the big data era.
Abstract: Big data, from which new knowledge can be mined for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as "3V") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, newly mined knowledge will be unconvincing, while if privacy is not well addressed, people may be reluctant to share their data. Since security has been investigated as a new dimension, "veracity," in big data, in this article we aim to exploit new challenges of big data in terms of privacy and devote our attention to efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.
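
The functionality at the heart of the example protocol is ordinary cosine similarity; the privacy-preserving version must reproduce exactly this value while each party sees only masked or encrypted encodings of the other's vector. For reference:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """The plaintext functionality the protocol computes: cos(a, b).
    The privacy layer described in the article is intentionally omitted."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

print(cosine_similarity([1, 0, 2], [2, 1, 2]))  # ~0.894
```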

Journal ArticleDOI
TL;DR: In this article, a compact multiple-input-multiple-output (MIMO) antenna for ultrawideband (UWB) applications is presented, which consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane.
Abstract: A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of the antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 × 32 mm², and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth larger than 3.1-10.6 GHz, mutual coupling below −15 dB, and a low envelope correlation coefficient of less than 0.02 across the frequency band, which make it suitable for portable UWB applications.
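
An envelope correlation coefficient like the one quoted above is commonly computed from measured S-parameters with the standard lossless-antenna formula. A sketch follows; the zero-phase example values are ours, not the paper's measurements.

```python
import numpy as np

def envelope_correlation(s11, s21, s12, s22):
    """Envelope correlation coefficient of a 2-port MIMO antenna from its
    complex S-parameters (standard formula, valid for lossless antennas)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Example: -15 dB matching and -20 dB isolation at both ports (phases ignored).
s11 = s22 = 10 ** (-15 / 20)
s21 = s12 = 10 ** (-20 / 20)
print(envelope_correlation(s11, s21, s12, s22))  # small ECC, ~1e-3
```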

Proceedings Article
08 Dec 2014
TL;DR: Compared with conventional DL methods, the proposed DPL method can not only greatly reduce the time complexity in the training and testing phases, but also lead to very competitive accuracies in a variety of visual classification tasks.
Abstract: Discriminative dictionary learning (DL) has been widely studied in various pattern classification problems. Most of the existing DL methods aim to learn a synthesis dictionary to represent the input signal while enforcing the representation coefficients and/or representation residual to be discriminative. However, the l0- or l1-norm sparsity constraint on the representation coefficients adopted in most DL methods makes the training and testing phases time consuming. We propose a new discriminative DL framework, namely projective dictionary pair learning (DPL), which learns a synthesis dictionary and an analysis dictionary jointly to achieve the goal of signal representation and discrimination. Compared with conventional DL methods, the proposed DPL method can not only greatly reduce the time complexity in the training and testing phases, but also lead to very competitive accuracies in a variety of visual classification tasks.
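
The reason DPL's testing phase is fast is visible from the classification rule alone: the analysis dictionary P_k produces a code by a single matrix multiply, so no l0/l1 optimization is solved at test time. A sketch with randomly generated stand-ins for the learned dictionary pairs:

```python
import numpy as np

def dpl_classify(x, D, P):
    """DPL inference: the analysis dictionary P_k gives a code in one matrix
    multiply and the synthesis dictionary D_k reconstructs; the class with
    the smallest residual wins. D and P are lists of learned per-class
    matrices (random placeholders below, not trained dictionaries)."""
    residuals = [np.linalg.norm(x - Dk @ (Pk @ x)) for Dk, Pk in zip(D, P)]
    return int(np.argmin(residuals))

# Toy shapes: 2 classes, 64-dim signals, 16 atoms per class.
rng = np.random.default_rng(0)
D = [rng.standard_normal((64, 16)) for _ in range(2)]
P = [rng.standard_normal((16, 64)) for _ in range(2)]
print(dpl_classify(rng.standard_normal(64), D, P))
```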

Journal ArticleDOI
TL;DR: A pair of efficient and lightweight authentication protocols is presented to enable remote WBAN users to anonymously enjoy healthcare services; the protocols outperform existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.
Abstract: Wireless body area network (WBAN) has been recognized as one of the promising wireless sensor technologies for improving healthcare service, thanks to its capability of seamlessly and continuously exchanging medical information in real time. However, the lack of a clear in-depth defense line in such a new networking paradigm would make its potential users worry about the leakage of their private information, especially to unauthenticated or even malicious adversaries. In this paper, we present a pair of efficient and lightweight authentication protocols to enable remote WBAN users to anonymously enjoy healthcare service. In particular, our authentication protocols are rooted in a novel certificateless signature (CLS) scheme, which is computationally efficient and provably secure against existential forgery on adaptively chosen message attacks in the random oracle model. Also, our designs ensure that application or service providers have no privilege to disclose the real identities of users. Even the network manager, which serves as the private key generator in the authentication protocols, is prevented from impersonating legitimate users. The performance of our designs is evaluated through both theoretic analysis and experimental simulations, and the comparative studies demonstrate that they outperform the existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results on real SAR datasets show that the proposed approach can detect the real changes as well as mitigate the effect of speckle noise, and it is computationally simple in all the steps involved.
Abstract: In this paper, we put forward a novel approach for change detection in synthetic aperture radar (SAR) images. The approach classifies changed and unchanged regions by fuzzy c-means (FCM) clustering with a novel Markov random field (MRF) energy function. In order to reduce the effect of speckle noise, a novel form of the MRF energy function with an additional term is established to modify the membership of each pixel. In addition, the degree of modification is determined by the relationship of the neighborhood pixels. The specific form of the additional term is contingent upon different situations, and it is established ultimately by utilizing the least-squares method. Our contributions are twofold. First, in order to reduce the effect of speckle noise, the proposed approach focuses on modifying the membership instead of modifying the objective function. It is computationally simple in all the steps involved, and its objective function is simply the original FCM objective, which makes it faster than some recently improved FCM algorithms. Second, the proposed approach modifies the membership of each pixel according to a novel form of the MRF energy function, through which the neighbors of each pixel, as well as their relationship, are taken into account. Theoretical analysis and experimental results on real SAR datasets show that the proposed approach can detect the real changes as well as mitigate the effect of speckle noise. Theoretical analysis and experiments also demonstrate its low time complexity.
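
As a reference point for the modification described above, the standard FCM membership update the method starts from is sketched below; the paper's MRF-based correction then adjusts these memberships using each pixel's neighborhood, which is omitted here.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-12):
    """Standard FCM membership of each pixel value to each cluster center:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)). The paper's approach modifies
    these memberships, not this objective."""
    d = np.abs(x[:, None] - centers[None, :]) + eps      # (pixels, clusters)
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

x = np.array([0.10, 0.12, 0.90, 0.88])                   # toy difference image
u = fcm_memberships(x, centers=np.array([0.1, 0.9]))
print(u.round(3))    # rows ~ [1, 0] for unchanged, [0, 1] for changed pixels
```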

Journal ArticleDOI
TL;DR: An overview of the security functionality of the LTE and LTE-A networks is presented, the security vulnerabilities existing in their architecture and design are explored, and potential issues for future research are identified.
Abstract: High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications have motivated the development of broadband wireless access technologies in recent years. The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile, to ensure that 3GPP keeps its dominance in cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as the 4G standard of the 3GPP. Since the 3GPP LTE and LTE-A architectures are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, these new unique features bring new challenges to the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are classified and reviewed. Finally, we show potential research issues for future work.

Journal ArticleDOI
Maoguo Gong, Shengmeng Zhao, Licheng Jiao, Dayong Tian, Shuang Wang
TL;DR: A novel coarse-to-fine scheme for automatic image registration which is implemented by the scale-invariant feature transform approach equipped with a reliable outlier removal procedure and the maximization of mutual information using a modified Marquardt-Levenberg search strategy in a multiresolution framework.
Abstract: Automatic image registration is a vital yet challenging task, particularly for remote sensing images. A fully automatic registration approach which is accurate, robust, and fast is required. For this purpose, a novel coarse-to-fine scheme for automatic image registration is proposed in this paper. This scheme consists of a preregistration process (coarse registration) and a fine-tuning process (fine registration). To begin with, the preregistration process is implemented by the scale-invariant feature transform approach equipped with a reliable outlier removal procedure. The coarse results provide a near-optimal initial solution for the optimizer in the fine-tuning process. Next, the fine-tuning process is implemented by the maximization of mutual information using a modified Marquardt-Levenberg search strategy in a multiresolution framework. The proposed algorithm is tested on various remote sensing optical and synthetic aperture radar images taken in different situations (multispectral, multisensor, and multitemporal) with the affine transformation model. The experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm.
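
The fine-registration objective, mutual information, can be computed directly from a joint histogram of the two images. The sketch below shows only the measure being maximized; the modified Marquardt-Levenberg search and the multiresolution pyramid are omitted.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram.
    Correct registration concentrates the joint histogram, raising MI."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(mutual_information(img, img))                     # high: identical images
print(mutual_information(img, rng.random((128, 128))))  # near zero: unrelated
```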

Journal ArticleDOI
TL;DR: This paper presents a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking and proposes two secure index schemes to meet the stringent privacy requirements under strong threat models.
Abstract: With the growing popularity of cloud computing, huge amounts of documents are outsourced to the cloud for reduced management cost and ease of access. Although encryption helps protect user data confidentiality, it makes well-functioning yet practically efficient secure search over encrypted data a challenging problem. In this paper, we present a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking to address this problem. To support multi-keyword search and search result ranking, we propose to build the search index based on term frequency and the vector space model with the cosine similarity measure to achieve higher search result accuracy. To improve the search efficiency, we propose a tree-based index structure and various adaptation methods for the multi-dimensional (MD) algorithm so that the practical search efficiency is much better than that of linear search. To further enhance the search privacy, we propose two secure index schemes to meet the stringent privacy requirements under strong threat models, i.e., the known ciphertext model and the known background model. In addition, we devise a scheme upon the proposed index tree structure to enable authenticity checks over the returned search results. Finally, we demonstrate the effectiveness and efficiency of the proposed schemes through extensive experimental evaluation.

Proceedings ArticleDOI
10 Nov 2014
TL;DR: A novel image representation method is proposed that learns kernel classifiers with the one-against-all rule and uses the Euclidean distance between classification response vectors as the new similarity measure.
Abstract: Learning image representations has always been a central problem in the computer vision community. In this paper, we propose a novel image representation method based on learning and using kernel classifiers. We first train classifiers using the one-against-all rule, then use them to classify the candidate images, and finally use the classification responses as the new representations. The Euclidean distance between the classification response vectors is used as the new similarity measure. Experimental results on a large-scale image database show that the proposed algorithm can outperform the original features on the image retrieval problem.
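
A minimal scikit-learn sketch of the pipeline: train kernel classifiers on labeled images, then re-represent any image by its vector of classification responses, to be compared with Euclidean distance for retrieval. Note that sklearn's SVC trains one-vs-one internally while exposing one-vs-rest-shaped decision values, so this only approximates the paper's explicit one-against-all training; the synthetic features are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def classifier_response_representation(X_train, y_train, X):
    """Re-represent samples by the decision values of kernel classifiers:
    each row of the output is the new feature vector of one image."""
    clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
    return clf.decision_function(X)          # shape (n_samples, n_classes)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 32))     # placeholder image features
y_train = rng.integers(0, 5, 100)            # placeholder labels, 5 classes
reps = classifier_response_representation(X_train, y_train,
                                          rng.standard_normal((10, 32)))
print(reps.shape)                            # (10, 5) response vectors
```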

Journal ArticleDOI
TL;DR: A device-to-device communication-based load balancing algorithm, which utilizes D2D communications as bridges to flexibly offload traffic among different tier cells and achieve efficient load balancing according to their real-time traffic distributions is proposed.
Abstract: In LTE-Advanced networks, besides the overall coverage provided by traditional macrocells, various classes of low-power nodes (e.g., pico eNBs, femto eNBs, and relays) can be distributed throughout the macrocells as a more targeted underlay to further enhance the area's spectral efficiency, alleviate traffic hot zones, and thus improve the end-user experience. Considering the limited backhaul connections within low-power nodes and the imbalanced traffic distribution among different cells, it is highly possible that some cells are severely congested while adjacent cells are very lightly loaded. Therefore, it is of critical importance to achieve efficient load balancing among multi-tier cells in LTE-Advanced networks. However, available techniques such as smart cell and biasing, although able to alleviate congestion or distribute traffic to some extent, cannot respond or adapt flexibly to the real-time traffic distributions among multi-tier cells. Toward this end, we propose in this article a device-to-device communication-based load balancing algorithm, which utilizes D2D communications as bridges to flexibly offload traffic among different tier cells and achieve efficient load balancing according to their real-time traffic distributions. Besides identifying the research issues that deserve further study, we also present numerical results to show the performance gains that can be achieved by the proposed algorithm.

Journal ArticleDOI
TL;DR: This work extends the original similarity to the signed similarity based on the social balance theory and proposes a multiobjective evolutionary algorithm, called MEAs-SN, which can detect overlapping communities directly and switch between different representations during the evolutionary process.
Abstract: Various types of social relationships, such as friends and foes, can be represented as signed social networks (SNs) that contain both positive and negative links. Although many community detection (CD) algorithms have been proposed, most of them were designed primarily for networks containing only positive links. Thus, it is important to design CD algorithms that can handle large-scale SNs. To this purpose, we first extend the original similarity to a signed similarity based on the social balance theory. Then, based on the signed similarity and the natural contradiction between positive and negative links, two objective functions are designed to model the problem of detecting communities in SNs as a multiobjective problem. Afterward, we propose a multiobjective evolutionary algorithm, called MEAs-SN. In MEAs-SN, to overcome the defects of direct and indirect representations for communities, a combined direct and indirect representation is designed. Owing to this representation, MEAs-SN can switch between different representations during the evolutionary process and, as a result, can benefit from both. Moreover, this representation also allows MEAs-SN to detect overlapping communities directly. In the experiments, both benchmark problems and large-scale synthetic networks generated by various parameter settings are used to validate the performance of MEAs-SN. The experimental results show the effectiveness and efficacy of MEAs-SN on networks with 1000, 5000, and 10000 nodes and also in various noisy situations. A thorough comparison is also made between MEAs-SN and three existing algorithms, and the results show that MEAs-SN outperforms the other algorithms.

Journal ArticleDOI
13 Jan 2014
TL;DR: This paper proposes a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud that exploits ring signatures to compute verification metadata needed to audit the correctness of shared data.
Abstract: With cloud data services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. Unfortunately, the integrity of cloud data is subject to skepticism due to the existence of hardware/software failures and human errors. Several mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. However, public auditing on the integrity of shared data with these existing mechanisms will inevitably reveal confidential information, namely identity privacy, to public verifiers. In this paper, we propose a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification metadata needed to audit the correctness of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from public verifiers, who are able to efficiently verify shared data integrity without retrieving the entire file. In addition, our mechanism is able to perform multiple auditing tasks simultaneously instead of verifying them one by one. Our experimental results demonstrate the effectiveness and efficiency of our mechanism when auditing shared data integrity.

Proceedings ArticleDOI
08 Jul 2014
TL;DR: This paper presents the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e. file-level) search authorization and formalizes the security definition and proves the proposed ABKS-UR scheme selectively secure against chosen-keyword attack.
Abstract: Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have focused on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset is encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed by multiple owners and searched by multiple users, i.e., the multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always-online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate the heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. Finally, performance evaluation shows the efficiency of our scheme.

Journal ArticleDOI
TL;DR: A new algorithm is proposed to construct polar codes, aimed at minimizing the exact BLER, instead of the upper bound of theBLER, and analysis indicates that the new method is less complex than the existing methods.
Abstract: Polar codes are usually constructed to minimize the upper bound of a block error ratio (BLER). In this paper, we discuss the estimation of the exact BLERs of polar codes as well as the construction of polar codes. Assuming that successive cancellation (SC) decoding is employed, we present a method for estimating the exact BLER of polar codes with the help of Gaussian approximation (GA). A new algorithm is proposed to construct polar codes, aimed at minimizing the exact BLER, instead of the upper bound of the BLER. Analysis indicates that the new method is less complex than the existing methods. It is also shown that the estimation results match the simulations well.
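
The Gaussian-approximation machinery the paper builds on tracks a single number per subchannel, the mean LLR, through the polarization recursion. Below is a sketch of the standard GA construction using Chung's commonly cited approximation of the φ function; the paper's algorithm goes further, selecting positions to minimize the estimated exact BLER rather than this plain reliability ranking, and the constants are the usual literature values rather than anything paper-specific.

```python
import numpy as np
from scipy.optimize import brentq

def phi(x):
    """Common two-piece approximation of the GA function phi(x)."""
    if x < 1e-10:
        return 1.0
    if x <= 10.0:
        return min(1.0, float(np.exp(-0.4527 * x**0.86 + 0.0218)))
    return float(np.sqrt(np.pi / x) * np.exp(-x / 4.0) * (1.0 - 10.0 / (7.0 * x)))

def phi_inv(y):
    return brentq(lambda x: phi(x) - y, 1e-12, 1e4)

def ga_information_set(n_log2, rate, design_snr_db):
    """Rank subchannels by their GA mean LLR and keep the K most reliable.
    Reading an index MSB-first, bit 0 applies the degrading (check)
    transform and bit 1 the upgrading (variable) transform."""
    N = 2 ** n_log2
    m0 = 4.0 * rate * 10 ** (design_snr_db / 10.0)  # mean LLR 2/sigma^2, BI-AWGN
    means = []
    for i in range(N):
        m = m0
        for b in format(i, f"0{n_log2}b"):
            m = 2.0 * m if b == "1" else phi_inv(1.0 - (1.0 - phi(m)) ** 2)
        means.append(m)
    K = int(rate * N)
    return sorted(int(i) for i in np.argsort(means)[-K:])

print(ga_information_set(3, 0.5, 1.0))  # N=8, K=4: the familiar set [3, 5, 6, 7]
```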

Journal ArticleDOI
TL;DR: This paper presents a linear protocol for heterogeneous multi-agent systems such that the second-order integrator agents converge to the convex hull spanned by the first-orderIntegrator agents if and only if the directed graph contains a directed spanning forest.
Abstract: In this paper, we consider the containment control problem for a group of autonomous agents modelled by heterogeneous dynamics. The communication networks among the leaders and the followers are directed graphs. When the leaders are first-order integrator agents, we present a linear protocol for heterogeneous multi-agent systems such that the second-order integrator agents converge to the convex hull spanned by the first-order integrator agents if and only if the directed graph contains a directed spanning forest. If the leaders are second-order integrator agents, we propose a nonlinear protocol and obtain a necessary and sufficient condition that the heterogeneous multi-agent system solves the containment control problem in finite time. Simulation examples are also provided to illustrate the effectiveness of the theoretical results.