
Showing papers by "Xidian University" published in 2011


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification, and proposes a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS).
Abstract: As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most of the literature. However, is it really the l1-norm sparsity that improves the FR accuracy? This paper analyzes the working mechanism of SRC and indicates that it is the CR, not the l1-norm sparsity, that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). Extensive experiments clearly show that CRC_RLS achieves very competitive classification results at significantly lower complexity than SRC.
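The CRC_RLS decision rule has a closed form, which is where the speedup over l1-minimization comes from. Below is a minimal sketch with synthetic data; the regularization weight `lam` and the toy setup are illustrative assumptions, while the class-wise residual normalized by coefficient energy follows the scheme described above.

```python
import numpy as np

def crc_rls(X, labels, y, lam=0.01):
    """CR-based classification with regularized least squares.

    X      : (d, n) matrix whose columns are all training samples (unit-normalized)
    labels : (n,) class label of each column
    y      : (d,) test sample
    lam    : ridge regularization weight (assumed value)
    """
    # Closed-form RLS code over ALL training samples (collaborative representation).
    # The projector depends only on X and can be precomputed once.
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    rho = P @ y
    best, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # class-specific reconstruction residual, normalized by coefficient energy
        res = np.linalg.norm(y - X[:, idx] @ rho[idx]) / np.linalg.norm(rho[idx])
        if res < best_res:
            best, best_res = c, res
    return best

# Toy usage: two Gaussian classes in 20-D
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
X /= np.linalg.norm(X, axis=0)          # unit-norm columns, as in SRC/CRC
labels = np.repeat([0, 1], 10)
y = rng.normal(3, 1, 20)
y /= np.linalg.norm(y)
print(crc_rls(X, labels, y))            # expected: 1
```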

2,001 citations


Journal ArticleDOI
TL;DR: Extensive experiments on image deblurring and super-resolution validate that by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
Abstract: As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of l1-norm optimization techniques and the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a precollected dataset of example image patches, and then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models are learned from the dataset of example image patches. The AR models that best fit a given patch are adaptively selected to regularize the image local structures. Second, the image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
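A rough sketch of the adaptive sparse domain selection step: each patch picks one of several pre-learned sub-dictionaries and is coded in that domain. The nearest-centroid selection rule and orthogonal-basis soft-thresholding below are plausible stand-ins, not necessarily the paper's exact choices.

```python
import numpy as np

def select_subdictionary(patch, centroids, bases):
    """Pick the sub-dictionary whose training cluster is closest to the patch.

    centroids : (K, d) cluster centers of the example-patch dataset
    bases     : list of K (d, m) orthonormal PCA bases, one per cluster
    The nearest-centroid rule is one plausible instantiation of adaptive
    sparse domain selection; the paper's true criterion may differ.
    """
    k = int(np.argmin(np.linalg.norm(centroids - patch, axis=1)))
    return k, bases[k]

def code_patch(patch, basis, thresh=0.1):
    """Sparse-code the patch in the selected domain by soft-thresholding,
    which is the exact l1 solution for an orthonormal basis; returns the
    reconstructed patch."""
    coef = basis.T @ patch
    coef = np.sign(coef) * np.maximum(np.abs(coef) - thresh, 0.0)
    return basis @ coef
```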

1,253 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: A novel dictionary learning (DL) method based on the Fisher discrimination criterion, in which a structured dictionary whose atoms correspond to the class labels is learned so that the reconstruction error after sparse coding can be used for pattern classification.
Abstract: Sparse representation based classification has led to interesting image recognition results, while the dictionary used for sparse coding plays a key role in it. This paper presents a novel dictionary learning (DL) method to improve the pattern classification performance. Based on the Fisher discrimination criterion, a structured dictionary, whose dictionary atoms have correspondence to the class labels, is learned so that the reconstruction error after sparse coding can be used for pattern classification. Meanwhile, the Fisher discrimination criterion is imposed on the coding coefficients so that they have small within-class scatter but big between-class scatter. A new classification scheme associated with the proposed Fisher discrimination DL (FDDL) method is then presented by using both the discriminative information in the reconstruction error and sparse coding coefficients. The proposed FDDL is extensively evaluated on benchmark image databases in comparison with existing sparse representation and DL based classification methods.
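The discriminative part of the objective acts on the coding coefficients: small within-class scatter, large between-class scatter. A minimal sketch of that Fisher term follows (the paper additionally regularizes the coefficients to keep the overall term well-behaved; weighting constants are omitted here).

```python
import numpy as np

def fisher_term(A, labels):
    """Fisher discrimination penalty on coding coefficients A of shape (m, n):
    trace of within-class scatter minus trace of between-class scatter.
    Minimizing it makes each class's codes compact while pushing class means
    apart. (FDDL also adds an elastic regularizer on A, omitted for brevity.)
    """
    mean_all = A.mean(axis=1, keepdims=True)
    sw = sb = 0.0
    for c in np.unique(labels):
        Ac = A[:, labels == c]
        mc = Ac.mean(axis=1, keepdims=True)
        sw += ((Ac - mc) ** 2).sum()                       # tr(S_W): scatter around class mean
        sb += Ac.shape[1] * ((mc - mean_all) ** 2).sum()   # tr(S_B): spread of class means
    return sw - sb
```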

1,002 citations


Proceedings ArticleDOI
20 Jun 2011
TL;DR: A double-header l1-optimization problem, where the regularization involves both dictionary learning and structural clustering, is formulated, and a new denoising algorithm built upon clustering-based sparse representation (CSR) is proposed.
Abstract: Where does the sparsity in image signals come from? Local and nonlocal image models have supplied complementary views toward the regularity in natural images: the former attempts to construct or learn a dictionary of basis functions that promotes the sparsity, while the latter connects the sparsity with the self-similarity of the image source by clustering. In this paper, we present a variational framework unifying these two views and propose a new denoising algorithm built upon clustering-based sparse representation (CSR). Inspired by the success of l1-optimization, we formulate a double-header l1-optimization problem where the regularization involves both dictionary learning and structural clustering. A surrogate-function based iterative shrinkage solution is developed to solve the double-header l1-optimization problem, and a probabilistic interpretation of the CSR model is also included. Our experimental results show convincing improvements over the state-of-the-art denoising technique BM3D on the class of regular texture images. The PSNR performance of CSR denoising is at least comparable and often superior to other competing schemes, including BM3D, on a collection of 12 generic natural images.
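One surrogate-based shrinkage iteration for a double-header l1 objective might look as follows; applying the two soft-thresholding steps sequentially approximates the exact joint proximal operator, and all constants are illustrative rather than the paper's.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def csr_step(alpha, D, y, centroid, lam1, lam2, c):
    """One surrogate-based iterative-shrinkage step for the double-header
    l1 problem  min ||y - D a||^2 + lam1*||a||_1 + lam2*||a - centroid||_1,
    where `centroid` is the mean code of the patch's structural cluster.
    c must upper-bound the largest eigenvalue of D.T @ D (surrogate constant).
    Sequential shrinkage is an approximation of the joint proximal map.
    """
    g = alpha + D.T @ (y - D @ alpha) / c          # Landweber (surrogate) update
    a = soft(g, lam1 / c)                          # promote sparsity of a
    return centroid + soft(a - centroid, lam2 / c) # promote sparsity of a - centroid
```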

503 citations


Journal ArticleDOI
03 Jun 2011-PLOS ONE
TL;DR: The results suggested that long-term internet addiction would result in brain structural alterations, which probably contributed to chronic dysfunction in subjects with IAD.
Abstract: Background: Recent studies suggest that internet addiction disorder (IAD) is associated with structural abnormalities in brain gray matter. However, few studies have investigated the effects of internet addiction on the microstructural integrity of major neuronal fiber pathways, and almost no studies have assessed the microstructural changes with the duration of internet addiction. Methodology/Principal Findings: We investigated the morphology of the brain in adolescents with IAD (N=18) using an optimized voxel-based morphometry (VBM) technique, and studied the white matter fractional anisotropy (FA) changes using the diffusion tensor imaging (DTI) method, linking these brain structural measures to the duration of IAD. We provided evidence demonstrating multiple structural changes of the brain in IAD subjects. VBM results indicated decreased gray matter volume in the bilateral dorsolateral prefrontal cortex (DLPFC), the supplementary motor area (SMA), the orbitofrontal cortex (OFC), the cerebellum and the left rostral ACC (rACC). DTI analysis revealed an enhanced FA value in the left posterior limb of the internal capsule (PLIC) and a reduced FA value in the white matter within the right parahippocampal gyrus (PHG). Gray matter volumes of the DLPFC, rACC, and SMA, and the white matter FA changes of the PLIC, were significantly correlated with the duration of internet addiction in the adolescents with IAD. Conclusions: Our results suggested that long-term internet addiction would result in brain structural alterations, which probably contributed to chronic dysfunction in subjects with IAD. The current study may shed further light on the potential brain effects of IAD.

324 citations


Journal ArticleDOI
TL;DR: An improved artificial bee colony (IABC) algorithm for global optimization is presented. Inspired by differential evolution and introducing a parameter M, it uses a selective probability p to control the frequency with which the "ABC/rand/1" and "ABC/best/1" strategies are applied, yielding a new search mechanism.
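Since only the strategy names are given here, the sketch below models them on the differential-evolution mutation operators they are named after; the exact update equations, the role of M, and the default p are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def candidate(pop, best, i, p=0.25, M=1.0):
    """Generate a candidate food source for bee i.

    pop  : (n, dim) current food sources; best: the best source found so far.
    With probability p, "ABC/best/1" is used; otherwise "ABC/rand/1".
    Both update forms and the use of M as the step-scale bound are
    assumptions modeled on the DE operators the names refer to.
    """
    others = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(others, 3, replace=False)
    phi = rng.uniform(-M, M, size=pop.shape[1])    # random step scaled by M
    if rng.random() < p:
        return best + phi * (pop[r1] - pop[r2])    # "ABC/best/1": exploit the best
    return pop[r1] + phi * (pop[r2] - pop[r3])     # "ABC/rand/1": explore randomly
```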

316 citations


Journal ArticleDOI
TL;DR: In this paper, the consensus problem of heterogeneous multi-agent systems is considered and sufficient conditions for consensus are established when the communication topologies are undirected connected graphs and leader-following networks.
Abstract: In this study, the consensus problem of heterogeneous multi-agent systems is considered. First, a heterogeneous multi-agent system composed of first-order and second-order integrator agents is proposed. Then, the consensus problem of this system is discussed under the linear consensus protocol and the saturated consensus protocol, respectively. By applying graph theory and the Lyapunov direct method, some sufficient conditions for consensus are established when the communication topologies are undirected connected graphs and leader-following networks. Finally, some examples are presented to illustrate the theoretical results.
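A minimal simulation of one standard linear protocol on such a mixed network; the protocol form (neighbor position errors plus velocity damping for second-order agents), the gain k, and the step sizes are assumptions, and the paper's sufficient conditions determine when consensus is actually guaranteed.

```python
import numpy as np

def simulate(A, n2, k=2.0, dt=0.01, steps=5000):
    """Euler simulation of a linear consensus protocol on a heterogeneous
    network: agents 0..n2-1 are second-order integrators, the rest first-order.
    A is the symmetric adjacency matrix of an undirected connected graph."""
    n = A.shape[0]
    x = np.random.default_rng(1).uniform(-5, 5, n)   # positions
    v = np.zeros(n2)                                 # velocities (2nd-order agents)
    for _ in range(steps):
        u = A @ x - A.sum(axis=1) * x                # u_i = sum_j a_ij (x_j - x_i)
        x[n2:] += dt * u[n2:]                        # first-order:  x' = u
        v += dt * (u[:n2] - k * v)                   # second-order: v' = u - k v
        x[:n2] += dt * v
    return x  # entries approach a common value when consensus is achieved

# Example: a 4-agent path graph with two second-order agents
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(simulate(A, n2=2))
```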

293 citations


Journal ArticleDOI
TL;DR: An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed, on the basis of a novel shadow-removal technique (an improved Bernsen algorithm combined with a Gaussian filter) and character recognition algorithms.
Abstract: An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow-removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binarization method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to variance in illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested on 9026 images, including natural-scene vehicle images with different backgrounds and ambient illumination, and particularly low-resolution images. License plates were properly located and segmented at rates of 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall success rate for the license plate reaches 93.54% when the system is used for LPR in various complex conditions.
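A rough sketch of the binarization idea named here, i.e., Bernsen local thresholding preceded by Gaussian smoothing; window size, contrast limit, and sigma are illustrative, and the paper's specific improvements to Bernsen are not reproduced.

```python
import numpy as np
from scipy import ndimage

def bernsen_binarize(img, win=15, contrast_min=25, sigma=1.5):
    """Bernsen local thresholding preceded by Gaussian smoothing, a rough
    stand-in for the paper's shadow-removal binarization.
    img : 2-D grayscale array (uint8 or float)."""
    g = ndimage.gaussian_filter(img.astype(float), sigma)  # suppress noise first
    lo = ndimage.minimum_filter(g, size=win)
    hi = ndimage.maximum_filter(g, size=win)
    mid = (lo + hi) / 2.0                                  # Bernsen local threshold
    out = g > mid
    # Low-contrast neighborhoods carry no reliable local threshold:
    # fall back to a global comparison for those pixels.
    flat = (hi - lo) < contrast_min
    out[flat] = g[flat] > g.mean()
    return out.astype(np.uint8) * 255
```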

291 citations


Journal ArticleDOI
TL;DR: An intuitionistic fuzzy multi-criteria group decision making method with grey relational analysis (GRA) is proposed and a numerical example for personnel selection is given to illustrate the proposed method.
Abstract: Due to the increasing competition of globalization, selection of the most appropriate personnel is one of the key factors for an organization's success. The importance and complexity of the personnel selection problem call for a method combining both subjective and objective assessments rather than just subjective decisions. The aim of this paper is to develop a new method for this decision making process. An intuitionistic fuzzy multi-criteria group decision making method with grey relational analysis (GRA) is proposed. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. GRA is applied to the ranking and selection of alternatives. Finally, a numerical example for personnel selection is given to illustrate the proposed method.
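The IFWA aggregation step has a standard closed form for intuitionistic fuzzy values; a small sketch follows (the example weights and ratings are made up).

```python
import numpy as np

def ifwa(mu, nu, w):
    """Intuitionistic fuzzy weighted averaging (IFWA) of decision-maker opinions.

    Each opinion is an intuitionistic fuzzy value (mu_i, nu_i) with membership
    mu_i, non-membership nu_i, mu_i + nu_i <= 1; weights w sum to 1.
    Returns the aggregated group opinion in the standard IFWA form:
    (1 - prod((1 - mu_i)^w_i), prod(nu_i^w_i)).
    """
    mu, nu, w = map(np.asarray, (mu, nu, w))
    agg_mu = 1.0 - np.prod((1.0 - mu) ** w)
    agg_nu = np.prod(nu ** w)
    return agg_mu, agg_nu

# Three decision makers rating one alternative on one criterion
print(ifwa([0.7, 0.6, 0.8], [0.2, 0.3, 0.1], [0.4, 0.35, 0.25]))
```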

279 citations


Journal ArticleDOI
TL;DR: An optimized ICF method which determines an optimal frequency response filter for each ICF iteration using convex optimization techniques is developed and the clipped OFDM symbols obtained have less distortion and lower out-of-band radiation than the existing method.
Abstract: Iterative clipping and filtering (ICF) is a widely used technique to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals. However, the ICF technique, when implemented with a fixed rectangular window in the frequency domain, requires many iterations to approach a specified PAPR threshold in the complementary cumulative distribution function (CCDF). In this paper, we develop an optimized ICF method which determines an optimal frequency response filter for each ICF iteration using convex optimization techniques. The optimal filter is designed to minimize signal distortion such that the OFDM symbol's PAPR is below a specified value. Simulation results show that our proposed method can achieve a sharp drop of the CCDF curve and reduce PAPR to an acceptable level after only 1 or 2 iterations, whereas the classical ICF method would require 8 to 16 iterations to achieve a similar PAPR reduction. Moreover, the clipped OFDM symbols obtained by our optimized ICF method have less distortion and lower out-of-band radiation than those of the existing method.
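For reference, the classical fixed-window ICF baseline that the paper improves on can be sketched as below; the per-iteration convex-optimized filter itself is not reproduced, and the clipping level and oversampling factor are illustrative.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def classical_icf(X, clip_db=4.0, iters=8, L=4):
    """Classical ICF with a fixed rectangular frequency window.

    X : frequency-domain OFDM symbol of N subcarriers; L : oversampling factor.
    Each iteration clips the time-domain amplitude, then zeroes all
    out-of-band bins (the fixed rectangular filter)."""
    N = len(X)
    inband = np.zeros(N * L, bool)
    inband[:N // 2] = inband[-N // 2:] = True   # centered subcarrier mapping
    Xo = np.zeros(N * L, complex)
    Xo[inband] = X
    for _ in range(iters):
        x = np.fft.ifft(Xo)
        a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_db / 20)  # clip level
        mag = np.abs(x)
        x = np.where(mag > a, x * (a / np.maximum(mag, 1e-12)), x)   # amplitude clip
        Xo = np.where(inband, np.fft.fft(x), 0)                      # rectangular filter
    return np.fft.ifft(Xo)

rng = np.random.default_rng(2)
X = (rng.normal(size=64) + 1j * rng.normal(size=64)) / np.sqrt(2)
# PAPR before (iters=0 returns the unclipped oversampled signal) vs. after
print(papr_db(classical_icf(X, iters=0)), papr_db(classical_icf(X, iters=8)))
```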

270 citations


Journal ArticleDOI
Qiguang Miao1, Cheng Shi1, Pengfei Xu1, Mei Yang1, Yao-bo Shi 
TL;DR: As the shearlet transform has the features of directionality, localization, anisotropy and multiscale, it is introduced into image fusion to obtain a fused image that contains more detail and less distortion than other methods produce.
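A transform-domain fusion sketch; since shearlet implementations vary, a separable wavelet (via PyWavelets) stands in for the shearlet transform here, and the max-absolute-coefficient rule is an assumed common choice rather than the paper's stated one.

```python
import numpy as np
import pywt

def fuse(a, b, wavelet="db2", levels=3):
    """Fuse two registered grayscale images in a transform domain:
    average the approximation bands, keep the larger-magnitude coefficient
    in every detail band. (Output may be padded by a pixel for odd sizes.)"""
    ca, cb = (pywt.wavedec2(x, wavelet, level=levels) for x in (a, b))
    fused = [(ca[0] + cb[0]) / 2]                  # average the approximations
    for da, db in zip(ca[1:], cb[1:]):             # per-level (H, V, D) detail tuples
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```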

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A novel sparse representation model called centralized sparse representation (CSR) is proposed, which achieves convincing improvement over previous state-of-the-art methods on image restoration tasks by exploiting the nonlocal image statistics.
Abstract: This paper proposes a novel sparse representation model called centralized sparse representation (CSR) for image restoration tasks. For faithful image reconstruction, the sparse coding coefficients of the degraded image should be as close as possible to those of the unknown original image with the given dictionary. However, since the available data are the degraded (noisy, blurred and/or down-sampled) versions of the original image, the sparse coding coefficients are often not accurate enough if only the local sparsity of the image is considered, as in many existing sparse representation models. To make the sparse coding more accurate, a centralized sparsity constraint is introduced by exploiting the nonlocal image statistics. The local sparsity and the nonlocal sparsity constraints are unified into a variational framework for optimization. Extensive experiments on image restoration validate that our CSR model achieves convincing improvement over previous state-of-the-art methods.
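The centralizing target for each patch's code can be estimated from nonlocal similar patches. One plausible instantiation is sketched below, using exponential similarity weights with an assumed bandwidth h; in the variational framework the codes are then penalized toward this target by an extra l1 term.

```python
import numpy as np

def nonlocal_centroid(coefs, patches, q, h=10.0):
    """Estimate the 'centralized' coefficient vector for query patch q as a
    similarity-weighted average of the codes of its nonlocal neighbors.

    coefs   : (n, m) sparse codes of the candidate patches
    patches : (n, d) the candidate patches themselves
    q       : (d,) the query patch
    The exponential weighting and bandwidth h are assumed choices."""
    d2 = ((patches - q) ** 2).sum(axis=1)
    w = np.exp(-d2 / (h * h))
    w /= w.sum()
    return w @ coefs   # mu: the target the sparse codes are shrunk toward
```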

Journal ArticleDOI
TL;DR: A novel and computationally efficient method to design optimal control places, and an iteration approach that only computes the reachability graph of a plant Petri net model once in order to obtain a maximally permissive liveness-enforcing supervisor for an FMS.
Abstract: Deadlock prevention plays an important role in the modeling and control of flexible manufacturing systems (FMS). This paper presents a novel and computationally efficient method to design optimal control places, and an iteration approach that only computes the reachability graph of a plant Petri net model once in order to obtain a maximally permissive liveness-enforcing supervisor for an FMS. By using a vector covering approach, a minimal covering set of legal markings and a minimal covered set of first-met bad markings (FBM) are computed. At each iteration, an FBM from the minimal covered set is selected. By solving an integer linear programming problem, a place invariant is designed to prevent the FBM from being reached and no marking in the minimal covering set of legal markings is forbidden. This process is carried out until no FBM can be reached. In order to make the considered problem computationally tractable, binary decision diagrams (BDD) are used to compute the sets of legal markings and FBM, and solve the vector covering problem to get a minimal covering set of legal markings and a minimal covered set of FBM. Finally, a number of FMS examples are presented to illustrate the proposed approaches.
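The per-iteration core is an integer linear program: find a place invariant that cuts off the selected FBM while forbidding no marking in the minimal covering set of legal markings. A simplified sketch using PuLP follows; the objective and variable bounds are illustrative, and the paper's full formulation also derives the control place from the invariant.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpStatus

def design_control_place(fbm, legal, n_places):
    """Design one place invariant (l, beta) that forbids the first-met bad
    marking `fbm` while keeping every marking in `legal` reachable.

    fbm   : length-n_places marking (token counts)
    legal : list of length-n_places markings (minimal covering set)
    """
    prob = LpProblem("place_invariant", LpMinimize)
    l = [LpVariable(f"l{i}", lowBound=0, cat="Integer") for i in range(n_places)]
    beta = LpVariable("beta", lowBound=0, cat="Integer")
    prob += lpSum(l)                                            # prefer simple invariants
    prob += lpSum(c * v for c, v in zip(fbm, l)) >= beta + 1    # forbid the FBM
    for m in legal:                                             # keep all legal markings
        prob += lpSum(c * v for c, v in zip(m, l)) <= beta
    prob.solve()
    return [int(v.value()) for v in l], int(beta.value()), LpStatus[prob.status]
```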

Journal ArticleDOI
Gang Wang1, Kehu Yang1
TL;DR: A new approach to the localization problem in wireless sensor networks using received-signal-strength (RSS) measurements, which is approximately solved by maximum likelihood (ML) parameter estimation, referred to as the weighted least squares (WLS) approach.
Abstract: In this letter, we propose a new approach to the localization problem in wireless sensor networks using received-signal-strength (RSS) measurements. The problem is reformulated under the equivalent exponential transformation of the conventional path loss measurement model and the unscented transformation (UT), and is approximately solved by maximum likelihood (ML) parameter estimation, which we refer to as the weighted least squares (WLS) approach. This formulation is used for sensor node localization in both noncooperative and cooperative scenarios. Simulation results confirm the effectiveness of the approach for both outdoor and indoor environments.
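A toy version of RSS localization via weighted least squares under the log-distance path-loss model; the letter's unscented-transform weighting is not reproduced, and the 1/d^2 weights, model constants, and linearization are assumptions.

```python
import numpy as np

def rss_wls(anchors, rss, P0=-40.0, np_exp=3.0, d0=1.0):
    """Localize a node from RSS readings at known anchors via a linearized
    weighted least squares fit.

    Path-loss model: P_i = P0 - 10*np_exp*log10(d_i/d0); the exponential
    transform turns each reading into a distance estimate."""
    d = d0 * 10 ** ((P0 - rss) / (10 * np_exp))      # distance estimates
    # Linearization: ||u||^2 - 2 a_i.u = d_i^2 - ||a_i||^2, unknowns (u, ||u||^2)
    A = np.hstack([-2 * anchors, np.ones((len(d), 1))])
    b = d ** 2 - (anchors ** 2).sum(axis=1)
    W = np.diag(1.0 / d ** 2)                        # trust nearer anchors more
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return theta[:2]                                 # estimated (x, y)

anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
true = np.array([3.0, 6.0])
rss = -40 - 30 * np.log10(np.linalg.norm(anchors - true, axis=1))
print(rss_wls(anchors, rss))   # ~ [3, 6] in this noiseless case
```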

Journal ArticleDOI
TL;DR: A memetic algorithm is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions, and the effectiveness and the multiresolution ability of the proposed method is shown.
Abstract: Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method.
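The quality function being optimized can be sketched directly. This uses the common tunable form of modularity density, where the parameter lam plays the resolution-control role described above; lam = 0.5 recovering the standard definition is the usual convention.

```python
import numpy as np

def modularity_density(A, communities, lam=0.5):
    """Tunable modularity density of a partition.

    A           : symmetric 0/1 adjacency matrix
    communities : list of node-index arrays, one per community
    For each community: (2*lam*L_in - 2*(1-lam)*L_out) / |V_c|, where L_in is
    the sum of A over ordered internal pairs (= 2x internal edges) and L_out
    counts edges leaving the community. Varying lam explores resolutions."""
    D = 0.0
    for nodes in communities:
        sub = A[np.ix_(nodes, nodes)]
        l_in = sub.sum()                  # 2 * number of internal edges
        l_out = A[nodes].sum() - l_in     # edges leaving the community
        D += (2 * lam * l_in - 2 * (1 - lam) * l_out) / len(nodes)
    return D
```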

Journal ArticleDOI
TL;DR: In this paper, a modified antipodal Vivaldi antenna is presented and a novel tapered slot edge (TSE) structure is employed in this design, which has the capacity to extend the low-end bandwidth limitation and improve the radiation characteristics in the lower frequencies.
Abstract: In this letter, a modified antipodal Vivaldi antenna is presented. A novel tapered slot edge (TSE) structure is employed in this design. The proposed TSE has the capacity to extend the low-end bandwidth limitation and improve the radiation characteristics at the lower frequencies. A prototype of the modified antenna is fabricated and experimentally studied as well. The measured results show reasonable agreement with the simulated ones, which validates the design procedure and confirms the benefits of the modification.

Journal ArticleDOI
TL;DR: A deadlock prevention policy for flexible manufacturing systems (FMS) is proposed, which can obtain a maximally permissive liveness-enforcing Petri net supervisor while the number of control places is compressed.

Journal ArticleDOI
Wei-Jun Wu1, Yingzeng Yin1, Shaoli Zuo1, Zhi-Ya Zhang1, Jiao-Jiao Xie1 
TL;DR: In this paper, two microstrip square open-loop resonators, a coupled line, and a Γ-shaped antenna are used and integrated to be a filter-antenna.
Abstract: Design, fabrication, and measurement of a new compact filter-antenna for modern wireless communication systems are presented in this letter. Two microstrip square open-loop resonators, a coupled line, and a Γ-shaped antenna are used and integrated into a filter-antenna. The Γ-shaped antenna is excited by a coupled line that is treated as the admittance inverter in filter design. The Γ-shaped antenna serves not only as a radiator, but also as the last resonator of the bandpass filter. Therefore, near-zero transition loss is achieved between the filter and the antenna. The design procedure follows the circuit-approach synthesis of bandpass filters. Measured results show that the filter-antenna achieves an impedance bandwidth of 16.3% (over 2.26-2.66 GHz) at a reflection coefficient |S11| < -10 dB and has a gain of 2.41 dBi.

Journal ArticleDOI
TL;DR: The distribution is demonstrated to be a CFCR representation that is computed without using any searching operation and to generate a new TF representation, called inverse LVD (ILVD), and a new ambiguity function, called Lv's ambiguity function (LVAF), both of which may break through the tradeoff between resolution and cross terms.
Abstract: This paper proposes a novel representation, known as Lv's distribution (LVD), of linear frequency modulated (LFM) signals. It is well known that a monocomponent LFM signal can be uniquely determined by two important physical quantities, centroid frequency and chirp rate (CFCR). The basic reason for expressing an LFM signal in the CFCR domain is that these two quantities may not be apparent in the time or time-frequency (TF) domain. The goal of the LVD is to naturally and accurately represent a mono- or multicomponent LFM signal in the CFCR domain. The proposed LVD is simple and only requires a two-dimensional (2-D) Fourier transform of a parametric scaled symmetric instantaneous autocorrelation function. It can be easily implemented by using complex multiplications and fast Fourier transforms (FFT) based on the scaling principle. The computational complexity, properties, detection performance and representation errors are analyzed for this new distribution. Comparisons with three other popular methods, the Radon-Wigner transform (RWT), the Radon-Ambiguity transform (RAT), and the fractional Fourier transform (FRFT), are performed. With several numerical examples, our distribution is demonstrated to be a CFCR representation that is computed without using any searching operation. The main significance of the LVD is to convert a 1-D LFM signal into a 2-D single-frequency signal. One of the most important applications of the LVD is to generate a new TF representation, called the inverse LVD (ILVD), and a new ambiguity function, called Lv's ambiguity function (LVAF), both of which may break through the tradeoff between resolution and cross terms.

Journal ArticleDOI
TL;DR: In this paper, a novel triband square-slot antenna with symmetrical L-strips is presented for WLAN and WiMAX applications, which can yield three different resonances to cover the desired bands while maintaining small size and simple structure.
Abstract: A novel triband square-slot antenna with symmetrical L-strips is presented for WLAN and WiMAX applications. The proposed antenna is composed of a square slot, a pair of L-strips, and a monopole radiator. By employing these structures, the antenna can yield three different resonances to cover the desired bands while maintaining small size and simple structure. Based on this concept, a prototype of a triband antenna is designed, fabricated, and tested. The experimental results show the antenna has the impedance bandwidths of 480 MHz (2.34-2.82 GHz), 900 MHz (3.16-4.06 GHz), and 680 MHz (4.69-5.37 GHz), which can cover both WLAN in the 2.4/5.2-GHz bands and WiMAX in the 2.5/3.5-GHz bands.

Proceedings ArticleDOI
22 Mar 2011
TL;DR: This paper proposes a multi-authority ciphertext-policy (AND gates with wildcard) ABE scheme with accountability, which allows tracing the identity of a misbehaving user who leaked the decryption key to others, and thus reduces the trust assumptions not only on the authorities but also the users.
Abstract: Attribute-based encryption (ABE) is a promising tool for implementing fine-grained cryptographic access control. Very recently, motivated by reducing the trust assumption on the authority and enhancing the privacy of users, a multiple-authority key-policy ABE system, together with a semi-generic anonymous key-issuing protocol, was proposed by Chase and Chow at CCS 2009. Since ABE allows encryption for multiple users with attributes satisfying the same policy, it may not always be possible to associate a decryption key with a particular individual. A misbehaving user could abuse the anonymity by leaking the key to someone else, without worrying about being traced. In this paper, we propose a multi-authority ciphertext-policy (AND gates with wildcard) ABE scheme with accountability, which allows tracing the identity of a misbehaving user who leaked the decryption key to others, and thus reduces the trust assumptions not only on the authorities but also on the users. The tracing process is efficient and its computational overhead is only proportional to the length of the identity.

Journal ArticleDOI
Aifei Liu1, Guisheng Liao1, Cao Zeng1, Zhiwei Yang1, Qing Xu1 
TL;DR: A new DOA estimation method based on the eigendecomposition of a covariance matrix which is constructed by the dot product of the array output vector and its conjugate is proposed.
Abstract: In this paper, we consider the problem of direction of arrival (DOA) estimation in the presence of sensor gain-phase errors. Under some mild assumptions, we propose a new DOA estimation method based on the eigendecomposition of a covariance matrix which is constructed by the dot product of the array output vector and its conjugate. By combining the new DOA estimation with the conventional gain-phase error estimation, a method is proposed to simultaneously estimate the DOA and gain-phase errors without joint iteration. Theoretical analysis shows that the proposed method performs independently of phase errors and thus behaves well regardless of phase errors. However, the resolution capability of the proposed method is lower than that of the method in [A. J. Weiss and B. Friedlander, "Eigenstructure methods for direction finding with sensor gain and phase uncertainties," Circuits Systems Signal Process., vol. 9, no. 3, pp. 271-300, 1990], referred to as the WF method. In order to improve the resolution capability and maintain phase error independence, a combined strategy is developed using the proposed and WF methods. The advantage of the proposed methods is that they are independent of phase errors, eliminating the need for phase error calibration during the operating life of an array. Moreover, the proposed methods avoid the problem of suboptimal convergence which occurs in the WF method. The drawbacks of the proposed methods are their high computational complexity and their requirement that at least two signals be spatially far from each other, and they are not applicable to a linear array. Simulation results verify the effectiveness of the proposed methods.

Journal ArticleDOI
Shengqi Zhu1, Guisheng Liao1, Yi Qu1, Zhengguang Zhou1, Xiangyang Liu1 
TL;DR: Theoretical analysis confirms that the methodology can precisely focus targets without an interpolation procedure, and the effectiveness of the proposed imaging technique is demonstrated by both simulated and real airborne SAR data.
Abstract: It is well known that the motion of a target induces range migration, especially for high-resolution synthetic aperture radar (SAR) systems. Ground moving target imaging necessitates the correction of the unknown range migration. To finely refocus a moving target, one must accurately obtain the motion parameters for compensating the target trajectory. However, in practice, these parameters usually cannot be precisely estimated. This paper proposes a new imaging approach for ground moving targets without a priori knowledge of their motion parameters. In the devised method, the azimuth compression function is constructed in the range frequency domain, which can eliminate the coupling effect between range and azimuth. Theoretical analysis confirms that the methodology can precisely focus targets without an interpolation procedure. The effectiveness of the proposed imaging technique is demonstrated by both simulated and real airborne SAR data.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed position-patch based face hallucination method is very effective in producing high-quality hallucinated face images.
Abstract: We provide a position-patch based face hallucination method using convex optimization. Recently, a novel position-patch based face hallucination method has been proposed to save computational time and achieve high-quality hallucinated results. This method employs least squares estimation to obtain the optimal weights for face hallucination. However, the least squares estimation approach can provide biased solutions when the number of training position-patches is much larger than the dimension of the patch. To overcome this problem, this letter proposes a new position-patch based face hallucination method which is based on convex optimization. Experimental results demonstrate that our method is very effective in producing high-quality hallucinated face images.
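A sketch of the per-position weight estimation and transfer; the convex program is written here as l1-regularized least squares solved by ISTA, which is an assumption about the letter's exact objective.

```python
import numpy as np

def hallucinate_patch(lr_patch, lr_train, hr_train, lam=0.01, iters=300):
    """Estimate reconstruction weights for one position-patch by
    l1-regularized least squares, then transfer them to the HR patches.

    lr_patch : (d_lr,) observed low-resolution patch
    lr_train : (d_lr, n) LR training patches at this position
    hr_train : (d_hr, n) corresponding HR training patches
    The objective form and lam are assumptions."""
    A, y = lr_train, lr_patch
    c = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):               # ISTA: gradient step + soft threshold
        g = w + A.T @ (y - A @ w) / c
        w = np.sign(g) * np.maximum(np.abs(g) - lam / c, 0.0)
    return hr_train @ w                  # hallucinated HR patch
```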

Journal ArticleDOI
TL;DR: A novel framework for LDE is developed by incorporating the merits from the generalized statistical quantity histogram (GSQH) and the histogram-based embedding and is secure for copyright protection because of the safe storage and transmission of side information.
Abstract: Histogram-based lossless data embedding (LDE) has been recognized as an effective and efficient way for copyright protection of multimedia. Recently, an LDE method using the statistical quantity histogram achieved good performance by utilizing the similarity of the arithmetic average of difference histogram (AADH) to reduce the diversity of images and ensure stable LDE performance. However, this method is strongly dependent on some assumptions, which limits its applications in practice. In addition, the capacities of images with a flat AADH, e.g., texture images, are rather low. For this purpose, we develop a novel framework for LDE by incorporating the merits of the generalized statistical quantity histogram (GSQH) and histogram-based embedding. Algorithmically, we design the GSQH-driven LDE framework carefully so that it: (1) utilizes the similarity and sparsity of the GSQH to construct an efficient embedding carrier, leading to a general and stable framework; (2) is widely adaptable for different kinds of images, due to the usage of the divide-and-conquer strategy; (3) is scalable for different capacity requirements and avoids the capacity problems caused by a flat histogram distribution; (4) is conditionally robust against JPEG compression under a suitable scale factor; and (5) is secure for copyright protection because of the safe storage and transmission of side information. Thorough experiments over three kinds of images demonstrate the effectiveness of the proposed framework.
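The primitive underneath histogram-based LDE is peak-bin histogram shifting, sketched below on an integer sequence; the paper's generalized statistical quantity construction, divide-and-conquer strategy, and side-information handling are not reproduced.

```python
import numpy as np

def embed_bits(vals, bits):
    """Generic histogram-shifting embedding on an integer sequence.

    Values greater than the peak bin p shift right by 1 to open an empty bin;
    each occurrence of p then carries one bit (p -> p for 0, p -> p+1 for 1).
    Returns the marked values and p (needed for extraction)."""
    vals = np.asarray(vals).copy()
    p = int(np.bincount(vals - vals.min()).argmax() + vals.min())  # peak bin
    vals[vals > p] += 1                       # open an empty bin at p+1
    carriers = np.flatnonzero(vals == p)
    assert len(bits) <= len(carriers), "capacity = count of peak-bin values"
    vals[carriers[:len(bits)]] += np.asarray(bits)
    return vals, p

def extract_bits(marked, p, n):
    """Recover n embedded bits and restore the original values losslessly."""
    marked = np.asarray(marked).copy()
    carriers = np.flatnonzero((marked == p) | (marked == p + 1))
    bits = (marked[carriers[:n]] == p + 1).astype(int)
    marked[marked > p] -= 1                   # undo the shift (and the 1-bits)
    return bits, marked

marked, p = embed_bits([3, 5, 5, 4, 5, 6, 5], [1, 0, 1])
print(extract_bits(marked, p, 3))             # bits and the original sequence
```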

Journal ArticleDOI
TL;DR: Experimental results show that the proposed ISAR imaging framework is capable of precise reconstruction of ISAR images and effective suppression of both phase error and noise.
Abstract: From the theory of compressive sensing (CS), we know that the exact recovery of an unknown sparse signal can be achieved from limited measurements by solving a sparsity-constrained optimization problem. For inverse synthetic aperture radar (ISAR) imaging, the backscattering field of a target is usually composed of contributions by a very limited number of strong scattering centers, the number of which is much smaller than that of pixels in the image plane. In this paper, a novel framework for ISAR imaging is proposed through sparse stepped-frequency waveforms (SSFWs). By using the framework, measurements at only some portions of the frequency subbands are used to reconstruct full-resolution images by exploiting sparsity. This waveform strategy greatly reduces the amount of data and acquisition time and improves the antijamming capability. A new algorithm, named the sparsity-driven high-resolution range profile (HRRP) synthesizer, is presented in this paper to overcome the phase error due to motion that usually degrades HRRP synthesis. The sparsity-driven HRRP synthesizer is robust to noise. The main novelty of the proposed ISAR imaging framework is twofold: 1) the motion compensation is divided into three steps, allowing for very accurate estimation; and 2) both sparsity and signal-to-noise ratio are enhanced dramatically by coherent integration in cross-range before HRRP synthesis is performed. Both simulated and real measured data are used to test the robustness of the ISAR imaging framework with SSFWs. Experimental results show that the framework is capable of precise reconstruction of ISAR images and effective suppression of both phase error and noise.
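The recovery core, stripped of the motion-compensation steps, is sparse reconstruction from partial frequency measurements. A toy sketch with orthogonal matching pursuit standing in for the paper's synthesizer follows; sizes, sparsity level, and the random subband selection are illustrative.

```python
import numpy as np

def omp(F, y, k):
    """Orthogonal matching pursuit: recover a k-sparse profile x from y = F x,
    where F holds only the rows (frequency subbands) actually transmitted."""
    res, idx = y.astype(complex), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(F.conj().T @ res))))  # best-matching bin
        x, *_ = np.linalg.lstsq(F[:, idx], y, rcond=None)     # refit on support
        res = y - F[:, idx] @ x
    out = np.zeros(F.shape[1], complex)
    out[idx] = x
    return out

# Demo: 5 scatterers in 64 range bins, only 24 of 64 frequencies measured
rng = np.random.default_rng(3)
x_true = np.zeros(64, complex)
x_true[rng.choice(64, 5, replace=False)] = rng.normal(size=5) + 1j * rng.normal(size=5)
rows = rng.choice(64, 24, replace=False)
F = np.fft.fft(np.eye(64))[rows] / np.sqrt(64)   # partial Fourier sensing matrix
print(np.allclose(omp(F, F @ x_true, 5), x_true, atol=1e-8))  # typically True
```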

Journal ArticleDOI
24 Mar 2011-Sensors
TL;DR: Insight is provided into routing protocols designed specifically for large-scale WSNs based on the hierarchical structure and a comparison of each routing protocol is conducted to demonstrate the differences between the protocols.
Abstract: With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation among a large number of nodes, have become a hot topic. Here, "large-scale" mainly means a large coverage area or a high node density. Accordingly, routing protocols must scale well with network extent and node density. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the current mainstream methods for solving the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols have proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale WSNs are discussed and conclusions are drawn.

Journal ArticleDOI
TL;DR: Three low-complexity relay-selection strategies, namely, selective amplify and forward (S-AF), selective decode and forward (S-DF), and amplify and forward with partial relay selection (PRS-AF), in a spectrum-sharing scenario are studied, and the diversity and coding gains are derived and compared.
Abstract: Three low-complexity relay-selection strategies, namely, selective amplify and forward (S-AF), selective decode and forward (S-DF), and amplify and forward with partial relay selection (PRS-AF), in a spectrum-sharing scenario are studied. First, we consider a scenario where perfect channel state information (CSI) is available. For this scenario, the respective asymptotic outage behaviors of the secondary systems are analyzed, from which the diversity and coding gains are derived and compared. Unlike the coding gain, which is shown to be very sensitive to the position of the primary receiver, the diversity gain of the secondary system is the same as that of the nonspectrum-sharing system. In addition, depending on the cooperative strategy employed, an increase in the number of relays may lead to severe loss of the coding gain. Afterwards, the impacts of imperfect CSI regarding the interference and transmit channels on the outage behavior of the secondary systems are analyzed. On one hand, imperfect CSI concerning the interference channels only affects the outage performance of the primary system, whereas it has no effect on the diversity gain of the secondary system. On the other hand, imperfect CSI concerning the transmit channels of the secondary systems may reduce the diversity gain of the three relay-selection strategies to unity, which is validated by both theoretical and numerical results.

Journal ArticleDOI
TL;DR: A novel pointwise-adaptive speckle filter based on local homogeneous-region segmentation with pixel-relativity measurement and a novel evaluation metric of edge-preservation degree based on ratio of average is provided for more precise quantitative assessment.
Abstract: This paper provides a novel pointwise-adaptive speckle filter based on local homogeneous-region segmentation with pixel-relativity measurement. A ratio distance is proposed to measure the distance between two speckled-image patches. Theoretical proofs indicate that the ratio distance is valid for multiplicative speckle, whereas the traditional Euclidean distance fails in this case. The probability density function of the ratio distance is deduced to map the distance into a relativity value. This new relativity-measurement method is free of parameter setting and more flexible than Gaussian kernel-projection-based ones. The new measurement method is successfully applied to segment a local shape-adaptive homogeneous region for each pixel, and a simplified strategy for the segmentation implementation is given in this paper. After segmentation, the maximum likelihood rule is introduced to estimate the true signal within every homogeneous region. A novel evaluation metric of edge-preservation degree based on the ratio of average is also provided for more precise quantitative assessment. The visual and numerical experimental results show that the proposed filter outperforms existing state-of-the-art despeckling filters.
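To make the idea concrete: for multiplicative speckle, pixel ratios near 1 indicate the same underlying signal regardless of its level, which a Euclidean distance cannot express. The symmetric log-ratio form below is an assumption, not the paper's exact definition; the ML estimate under fully developed (gamma-distributed) speckle being the sample mean is standard.

```python
import numpy as np

def ratio_distance(p, q, eps=1e-12):
    """A symmetric ratio-based distance between two positive intensity
    patches of equal shape (an assumed form; the paper defines its own)."""
    r = (p + eps) / (q + eps)
    return float(np.mean(np.abs(np.log(r))))

def ml_estimate(region):
    """ML estimate of the true signal in a homogeneous region under
    fully developed multiplicative speckle: the sample mean."""
    return float(np.mean(region))
```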

Journal ArticleDOI
TL;DR: In this paper, a decentralized adaptive neural network (NN) output-feedback stabilization problem is investigated for a class of large-scale stochastic nonlinear strict-feedback systems, which interact through their outputs.
Abstract: In this paper, the decentralized adaptive neural network (NN) output-feedback stabilization problem is investigated for a class of large-scale stochastic nonlinear strict-feedback systems, which interact through their outputs. The nonlinear interconnections are assumed to be bounded by some unknown nonlinear functions of the system outputs. In each subsystem, only an NN is employed to compensate for all unknown upper bounding functions, which depend on its own output. Therefore, the controller design for each subsystem needs only its own information and is more simplified than in existing results. It is shown that, based on the backstepping method and the technique of nonlinear observer design, the whole closed-loop system can be proved to be stable in probability by constructing an overall state-quartic and parameter-quadratic Lyapunov function. The simulation results demonstrate the effectiveness of the proposed control scheme.