
Showing papers by "Xidian University published in 2012"


Journal ArticleDOI
TL;DR: A maximum a posteriori probability framework for SR recovery is proposed; thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.
Abstract: Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.
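The non-local prior above rests on non-local means weighting: a pixel is estimated as a weighted average of pixels whose surrounding patches look similar. A minimal pure-Python sketch of that weighting; the function name and the patch/search/decay parameters are illustrative assumptions, not the authors' implementation:

```python
import math

def nlm_estimate(img, r, c, patch=1, search=2, h=10.0):
    """Estimate pixel (r, c) as a weighted average of pixels in a search
    window, weighted by patch similarity (the non-local prior idea)."""
    rows, cols = len(img), len(img[0])

    def patch_at(pr, pc):
        # Clamped patch extraction around (pr, pc).
        return [img[min(max(pr + dr, 0), rows - 1)][min(max(pc + dc, 0), cols - 1)]
                for dr in range(-patch, patch + 1)
                for dc in range(-patch, patch + 1)]

    ref = patch_at(r, c)
    num, den = 0.0, 0.0
    for sr in range(max(r - search, 0), min(r + search + 1, rows)):
        for sc in range(max(c - search, 0), min(c + search + 1, cols)):
            cand = patch_at(sr, sc)
            d2 = sum((a - b) ** 2 for a, b in zip(ref, cand)) / len(ref)
            w = math.exp(-d2 / (h * h))   # similar patches get weight near 1
            num += w * img[sr][sc]
            den += w
    return num / den
```

The steering-kernel local prior plays the complementary role in the paper and is not sketched here.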

527 citations


Journal ArticleDOI
TL;DR: An improved solution search equation is proposed, in which the bee searches only around the best solution of the previous iteration to improve exploitation; it forms the basis of the modified ABC, which excludes the probabilistic selection scheme and the scout bee phase.

526 citations


Journal ArticleDOI
TL;DR: An unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm that exhibits lower error than its predecessors.
Abstract: This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and achieves better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than those of its predecessors.
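The mean-ratio and log-ratio operators that feed the fusion step can be sketched as follows. This is a toy illustration on list-of-lists images with positive intensities; the window size and the exact normalization are assumptions, not the paper's definitions:

```python
import math

def local_mean(img, r, c, k=1):
    """Mean intensity over a clamped (2k+1) x (2k+1) window."""
    rows, cols = len(img), len(img[0])
    vals = [img[rr][cc]
            for rr in range(max(r - k, 0), min(r + k + 1, rows))
            for cc in range(max(c - k, 0), min(c + k + 1, cols))]
    return sum(vals) / len(vals)

def mean_ratio(img1, img2):
    """Mean-ratio difference image: 1 - min(m1, m2) / max(m1, m2)."""
    return [[1.0 - min(local_mean(img1, r, c), local_mean(img2, r, c)) /
                   max(local_mean(img1, r, c), local_mean(img2, r, c))
             for c in range(len(img1[0]))] for r in range(len(img1))]

def log_ratio(img1, img2):
    """Log-ratio difference image: |log(x2 / x1)|, assuming positive pixels."""
    return [[abs(math.log(img2[r][c] / img1[r][c]))
             for c in range(len(img1[0]))] for r in range(len(img1))]
```

The paper then fuses the two difference images in the wavelet domain, which is omitted here.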

508 citations


Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper proposes a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud; it exploits ring signatures to compute the verification metadata needed to audit the correctness of shared data.
Abstract: With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data --- while preserving identity privacy --- remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.

389 citations


Journal ArticleDOI
TL;DR: A modified ABC algorithm (denoted ABC/best) is proposed, in which each bee searches only around the best solution of the previous iteration in order to improve exploitation.

380 citations


Journal ArticleDOI
TL;DR: This paper presents a new classifier, the kernel sparse representation-based classifier (KSRC), based on SRC and the kernel trick, a common technique in machine learning, and shows that KSRC improves the performance of SRC.
Abstract: Sparse representation-based classifier (SRC), a combined result of machine learning and compressed sensing, shows good classification performance on face image data. However, SRC cannot well classify data with the same direction distribution. The same direction distribution means that the sample vectors belonging to different classes distribute on the same vector direction. This paper presents a new classifier, the kernel sparse representation-based classifier (KSRC), based on SRC and the kernel trick, a common technique in machine learning. KSRC is a nonlinear extension of SRC and can remedy the drawback of SRC. To make the data in an input space separable, we implicitly map these data into a high-dimensional kernel feature space by using some nonlinear mapping associated with a kernel function. Since this kernel feature space has a very high (or possibly infinite) dimensionality, or is unknown, we have to avoid working in this space explicitly. Fortunately, we can indeed reduce the dimensionality of the kernel feature space by exploiting kernel-based dimensionality reduction methods. In the reduced subspace, we need to find sparse combination coefficients for a test sample and assign a class label to it. Similar to SRC, KSRC is also cast into an l1-minimization problem or a quadratically constrained l1-minimization problem. Extensive experimental results on UCI and face data sets show that KSRC improves the performance of SRC.
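Why the kernel trick helps with same-direction data can be seen with a toy check: two samples lying on the same vector direction are indistinguishable by direction alone (cosine similarity 1), while an RBF kernel still tells them apart. This sketch is illustrative only and omits the sparse coding and dimensionality reduction stages of KSRC; the gamma value is an assumption:

```python
import math

def rbf(x, y, gamma=0.5):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine(x, y):
    """Cosine similarity: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

x1 = [1.0, 2.0]   # sample of class A
x2 = [2.0, 4.0]   # sample of class B on the SAME direction (2 * x1)
same_direction = cosine(x1, x2)          # 1.0: direction cannot separate them
kernel_gap = rbf(x1, x1) - rbf(x1, x2)   # positive: kernel space can
```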

329 citations


Journal ArticleDOI
TL;DR: A sparse neighbor selection scheme for SR reconstruction is proposed, together with an extended Robust-SL0 algorithm that simultaneously finds the neighbors and solves for the reconstruction weights; the method achieves competitive SR quality compared with other state-of-the-art baselines.
Abstract: Until now, neighbor-embedding-based (NE) algorithms for super-resolution (SR) have carried out two independent processes to synthesize high-resolution (HR) image patches. In the first process, neighbor search is performed using the Euclidean distance metric, and in the second process, the optimal weights are determined by solving a constrained least squares problem. However, the separate processes are not optimal. In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. Recognizing that the k-nearest neighbor (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering. By conveying local structural information of HoG in the synthesis stage, the k-NN of each LR input patch is adaptively chosen from their associated subset, which significantly improves the speed of synthesizing the HR image while preserving the quality of reconstruction. Experimental results suggest that the proposed method can achieve competitive SR quality compared with other state-of-the-art baselines.
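The HoG feature used to cluster LR patches can be sketched in a few lines: central-difference gradients, with magnitude-weighted orientation votes binned over [0, 2*pi). The bin count and L1 normalization are assumptions, not the paper's exact settings:

```python
import math

def hog_feature(patch, bins=8):
    """Histogram of oriented gradients for one grayscale patch
    (central differences; magnitude-weighted orientation votes)."""
    rows, cols = len(patch), len(patch[0])
    hist = [0.0] * bins
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]
            gy = patch[r + 1][c] - patch[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)   # map to [0, 2*pi)
            hist[min(int(ang / (2 * math.pi) * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]   # L1-normalised histogram
```

Patches with similar local geometric structure then land in the same cluster, from which the k-NN are drawn.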

310 citations


Journal ArticleDOI
01 Jul 2012
TL;DR: This study surveys the state-of-the-art deadlock-control strategies for automated manufacturing systems by reviewing the principles and techniques that are involved in preventing, avoiding, and detecting deadlocks.
Abstract: Deadlocks are a rather undesirable situation in a highly automated flexible manufacturing system. Their occurrences often deteriorate the utilization of resources and may lead to catastrophic results in safety-critical systems. Graph theory, automata, and Petri nets are three important mathematical tools to handle deadlock problems in resource allocation systems. Particularly, Petri nets are considered as a popular formalism because of their inherent characteristics. They received much attention over the past decades to deal with deadlock problems, leading to a variety of deadlock-control policies. This study surveys the state-of-the-art deadlock-control strategies for automated manufacturing systems by reviewing the principles and techniques that are involved in preventing, avoiding, and detecting deadlocks. The focus is deadlock prevention due to its large and continuing stream of efforts. A control strategy is evaluated in terms of computational complexity, behavioral permissiveness, and structural complexity of its deadlock-free supervisor. This study provides readers with a conglomeration of the updated results in this area and facilitates engineers in finding a suitable approach for their industrial scenarios. Future research directions are finally discussed.
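The detection side surveyed above can be illustrated with a minimal Petri net reachability search that enumerates dead markings (markings enabling no transition). The toy net below encodes the classic two-process, two-resource circular wait; the matrices and place names are illustrative, not from the survey:

```python
from collections import deque

# Places: [A_free, B_free, P1_holds_A, P2_holds_B]
PRE  = [[1, 0, 0, 0],   # t0: P1 acquires resource A
        [0, 1, 1, 0],   # t1: P1 (holding A) acquires B, finishes, releases both
        [0, 1, 0, 0],   # t2: P2 acquires resource B
        [1, 0, 0, 1]]   # t3: P2 (holding B) acquires A, finishes, releases both
POST = [[0, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
M0 = (1, 1, 0, 0)       # both resources initially free

def fire(marking, pre, post, t):
    """Fire transition t if enabled; return the new marking or None."""
    if all(marking[p] >= pre[t][p] for p in range(len(marking))):
        return tuple(marking[p] - pre[t][p] + post[t][p]
                     for p in range(len(marking)))
    return None

def find_deadlocks(m0, pre, post):
    """BFS over the reachability graph, collecting dead markings."""
    seen, dead, queue = {m0}, [], deque([m0])
    while queue:
        m = queue.popleft()
        succ = [s for t in range(len(pre))
                if (s := fire(m, pre, post, t)) is not None]
        if not succ:
            dead.append(m)
        for s in succ:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return dead
```

Here the only dead marking is the circular wait where each process holds one resource and needs the other. Prevention methods, the survey's focus, add monitors so such markings become unreachable.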

274 citations


Journal ArticleDOI
TL;DR: A novel algorithm, called graph dual regularization non-negative matrix factorization (DNMF), which simultaneously considers the geometric structures of both the data manifold and the feature manifold is proposed.

243 citations


Journal ArticleDOI
TL;DR: This paper proposes two classes of consensus protocols, with and without velocity measurements, and proves that they solve the finite-time consensus problem under a strongly connected graph and a leader-following network, respectively.

222 citations


Journal ArticleDOI
TL;DR: The community detection is solved as a multiobjective optimization problem by using the multiobjective evolutionary algorithm based on decomposition, which maximizes the density of internal degrees and minimizes the density of external degrees simultaneously.
Abstract: Community structure is an important property of complex networks. Most optimization-based community detection algorithms employ a single optimization criterion. In this study, community detection is solved as a multiobjective optimization problem by using the multiobjective evolutionary algorithm based on decomposition. The proposed algorithm maximizes the density of internal degrees and minimizes the density of external degrees simultaneously. It can produce a set of solutions that represent various divisions of the network at different hierarchical levels. The number of communities is automatically determined by the non-dominated individuals resulting from our algorithm. Experiments on both synthetic and real-world network datasets verify that our algorithm is highly efficient at discovering high-quality community structures.
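The two objectives can be computed directly for any candidate partition. The sketch below uses the common definitions (internal density kin / n(n-1), external density kout / n(N-n) per community), which may differ in detail from the paper's exact formulation:

```python
def degree_densities(adj, communities):
    """Internal and external degree densities of a partition of the graph
    given as an adjacency dict {node: [neighbors]}. Maximise the first,
    minimise the second (the two objectives of the multiobjective model)."""
    label = {}
    for k, com in enumerate(communities):
        for v in com:
            label[v] = k
    n_total = len(label)
    internal, external = 0.0, 0.0
    for k, com in enumerate(communities):
        n = len(com)
        kin = sum(1 for u in com for v in adj[u] if label[v] == k)   # each edge twice
        kout = sum(1 for u in com for v in adj[u] if label[v] != k)
        if n > 1:
            internal += kin / (n * (n - 1))
        if n_total > n:
            external += kout / (n * (n_total - n))
    return internal, external
```

For two triangles joined by a single edge, splitting at that edge gives maximal internal density and near-zero external density, which is why the non-dominated front recovers the natural division.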

Journal ArticleDOI
TL;DR: In this paper, a new compact pattern reconfigurable U-slot antenna is presented, which can operate in either monopolar patch or normal patch mode in similar frequency ranges, and its radiation pattern can be switched between conical and boresight patterns electrically.
Abstract: A new compact pattern reconfigurable U-slot antenna is presented. The antenna consists of a U-slot patch and eight shorting posts. Each edge of the square patch is connected to two shorting posts via PIN diodes. By switching between the different states of the PIN diodes, the proposed antenna can operate in either monopolar patch or normal patch mode in similar frequency ranges. Therefore, its radiation pattern can be switched between conical and boresight patterns electrically. In addition, the plane with the maximum power level of the conical pattern can be changed between two orthogonal planes. Owing to a novel design of the switch geometry, the antenna does not need dc bias lines. The measured overlapping impedance bandwidth (|S11| < -10 dB) of the two modes is 6.6% with a center frequency of 5.32 GHz. The measured radiation patterns agree well with simulated results. The antennas are incorporated in a 2 × 2 multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system to demonstrate the improvement in system capacity. In the real-time MIMO-OFDM channel measurement, it is shown that, compared to omnidirectional antennas, the pattern reconfigurable antennas can enhance the system capacity, with 17% improvement in a line-of-sight (LOS) scenario and 12% in a non-LOS (NLOS) scenario at a signal-to-noise ratio (SNR) of 10 dB.

Book ChapterDOI
26 Jun 2012
TL;DR: Knox, a privacy-preserving auditing mechanism for data stored in the cloud and shared among a large number of users in a group, utilizes group signatures to construct homomorphic authenticators, so that a third party auditor (TPA) is able to verify the integrity of shared data for users without retrieving the entire data.
Abstract: With cloud computing and storage services, data is not only stored in the cloud, but routinely shared among a large number of users in a group. It remains elusive, however, to design an efficient mechanism to audit the integrity of such shared data, while still preserving identity privacy. In this paper, we propose Knox, a privacy-preserving auditing mechanism for data stored in the cloud and shared among a large number of users in a group. In particular, we utilize group signatures to construct homomorphic authenticators, so that a third party auditor (TPA) is able to verify the integrity of shared data for users without retrieving the entire data. Meanwhile, the identity of the signer on each block in shared data is kept private from the TPA. With Knox, the amount of information used for verification, as well as the time it takes to audit with it, are not affected by the number of users in the group. In addition, Knox exploits homomorphic MACs to reduce the space used to store such verification information. Our experimental results show that Knox is able to efficiently audit the correctness of data shared among a large number of users.

Journal ArticleDOI
TL;DR: The performance comparisons of the proposed NR operator with a traditional ratio operator and a log-ratio operator indicate that the NR operator is superior to these traditional methods and produces better detection results.
Abstract: This letter presents a novel neighborhood-based ratio (NR) operator to produce a difference image for change detection in synthetic aperture radar (SAR) images. In order to reduce the negative influence of speckle noise on SAR images, the proposed NR operator produces a difference image by combining gray level information and spatial information of neighbor pixels. The performance comparisons of the proposed operator with a traditional ratio operator and a log-ratio operator indicate that the NR operator is superior to these traditional methods and produces better detection results.
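One plausible reading of a neighborhood-based ratio is sketched below: blend the pixelwise min/max ratio with its neighborhood average before mapping to a change magnitude. This is an illustration of the idea, not the paper's exact operator; theta and the window size are assumptions:

```python
def nr_difference(img1, img2, theta=0.5, k=1):
    """Sketch of a neighbourhood-based ratio difference image for two
    co-registered positive-valued SAR intensity images (list of lists).
    Output is in [0, 1]: 0 means unchanged."""
    rows, cols = len(img1), len(img1[0])

    def ratio(r, c):
        a, b = img1[r][c], img2[r][c]
        return min(a, b) / max(a, b)       # 1.0 means "no change"

    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            neigh = [ratio(rr, cc)
                     for rr in range(max(r - k, 0), min(r + k + 1, rows))
                     for cc in range(max(c - k, 0), min(c + k + 1, cols))
                     if (rr, cc) != (r, c)]
            m = sum(neigh) / len(neigh)
            # Blend pixel evidence with spatial evidence, then invert.
            row.append(1.0 - (theta * ratio(r, c) + (1 - theta) * m))
        out.append(row)
    return out
```

Averaging ratios over the neighborhood is what dampens isolated speckle spikes relative to a purely pixelwise ratio.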

Journal ArticleDOI
TL;DR: This paper proposes a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR, and adds a self-similarity constraint on the recovered image in patch aggregation to reveal new features and details.
Abstract: Recently, single image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. In this paper, we propose a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR. Firstly, a large number of high-resolution (HR) image patches are randomly extracted from a set of example training images and clustered into several groups of “geometric patches,” from which the corresponding “geometric dictionaries” are learned to further sparsely code each local patch in a low-resolution image. A clustering aggregation is performed on the HR patches recovered by different dictionaries, followed by a subsequent patch aggregation to estimate the HR image. Considering that there are often many repetitive image structures in an image, we add a self-similarity constraint on the recovered image in patch aggregation to reveal new features and details. Finally, the HR residual image is estimated by the proposed recovery method and compensated to better preserve the subtle details of the images. Experiments on natural images show that the proposed method outperforms its counterparts in both visual fidelity and numerical measures.

Journal ArticleDOI
TL;DR: The authors prove that the weights assigned to pixels in the target candidate region by BWH are proportional to those without background information, that is, BWH does not introduce any new information because the mean-shift iteration formula is invariant to the scale transformation of weights.
Abstract: The background-weighted histogram (BWH) algorithm proposed by Comaniciu et al. attempts to reduce the interference of background in target localisation in mean-shift tracking. However, the authors prove that the weights assigned to pixels in the target candidate region by BWH are proportional to those without background information, that is, BWH does not introduce any new information because the mean-shift iteration formula is invariant to the scale transformation of weights. Then a corrected BWH (CBWH) formula is proposed by transforming only the target model but not the target candidate model. The CBWH scheme can effectively reduce background's interference in target localisation. The experimental results show that CBWH can lead to faster convergence and more accurate localisation than the usual target representation in mean-shift tracking. Even if the target is not well initialised, the proposed algorithm can still robustly track the object, which is hard to achieve by the conventional target representation.
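The correction can be sketched compactly: down-weight only the target-model bins that are prominent in the background histogram, then re-normalize, leaving the candidate model untouched. Bin layout and function names here are illustrative:

```python
def cbwh_model(target_hist, background_hist):
    """Corrected background-weighted histogram: weight target-model bin u by
    v_u = min(o* / o_u, 1), where o* is the smallest non-zero background bin.
    Only the target model is transformed (the CBWH correction); applying the
    same weights to the candidate model too would cancel out, which is why
    the original BWH adds no new information."""
    nonzero = [o for o in background_hist if o > 0]
    o_star = min(nonzero) if nonzero else 0.0
    v = [min(o_star / o, 1.0) if o > 0 else 1.0 for o in background_hist]
    weighted = [q * w for q, w in zip(target_hist, v)]
    s = sum(weighted) or 1.0
    return [q / s for q in weighted]
```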

Book ChapterDOI
10 Sep 2012
TL;DR: This paper proposes a new secure outsourcing algorithm for (variable-exponent, variable-base) exponentiation modulo a prime in the two untrusted program model and proposes the first efficient outsource-secure algorithm for simultaneous modular exponentiations.
Abstract: Modular exponentiations have been considered the most expensive operation in discrete-logarithm based cryptographic protocols. In this paper, we propose a new secure outsourcing algorithm for exponentiation modulo a prime in the one-malicious model. Compared with the state-of-the-art algorithm [33], the proposed algorithm is superior in both efficiency and checkability. We then utilize this algorithm as a subroutine to achieve outsource-secure Cramer-Shoup encryptions and Schnorr signatures. In addition, we propose the first outsource-secure and efficient algorithm for simultaneous modular exponentiations. Moreover, we prove that both algorithms achieve the desired security notions.
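The flavor of outsourced exponentiation can be shown with a toy exponent-splitting sketch. This is not the paper's algorithm: it omits base blinding and the checkability that protects against a malicious helper, and q is assumed to be the group order (p - 1 for prime p and g coprime to p):

```python
import random

def server(base, exp, p):
    """Untrusted helper: just evaluates modular exponentiation."""
    return pow(base, exp, p)

def outsourced_exp(g, x, p, q):
    """Toy exponent splitting for u = g^x mod p: query the helper on two
    random shares x1 + x2 = x (mod q) and recombine locally, so neither
    query alone reveals x. Illustrative only; the paper's scheme also
    blinds the base and verifies the helper's answers."""
    x1 = random.randrange(q)
    x2 = (x - x1) % q
    return (server(g, x1, p) * server(g, x2, p)) % p
```

The recombination is correct because g^q = 1 (mod p), so g^(x1 + x2) = g^x whenever x1 + x2 = x (mod q).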

Journal ArticleDOI
Biao Li, Yingzeng Yin, Wei Hu, Yang Ding, Yang Zhao
TL;DR: In this paper, a coax-feed wideband dual-polarized patch antenna with low cross polarization and high port isolation is presented, which can be used as a base station antenna for PCS, UMTS, and WLAN/WiMAX applications.
Abstract: A coax-feed wideband dual-polarized patch antenna with low cross polarization and high port isolation is presented in this letter. The proposed antenna contains two pairs of T-shaped slots on the two bowtie-shaped patches separately. This structure changes the path of the current and keeps the cross polarization under -40 dB. By introducing two short pins, the isolation between the two ports remains more than 38 dB over the whole bandwidth, with a front-to-back ratio better than 19 dB. Moreover, the proposed antenna achieves a 10-dB return-loss bandwidth of 1.70-2.73 GHz and has a compact structure, making it easy to extend into an array that can be used as a base station antenna for PCS, UMTS, and WLAN/WiMAX applications.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive solution based on the elliptic integrals is proposed for solving large deflection problems in compliant mechanisms by explicitly incorporating the number of inflection points and the sign of the end-moment load in the derivation.
Abstract: The elliptic integral solution is often considered to be the most accurate method for analyzing large deflections of thin beams in compliant mechanisms. In this paper, a comprehensive solution based on the elliptic integrals is proposed for solving large deflection problems. By explicitly incorporating the number of inflection points and the sign of the end-moment load in the derivation, the comprehensive solution is capable of solving large deflections of thin beams with multiple inflection points and subject to any kinds of load cases. The comprehensive solution also extends the elliptic integral solutions to be suitable for any beam end angle. Deflected configurations of complex modes solved by the comprehensive solution are presented and discussed. The use of the comprehensive solution in analyzing compliant mechanisms is also demonstrated by examples.
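The numerical kernel such solutions rest on, the complete elliptic integral of the first kind, can be evaluated with the arithmetic-geometric mean. The parameter convention m = k^2 is assumed; the paper's comprehensive solution builds on incomplete integrals as well, which this sketch does not cover:

```python
import math

def ellip_k(m):
    """Complete elliptic integral of the first kind K(m), m = k^2 in [0, 1),
    via K(m) = pi / (2 * AGM(1, sqrt(1 - m)))."""
    a, b = 1.0, math.sqrt(1.0 - m)
    for _ in range(60):               # AGM converges quadratically
        if abs(a - b) < 1e-15:
            break
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)
```

In beam-deflection work these integrals appear when the curvature ODE of a thin elastic beam is integrated in closed form, with the load case fixing the modulus m.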

Journal ArticleDOI
TL;DR: A joint learning technique is applied to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace to overcome or at least to reduce the problem for NE-based SR reconstruction.
Abstract: The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this is not true for SR because of one-to-many mappings between LR and HR patches. To overcome or at least to reduce the problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k -nearest neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the K-nearest grouping patch pairs. In order to refine further the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.

Journal ArticleDOI
TL;DR: A new autofocus algorithm to exploit sparse aperture (SA) data for ISAR imagery is derived, along with an approach to determine the sparsity coefficient in the optimization by using constant-false-alarm-rate (CFAR) detection.
Abstract: Compressive sensing (CS) theory indicates that the optimal reconstruction of an unknown sparse signal can be achieved from limited noisy measurements by solving a sparsity-driven optimization problem. For inverse synthetic aperture radar (ISAR) imagery, the scattering field of the target is usually composed of only a limited number of strong scattering centers, representing strong spatial sparsity. This paper derives a new autofocus algorithm to exploit the sparse aperture (SA) data for ISAR imagery. A sparsity-driven optimization based on Bayesian compressive sensing (BCS) is developed. In addition, we also propose an approach to determine the sparsity coefficient in the optimization by using constant-false-alarm-rate (CFAR) detection. Solving the sparsity-driven optimization with a modified Quasi-Newton algorithm, the phase error is corrected by combining a two-step phase correction approach, and a well-focused image with effective noise suppression is obtained from SA data. Real data experiments show the validity of the proposed method.
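The generic sparsity-driven optimization at the core of CS recovery can be sketched with plain iterative soft-thresholding (ISTA). The paper couples Bayesian CS with phase-error estimation, which is omitted here; the matrix, step size, and lambda are illustrative:

```python
def ista(A, y, lam=0.05, step=0.1, iters=2000):
    """Iterative soft-thresholding for min 0.5 * ||y - A x||^2 + lam * ||x||_1.
    A is a list of rows; requires step < 1 / L with L the largest
    eigenvalue of A^T A for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - y, then gradient g = A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # Soft-threshold: shrink toward zero, producing exact zeros.
        x = [max(abs(xi) - step * lam, 0.0) * (1 if xi >= 0 else -1)
             for xi in x]
    return x
```

For the underdetermined system below, the l1 penalty picks the sparse solution (first coordinate only) out of the infinitely many exact fits.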

Journal ArticleDOI
TL;DR: The proposed sketch-photo synthesis method works at patch level and is composed of two steps: sparse neighbor selection (SNS) for an initial estimate of the pseudoimage (pseudosketch or pseudophoto) and sparse-representation-based enhancement (SRE) for further improving the quality of the synthesized image.
Abstract: Sketch-photo synthesis plays an important role in sketch-based face photo retrieval and photo-based face sketch retrieval systems. In this paper, we propose an automatic sketch-photo synthesis and retrieval algorithm based on sparse representation. The proposed sketch-photo synthesis method works at patch level and is composed of two steps: sparse neighbor selection (SNS) for an initial estimate of the pseudoimage (pseudosketch or pseudophoto) and sparse-representation-based enhancement (SRE) for further improving the quality of the synthesized image. SNS can find closely related neighbors adaptively and then generate an initial estimate for the pseudoimage. In SRE, a coupled sparse representation model is first constructed to learn the mapping between sketch patches and photo patches, and a patch-derivative-based sparse representation method is subsequently applied to enhance the quality of the synthesized photos and sketches. Finally, four retrieval modes, namely, sketch-based, photo-based, pseudosketch-based, and pseudophoto-based retrieval are proposed, and a retrieval algorithm is developed by using sparse representation. Extensive experimental results illustrate the effectiveness of the proposed face sketch-photo synthesis and retrieval algorithms.

Journal ArticleDOI
TL;DR: A hybrid genetic algorithm based on the fitness value and the concentration value is proposed and its convergence is proved; the results demonstrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
28 Dec 2012-PLOS ONE
TL;DR: The findings may provide new insights into the characterization of migraine as a condition affecting brain activity in intrinsic connectivity networks, and the abnormalities may be the consequence of a persistent central neural system dysfunction, reflecting cumulative brain insults due to frequent ongoing migraine attacks.
Abstract: Background: Previous studies have defined low-frequency, spatially consistent intrinsic connectivity networks (ICN) in resting functional magnetic resonance imaging (fMRI) data which reflect functional interactions among distinct brain areas. We sought to explore whether and how repeated migraine attacks influence intrinsic brain connectivity, as well as how activity in these networks correlates with clinical indicators of migraine.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm performs better than the further improved MOEA/D for almost all the CEC 2009 problems, and the results obtained are very competitive when comparing UMODE/D with some other algorithms on these multiobjective knapsack problems.

Journal ArticleDOI
TL;DR: Both simulated and real-data experiments show that the proposed MOCO approach is appropriate for highly precise imaging for UAV SAR equipped with only low-accuracy inertial navigation system.
Abstract: Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is an essential tool for modern remote sensing applications. Owing to its size and weight constraints, UAV is very sensitive to atmospheric turbulence that causes serious trajectory deviations. In this paper, a novel data-based motion compensation (MOCO) approach is proposed for the UAV SAR imagery. The approach is implemented by a three-step process: 1) The range-invariant motion error is estimated by the weighted phase gradient autofocus (WPGA), and the nonsystematic range cell migration function is calculated from the estimate for each subaperture SAR data; 2) the retrieval of the range-dependent phase error is executed by a local maximum-likelihood WPGA algorithm; and 3) the subaperture phase errors are coherently combined to perform the MOCO for the full-aperture data. Both simulated and real-data experiments show that the proposed approach is appropriate for highly precise imaging for UAV SAR equipped with only low-accuracy inertial navigation system.

Journal ArticleDOI
01 Jan 2012
TL;DR: A deadlock prevention method that makes a good tradeoff between optimality and computational tractability for a class of Petri nets, which can model many FMS.
Abstract: Deadlocks are an undesirable situation in automated flexible manufacturing systems (FMS). Their occurrences often deteriorate the utilization of resources and may lead to catastrophic results. Finding an optimal supervisor is NP-hard. A computationally efficient method often ends up with a suboptimal one. This paper develops a deadlock prevention method that makes a good tradeoff between optimality and computational tractability for a class of Petri nets, which can model many FMS. The theory of regions guides our efforts toward the development of near-optimal solutions for deadlock prevention. Given a plant net, a minimal initial marking is first decided by structural analysis, and an optimal live controlled system is computed. Then, a set of inequality constraints is derived with respect to the markings of monitors and the places in the model such that no siphon can be insufficiently marked. A method is proposed to identify the redundancy condition for constraints. For a new initial marking of the plant net, a deadlock-free controlled system can be obtained by regulating the markings of the monitors such that the inequality constraints are satisfied, without changing the structure of the controlled system. The near-optimal performance of a controlled net system via the proposed method is shown through several examples.

Journal ArticleDOI
TL;DR: This article considers the consensus problem of heterogeneous multi-agent system composed of first-order and second- order agents, in which the second-order integrator agents cannot obtain the velocity (second state) measurements for feedback.
Abstract: This article considers the consensus problem of heterogeneous multi-agent system composed of first-order and second-order agents, in which the second-order integrator agents cannot obtain the velocity (second state) measurements for feedback. Two different consensus protocols are proposed. First, we propose a consensus protocol and discuss the consensus problem of heterogeneous multi-agent system. By applying the graph theory and the Lyapunov direct method, some sufficient conditions for consensus are established when the communication topologies are undirected connected graphs and leader-following networks. Second, due to actuator saturation, we propose another consensus protocol with input constraint and obtain the consensus criterions for heterogeneous multi-agent system. Finally, some examples are presented to illustrate the effectiveness of the obtained criterions.
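The first-order building block of such protocols can be simulated in a few lines: each agent moves toward its neighbors, and on an undirected connected graph the states converge to the initial average. This is an Euler discretization with an assumed step size; the paper's protocols additionally handle second-order agents without velocity measurements and input saturation, which are not modeled here:

```python
def consensus_step(x, adj, eps=0.1):
    """One Euler step of the first-order consensus protocol
    x_i <- x_i + eps * sum over neighbors j of (x_j - x_i).
    Stable for eps < 1 / max_degree."""
    return [xi + eps * sum(x[j] - xi for j in adj[i])
            for i, xi in enumerate(x)]

def run_consensus(x0, adj, steps=500, eps=0.1):
    x = list(x0)
    for _ in range(steps):
        x = consensus_step(x, adj, eps)
    return x
```

On an undirected graph the pairwise terms cancel, so the average of the states is invariant; that is why the agreement value is the initial mean.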

Proceedings ArticleDOI
16 Jun 2012
TL;DR: A simple yet effective algorithm based upon the sparse representation of natural scene statistics (NSS) feature that outperforms representative BIQA algorithms and some full-reference metrics is introduced.
Abstract: Blind image quality assessment (BIQA) is an important yet difficult task in image processing related applications. Existing algorithms for universal BIQA learn a mapping from features of an image to the corresponding subjective quality or divide the image into different distortions before mapping. Although these algorithms are promising, they face the following problems: 1) they require a large number of samples (pairs of distorted image and its subjective quality) to train a robust mapping; 2) they are sensitive to different datasets; and 3) they have to be retrained when new training samples are available. In this paper, we introduce a simple yet effective algorithm based upon the sparse representation of natural scene statistics (NSS) feature. It consists of three key steps: extracting NSS features in the wavelet domain, representing features via sparse coding, and weighting differential mean opinion scores by the sparse coding coefficients to obtain the final visual quality values. Thorough experiments on standard databases show that the proposed algorithm outperforms representative BIQA algorithms and some full-reference metrics.
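The final weighting step (step 3 above: differential mean opinion scores weighted by the sparse coding coefficients) reduces to a normalized weighted average. A minimal sketch with illustrative names, using coefficient magnitudes as weights:

```python
def biqa_score(code, train_dmos):
    """Final quality estimate: DMOS values of the training images,
    weighted by the magnitudes of the test feature's sparse coding
    coefficients over the training dictionary."""
    s = sum(abs(c) for c in code) or 1.0
    return sum(abs(c) * d for c, d in zip(code, train_dmos)) / s
```

A sparse code concentrated on training images of similar quality thus yields a score close to their DMOS, with no retraining needed when new samples are added to the dictionary.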

Journal ArticleDOI
TL;DR: In this paper, the authors report on the design, fabrication, and measurement of a triple-band absorber enhanced from a planar two-dimensional artificial metamaterial transmission line (TL) concept.
Abstract: We report on the design, fabrication, and measurement of a triple-band absorber enhanced from a planar two-dimensional artificial metamaterial transmission line (TL) concept. Unlike previous multiband absorbers, this implementation incorporates fractal geometry into the artificial TL framework. As a consequence of the formed large LC values, the utilized element is compact in size, approaching λ0/15 at the lowest fundamental resonant frequency. For independent control and design, a theoretical characterization based on a circuit model analysis (TL theory) is performed and a set of design procedures is also derived. Both numerical and experimental results have validated three strong absorption peaks across the S, C, and X bands, respectively, which are attributable to a series of self-resonances induced in the specific localized area. The absorber features near-unity absorption for a wide range of incident angles and polarization states and a great degree of design flexibility by manipulating the LC values in a straightforward way.