
Showing papers in "IEEE Signal Processing Letters in 2015"


Journal ArticleDOI
TL;DR: In this article, the authors investigated power allocation in NOMA from a fairness perspective and developed low-complexity polynomial algorithms that yield the optimal solution in both cases considered.
Abstract: In non-orthogonal multiple access (NOMA) downlink, multiple data flows are superimposed in the power domain and user decoding is based on successive interference cancellation. NOMA’s performance highly depends on the power split among the data flows and the associated power allocation (PA) problem. In this letter, we study NOMA from a fairness standpoint and we investigate PA techniques that ensure fairness for the downlink users under i) instantaneous channel state information (CSI) at the transmitter, and ii) average CSI. Although the formulated problems are non-convex, we have developed low-complexity polynomial algorithms that yield the optimal solution in both cases considered.

667 citations
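The max-min fair power allocation described above can be illustrated in the simplest two-user case with instantaneous CSI: the weak user's rate falls and the strong user's rate rises as the strong user's power share grows, so the fair split equalizes the two rates. A minimal numpy sketch via bisection (two users only, illustrative channel model; the letter's algorithms handle the general case):

```python
import numpy as np

def maxmin_noma_power_split(P, g_weak, g_strong, tol=1e-9):
    """Bisection on the strong user's power fraction a for a 2-user
    NOMA downlink.  The weak user's rate decreases in a while the
    strong user's rate increases, so the max-min optimum is where
    the two rates meet."""
    def rates(a):
        # weak user treats the strong user's signal as noise
        r_weak = np.log2(1 + (1 - a) * P * g_weak / (a * P * g_weak + 1))
        # strong user cancels the weak user's signal via SIC
        r_strong = np.log2(1 + a * P * g_strong)
        return r_weak, r_strong
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        a = 0.5 * (lo + hi)
        r_w, r_s = rates(a)
        if r_s < r_w:
            lo = a   # give the strong user more power
        else:
            hi = a
    a = 0.5 * (lo + hi)
    return a, min(rates(a))

a_opt, r_fair = maxmin_noma_power_split(P=10.0, g_weak=0.2, g_strong=1.0)
```

At the returned split the two users' rates coincide, which is exactly the max-min point for this toy instance.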


Journal ArticleDOI
TL;DR: This work presents the application of a single DNN for both SR and LR using the 2013 Domain Adaptation Challenge speaker recognition (DAC13) and the NIST 2011 language recognition evaluation (LRE11) benchmarks, demonstrating large gains in performance.
Abstract: The impressive gains in performance obtained using deep neural networks (DNNs) for automatic speech recognition (ASR) have motivated the application of DNNs to other speech technologies such as speaker recognition (SR) and language recognition (LR). Prior work has shown performance gains for separate SR and LR tasks using DNNs for direct classification or for feature extraction. In this work we present the application of a single DNN for both SR and LR using the 2013 Domain Adaptation Challenge speaker recognition (DAC13) and the NIST 2011 language recognition evaluation (LRE11) benchmarks. Using a single DNN trained for ASR on Switchboard data we demonstrate large gains in performance in both benchmarks: a 55% reduction in EER for the DAC13 out-of-domain condition and a 48% reduction in ${C_{avg}}$ on the LRE11 30 s test condition. It is also shown that further gains are possible using score or feature fusion, leading to the possibility of a single i-vector extractor producing state-of-the-art SR and LR performance.

429 citations


Journal ArticleDOI
TL;DR: This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN), where a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principal axis.
Abstract: This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). Firstly, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principal axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from a typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principal axis. Our approach achieves state-of-the-art retrieval/classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin.

404 citations
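The rotation invariance claimed above comes from a simple fact: rotating a shape about its principal axis cyclically shifts the columns of its panoramic view, and a row-wise maximum is unaffected by such shifts. A toy numpy check of just that property (the actual CNN layers are omitted):

```python
import numpy as np

def row_wise_max_pool(feature_map):
    """Collapse each row of a convolutional feature map to its maximum.
    A rotation about the principal axis cyclically shifts the panorama's
    columns, which leaves each row's maximum unchanged."""
    return feature_map.max(axis=1)

rng = np.random.default_rng(0)
fmap = rng.random((8, 16))          # toy conv output: 8 rows x 16 columns
rotated = np.roll(fmap, 5, axis=1)  # column shift = rotation about the axis
pooled, pooled_rot = row_wise_max_pool(fmap), row_wise_max_pool(rotated)
```

The two pooled vectors are identical, which is why the layer is placed before the fully-connected layers.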


Journal ArticleDOI
TL;DR: It is shown here that the noise eigenspace of this matrix can be directly obtained from another matrix R̃ which is much easier to compute from data.
Abstract: Sparse arrays such as nested and coprime arrays use a technique called spatial smoothing in order to successfully perform MUSIC in the difference-coarray domain. In this paper it is shown that the spatial smoothing step is not necessary in the sense that the effect achieved by that step can be obtained more directly. In particular, with ${\widetilde {\bf R}_{ss}}$ denoting the spatial smoothed matrix with finite snapshots, it is shown here that the noise eigenspace of this matrix can be directly obtained from another matrix $\widetilde {\bf R}$ which is much easier to compute from data.

380 citations


Journal ArticleDOI
TL;DR: This work proposes a median filtering detection method based on convolutional neural networks (CNNs), which can automatically learn and obtain features directly from the image and achieves significant performance improvements, especially in the cut-and-paste forgery detection.
Abstract: Median filtering detection has recently drawn much attention in image editing and image anti-forensic techniques. Current image median filtering forensics algorithms mainly extract features manually. To deal with the challenge of detecting median filtering from small-size and compressed image blocks, taking into account the properties of median filtering, we propose a median filtering detection method based on convolutional neural networks (CNNs), which can automatically learn and obtain features directly from the image. To the best of our knowledge, this is the first work applying CNNs to median filtering image forensics. Unlike conventional CNN models, the first layer of our CNN framework is a filter layer that accepts an image as the input and outputs its median filtering residual (MFR). Then, via alternating convolutional layers and pooling layers to learn hierarchical representations, we obtain multiple features for further classification. We evaluate the proposed method in several experiments. The results show that the proposed method achieves significant performance improvements, especially in cut-and-paste forgery detection.

361 citations
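The fixed first layer described above computes the median filtering residual. A self-contained numpy sketch of that layer (3x3 window assumed; the letter's window size and the trainable CNN layers that follow are not reproduced):

```python
import numpy as np

def median_filter_3x3(img):
    """Naive 3x3 median filter with edge replication."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # nine shifted views of the image, one per window position
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def median_filtering_residual(img):
    """MFR: the difference between the median-filtered image and the
    image itself, fed to the convolutional layers that follow."""
    return median_filter_3x3(img) - img
```

On an already-smooth image the residual is near zero, while isolated impulses produce large residual values, which is the statistic the network learns from.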


Journal ArticleDOI
TL;DR: Validations based on four publicly available databases show that the proposed patch-based contrast quality index (PCQI) method provides accurate predictions on the human perception of contrast variations.
Abstract: Contrast is a fundamental attribute of images that plays an important role in human visual perception of image quality. With numerous approaches proposed to enhance image contrast, much less work has been dedicated to automatic quality assessment of contrast-changed images. Existing approaches rely on global statistics to estimate contrast quality. Here we propose a novel local patch-based objective quality assessment method using an adaptive representation of local patch structure, which allows us to decompose any image patch into its mean intensity, signal strength and signal structure components and then evaluate their perceptual distortions in different ways. A unique feature that differentiates the proposed method from previous contrast quality models is the capability to produce a local contrast quality map, which predicts local quality variations over space and may be employed to guide contrast enhancement algorithms. Validations based on four publicly available databases show that the proposed patch-based contrast quality index (PCQI) method provides accurate predictions on the human perception of contrast variations.

270 citations
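The adaptive patch representation above decomposes each patch into mean intensity, signal strength, and signal structure. A minimal numpy sketch of that decomposition (the perceptual comparison functions that form the actual index are omitted):

```python
import numpy as np

def decompose_patch(x):
    """Split a patch into mean intensity, signal strength (norm of the
    zero-mean part), and unit-norm signal structure, so that
    x = mean + strength * structure."""
    mu = float(x.mean())
    centered = x - mu
    strength = float(np.linalg.norm(centered))
    structure = centered / strength if strength > 0 else np.zeros_like(x)
    return mu, strength, structure
```

The three components can then be compared between reference and distorted patches independently, which is what allows a per-patch quality map.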


Journal ArticleDOI
TL;DR: A simple but effective method for no-reference quality assessment of contrast-distorted images based on the principle of natural scene statistics (NSS), which demonstrates promising performance on three publicly available databases.
Abstract: Contrast distortion is often a determining factor in human perception of image quality, but little investigation has been dedicated to quality assessment of contrast-distorted images without assuming the availability of a perfect-quality reference image. In this letter, we propose a simple but effective method for no-reference quality assessment of contrast-distorted images based on the principle of natural scene statistics (NSS). A large-scale image database is employed to build NSS models based on moment and entropy features. The quality of a contrast-distorted image is then evaluated based on its unnaturalness, characterized by the degree of deviation from the NSS models. Support vector regression (SVR) is employed to predict the human mean opinion score (MOS) from multiple NSS features as the input. Experiments based on three publicly available databases demonstrate the promising performance of the proposed method.

268 citations
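A sketch of the kind of global moment and entropy features the NSS models above are built on (the letter's exact feature set, the NSS model fitting, and the SVR stage are not reproduced; bin count is an illustrative choice):

```python
import numpy as np

def contrast_nss_features(img, bins=64):
    """Moment (mean, spread, skewness, kurtosis) and histogram-entropy
    features of the sort used to characterize contrast naturalness."""
    x = img.ravel().astype(float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma + 1e-12)
    skewness = (z ** 3).mean()
    kurtosis = (z ** 4).mean() - 3.0
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([mu, sigma, skewness, kurtosis, entropy])
```

A severely contrast-compressed image has low spread and low entropy, so its feature vector deviates from the NSS model fitted on natural images.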


Journal ArticleDOI
TL;DR: This letter studies the convergence of fixed-point MCC algorithms, an issue that has received little attention, and gives a sufficient condition that guarantees the convergence of a fixed-point MCC algorithm.
Abstract: The maximum correntropy criterion (MCC) has received increasing attention in signal processing and machine learning due to its robustness against outliers (or impulsive noises). Some gradient based adaptive filtering algorithms under MCC have been developed and available for practical use. The fixed-point algorithms under MCC are, however, seldom studied. In particular, too little attention has been paid to the convergence issue of the fixed-point MCC algorithms. In this letter, we will study this problem and give a sufficient condition to guarantee the convergence of a fixed-point MCC algorithm.

264 citations
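The fixed-point MCC iteration whose convergence the letter analyzes takes the form of a repeatedly re-weighted least-squares solve, with Gaussian weights that suppress samples with large residuals. A numpy sketch (the kernel bandwidth, iteration count, and linear model below are illustrative assumptions):

```python
import numpy as np

def fixed_point_mcc(X, y, sigma=1.0, n_iter=50):
    """Fixed-point iteration under the maximum correntropy criterion:
    each step solves a weighted least-squares problem whose Gaussian
    weights down-weight samples with large residuals (outliers)."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]        # LS initialization
    for _ in range(n_iter):
        e = y - X @ w
        lam = np.exp(-e ** 2 / (2 * sigma ** 2))    # correntropy weights
        XtLam = X.T * lam                           # X^T diag(lam)
        w = np.linalg.solve(XtLam @ X, XtLam @ y)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
y[:10] += 20.0                                      # impulsive outliers
w_hat = fixed_point_mcc(X, y, sigma=1.0)
```

The outliers receive weights of essentially zero after the first iteration, so the fixed point coincides with the least-squares fit on the clean samples.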


Journal ArticleDOI
TL;DR: In this article, a new feasible point pursuit successive convex approximation (FPP-SCA) algorithm is proposed for non-convex quadratically constrained quadratic programs (QCQPs), which adds slack variables to sustain feasibility and a penalty to ensure slacks are sparingly used.
Abstract: Quadratically constrained quadratic programs (QCQPs) have a wide range of applications in signal processing and wireless communications. Non-convex QCQPs are NP-hard in general. Existing approaches relax the non-convexity using semi-definite relaxation (SDR) or linearize the non-convex part and solve the resulting convex problem. However, these techniques are seldom successful in even obtaining a feasible solution when the QCQP matrices are indefinite. In this letter, a new feasible point pursuit successive convex approximation (FPP-SCA) algorithm is proposed for non-convex QCQPs. FPP-SCA linearizes the non-convex parts of the problem as conventional SCA does, but adds slack variables to sustain feasibility, and a penalty to ensure slacks are sparingly used. When FPP-SCA is successful in identifying a feasible point of the non-convex QCQP, convergence to a Karush-Kuhn-Tucker (KKT) point is thereafter ensured. Simulations show the effectiveness of our proposed algorithm in obtaining feasible and near-optimal solutions, significantly outperforming existing approaches.

191 citations
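Concretely, writing each indefinite constraint matrix as $\mathbf A_k=\mathbf A_k^{(+)}+\mathbf A_k^{(-)}$ with $\mathbf A_k^{(+)}\succeq \mathbf 0$ and $\mathbf A_k^{(-)}\preceq \mathbf 0$, the convex subproblem solved around the current point $\mathbf z$ at each SCA step can be sketched as (notation assumed for illustration, not taken verbatim from the letter; a convex objective is assumed, otherwise it is linearized the same way):

$$
\min_{\mathbf x,\,\mathbf s}\;\; \mathbf x^{H}\mathbf A_0\,\mathbf x+\lambda\sum_k s_k
\quad \text{s.t.}\;\;
\mathbf x^{H}\mathbf A_k^{(+)}\mathbf x+2\,\mathrm{Re}\{\mathbf z^{H}\mathbf A_k^{(-)}\mathbf x\}-\mathbf z^{H}\mathbf A_k^{(-)}\mathbf z\le b_k+s_k,\;\; s_k\ge 0\;\;\forall k.
$$

The linearization of the concave part is a global upper bound, so any $\mathbf x$ satisfying these constraints with $s_k=0$ is feasible for the original QCQP; the slacks $s_k$ keep every subproblem feasible, and the penalty weight $\lambda$ drives them toward zero.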


Journal ArticleDOI
TL;DR: The Chi-square detector and cosine similarity matching approach is found to be robust for detecting false data injection attacks as well as other attacks in the smart grids.
Abstract: The transformation of traditional energy networks to smart grids can assist in revolutionizing the energy industry in terms of reliability, performance and manageability. However, increased connectivity of power grid assets for bidirectional communications presents severe security vulnerabilities. In this letter, we investigate the Chi-square detector and cosine similarity matching approaches for attack detection in smart grids, where Kalman filter estimation is used to measure any deviation from actual measurements. The cosine similarity matching approach is found to be robust for detecting false data injection attacks as well as other attacks in the smart grids. Once an attack is detected, the system can alert the operator to take preventive action and limit the risk. Numerical results obtained from simulations corroborate our theoretical analysis.

186 citations
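A minimal numpy sketch of the two detectors applied to a Kalman innovation (the threshold values, the two-dimensional measurement, and the injected bias below are illustrative assumptions, not the letter's grid model):

```python
import numpy as np

def chi_square_detect(innovation, S, threshold):
    """Flag an attack when the normalized innovation nu^T S^-1 nu
    exceeds a chi-square threshold (dof = measurement dimension)."""
    stat = float(innovation @ np.linalg.solve(S, innovation))
    return stat > threshold, stat

def cosine_similarity_detect(z, z_hat, threshold):
    """Flag an attack when the measurement vector and the Kalman
    prediction point in sufficiently different directions."""
    cos = float(z @ z_hat / (np.linalg.norm(z) * np.linalg.norm(z_hat)))
    return cos < threshold, cos

S = np.eye(2)                                  # innovation covariance (toy)
clean = np.array([0.1, -0.2])                  # nominal innovation
attacked = clean + np.array([4.0, 4.0])        # injected false data
```

With 2 degrees of freedom a 5% chi-square threshold is about 5.99; the clean innovation passes while the attacked one is flagged.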


Journal ArticleDOI
TL;DR: This letter considers the coherent integration problem for a maneuvering target, involving range migration (RM) and Doppler frequency migration (DFM) within one coherent pulse interval, and proposes a new coherent integration method, known as Radon-Lv's distribution (RLVD), which can not only eliminate the RM effect via jointly searching in the target's motion parameters space, but also remove the DFM.
Abstract: This letter considers the coherent integration problem for a maneuvering target, involving range migration (RM) and Doppler frequency migration (DFM) within one coherent pulse interval. A new coherent integration method, known as Radon-Lv’s distribution (RLVD), is proposed. It can not only eliminate the RM effect via jointly searching in the target’s motion parameters space, but also remove the DFM and achieve the coherent integration via Lv’s distribution (LVD). Finally, several simulations are provided to demonstrate the effectiveness. The results show that for detection ability, the proposed method is superior to the moving target detection (MTD), Radon-Fourier transform (RFT), and Radon-fractional Fourier transform (RFRFT) under low signal-to-noise-ratio (SNR) environment.

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed algorithm outperforms the well-known approximate message passing (AMP) algorithm when a partial DFT sensing matrix is involved.
Abstract: In this letter, we propose a turbo compressed sensing algorithm with partial discrete Fourier transform (DFT) sensing matrices. Interestingly, the state evolution of the proposed algorithm is shown to be consistent with that derived using the replica method. Numerical results demonstrate that the proposed algorithm outperforms the well-known approximate message passing (AMP) algorithm when a partial DFT sensing matrix is involved.

Journal ArticleDOI
TL;DR: The results show that the fusion method improves the quality of the output image visually and outperforms the previous DCT-based techniques and the state-of-the-art methods in terms of the objective evaluation.
Abstract: Multi-focus image fusion in wireless visual sensor networks (WVSN) is a process of fusing two or more images to obtain a new one which contains a more accurate description of the scene than any of the individual source images. In this letter, we propose an efficient algorithm to fuse multi-focus images or videos using discrete cosine transform (DCT) based standards in WVSN. The spatial frequencies of the corresponding blocks from the source images are calculated as the contrast criteria, and the blocks with the larger spatial frequencies compose the DCT presentation of the output image. Experiments on numerous pairs of multi-focus images coded in the Joint Photographic Experts Group (JPEG) standard are conducted to evaluate the fusion performance. The results show that our fusion method improves the quality of the output image visually and outperforms the previous DCT-based techniques and the state-of-the-art methods in terms of the objective evaluation.
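The block-selection rule above can be sketched directly: compute each block's spatial frequency and keep the block with the larger value (a pixel-domain illustration; the letter operates on the DCT coefficients of JPEG-coded 8x8 blocks):

```python
import numpy as np

def spatial_frequency(block):
    """Row/column gradient energy used as the focus/contrast criterion."""
    rf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_blocks(block_a, block_b):
    """Keep the block with the larger spatial frequency, i.e. the
    sharper, in-focus one."""
    if spatial_frequency(block_a) >= spatial_frequency(block_b):
        return block_a
    return block_b
```

An in-focus block has strong local gradients and hence a high spatial frequency, so it wins over its defocused counterpart.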

Journal ArticleDOI
TL;DR: The evaluation results show that the visual quality can be preserved after a considerable amount of message bits have been embedded into the contrast-enhanced images, even better than three specific MATLAB functions used for image contrast enhancement.
Abstract: In this letter, a novel reversible data hiding (RDH) algorithm is proposed for digital images. Instead of trying to keep the PSNR value high, the proposed algorithm enhances the contrast of a host image to improve its visual quality. The highest two bins in the histogram are selected for data embedding so that histogram equalization can be performed by repeating the process. The side information is embedded along with the message bits into the host image so that the original image is completely recoverable. The proposed algorithm was implemented on two sets of images to demonstrate its efficiency. To the best of our knowledge, it is the first algorithm that achieves image contrast enhancement by RDH. Furthermore, the evaluation results show that the visual quality can be preserved after a considerable amount of message bits have been embedded into the contrast-enhanced images, even outperforming three MATLAB functions used for image contrast enhancement.

Journal ArticleDOI
TL;DR: The reported results show that resting-state functional brain network topology provides better classification performance than using only a measure of functional connectivity, and may represent an optimal solution for the design of next generation EEG based biometric systems.
Abstract: Recently, there has been a growing interest in the use of brain activity for biometric systems. However, so far these studies have focused mainly on basic features of the electroencephalogram (EEG). In this study we propose an approach based on phase synchronization to investigate personal distinctive brain network organization. To this end, the importance, in terms of centrality, of different regions was determined on the basis of EEG recordings. We hypothesized that nodal centrality enables the accurate identification of individuals. EEG signals from a cohort of 109 subjects with 64-channel recordings were band-pass filtered in the classical frequency bands, and functional connectivity between the sensors was estimated using the Phase Lag Index. The resulting connectivity matrix was used to construct a weighted network, from which the nodal Eigenvector Centrality was computed. Nodal centrality was then used as the feature vector. The highest recognition rates were observed in the gamma band (equal error rate, ${\rm EER} = 0.044$) and high beta band (${\rm EER} = 0.102$). A slightly lower recognition rate was observed in the low beta band (${\rm EER} = 0.144$), while poor recognition rates were observed for the other frequency bands. The reported results show that resting-state functional brain network topology provides better classification performance than using only a measure of functional connectivity, and may represent an optimal solution for the design of next generation EEG based biometric systems. This study also suggests that results from biometric systems based on high-frequency scalp EEG features should be interpreted with caution.
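The nodal feature described above is the eigenvector centrality of the weighted connectivity network. A compact power-iteration sketch on a toy symmetric weight matrix (EEG preprocessing and PLI estimation are omitted):

```python
import numpy as np

def eigenvector_centrality(W, n_iter=200):
    """Power iteration on a symmetric nonnegative weight matrix (e.g.
    PLI connectivity): the entries of the dominant eigenvector are the
    nodal centralities used as the feature vector."""
    v = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(n_iter):
        v = W @ v
        v /= np.linalg.norm(v)
    return v
```

A node with stronger connections accumulates a larger entry in the dominant eigenvector, which is what makes the per-subject centrality profile discriminative.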

Journal ArticleDOI
TL;DR: Simulations for a wireless sensor network illustrate the advantages of the proposed scheme and algorithm in terms of convergence rate and mean square error performance.
Abstract: This letter proposes a novel distributed compressed estimation scheme for sparse signals and systems based on compressive sensing techniques. The proposed scheme consists of compression and decompression modules inspired by compressive sensing to perform distributed compressed estimation. A design procedure is also presented and an algorithm is developed to optimize measurement matrices, which can further improve the performance of the proposed distributed compressed estimation scheme. Simulations for a wireless sensor network illustrate the advantages of the proposed scheme and algorithm in terms of convergence rate and mean square error performance.

Journal ArticleDOI
TL;DR: The low complexity transceiver structure of the MIMO-OFDM-IM scheme is developed and it is shown via computer simulations that the proposed MIMO-OFDM-IM scheme achieves significantly better error performance than classical MIMO-OFDM for several different system configurations.
Abstract: Orthogonal frequency division multiplexing with index modulation (OFDM-IM) is a novel multicarrier transmission technique which has been proposed as an alternative to classical OFDM. The main idea of OFDM-IM is the use of the indices of the active subcarriers in an OFDM system as an additional source of information. In this work, we propose multiple-input multiple-output OFDM-IM (MIMO-OFDM-IM) scheme by combining OFDM-IM and MIMO transmission techniques. The low complexity transceiver structure of the MIMO-OFDM-IM scheme is developed and it is shown via computer simulations that the proposed MIMO-OFDM-IM scheme achieves significantly better error performance than classical MIMO-OFDM for several different system configurations.
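The index-modulation idea above, conveying extra bits through which subcarriers are active, can be sketched for one subblock: with $n=4$ subcarriers and $k=2$ active, $\lfloor \log_2 \binom{4}{2}\rfloor = 2$ index bits select the activation pattern. The lookup-table mapping below is an illustrative choice, not the letter's mapper:

```python
from itertools import combinations
import numpy as np

def ofdm_im_map(index_bits, symbols, n=4, k=2):
    """Toy OFDM-IM subblock: the index bits choose which k of n
    subcarriers are active; the active ones carry the given M-ary
    symbols and the rest are left at zero."""
    patterns = list(combinations(range(n), k))      # lexicographic table
    idx = int(''.join(map(str, index_bits)), 2)     # bits -> table row
    sub = np.zeros(n, dtype=complex)
    for pos, s in zip(patterns[idx], symbols):
        sub[pos] = s
    return sub
```

The receiver detects both the active-index pattern and the symbols on it, which is the extra information dimension OFDM-IM exploits.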

Journal ArticleDOI
TL;DR: This work shows that using the STFT leads to improved performance over recovery from the oversampled Fourier magnitude with the same number of measurements, and suggests an efficient algorithm for recovery of a sparse input from theSTFT magnitude.
Abstract: We consider the classical 1D phase retrieval problem. In order to overcome the difficulties associated with phase retrieval from measurements of the Fourier magnitude, we treat recovery from the magnitude of the short-time Fourier transform (STFT). We first show that the redundancy offered by the STFT enables unique recovery for arbitrary nonvanishing inputs, under mild conditions. An efficient algorithm for recovery of a sparse input from the STFT magnitude is then suggested, based on an adaptation of the recently proposed GESPAR algorithm. We demonstrate through simulations that using the STFT leads to improved performance over recovery from the oversampled Fourier magnitude with the same number of measurements.

Journal ArticleDOI
TL;DR: This letter considers the motion parameters estimation problem for a maneuvering target with arbitrary parameterized motion and a fast estimation method based on the adjacent cross correlation function (ACCF) is proposed, where the iterative adjacent cross correlation operation is employed to remove the range migration and reduce the order of Doppler frequency migration.
Abstract: This letter considers the motion parameters estimation problem for a maneuvering target with arbitrary parameterized motion. The slant range of the target is modeled as a polynomial function in terms of its multiple motion parameters and a fast estimation method based on adjacent cross correlation function (ACCF) is proposed, where the iterative adjacent cross correlation operation is employed to remove the range migration and reduce the order of Doppler frequency migration. Then the motion parameters are estimated via Fourier transform. Compared with the generalized Radon Fourier transform (GRFT), the proposed method can estimate the parameters without searching procedure and acquire close estimation performance at high signal-to-noise ratio (SNR) with a much lower computational cost. Finally, simulations are provided to demonstrate the effectiveness.

Journal ArticleDOI
TL;DR: A new and efficient image feature descriptor based on the local diagonal extrema pattern (LDEP) is proposed for CT image retrieval; it speeds up image retrieval and also alleviates the “curse of dimensionality”.
Abstract: Medical image retrieval plays an important role in medical diagnosis, where a physician can retrieve the images most similar to a query image of a particular patient from a set of template images. In this letter, a new and efficient image feature descriptor based on the local diagonal extrema pattern (LDEP) is proposed for CT image retrieval. The proposed approach finds the values and indexes of the local diagonal extrema to exploit the relationship among the diagonal neighbors of any center pixel of the image using first-order local diagonal derivatives. The intensity values of the local diagonal extrema are compared with the intensity value of the center pixel to utilize the relationship of the center pixel with its neighbors. Finally, the descriptor is formed on the basis of the indexes and the comparisons of the center pixel and the local diagonal extrema. The consideration of only diagonal neighbors greatly reduces the dimension of the feature vector, which speeds up image retrieval and also alleviates the “curse of dimensionality”. The LDEP is tested for CT image retrieval over the Emphysema-CT and NEMA-CT databases and compared with existing approaches. The experiments confirm the superior accuracy and speed of the proposed method.
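A toy rendering of the descriptor's core step, finding the diagonal extrema of a 3x3 neighbourhood and comparing them with the centre (the full LDEP histogram construction is omitted, and this encoding is a simplification of the letter's):

```python
import numpy as np

def ldep_code(patch):
    """Simplified LDEP-style code for the centre of a 3x3 patch: the
    indexes of the diagonal maximum and minimum, plus sign comparisons
    of each extremum against the centre pixel."""
    c = patch[1, 1]
    diag = np.array([patch[0, 0], patch[0, 2], patch[2, 0], patch[2, 2]])
    i_max, i_min = int(diag.argmax()), int(diag.argmin())
    return (i_max, i_min, int(diag[i_max] > c), int(diag[i_min] > c))
```

Using only the four diagonal neighbours is what keeps the final feature vector short compared with full 8-neighbour patterns.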

Journal ArticleDOI
TL;DR: This paper proposes a new penalty based on a smooth approximation to the ${\ell _1}/{\ell _2}$ function and develops a proximal-based algorithm to solve variational problems involving this function and derives theoretical convergence results.
Abstract: The ${\ell _1}/{\ell _2}$ ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property highly desirable in the blind context. However, the ${\ell _1}/{\ell _2}$ function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the ${\ell _1}/{\ell _2}$ function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact ${\ell _1}/{\ell _2}$ term, on an application to seismic data blind deconvolution.
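One standard way to smooth the ratio, consistent with the description above (the letter's exact parameterization may differ), replaces both norms by smooth surrogates:

$$
\ell_{1,\alpha}(\mathbf x)=\sum_i\Big(\sqrt{x_i^2+\alpha^2}-\alpha\Big),
\qquad
\ell_{2,\eta}(\mathbf x)=\sqrt{\|\mathbf x\|_2^2+\eta^2},
$$

so that a penalty such as $\log\big(\ell_{1,\alpha}(\mathbf x)+\beta\big)-\log\big(\ell_{2,\eta}(\mathbf x)\big)$ is smooth everywhere (including at $\mathbf x=\mathbf 0$) and recovers the behaviour of $\log\big({\ell_1}/{\ell_2}\big)$ as $\alpha,\beta,\eta\to 0$, retaining the scale-invariance rationale in the limit while making proximal-based minimization tractable.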

Journal ArticleDOI
TL;DR: This letter proposes the use of a non-convex regularizer constrained so that the total objective function to be minimized maintains its convexity.
Abstract: Total variation (TV) denoising is an effective noise suppression method when the derivative of the underlying signal is known to be sparse. TV denoising is defined in terms of a convex optimization problem involving a quadratic data fidelity term and a convex regularization term. A non-convex regularizer can promote sparsity more strongly, but generally leads to a non-convex optimization problem with non-optimal local minima. This letter proposes the use of a non-convex regularizer constrained so that the total objective function to be minimized maintains its convexity. Conditions for a non-convex regularizer are given that ensure the total TV denoising objective function is convex. An efficient algorithm is given for the resulting problem.
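The convexity-constrained idea can be made concrete. With a parameterized non-convex penalty $\phi(\cdot\,;a)$ satisfying $\phi''(x;a)\ge -a$ (for instance $\phi(x;a)=\tfrac{1}{a}\log(1+a|x|)$), consider the TV-type objective

$$
F(\mathbf x)=\tfrac12\|\mathbf y-\mathbf x\|_2^2+\lambda\sum_n \phi\big([\mathbf D\mathbf x]_n;a\big).
$$

The data term contributes Hessian $\mathbf I$ and the penalty's Hessian is bounded below by $-\lambda a\,\mathbf D^{\mathsf T}\mathbf D$, so $F$ stays convex whenever $\mathbf I-\lambda a\,\mathbf D^{\mathsf T}\mathbf D\succeq \mathbf 0$; for the first-order difference operator, $\|\mathbf D\|^2\le 4$, giving the condition $a\le 1/(4\lambda)$. This is a sketch of the type of condition involved; the letter's precise statement may differ.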

Journal ArticleDOI
TL;DR: A robust discriminative segmentation method from the view of information theoretic learning is proposed to simultaneously select the informative feature and to reduce the uncertainties of supervoxel assignment for discriminative brain tissue segmentation.
Abstract: Automatic segmentation of brain tissues from MRI is of great importance for clinical application and scientific research. Recent advancements in supervoxel-level analysis enable robust segmentation of brain tissues by exploring the inherent information among multiple features extracted on the supervoxels. Within this prevalent framework, the difficulties still remain in clustering uncertainties imposed by the heterogeneity of tissues and the redundancy of the MRI features. To cope with the aforementioned two challenges, we propose a robust discriminative segmentation method from the view of information theoretic learning. The prominent goal of the method is to simultaneously select the informative feature and to reduce the uncertainties of supervoxel assignment for discriminative brain tissue segmentation. Experiments on two brain MRI datasets verified the effectiveness and efficiency of the proposed approach.

Journal ArticleDOI
TL;DR: A closed-form expression for the secrecy outage probability (SOP) is derived, and two relay and jammer selection methods for SOP minimization are developed.
Abstract: Secure relay and jammer selection for physical-layer security is studied in a wireless network with multiple intermediate nodes and eavesdroppers, where each intermediate node either helps to forward messages as a relay, or broadcasts noise as a jammer. We derive a closed-form expression for the secrecy outage probability (SOP), and we develop two relay and jammer selection methods for SOP minimization. In both methods a selection vector and a corresponding threshold are designed and broadcast by the destination to ensure each intermediate node knows its own role, while knowledge of the relay and jammer sets is kept secret from all eavesdroppers. Simulation results show that the SOPs of the proposed methods are very close to that obtained by an exhaustive search, and that maintaining the privacy of the selection result greatly improves the SOP performance.

Journal ArticleDOI
TL;DR: This paper formulates wavelet-TV (WATV) denoising as a unified problem and uses non-convex penalty functions to strongly induce wavelet sparsity.
Abstract: Algorithms for signal denoising that combine wavelet-domain sparsity and total variation (TV) regularization are relatively free of artifacts, such as pseudo-Gibbs oscillations, normally introduced by pure wavelet thresholding. This paper formulates wavelet-TV (WATV) denoising as a unified problem. To strongly induce wavelet sparsity, the proposed approach uses non-convex penalty functions. At the same time, in order to draw on the advantages of convex optimization (unique minimum, reliable algorithms, simplified regularization parameter selection), the non-convex penalties are chosen so as to ensure the convexity of the total objective function. A computationally efficient, fast converging algorithm is derived.

Journal ArticleDOI
TL;DR: This letter presents a framework which generalizes the traditional Chan-Vese algorithm and represents the illumination of the regions of interest in a lower dimensional subspace using a set of pre-specified basis functions, which makes it possible to accommodate heterogeneous objects, even in the presence of noise.
Abstract: We propose a novel region based segmentation method capable of segmenting objects in the presence of significant intensity variation. Current solutions use some form of local processing to tackle intra-region inhomogeneity, which makes such methods susceptible to local minima. In this letter, we present a framework which generalizes the traditional Chan-Vese algorithm. In contrast to existing local techniques, we represent the illumination of the regions of interest in a lower dimensional subspace using a set of pre-specified basis functions. This representation enables us to accommodate heterogeneous objects, even in the presence of noise. We compare our results with three state-of-the-art techniques on a dataset focusing on biological/biomedical images with tubular or filamentous structures. Quantitatively, we achieve a 44% increase in performance, which demonstrates the efficacy of the method.

Journal ArticleDOI
TL;DR: In the simulations the proposed methods achieve better accuracy than the alternative methods, the computational complexity of the filter being roughly 5 to 10 times that of the Kalman filter.
Abstract: Filtering and smoothing algorithms for linear discrete-time state-space models with skewed and heavy-tailed measurement noise are presented. The algorithms use a variational Bayes approximation of the posterior distribution of models that have a normal prior and skew-$t$-distributed measurement noise. The proposed filter and smoother are compared with conventional low-complexity alternatives in a simulated pseudorange positioning scenario. In the simulations the proposed methods achieve better accuracy than the alternative methods, the computational complexity of the filter being roughly 5 to 10 times that of the Kalman filter.

Journal ArticleDOI
TL;DR: In this article, a harvest-and-forward relay with multiple antennas is considered, where the relay harvests energy and obtains information from the source's radio-frequency signals by jointly using the antenna selection and power splitting techniques.
Abstract: The simultaneous wireless transfer of information and power with the help of a relay equipped with multiple antennas is considered in this letter, where a “harvest-and-forward” strategy is proposed. In particular, the relay harvests energy and obtains information from the source's radio-frequency signals by jointly using the antenna selection (AS) and power splitting (PS) techniques, and then the processed information is amplified and forwarded to the destination relying on the harvested energy. This letter jointly optimizes AS and PS to maximize the achievable rate for the proposed strategy. Since the joint optimization problem is non-convex, a two-stage procedure is proposed to determine the optimal ratio of received signal power split for energy harvesting, and the optimized antenna set engaged in information forwarding. Simulation results confirm the accuracy of the two-stage procedure, and demonstrate that the proposed “harvest-and-forward” strategy outperforms the conventional amplify-and-forward (AF) relaying and the direct transmission.
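The power-splitting trade-off above can be illustrated with a one-antenna toy search: a fraction $\rho$ of the received power is harvested to fuel the relay's retransmission, and $1-\rho$ carries the information, so one branch's SNR falls while the other rises. All gains, the harvesting efficiency, and the AF-style rate expression below are illustrative placeholders, not the letter's system model:

```python
import numpy as np

def best_power_split(snr_sr, snr_rd_per_unit_energy, eta=0.5, grid=1001):
    """Grid search over the power-splitting ratio rho: the achievable
    rate is bottlenecked by the weaker of the information branch and
    the energy-limited forwarding branch."""
    best_rho, best_rate = 0.0, -np.inf
    for rho in np.linspace(0.0, 1.0, grid):
        snr_info = (1 - rho) * snr_sr                       # info branch at relay
        snr_fwd = eta * rho * snr_sr * snr_rd_per_unit_energy
        rate = 0.5 * np.log2(1 + min(snr_info, snr_fwd))    # two-hop bottleneck
        if rate > best_rate:
            best_rho, best_rate = rho, rate
    return best_rho, best_rate

rho_opt, rate_opt = best_power_split(snr_sr=8.0, snr_rd_per_unit_energy=2.0)
```

Because one SNR decreases and the other increases in $\rho$, the optimum sits where the two branches balance, here at $\rho=0.5$ for the chosen toy numbers.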

Journal ArticleDOI
Min Zhang, Chisako Muramatsu, Xiangrong Zhou, Takeshi Hara, Hiroshi Fujita
TL;DR: The experimental results for two representative databases show that the proposed method is strongly correlated to subjective quality evaluations and competitive to the state-of-the-art NR-IQA methods.
Abstract: Multimedia, including audio, image, and video, etc., is a ubiquitous part of modern life. Quality evaluation, both objective and subjective, is of fundamental importance for various multimedia applications. In this letter, a novel quality-aware feature is proposed for blind/no-reference (NR) image quality assessment (IQA). The new quality-aware feature is generated from the proposed joint generalized local binary pattern (GLBP) statistics. In this method, using the Laplacian of Gaussian (LOG) filters, the images are first decomposed into multi-scale subband images. Then, the subband images are encoded with the proposed GLBP operator and the quality-aware features are formed from the joint GLBP histograms from the encoding maps of each subband image. Finally, using support vector regression (SVR), the quality-aware features are mapped to the image's subjective quality score for NR-IQA. The experimental results for two representative databases show that the proposed method is strongly correlated to subjective quality evaluations and competitive to the state-of-the-art NR-IQA methods.
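The encoding step above builds on local binary patterns. A standard 8-neighbour LBP code for a single pixel is shown below as the baseline; the letter's GLBP operator and the LOG subband decomposition generalize this idea:

```python
import numpy as np

def lbp_code(patch):
    """Plain 8-neighbour LBP code for the centre of a 3x3 patch: each
    neighbour contributes one bit, set when it is >= the centre."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]   # clockwise from top-left
    return sum(int(patch[i, j] >= c) << b for b, (i, j) in enumerate(order))
```

Histograms of such codes over each subband image form the quality-aware feature vector that the SVR maps to a quality score.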

Journal ArticleDOI
TL;DR: A learning to rank based framework for assessing face image quality is proposed, and experimental results demonstrate its effectiveness in improving the robustness of face detection and recognition.
Abstract: Face image quality is an important factor affecting the accuracy of automatic face recognition. It is usually possible for practical recognition systems to capture multiple face images from each subject. Selecting face images with high quality for recognition is a promising strategy for improving the system performance. We propose a learning to rank based framework for assessing the face image quality. The proposed method is simple and can adapt to different recognition methods. Experimental results demonstrate its effectiveness in improving the robustness of face detection and recognition.