
Showing papers in "IEEE Signal Processing Letters in 2016"


Journal ArticleDOI
TL;DR: A deep cascaded multitask framework that exploits the inherent correlation between face detection and alignment to boost their performance, leveraging a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark locations in a coarse-to-fine manner.
Abstract: Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark locations in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over state-of-the-art techniques on the challenging Face Detection Data Set and Benchmark (FDDB) and WIDER FACE benchmarks for face detection, and the Annotated Facial Landmarks in the Wild (AFLW) benchmark for face alignment, while keeping real-time performance.

3,980 citations


Journal ArticleDOI
TL;DR: A recently emerged signal decomposition model known as convolutional sparse representation (CSR) is introduced into image fusion to address this problem, motivated by the observation that the CSR model can effectively overcome the above two drawbacks.
Abstract: As a popular signal modeling technique, sparse representation (SR) has achieved great success in image fusion over the last few years, with a number of effective algorithms being proposed. However, due to the patch-based manner of sparse coding, most existing SR-based fusion methods suffer from two drawbacks, namely, limited ability in detail preservation and high sensitivity to misregistration, both of which are of great concern in image fusion. In this letter, we introduce a recently emerged signal decomposition model known as convolutional sparse representation (CSR) into image fusion to address this problem, motivated by the observation that the CSR model can effectively overcome the above two drawbacks. We propose a CSR-based image fusion framework, in which each source image is decomposed into a base layer and a detail layer, for multifocus image fusion and multimodal image fusion. Experimental results demonstrate that the proposed fusion methods clearly outperform the SR-based methods in terms of both objective assessment and visual quality.

615 citations


Journal ArticleDOI
TL;DR: Although it learns from only one type of noise residual, the proposed CNN is competitive in terms of detection performance compared with the SRM with ensemble classifiers on the BOSSbase for detecting S-UNIWARD and HILL.
Abstract: Recent studies have indicated that the architectures of convolutional neural networks (CNNs) tailored for computer vision may not be best suited to image steganalysis. In this letter, we report a CNN architecture that takes into account knowledge of steganalysis. In the detailed architecture, we take absolute values of elements in the feature maps generated from the first convolutional layer to facilitate and improve statistical modeling in the subsequent layers; to prevent overfitting, we constrain the range of data values with the saturation regions of the hyperbolic tangent (TanH) at early stages of the networks and reduce the strength of modeling using 1×1 convolutions in deeper layers. Although it learns from only one type of noise residual, the proposed CNN is competitive in terms of detection performance compared with the SRM with ensemble classifiers on BOSSbase for detecting S-UNIWARD and HILL. These results imply that well-designed CNNs have the potential to provide better detection performance in the future.

506 citations


Journal ArticleDOI
TL;DR: A new method, termed dispersion entropy (DE), is introduced, to quantify the regularity of time series and gain insight into the dependency of DE on several straightforward signal-processing concepts via a set of synthetic time series.
Abstract: One of the most powerful tools to assess the dynamical characteristics of time series is entropy. Sample entropy (SE), though powerful, is not fast enough, especially for long signals. Permutation entropy (PE), as a broadly used irregularity indicator, considers only the order of the amplitude values, and hence some information regarding the amplitudes may be discarded. To tackle these problems, we introduce a new method, termed dispersion entropy (DE), to quantify the regularity of time series. We gain insight into the dependency of DE on several straightforward signal-processing concepts via a set of synthetic time series. The results show that DE, unlike PE, can detect noise bandwidth and simultaneous changes in frequency and amplitude. We also apply DE to three publicly available real datasets. The simulations on real-valued signals show that the DE method considerably outperforms PE in discriminating different groups of each dataset. In addition, the computation time of DE is significantly less than that of SE and PE.

429 citations
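The DE recipe sketched in the abstract — map samples to a few classes through the normal CDF, count embedding patterns, and take the Shannon entropy of their frequencies — can be illustrated in a few lines. This is a minimal sketch, not the authors' reference implementation; the rounding rule and the defaults m=2, c=3, d=1 are assumptions here.

```python
import math
from collections import Counter

def dispersion_entropy(x, m=2, c=3, d=1):
    """Normalized dispersion entropy of a 1-D sequence.
    m: embedding dimension, c: number of classes, d: time delay."""
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    # 1) map samples to (0, 1) with the normal CDF, then to classes 1..c
    y = [0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0)))) for v in x]
    z = [min(c, max(1, math.ceil(c * v))) for v in y]
    # 2) count dispersion patterns of length m
    patterns = Counter(tuple(z[i + j * d] for j in range(m))
                       for i in range(n - (m - 1) * d))
    total = sum(patterns.values())
    # 3) Shannon entropy of the pattern distribution, normalized by ln(c^m)
    h = -sum((k / total) * math.log(k / total) for k in patterns.values())
    return h / math.log(c ** m)
```

As the abstract suggests, a broadband noisy signal should score higher than a slowly varying tone, since its class patterns are spread far more evenly.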


Journal ArticleDOI
TL;DR: A measure to evaluate the reliability of a depth map, used to reduce the influence of a poor depth map on saliency detection; two saliency maps are integrated into a final saliency map through a weighted-sum method according to their importance.
Abstract: Stereoscopic perception is an important part of the human visual system that allows the brain to perceive depth. However, depth information has not been well explored in existing saliency detection models. In this letter, a novel saliency detection method for stereoscopic images is proposed. First, we propose a measure to evaluate the reliability of the depth map, and use it to reduce the influence of a poor depth map on saliency detection. Then, the input image is represented as a graph, and the depth information is introduced into graph construction. After that, a new definition of compactness using color and depth cues is put forward to compute the compactness saliency map. In order to compensate for the detection errors of compactness saliency when the salient regions have appearances similar to the background, a foreground saliency map is calculated based on a depth-refined foreground seeds selection (DRSS) mechanism and multiple-cues contrast. Finally, these two saliency maps are integrated into a final saliency map through a weighted-sum method according to their importance. Experiments on two publicly available stereo datasets demonstrate that the proposed method performs better than ten other state-of-the-art approaches.

240 citations


Journal ArticleDOI
TL;DR: A novel feature vector with depth information is computed and fed into the Hidden Conditional Neural Field (HCNF) classifier to recognize dynamic hand gestures and Experimental results show that the proposed method is suitable for certain dynamic hand gesture recognition tasks.
Abstract: Dynamic hand gesture recognition is a crucial but challenging task in the pattern recognition and computer vision communities. In this paper, we propose a novel feature vector which is suitable for representing dynamic hand gestures, and present a satisfactory solution to recognizing dynamic hand gestures with a Leap Motion controller (LMC) only, which has not been reported in other papers. The feature vector with depth information is computed and fed into the Hidden Conditional Neural Field (HCNF) classifier to recognize dynamic hand gestures. The systematic framework of the proposed method includes two main steps: feature extraction and classification with the HCNF classifier. The proposed method is evaluated on two dynamic hand gesture datasets with frames acquired with an LMC. The recognition accuracy is 89.5% for the LeapMotion-Gesture3D dataset and 95.0% for the Handicraft-Gesture dataset. Experimental results show that the proposed method is suitable for certain dynamic hand gesture recognition tasks.

201 citations


Journal ArticleDOI
TL;DR: A low-feedback nonorthogonal multiple access (NOMA) scheme using massive multiple-input multiple-output (MIMO) transmission is proposed, and analytical results are developed to evaluate the performance of the proposed scheme for two scenarios.
Abstract: In this letter, a low-feedback nonorthogonal multiple access (NOMA) scheme using massive multiple-input multiple-output (MIMO) transmission is proposed. In particular, the proposed scheme can decompose a massive-MIMO-NOMA system into multiple separated single-input single-output (SISO) NOMA channels, and analytical results are developed to evaluate the performance of the proposed scheme for two scenarios, with perfect user ordering and with one-bit feedback, respectively.

201 citations


Journal ArticleDOI
TL;DR: In the proposed method, a novel structural feature is extracted as the gradient-weighted histogram of local binary pattern calculated on the gradient map (GWH-GLBP), which is effective to describe the complex degradation pattern introduced by multiple distortions.
Abstract: In practice, images available to consumers usually undergo several stages of processing, including acquisition, compression, transmission, and presentation, and each stage may introduce a certain type of distortion. It is common for images to be simultaneously distorted by multiple types of distortion. Most existing objective image quality assessment (IQA) methods have been designed to estimate the perceived quality of images corrupted by a single image processing stage. In this letter, we propose a no-reference (NR) IQA method to predict the visual quality of multiply-distorted images based on structural degradation. In the proposed method, a novel structural feature is extracted as the gradient-weighted histogram of the local binary pattern (LBP) calculated on the gradient map (GWH-GLBP), which effectively describes the complex degradation patterns introduced by multiple distortions. Extensive experiments conducted on two public multiply-distorted image databases demonstrate that the proposed GWH-GLBP metric compares favorably with existing full-reference and NR IQA methods in terms of high accordance with human subjective ratings.

178 citations


Journal ArticleDOI
TL;DR: This letter first investigates the optimal decoding order when the transmitter knows only the average CSI, and then develops the optimal power allocation schemes in closed form by employing the feature of the NOMA principle for the two problems.
Abstract: In this letter, we study a downlink non-orthogonal multiple access (NOMA) transmission system where only the average channel state information (CSI) is available at the transmitter. Two criteria, transmit power and user fairness, are used to formulate two optimization problems for NOMA systems, subject to outage probability constraints and the optimal decoding order. We first investigate the optimal decoding order when the transmitter knows only the average CSI, and then develop the optimal power allocation schemes in closed form for the two problems by employing the structure of the NOMA principle. Furthermore, the power difference between NOMA systems and OMA systems under outage constraints is obtained.

163 citations


Journal ArticleDOI
TL;DR: A simple closed-form solution method obtained by constructing new relationships between the hybrid measurements and the unknown source position, which attains the Cramér–Rao bound for Gaussian noise over the small-error region where the bias is small enough, relative to the variance, to be ignored.
Abstract: This letter focuses on locating passively a point source in the three-dimensional (3D) space, using the hybrid measurements of time difference of arrival (TDOA) and angle of arrival (AOA) observed at two stations. We propose a simple closed-form solution method by constructing new relationships between the hybrid measurements and the unknown source position. The mean-square error (MSE) matrix of the proposed solution is derived under the small error condition. Theoretical analysis discloses that the performance of the proposed solution can attain the Cramér–Rao bound (CRB) for Gaussian noise over the small-error region, where the bias is small enough, relative to the variance, to be ignored. The proposed solution can be extended directly to more than two observing stations with CRB performance maintained theoretically. Simulations validate the performance of the proposed method.

163 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the improved Spearman-distance-based K-Nearest-Neighbor (KNN) scheme outperforms the original KNN method under the indoor environment with severe multipath fading and temporal dynamics.
Abstract: Indoor localization based on existing Wi-Fi Received Signal Strength Indicator (RSSI) is attractive since it can reuse the existing Wi-Fi infrastructure. However, it suffers from dramatic performance degradation due to multipath signal attenuation and environmental changes. To improve the localization accuracy under the above-mentioned circumstances, an improved Spearman-distance-based K-Nearest-Neighbor (KNN) scheme is proposed. Simulation results demonstrate that our improved method outperforms the original KNN method under the indoor environment with severe multipath fading and temporal dynamics.
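The fingerprinting baseline this letter improves on can be sketched as Spearman-distance KNN: rank each RSSI vector, score candidates by one minus the rank correlation, and average the positions of the k nearest reference points. A minimal sketch, not the letter's improved scheme; the radio map layout and k below are illustrative assumptions.

```python
import math

def ranks(v):
    """Average ranks (1-based); ties share the mean of their rank span."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman_dist(a, b):
    """1 - Spearman rank correlation between two RSSI fingerprints
    (assumes neither fingerprint is constant)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = math.sqrt(sum((x - ma) ** 2 for x in ra))
    sb = math.sqrt(sum((y - mb) ** 2 for y in rb))
    return 1.0 - cov / (sa * sb)

def knn_locate(fingerprint, radio_map, k=3):
    """Average the positions of the k reference points whose stored
    fingerprints are nearest in Spearman distance."""
    nearest = sorted(radio_map,
                     key=lambda e: spearman_dist(fingerprint, e[0]))[:k]
    return (sum(p[0] for _, p in nearest) / k,
            sum(p[1] for _, p in nearest) / k)
```

Because only the rank order of the access-point strengths matters, the distance is insensitive to a uniform attenuation of the whole fingerprint, which is the property that helps under multipath fading.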

Journal ArticleDOI
TL;DR: The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL), where the hyperparameters are automatically selected by maximizing the evidence and promoting sparse DOA estimates.
Abstract: The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL). The prior for the source amplitudes is assumed independent zero-mean complex Gaussian distributed with hyperparameters, the unknown variances (i.e., the source powers). For a complex Gaussian likelihood with hyperparameter, the unknown noise variance, the corresponding Gaussian posterior distribution is derived. The hyperparameters are automatically selected by maximizing the evidence and promoting sparse DOA estimates. The SBL scheme for DOA estimation is discussed and evaluated competitively against LASSO (l1-regularization), conventional beamforming, and MUSIC.
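The simplest of the baselines mentioned above, the conventional (Bartlett) beamformer, scans a steering vector against the sample covariance of the snapshots. A minimal sketch for a uniform linear array; the array size, half-wavelength spacing, and one-degree scan grid below are illustrative assumptions, not values from the letter.

```python
import cmath
import math

def steering(theta, n_sensors, spacing=0.5):
    """ULA steering vector for arrival angle theta (radians);
    spacing is in wavelengths."""
    return [cmath.exp(-2j * math.pi * spacing * k * math.sin(theta))
            for k in range(n_sensors)]

def bartlett_spectrum(snapshots, grid, spacing=0.5):
    """Conventional (Bartlett) beamformer spectrum from multisnapshot
    array data: scan a^H R a over the candidate angles in grid."""
    n = len(snapshots[0])
    L = len(snapshots)
    # sample covariance R = (1/L) * sum_x x x^H
    R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / L
          for j in range(n)] for i in range(n)]
    spec = []
    for theta in grid:
        a = steering(theta, n, spacing)
        p = sum(a[i].conjugate() * R[i][j] * a[j]
                for i in range(n) for j in range(n))
        spec.append(p.real / n ** 2)
    return spec
```

With noiseless snapshots from a single source, the spectrum peaks at the true angle; SBL's advantage over this scan shows up with closely spaced or correlated sources and few snapshots.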

Journal ArticleDOI
TL;DR: A simple Bayesian sampler for linear regression with the horseshoe hierarchy is derived and Chib's algorithm may be used to easily compute the marginal likelihood of the model.
Abstract: In this note we derive a simple Bayesian sampler for linear regression with the horseshoe hierarchy. A new interpretation of the horseshoe model is presented, and extensions to logistic regression and alternative hierarchies, such as horseshoe+, are discussed. Due to the conjugacy of the proposed hierarchy, Chib’s algorithm may be used to easily compute the marginal likelihood of the model.

Journal ArticleDOI
TL;DR: With a computational cost at worst twice that of the noniterative scheme, the proposed algorithm provides significantly better quality, particularly at low signal-to-noise ratio, outperforming much costlier state-of-the-art alternatives.
Abstract: We denoise Poisson images with an iterative algorithm that progressively improves the effectiveness of variance-stabilizing transformations (VST) for Gaussian denoising filters. At each iteration, a combination of the Poisson observations with the denoised estimate from the previous iteration is treated as scaled Poisson data and filtered through a VST scheme. Due to the slight mismatch between a true scaled Poisson distribution and this combination, a special exact unbiased inverse is designed. We present an implementation of this approach based on the BM3D Gaussian denoising filter. With a computational cost at worst twice that of the noniterative scheme, the proposed algorithm provides significantly better quality, particularly at low signal-to-noise ratio, outperforming much costlier state-of-the-art alternatives.
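The VST idea the iteration builds on can be illustrated with the classical Anscombe transform and its two standard inverses; note the letter goes further and designs an exact unbiased inverse matched to its iteratively combined data, which is not reproduced here.

```python
import math

def anscombe(z):
    """Forward Anscombe VST: maps Poisson counts to values whose noise
    is approximately Gaussian with unit variance."""
    return 2.0 * math.sqrt(z + 0.375)

def inv_algebraic(f):
    """Direct algebraic inverse: exact round trip on noiseless values,
    but biased when applied to denoised noisy data."""
    return (f / 2.0) ** 2 - 0.375

def inv_asymptotic(f):
    """Asymptotically unbiased inverse, preferred after denoising;
    it returns slightly larger values than the algebraic inverse."""
    return (f / 2.0) ** 2 - 0.125
```

A Gaussian filter such as BM3D is applied between the forward transform and the inverse; the letter's contribution is iterating this loop with a specially designed exact unbiased inverse.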

Journal ArticleDOI
TL;DR: A novel method named spatial power spectrum sampling (SPSS) is proposed to reconstruct the INC matrix more efficiently, with the corresponding beamforming algorithm developed, where the covariance matrix taper (CMT) technique is employed to further improve its performance.
Abstract: Recently, a robust adaptive beamforming (RAB) technique based on interference-plus-noise covariance (INC) matrix reconstruction has been proposed, which utilizes the Capon spectrum estimator integrated over a region separated from the direction of the desired signal. Inspired by the sampling and reconstruction idea, in this paper, a novel method named spatial power spectrum sampling (SPSS) is proposed to reconstruct the INC matrix more efficiently, with the corresponding beamforming algorithm developed, where the covariance matrix taper (CMT) technique is employed to further improve its performance. Simulation results are provided to demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: Some numerical results are shown to highlight the effectiveness of the new technique to devise high-performance radar waveforms complying with the spectral compatibility requirements.
Abstract: Radar signal design in spectrally dense environments is a very challenging and topical problem. This letter deals with the synthesis of waveforms optimizing radar performance while satisfying multiple spectral compatibility constraints. Unlike some counterparts available in the open literature, a specific control on the interference energy radiated on each shared bandwidth is enforced. To tackle the resulting NP-hard optimization problem, a polynomial computational complexity procedure based on semidefinite relaxation (SDR) and randomization is developed. Finally, numerical results are presented to highlight the effectiveness of the new technique in devising high-performance radar waveforms that comply with the spectral compatibility requirements.

Journal ArticleDOI
TL;DR: This letter proposes a novel transmission scheme for jointly optimal allocation of the BS broadcasting power and time sharing among the wireless nodes, which maximizes the overall network throughput, under the constraint of average transmit power and maximum transmit power at the BS.
Abstract: In this letter, we consider wireless powered communication networks which could operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ a "harvest-then-transmit" mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit information towards the BS. Assuming time division multiple access (TDMA), we propose a novel transmission scheme for jointly optimal allocation of the BS broadcasting power and time sharing among the wireless nodes, which maximizes the overall network throughput under average and maximum transmit power constraints at the BS. The proposed scheme significantly outperforms state-of-the-art schemes that employ only optimal time allocation. For a single EH transmitter, we generalize the optimal solutions to the case of fixed circuit power consumption, which corresponds to a much more practical scenario.

Journal ArticleDOI
TL;DR: In this article, a data-driven scheme for learning optimal thresholding functions for iterative shrinkage/thresholding algorithm (ISTA) is presented, which is obtained by relating iterations of ISTA to layers of a simple feedforward neural network and developing a corresponding error backpropagation algorithm for fine-tuning the thresholding function.
Abstract: Iterative shrinkage/thresholding algorithm (ISTA) is a well-studied method for finding sparse solutions to ill-posed inverse problems. In this letter, we present a data-driven scheme for learning optimal thresholding functions for ISTA. The proposed scheme is obtained by relating iterations of ISTA to layers of a simple feedforward neural network and developing a corresponding error backpropagation algorithm for fine-tuning the thresholding functions. Simulations on sparse statistical signals illustrate potential gains in estimation quality due to the proposed data adaptive ISTA.
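Plain ISTA with the usual soft threshold can be sketched as below; the letter's contribution is to replace the fixed `soft` map with a parameterized thresholding function tuned by backpropagation, which this sketch does not attempt — it shows only the fixed-threshold baseline.

```python
def soft(x, t):
    """Soft threshold: the proximal operator of the l1 penalty."""
    return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

def ista(A, y, lam, step, iters=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, with A given
    as a list of rows. The letter learns the thresholding function by
    viewing each iteration below as one layer of a feedforward network."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the data-fidelity term: A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by (fixed) thresholding
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

Unrolling the loop into layers is what makes the thresholding function trainable by ordinary error backpropagation.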

Journal ArticleDOI
TL;DR: A method to select the most discriminative human body part based on group Lasso of motion to reduce the intra-class variation so as to improve the recognition performance is proposed.
Abstract: Gait recognition is an emerging biometric technology that identifies people through the analysis of the way they walk. The challenge of model-free based gait recognition is to cope with various intra-class variations such as clothing variations, carrying conditions and angle variations that adversely affect the recognition performance. This paper proposes a method to select the most discriminative human body part based on group Lasso of motion to reduce the intra-class variation so as to improve the recognition performance. The proposed method is evaluated using CASIA Gait Dataset B. Experimental results demonstrate that the proposed technique gives promising results.

Journal ArticleDOI
TL;DR: An effective sparse array extension method for maximizing the number of consecutive lags in the fourth-order difference co-array is proposed, leading to a novel enhanced sparse array structure based on co-prime arrays (CPAs) with significantly increased number of degrees of freedom (DOFs).
Abstract: An effective sparse array extension method for maximizing the number of consecutive lags in the fourth-order difference co-array is proposed, leading to a novel enhanced sparse array structure based on co-prime arrays (CPAs) with significantly increased number of degrees of freedom (DOFs). One method to exploit the increased DOFs based on nonstationary signals is also proposed, with simulation results provided to demonstrate the effectiveness of the proposed structure.

Journal ArticleDOI
TL;DR: This paper proposes a direct-position-determination-based method for localization of multiple emitters that transmit unknown signals that is based on minimum-variance-distortionless-response considerations to achieve a high resolution estimator that requires only a two-dimensional search for planar geometry, and a three- dimensional search for the general case.
Abstract: The most common methods for localization of radio frequency transmitters are based on two processing steps. In the first step, parameters such as angle of arrival or time of arrival are estimated at each base station independently. In the second step, the estimated parameters are used to determine the location of the transmitters. The direct position determination approach advocates using the observations from all the base stations together in order to estimate the locations in a single step. This single-step method is known to outperform two-step methods when the signal-to-noise ratio is low. In this paper, we propose a direct-position-determination-based method for localization of multiple emitters that transmit unknown signals. The method does not require knowledge of the number of emitters. It is based on minimum-variance-distortionless-response considerations to achieve a high resolution estimator that requires only a two-dimensional search for planar geometry, and a three-dimensional search for the general case.

Journal ArticleDOI
TL;DR: A robust Gaussian approximate fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises is proposed and results show the efficiency and superiority of the proposed smoother as compared with existing smoothers.
Abstract: In this letter, a robust Gaussian approximate (GA) fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises is proposed. The process and measurement noises are modeled as stationary Student’s t distributions, and the state trajectory and noise parameters are inferred approximately based on the variational Bayesian (VB) approach. Simulation results show the efficiency and superiority of the proposed smoother as compared with existing smoothers.

Journal ArticleDOI
TL;DR: Analytical and simulation results show that significant increase in the capacity gain of the system is achieved by using the proposed strategy compared to OMA and conventional NOMA.
Abstract: This letter proposes a strategy to efficiently utilize the spectrum of unpaired users in non-orthogonal multiple access (NOMA) systems. This is done by pairing multiple similar-gain users to a single distant user in a nonoverlapping frequency-band pairing fashion. We consider a case where the number of far users in a cell is larger than the number of near users; thus, some far users cannot be paired and must be served by conventional orthogonal multiple access (OMA). In this case, we propose a strategy where a near user can be paired with multiple (two) far users to optimally utilize the spectrum of unpaired users in the NOMA system. Performance of the proposed pairing scheme is evaluated in terms of ergodic sum capacity over independent Rayleigh flat-fading channels. Analytical and simulation results show that a significant increase in the capacity gain of the system is achieved by using the proposed strategy compared to OMA and conventional NOMA.
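The capacity comparison rests on the standard two-user downlink NOMA rate expressions with superposition coding and successive interference cancellation (SIC). A sketch with illustrative power-allocation and channel-gain values (assumptions of this example, not the letter's multi-user pairing optimization):

```python
import math

def noma_rates(p, a_near, g_near, g_far, noise=1.0):
    """Two-user downlink NOMA: the far user decodes its message treating
    the near user's signal as interference; the near user applies SIC
    first and then decodes interference-free."""
    a_far = 1.0 - a_near  # remaining power fraction goes to the far user
    r_far = math.log2(1.0 + a_far * p * g_far / (a_near * p * g_far + noise))
    r_near = math.log2(1.0 + a_near * p * g_near / noise)
    return r_near, r_far

def oma_rates(p, g_near, g_far, noise=1.0):
    """Orthogonal baseline: each user gets half the time at full power."""
    return (0.5 * math.log2(1.0 + p * g_near / noise),
            0.5 * math.log2(1.0 + p * g_far / noise))
```

With a disparate user pair (strong near channel, weak far channel) and most power allocated to the far user, the NOMA sum rate exceeds the OMA sum rate — the basic effect the letter's pairing strategy builds on.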

Journal ArticleDOI
TL;DR: This letter first extracts the gradient direction based on the local information of the image gradient magnitude, which not only preserves gradient direction consistency in local regions, but also demonstrates sensitivities to the distortions introduced to the SCI.
Abstract: In this letter, we make the first attempt to explore the use of the gradient direction for perceptual quality assessment of screen content images (SCIs). Specifically, the proposed approach first extracts the gradient direction based on the local information of the image gradient magnitude, which not only preserves gradient direction consistency in local regions, but also is sensitive to the distortions introduced to the SCI. A deviation-based pooling strategy is subsequently utilized to generate the corresponding image quality index. Moreover, we investigate and demonstrate the complementary behaviors of the gradient direction and magnitude for SCI quality assessment. By jointly considering them, our proposed SCI quality metric outperforms state-of-the-art quality metrics in terms of correlation with human visual system perception.

Journal ArticleDOI
TL;DR: A novel cross-corpus speech emotion recognition (SER) method using domain-adaptive least-squares regression (DaLSR) model that achieves better recognition accuracies than the state-of-the-art methods.
Abstract: In this letter, a novel cross-corpus speech emotion recognition (SER) method using domain-adaptive least-squares regression (DaLSR) model is proposed. In this method, an additional unlabeled data set from target speech corpus is used to serve as an auxiliary data set and combined with the labeled training data set from source speech corpus for jointly training the DaLSR model. In contrast to the traditional least-squares regression (LSR) method, the major novelty of DaLSR is that it is able to handle the mismatch problem between source and target speech corpora. Hence, the proposed DaLSR method is very suitable for coping with cross-corpus SER problem. For evaluating the performance of the proposed method in dealing with the cross-corpus SER problem, we conduct extensive experiments on three emotional speech corpora and compare the results with several state-of-the-art transfer learning methods that are widely used for cross-corpus SER problem. The experimental results show that the proposed method achieves better recognition accuracies than the state-of-the-art methods.

Journal ArticleDOI
TL;DR: Parameterized nonconvex penalty functions are employed to estimate the nonzero singular values of low-rank matrices more accurately than the nuclear norm, by formulating a convex optimization problem with nonconvex regularization.
Abstract: This letter proposes to estimate low-rank matrices by formulating a convex optimization problem with nonconvex regularization. We employ parameterized nonconvex penalty functions to estimate the nonzero singular values more accurately than the nuclear norm. A closed-form solution for the global optimum of the proposed objective function (sum of data fidelity and the nonconvex regularizer) is also derived. The solution reduces to singular value thresholding method as a special case. The proposed method is demonstrated for image denoising.
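The flavor of such a closed-form solution can be illustrated with firm thresholding, a representative prox of a parameterized nonconvex penalty (the minimax-concave flavor is assumed here; the letter's exact parameterization may differ). Applied elementwise to singular values it gives a nonconvex analogue of singular value thresholding that, unlike the soft threshold behind the nuclear norm, leaves large values unbiased.

```python
import math

def soft_thresh(x, lam):
    """Prox of the convex l1 penalty (behind the nuclear norm): shrinks
    every surviving value by lam, biasing large entries downward."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def firm_thresh(x, lam, a):
    """Firm threshold: prox of a parameterized nonconvex penalty with
    0 < a < 1/lam. It matches the soft threshold near zero but leaves
    values of magnitude >= 1/a completely untouched."""
    if abs(x) <= lam:
        return 0.0
    if abs(x) >= 1.0 / a:
        return float(x)
    return math.copysign((abs(x) - lam) / (1.0 - a * lam), x)
```

Setting a → 0 recovers soft thresholding, which matches the letter's remark that its solution reduces to the singular value thresholding method as a special case.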

Journal ArticleDOI
TL;DR: An underdetermined signal model (more sources than sensors) is considered and conditions under which Cramér-Rao Bounds exist are investigated, highlighting crucial roles played by the array geometry, as well as the correlation between source signals.
Abstract: Although Cramér–Rao bounds (CRB) for direction-of-arrival (DOA) estimation have been extensively studied for decades, existing results are mainly applicable when there are fewer sources than sensors. In contrast, this letter considers an underdetermined signal model (more sources than sensors) and investigates conditions under which the CRB exists. Necessary and sufficient conditions are derived for the associated Fisher information matrix to be nonsingular, which, in turn, leads to closed-form expressions for the CRB for underdetermined DOA estimation. These conditions highlight the crucial roles played by the array geometry, as well as the correlation between source signals. The CRBs for different array geometries are numerically compared in both the overdetermined and underdetermined settings.

Journal ArticleDOI
TL;DR: The use of cross correlation of the templates extracted during the registration and authentication stages can reduce the time required to achieve the target false acceptance rate (FAR) and false rejection rate (FRR).
Abstract: We propose a practical system design for biometric authentication based on electrocardiogram (ECG) signals collected from mobile or wearable devices. The ECG signals from such devices can be corrupted by noise as a result of movement, signal acquisition type, etc. This leads to a tradeoff between captured signal quality and ease of use. We propose the use of cross-correlation of the templates extracted during the registration and authentication stages. The proposed approach can reduce the time required to achieve the target false acceptance rate (FAR) and false rejection rate (FRR). The proposed algorithms are implemented in a wearable watch to verify feasibility. In the experimental results, the FAR and FRR are 5.2% and 1.9%, respectively, at approximately 3 s of authentication and 30 s of registration.
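The template-matching core can be sketched as a sliding normalized cross-correlation between an enrolled template and a probe recording; the decision threshold and the waveform values below are illustrative assumptions, not the letter's tuned system.

```python
import math

def ncc(u, v):
    """Normalized cross-correlation of two equal-length windows
    (assumes neither window is constant)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = math.sqrt(sum((a - mu) ** 2 for a in u))
    dv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv)

def authenticate(enrolled, probe, threshold=0.9):
    """Slide the enrolled template over the probe recording and accept
    when the peak normalized cross-correlation clears the threshold."""
    n = len(enrolled)
    best = max(ncc(enrolled, probe[i:i + n])
               for i in range(len(probe) - n + 1))
    return best >= threshold, best
```

Because the correlation is normalized, the match score is invariant to gain and offset changes in the acquired signal, which is helpful for noisy wearable-device recordings; moving the threshold trades FAR against FRR.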

Journal ArticleDOI
TL;DR: A novel algorithm is proposed which consists of two main steps, MA cancellation and spectral analysis, where the MA cancellation step cleanses the MA-contaminated PPG signals utilizing the acceleration data and the spectral analysis step estimates a higher-resolution spectrum of the signal and selects the spectral peaks corresponding to HR.
Abstract: This letter considers the problem of causal heart rate (HR) tracking during intensive physical exercise using simultaneous two-channel photoplethysmographic (PPG) and three-dimensional (3D) acceleration signals recorded from the wrist. This is a challenging problem because PPG signals recorded from the wrist during exercise are contaminated by strong motion artifacts (MAs). In this work, a novel algorithm is proposed which consists of two main steps: MA cancellation and spectral analysis. The MA cancellation step cleanses the MA-contaminated PPG signals utilizing the acceleration data, and the spectral analysis step estimates a higher-resolution spectrum of the signal and selects the spectral peaks corresponding to HR. Experimental results on datasets recorded from 12 subjects during fast running at a peak speed of 15 km/h show that the proposed algorithm achieves an average absolute error of 1.25 beats per minute (BPM). These experimental results also confirm that the proposed algorithm maintains high estimation accuracy even in strong MA conditions.

Journal ArticleDOI
TL;DR: A robust fast multiband image fusion method that requires fewer computational operations and is also more robust with respect to the blurring kernel than the method developed by Wei et al., at a reduced computational cost.
Abstract: This letter proposes a robust fast multiband image fusion method to merge a high-spatial, low-spectral resolution image and a low-spatial, high-spectral resolution image. Following the method recently developed by Wei et al., the generalized Sylvester matrix equation associated with the multiband image fusion problem is solved in a more robust and efficient way by exploiting the Woodbury formula, avoiding any permutation operation in the frequency domain as well as the blurring-kernel invertibility assumption required in their method. Thanks to this improvement, the proposed algorithm requires fewer computational operations and is also more robust with respect to the blurring kernel than the algorithm of Wei et al. The proposed algorithm is tested with the different priors considered by Wei et al. Our conclusion is that the proposed fusion algorithm is more robust than theirs, at a reduced computational cost.