
Showing papers in "IEEE Signal Processing Letters in 2014"


Journal ArticleDOI
TL;DR: In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users; the developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates, but the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power.
Abstract: In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e., the user's targeted quality of service will never be met.
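
The rate and outage quantities analyzed above can be made concrete with a small Monte Carlo sketch of a two-user downlink NOMA system with successive interference cancellation (SIC) under Rayleigh fading. All parameter values (power split, rate targets, noise) are illustrative assumptions, not taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: total power P, noise power N0, and a power split
# that gives the weak (far) user the larger share, as is standard in NOMA.
P, N0 = 10.0, 1.0
a_weak, a_strong = 0.8, 0.2
R_weak_target, R_strong_target = 0.5, 1.0   # targeted rates in bits/s/Hz

trials = 100_000
# Rayleigh fading: channel gains |h|^2 are exponential; sort to label users.
g = np.sort(rng.exponential(scale=1.0, size=(trials, 2)), axis=1)
g_weak, g_strong = g[:, 0], g[:, 1]

# Weak user decodes its own signal, treating the strong user's as noise.
R_weak = np.log2(1 + a_weak * P * g_weak / (a_strong * P * g_weak + N0))
# Strong user first decodes (and cancels) the weak user's message via SIC,
# then decodes its own interference-free signal.
R_sic = np.log2(1 + a_weak * P * g_strong / (a_strong * P * g_strong + N0))
R_strong = np.log2(1 + a_strong * P * g_strong / N0)

out_weak = np.mean(R_weak < R_weak_target)
# Strong user is in outage if SIC fails or its own rate misses the target.
out_strong = np.mean((R_sic < R_weak_target) | (R_strong < R_strong_target))
print(f"outage weak: {out_weak:.3f}, outage strong: {out_strong:.3f}")
print(f"ergodic sum rate: {np.mean(R_weak + R_strong):.3f} bits/s/Hz")
```

The degenerate case the letter warns about is visible here: the weak user's rate saturates at log2(1 + a_weak/a_strong) as transmit power grows, so any target above that ceiling forces its outage probability to one regardless of SNR.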

1,762 citations


Journal ArticleDOI
TL;DR: This letter presents a regression-based speech enhancement framework using deep neural networks (DNNs) with a multiple-layer deep architecture that tends to achieve significant improvements in terms of various objective quality measures.
Abstract: This letter presents a regression-based speech enhancement framework using deep neural networks (DNNs) with a multiple-layer deep architecture. In the DNN learning process, a large training set ensures a powerful modeling capability to estimate the complicated nonlinear mapping from observed noisy speech to desired clean signals. Acoustic context was found to improve the continuity of speech to be separated from the background noises successfully without the annoying musical artifact commonly observed in conventional speech enhancement algorithms. A series of pilot experiments were conducted under multi-condition training with more than 100 hours of simulated speech data, resulting in a good generalization capability even in mismatched testing conditions. When compared with the logarithmic minimum mean square error approach, the proposed DNN-based algorithm tends to achieve significant improvements in terms of various objective quality measures. Furthermore, in a subjective preference evaluation with 10 listeners, 76.35% of the subjects were found to prefer DNN-based enhanced speech to that obtained with the conventional technique.
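
As a loose, hedged illustration of the regression setup (not the letter's actual architecture or features), the sketch below maps noisy log-power spectral frames, stacked with neighboring frames as acoustic context, to clean frames, using scikit-learn's MLPRegressor as a stand-in DNN; a real system would train on the 100+ hours of data mentioned above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def stack_context(frames, c=3):
    """Concatenate +/- c neighboring frames as acoustic context."""
    padded = np.pad(frames, ((c, c), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)] for i in range(2 * c + 1)])

# noisy_lps / clean_lps: (num_frames, num_bins) log-power spectra computed
# elsewhere by an STFT front end; random placeholders here.
num_frames, num_bins = 2000, 257
noisy_lps = np.random.randn(num_frames, num_bins)
clean_lps = np.random.randn(num_frames, num_bins)

dnn = MLPRegressor(hidden_layer_sizes=(1024, 1024, 1024), max_iter=50)
dnn.fit(stack_context(noisy_lps), clean_lps)   # noisy(+context) -> clean mapping
enhanced_lps = dnn.predict(stack_context(noisy_lps))
```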

860 citations


Journal ArticleDOI
TL;DR: A fixed-point equation is established to solve for the exact value of the steady-state EMSE of adaptive filtering under the maximum correntropy criterion, and simulation results agree with the theoretical calculations quite well.
Abstract: The steady-state excess mean square error (EMSE) of adaptive filtering under the maximum correntropy criterion (MCC) has been studied. For the Gaussian noise case, we establish a fixed-point equation to solve for the exact value of the steady-state EMSE, while for the non-Gaussian noise case, we derive an approximate analytical expression for the steady-state EMSE based on a Taylor expansion approach. Simulation results agree with the theoretical calculations quite well.
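
The letter's results are analytical; the kind of simulation used to verify them can be sketched as follows, with an MCC-LMS update and illustrative parameters that are assumptions rather than the letter's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_iter, mu, sigma = 8, 200_000, 0.01, 2.0  # sigma: correntropy kernel width
w_true = rng.standard_normal(n_taps)
w = np.zeros(n_taps)
emse = []
for k in range(n_iter):
    x = rng.standard_normal(n_taps)
    ea = (w_true - w) @ x                      # a-priori (excess) error
    e = ea + 0.1 * rng.standard_normal()       # Gaussian measurement noise
    # MCC stochastic-gradient update: the kernel factor exp(-e^2/(2 sigma^2))
    # de-emphasizes large errors, which is what makes MCC robust.
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * x
    if k > n_iter // 2:                        # average over the steady state
        emse.append(ea**2)
print(f"simulated steady-state EMSE: {np.mean(emse):.2e}")
```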

355 citations


Journal ArticleDOI
TL;DR: This paper proposes a parallel framework for deciding coding unit trees through an in-depth understanding of the dependency among different coding units, achieving average speedups of more than 11 and 16 times for 1920x1080 and 2560x1600 video sequences, respectively, without any coding efficiency degradation.
Abstract: High Efficiency Video Coding (HEVC) uses a very flexible tree structure to organize coding units, which leads to a superior coding efficiency compared with previous video coding standards. However, such a flexible coding unit tree structure also poses a great challenge for encoders. In order to fully exploit the coding efficiency brought by this structure, a huge amount of computation is needed for an encoder to decide the optimal coding unit tree for each image block. One way to meet this demand is to use parallel computing enabled by many-core processors. In this paper, we analyze the challenges of using many-core processors to make coding unit tree decisions. Through an in-depth understanding of the dependency among different coding units, we propose a parallel framework to decide coding unit trees. Experimental results show that, on the Tile64 platform, our proposed method achieves average speedups of more than 11 and 16 times for 1920x1080 and 2560x1600 video sequences, respectively, without any coding efficiency degradation.

342 citations


Journal ArticleDOI
TL;DR: This work improves DeLong's algorithm by reducing the order of time complexity from quadratic down to linearithmic (the product of sample size and its logarithm).
Abstract: Among algorithms for comparing the areas under two or more correlated receiver operating characteristic (ROC) curves, DeLong's algorithm is perhaps the most widely used one due to its simplicity of implementation in practice. Unfortunately, however, the time complexity of DeLong's algorithm is of quadratic order (the product of sample sizes), thus making it time-consuming and impractical when the sample sizes are large. Based on an equivalent relationship between the Heaviside function and mid-ranks of samples, we improve DeLong's algorithm by reducing the order of time complexity from quadratic down to linearithmic (the product of sample size and its logarithm). Monte Carlo simulations verify the computational efficiency of our algorithmic findings in this work.
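
The heart of the speed-up is the stated equivalence between Heaviside comparisons and mid-ranks: the AUC (and, in the full algorithm, DeLong's variance components) can be computed from sort-based mid-ranks in linearithmic time. A minimal sketch for the AUC point estimate alone:

```python
import numpy as np
from scipy.stats import rankdata   # mid-ranks ("average" method), O(N log N)

def fast_auc(pos, neg):
    """AUC via mid-ranks instead of the O(m*n) Heaviside double loop."""
    m = len(pos)
    ranks = rankdata(np.concatenate([pos, neg]))   # sort-based mid-ranks
    # Mann-Whitney identity: the sum of positive-sample ranks determines the
    # AUC, with mid-ranks giving ties the 1/2 credit the Heaviside encodes.
    return (ranks[:m].sum() - m * (m + 1) / 2) / (m * len(neg))

rng = np.random.default_rng(0)
pos, neg = rng.normal(1, 1, 10_000), rng.normal(0, 1, 10_000)
print(fast_auc(pos, neg))   # ~0.76 for unit-variance Gaussians one sigma apart
```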

293 citations


Journal ArticleDOI
TL;DR: An Adaptive Denoising Autoencoder-based unsupervised domain adaptation method, in which prior knowledge learned from a target set is used to regularize training on a source set so as to achieve a matched feature space representation for the target and source sets while ensuring target domain knowledge transfer.
Abstract: With the availability of speech data obtained from different devices and varied acquisition conditions, we are often faced with scenarios where the intrinsic discrepancy between the training and the test data has an adverse impact on affective speech analysis. To address this issue, this letter introduces an Adaptive Denoising Autoencoder-based unsupervised domain adaptation method, where prior knowledge learned from a target set is used to regularize the training on a source set. Our goal is to achieve a matched feature space representation for the target and source sets while ensuring target domain knowledge transfer. The method has been successfully evaluated on the 2009 INTERSPEECH Emotion Challenge's FAU Aibo Emotion Corpus as the target corpus and two other publicly available speech emotion corpora as sources. The experimental results show that our method significantly improves over the baseline performance and outperforms related feature domain adaptation methods.

253 citations


Journal ArticleDOI
TL;DR: This work investigates convolutional neural networks for large vocabulary distant speech recognition, trained using speech recorded from a single distant microphone (SDM) and multiple distant microphones (MDM), and proposes a channel-wise convolution with two-way pooling.
Abstract: We investigate convolutional neural networks (CNNs) for large vocabulary distant speech recognition, trained using speech recorded from a single distant microphone (SDM) and multiple distant microphones (MDM). In the MDM case we explore a beamformed signal input representation compared with the direct use of multiple acoustic channels as a parallel input to the CNN. We have explored different weight sharing approaches, and propose a channel-wise convolution with two-way pooling. Our experiments, using the AMI meeting corpus, found that CNNs improve the word error rate (WER) by 6.5% relative compared to conventional deep neural network (DNN) models and 15.7% over a discriminatively trained Gaussian mixture model (GMM) baseline. For cross-channel CNN training, the WER improves by 3.5% relative over the comparable DNN structure. Compared with the best beamformed GMM system, cross-channel convolution reduces the WER by 9.7% relative, and matches the accuracy of a beamformed DNN.

248 citations


Journal ArticleDOI
TL;DR: Experiments and comparisons demonstrate that the proposed adaptive weighted mean filter has a very low detection error rate and high restoration quality, especially for high-level noise.
Abstract: In this letter, we propose a new adaptive weighted mean filter (AWMF) for detecting and removing high levels of salt-and-pepper noise. For each pixel, we first determine the adaptive window size by continuously enlarging the window until the maximum and minimum values of two successive windows are respectively equal. The current pixel is then regarded as a noise candidate if it is equal to the maximum or minimum value; otherwise, it is regarded as noise-free. Finally, a noise candidate is replaced by the weighted mean of the current window, while a noise-free pixel is left unchanged. Experiments and comparisons demonstrate that our proposed filter has a very low detection error rate and high restoration quality, especially for high-level noise.

199 citations


Journal ArticleDOI
TL;DR: This letter uses joint sparsity reconstruction methods to explore the underlying structure between the sparse signal and the grid mismatch, and demonstrates that the proposed methods can fully utilize the virtual aperture created by co-prime arrays and also outperform the previously proposed MUSIC method with spatial smoothing.
Abstract: In this letter, we consider the problem of direction of arrival estimation using sparsity-enforced reconstruction methods. Co-prime arrays with M + N sensors are utilized to increase the degrees of freedom from O(M + N) to O(MN). The key to the success of sparsity-based direction of arrival estimation is that every target must fall on the predefined grid; off-grid targets can severely jeopardize the reconstruction performance. In this letter, we use joint sparsity reconstruction methods to explore the underlying structure between the sparse signal and the grid mismatch. Two types of sparse reconstruction methods, the greedy method and the convex relaxation method, are considered. Through numerical experiments, we demonstrate that our proposed methods can fully utilize the virtual aperture created by co-prime arrays and also outperform the previously proposed MUSIC method with spatial smoothing.

167 citations


Journal ArticleDOI
TL;DR: The proposed approach has a very low computational complexity, and the performance analysis shows that it outperforms state-of-the-art techniques in terms of correlation with the human visual system on several commonly used databases.
Abstract: This letter proposes a simple and fast approach for no-reference image sharpness quality assessment. In this proposal, we define the maximum local variation (MLV) of each pixel as the maximum intensity variation of the pixel with respect to its 8 neighbors. The MLV distribution of the pixels is indicative of sharpness, and we use the standard deviation of the MLV distribution as a feature to measure it. Since high variations in pixel intensities are a better indicator of sharpness than low variations, the MLVs of the pixels are subjected to a weighting scheme in which heavier weights are assigned to greater MLVs, making the tail end of the MLV distribution thicker. The weighting makes the MLV distribution more discriminative across different blur degrees. Finally, the standard deviation of the weighted MLV distribution is used as the metric to measure sharpness. The proposed approach has a very low computational complexity, and the performance analysis shows that our approach outperforms state-of-the-art techniques in terms of correlation with the human visual system on several commonly used databases.
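
A hedged sketch of the MLV pipeline; the exponential rank-based weighting below is an illustrative realization of "heavier weights on greater MLVs", not necessarily the letter's exact weighting.

```python
import numpy as np

def mlv_sharpness(gray, alpha=4.0):
    """No-reference sharpness sketch: std of a weighted MLV distribution."""
    I = gray.astype(np.float64)
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    mlv = np.zeros_like(I)
    for di, dj in shifts:   # max absolute difference to the 8 neighbors
        # (np.roll wraps at the borders -- a simplification)
        mlv = np.maximum(mlv, np.abs(I - np.roll(np.roll(I, di, 0), dj, 1)))
    v = np.sort(mlv.ravel())
    ranks = np.arange(v.size) / (v.size - 1)   # 0 (smallest) .. 1 (largest)
    weighted = v * np.exp(alpha * ranks)       # heavier weights on larger MLVs
    return weighted.std()                      # larger value => sharper image
```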

160 citations


Journal ArticleDOI
TL;DR: This work grounds authentication in the unique physical characteristics of the frames that each node places on the bus, and shows that distinguishing between certain nodes is clearly possible and that each message can be precisely linked to its sender.
Abstract: The CAN (Controller Area Network) bus, i.e., the de facto standard for connecting ECUs inside cars, is increasingly becoming exposed to some of the most sophisticated security threats. Due to its broadcast nature and ID-oriented communication, each node is blind with regard to the source of received messages, and assuring source identification is a difficult challenge. While recent research has focused on devising security in CAN networks through cryptography at the protocol layer, such solutions are not always an alternative due to increased communication and computational overheads, not to mention backward compatibility issues. In this work we take steps toward a distinct approach, namely, grounding authentication in the unique physical characteristics of the frames that each node places on the bus. For this we analyze the frames by taking measurements of the voltage, filtering the signal and examining mean square errors and convolutions in order to uniquely identify each potential sender. Our experimental results show that distinguishing between certain nodes is clearly possible and that, by clever choices of transceivers and frame IDs, each message can be precisely linked to its sender.

Journal ArticleDOI
TL;DR: This work proposes a salient object detection algorithm via multi-scale analysis on superpixels that achieves the highest precision value when evaluated on one of the most popular datasets, the ASD dataset.
Abstract: We propose a salient object detection algorithm via multi-scale analysis on superpixels. First, multi-scale segmentations of an input image are computed and represented by superpixels. In contrast to prior work, we utilize various Gaussian smoothing parameters to generate coarse or fine results, thereby facilitating the analysis of salient regions. At each scale, three essential cues from local contrast, integrity and center bias are considered within the Bayesian framework. Next, we compute saliency maps by weighted summation and normalization. The final saliency map is optimized by a guided filter, which further improves the detection results. Extensive experiments on two large benchmark datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods. The proposed method achieves the highest precision value of 97.39% when evaluated on one of the most popular datasets, the ASD dataset.

Journal ArticleDOI
TL;DR: Experimental results on two benchmark datasets demonstrate the better co-saliency detection performance of the proposed model compared to state-of-the-art co-saliency models.
Abstract: Co-saliency detection, an emerging and interesting issue in saliency detection, aims to discover the common salient objects in a set of images. This letter proposes a hierarchical segmentation based co-saliency model. On the basis of fine segmentation, regional histograms are used to measure regional similarities between region pairs in the image set, and regional contrasts within each image are exploited to evaluate the intra-saliency of each region. On the basis of coarse segmentation, an object prior for each region is measured based on the connectivity with image borders. Finally, the global similarity of each region is derived based on regional similarity measures, and then effectively integrated with intra-saliency map and object prior map to generate the co-saliency map for each image. Experimental results on two benchmark datasets demonstrate the better co-saliency detection performance of the proposed model compared to the state-of-the-art co-saliency models.

Journal ArticleDOI
TL;DR: This work investigates the problem of downlink physical layer multicasting and proposes a provably convergent iterative second-order cone programming (SOCP) solution that offers improved power efficiency and a massively reduced computational complexity.
Abstract: We investigate the problem of downlink physical layer multicasting that aims at minimizing the transmit power with a massive antenna array installed at the transmitter site. We take a solution based on semidefinite relaxation (SDR) as our benchmark. It is shown that instead of working on the semidefinite program (SDP) naturally produced by the SDR, the dual counterpart of the same problem may provide a more efficient numerical implementation. Later, by using a successive convex approximation strategy, we arrive at a provably convergent iterative second-order cone programming (SOCP) solution. Our thorough numerical investigations report that the newly proposed SOCP solution offers improved power efficiency and a massively reduced computational complexity. Therefore, the SOCP solution is seen as a suitable candidate for obtaining beamformers that minimize transmit power, especially, when a very large number of antennas is used at the transmitter.
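
For concreteness, the SDR benchmark that the letter starts from can be sketched with cvxpy (single multicast group, illustrative sizes and targets); the letter's actual contributions, the dual-side SDP implementation and the successive-convex-approximation SOCP iteration, are not reproduced here.

```python
import numpy as np
import cvxpy as cp

# Illustrative multicast power-minimization SDR: h[i] are user channels,
# gamma the per-user SNR target, noise the receiver noise power.
n_tx, n_users, gamma, noise = 16, 4, 1.0, 1.0
rng = np.random.default_rng(0)
h = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

X = cp.Variable((n_tx, n_tx), hermitian=True)   # relaxation of X = w w^H
constraints = [X >> 0]
for i in range(n_users):
    Hi = np.outer(h[i], h[i].conj())
    constraints.append(cp.real(cp.trace(Hi @ X)) >= gamma * noise)
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints)
prob.solve()
print("relaxed transmit power:", prob.value)
# The letter argues that solving the SDP dual, and ultimately an iterative
# SOCP scheme, scales far better as n_tx grows into the massive regime.
```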

Journal ArticleDOI
TL;DR: Closed-form formulas for calculating the Chi square and higher-order Chi distances between statistical distributions belonging to the same exponential family with affine natural space, and an analytic formula for the f-divergences based on Taylor expansions and relying on an extended class of Chi-type distances.
Abstract: We report closed-form formulas for calculating the Chi square and higher-order Chi distances between statistical distributions belonging to the same exponential family with affine natural space, and instantiate those formulas for the Poisson and isotropic Gaussian families. We then describe an analytic formula for the f-divergences based on Taylor expansions and relying on an extended class of Chi-type distances.
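
For the Pearson chi-square case, the closed form follows directly from the exponential family definitions; this reconstruction uses a common convention, which may differ in direction from the letter's.

```latex
% Pearson chi-square between two members p_{\theta_1}, p_{\theta_2} of an
% exponential family with sufficient statistic t(x) and log-normalizer F:
%   p_\theta(x) = \exp(\langle t(x), \theta \rangle - F(\theta))
\chi^2(p_{\theta_1} : p_{\theta_2})
  = \int \frac{p_{\theta_2}(x)^2}{p_{\theta_1}(x)}\,\mathrm{d}x - 1
  = \exp\!\bigl(F(2\theta_2 - \theta_1) - 2F(\theta_2) + F(\theta_1)\bigr) - 1.
```

The affine natural space assumption guarantees that $2\theta_2 - \theta_1$ is again a valid natural parameter, so $F$ is evaluated only where it is defined; the higher-order Chi distances follow similarly from higher-order mixtures of the natural parameters.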

Journal ArticleDOI
TL;DR: This letter investigates downlink resource reuse between multiple D2D links and multiple CUs to achieve a network utility enhancement for D2D communication while ensuring the QoS of the CUs.
Abstract: The full potential of device-to-device (D2D) communication relies on efficient resource reuse strategies, including power control and the matching of D2D links and cellular users (CUs). This letter investigates downlink resource reuse between multiple D2D links and multiple CUs. Our goal is to achieve a network utility enhancement for D2D communication while ensuring the QoS of the CUs. Despite the combinatorial nature of the problem and the coupled power constraints, we characterize the optimal D2D-CU matching as well as their power coordination, and propose an efficient algorithm to jointly optimize all D2D links and CUs. The proposed downlink resource reuse strategy shows superiority over existing D2D schemes.

Journal ArticleDOI
TL;DR: A ghost-free high dynamic range (HDR) image synthesis algorithm using a low-rank matrix completion framework, called RM-HDR, which can often provide significant gains in synthesized HDR image quality over state-of-the-art approaches.
Abstract: We propose a ghost-free high dynamic range (HDR) image synthesis algorithm using a low-rank matrix completion framework, which we call RM-HDR. Based on the assumption that irradiance maps are linearly related to low dynamic range (LDR) image exposures, we formulate ghost region detection as a rank minimization problem. We incorporate constraints on moving objects, i.e., sparsity, connectivity, and priors on under- and over-exposed regions into the framework. Experiments on real image collections show that the RM-HDR can often provide significant gains in synthesized HDR image quality over state-of-the-art approaches. Additionally, a complexity analysis is performed which reveals computational merits of RM-HDR over recent advances in deghosting for HDR.

Journal ArticleDOI
TL;DR: To discover the proper delay, this work proposes an autocorrelation-like (ACL) function of the signals, and applies the introduced topological approach to analyze breathing sound signals for wheeze detection.
Abstract: We propose a new approach to detect and quantify the periodic structure of dynamical systems using topological methods. We propose to use delay-coordinate embedding as a tool to detect the presence of harmonic structures by using persistent homology for robust analysis of point clouds of delay-coordinate embeddings. To discover the proper delay, we propose an autocorrelation-like (ACL) function of the signals, and apply the introduced topological approach to analyze breathing sound signals for wheeze detection. Experiments have been carried out to substantiate the capabilities of the proposed method.
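
A hedged sketch of the front end: a standard first-zero-crossing autocorrelation rule stands in for the letter's ACL function, and the persistent homology stage that would analyze the resulting point cloud (e.g., via a TDA library) is omitted.

```python
import numpy as np

def pick_delay(x, max_lag=200):
    """Stand-in for the letter's ACL function: lag of the first zero
    crossing of the sample autocorrelation."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]
    crossings = np.where(ac[:max_lag] <= 0)[0]
    return int(crossings[0]) if crossings.size else max_lag

def delay_embed(x, dim=3, tau=None):
    """Delay-coordinate embedding: the point cloud that persistent homology
    inspects for loops, i.e., harmonic (periodic) structure."""
    tau = pick_delay(x) if tau is None else tau
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

t = np.linspace(0, 10, 2000)
cloud = delay_embed(np.sin(2 * np.pi * 3 * t))  # a periodic signal embeds as a loop
```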

Journal ArticleDOI
TL;DR: New optimization algorithms to minimize a sum of convex functions, which may be nonsmooth and/or composed with linear operators, are proposed; this generic formulation encompasses various forms of regularized inverse problems in imaging.
Abstract: We propose new optimization algorithms to minimize a sum of convex functions, which may be nonsmooth and/or composed with linear operators. This generic formulation encompasses various forms of regularized inverse problems in imaging. The proposed algorithms proceed by splitting: the gradient or proximal operators of the functions are called individually, without an inner loop or linear system to solve at each iteration. The algorithms are easy to implement and have proven convergence to an exact solution. The classical Douglas–Rachford and forward–backward splitting methods, as well as the recent and efficient algorithm of Chambolle–Pock, are recovered as particular cases. The application to inverse imaging problems regularized by the total variation is detailed.
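
The simplest member of this family, forward-backward splitting applied to an l1-regularized least-squares problem, illustrates the gradient-plus-proximal structure; this is a generic sketch, not the letter's more general algorithm, which also handles sums with linear operators (e.g., total variation).

```python
import numpy as np

def forward_backward(A, y, lam, tau, n_iter=500):
    """min_x 0.5*||Ax - y||^2 + lam*||x||_1 via forward-backward splitting:
    a gradient step on the smooth term, then the prox of the l1 term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                    # forward (gradient) step
        z = x - tau * g
        x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0)  # backward (prox)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
x_true = np.zeros(300)
x_true[rng.choice(300, 10, replace=False)] = 1.0
y = A @ x_true
tau = 0.9 / np.linalg.norm(A, 2) ** 2            # step below 1/L for convergence
x_hat = forward_backward(A, y, lam=0.1, tau=tau)
```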

Journal ArticleDOI
TL;DR: In this paper, a low-complexity robust adaptive beamforming (RAB) technique which estimates the steering vector using a Low-Complexity Shrinkage-Based Mismatch Estimation (LOCSME) algorithm is proposed.
Abstract: In this work, we propose a low-complexity robust adaptive beamforming (RAB) technique which estimates the steering vector using a Low-Complexity Shrinkage-Based Mismatch Estimation (LOCSME) algorithm. The proposed LOCSME algorithm estimates the covariance matrix of the input data and the interference-plus-noise covariance (INC) matrix by using the Oracle Approximating Shrinkage (OAS) method. LOCSME only requires prior knowledge of the angular sector in which the actual steering vector is located and the antenna array geometry. LOCSME does not require a costly optimization algorithm and does not need to know extra information from the interferers, which avoids direction finding for all interferers. Simulations show that LOCSME outperforms previously reported RAB algorithms and has a performance very close to the optimum.

Journal ArticleDOI
TL;DR: The classification via sparse representation of the monogenic signal is presented for target recognition in SAR images and is robust towards noise corruption, as well as configuration and depression variations.
Abstract: In this letter, classification via sparse representation of the monogenic signal is presented for target recognition in SAR images. To characterize SAR images, which have broad spectral information yet spatial localization, the monogenic signal is employed. An augmented monogenic feature vector is then generated via uniform down-sampling, normalization and concatenation of the monogenic components. The resulting feature vector is fed into a recently developed framework, i.e., sparse representation-based classification (SRC). Specifically, the feature vectors of the training samples are utilized as the basis vectors to code the feature vector of the test sample as a sparse linear combination of them. The representation is obtained via $\ell_1$-norm minimization, and the inference is reached according to the characteristics of the representation on reconstruction. Extensive experiments on the MSTAR database demonstrate that the proposed method is robust towards noise corruption, as well as configuration and depression variations.
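
A compact sketch of the SRC stage described above, assuming the monogenic feature extraction has been done elsewhere; ISTA stands in here for the l1-norm solver.

```python
import numpy as np

def src_classify(A, labels, y, lam=0.05, n_iter=300):
    """SRC sketch: code test feature y over training features A (columns),
    then assign the class whose atoms give the smallest residual."""
    A = A / np.linalg.norm(A, axis=0)            # unit-norm training atoms
    alpha = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):                      # ISTA for the l1-penalized code
        z = alpha - step * A.T @ (A @ alpha - y)
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)
    residuals = {c: np.linalg.norm(y - A[:, labels == c] @ alpha[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)     # class that best reconstructs y
```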

Journal ArticleDOI
TL;DR: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA); testing on various image-quality databases demonstrates that NJQA is either competitive with or outperforms modern competing methods on JPEG images.
Abstract: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA). Our method does not specifically aim to measure blockiness. Instead, quality is estimated by first counting the number of zero-valued DCT coefficients within each block, and then using a map, which we call the quality relevance map, to weight these counts. The quality relevance map for an image is a map that indicates which blocks are naturally uniform (or near-uniform) vs. which blocks have been made uniform (or near-uniform) via JPEG compression. Testing on various image-quality databases demonstrates that NJQA is either competitive with or outperforms modern competing methods on JPEG images.
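
The first stage, counting (near-)zero DCT coefficients per 8x8 block, can be sketched as below; the quality relevance map weighting, which is the letter's key ingredient, is omitted, and the zero threshold is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def zero_dct_counts(gray, block=8, tol=0.5):
    """Count near-zero DCT coefficients per block -- the raw statistic NJQA
    starts from (the quality relevance map weighting is omitted here)."""
    H, W = gray.shape
    H, W = H - H % block, W - W % block
    counts = []
    for i in range(0, H, block):
        for j in range(0, W, block):
            coeffs = dctn(gray[i:i + block, j:j + block].astype(float),
                          norm="ortho")
            counts.append(np.sum(np.abs(coeffs) < tol))
    # Heavier JPEG quantization -> more zeroed coefficients -> lower quality.
    return np.array(counts)
```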

Journal ArticleDOI
TL;DR: In this paper, a computationally efficient subspace algorithm is developed for two-dimensional (2D) direction-of-arrival (DOA) estimation with L-shaped array structured by two uniform linear arrays.
Abstract: In this letter, a computationally efficient subspace algorithm is developed for two-dimensional (2-D) direction-of-arrival (DOA) estimation with L-shaped array structured by two uniform linear arrays (ULAs). The proposed method requires neither constructing the correlation matrix of the received data nor performing the singular value decomposition (SVD) of the correlation matrix. The problem is solved by dealing with three vectors composed of the first column, the first row and diagonal entries of the correlation matrix, which reduces the computational burden. Simultaneously, the proposed method utilizes the conjugate symmetry to enlarge the effective array aperture, which improves the estimation precision. The simulation results are presented to validate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed, showing better robustness against impulsive interference, together with a novel weight transfer method that further improves the tracking performance.
Abstract: A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional minimum mean square error (MSE) criterion-based adaptive filtering algorithms, the MCC-based algorithm shows better robustness against impulsive interference. However, its major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter, as in the conventional convex combination scheme, maximizing the correntropy is introduced to make the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight transfer method to further improve the tracking performance. The good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios that include impulsive interference and abrupt changes.
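
A hedged sketch of the scheme: two MCC-LMS components with fast and slow step sizes, mixed through a sigmoid-parameterized weight adapted by ascending the correntropy of the combined error. The exact update form and all parameters are plausible assumptions, not the letter's equations, and the weight transfer step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, mu1, mu2, mu_a, sigma = 8, 0.05, 0.005, 0.5, 2.0
w_true = rng.standard_normal(n_taps)
w1 = np.zeros(n_taps); w2 = np.zeros(n_taps); a = 0.0  # mix lam = sigmoid(a)

for k in range(50_000):
    x = rng.standard_normal(n_taps)
    d = w_true @ x + 0.05 * rng.standard_normal()
    if rng.random() < 0.01:                    # occasional impulsive interference
        d += 50.0 * rng.standard_normal()
    y1, y2 = w1 @ x, w2 @ x
    e1, e2 = d - y1, d - y2
    # Component MCC-LMS updates: fast (mu1) and slow (mu2) filters.
    w1 += mu1 * np.exp(-e1**2 / (2 * sigma**2)) * e1 * x
    w2 += mu2 * np.exp(-e2**2 / (2 * sigma**2)) * e2 * x
    # Mixing parameter: gradient ascent on the correntropy of the combined
    # error (rather than descent on its square), keeping lam in (0, 1).
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d - (lam * y1 + (1 - lam) * y2)
    a += mu_a * np.exp(-e**2 / (2 * sigma**2)) * e * (y1 - y2) * lam * (1 - lam)
```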

Journal ArticleDOI
TL;DR: A novel local texture operator, named the second-order local ternary pattern (LTP), is proposed for median filtering detection, and it is shown that the proposed scheme performs better than several state-of-the-art approaches investigated.
Abstract: Recently, detecting the traces introduced by content-preserving image manipulations has received a great deal of attention from forensic analyzers. The median filter is a widely used nonlinear denoising operator, so the detection of median filtering is of real practical significance in image forensics. In this letter, a novel local texture operator, named the second-order local ternary pattern (LTP), is proposed for median filtering detection. The proposed local texture operator encodes the local derivative direction variations by using a 3-valued coding function and is capable of effectively capturing the changes of local texture caused by median filtering. In addition, kernel principal component analysis (KPCA) is exploited to reduce the dimensionality of the proposed feature set, making the computational cost manageable. The experimental results show that the proposed scheme performs better than several state-of-the-art approaches investigated.

Journal ArticleDOI
TL;DR: A new approach to direction of arrival (DOA) estimation of narrowband sources using an antenna array which overcomes basis mismatch effects and is hybrid in nature, using a low rank matrix denoising approach followed by a MUSIC-like subspace method to estimate the DOAs.
Abstract: The problem of direction of arrival (DOA) estimation of narrowband sources using an antenna array is considered where the number of sources can potentially exceed the number of sensors. In earlier works, the authors showed that using a suitable antenna geometry, such as the nested and coprime arrays, it is possible to localize O(M^2) sources using M sensors. To this end, two different approaches have been proposed. One is based on an extension of subspace based methods such as MUSIC to these sparse arrays, and the other employs $\ell_1$-norm minimization based sparse estimation techniques by assuming an underlying grid. While the former requires the knowledge of number of sources, the latter suffers from basis mismatch effects. In this letter, a new approach is proposed which overcomes both these weaknesses. The method is hybrid in nature, using a low rank matrix denoising approach followed by a MUSIC-like subspace method to estimate the DOAs. The number of sources is revealed as a by-product of the low rank denoising stage. Moreover, it does not assume any underlying grid and thereby does not suffer from basis mismatch. Numerical examples validate the effectiveness of the proposed method when compared against existing techniques.

Journal ArticleDOI
TL;DR: Numerical results demonstrate that a significant gain in the output SINR can be achieved in this active array, compared to the conventional phased-array radar and omnidirectional multiple-input multiple-output (MIMO) radar.
Abstract: We jointly design the transmit and receive beamforming based on a-priori information on the locations of the target and interferences in an active array, where each transmit element emits the same waveform up to a complex scalar. A sequential optimization algorithm is proposed to maximize the output signal-to-interference-plus-noise ratio (SINR). Numerical results demonstrate that a significant gain in the output SINR can be achieved in this active array, compared to the conventional phased-array radar and omnidirectional multiple-input multiple-output (MIMO) radar.

Journal ArticleDOI
TL;DR: This letter presents a Referenceless quality Measure of Blocking artifacts (RMB) using Tchebichef moments, based on the observation that TcheBichef kernels with different orders have varying abilities to capture blockiness.
Abstract: This letter presents a Referenceless quality Measure of Blocking artifacts (RMB) using Tchebichef moments. It is based on the observation that Tchebichef kernels of different orders have varying abilities to capture blockiness. High-odd-order moments are computed block by block to score the blocking artifacts. The blockiness scores are further weighted to incorporate characteristics of the Human Visual System (HVS), which is achieved by classifying the blocks as smooth or textured. Experimental results and comparisons demonstrate the advantage of the proposed method.

Journal ArticleDOI
TL;DR: It is shown that the improved adaptive scheme achieves the best convergence performance among all the considered methods with a low computational complexity.
Abstract: We propose a reduced-rank beamformer based on the rank-D Joint Iterative Optimization (JIO) of the modified Widely Linear Constrained Minimum Variance (WLCMV) problem for non-circular signals. The novel WLCMV-JIO scheme takes advantage of both the Widely Linear (WL) processing and the reduced-rank concept, outperforming its linear counterpart as well as the full-rank WL beamformer. We develop an augmented recursive least squares algorithm and present an improved structured version with a much more efficient implementation. It is shown that the improved adaptive scheme achieves the best convergence performance among all the considered methods with a low computational complexity.

Journal ArticleDOI
Hadi Zayyani
TL;DR: A new adaptive filtering algorithm in system identification applications which is based on a continuous mixed p-norm, controlled by a continuous probability density-like function of p which is assumed to be uniform in this letter.
Abstract: We propose a new adaptive filtering algorithm for system identification applications based on a continuous mixed $p$-norm. It enjoys the advantages of various error norms since it combines $p$-norms for $1 \leq p \leq 2$. The mixture is controlled by a continuous probability density-like function of $p$, which is assumed to be uniform in our derivations in this letter. Two versions of the suggested algorithm are developed. The robustness of the proposed algorithms against impulsive noise is demonstrated in a system identification simulation.
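
Under the uniform mixing density, the stochastic gradient of the cost $\int_1^2 E|e|^p\,dp$ leads to an update of the form sketched below; the integral factor is approximated numerically, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p_grid = np.linspace(1.0, 2.0, 21)   # uniform mixing density over p in [1, 2]

def cmpn_gain(e, eps=1e-12):
    # Approximate the integral of p*|e|^(p-1) over p in [1, 2]; the interval
    # has unit length, so the mean over a uniform grid approximates it.
    return float(np.mean(p_grid * (np.abs(e) + eps) ** (p_grid - 1.0)))

n_taps, mu = 8, 0.01
w_true = rng.standard_normal(n_taps)
w = np.zeros(n_taps)
for k in range(20_000):
    x = rng.standard_normal(n_taps)
    d = w_true @ x + 0.05 * rng.standard_normal()
    if rng.random() < 0.01:          # occasional impulses: where mixed norms help
        d += 20.0 * rng.standard_normal()
    e = d - w @ x
    w += mu * cmpn_gain(e) * np.sign(e) * x   # blends sign-LMS and LMS behavior
print("weight error norm:", np.linalg.norm(w_true - w))
```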