
Showing papers in "IEEE Journal of Selected Topics in Signal Processing in 2011"


Journal ArticleDOI
TL;DR: Recent advances in research related to cognitive radios are surveyed, including the fundamentals of cognitive radio technology, architecture of a cognitive radio network and its applications, and important issues in dynamic spectrum allocation and sharing are investigated in detail.
Abstract: With the rapid deployment of new wireless devices and applications, the last decade has witnessed a growing demand for wireless radio spectrum. However, the fixed spectrum assignment policy has become a bottleneck to more efficient spectrum utilization, under which a great portion of the licensed spectrum is severely under-utilized. The inefficient usage of the limited spectrum resources has urged spectrum regulatory bodies to review their policies and seek innovative communication technologies that can exploit the wireless spectrum in a more intelligent and flexible way. The concept of cognitive radio was proposed to address the issue of spectrum efficiency and has been receiving increasing attention in recent years, since it equips wireless users with the capability to optimally adapt their operating parameters according to interactions with the surrounding radio environment. There have been many significant developments on cognitive radios in the past few years. This paper surveys recent advances in research related to cognitive radios. The fundamentals of cognitive radio technology, the architecture of a cognitive radio network, and its applications are first introduced. Existing work on spectrum sensing is then reviewed, and important issues in dynamic spectrum allocation and sharing are investigated in detail.

1,329 citations


Journal ArticleDOI
TL;DR: This paper derives two sparse Bayesian learning algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation, and provides analysis of the global and local minima of their cost function.
Abstract: We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation and thus their performance degrades significantly with the correlation. In this paper, we propose a block sparse Bayesian learning framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity on the solution matrix. We also provide analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
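The paper's block-SBL algorithms explicitly model temporal correlation and are not reproduced here; as a simpler point of reference for the MMV setting, the following is a minimal simultaneous-OMP sketch that recovers a shared row support while ignoring temporal correlation. The dictionary and signals are made-up toy data.

```python
import numpy as np

def simultaneous_omp(A, Y, k):
    """Greedily recover k shared nonzero rows of X from Y = A @ X."""
    n = A.shape[1]
    support, R = [], Y.copy()
    for _ in range(k):
        # Score each atom by its total correlation with all residual columns.
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        # Jointly re-fit every measurement vector on the current support.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support] = X_s
    return X, sorted(support)

# Toy dictionary with unit-norm atoms; rows 0 and 2 of X_true are active.
A = np.array([[1., 0., 0., 1., 1.],
              [0., 1., 0., 1., 0.],
              [0., 0., 1., 0., 1.]])
A = A / np.linalg.norm(A, axis=0)
X_true = np.zeros((5, 2))
X_true[0] = [3., 0.]
X_true[2] = [0., 2.]
X_hat, supp = simultaneous_omp(A, A @ X_true, k=2)
print(supp)  # → [0, 2]
```

The joint (row-wise) scoring is what distinguishes the MMV setting from solving each measurement vector independently.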

792 citations


Journal ArticleDOI
TL;DR: The main families of active learning algorithms are reviewed and tested: committee, large margin, and posterior probability-based; active learning aims at building efficient training sets by iteratively improving the model performance through sampling.
Abstract: Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
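As an illustrative aside (not code from the paper), the large-margin family reduces, in its simplest "margin sampling" form, to ranking unlabeled samples by the gap between their two highest class probabilities and querying the most ambiguous ones. The probability values below are invented for the example:

```python
def margin_sampling(probs, n_query):
    """probs: per-sample lists of class probabilities.
    Returns indices of the n_query most uncertain samples."""
    def margin(p):
        top2 = sorted(p, reverse=True)[:2]
        return top2[0] - top2[1]        # small margin = uncertain
    ranked = sorted(range(len(probs)), key=lambda i: margin(probs[i]))
    return ranked[:n_query]             # indices to hand to the oracle

probs = [
    [0.90, 0.05, 0.05],   # confident
    [0.40, 0.35, 0.25],   # most ambiguous (margin 0.05)
    [0.55, 0.44, 0.01],   # ambiguous between two classes (margin 0.11)
]
print(margin_sampling(probs, 2))  # → [1, 2]
```

The queried samples are labeled by the user and added to the training set, and the model is retrained, closing the active learning loop.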

481 citations


Journal ArticleDOI
TL;DR: This paper proposes a new sparsity-based algorithm for automatic target detection in hyperspectral imagery (HSI) based on the concept that a pixel in HSI lies in a low-dimensional subspace and thus can be represented as a sparse linear combination of the training samples.
Abstract: In this paper, we propose a new sparsity-based algorithm for automatic target detection in hyperspectral imagery (HSI). This algorithm is based on the concept that a pixel in HSI lies in a low-dimensional subspace and thus can be represented as a sparse linear combination of the training samples. The sparse representation (a sparse vector corresponding to the linear combination of a few selected training samples) of a test sample can be recovered by solving an l0-norm minimization problem. With the recent development of compressed sensing theory, such a minimization problem can be recast as a standard linear programming problem or efficiently approximated by greedy pursuit algorithms. Once the sparse vector is obtained, the class of the test sample can be determined by the characteristics of the sparse vector on reconstruction. In addition to the constraints on sparsity and reconstruction accuracy, we also exploit the fact that in HSI neighboring pixels have similar spectral characteristics (smoothness). In our proposed algorithm, a smoothness constraint is therefore also imposed, by forcing the vector Laplacian at each reconstructed pixel to remain minimal throughout the minimization process. The proposed sparsity-based algorithm is applied to several hyperspectral images to detect targets of interest. Simulation results show that our algorithm outperforms classical hyperspectral target detection algorithms, such as the popular spectral matched filters, matched subspace detectors, and adaptive subspace detectors, as well as binary classifiers such as support vector machines.
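The decision step described above, classifying a test pixel by how well each class reconstructs it, can be illustrated with a deliberately simplified sketch: plain least squares per hypothetical sub-dictionary stands in for the paper's l0/greedy recovery, and the Laplacian smoothness constraint is omitted. The spectra below are made up.

```python
import numpy as np

def residual_detector(D_bg, D_tgt, y):
    """Label y as target if the target sub-dictionary reconstructs it better."""
    def resid(D):
        c, *_ = np.linalg.lstsq(D, y, rcond=None)
        return np.linalg.norm(y - D @ c)
    return resid(D_tgt) < resid(D_bg)

# Made-up 4-band spectra: two background atoms, one target atom.
D_bg = np.array([[1., 0.],
                 [0., 1.],
                 [0., 0.],
                 [0., 0.]])
D_tgt = np.array([[0.],
                  [0.],
                  [1.],
                  [1.]]) / np.sqrt(2.)
y = np.array([0.1, 0.0, 0.7, 0.7])        # mostly target-like pixel
print(residual_detector(D_bg, D_tgt, y))  # → True
```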

385 citations


Journal ArticleDOI
TL;DR: This paper studies two problems which often occur in applications arising in wireless sensor networks, the consensus problem and the cooperative solution of a convex optimization problem, and provides diminishing step size algorithms which guarantee asymptotic convergence.
Abstract: In this paper, we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization problem, where the objective function is the aggregate sum of local convex objective functions. We incorporate the presence of a random communication graph between the agents in our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows for the objective functions to be nondifferentiable and accommodates the presence of noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and convergence to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.
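A toy instantiation (not from the paper) of the diminishing-step-size idea: scalar variables, a fixed ring graph instead of a random one, no constraint sets or noise, and nondifferentiable local objectives f_i(x) = |x - a_i|, whose aggregate minimizer is the median of the a_i.

```python
import numpy as np

def ring_weights(n):
    """Doubly stochastic mixing matrix for a ring graph with self-loops."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def distributed_subgradient(targets, n_iters=3000):
    """Each agent i minimizes f_i(x) = |x - targets[i]|; the network jointly
    minimizes the sum, whose solution is the median of the targets."""
    n = len(targets)
    x = np.zeros(n)
    W = ring_weights(n)
    for k in range(1, n_iters + 1):
        g = np.sign(x - targets)       # local subgradients
        x = W @ x - (1.0 / k) * g      # mix with neighbors, then step
    return x

targets = np.array([0., 1., 2., 3., 10.])
x = distributed_subgradient(targets)
print(np.round(x, 2))                  # all agents near the median, 2
```

The step size 1/k is square-summable-free but vanishing, which is what lets the agents simultaneously reach consensus and drive the aggregate objective down.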

366 citations


Journal ArticleDOI
TL;DR: Simulations testify to the effectiveness of the proposed cooperative sensing approach in multi-hop CR networks, where a decentralized consensus optimization algorithm is derived to attain high sensing performance at a reasonable computational cost and power overhead.
Abstract: In wideband cognitive radio (CR) networks, spectrum sensing is an essential task for enabling dynamic spectrum sharing, but it entails several major technical challenges: very high sampling rates required for wideband processing, limited power and computing resources per CR, frequency-selective wireless fading, and interference due to signal leakage from other coexisting CRs. In this paper, a cooperative approach to wideband spectrum sensing is developed to overcome these challenges. To effectively reduce the data acquisition costs, a compressive sampling mechanism is utilized which exploits the signal sparsity induced by network spectrum under-utilization. To collect spatial diversity against wireless fading, multiple CRs collaborate during the sensing task by enforcing consensus among local spectral estimates; accordingly, a decentralized consensus optimization algorithm is derived to attain high sensing performance at a reasonable computational cost and power overhead. To identify spurious spectral estimates due to interfering CRs, the orthogonality between the spectrum of primary users and that of CRs is imposed as a set of constraints for consensus optimization during distributed collaborative sensing. These decentralized techniques are developed both with and without channel knowledge. Simulations testify to the effectiveness of the proposed cooperative sensing approach in multi-hop CR networks.

297 citations


Journal ArticleDOI
TL;DR: The paper establishes a distributed observability condition under which the distributed estimates are consistent and asymptotically normal, and introduces the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator.
Abstract: This paper considers gossip distributed estimation of a (static) distributed random field (a.k.a., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we characterize the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
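A heavily simplified consensus + innovations sketch, with fixed gains and noiseless observations (the paper uses decaying mixed-time-scale weights and proves consistency under noise): two sensors each observe one coordinate of a 2-D field, so neither is locally observable, yet both estimates converge to the whole field.

```python
import numpy as np

# The field theta and the local observation matrices H[i] are made up.
theta = np.array([4.0, -2.0])
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]  # one coordinate each
y = [Hi @ theta for Hi in H]                          # noiseless measurements
x = [np.zeros(2), np.zeros(2)]                        # local estimates
beta, alpha = 0.3, 0.3                                # consensus / innovation gains
for _ in range(2000):
    x_new = []
    for i in (0, 1):
        j = 1 - i                                     # the only neighbor
        consensus = -beta * (x[i] - x[j])             # pull toward neighbor
        innovation = alpha * H[i].T @ (y[i] - H[i] @ x[i])  # fit own data
        x_new.append(x[i] + consensus + innovation.ravel())
    x = x_new
print(np.round(x[0], 4), np.round(x[1], 4))  # both ≈ [ 4. -2.]
```

Global observability (the stacked H matrices have full rank) plus connectivity is exactly what makes the error dynamics contract here, mirroring the paper's distributed observability condition.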

277 citations


Journal ArticleDOI
TL;DR: It is demonstrated that, to be successful, music audio signal processing techniques must be informed by a deep and thorough insight into the nature of music itself.
Abstract: Music signal processing may appear to be the junior relation of the large and mature field of speech signal processing, not least because many techniques and representations originally developed for speech have been applied to music, often with good results. However, music signals possess specific acoustic and structural characteristics that distinguish them from spoken language or other nonmusical signals. This paper provides an overview of some signal analysis techniques that specifically address musical dimensions such as melody, harmony, rhythm, and timbre. We will examine how particular characteristics of music signals impact and determine these techniques, and we highlight a number of novel music analysis and retrieval tasks that such processing makes possible. Our goal is to demonstrate that, to be successful, music audio signal processing techniques must be informed by a deep and thorough insight into the nature of music itself.

246 citations


Journal ArticleDOI
TL;DR: This paper considers the DIBR-based synthesized view evaluation problem, and provides hints for a new objective measure for 3DTV quality assessment.
Abstract: 3DTV technology has brought out new challenges such as the evaluation of synthesized views. Synthesized views are generated through a depth image-based rendering (DIBR) process. This process induces new types of artifacts whose impact on visual quality has to be identified considering various contexts of use. While visual quality assessment has been the subject of many studies in the last 20 years, there are still some unanswered questions regarding new technological improvements. DIBR brings new challenges mainly because it deals with geometric distortions. This paper considers the DIBR-based synthesized view evaluation problem. Different experiments have been carried out. They question the protocols of subjective assessment and the reliability of the objective quality metrics in the context of 3DTV, in these specific conditions (DIBR-based synthesized views), and they consist of assessing seven different view synthesis algorithms through subjective and objective measurements. Results show that usual metrics are not sufficient for assessing 3-D synthesized views, since they do not correctly reflect human judgment. Synthesized views contain specific artifacts located around the disoccluded areas, but usual metrics seem to be unable to express the degree of annoyance perceived in the whole image. This study provides hints for a new objective measure. Two approaches are proposed: the first one is based on the analysis of the shifts of the contours of the synthesized view; the second one is based on the computation of a mean SSIM score of the disoccluded areas.

218 citations


Journal ArticleDOI
TL;DR: This work modified an existing unsupervised learning approach and applied it to HSI data to learn an optimal sparse coding dictionary, which improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
Abstract: The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: 1) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; 2) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution; and 3) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.

209 citations


Journal ArticleDOI
TL;DR: This paper applies adaptive diffusion techniques to guide the self-organization process, including harmonious motion and collision avoidance, of adaptive networks when the individual agents are allowed to move in pursuit of a target.
Abstract: In this paper, we investigate the self-organization and cognitive abilities of adaptive networks when the individual agents are allowed to move in pursuit of a target. The nodes act as adaptive entities with localized processing and are able to respond to stimuli in real-time. We apply adaptive diffusion techniques to guide the self-organization process, including harmonious motion and collision avoidance. We also provide stability and mean-square performance analysis of the proposed strategies, together with computer simulations to illustrate the results.

Journal ArticleDOI
TL;DR: The K-SVD using Wavelets approach presented here applies dictionary learning in the analysis domain of a fixed multi-scale operator, so that sub-dictionaries at different data scales, consisting of small atoms, are trained.
Abstract: In this paper, we present a multi-scale dictionary learning paradigm for sparse and redundant signal representations. The appeal of such a dictionary is obvious: in many cases, data naturally comes at different scales. A multi-scale dictionary should be able to combine the advantages of generic multi-scale representations (such as Wavelets) with the power of learned dictionaries in capturing the intrinsic characteristics of a family of signals. Using such a dictionary would allow representing the data in a more efficient, i.e., sparse, manner, allowing applications to take a more global look at the signal. In this paper, we aim to achieve this goal without incurring the costs of an explicit dictionary with large atoms. The K-SVD using Wavelets approach presented here applies dictionary learning in the analysis domain of a fixed multi-scale operator. This way, sub-dictionaries at different data scales, consisting of small atoms, are trained. These dictionaries can then be efficiently used in sparse coding for various image processing applications, potentially outperforming both single-scale trained dictionaries and multi-scale analytic ones. In this paper, we demonstrate this construction and discuss its potential through several experiments performed on fingerprint and coastal scenery images.

Journal ArticleDOI
TL;DR: Distributed clustering schemes are developed in this paper for both deterministic and probabilistic approaches to unsupervised learning; surprisingly, they can exhibit greater robustness to initialization than their centralized counterparts.
Abstract: Clustering spatially distributed data is well motivated and especially challenging when communication to a central processing unit is discouraged, e.g., due to power constraints. Distributed clustering schemes are developed in this paper for both deterministic and probabilistic approaches to unsupervised learning. The centralized problem is solved in a distributed fashion by recasting it to a set of smaller local clustering problems with consensus constraints on the cluster parameters. The resulting iterative schemes do not exchange local data among nodes, and rely only on single-hop communications. Performance of the novel algorithms is illustrated with simulated tests on synthetic and real sensor data. Surprisingly, these tests reveal that the distributed algorithms can exhibit greater robustness to initialization than their centralized counterparts.
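A minimal sketch of the deterministic (k-means) case, with the consensus averaging of per-cluster sufficient statistics emulated by an exact network-wide sum; the paper instead reaches this average through iterative single-hop exchanges without sharing raw data. Data and initialization below are made up.

```python
import numpy as np

def distributed_kmeans_1d(local_data, centroids, n_rounds=10):
    """Each node assigns its own points and contributes per-cluster
    sufficient statistics (sums, counts); no raw data is exchanged."""
    k = len(centroids)
    for _ in range(n_rounds):
        sums, counts = np.zeros(k), np.zeros(k)
        for data in local_data:                    # local step at each node
            labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
            for c in range(k):
                sums[c] += data[labels == c].sum()
                counts[c] += np.count_nonzero(labels == c)
        centroids = sums / np.maximum(counts, 1)   # agreed-upon update
    return centroids

# Two nodes, each holding one well-separated 1-D cluster.
local_data = [np.array([0.0, 1.0]), np.array([10.0, 11.0])]
print(distributed_kmeans_1d(local_data, np.array([0.0, 5.0])))  # ≈ [0.5, 10.5]
```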

Journal ArticleDOI
Xi Zhang1, Hang Su1
TL;DR: An efficient Cognitive Radio-EnAbled Multi-channel MAC (CREAM-MAC) protocol, which integrates the cooperative sequential spectrum sensing at physical layer and the packet scheduling at MAC layer, over the wireless DSA networks, is proposed.
Abstract: As a novel and effective approach to improving the utilization of the precious radio spectrum, cognitive radio technology is the key to realizing dynamic spectrum access (DSA) networks, where the secondary (unlicensed) users can opportunistically utilize the unused licensed spectrum in a way that confines the level of interference to the range the primary (licensed) users can tolerate. However, there are many new challenges associated with cognitive-radio-based DSA networks in the medium access control (MAC) layer, such as the multi-channel hidden terminal problem and the fact that the time-varying channel availability differs for different secondary users. To overcome these challenges, we propose an efficient Cognitive Radio-EnAbled Multi-channel MAC (CREAM-MAC) protocol, which integrates cooperative sequential spectrum sensing at the physical layer and packet scheduling at the MAC layer, over wireless DSA networks. Under the proposed CREAM-MAC protocol, each secondary user is equipped with a cognitive radio-enabled transceiver and multiple channel sensors. Our cooperative sequential spectrum sensing scheme improves the accuracy of spectrum sensing and further protects the primary users. The proposed CREAM-MAC enables the secondary users to best utilize the unused frequency spectrum while avoiding collisions among secondary users and between secondary users and primary users. We develop a Markov chain model and an M/G^Y/1 queueing model to rigorously study our proposed CREAM-MAC protocol for both saturated and non-saturated networks. We also conduct extensive simulations to validate our developed protocol and analytical models.

Journal ArticleDOI
TL;DR: Experimental results from tests carried out with actual SAR images demonstrate that the GΓD can achieve better goodness of fit than the state-of-the-art pdfs.
Abstract: In this paper, an efficient statistical model, called the generalized Gamma distribution (GΓD), for the empirical modeling of synthetic aperture radar (SAR) images is proposed. The GΓD encompasses a large variety of alternative distributions (notably including the Rayleigh, exponential, Nakagami, Gamma, Weibull, and log-normal distributions commonly used as probability density functions (pdfs) of SAR images as special cases), and is flexible enough to model SAR images with different land-cover typologies. Moreover, based on second-kind cumulants, a closed-form estimator for the GΓD parameters is derived by exploiting a second-order approximation of the polygamma function. Without involving a numerical iterative process, this estimator is computationally efficient and hence makes the GΓD convenient for online SAR image processing applications. Finally, experimental results from tests carried out with actual SAR images demonstrate that the GΓD can achieve better goodness of fit than the state-of-the-art pdfs.

Journal ArticleDOI
TL;DR: A cooperative cognitive radio (CR) sensing problem is considered, where a number of CRs collaboratively detect the presence of primary users (PUs) by exploiting the novel notion of channel gain maps, developed in both centralized and distributed formats to reduce computational complexity and memory requirements of a batch alternative.
Abstract: A cooperative cognitive radio (CR) sensing problem is considered, where a number of CRs collaboratively detect the presence of primary users (PUs) by exploiting the novel notion of channel gain (CG) maps. The CG maps capture the propagation medium per frequency from any point in space and time to each CR user. They are updated in real-time using Kriged Kalman filtering (KKF), a tool with well-appreciated merits in geostatistics. In addition, the CG maps enable tracking the transmit-power and location of an unknown number of PUs, via a sparse regression technique. The latter exploits the sparsity inherent to the PU activities in a geographical area, using an l1-norm regularized, sparsity-promoting weighted least-squares formulation. The resulting sparsity-cognizant tracker is developed in both centralized and distributed formats, to reduce computational complexity and memory requirements of a batch alternative. Numerical tests demonstrate considerable performance gains achieved by the proposed algorithms.

Journal ArticleDOI
TL;DR: A method to address the problem of mixed pixels and to obtain a finer spatial resolution of the land cover classification maps is proposed, which exploits the advantages of both soft classification techniques and spectral unmixing algorithms, in order to determine the fractional abundances of the classes at a sub-pixel scale.
Abstract: The problem of classification of hyperspectral images containing mixed pixels is addressed. Hyperspectral imaging is a continuously growing area of remote sensing applications. The wide spectral range of such imagery, providing a very high spectral resolution, makes it possible to detect and classify surfaces and chemical elements of the observed image. The main problem of hyperspectral data is the (relatively) low spatial resolution, which can vary from a few to tens of meters. Many factors make spatial resolution one of the most expensive and hardest characteristics to improve in imaging systems. For classification, the major problem caused by low spatial resolution is mixed pixels, i.e., parts of the image where more than one land cover class lies in the same pixel. In this paper, we propose a method to address the problem of mixed pixels and to obtain a finer spatial resolution of the land cover classification maps. The method exploits the advantages of both soft classification techniques and spectral unmixing algorithms in order to determine the fractional abundances of the classes at a sub-pixel scale. Spatial regularization by simulated annealing is finally performed to spatially locate the obtained classes. Experiments carried out on synthetic and real data sets show excellent results from both a qualitative and a quantitative point of view.
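The sub-pixel abundance step can be illustrated with a crude sketch (not the paper's method): projected-gradient nonnegative least squares against assumed endmember spectra, with sum-to-one handled by renormalization rather than a proper constrained solver; the soft classification and simulated annealing stages are omitted.

```python
import numpy as np

def unmix_nonneg(E, y, n_iters=500):
    """Projected-gradient estimate of nonnegative abundances a with y ≈ E a,
    followed by a crude sum-to-one renormalization."""
    step = 1.0 / np.linalg.norm(E, 2) ** 2      # 1 / Lipschitz constant
    a = np.zeros(E.shape[1])
    for _ in range(n_iters):
        a = np.maximum(0.0, a - step * E.T @ (E @ a - y))  # descend, project
    return a / a.sum()

# Two made-up endmember spectra over three bands, mixed 30/70.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
y = E @ np.array([0.3, 0.7])
print(np.round(unmix_nonneg(E, y), 4))  # → [0.3 0.7]
```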

Journal ArticleDOI
TL;DR: A source/filter signal model is proposed that provides a mid-level representation, making the pitch content of the signal as well as some timbre information available, hence keeping as much information from the raw data as possible.
Abstract: When designing an audio processing system, the target tasks often influence the choice of a data representation or transformation. Low-level time-frequency representations such as the short-time Fourier transform (STFT) are popular, because they offer a meaningful insight on sound properties for a low computational cost. Conversely, when higher level semantics, such as pitch, timbre or phoneme, are sought after, representations usually tend to enhance their discriminative characteristics, at the expense of their invertibility. They become so-called mid-level representations. In this paper, a source/filter signal model which provides a mid-level representation is proposed. This representation makes the pitch content of the signal as well as some timbre information available, hence keeping as much information from the raw data as possible. This model is successfully used within a main melody extraction system and a lead instrument/accompaniment separation system. Both frameworks obtained top results at several international evaluation campaigns.

Journal ArticleDOI
TL;DR: The distribution of the ratio of extreme eigenvalues of a complex Wishart matrix is studied in order to calculate the exact decision threshold as a function of the desired probability of false alarm for the maximum-minimum eigenvalue (MME) detector.
Abstract: In this paper, the distribution of the ratio of extreme eigenvalues of a complex Wishart matrix is studied in order to calculate the exact decision threshold as a function of the desired probability of false alarm for the maximum-minimum eigenvalue (MME) detector. In contrast to the asymptotic analysis reported in the literature, we consider a finite number of cooperative receivers and a finite number of samples and derive the exact decision threshold for the probability of false alarm. The proposed exact formulation is further reduced to the case of two receiver-based cooperative spectrum sensing. In addition, an approximate closed-form formula of the exact threshold is derived in terms of a desired probability of false alarm for a special case having equal number of receive antennas and signal samples. Finally, the derived analytical exact decision thresholds are verified with Monte-Carlo simulations. We show that the probability of detection performance using the proposed exact decision thresholds achieves significant performance gains compared to the performance of the asymptotic decision threshold.
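The MME statistic itself is easy to compute; the paper's contribution, the exact finite-sample decision threshold from the Wishart eigenvalue distribution, is not reproduced here. A sketch on synthetic data with a rank-one signal across four receivers (all numbers below are made up):

```python
import numpy as np

def mme_statistic(Y):
    """Y: received samples, shape (n_receivers, n_samples).
    Returns lambda_max / lambda_min of the sample covariance."""
    R = Y @ Y.conj().T / Y.shape[1]     # sample covariance matrix
    eig = np.linalg.eigvalsh(R)         # ascending eigenvalues
    return eig[-1] / eig[0]

rng = np.random.default_rng(1)
noise = rng.standard_normal((4, 500))                     # H0: noise only
s = rng.standard_normal(500)                              # primary-user signal
gains = np.array([1.0, 0.8, 0.6, 0.4])                    # per-receiver gains
signal = np.outer(gains, s) + 0.3 * rng.standard_normal((4, 500))  # H1
t_h0, t_h1 = mme_statistic(noise), mme_statistic(signal)
print(t_h0 < t_h1)  # a rank-one signal inflates the eigenvalue spread → True
```

Under H0 the ratio stays near 1 (up to finite-sample fluctuation), which is why the detector compares it against a threshold chosen for a target false-alarm probability.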

Journal ArticleDOI
TL;DR: A novel joint sparse representation-based image fusion method that can carry out image denoising and fusion simultaneously, while the images are corrupted by additive noise is proposed.
Abstract: In this paper, a novel joint sparse representation-based image fusion method is proposed. Since the sensors observe related phenomena, the source images are expected to possess common and innovation features. We use sparse coefficients as image features. The source image is represented with the common and innovation sparse coefficients by joint sparse representation. The sparse coefficients are subsequently weighted by the mean absolute values of the innovation coefficients. Furthermore, since sparse representation has been significantly successful in the development of image denoising algorithms, our method can carry out image denoising and fusion simultaneously when the images are corrupted by additive noise. Experimental results show that the performance of the proposed method is better than that of other methods in terms of several metrics, as well as in visual quality.

Journal ArticleDOI
TL;DR: This paper develops power and channel allocation approaches for cooperative relay in cognitive radio networks that can significantly improve the overall end-to-end throughput, and further develops a low complexity approach that obtains most of the benefits of power and channel allocation with minor performance loss.
Abstract: In this paper, we investigate power and channel allocation for cooperative relay in a three-node cognitive radio network. Different from conventional cooperative relay channels, cognitive radio relay channels can be divided into three categories: direct, dual-hop, and relay channels, which provide three types of parallel end-to-end transmission. In this context, spectrum bands available at all three nodes may either carry relay diversity transmission or assist transmission on the direct or dual-hop channels. On the other hand, the relay node is involved in both dual-hop and relay diversity transmission. In this paper, we develop power and channel allocation approaches for cooperative relay in cognitive radio networks that can significantly improve the overall end-to-end throughput. We further develop a low complexity approach that obtains most of the benefits of power and channel allocation with minor performance loss.

Journal ArticleDOI
TL;DR: In this paper, the characteristics of multispectral (MS) and panchromatic (P) image fusion methods are investigated and simulated misalignments evidence the quality-shift tradeoff of the two classes.
Abstract: In this paper, the characteristics of multispectral (MS) and panchromatic (P) image fusion methods are investigated. Depending on the way spatial details are extracted from P, pansharpening methods can be broadly divided into two main classes, based on either component substitution (CS) or multiresolution analysis (MRA). Theoretical investigations and experimental results show that CS-based fusion is far less sensitive than MRA-based fusion to: 1) registration errors, i.e., spatial misalignments between MS and P images, possibly originating from cartographic projection and resampling of individual data sets; 2) aliasing occurring in MS bands and stemming from modulation transfer functions (MTF) of MS channels that are excessively broad for the sampling step. To assess the sensitivity of the methods, aliasing is simulated at degraded spatial scale by means of several MTF-shaped digital filters. Analogously, simulated misalignments, introduced at both full and degraded scale, demonstrate the quality-shift tradeoff between the two classes. MRA yields slightly superior quality in the absence of aliasing/misalignments, but is more heavily penalized than CS whenever either aliasing or shifts between MS and P occur. Conversely, CS generally produces slightly lower quality, but is intrinsically more aliasing- and shift-tolerant.
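A minimal member of the CS class is generalized IHS (GIHS), in which a synthetic intensity component is replaced by the panchromatic band. The sketch below assumes the MS image has already been upsampled to the P grid and uses uniform spectral weights; both are simplifying assumptions for illustration, not a specific method evaluated in the paper:

```python
import numpy as np

def gihs_pansharpen(ms_up, pan, weights=None):
    """CS-class pansharpening sketch (generalized IHS).

    ms_up : (B, H, W) multispectral bands upsampled to the pan grid.
    pan   : (H, W) panchromatic band.
    Injects the detail (pan - intensity) equally into every band.
    """
    n_bands = ms_up.shape[0]
    w = np.full(n_bands, 1.0 / n_bands) if weights is None else np.asarray(weights)
    intensity = np.tensordot(w, ms_up, axes=1)    # (H, W) synthetic intensity
    return ms_up + (pan - intensity)[None, :, :]  # detail injection
```

Because the injected detail is a pixel-wise difference, a global shift between MS and P simply adds a spatially smooth bias, which is one intuition for the shift tolerance of CS methods.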

Journal ArticleDOI
TL;DR: An unmixing algorithm capable of extracting endmembers and determining their abundances in hyperspectral imagery under nonlinear mixing assumptions is presented, based upon simplex volume maximization and uses shortest-path distances in a nearest-neighbor graph in spectral space, respecting the nontrivial geometry of the data manifold in the case of nonlinearly mixed pixels.
Abstract: Spectral mixtures observed in hyperspectral imagery often display nonlinear mixing effects. Since most traditional unmixing techniques are based upon the linear mixing model, they perform poorly in finding the correct endmembers and their abundances in the case of nonlinear spectral mixing. In this paper, we present an unmixing algorithm that is capable of extracting endmembers and determining their abundances in hyperspectral imagery under nonlinear mixing assumptions. The algorithm is based upon simplex volume maximization, and uses shortest-path distances in a nearest-neighbor graph in spectral space, thereby respecting the nontrivial geometry of the data manifold in the case of nonlinearly mixed pixels. We demonstrate the algorithm on an artificial data set, the AVIRIS Cuprite data set, and a hyperspectral image of a heathland area in Belgium.
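The shortest-path distance computation at the heart of such manifold-aware unmixing can be sketched with SciPy; the brute-force k-NN construction and the choice of `k` are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, k=5):
    """Approximate geodesic distances on the data manifold.

    X : (n, d) spectra as rows. Builds a k-nearest-neighbor graph
    weighted by Euclidean distance, then runs all-pairs shortest
    paths, so distances follow the manifold instead of cutting across it.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]          # skip the point itself
        G[i, nbrs] = D[i, nbrs]
    G = np.maximum(G, G.T)                        # symmetrize the graph
    return shortest_path(csr_matrix(G), directed=False)
```

For points sampled along a curved manifold, the geodesic distance between the extremes is strictly larger than the straight-line (Euclidean) distance, which is exactly what nonlinear unmixing needs.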

Journal ArticleDOI
TL;DR: This work studies the performance of the consensus-based multi-agent distributed subgradient method and shows how it depends on the probability distribution of the random graph.
Abstract: We investigate collaborative optimization of an objective function expressed as a sum of local convex functions, when the agents make decisions in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graph-valued random process, assumed independent and identically distributed. Specifically, we study the performance of the consensus-based multi-agent distributed subgradient method and show how it depends on the probability distribution of the random graph. For the case of a constant stepsize, we first give an upper bound on the difference between the objective function, evaluated at the agents' estimates of the optimal decision vector, and the optimal value. Second, for a particular class of convex functions, we give an upper bound on the distances between the agents' estimates of the optimal decision vector and the minimizer. In addition, we provide the rate of convergence to zero of the time-varying component of the aforementioned upper bound. The addressed metrics are evaluated via their expected values. As an application, we show how the distributed optimization algorithm can be used to perform collaborative system identification, and we provide numerical experiments under the randomized and broadcast gossip protocols.
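For intuition, here is a minimal fixed-topology version of the consensus-based subgradient iteration. The paper studies random graphs; the quadratic local costs, the complete-graph weight matrix, and the stepsize below are illustrative assumptions only:

```python
import numpy as np

def consensus_subgradient(a, W, alpha=0.01, iters=2000):
    """Distributed subgradient sketch for local costs f_i(x) = 0.5*(x - a[i])^2.

    a : (n,) agent i privately holds a[i]; the minimizer of
        sum_i f_i is mean(a).
    W : (n, n) doubly stochastic consensus weight matrix.
    Each step: average with neighbors, then take a local gradient step.
    """
    x = np.zeros_like(a, dtype=float)             # agents' local estimates
    for _ in range(iters):
        x = W @ x - alpha * (x - a)               # consensus + subgradient step
    return x
```

Consistent with the constant-stepsize bound discussed in the abstract, the estimates do not reach the minimizer exactly but settle in an O(alpha) neighborhood of mean(a).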

Journal ArticleDOI
TL;DR: A factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM) is proposed, capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths.
Abstract: We propose a factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM). In contrast to existing designs, ours is capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths. In order to exploit these channel-tap structures, we adopt a two-state Gaussian mixture prior in conjunction with a Markov model on the hidden state. For loopy belief propagation, we exploit a “generalized approximate message passing” (GAMP) algorithm recently developed in the context of compressed sensing, and show that it can be successfully coupled with soft-input soft-output decoding, as well as hidden Markov inference, through the standard sum-product framework. For N subcarriers and any channel length L < N, the resulting JCED-GAMP scheme has a computational complexity of only O(N log2 N + N|S|), where |S| is the constellation size. Numerical experiments using IEEE 802.15.4a channels show that our scheme yields BER performance within 1 dB of the known-channel bound and 3-4 dB better than soft equalization based on LMMSE and LASSO.

Journal ArticleDOI
TL;DR: The use of the Gini index (GI) of a discrete signal is explored as a more effective measure of its sparsity for a significantly improved performance in its reconstruction from compressive samples.
Abstract: Sparsity is a fundamental concept in compressive sampling of signals/images, and is commonly measured using the l0 norm, even though, in practice, the l1 or the lp (0 < p < 1) (pseudo-)norm is preferred. In this paper, we explore the use of the Gini index (GI) of a discrete signal as a more effective measure of its sparsity, yielding significantly improved performance in reconstruction from compressive samples. We also successfully incorporate the GI into a stochastic optimization algorithm for signal reconstruction from compressive samples and illustrate our approach with both synthetic and real signals/images.
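The Gini index of a discrete signal is computed from its sorted magnitudes. A sketch of the standard definition follows; the paper's reconstruction algorithm itself is not reproduced here:

```python
import numpy as np

def gini_index(c):
    """Gini index of a vector: 0 for a perfectly flat signal, approaching
    1 as the energy concentrates in few coefficients (i.e., sparser)."""
    a = np.sort(np.abs(np.asarray(c, dtype=float)))   # ascending magnitudes
    n = a.size
    k = np.arange(1, n + 1)                           # ranks 1..n
    return 1.0 - 2.0 * np.sum((a / a.sum()) * (n - k + 0.5) / n)
```

Unlike the l0 count, the GI is normalized to [0, 1) and invariant to scaling of the signal, which is part of what makes it attractive as a sparsity measure.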

Journal ArticleDOI
TL;DR: Numerical simulation and experimental data collected from the 2008 Surface Processes and Acoustic Communications Experiment (SPACE08) show that the proposed receiver can self adapt to channel variations, enjoying low complexity in good channel conditions while maintaining excellent performance in adverse channel conditions.
Abstract: Multicarrier modulation in the form of orthogonal frequency division multiplexing (OFDM) has recently been intensively pursued for underwater acoustic (UWA) communications due to its ability to handle long dispersive channels. Fast variation of UWA channels destroys the orthogonality of the subcarriers and leads to inter-carrier interference (ICI), which significantly degrades system performance. In this paper, we propose a progressive receiver for time-varying UWA channels. The progressive receiver is by nature an iterative receiver based on the turbo principle; however, it distinguishes itself from existing iterative receivers in that the system model for channel estimation and data detection is itself continually updated during the iterations. When decoding in the current iteration is unsuccessful, the receiver increases the span of the ICI in the system model and uses the currently available soft information from the decoder to assist the next iteration, which deals with a channel of larger Doppler spread. Numerical simulation and experimental data collected from the 2008 Surface Processes and Acoustic Communications Experiment (SPACE08) show that the proposed receiver can self-adapt to channel variations, enjoying low complexity in good channel conditions while maintaining excellent performance in adverse channel conditions.

Journal ArticleDOI
TL;DR: This paper posits that textures may be contained in multiple manifolds, corresponding to classes, and presents a novel example-based image super-resolution reconstruction algorithm with clustering and supervised neighbor embedding (CSNE).
Abstract: Neighbor embedding algorithms have been widely used in example-based super-resolution reconstruction from a single frame, under the assumption that the embedded neighbor patches lie on a single manifold. However, this does not always hold for complicated texture structures. In this paper, we posit that textures may lie on multiple manifolds, corresponding to different classes. Under this assumption, we present a novel example-based image super-resolution reconstruction algorithm with clustering and supervised neighbor embedding (CSNE). First, a class predictor for low-resolution (LR) patches is learnt with an unsupervised Gaussian mixture model. Then, by utilizing the class label information of each patch, supervised neighbor embedding is used to estimate the high-resolution (HR) patch corresponding to each LR patch. The experimental results show that the proposed method achieves better recovery than other simple schemes using neighbor embedding.
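The neighbor-embedding step itself computes LLE-style reconstruction weights over the K nearest LR training patches and reuses them on the HR counterparts. A sketch of the weight computation, with the regularization constant and interfaces as illustrative assumptions:

```python
import numpy as np

def ne_weights(y, X):
    """Reconstruction weights for neighbor embedding (LLE-style).

    y : (d,) input LR patch feature.
    X : (K, d) its K nearest LR training patches.
    Solves min ||y - sum_k w_k X_k||^2 subject to sum_k w_k = 1;
    the HR estimate is then w @ X_hr with the HR counterparts of X.
    """
    Z = X - y                                      # center neighbors on y
    C = Z @ Z.T                                    # local Gram matrix
    C = C + np.eye(len(X)) * 1e-6 * np.trace(C)    # regularize if near-singular
    w = np.linalg.solve(C, np.ones(len(X)))
    return w / w.sum()                             # enforce sum-to-one
```

CSNE's contribution, on this view, is restricting the neighbor search to patches of the predicted class so that the weights are computed within a single manifold.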

Journal ArticleDOI
TL;DR: This paper demonstrates that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, and shows that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
Abstract: In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution in the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, although the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) use of a Markov random field (MRF)-based graphical model with pairwise interaction, in conjunction with message damping, and 2) use of a factor graph (FG)-based graphical model with Gaussian approximation of interference (GAI). The per-symbol complexities are O(K^2 n_t^2) and O(K n_t) for the MRF and the FG-with-GAI approaches, respectively, where K and n_t denote the number of channel uses per frame and the number of transmit antennas. These low complexities are quite attractive for large dimensions, i.e., for large K n_t. From a performance perspective, these algorithms are even more interesting in large dimensions, since they achieve increasingly close to optimum detection performance with increasing K n_t. Also, we show that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
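The flavor of the Gaussian-approximation-of-interference idea can be conveyed by a much-simplified, node-level BPSK detector: for each symbol, the residual interference plus noise is treated as Gaussian with matched mean and variance. This sketch omits the per-edge messages and damping of the actual algorithms and is an illustration, not the paper's method:

```python
import numpy as np

def gai_detect_bpsk(y, H, noise_var, iters=10):
    """Iterative soft interference cancellation with Gaussian
    approximation of interference, for the real model y = H x + n
    with BPSK symbols x in {-1, +1}^n (simplified sketch)."""
    n = H.shape[1]
    xhat = np.zeros(n)                             # soft estimates E[x_k]
    H2 = H ** 2
    for _ in range(iters):
        for k in range(n):
            # cancel current soft estimates of all other symbols
            r = y - H @ xhat + H[:, k] * xhat[k]
            # per-antenna variance of residual interference + noise
            v = noise_var + H2 @ (1 - xhat ** 2) - H2[:, k] * (1 - xhat[k] ** 2)
            llr = np.sum(2.0 * H[:, k] * r / v)    # Gaussian-approximation LLR
            xhat[k] = np.tanh(llr / 2.0)           # soft BPSK symbol
    return np.sign(xhat)
```

The per-iteration cost is linear in the number of matrix entries, which is the source of the O(K n_t) per-symbol complexity quoted for the FG-with-GAI approach.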

Journal ArticleDOI
TL;DR: In this paper, a greedy adaptive dictionary learning algorithm is proposed to find sparse atoms for speech signals, which is applied to the problems of speech representation and speech denoising, and its performance is compared to other existing methods.
Abstract: For dictionary-based decompositions of certain types, it has been observed that there might be a link between sparsity in the dictionary and sparsity in the decomposition. Sparsity in the dictionary has also been associated with the derivation of fast and efficient dictionary learning algorithms. Therefore, in this paper we present a greedy adaptive dictionary learning algorithm that sets out to find sparse atoms for speech signals. The algorithm learns the dictionary atoms on data frames taken from a speech signal. It iteratively extracts the data frame with minimum sparsity index and adds this to the dictionary matrix. The contribution of this atom to the data frames is then removed, and the process is repeated. The algorithm is found to yield a sparse signal decomposition, supporting the hypothesis of a link between sparsity in the decomposition and in the dictionary. The algorithm is applied to the problems of speech representation and speech denoising, and its performance is compared to other existing methods. The method is shown to find dictionary atoms that are sparser than their time-domain waveforms, and also to result in a sparser speech representation. In the presence of noise, the algorithm is found to perform similarly to the well-established principal component analysis.
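The greedy loop described above can be sketched as follows, using the l1/l2 ratio as the sparsity index (lower means sparser); the fixed atom count and frame handling are simplifying assumptions:

```python
import numpy as np

def greedy_adaptive_dictionary(frames, n_atoms):
    """Greedy adaptive dictionary learning sketch.

    frames : (dim, n_frames) signal frames as columns.
    Repeatedly picks the residual column with the smallest sparsity
    index ||x||_1 / ||x||_2, normalizes it into an atom, and removes
    its contribution from every remaining frame (deflation).
    """
    R = np.array(frames, dtype=float)             # residual matrix
    atoms = []
    for _ in range(n_atoms):
        l2 = np.linalg.norm(R, axis=0)
        idx = np.linalg.norm(R, ord=1, axis=0) / np.maximum(l2, 1e-12)
        idx[l2 < 1e-12] = np.inf                  # skip exhausted frames
        k = int(np.argmin(idx))                   # frame with min sparsity index
        d = R[:, k] / l2[k]                       # unit-norm new atom
        atoms.append(d)
        R -= np.outer(d, d @ R)                   # remove atom's contribution
    return np.column_stack(atoms)
```

Because each deflation projects the residual onto the orthogonal complement of the new atom, the learned atoms come out orthonormal, which keeps the learning step cheap.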